US20100039380A1 - Movable Audio/Video Communication Interface System - Google Patents
Movable Audio/Video Communication Interface System
- Publication number
- US20100039380A1 (U.S. application Ser. No. 12/604,211)
- Authority
- US
- United States
- Prior art keywords
- user
- display
- assembly
- view
- head
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1601—Constructional details related to the housing of computer displays, e.g. of CRT monitors, of flat displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1601—Constructional details related to the housing of computer displays, e.g. of CRT monitors, of flat displays
- G06F1/1605—Multimedia displays, e.g. with integrated or attached speakers, cameras, microphones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/161—Indexing scheme relating to constructional details of the monitor
- G06F2200/1612—Flat panel monitor
Definitions
- the present invention is directed to a system for immersing a user into a multidimensional collaborative environment using position tracking to adjust a position of a display displaying a 3D scene and/or other participants in the collaboration.
- One class of solutions is to reduce the effects of imperfect sight lines by the use of other design elements, while another is to find ways to generate accurate sight lines.
- Accurate sight lines require dynamic tracking of the positions of the eyes of users, and generally require that the visual scene presented to each eye be digitally reconstructed to be of the correct perspective, since it is difficult to consistently place a physical camera at the correct position to capture the proper perspective.
- This approach is generally called tele-immersion.
- a tele-immersion example is Jaron Lanier's prototype described in the Scientific American article referenced. Several problems have made tele-immersion systems impractical.
- a system that includes an assembly of multimodal displays and sensors mounted on a mechanical or robotic arm rising out of a desktop or other base.
- the arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. This lowers the requirements for sensor and display components so that existing sensors and displays can work well enough for the purpose.
- the arm does not need to be moved with great accuracy or maintain perfect on-axis alignment and uniform distance to the face. It must merely remain within tolerances. Kalman filters are applied to head motion to compensate for latency in the arm's tracking of the head. Tele-immersion is supported by the assembly because local and remote users' heads can be sensed and then represented to each other with true sight lines.
- the invention provides a solution that is full duplex and yet has a small footprint. Users can be placed in any arrangement in virtual space. Because lighting and sound generation take place close to the user's head, the invention will not disrupt other activities in the local physical environment. Near-field speaker arrays supply immersive audio and a microphone array senses a user's voice. In this way a user can be alerted by an audio event such as a voice to look in the direction of the event. Since the display will move to show what is present in that direction, the display need not be encompassing, or restrict access to the local physical environment, in order for the user to benefit from immersive virtual environments.
- the invention is also a haptic interface device; a user can grab the display/sensor array and move it about.
- the invention acts as a planar selection device for 3D data. This is important for volumetric data, such as MRI scan data.
- the physical position and orientation of the display assembly provides planar selection and the need for mental rotation is reduced.
- Planar force feedback can also be used to allow a user to feel the center of density within a scalar field as resistance and curl. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data, so area of interest is communicated with minimal effort.
- the invention can also be used to subsume or simulate other user interface designs, such as command control rooms with multiple displays, wall-sized displays, “videobots,” or conventional desktop PC displays.
- FIG. 1 illustrates the components of a system according to the present invention.
- FIG. 2 shows a perspective view of the desktop embodiment.
- FIG. 3 depicts a hanging embodiment
- FIG. 4 shows a display according to the present invention.
- FIG. 5 illustrates how other users and their viewpoint can be shown.
- FIG. 6 depicts a master control loop.
- FIG. 7 shows a manual control loop.
- FIG. 8 depicts head tracking and range limits.
- FIG. 9 illustrates eye tracking and head tracking.
- FIG. 10 shows display centering within a desired range.
- FIG. 11 shows robotic arm movement as head motion is extended.
- FIG. 12 shows multiple users and their ability to see each other.
- FIG. 13 shows manual movement of the display assembly.
- FIGS. 14 and 15 depict a hollow arm embodiment.
- the present invention which can also be called a Compact, Collaborative, Desktop, Explorer (COCODEX), is a user interface technology that can provide a solution to some of the most important and longest standing problems in Virtual Reality, Tele-immersion, 3D visualization, and video teleconferencing technologies.
- the invention includes an assembly of display and sensor components mounted on a mechanical arm that allows the assembly to move to a wide variety of locations around a user's head. Because the display and sensors are mobile, it is possible to keep them within constrained positions or tolerances relative to the user's face or head as the user looks around, thus making a variety of functions reliable that are not reliable in other configurations.
- the invention is a full duplex solution for tele-immersion or visual teleconferencing that allows for varied numbers and virtual arrangements of participants, makes demands of sensor and display technologies that can be met using known techniques and materials, and has a practical footprint for widespread deployment.
- the invention can be thought of as the halfway point in a design continuum between head mounted displays and CAVE-like room displays, while offering significant advantages that neither extreme can offer.
- the hardware of the system of an embodiment includes two or more systems (local 102 and remote 104 ) connected by a full duplex communications network 106 , such as the Internet.
- Each system includes a computer 108 connected to a computer controlled robotics arm 110 .
- the arm 110 is a conventional robotics arm that has multiple degrees of freedom (with effectively 6 degrees of freedom in the end attachment) allowing the display to tilt, swivel, move up, down, away, toward, right, left, etc.
- the arm also includes the conventional feedback systems that indicate the position and attitude of the arm so that the direction that the display is “facing” is known.
- the arm 110 holds a visual display 112 , such as a flat panel display, to which are attached (an array of) audio speakers 114 , visual sensors 116 , illumination sources 118 such as LEDs, and an audio sensor 120 , such as a microphone array allowing sound direction to be determined.
- the flat panel display can include autostereo viewing capability by using suitable devices, such as a lenticular screen, through which the images are projected to the user.
- the display provides a view into the scene that can be adjusted.
- the autostereo view capability allows the user to see stereo cues in the virtual scene.
- the speakers and sensors are positioned around the display so that three-dimensional (3D) effects can be obtained and projected.
- the visual sensors are used to sense the position of a user's head and the near field speakers can be used to present to the user a stereo audio image that approximates a position of a participant that appears on the display 112 while at the same time not projecting the sound too far from the physical space of the user.
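The patent does not specify how the near-field speaker array forms this stereo audio image; the following is a minimal illustrative sketch (not from the pseudocode appendix) of one conventional approach, constant-power amplitude panning plus a crude interaural-style delay driven by the virtual participant's azimuth relative to the display assembly. The function names, the two-speaker simplification, and the ear-spacing value are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Gains and delays for a left/right pair of near-field speakers mounted on the
// display assembly.  azimuthRad is the virtual source direction relative to the
// assembly's forward axis (negative = left of center, positive = right).
struct StereoCue {
    double leftGain, rightGain;
    double leftDelaySec, rightDelaySec;   // extra delay applied to the far-side speaker
};

StereoCue nearFieldCue(double azimuthRad, double earSpacingMeters = 0.18) {
    const double kPi = 3.14159265358979323846;
    const double kSpeedOfSound = 343.0;                        // m/s
    double clamped = std::clamp(azimuthRad, -kPi / 2, kPi / 2);
    // Constant-power pan: pan = 0 is hard left, pan = pi/2 is hard right.
    double pan = (clamped + kPi / 2) / 2.0;
    StereoCue cue{std::cos(pan), std::sin(pan), 0.0, 0.0};
    // Crude interaural time difference from the path-length difference.
    double itd = earSpacingMeters * std::sin(clamped) / kSpeedOfSound;
    if (itd > 0) cue.leftDelaySec = itd;       // source to the right: delay the left channel
    else         cue.rightDelaySec = -itd;     // source to the left: delay the right channel
    return cue;
}

int main() {
    StereoCue c = nearFieldCue(0.4);           // participant roughly 23 degrees to the right
    std::printf("L gain %.2f  R gain %.2f  L delay %.5f s\n",
                c.leftGain, c.rightGain, c.leftDelaySec);
}
```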
- a handle 122 for manual control of the positioning of the display (and the view of the object) is also provided and includes one or more buttons 124 (like the buttons of a conventional mouse I/O device) or interface elements (such as roller balls, thumb wheels, jog wheels) allowing different types of control and selection.
- buttons and a roller ball can be used to select and activate graphical user interface (GUI) elements that appear on the display, such as a typical menu or GUI icon based desktop.
- These robotic arm feedback systems can provide manual resistance to movement of the handle as controlled by the computer to allow the user to “feel” the data through which a view or cut-plane is traveling.
- the components 112 - 120 and 124 are conventional components, such as video cameras, microphones, etc., and are coupled to the computer 108 through conventional interfaces suitable to the components.
- FIG. 2 depicts a perspective view of a preferred embodiment of the desktop portion of the interface system.
- the display 112 with its attachments can be moved about above the desktop 202 by the user with the handle 122 or the motors of the robotics arm 110 .
- FIG. 3 depicts an alternate embodiment where the display assembly 302 hangs from an overarching gantry type device 304 .
- the freedom of movement is greater, allowing the user more views into the “space” that is being presented to the user.
- the screen can be turned to allow a 360-degree view in both the vertical and horizontal directions, like looking around in a room full of people or even looking about in a theater.
- FIG. 4 illustrates the display 402 in such a position where a cut plane 404 through a 3D object 406 (a head of a person) is being displayed.
- FIG. 5 depicts a display view 502 showing a 3D object 504 being commonly viewed by another viewer 506 .
- the other viewer 506 is being shown along with the orientation of the other viewer, the cut plane 508 (or 3D object view) being viewed by the other viewer 506 and the other viewer's viewing frustum 510.
- the other viewer is displayed as a compound portraiture image of the face.
- a compound portraiture image is an image of a user that is constructed using the best data that can be obtained from sensors placed in advantageous positions by the motion of the robotic arm.
- a polygon mesh head deformed by facial landmarks that are tracked by machine vision algorithms (in order to reflect facial expression or pose), to which textures are applied.
- the textures are of varying resolution, and are derived differentially from cameras in the camera array, so that the best-placed camera contributes most to a given area of texture on the head.
- Variably-transparent mesh objects extend from the head so that objects that extend substantially from the face, such as large hairstyles or hats, can be rendered so as to fade into the surrounding environment with an ambiguous border.
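As a rough illustration of how "the best-placed camera contributes most to a given area of texture," the sketch below weights each camera's contribution to a mesh patch by how squarely the camera views that patch (the angle between the camera direction and the surface normal). This is an assumed weighting scheme, not the patent's specific blending rule; all names are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Blend weight of each camera for one patch of the head mesh.  A camera that
// faces the patch squarely (its direction opposed to the surface normal) gets
// the largest weight; a camera seeing the patch edge-on or from behind gets none.
std::vector<double> cameraBlendWeights(const std::vector<Vec3>& cameraPositions,
                                       Vec3 patchCenter, Vec3 patchNormal) {
    std::vector<double> weights(cameraPositions.size(), 0.0);
    double total = 0.0;
    for (size_t i = 0; i < cameraPositions.size(); ++i) {
        Vec3 toCamera = normalize(sub(cameraPositions[i], patchCenter));
        double facing = dot(toCamera, patchNormal);          // 1 = head-on, <= 0 = occluded
        weights[i] = facing > 0.0 ? facing * facing : 0.0;   // squared to favor the best camera
        total += weights[i];
    }
    if (total > 0.0)
        for (double& w : weights) w /= total;                // weights sum to 1 per patch
    return weights;
}

int main() {
    // Four cameras around the display; patch at the origin facing +z (toward the display).
    std::vector<Vec3> cams = {{-0.3, 0.0, 0.5}, {0.3, 0.0, 0.5}, {0.0, 0.3, 0.5}, {0.0, -0.3, 0.5}};
    std::vector<double> w = cameraBlendWeights(cams, {0, 0, 0}, {0, 0, 1});
    for (size_t i = 0; i < w.size(); ++i)
        std::printf("camera %zu contributes %.2f of this patch's texture\n", i, w[i]);
}
```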
- FIG. 6 depicts a master flow of control within the computer system 108 .
- the system determines 602 whether the handle of the assembly is being touched. This determination can be made conventionally by, for example, using touch sensors on the handle. If so, the system determines 604 the view, viewing angle, frustum, etc. of the viewer and communicates such to the other systems so that they can depict to the other users the view of the viewer moving the display (see FIG. 8 ). In this way, the other users can be alerted to what the viewer desires to point out, etc.
- the system also moves the assembly and adjusts the local view based on the inputs from the handle. If the user is not touching the control handle, the system determines 606 the head position and eye view using conventional eye tracking and object motion detection procedures and moves 608 the display to keep the head in the display stereo view/sound range and the sensor sensing range using conventional position prediction techniques.
- the display is moved by conventionally controlling the robotic arm 110 based on a desired position determined by the position prediction. As the display is automatically moved, the system also determines 610 whether the display will collide with other objects on the desktop, such as another computer, a telephone, etc. This collision detection is also performed in a conventional manner. If a collision is imminent, the motion is stopped 612 .
- the eye tracking also determines when the user is no longer looking at items that are deemed important within the virtual world display, such as when the user glances at an object in the local environment or room, such as a piece of paper laying on the desk top or at another computer display elsewhere in the room.
- head tracking and motion of the assembly by the robotic arm stops.
- FIG. 7 depicts the flow of operations of the system while the handle of the assembly is being touched. A more detailed description of the flow can be found in the attached pseudocode appendix, which can be used for implementing the system in a preferred language such as C++.
- the viewing frustum is determined 704 and communicated to the other systems.
- the local cut plane is highlighted 706, along with other user interface elements, such as orientation reference guides, and this information is also communicated to the other users' systems.
- the system calculates 708 the stereo views of other users along with shared view information and projects 710 an integrated view to the viewer.
- FIG. 8 depicts horizontal limits 802 , 804 of head 806 motion relative to the display/sensor array 808 for head position sensing and the robotic arm 810 .
- the system predicts the limit encounter and moves the arm 810 and/or swivels the display/sensor array 808 .
- the position of the eyes relative to the display/sensor array is used to help determine whether the display 808 needs to be swiveled (or tilted).
- the limits are typically specified by the optics of the stereo view system being used for image projection.
- the viewing geometry of a particular lenticular or other autostereo screen being used for the display is used to set such limits.
- FIG. 9 depicts the system making a predictive guess of a future or derived head position 902 of a moving head 904 using conventional eye tracking 906 and Kalman filter based prediction of future position.
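The patent names Kalman filtering for this prediction without giving its parameters. The sketch below uses a per-axis alpha-beta filter, which is the steady-state form of a Kalman filter for a constant-velocity motion model, to smooth head measurements and extrapolate them ahead by an assumed arm latency; the gains and the latency value are placeholders.

```cpp
#include <array>
#include <cstdio>

// Per-axis alpha-beta filter: the steady-state form of a Kalman filter for a
// constant-velocity motion model.  Smooths noisy head-position measurements
// and extrapolates ahead by the arm's expected tracking latency.
struct AxisFilter {
    double pos = 0.0, vel = 0.0;
    double alpha = 0.5, beta = 0.1;   // gains: tuning values, assumed here

    void update(double measured, double dt) {
        double predicted = pos + vel * dt;       // predict
        double residual = measured - predicted;  // innovation
        pos = predicted + alpha * residual;      // correct position
        vel = vel + (beta / dt) * residual;      // correct velocity
    }
    double predictAhead(double horizon) const { return pos + vel * horizon; }
};

int main() {
    std::array<AxisFilter, 3> head;              // x, y, z of the head centroid
    const double dt = 1.0 / 60.0;                // camera frame interval
    const double armLatency = 0.15;              // assumed arm response time, seconds
    double samples[][3] = {{0.00, 1.20, 0.60}, {0.01, 1.20, 0.60},
                           {0.03, 1.21, 0.60}, {0.06, 1.21, 0.61}};
    for (auto& s : samples)
        for (int i = 0; i < 3; ++i) head[i].update(s[i], dt);
    // FACEFUTURE-style prediction: where the head is expected to be when the
    // arm's motion actually takes effect.
    std::printf("predicted head x: %.3f m\n", head[0].predictAhead(armLatency));
}
```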
- FIG. 10 shows how the display assembly 1002 on the end of the robotic arm 1004 is automatically moved or swiveled 1006 to maintain the head in a desired center of the viewing/sensing range rather than by moving the arm.
- FIG. 11 shows how the arm 1102 is automatically moved 1104 to provide an extended range 1106 of head motion where the user moves his head from a first position 1108 to a second position 1110 while the system keeps the viewer's head within the left 1112 and right 1114 limits.
- FIG. 11 also shows a situation where the user may be looking at a backside of a 3D object or scene being displayed in the first position 1108 and the front/left side of the object in the second position 1110 . With this automatic movement capability and the ability to view the scene within a viewing range, the users can now look at each other as well as at different portions of the object.
- FIG. 12 shows how several viewers in different locations can move their heads 1202 - 1208 while using the system and view others in the group as well as other parts of the common 3D scene during a collaboration.
- the users 1202 - 1208 have moved their heads within the head position tracking limits while their eyes have moved to look obliquely through the displays.
- the system tracks the eye movements of the users 1202 - 1208 and adjusts their view into the scene accordingly.
- the relative spatial positions of the users can be defined with great flexibility. Users can be close to each other or far from one another, and can be seated equally around a table or gathered in an audience in front of a user who is giving a lecture.
- FIG. 13 depicts a user 1302 manually moving 1304 the display to look at a particular part of the scene or at another user by grabbing a side of the display assembly.
- This particular example of motion control does not use the handle and relies on the feedback from the position sensors in the robotic arm and display assembly head to make adjustments to the display view, etc.
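One way to realize the equivalence of physical display pose and virtual viewing frustum described here is the standard off-axis projection computed from the tracked eye position and the display quad reported by the arm's position feedback. The sketch below is an assumed formulation in that spirit (in the style of generalized perspective projection), not code from the patent's appendix.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Off-axis (asymmetric) frustum for a physical display plane and a tracked eye.
// The display pose comes from the arm's position feedback; the eye position
// comes from head tracking.
struct Frustum { double left, right, bottom, top, zNear, zFar; };

Frustum frustumFromDisplayPose(Vec3 eye, Vec3 center, Vec3 rightDir, Vec3 upDir,
                               double halfWidth, double halfHeight,
                               double zNear, double zFar) {
    Vec3 offset = sub(eye, center);
    double ox = dot(offset, rightDir);     // eye offset across the screen
    double oy = dot(offset, upDir);        // eye offset up the screen
    Vec3 normal = {rightDir.y * upDir.z - rightDir.z * upDir.y,
                   rightDir.z * upDir.x - rightDir.x * upDir.z,
                   rightDir.x * upDir.y - rightDir.y * upDir.x};
    double d = dot(offset, normal);        // eye distance from the display plane
    double s = zNear / d;                  // scale screen extents onto the near plane
    return {(-halfWidth - ox) * s, (halfWidth - ox) * s,
            (-halfHeight - oy) * s, (halfHeight - oy) * s, zNear, zFar};
}

int main() {
    // Display 40 cm wide, 30 cm tall, facing +z; eye 60 cm in front and slightly left.
    Frustum f = frustumFromDisplayPose({-0.05, 0.0, 0.6}, {0, 0, 0},
                                       {1, 0, 0}, {0, 1, 0}, 0.20, 0.15, 0.1, 10.0);
    std::printf("near-plane extents: l=%.3f r=%.3f b=%.3f t=%.3f\n",
                f.left, f.right, f.bottom, f.top);
}
```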
- the above-discussed figures show the user moving essentially horizontally, the system tracking the user and moving the display accordingly.
- the system is also capable of moving the display vertically and at angles.
- the present invention also uses the conventional virtual reality capabilities that allow a user to view a 3D scene from multiple changing perspectives and that allow other views, such as a view of another user, to be combined in the same view space.
- the present invention can incorporate a merged dual-exit pupil display as its display as depicted in FIGS. 14 and 15 .
- the invention makes smaller exit pupils 1500 usable by moving them to match the user's moving eye positions.
- a variation of the arm 1402 / 1502 is required which is hollow and capable of supporting mirrors 1504 in its joints.
- One display 1506 / 1508 for each eye is placed in the base 1510 and combined with a combiner 1512 .
- Powered mirrors are placed in the joints, so that the invention functions like a periscope, incorporating the optical properties of a stereo microscope.
- a holographic optical element 1512 is one suitable choice for the final powered optical element, coincident with the plane of the sensor/display assembly, in order to reduce weight.
- the aspect of the invention of placing sensors and displays in motion to approximately keep track of a user's head provides multiple benefits: a) Improved integration of virtual and physical tools: With the invention it is easy to look into the 3D scene and then out again while seated, allowing users to easily divert attention between people and things depicted in a virtual space and other people and things present in the physical environment. A user can easily use conventional PC tools and immersive virtual world tools in the same work session. b) Emulation of other user interface designs: The invention can emulate a conventional PC display by defining a virtual PC display at a certain position in the virtual world. When the invention's display is moved to the corresponding physical position it effectively acts as a physical simulation of a conventional PC at the same location.
- the invention can be used to emulate command/control centers, display walls, and other user interface designs.
- a single camera pointed straight at a user is a common design in visual telecommunications, but this design fails to meet human factors requirements. Some degree of reconstruction of the user's head/face is needed to meet these requirements, so that accurate lines of sight can be supported, with each user appearing to the others at the proper perspective angle. Machine vision techniques and cameras have not performed well enough to achieve this when limited to fixed viewing positions, given normal human ranges of motion. Since with this invention cameras keep up with the face, existing cameras and machine vision algorithms can sense a user's face well enough for perspective alteration and other tasks.
- the invention enables rendering of precise points of view within autostereo displays and prevents users from seeing nil, pseudoscopic, or otherwise incorrect image pairs, even while supporting a full range of head motion.
- the invention's single mobile display allows users to look in any direction and, thus, it foresees any number or arrangement of remote participants with only a modest and fixed requirement for local physical space.
- f) Improved exploration of volumetric data: With the present invention, by equating physical display position and orientation with virtual viewing frustum, the user's brain is relieved from having to perform a 6D transformation that confuses many users in typical immersive systems. This is significant in medical and scientific applications involving selecting sectional views of volumetric data.
- Because the invention makes it easy to perform planar selections and manipulations in addition to point-based ones, it is easy to design visualizations of what other participants are doing. Users see the heads of other users, the screens they are using, and the ways that those screens are coupled to virtual objects that are being transformed. h) Reduced impact on the local shared physical environment: The invention can be desk-mounted and doesn't require low light conditions. i) Improved sound system for collaboration in a shared physical facility: Headphones excel at 3D audio effects, while speakers, though convenient, don't produce these effects well when placed at conventional distances, despite a great deal of effort by many labs to get them to do so. Speakers can also be loud when placed conventionally and this can disturb others in a work environment.
- By coupling near-field speakers approximately to head position, the invention provides 3D sound at low volumes without head contact and without demanding any time to get into or out of the interface. A similar issue exists with microphones. A mobile microphone or microphone array will pick up the voice more consistently.
- Improved integration of audio, haptic, and visual user interface modalities: The invention can be used for planar exploration of a scalar or vector volumetric field, or even one with curl.
- the user interface of exploration using any of the three above sensory modalities is identical (moving the display), and this tight integration will make it easier to train and collaborate with users who have certain disabilities. That is to say, a blind user and a deaf user could each explore a virtual object in similar ways, and thus collaborate more effectively.
- a haptic display as described in detail in the pseudocode below, will be available, in addition to an audio display.
- the center of density, as calculated in the pseudocode below to provide haptic feedback of the location of a tumor, could also be used as a virtual sound source using conventional 3D sound rendering techniques.
- the present invention solves a number of problems related to positions of sensors and displays.
- the invention provides autostereo without constraining user position unacceptably, provides headphone-like 3D audio performance without headphones, performs visual facial sensing without constraining user position unacceptably, provides consistent illumination of the user's face, isolates the user's voice without constraining user position unacceptably, provides a compact desktop implementation, facilitates instant-in-and-out, easy overall workflow when used in conjunction with other user interfaces, easily depicts what other users are paying attention to and doing, and provides 6 degrees of freedom of the physical display and the virtual viewing frustum, which are equivalent, making it easier for users to understand six degree of freedom navigation.
- a 3D magnetic field based sensor system such as Polhemus sensor and sensor system available from Polhemus, Colchester, Vt.
- sensors can also be used to warn the user to manually move the display with the attached sensors when the user's head position is reaching a limit.
- the invention's arm can be mounted on a floor-standing pedestal, or on a rolling pedestal.
- the arm can be ceiling-mounted.
- the arm can be mounted on a powered mobile base, so that the base moves on a table or other surface in addition to the other motions described above.
- a mobile floor-mounted base can be incorporated to make the invention functional for a walking user.
- the display/sensor assembly can be hand-supported, if position and orientation are sensed using sensors such as those described above which do not require a rigid mechanical linkage.
- the display/sensor assembly can be hand-supported and wireless, using protocols, such as Bluetooth, to connect all components with computation resources.
- the arm can be mechanically supported, but manually moved.
- the invention's display can be a transparent or semi-transparent surface that can present to the user projected images superimposed over the physical scene that is visible beyond the display surface.
- the invention incorporates the functionality of “Augmented Reality” displays (which are well known).
- the arm can be mounted on the inside surface of a vehicle. This can be done to provide simulated presence of other passengers in the vehicle, such as flight instructors (in the case of an aircraft).
- Another example of this variation is a set of commuter trains with invention systems present in each train, so that passengers on different trains could simulate being on the same train at once in order to have a meeting while commuting.
- the arm can be supported by the human body through a mounting system that attaches to a helmet, or directly to the human head, shoulders, and/or waist.
- the invention When attached to the head, the invention resembles a head-mounted display, but is unlike other head-mounted displays in that a) there is sufficient clearance from the face for facial sensing to support tele-immersion, and b) small amounts of motion of the display relative to the head are acceptable because the techniques described throughout this patent compensate for them.
- the screen and other components can be mounted on the mechanical arm using clips or clamps or other easily disengaged fasteners. This facilitates rapid changing of the choice of components present in the invention. For instance, a user can switch between autostereo and higher resolution non-stereo displays.
- the invention can be constructed as a product that includes the arm and the software described in the pseudocode below, with each user adding sensing and display components according to individual preferences.
- the invention can incorporate a conventional computer display, mounted on the reverse side of the autostereo display, facing in the opposite direction.
- the arm swivels the display/sensor assembly so that the conventional display is facing the user, and when the user wishes to perform tasks suitable for the invention, the assembly is turned so that the autostereo display is facing the user.
- the turning action (which switches from an autostereo to a conventional display) can be triggered when the user moves the assembly so that it is coincident with the placement of a simulated conventional computer display in the virtual space.
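A minimal sketch of how such a trigger might be implemented: compare the assembly pose reported by the arm against the virtual placement of the simulated conventional display, and fire the swivel action when position and orientation both fall within tolerances. The pose representation and tolerance values are assumptions.

```cpp
#include <cmath>
#include <cstdio>

struct Pose {
    double x, y, z;          // position, meters
    double yawDeg, pitchDeg; // facing direction, degrees
};

// True when the assembly has been moved to (approximately) the spot where the
// virtual layout places the simulated conventional PC display, which is the
// cue to swivel the reverse-side conventional screen toward the user.
bool atSimulatedDisplay(const Pose& assembly, const Pose& simulatedDisplay,
                        double positionTolM = 0.05, double angleTolDeg = 10.0) {
    double dx = assembly.x - simulatedDisplay.x;
    double dy = assembly.y - simulatedDisplay.y;
    double dz = assembly.z - simulatedDisplay.z;
    double positionError = std::sqrt(dx * dx + dy * dy + dz * dz);
    double yawError   = std::fabs(assembly.yawDeg   - simulatedDisplay.yawDeg);
    double pitchError = std::fabs(assembly.pitchDeg - simulatedDisplay.pitchDeg);
    return positionError < positionTolM && yawError < angleTolDeg && pitchError < angleTolDeg;
}

int main() {
    Pose simulated = {0.30, 1.10, 0.40, 0.0, 0.0};   // where the virtual PC display sits
    Pose assembly  = {0.32, 1.08, 0.41, 4.0, -2.0};  // current arm feedback
    if (atSimulatedDisplay(assembly, simulated))
        std::printf("swivel assembly: show the conventional display\n");
}
```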
- the invention can incorporate a front or rear projection screen as its display, where the display surface is in motion, but the light source is either stationary or in motion to a lesser degree.
- the projected image must be directed and distorted to correct for the changing relative placements of the light source and the projection surface, which can be accomplished by various established means, such as moving mirror and lens systems and computer graphic techniques for simulated optical anti-distortion.
- the invention can incorporate a screen element which, rather than being flat, as described above, is concave, in order to provide the user with an effectively wider-angle display.
- a subset of the components described as being mounted on the arm can instead be mounted separately on a stationary or less mobile platform.
- a stationary light source can be substituted for the mobile light sources preferred in this description, or a stationary audio sensing or display system can be substituted.
- the invention can incorporate only a subset of the displays or sensors described in the preferred embodiment. For instance, a silent version might incorporate only the visual components, and none of the audio ones.
- a barrier can be incorporated which surrounds the space to the rear of all the positions the arm and the display/sensor assembly can attain, with sufficient clearance for operation, but which is open in front to give the user access to the device.
- This is an alternative or enhancement to relying on collision detection and prevention subsystems to prevent collisions between the arm or assembly and people or objects in an environment.
- An embodiment of this barrier is an approximate section of a sphere in shape, transparent and composed of a lightweight material like plastic.
- the barrier can be made in several sections that can be attached or detached to facilitate transport.
- the mobile portions of the invention can be made largely of low-weight, soft materials.
- the display screen can be a soft rear-projection surface, such as plastic, or a flexible (such as OLED) display.
- Soft audio speakers are available which are made of piezo and other materials. While soft versions of the sensor components (such as cameras, microphones, and position/orientation sensors) are not available at this time, versions of these components are available which are low weight and small.
- a version of the invention in which the majority of the mass of the components in motion is comprised of soft, lightweight materials will have reduced requirements for collision avoidance.
- the invention can incorporate additional optical components to provide accommodation relief for certain autostereo displays. That is to say, the distance at which the user's eyes must focus to resolve the stereo images presented in the display can be changed by incorporating these optical elements.
- a set of lenses, Fresnel lenses, holographic optical components, or other optical devices can be mechanically connected to the invention and positioned appropriately between the user's eyes and the display. It should be pointed out that these optical components typically only function under narrow positioning tolerances, so the same technique that is used to make other invention components function, of having the components move to track the head's location, makes it possible to incorporate such optical elements.
- the accommodation relief optical elements described in the previous paragraph can be mounted on a separate arm or a subordinate arm. This is desirable if the positioning tolerances of the optical components are tighter than the display.
- the same control software described for the display would be applied to the motion of the optical components, but with tighter adjustments for tolerances as described in detail in the pseudocode below.
- FACEVARS Most recent measured user's head/eyes Position/Orientation (6D relative to COCODEX base)
- FACEFUTURE Predicted near term user head/eyes Position/Orientations (6D list, path, or similar representation)
- CONFIDENCE indication of how well the user's head is currently being tracked
- FACE-PROTOTYPE: a labeled graph of 3D points representing typical relative placements of facial landmarks; it can be generic or user-specific, and can be simple geometry or can incorporate biomechanical modeling.
- FACEPOSEFUTURE A prediction of geometric distortions of FACE-PROTOTYPE (a set of future path predictions corresponding to each point in the graph)
- ASSEMVARS Most recent measured display/sensor assembly Position/Orientation (6D relative to COCODEX base)
- ASSEMFUTURE Predicted near term display/sensor assembly Position/Orientations (6D list, path, or similar representation)
- UI-VARS: State of such things as buttons, dials, and other conventional UI components mounted on the display/sensor assembly or elsewhere on COCODEX
- WORKING VOLUME: a volume relative to the Position/Orientation of the display/sensor assembly within which display and sensor functions related to the user's face will work; it is the intersection of the individual volumes in which autostereo visual effects, 3D audio, and the various sensors such as cameras and microphones will have adequate functional access to the user's face.
- IDEAL VOLUME a volume within the WORKING Volume that serves as a safety target for maintaining the relative positions and orientation of the display/sensor assembly to the user's face
- FACE-TRACKING VECTOR The change in the Position/Orientation of the display/sensor assembly that is currently projected to keep the user's eyes and the rest of the user's face in the IDEAL VOLUME (in the event that it would otherwise fall out of the IDEAL VOLUME)
- ROBOTIC-MOTION-CALIBRATION-TABLE A pair of sparsely filled in 3D vector fields; the first contains instructions that have been sent to the particular robotic arm installed locally, and the second contains the resulting move that actually took place.
- FORCE-RESISTANCE-VECTOR: Vector indicating one component of haptic feedback control
- REPULSION-FIELD-VECTOR: Vector indicating another component of haptic feedback control
- PLANAR Haptic feedback map: A vector field that stores results in advance to speed the calculation of current values for the above vectors
- USER-APPLIED-FORCE-VECTOR: Vector indicating the force the user is applying to the display/sensor assembly by grabbing it (is nil when the display/sensor assembly is not being grabbed)
- TELE-LAYOUT of shared virtual environment:
- a data structure including at least:
- the IDEAL VOLUME is contained within the WORKING VOLUME, so by testing for near term divergence from the IDEAL VOLUME, the head is continuously kept within the WORKING VOLUME. If a set of fast, high quality sensors and displays is used, the two volumes can be almost the same, while low cost sensors require a larger difference between the two volumes. There are, of course, other techniques that can be used instead to express variable tolerances in control software. Note that in the pseudocode given here, only one local user is assumed. The Eyematic type of facial feature tracking has already been demonstrated to be capable of tracking four local users, however.
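As an illustration of the IDEAL/WORKING VOLUME test described above, the sketch below treats the IDEAL VOLUME as an axis-aligned box in assembly coordinates and emits a FACE-TRACKING-VECTOR-style correction whenever the predicted head position would leave it. The box shape, the re-centering rule, and the numeric tolerances are simplifying assumptions; the pseudocode appendix itself is not reproduced here.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// Axis-aligned box in display/sensor-assembly coordinates.
struct Volume { Vec3 min, max; };

static bool contains(const Volume& v, Vec3 p) {
    return p.x >= v.min.x && p.x <= v.max.x &&
           p.y >= v.min.y && p.y <= v.max.y &&
           p.z >= v.min.z && p.z <= v.max.z;
}

// If the predicted head position would fall outside the IDEAL volume, return
// the assembly translation that re-centers the head; otherwise no motion.
Vec3 faceTrackingVector(Vec3 predictedHead, const Volume& ideal) {
    if (contains(ideal, predictedHead)) return {0, 0, 0};
    Vec3 center = {(ideal.min.x + ideal.max.x) / 2,
                   (ideal.min.y + ideal.max.y) / 2,
                   (ideal.min.z + ideal.max.z) / 2};
    // Move the assembly toward the head's offset so the head sits back at the
    // center of the IDEAL volume (and therefore well inside the WORKING volume).
    return {predictedHead.x - center.x,
            predictedHead.y - center.y,
            predictedHead.z - center.z};
}

int main() {
    Volume ideal = {{-0.10, -0.10, 0.45}, {0.10, 0.10, 0.75}};  // assumed tolerances
    Vec3 predicted = {0.16, 0.02, 0.60};                        // head drifting right
    Vec3 move = faceTrackingVector(predicted, ideal);
    std::printf("assembly move: %.2f %.2f %.2f m\n", move.x, move.y, move.z);
}
```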
- Some autostereo screens such as lenticular or parallax barrier displays, can support enough distinct views in an appropriate configuration to support more than a single local user as well. All the code for this and other functions can be easily extended to support multiple local users, provided the display and sensor subsystems can support a sufficiently large IDEAL zone to contain them all at once.
- the assumed facial feature finding subsystem in this pseudocode is the machine vision-based technology initially described by Eyematic.
- Another example of a potential subsystem is IBM's BlueEyes.
- Four cameras surrounding the display, each running the Eyematic feature-finding algorithms, are assumed, though the number and placement can vary. Each camera will supply image streams used by software to attempt to find a set of facial features. The varied placement will result in the cameras having access to different subsets of the face. For instance, a camera looking at the face from the left might not detect position of the right nostril because the nose will be in the way. While this might sound humorous, it's actually a serious problem in face tracking.
- Another common problem is a user's hand temporarily obscuring a portion of the face from the point of view of one camera, but not all cameras at once. This function performs specialized sensor fusion to address that class of problem.
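A sketch of the kind of per-landmark sensor fusion described here: each camera reports only the landmarks it currently tracks, with a confidence value, and the fused estimate is a confidence-weighted average over the cameras that see the landmark. The data structures and the weighting rule are assumptions, not the Eyematic subsystem's actual interface.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

// One camera's view of the face: only the landmarks it can currently see,
// each with a tracking confidence in [0, 1].
struct CameraObservation {
    std::map<std::string, std::pair<Vec3, double>> landmarks;  // name -> (position, confidence)
};

// Fuse per-camera landmark estimates: confidence-weighted average over the
// cameras that see each landmark.  A camera blocked by the nose or a hand
// simply omits that landmark and contributes nothing to it.
std::map<std::string, Vec3> fuseLandmarks(const std::vector<CameraObservation>& cams) {
    std::map<std::string, Vec3> sum;
    std::map<std::string, double> weight;
    for (const CameraObservation& cam : cams) {
        for (const auto& [name, obs] : cam.landmarks) {
            const auto& [pos, conf] = obs;
            sum[name].x += conf * pos.x;
            sum[name].y += conf * pos.y;
            sum[name].z += conf * pos.z;
            weight[name] += conf;
        }
    }
    std::map<std::string, Vec3> fused;
    for (auto& [name, s] : sum)
        if (weight[name] > 0)
            fused[name] = {s.x / weight[name], s.y / weight[name], s.z / weight[name]};
    return fused;
}

int main() {
    CameraObservation left, right;
    left.landmarks["left_eye"]  = {{-0.03, 0.02, 0.60}, 0.9};
    right.landmarks["left_eye"] = {{-0.03, 0.03, 0.61}, 0.4};       // oblique view, less confident
    right.landmarks["right_nostril"] = {{0.01, -0.02, 0.58}, 0.8};  // hidden from the left camera
    auto fused = fuseLandmarks({left, right});
    std::printf("fused left_eye y = %.3f\n", fused["left_eye"].y);
}
```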
- COMMENT FACEPOSEFUTURE will play a role in reducing apparent latency in the visual channel for remote users looking at the local user.
- CONFIDENCE variable is being used here as a simple feedback signal to govern a pattern classification sub-system that will sometimes be well “locked on” to a pattern and sometimes not. Many other established methods are available as well.
- COCODEX can support point-based haptics, emulating a device like the Phantom.
- COCODEX also supports a planar mode of haptic interaction.
- the haptic properties of a set of points are combined into a display of force and resistance, including curl.
- the PLANAR Haptic feedback map determines resistance and force to be displayed by the arm as a function of the position and orientation of the assembly at the end of the arm. The map is calculated as specified by the TELE-LAYOUT.
- the TELE-LAYOUT can specify that scalar values associated with voxels be treated as resistance values.
- An example of when this is useful is in radiology. Darker voxels are set to be more resistant, so as the COCODEX assembly is manually guided through an area of volumetric data, a user feels the “center” of resistance of the display plane, corresponding to the location of a tumor.
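A sketch of this planar sampling idea: integrate the volume's density over the display plane's extent to get a resistance value and a density-weighted "center" of the cut, which the arm can render as force. The sampling grid, the toy density function, and the scaling are assumptions for illustration.

```cpp
#include <cstdio>
#include <functional>

struct Vec3 { double x, y, z; };

// Sample the volume where the display plane cuts it and reduce the samples to
// (a) a total resistance the arm should render and (b) the density-weighted
// centroid of the cut, which the user feels as the "center" of resistance.
struct PlanarHapticSample { double resistance; Vec3 center; };

PlanarHapticSample samplePlane(const std::function<double(Vec3)>& density,
                               Vec3 planeOrigin, Vec3 rightDir, Vec3 upDir,
                               double halfWidth, double halfHeight, int steps) {
    double total = 0.0;
    Vec3 weighted = {0, 0, 0};
    for (int i = 0; i < steps; ++i) {
        for (int j = 0; j < steps; ++j) {
            double u = -halfWidth  + 2 * halfWidth  * (i + 0.5) / steps;
            double v = -halfHeight + 2 * halfHeight * (j + 0.5) / steps;
            Vec3 p = {planeOrigin.x + u * rightDir.x + v * upDir.x,
                      planeOrigin.y + u * rightDir.y + v * upDir.y,
                      planeOrigin.z + u * rightDir.z + v * upDir.z};
            double d = density(p);            // darker/denser voxels resist more
            total += d;
            weighted.x += d * p.x;
            weighted.y += d * p.y;
            weighted.z += d * p.z;
        }
    }
    Vec3 center = total > 0 ? Vec3{weighted.x / total, weighted.y / total, weighted.z / total}
                            : planeOrigin;
    return {total / (steps * steps), center};   // mean density reported as resistance
}

int main() {
    // Toy volume: a dense blob (the "tumor") centered at (0.02, 0.00, 0.50).
    auto density = [](Vec3 p) {
        double dx = p.x - 0.02, dy = p.y, dz = p.z - 0.50;
        return dx * dx + dy * dy + dz * dz < 0.0004 ? 1.0 : 0.05;
    };
    PlanarHapticSample s = samplePlane(density, {0, 0, 0.50}, {1, 0, 0}, {0, 1, 0},
                                       0.05, 0.05, 32);
    std::printf("resistance %.3f, center x %.3f\n", s.resistance, s.center.x);
}
```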
- 3D volumes of scalar values can be analyzed using classical techniques to generate vectors for force field simulations. In other cases, vector information will already be defined for each voxel. This typically is the case in physical simulation applications, for instance. Another application is the creation of 6D “detents,” or “sticky” position/orientations for the assembly.
- When the head of a remote user approaches the corresponding location of some other user's COCODEX screen, that screen is pushed aside.
- a “tele-haptics” capability is also supported. This allows remote collaborators to “Feel each other” as they co-explore complex data such as volumetric medical or geographical information.
- the visual display of data is tightly coupled with haptic and audio displays, creating a multimodal interface.
- a notable advantage of COCODEX is that capabilities such as tele-haptics are accessed using the same instrumentation principles as visual and audio features, so that individuals who have deficits or special abilities in particular sensory modalities can interact with other individuals with different deficits or abilities, without making any change to the interaction practice or instrumentation.
- the PLANAR Haptic feedback map includes scalar resistance values
- Because COCODEX has a sensor array, it can support collision avoidance without extra instrumentation, but there are multiple vendors of collision avoidance subsystems, so for the purposes of this pseudocode, collision avoidance isn't explained in detail.
- This function is for determining the current position of the display/sensor hardware assembly on the robot arm, as well as predicting future values.
- COMMENT Multiple means can be employed to determine arm pose. These can include rotation sensors in joints in the arm; various commercially available 3D or 6D tracking sensors using optical, RF, ultrasound, magnetic or other techniques to track components in known locations in the arm, or the use of sensors in the sensor/display assembly to track visual landmarks in the environment. This last option is possible because the TELE-LAYOUT can record a representation of the local environment that was gathered at an earlier time. Established techniques for visual landmark-based tracking can be applied to generate an additional source of data on arm pose.
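The first option listed, joint rotation sensors, amounts to simple forward kinematics: chain one rigid transform per joint to obtain the assembly pose (the ASSEMVARS values). The sketch below does this for an assumed planar three-link arm; a real arm would use its actual joint axes and link geometry.

```cpp
#include <cmath>
#include <cstdio>

// Minimal 4x4 homogeneous transform.
struct Mat4 {
    double m[4][4];
    static Mat4 identity() {
        Mat4 r = {};
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0;
        return r;
    }
    Mat4 operator*(const Mat4& o) const {
        Mat4 r = {};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k) r.m[i][j] += m[i][k] * o.m[k][j];
        return r;
    }
};

// One revolute joint rotating about Z, followed by a link of fixed length along X.
static Mat4 jointTransform(double angleRad, double linkLength) {
    Mat4 t = Mat4::identity();
    t.m[0][0] =  std::cos(angleRad); t.m[0][1] = -std::sin(angleRad);
    t.m[1][0] =  std::sin(angleRad); t.m[1][1] =  std::cos(angleRad);
    t.m[0][3] =  linkLength * std::cos(angleRad);   // link end in the parent frame
    t.m[1][3] =  linkLength * std::sin(angleRad);
    return t;
}

int main() {
    // Joint encoder readings and link lengths for a planar 3-link arm (assumed).
    double angles[] = {0.5, -0.3, 0.2};
    double links[]  = {0.30, 0.25, 0.15};
    Mat4 pose = Mat4::identity();
    for (int i = 0; i < 3; ++i) pose = pose * jointTransform(angles[i], links[i]);
    // ASSEMVARS-style result: the translation column gives the assembly position.
    std::printf("assembly position: %.3f %.3f\n", pose.m[0][3], pose.m[1][3]);
}
```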
- COCODEX-AS-AVATAR mode can be selected. This corresponds to a recent stream of research demonstrations in which a remote user “pilots” a physical local robot that local human users can interact with as if the remote human user was present in the position of the robot.
- COCODEX-AS-AVATAR mode is turned on, a designated remote user's head is tracked by the COCODEX sensor/display assembly instead of the head of a local user.
- the COCODEX assembly appears to “look around” with the head motion of the remote user, and with the remote user's face centered in the screen. This effect is described by other researchers who have implemented robotic display devices for this sole purpose.
- the originality of the invention here is not the COCODEX-AS-AVATAR formulation, but the fact that it is available conveniently as an option from a device (COCODEX) that is designed primarily for other uses. Note that the converse is not true.
- Remote robot devices such as those referred to above are NOT able to function like COCODEX.
- This function prepares the local virtual world for graphical rendering. This can be accomplished using a conventional display-list architecture or similar structure.
- the subroutines below are in an approximate far-to-near order.
- the elements of the TELE-LAYOUT are explained in the comments of this function. Note that while assembling the virtual world and rendering are separate steps in this pseudo-code, it is often more efficient in practice to render elements as they are ready instead of waiting for a single render phase.
- TELE-LAYOUT includes 3D objects or data
- the TELE-LAYOUT includes a local virtual mirror
- PREPARE_COMPOUND_PORTRAITURE is for preparing data to support visual display of the local user's face and other local elements both for remote collaborators and locally in a virtual mirror
- This pseudocode describes one particular technique of user rendering, called “Compound Portraiture,” but while this choice is an aspect of this invention, and ideal for COCODEX, other user rendering strategies suitable for tele-immersion can be chosen instead.
- the hand presents special challenges because portions of fingers can be obscured more often than portions of faces. This pseudocode will not address these special challenges.
- COMMENT COCODEX requires a user interface to set up TELE-LAYOUTS, initiate and end calls, and perform the usual functions of a personal telecommunications or information processing tool. There is no requirement that these functions be performed exclusively with the use of COCODEX, however. All these functions can be performed on a conventional computer placed on the desk next to COCODEX, or simulated within a COCODEX TELE-LAYOUT.
- Existing virtual world design tools and 3D modeling products already provide the editing and visualization capabilities required, and must be extended to link with the variables defined above in order to provide output useful for this invention. Available tools are extensible to provide these links.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A system that includes a desktop assembly of a display and sensors mounted on a robotic arm. The arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. Near-field speaker arrays supply audio and a microphone array senses a user's voice. Filters are applied to head motion to compensate for latency in the arm's tracking of the head. The system is full duplex with other systems, allowing immersive collaboration. Lighting and sound generation take place close to the user's head. A haptic interface device allows the user to grab the display/sensor array and move it about. Motion acts as a planar selection device for 3D data. Planar force feedback allows a user to “feel” the data. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data.
Description
- This patent application is a continuation application of U.S. patent application Ser. No. 11/255,920, filed Oct. 24, 2005, and like that application the present application claims priority to U.S. Provisional Application No. 60/621,085 filed Oct. 25, 2004, both of which are hereby incorporated by reference herein in their entireties.
- The present invention is directed to a system for immersing a user into a multidimensional collaborative environment using position tracking to adjust a position of a display displaying a 3D scene and/or other participants in the collaboration.
- In the past a number of different technologies have been used to help people collaborate at a distance by coupling them together in some sort of common environment. These technologies have included conference telephone systems, video telephones, networked head mounted displays, collaborative document software, etc. These technologies suffer from an inability to create a viable personal communications and computing environment for collaboration among individuals, in part because the underlying sensor and display components are not used in a way that allows them to perform well enough to meet human factors needs. What is needed is a better such system.
- For instance, video conferencing systems cannot provide true sight lines between participants, because the camera and display are in different positions. Therefore eye contact between participants is impossible. This problem has led to a very large number of attempted solutions over a period of three quarters of a century.
- One class of solutions is to reduce the effects of imperfect sight lines by the use of other design elements, while another is to find ways to generate accurate sight lines. Accurate sight lines require dynamic tracking of the positions of the eyes of users, and generally require that the visual scene presented to each eye be digitally reconstructed to be of the correct perspective, since it is difficult to consistently place a physical camera at the correct position to capture the proper perspective. This approach is generally called tele-immersion. A tele-immersion example is Jaron Lanier's prototype described in the Scientific American article referenced. Several problems have made tele-immersion systems impractical. One is that displays and eye-position sensors that are currently available or are foreseen to be available in the near future do not work well outside of narrow tolerances for the position and orientation of the user's head. For instance, in order for participants to be able to be apparently placed close to each other in a shared virtual space, stereo vision must be supported, but for each eye to see a unique point of view, either some form of eyewear must be worn, or an autostereo display must be used, but available autostereo displays place restrictions on a user's head position. Because of these problems, it has been difficult to design tele-immersion systems that combine true sight lines, full duplex (meaning that users can see each other without problems due to intervening machinery such as stereo viewing glasses), and flexible virtual placement (meaning that viewers can be placed at any distance, near or far, and in any arrangement). Another problem has been that tele-immersion systems have generally required dedicated rooms, which has limited their practicality. The physical layout of tele-immersion instrumentation has placed restrictions on the virtual layout of participants in the virtual space. The blue-c system generates true sight lines but places restrictions on relative placements of users in virtual space, cannot support high resolution sensing or display with currently available components, and requires dedicated rooms. The HP Coliseum system cannot support true sight lines and generalized placement of participants at the same time.
- It is an aspect of the present invention to provide a personal communications and computing environment that can also be used for collaboration among individuals.
- It is another aspect of the present invention to provide an immersive type collaboration experience.
- It is also an aspect of the present invention to provide an immersive type experience that can be easily integrated with other modes of working.
- It is also an aspect of the present invention to provide an immersive type of experience without requiring large resources of floor space or specialized rooms.
- The above aspects can be attained by a system that includes an assembly of multimodal displays and sensors mounted on a mechanical or robotic arm rising out of a desktop or other base. The arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. This lowers the requirements for sensor and display components so that existing sensors and displays can work well enough for the purpose. The arm does not need to be moved with great accuracy or maintain perfect on-axis alignment and uniform distance to the face. It must merely remain within tolerances. Kalman filters are applied to head motion to compensate for latency in the arm's tracking of the head. Tele-immersion is supported by the assembly because local and remote users' heads can be sensed and then represented to each other with true sight lines. By placing user interface transducers in motion, it becomes possible for users to move as they normally would in group interactions, particularly those including more than two participants. The invention provides a solution that is full duplex and yet has a small footprint. Users can be placed in any arrangement in virtual space. Because lighting and sound generation take place close to the user's head, the invention will not disrupt other activities in the local physical environment. Near-field speaker arrays supply immersive audio and a microphone array senses a user's voice. In this way a user can be alerted by an audio event such as a voice to look in the direction of the event. Since the display will move to show what is present in that direction, the display need not be encompassing, or restrict access to the local physical environment, in order for the user to benefit from immersive virtual environments. The invention is also a haptic interface device; a user can grab the display/sensor array and move it about. The invention acts as a planar selection device for 3D data. This is important for volumetric data, such as MRI scan data. The physical position and orientation of the display assembly provides planar selection and the need for mental rotation is reduced. Planar force feedback can also be used to allow a user to feel the center of density within a scalar field as resistance and curl. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data, so area of interest is communicated with minimal effort. The invention can also be used to subsume or simulate other user interface designs, such as command control rooms with multiple displays, wall-sized displays, “videobots,” or conventional desktop PC displays.
- These together with other aspects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
- The foregoing features of the invention will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:
- FIG. 1 illustrates the components of a system according to the present invention.
- FIG. 2 shows a perspective view of the desktop embodiment.
- FIG. 3 depicts a hanging embodiment.
- FIG. 4 shows a display according to the present invention.
- FIG. 5 illustrates how other users and their viewpoint can be shown.
- FIG. 6 depicts a master control loop.
- FIG. 7 shows a manual control loop.
- FIG. 8 depicts head tracking and range limits.
- FIG. 9 illustrates eye tracking and head tracking.
- FIG. 10 shows display centering within a desired range.
- FIG. 11 shows robotic arm movement as head motion is extended.
- FIG. 12 shows multiple users and their ability to see each other.
- FIG. 13 shows manual movement of the display assembly.
- FIGS. 14 and 15 depict a hollow arm embodiment.
- The present invention, which can also be called a Compact, Collaborative, Desktop, Explorer (COCODEX), is a user interface technology that can provide a solution to some of the most important and longest standing problems in Virtual Reality, Tele-immersion, 3D visualization, and video teleconferencing technologies. The invention includes an assembly of display and sensor components mounted on a mechanical arm that allows the assembly to move to a wide variety of locations around a user's head. Because the display and sensors are mobile, it is possible to keep them within constrained positions or tolerances relative to the user's face or head as the user looks around, thus making a variety of functions reliable that are not reliable in other configurations. These include auto-stereo display effects, 3D audio without headphones, machine vision analysis of the user's face, illumination of the face, audio sensing of the voice, and so on. This can be accomplished without physical contact with or obscuring of the face, so it becomes possible to accurately accomplish full duplex tele-immersion or other visual communications involving the face. The invention is a full duplex solution for tele-immersion or visual teleconferencing that allows for varied numbers and virtual arrangements of participants, makes demands of sensor and display technologies that can be met using known techniques and materials, and has a practical footprint for widespread deployment. The invention can be thought of as the halfway point in a design continuum between head mounted displays and CAVE-like room displays, while offering significant advantages that neither extreme can offer.
- As depicted in FIG. 1, the hardware of the system of an embodiment includes two or more systems (local 102 and remote 104) connected by a full duplex communications network 106, such as the Internet. Each system includes a computer 108 connected to a computer controlled robotics arm 110. The arm 110 is a conventional robotics arm that has multiple degrees of freedom (with effectively 6 degrees of freedom in the end attachment) allowing the display to tilt, swivel, move up, down, away, toward, right, left, etc. The arm also includes the conventional feedback systems that indicate the position and attitude of the arm so that the direction that the display is “facing” is known. The arm 110 holds a visual display 112, such as a flat panel display, to which are attached (an array of) audio speakers 114, visual sensors 116, illumination sources 118 such as LEDs, and an audio sensor 120, such as a microphone array allowing sound direction to be determined. The flat panel display can include autostereo viewing capability by using suitable devices, such as a lenticular screen, through which the images are projected to the user. The display provides a view into the scene that can be adjusted. The autostereo view capability allows the user to see stereo cues in the virtual scene. The speakers and sensors are positioned around the display so that three-dimensional (3D) effects can be obtained and projected. For example, the visual sensors, as will be discussed later herein, are used to sense the position of a user's head and the near field speakers can be used to present to the user a stereo audio image that approximates a position of a participant that appears on the display 112 while at the same time not projecting the sound too far from the physical space of the user. A handle 122 for manual control of the positioning of the display (and the view of the object) is also provided and includes one or more buttons 124 (like the buttons of a conventional mouse I/O device) or interface elements (such as roller balls, thumb wheels, jog wheels) allowing different types of control and selection. For example, buttons and a roller ball can be used to select and activate graphical user interface (GUI) elements that appear on the display, such as a typical menu or GUI icon based desktop. These robotic arm feedback systems can provide manual resistance to movement of the handle as controlled by the computer to allow the user to “feel” the data through which a view or cut-plane is traveling. The components 112-120 and 124 are conventional components, such as video cameras, microphones, etc., and are coupled to the computer 108 through conventional interfaces suitable to the components.
FIG. 2 depicts a perspective view of a preferred embodiment of the desktop portion of the interface system. In this view it can be seen that the display 112 with its attachments can be moved about above the desktop 202 by the user with the handle 122 or the motors of the robotics arm 110. -
FIG. 3 depicts an alternate embodiment where the display assembly 302 hangs from an overarching gantry type device 304. In this embodiment the freedom of movement is greater, allowing the user more views into the “space” that is being presented to the user. For example, in this version the screen can be turned to allow a 360-degree view in both the vertical and horizontal directions, like looking around in a room full of people or even looking about in a theater. - The freedom of movement of the display of the present invention essentially allows the user to move about and look about in a view space. As a result, the user can take a viewing frustum and move it “through” a virtual object that is being commonly displayed to the interactive collaborating participants.
FIG. 4 illustrates the display 402 in such a position where a cut plane 404 through a 3D object 406 (a head of a person) is being displayed. - Because many individuals may be involved in the collaboration, it may be important for each viewer of a common scene to have an understanding of where the other viewers are looking.
FIG. 5 depicts a display view 502 showing a 3D object 504 being commonly viewed by another viewer 506. The other viewer 506 is shown along with the orientation of the other viewer, the cut plane 508 (or 3D object view) being viewed by the other viewer 506, and the other viewer's viewing frustum 510. The other viewer is displayed as a compound portraiture image of the face. A compound portraiture image is an image of a user that is constructed using the best data that can be obtained from sensors placed in advantageous positions by the motion of the robotic arm. It is composed of a polygon mesh head deformed by facial landmarks that are tracked by machine vision algorithms (in order to reflect facial expression or pose), to which textures are applied. The textures are of varying resolution, and are derived differentially from cameras in the camera array, so that the best-placed camera contributes most to a given area of texture on the head. Variably-transparent mesh objects extend from the head so that objects that extend substantially from the face, such as large hairstyles or hats, can be rendered so as to fade into the surrounding environment with an ambiguous border. -
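- By way of illustration, the per-camera weighting by which the best-placed camera contributes most to a given area of the compound portrait can be derived from the camera geometry. The following is a minimal C++ sketch of one such weighting, assuming a hypothetical Vec3 type and a simple angle-and-distance falloff; the actual blending function and resolution handling are left to the implementer.

#include <array>
#include <cmath>
#include <cstddef>

// Minimal 3-vector; stands in for whatever math library an implementation would use.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Weight of one camera's contribution to the texture at a point on the face mesh.
// The more directly (and closely) the camera sees the surface, the larger its share.
double textureWeight(const Vec3& surfacePoint,
                     const Vec3& surfaceNormal,    // unit normal at the mesh point
                     const Vec3& cameraPosition)
{
    const Vec3 toCamera{cameraPosition.x - surfacePoint.x,
                        cameraPosition.y - surfacePoint.y,
                        cameraPosition.z - surfacePoint.z};
    const double dist = length(toCamera);
    if (dist <= 0.0) return 0.0;

    // Cosine of the angle between the surface normal and the view direction;
    // back-facing or grazing views contribute nothing.
    const double cosAngle = dot(surfaceNormal, toCamera) / dist;
    if (cosAngle <= 0.0) return 0.0;

    return cosAngle / (1.0 + dist * dist);   // simple distance falloff (a tuning choice)
}

// Normalized blend weights for all cameras at one mesh point; the best-placed
// camera ends up contributing most to that area of the compound portrait.
template <std::size_t N>
std::array<double, N> blendWeights(const Vec3& point, const Vec3& normal,
                                   const std::array<Vec3, N>& cameraPositions)
{
    std::array<double, N> w{};
    double sum = 0.0;
    for (std::size_t i = 0; i < N; ++i) {
        w[i] = textureWeight(point, normal, cameraPositions[i]);
        sum += w[i];
    }
    if (sum > 0.0) for (double& v : w) v /= sum;
    return w;
}

The normalized weights would then be used to blend the texels each camera contributes to the corresponding region of the portrait mesh.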
FIG. 6 depicts a master flow of control within the computer system 108. A more detailed description of the flow can be found in the attached pseudocode appendix, which can be used for implementing the system in a preferred language such as C++. In this flow, the system determines 602 whether the handle of the assembly is being touched. This determination can be made conventionally by, for example, using touch sensors on the handle. If so, the system determines 604 the view, viewing angle, frustum, etc. of the viewer and communicates such to the other systems so that they can depict to the other users the view of the viewer moving the display (see FIG. 8 ). In this way, the other users can be alerted to what the viewer desires to point out, etc. The system also moves the assembly and adjusts the local view based on the inputs from the handle. If the user is not touching the control handle, the system determines 606 the head position and eye view using conventional eye tracking and object motion detection procedures and moves 608 the display to keep the head in the display stereo view/sound range and the sensor sensing range using conventional position prediction techniques. The display is moved by conventionally controlling the robotic arm 110 based on a desired position determined by the position prediction. As the display is automatically moved, the system also determines 610 whether the display will collide with other objects on the desktop, such as another computer, a telephone, etc. This collision detection is also performed in a conventional manner. If a collision is imminent, the motion is stopped 612. The eye tracking also determines when the user is no longer looking at items that are deemed important within the virtual world display, such as when the user glances at an object in the local environment or room, such as a piece of paper lying on the desk top or at another computer display elsewhere in the room. When the system determines that the user is not looking at a defined area of interest within the virtual world depicted in the display, head tracking and motion of the assembly by the robotic arm stops. -
FIG. 7 depicts the flow of operations of the system while the handle of the assembly is being touched. A more detailed description of the flow can be found in the attached pseudocode appendix, which can be used for implementing the system in a preferred language such as C++. If the handle is being touched 702, the viewing frustum is determined 704 and communicated to the other systems. In addition, the local cut plane is highlighted 706, along with other user interface elements, such as orientation reference guides, and this information is also communicated to the other users' systems. When this communication is finished, the system calculates 708 the stereo views of other users along with shared view information and projects 710 an integrated view to the viewer. -
FIG. 8 depicts horizontal limits of motion of the head 806 relative to the display/sensor array 808 for head position sensing and the robotic arm 810. As the head 806 approaches and reaches the limit 802, the system predicts the limit encounter and moves the arm 810 and/or swivels the display/sensor array 808. The position of the eyes relative to the display/sensor array is used to help determine whether the display 808 needs to be swiveled (or tilted). The limits are typically specified by the optics of the stereo view system being used for image projection. The viewing geometry of a particular lenticular or other autostereo screen being used for the display is used to set such limits. -
FIG. 9 depicts the system making a predictive guess of a future or derived head position 902 of a moving head 904 using conventional eye tracking 906 and Kalman filter based prediction of future position. -
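- The prediction step can be illustrated with a simplified, per-axis constant-velocity (alpha-beta) predictor in C++. A full implementation would, as stated, use a Kalman filter over the complete 6D head pose; the class name and gain values shown here are illustrative assumptions only.

// Simplified per-axis constant-velocity (alpha-beta) predictor. A production system
// would use a Kalman filter over the full 6D head pose, as the description states;
// this shows the same predict-then-correct structure in a compact form.
class AxisPredictor {
public:
    // Feed one new measurement taken dtSeconds after the previous one.
    void update(double measured, double dtSeconds) {
        if (dtSeconds <= 0.0) return;
        const double predicted = position_ + velocity_ * dtSeconds;
        const double residual  = measured - predicted;
        position_ = predicted + alpha_ * residual;
        velocity_ = velocity_ + (beta_ / dtSeconds) * residual;
    }
    // Expected position horizonSeconds from now; used to move the arm ahead of the head.
    double predict(double horizonSeconds) const {
        return position_ + velocity_ * horizonSeconds;
    }
private:
    double position_ = 0.0;
    double velocity_ = 0.0;
    double alpha_ = 0.85;   // measurement trust (gain values are illustrative assumptions)
    double beta_  = 0.005;
};

Three such predictors (or one filter over the full pose) would supply the derived head position 902 that the arm controller targets.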
FIG. 10 shows how the display assembly 1002 on the end of the robotic arm 1004 is automatically moved or swiveled 1006 to maintain the head in a desired center of the viewing/sensing range rather than by moving the arm. -
FIG. 11 shows how the arm 1102 is automatically moved 1104 to provide an extended range 1006 of head motion where the user moves his head from a first position 1108 to a second position 1110 while the system keeps the viewer's head within the left 1112 and right 1114 limits. FIG. 11 also shows a situation where the user may be looking at a backside of a 3D object or scene being displayed in the first position 1108 and the front/left side of the object in the second position 1110. With this automatic movement capability and the ability to view the scene within a viewing range, the users can now look at each other as well as at different portions of the object. -
FIG. 12 shows how several viewers in different locations can move their heads 1202-1208 while using the system and view others in the group as well as other parts of the common 3D scene during a collaboration. The users 1202-1208 have moved their heads within the head position tracking limits while their eyes have moved to look obliquely through the displays. The system tracks the eye movements of the users 1202-1208 and adjusts their view into the scene accordingly. The relative spatial positions of the users can be defined with great flexibility. Users can be close to each other or far from one another, and can be seated equally around a table or gathered in an audience in front of a user who is giving a lecture. -
FIG. 13 depicts a user 1302 manually moving 1304 the display to look at a particular part of the scene or at another user by grabbing a side of the display assembly. This particular example of motion control does not use the handle and relies on the feedback from the position sensors in the robotic arm and display assembly head to make adjustments to the display view, etc. - The above-discussed figures show the user moving essentially horizontally, the system tracking the user and moving the display accordingly. The system is also capable of moving the display vertically and at angles.
- The present invention also uses the conventional virtual reality capabilities that allow a user to view a 3D scene from multiple changing perspectives and that allow other views, such as a view of another user, to be combined in the same view space.
- The present invention can incorporate a merged dual-exit pupil display as its display as depicted in
FIGS. 14 and 15 . There have been varied autostereo displays using multiple exit pupils, but they have either required very large footprints to handle the optics to make large exit pupils, or have demanded an artificially small amount of head motion from the user, so that the user can see small exit pupils. The invention makes smaller exit pupils 1500 usable by moving them to match the user's moving eye positions. In an embodiment, a variation of the arm 1402/1502 is required which is hollow and capable of supporting mirrors 1504 in its joints. One display 1506/1508 for each eye is placed in the base 1510 and combined with a combiner 1512. These are preferably DLP or LCOS micro-displays illuminated by LEDs or other light sources. Powered mirrors are placed in the joints, so that the invention functions like a periscope, incorporating the optical properties of a stereo microscope. A holographic optical element 1512 is one suitable choice for the final powered optical element, coincident with the plane of the sensor/display assembly, in order to reduce weight. - The aspect of the invention of placing sensors and displays in motion to approximately keep track of a user's head provides multiple benefits: a) Improved integration of virtual and physical tools: With the invention it is easy to look into the 3D scene and then out again while seated, allowing users to easily divert attention between people and things depicted in a virtual space and other people and things present in the physical environment. A user can easily use conventional PC tools and immersive virtual world tools in the same work session. b) Emulation of other user interface designs: The invention can emulate a conventional PC display by defining a virtual PC display at a certain position in the virtual world. When the invention's display is moved to the corresponding physical position it effectively acts as a physical simulation of a conventional PC at the same location. Similarly, the invention can be used to emulate command/control centers, display walls, and other user interface designs. c) Improved upper-body mobility for seated users of tele-immersion services: Available eye tracking technologies, which are required both for facial reconstruction and for the control of autostereo renderings, do not track eyes within the full normal range of human head motion during the course of a conversation in which a person might be looking around at multiple remote participants. By coupling eye-tracking sensors to the mobile display that is allowed to move in approximate conjunction with the eyes that are being tracked, sufficient performance is achieved to support a multi-person conversation with diverse relative positions of participants. The same argument is generalized to all visual sensors. A single camera pointed straight at a user is a common design in visual telecommunications, but this design fails to meet human factors requirements. Some degree of reconstruction of the user's head/face is needed to meet these requirements, so that accurate lines of sight can be supported, with each user appearing to the others at the proper perspective angle. Machine vision techniques and cameras have not performed well enough to achieve this when limited to fixed viewing positions, given normal human ranges of motion. Since with this invention cameras keep up with the face, existing cameras and machine vision algorithms can sense a user's face well enough for perspective alteration and other tasks.
d) Improved performance of autostereo displays: The invention enables rendering of precise points of view within autostereo displays and prevents users from seeing nil, pseudoscopic, or otherwise incorrect image pairs, even while supporting a full range of head motion. e) Improved independence of physical and virtual space allocation: The physical arrangement of displays in previous tele-immersion setups placed constraints on virtual participant arrangements. For instance, in order for a user to be able to see remote users to the left and to the right at a virtual table, there had to be local physical displays to the left and right to support sight lines to view those remote users. If a tele-immersive meeting using fixed displays has more than a few participants, the display requirements become expensive and impractical. The invention's single mobile display allows users to look in any direction and, thus, it accommodates any number or arrangement of remote participants with only a modest and fixed requirement for local physical space. f) Improved exploration of volumetric data: With the present invention, by equating physical display position and orientation with virtual viewing frustum, the user's brain is relieved from having to perform a 6D transformation that confuses many users in typical immersive systems. This is significant in medical and scientific applications involving selecting sectional views of volumetric data. g) Improved user interface for implicit communication of interest and activity between users: With the invention, users can see renderings of the locations and projective contents of the mobile screens other participants are viewing the world through, so each user can tell what the others are paying attention to. Since the invention makes it easy to perform planar selections and manipulations in addition to point-based ones, it is easy to design visualizations of what other participants are doing. Users see the heads of other users, the screens they are using, and the ways that those screens are coupled to virtual objects that are being transformed. h) Reduced impact on the local shared physical environment: The invention can be desk-mounted and doesn't require low light conditions. i) Improved sound system for collaboration in a shared physical facility: Headphones excel at 3D audio effects, while speakers, though convenient, don't produce these effects well when placed at conventional distances, despite a great deal of effort by many labs to get them to do so. Speakers can also be loud when placed conventionally and this can disturb others in a work environment. By coupling near-field speakers approximately to head position, the invention provides 3D sound at low volumes without head contact and without demanding any time to get into or out of the interface. A similar issue exists with microphones. A mobile microphone or microphone array will pick up the voice more consistently. j) Improved integration of audio, haptic, and visual user interface modalities: The invention can be used for planar exploration of a scalar or vector volumetric field, or even one with curl. The user interface of exploration using any of the three above sensory modalities is identical (moving the display), and this tight integration will make it easier to train and collaborate with users who have certain disabilities. That is to say, a blind user and a deaf user could each explore a virtual object in similar ways, and thus collaborate more effectively.
For the blind user, a haptic display, as described in detail in the pseudocode below, will be available, in addition to an audio display. For instance, the center of density, as calculated to provide haptic feedback of the location of a tumor in the pseudocode below, could also be used as the source of a virtual sound source using conventional 3D sound rendering techniques.
- As can be seen from the above discussion and the attached drawings, the present invention solves a number of problems related to positions of sensors and displays. The invention provides autostereo without constraining user position unacceptably, provides headphone-like 3D audio performance without headphones, performs visual facial sensing without constraining user position unacceptably, provides consistent illumination of the user's face, isolates the user's voice without constraining user position unacceptably, provides a compact desktop implementation, facilitates instant-in-and-out, easy overall workflow when used in conjunction with other user interfaces, easily depicts what other users are paying attention to and doing, and provides 6 degrees of freedom of the physical display and the virtual viewing frustum, which are equivalent, making it easier for users to understand six degree of freedom navigation.
- Other techniques can be used for head position and orientation sensing. For example, a 3D magnetic field based sensor system, such as the Polhemus sensor system available from Polhemus of Colchester, Vt., can be worn on the user's head. These sensors can also be used to warn the user to manually move the display with the attached sensors when the user's head position is reaching a limit.
- The invention arm can be mounted on a floor-standing pedestal, or on a rolling pedestal. The arm can be ceiling-mounted. The arm can be mounted on a powered mobile base, so that the base moves on a table or other surface in addition to the other motions described above. A mobile floor-mounted base can be incorporated to make the invention functional for a walking user.
- The display/sensor assembly can be hand-supported, if position and orientation are sensed using sensors such as those described above which do not require a rigid mechanical linkage. The display/sensor assembly can be hand-supported and wireless, using protocols, such as Bluetooth, to connect all components with computation resources.
- The arm can be mechanically supported, but manually moved.
- The invention display can be a transparent or semi-transparent surface that can present to the user superimposed projected images over the physical scene which is visible beyond the display surface. In this case, the invention incorporates the functionality of “Augmented Reality” displays (which are well known). When an “Augmented Reality” type display is chosen, the arm can be mounted on the inside surface of a vehicle. This can be done to provide simulated presence of other passengers in the vehicle, such as flight instructors (in the case of an aircraft). Another example of this variation is a set of commuter trains with invention systems present in each train, so that passengers on different trains could simulate being on the same train at once in order to have a meeting while commuting.
- The arm can be supported by the human body through a mounting system that attaches to a helmet, or directly to the human head, shoulders, and/or waist. When attached to the head, the invention resembles a head-mounted display, but is unlike other head-mounted displays in that a) there is sufficient clearance from the face for facial sensing to support tele-immersion, and b) small amounts of motion of the display relative to the head are acceptable because the techniques described throughout this patent compensate for them.
- The screen and other components can be mounted on the mechanical arm using clips or clamps or other easily disengaged fasteners. This facilitates rapid changing of the choice of components present in the invention. For instance, a user can switch between autostereo and higher resolution non-stereo displays.
- The invention can be constructed as a product that includes the arm and the software described in the pseudocode below, with each user adding sensing and display components according to individual preferences.
- The invention can incorporate a conventional computer display, mounted on the reverse side of the autostereo display, facing in the opposite direction. When the user is performing conventional computer tasks, the arm swivels the display/sensor assembly so that the conventional display is facing the user, and when the user wishes to perform tasks suitable for the invention, the assembly is turned so that the autostereo display is facing the user. The turning action (which switches from an autostereo to a conventional display) can be triggered when the user moves the assembly so that it is coincident with the placement of a simulated conventional computer display in the virtual space.
- The invention can incorporate a front or rear projection screen as its display, where the display surface is in motion, but the light source is either stationary or in motion to a lesser degree. In this case the projected image must be directed and distorted to correct for the changing relative placements of the light source and the projection surface, which can be accomplished by various established means, such as moving mirror and lens systems and computer graphic techniques for simulated optical anti-distortion.
- The invention can incorporate a screen element which, rather than being flat, as described above, is concave, in order to provide the user with an effectively wider-angle display.
- A subset of the components described as being mounted on the arm can instead be mounted separately on a stationary or less mobile platform. For instance, a stationary light source can be substituted for the mobile light sources preferred in this description, or a stationary audio sensing or display system can be substituted.
- The invention can incorporate only a subset of the displays or sensors described in the preferred embodiment. For instance, a silent version might incorporate only the visual components, and none of the audio ones.
- A barrier can be incorporated which surrounds the space to the rear of all the positions the arm and the display/sensor assembly can attain, with sufficient clearance for operation, but which is open in front to give the user access to the device. This is an alternative or enhancement to relying on collision detection and prevention subsystems to prevent collisions between the arm or assembly and people or objects in an environment. An embodiment of this barrier is an approximate section of a sphere in shape, transparent and composed of a lightweight material like plastic. The barrier can be made in several sections that can be attached or detached to facilitate transport.
- The mobile portions of the invention can be made largely of low-weight, soft materials. For instance the display screen can be a soft rear-projection surface, such as plastic, or a flexible (such as OLED) display. Soft audio speakers are available which are made of piezo and other materials. While soft versions of the sensor components (such as cameras, microphones, and position/orientation sensors) are not available at this time, versions of these components are available which are low weight and small. A version of the invention in which the majority of the mass of the components in motion is comprised of soft, lightweight materials will have reduced requirements for collision avoidance.
- The invention can incorporate additional optical components to provide accommodation relief for certain autostereo displays. That is to say, the distance at which the user's eyes must focus to resolve the stereo images presented in the display can be changed by incorporating these optical elements. A set of lenses, Fresnel lenses, holographic optical components, or other optical devices can be mechanically connected to the invention and positioned appropriately between the user's eyes and the display. It should be pointed out that these optical components typically only function under narrow positioning tolerances, so the same technique that is used to make other invention components function, of having the components move to track the head's location, makes it possible to incorporate such optical elements.
- The accommodation relief optical elements described in the previous paragraph can be mounted on a separate arm or a subordinate arm. This is desirable if the positioning tolerances of the optical components are tighter than those of the display. The same control software described for the display would be applied to the motion of the optical components, but with tighter adjustments for tolerances as described in detail in the pseudocode below.
- The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
- FACEVARS: Most recent measured user's head/eyes Position/Orientation (6D relative to COCODEX base)
FACEFUTURE: Predicted near term user head/eyes Position/Orientations (6D list, path, or similar representation)
CONFIDENCE: indication of how well the user's head is currently being tracked
FACE-PROTOTYPE (which can be generic or user-specific); a labeled graph of 3D points representing typical relative placements of facial landmarks; can be simple geometry, or can incorporate biomechanical modeling.
FACEPOSEFUTURE; A prediction of geometric distortions of FACE-PROTOTYPE (a set of future path predictions corresponding to each point in the graph)
ASSEMVARS: Most recent measured display/sensor assembly Position/Orientation (6D relative to COCODEX base)
ASSEMFUTURE: Predicted near term display/sensor assembly Position/Orientations (6D list, path, or similar representation)
UI-VARS: State of such things as buttons, dials, and other conventional UI components mounted on the display/sensor assembly or elsewhere on COCODEX
WORKING VOLUME: a volume relative to the Position/Orientation of the display/sensor assembly within which display and sensor functions related to the user's face will work; it is the intersection of the individual volumes in which autostereo visual effects, 3D audio, and the various sensors such as cameras and microphones will have adequate functional access to the user's face.
IDEAL VOLUME: a volume within the WORKING Volume that serves as a safety target for maintaining the relative positions and orientation of the display/sensor assembly to the user's face
FACE-TRACKING VECTOR—The change in the Position/Orientation of the display/sensor assembly that is currently projected to keep the user's eyes and the rest of the user's face in the IDEAL VOLUME (in the event that it would otherwise fall out of the IDEAL VOLUME)
ROBOTIC-MOTION-CALIBRATION-TABLE: A pair of sparsely filled in 3D vector fields; the first contains instructions that have been sent to the particular robotic arm installed locally, and the second contains the resulting move that actually took place.
FORCE-RESISTANCE-VECTOR: Vector indicating one component of haptic feedback control
REPULSION-FIELD-VECTOR: Vector indicating another component of haptic feedback control
PLANAR Haptic feedback map: A vector field that stores results in advance to speed the calculation of current values for the above vectors
USER-APPLIED-FORCE-VECTOR: Vector indicating the force user is applying to the display/sensor assembly by grabbing it (is nil when the display/sensor assembly is not being grabbed)
TELE-LAYOUT of shared virtual environment: - A data structure including at least:
-
- Volumetric, polygon-plus-texture, or other 3D representation of local environment, including desk surface, perhaps walls, etc
- Similar representations of remote environments of other users
- Additional virtual elements, such as virtual display walls, command control displays, conventional 2D computer displays to be simulated in the virtual space, and other 3D objects and data displays.
- A seating plan: The relative positions and orientations of all local environments in a merged tele-immersive setting.
- Design elements which merge, hide, or otherwise manage the boundaries of the renderings of local environments that can be seen remotely
- Conventional data associated with online collaborative efforts: List of participants; times when certain meetings are scheduled to start and end, lists of members who potentially join if they are not already present; information related to quality of network services for each participant; billing or other administrative data
AREA OF INTEREST: a volume within a TELE-LAYOUT that contains representations of displays, simulation components, data displays, and other elements that a user might wish to look at
COCODEX-AS-AVATAR: a binary mode indicator
(End definition of global data structures)
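- COMMENT The following C++ sketch shows one hypothetical way the global data structures above could be declared if, as suggested, the pseudocode were implemented in C++. The field choices, units, and container types are illustrative assumptions rather than requirements.

#include <array>
#include <string>
#include <vector>

struct Pose6D {                 // position + orientation relative to the COCODEX base
    double x, y, z;             // meters
    double roll, pitch, yaw;    // radians
};

struct Volume {                 // an axis-aligned box is used here for simplicity
    Pose6D center;
    double halfExtentX, halfExtentY, halfExtentZ;
};

struct FaceState {
    Pose6D faceVars{};                   // FACEVARS: latest measured head/eye pose
    std::vector<Pose6D> faceFuture;      // FACEFUTURE: predicted near-term poses
    double confidence = 0.0;             // CONFIDENCE: 0 (lost) .. 1 (locked on)
    std::vector<Pose6D> facePoseFuture;  // FACEPOSEFUTURE: predicted landmark motion
};

struct AssemblyState {
    Pose6D assemVars{};                  // ASSEMVARS: latest measured assembly pose
    std::vector<Pose6D> assemFuture;     // ASSEMFUTURE: predicted assembly poses
    Volume workingVolume{};              // WORKING VOLUME: where display and sensors function
    Volume idealVolume{};                // IDEAL VOLUME: safety target inside the working volume
};

struct HapticState {
    std::array<double, 6> forceResistanceVector{};   // FORCE-RESISTANCE-VECTOR
    std::array<double, 6> repulsionFieldVector{};    // REPULSION-FIELD-VECTOR
    std::array<double, 6> userAppliedForceVector{};  // USER-APPLIED-FORCE-VECTOR
    bool assemblyGrabbed = false;
};

struct TeleLayout {                      // TELE-LAYOUT: shared virtual environment
    std::vector<std::string> participants;
    Volume areaOfInterest{};             // AREA OF INTEREST
    bool cocodexAsAvatar = false;        // COCODEX-AS-AVATAR mode indicator
    // Local/remote environment representations, seating plan, and transition
    // design elements would also live here.
};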
- IF a TELE-LAYOUT is NOT selected
  - THEN
    - CALL FUNCTION SETUP TELE-LAYOUT
- CALL FUNCTION COCODEX_AUTO_SENSING
- IF Confidence that the user's head is being tracked is high AND COCODEX-AS-AVATAR mode is NOT activated for local unit
  - THEN
    - CALL FUNCTION KEEP_TRACK_OF_FACE
    - CALL FUNCTION KEEP_COCODEX_IN_FRONT_OF_FACE
- ELSE IF Confidence that the user's head is being tracked is low AND COCODEX-AS-AVATAR mode is OFF for local unit
  - THEN
    - CALL FUNCTION FACE_NOT_CURRENTLY_TRACKED
- ELSE IF COCODEX-AS-AVATAR mode is ON for local unit
  - THEN
    - CALL FUNCTION COCODEX_AS_AVATAR
- CALL FUNCTION COCODEX_HAPTICS
- CALL FUNCTION PREPARE_COMPOUND_PORTRAIT
- CALL FUNCTION COCODEX_NETWORK_COMMUNICATIONS
- CALL FUNCTION UPDATE_LOCAL_VIRTUAL_WORLD
- CALL FUNCTION AUTOSTEREO_RENDERING
- CALL FUNCTION COCODEX_SOUND
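- COMMENT A minimal C++ rendering of the master loop above is sketched here, assuming the per-pass functions exist with the (hypothetical) signatures shown and that a SystemState structure aggregates the global data; the confidence threshold is an illustrative assumption.

// Aggregated state; in a real implementation this would hold the structures above.
struct SystemState {
    bool running = true;               // set false to exit the loop
    bool teleLayoutSelected = false;   // whether a TELE-LAYOUT has been chosen
    bool cocodexAsAvatarMode = false;  // COCODEX-AS-AVATAR mode for the local unit
    double faceConfidence = 0.0;       // CONFIDENCE from the data structures above
};

constexpr double kHighConfidence = 0.8;   // threshold is an illustrative assumption

// Per-pass functions named in the pseudocode (declarations only in this sketch).
void setupTeleLayout(SystemState&);
void cocodexAutoSensing(SystemState&);
void keepTrackOfFace(SystemState&);
void keepCocodexInFrontOfFace(SystemState&);
void faceNotCurrentlyTracked(SystemState&);
void cocodexAsAvatar(SystemState&);
void cocodexHaptics(SystemState&);
void prepareCompoundPortrait(SystemState&);
void cocodexNetworkCommunications(SystemState&);
void updateLocalVirtualWorld(SystemState&);
void autostereoRendering(SystemState&);
void cocodexSound(SystemState&);

void cocodexMainLoop(SystemState& s) {
    while (s.running) {
        if (!s.teleLayoutSelected) setupTeleLayout(s);

        cocodexAutoSensing(s);   // updates ASSEMVARS/ASSEMFUTURE and UI-VARS

        if (s.faceConfidence >= kHighConfidence && !s.cocodexAsAvatarMode) {
            keepTrackOfFace(s);
            keepCocodexInFrontOfFace(s);
        } else if (!s.cocodexAsAvatarMode) {
            faceNotCurrentlyTracked(s);   // wait, search, or recover
        } else {
            cocodexAsAvatar(s);           // track the designated remote user instead
        }

        cocodexHaptics(s);
        prepareCompoundPortrait(s);
        cocodexNetworkCommunications(s);
        updateLocalVirtualWorld(s);
        autostereoRendering(s);
        cocodexSound(s);
    }
}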
- COMMENT This function describes the most “characteristic” or central feature of COCODEX. The IDEAL VOLUME is contained within the WORKING VOLUME, so by testing for near term divergence from the IDEAL VOLUME, the head is continuously kept within the WORKING VOLUME. If a set of fast, high quality sensors and displays is used, the two volumes can be almost the same, while low cost sensors require a larger difference between the two volumes. There are, of course, other techniques that can be used instead to express variable tolerances in control software.
Note that in the pseudocode given here, only one local user is assumed. The Eyematic type of facial feature tracking has already been demonstrated to be capable of tracking four local users, however. Some autostereo screens, such as lenticular or parallax barrier displays, can support enough distinct views in an appropriate configuration to support more than a single local user as well. All the code for this and other functions can be easily extended to support multiple local users, provided the display and sensor subsystems can support a sufficiently large IDEAL zone to contain them all at once. - FOR a set of near term points in time
  - READ the value predicted for that point in time stored in ASSEMFUTURE
  - CALCULATE what the IDEAL VOLUME would be in terms of a coordinate system originating in the COCODEX base for that point in time
  - COMPARE with values for the same point in time stored in FACEFUTURE
  - IF values in FACEFUTURE diverge from predicted values for the IDEAL VOLUME
    - THEN
      - CALCULATE the new arm position that would MOST reduce divergence, centering the predicted IDEAL VOLUME on the predicted FACEVARS
      - CALCULATE whether the new viewing frustum, were the arm to be moved as calculated above, would still intersect the current AREA OF INTEREST
      - IF the new frustum would still intersect the AREA OF INTEREST
        - THEN
          - UPDATE FACE-TRACKING VECTOR with a vector that would move a perfectly responsive arm to the new position calculated above
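- COMMENT A minimal C++ sketch of the divergence test just described follows, assuming axis-aligned volumes, translation-only corrections, and a separately supplied area-of-interest test; a full implementation would also handle orientation of the volume and the arm.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Reduced 6D pose and axis-aligned volume, as in the data-structure sketch above.
struct Pose6D { double x, y, z, roll, pitch, yaw; };
struct Volume { Pose6D center; double halfExtentX, halfExtentY, halfExtentZ; };

bool insideVolume(const Volume& v, const Pose6D& p) {
    return std::abs(p.x - v.center.x) <= v.halfExtentX &&
           std::abs(p.y - v.center.y) <= v.halfExtentY &&
           std::abs(p.z - v.center.z) <= v.halfExtentZ;
}

// Supplied elsewhere: does the display frustum at this pose still cover the AREA OF INTEREST?
bool frustumIntersectsAreaOfInterest(const Pose6D& assemblyPose, const Volume& areaOfInterest);

// Returns true and fills faceTrackingVector when a corrective move is needed.
bool computeFaceTrackingVector(const std::vector<Pose6D>& assemFuture,
                               const std::vector<Pose6D>& faceFuture,
                               const Volume& idealVolumeLocal,   // IDEAL VOLUME relative to the assembly
                               const Volume& areaOfInterest,
                               Pose6D& faceTrackingVector)
{
    const std::size_t steps = std::min(assemFuture.size(), faceFuture.size());
    for (std::size_t i = 0; i < steps; ++i) {
        // Express the predicted IDEAL VOLUME in base coordinates (translation only here).
        Volume ideal = idealVolumeLocal;
        ideal.center.x += assemFuture[i].x;
        ideal.center.y += assemFuture[i].y;
        ideal.center.z += assemFuture[i].z;

        if (insideVolume(ideal, faceFuture[i])) continue;   // head stays inside: nothing to do

        // Candidate assembly pose that re-centers the IDEAL VOLUME on the predicted head.
        Pose6D candidate = assemFuture[i];
        candidate.x += faceFuture[i].x - ideal.center.x;
        candidate.y += faceFuture[i].y - ideal.center.y;
        candidate.z += faceFuture[i].z - ideal.center.z;

        // Only keep tracking while the view would still cover the AREA OF INTEREST.
        if (!frustumIntersectsAreaOfInterest(candidate, areaOfInterest)) return false;

        faceTrackingVector = Pose6D{candidate.x - assemFuture[i].x,
                                    candidate.y - assemFuture[i].y,
                                    candidate.z - assemFuture[i].z,
                                    0.0, 0.0, 0.0};
        return true;
    }
    return false;
}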
- COMMENT As was pointed out earlier, currently available sensor subsystems for finding and tracking facial features don't function well enough to support tele-immersion. This is because they only work if the user's face remains within an untenably limited range of positions and orientations. COCODEX fundamentally addresses this problem by putting the subsystems in motion to keep up with the face as it moves. When cost or other considerations result in exceptionally poor subsystem performance, it is sometimes necessary to combine multiple instances of particular sensor subsystems or multiple types of subsystems to gain a level of performance necessary for COCODEX to meet human factors requirements. The particular choices of how to do this are within the range of typical skills in the art, and illustrate how the invention enables and improves such techniques. The assumed facial feature finding subsystem in this pseudocode is the machine vision-based technology initially described by Eyematic. Another example of a potential subsystem is IBM's BlueEyes.
Four cameras surrounding the display, each running the Eyematic feature-finding algorithms, are assumed, though the number and placement can vary. Each camera will supply image streams used by software to attempt to find a set of facial features. The varied placement will result in the cameras having access to different subsets of the face. For instance, a camera looking at the face from the left might not detect position of the right nostril because the nose will be in the way. While this might sound humorous, it's actually a serious problem in face tracking. Another common problem is a user's hand temporarily obscuring a portion of the face from the point of view of one camera, but not all cameras at once. This function performs specialized sensor fusion to address that class of problem. -
- IF multiple facial feature finding subsystems with unique physical perspectives are used
- THEN
- QUERY each subsystem
- IF the format of the output from the vision subsystems is 2D
- THEN Perform parallax calculations to derive 3D positions of features by comparing results from sensors or cameras at different positions
- FOR each potential face detection (as expressed now in 3D terms)
- Scale and rotate potential detected facial features into a normal form
- Compare potential detected facial features with FACE-PROTOTYPE
- IF there is good fit between a sufficient number of features in the potential detected facial features and the face prototype
- THEN
- FOR each potential detected facial feature (or only for those that are sufficiently divergent from the face prototype)
- DETERMINE if it was visible to the camera(s) that detected it
- IF it was not visible
- THEN replace it with the values from the face prototype
COMMENT This is a conventional calculation of occlusion determined by the geometry of the camera location and the hull of the face prototype.
- ELSE
  - Ignore that detection instance
- IF multiple facial feature finding subsystems with unique physical perspectives are used
- APPLY Bayesian or other conventional techniques to achieve sensor fusion, turning the multiple potential face detections into a single, more robust face detection
- IF latest head position is impossible (too fast a jump from recent positions to be physiologically possible)
  - THEN
    - Ignore reading and lower confidence level
  - ELSE
    - Raise confidence level
- PREDICT near term head Position/Orientations using Kalman filters or other conventional predictive filter techniques
- STORE data in FACEFUTURE
- PREDICT near term facial landmark positions, based on variations from recent results
- STORE data in FACEPOSEFUTURE
- COMMENT FACEPOSEFUTURE will play a role in reducing apparent latency in the visual channel for remote users looking at the local user.
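- COMMENT The occlusion fill and sensor fusion steps above can be sketched in C++ as follows; the Point3 and Detection types, the visibility flags, and the simple confidence-weighted average standing in for Bayesian fusion are all illustrative assumptions.

#include <cstddef>
#include <vector>

struct Point3 { double x = 0, y = 0, z = 0; };

// One camera's (or subsystem's) detection of the facial landmarks, already in 3D.
struct Detection {
    std::vector<Point3> landmarks;   // one entry per facial landmark
    std::vector<bool>   visible;     // was the landmark actually seen by this camera?
    double confidence = 1.0;         // per-camera detection confidence
};

// Replace landmarks the camera could not actually see with the prototype values.
void fillOccludedFromPrototype(Detection& d, const std::vector<Point3>& prototype) {
    for (std::size_t i = 0; i < d.landmarks.size() && i < prototype.size(); ++i) {
        if (i < d.visible.size() && !d.visible[i]) d.landmarks[i] = prototype[i];
    }
}

// Confidence-weighted average of the per-camera detections: a crude stand-in for
// the Bayesian fusion called for above, showing how one robust face is produced.
std::vector<Point3> fuseDetections(const std::vector<Detection>& detections) {
    if (detections.empty()) return {};
    const std::size_t n = detections.front().landmarks.size();
    std::vector<Point3> fused(n);
    std::vector<double> weight(n, 0.0);
    for (const Detection& d : detections) {
        for (std::size_t i = 0; i < n && i < d.landmarks.size(); ++i) {
            // Landmarks that were filled in from the prototype are trusted less.
            const bool seen = (i < d.visible.size() && d.visible[i]);
            const double w = seen ? d.confidence : 0.25 * d.confidence;
            fused[i].x += w * d.landmarks[i].x;
            fused[i].y += w * d.landmarks[i].y;
            fused[i].z += w * d.landmarks[i].z;
            weight[i] += w;
        }
    }
    for (std::size_t i = 0; i < n; ++i) {
        if (weight[i] > 0.0) {
            fused[i].x /= weight[i];
            fused[i].y /= weight[i];
            fused[i].z /= weight[i];
        }
    }
    return fused;
}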
-
-
- IF the reason for not tracking is ONLY that the latest predicted frustums (each eye has a different one) would NOT have intersected the AREA OF INTEREST
COMMENT If a user looks away from the area of interest, COCODEX stops tracking the user's face. This is the means by which the concept of “Pseudo-immersion” is implemented. A user can look away from remote users, virtual displays, and whatever else is deemed important on the other side of the COCODEX screen in order to pay attention to a local physical person or tool. The ability to quickly move between physical and virtual interactions is one of the central contributions of the COCODEX design.
It is also important for human communications, since in many cultures people look away from one another much of the time while speaking.
This capability in the control software also influences the choice of the physical display component. For instance, a spatial audio display, with enhanced functionality due to the reduced range of placements relative to the user's head, can provide an audio cue when the user is not looking at the display. A remote participant can speak, and the local user will turn to look in the direction of the apparent source of the remote participant's voice. The local user is then looking back into the AREA OF INTEREST, which results in the tracking process being reinitiated. The desirability of this scenario of use, in which the local user has instant access to both local and remote people, tools, and other resources, suggests the utility of the flat display as a choice, even though peripheral vision will be lost as a result. A happy coincidence of the COCODEX design is that lower-cost flat displays happen to provide enhanced value because of the strategy of “Pseudo-immersion” described here. - THEN Sensor/display assembly should wait where it is, since the user's head will probably re-enter in a similar place
- ELSE IF the reason for not tracking is that the user is grabbing assembly
-
- THEN Assume head remains in last predicted position and point there again when the user lets go of the assembly
- ELSE IF tracking has been lost for unknown reasons
-
- THEN
- Adjust lighting elements mounted on COCODEX to compensate for local lighting conditions
COMMENT Currently available machine vision systems for sensing the human face are highly sensitive to lighting conditions. For instance, shadows caused by lighting from above can harm performance. LED or other lighting elements in the COCODEX display/sensor assembly provide a light source that moves approximately with the face to compensate for local light source anomalies. Comparing overall scene brightness between cameras mounted at different angles generates an approximate measure of the presence of this potential problem. In the event that there is heavy ceiling light, for instance, the lower LEDs, which face upwards, are more strongly illuminated to compensate.
- Use conventional incremental area search algorithms to move COCODEX arm to search for user
- Use conventional adaptive recovery techniques in case there's a software problem; Introduce drift into control parameters.
- If nothing works, eventually give up; set CONFIDENCE to nil
- ELSE (Suggesting the system was just turned on or it's been a long time since a user's head was tracked)
-
- CALL FUNCTION MOVE_COCODEX_ARM to move the assembly into default position (or whatever other action is deemed appropriate for “waiting”)
- CALL FUNCTION KEEP_TRACK_OF_FACE
- IF there is a detection instance
-
- THEN
- RAISE value of CONFIDENCE
- ELSE
- LOWER value of CONFIDENCE
COMMENT When CONFIDENCE gets high enough, this function is not called. The CONFIDENCE variable is being used here as a simple feedback signal to govern a pattern classification sub-system that will sometimes be well “locked on” to a pattern and sometimes not. Many other established methods are available as well.
- COMMENT There are three sources of motion of the COCODEX arm: Manual intervention by the user, and two automatic sources: Face tracking and haptic display. This function reconciles these control sources.
The most common form of haptic feedback is based on the idea of a single abstract point of contact between a haptic input/output device and a virtual model. An example of a common device which implements this type of haptic interaction is the Phantom arm. The Phantom can be pressed against the outside of a virtual object, for instance, allowing the contours of the object to be felt by the user. COCODEX can support point-based haptics, emulating a device like the Phantom. In that case, the center of the COCODEX physical screen is typically treated as the point of contact, and a graphical indicator of that point, typically crosshairs, is added to the TELE-LAYOUT.
COCODEX also supports a planar mode of haptic interaction. For planar interaction, the haptic properties of a set of points (in the planar area intersecting the virtual world that corresponds to the instant physical position of the COCODEX display) are combined into a display of force and resistance, including curl.
The PLANAR Haptic feedback map determines resistance and force to be displayed by the arm as a function of the position and orientation of the assembly at the end of the arm. The map is calculated as specified by the TELE_LAYOUT.
For instance, the TELE-LAYOUT can specify that scalar values associated with voxels be treated as resistance values. An example of when this is useful is in radiology. Darker voxels are set to be more resistant, so as the COCODEX assembly is manually guided through an area of volumetric data, a user feels the “center” of resistance of the display plane, corresponding to the location of a tumor. 3D volumes of scalar values can be analyzed using classical techniques to generate vectors for force field simulations. In other cases, vector information will already be defined for each voxel. This typically is the case in physical simulation applications, for instance. Another application is the creation of 6D “detents,” or “sticky” position/orientations for the assembly.
In this pseudocode, a distinction is drawn between resistance and force display, as expressed by FORCE-RESISTANCE-VECTOR and REPULSION-FIELD-VECTOR. These two domains need not be distinguished, but in practice most resistance information will be locally cached, such as volumetric medical imaging data, while most force field information, such as the “Repulsion field” of another user's head (explained below), is remote and therefore has network latencies; thus the separation into distinct calculations and data structures. An example of a use of the repulsion field is to reduce the chances that a local COCODEX screen position will intersect a remote collaborator's head. Voxels in the remote person's head are designated to be repulsive. When the head of that remote user approaches the corresponding location of some other user's COCODEX screen, that screen is pushed aside.
A “tele-haptics” capability is also supported. This allows remote collaborators to “Feel each other” as they co-explore complex data such as volumetric medical or geographical information. The visual display of data is tightly coupled with haptic and audio displays, creating a multimodal interface. A notable advantage of COCODEX is that capabilities such as tele-haptics are accessed using the same instrumentation principles as visual and audio features, so that individuals who have deficits or special abilities in particular sensory modalities can interact with other individuals with different deficits or abilities, without making any change to the interaction practice or instrumentation. -
- CALCULATE any changes needed to Haptic feedback map for current virtual world
- QUERY appropriate sensors and perform sensor fusion calculations to determine if user is grabbing Assembly
COMMENT There are various ways a grab can be detected, including externally induced changes in force, rotation, or position sensors in the arm. An alternative is that the user can be required to touch a specific place or device to indicate a desire to grab, requiring additional sensors dedicated to the purpose, such as buttons or capacitive coupling sensors. - IF user is grabbing assembly
- THEN
- CALCULATE force vector user is applying to arm
- STORE it in USER-APPLIED-FORCE-VECTOR
- ELSE (user isn't grabbing COCODEX)
  - SET USER-APPLIED-FORCE-VECTOR to nil
- IF the PLANAR Haptic feedback map includes scalar resistance values
-
- THEN
- CALCULATE the center of resistance for the area of voxels corresponding to the COCODEX display area (for clarity, use polar coordinate system)
- CONVERT the center of resistance to a vector centered on the center of physical connection between the COCODEX sensor/display assembly and the arm
- STORE result in FORCE-RESISTANCE-VECTOR
- ELSE
- SET FORCE-RESISTANCE-VECTOR to nil
- IF the PLANAR Haptic feedback map includes repulsion field values
-
- THEN
- CALCULATE the center and vector of repulsion for a volume of voxels containing the COCODEX display area (for clarity, use polar coordinate system)
- CONVERT the center and vector of repulsion to a vector centered on the center of physical connection between the COCODEX sensor/display assembly and the arm
- STORE result in REPULSION-FIELD-VECTOR
- ELSE
- SET REPULSION-FIELD-VECTOR to nil
- BLEND (FORCE-RESISTANCE-VECTOR and REPULSION-FIELD-VECTOR and USER-APPLIED-FORCE-VECTOR) with FACE-TRACKING VECTOR
COMMENT The term “blend” is used here for vector calculations since there will generally be additional calculations applied to each vector prior to being summed, including scaling, filtering, and biasing. - IF result would not cause face tracking to fail (if the face would still fall within the IDEAL zone)
- THEN
- CALL FUNCTION MOVE_COCODEX_ARM with the calculated vector
- ELSE IF tracking would fail AND user or application preferences indicate that approximate haptics are preferred over none at all
- REDUCE contribution of the BLENDED vectors (FORCE-RESISTANCE-VECTOR, REPULSION-FIELD-VECTOR, and USER-APPLIED-FORCE-VECTOR) without scaling back influence of FACE-TRACKING VECTOR so that the result lies within IDEAL zone
- CALL FUNCTION MOVE_COCODEX_ARM with the calculated vector
- ELSE IF user or application preferences indicate that haptics should be accurate if displayed at all
- CALL FUNCTION MOVE_COCODEX_ARM with the FACE-TRACKING VECTOR only
- ACTIVATE user interface elements to alert the user to the presence of the problem
- IF local and remote assemblies come into approximate alignment in virtual space AND local and remote COCODEX units are being grabbed
- THEN, initiate tele-haptics
- TRANSFORM remote user's USER-APPLIED-FORCE-VECTOR so that it is correctly oriented in the local space
- ADD result to local user's USER-APPLIED-FORCE-VECTOR
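- COMMENT The blending of the haptic and face-tracking vectors described in this function can be sketched in C++ as follows; the gains, the back-off schedule, and the IDEAL-zone test are placeholders for whatever a particular implementation uses.

#include <array>
#include <cstddef>

using Vec6 = std::array<double, 6>;   // x, y, z, roll, pitch, yaw

// r = a + gain * b
static Vec6 scaleAdd(const Vec6& a, const Vec6& b, double gain) {
    Vec6 r{};
    for (std::size_t i = 0; i < 6; ++i) r[i] = a[i] + gain * b[i];
    return r;
}

// Supplied by the face-tracking code: would this arm move keep the face in the IDEAL zone?
bool keepsFaceInIdealZone(const Vec6& proposedMove);

Vec6 blendArmCommand(const Vec6& faceTracking,
                     const Vec6& forceResistance,
                     const Vec6& repulsionField,
                     const Vec6& userAppliedForce,
                     bool approximateHapticsAllowed)
{
    const double kResistanceGain = 1.0, kRepulsionGain = 1.0, kUserGain = 1.0;  // tuning assumptions

    Vec6 haptic{};
    haptic = scaleAdd(haptic, forceResistance, kResistanceGain);
    haptic = scaleAdd(haptic, repulsionField, kRepulsionGain);
    haptic = scaleAdd(haptic, userAppliedForce, kUserGain);

    Vec6 blended = scaleAdd(faceTracking, haptic, 1.0);
    if (keepsFaceInIdealZone(blended)) return blended;

    if (approximateHapticsAllowed) {
        // Back off the haptic contribution until the face-tracking goal is preserved.
        for (double s = 0.75; s > 0.0; s -= 0.25) {
            Vec6 reduced = scaleAdd(faceTracking, haptic, s);
            if (keepsFaceInIdealZone(reduced)) return reduced;
        }
    }
    return faceTracking;   // accurate-haptics-or-none: fall back to face tracking alone
}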
- QUERY haptic subsystem on whether screen is being grabbed by user
- PERFORM collision avoidance procedure
- COMMENT Collision avoidance can be implemented using either COCODEX sensors or an additional collision avoidance system, or both.
Since COCODEX has a sensor array it can support collision avoidance without extra instrumentation, but there are multiple vendors of collision avoidance subsystems, so for the purposes of this pseudocode, collision avoidance isn't explained in detail. -
- IF (COCODEX is not being grabbed AND there is no indication of collision danger)
- THEN
- LOOKUP nearby positions in ROBOTIC-MOTION-CALIBRATION-TABLE
- BASED on data from above LOOKUP, calculate robotic hardware control signals that are most likely to move arm as requested
- IF hardware is predicted by calculations to be able to move as requested in this function call
- THEN
- MOVE arm according to calculations above
COMMENT IF not, then system will wait until a better opportunity comes along, usually a bigger move that avoids overshooting.
- CALL FUNCTION COCODEX_AUTO_SENSING
- COMPARE results with corresponding entries in ROBOTIC-MOTION-CALIBRATION-TABLE
- IF there is a discrepancy OR no corresponding entry yet exists
- THEN update calibration table
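- COMMENT One hypothetical C++ form of the ROBOTIC-MOTION-CALIBRATION-TABLE lookup and correction is sketched below; a real table would interpolate over a sparse 6D field rather than use the nearest-neighbor, translation-only correction shown here.

#include <limits>
#include <vector>

struct Move3 { double x = 0, y = 0, z = 0; };

struct CalibrationEntry {
    Move3 commanded;   // instruction sent to the arm
    Move3 actual;      // move that actually took place
};

class MotionCalibrationTable {
public:
    // Record the result of a move, as measured by COCODEX_AUTO_SENSING.
    void record(const Move3& commanded, const Move3& actual) {
        table_.push_back({commanded, actual});
    }

    // Adjust a requested move by the error observed at the most similar past command.
    Move3 correct(const Move3& requested) const {
        if (table_.empty()) return requested;
        const CalibrationEntry* best = nullptr;
        double bestDist = std::numeric_limits<double>::max();
        for (const CalibrationEntry& e : table_) {
            const double d = dist2(e.commanded, requested);
            if (d < bestDist) { bestDist = d; best = &e; }
        }
        // Error = what was asked for minus what was obtained; bias the new command the other way.
        Move3 corrected = requested;
        corrected.x += best->commanded.x - best->actual.x;
        corrected.y += best->commanded.y - best->actual.y;
        corrected.z += best->commanded.z - best->actual.z;
        return corrected;
    }

private:
    static double dist2(const Move3& a, const Move3& b) {
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }
    std::vector<CalibrationEntry> table_;
};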
- COMMENT This function is for determining the current position of the display/sensor hardware assembly on the robot arm, as well as predicting future values.
- IF COCODEX has just been powered up
-
- THEN
- Set CONFIDENCE to nil
COMMENT face is not tracked yet.
- PERFORM calibration on power-up and confirm that tracking is accurate
COMMENT There are a variety of means of calibrating, or confirming the calibration of the position and rotation measurements of the COCODEX arm at startup. These include the use of cross-reference between multiple sensor systems as occurs during operation, as described below. But certain techniques are available only at startup. For instance, with many arm designs, the camera array will be able to see the COCODEX base when the robot arm turns it to look in that direction, so that it can see at least one known landmark to confirm calibration in one set of positions (those which make the base visible.)
- QUERY most recent values for Display/sensor assembly Position/Orientation
- COMMENT Multiple means can be employed to determine arm pose. These can include rotation sensors in joints in the arm; various commercially available 3D or 6D tracking sensors using optical, RF, ultrasound, magnetic or other techniques to track components in known locations in the arm, or the use of sensors in the sensor/display assembly to track visual landmarks in the environment. This last option is possible because the TELE-LAYOUT can record a representation of the local environment that was gathered at an earlier time. Established techniques for visual landmark-based tracking can be applied to generate an additional source of data on arm pose.
-
- APPLY conventional Bayesian or other techniques to achieve sensor fusion if more than one sensor subsystem is available
COMMENT This process is foreseen because COCODEX requires accurate measurements of arm pose, but not accurate arm control; and the accuracy of arm control can be low because of cost concerns, therefore the varied sensors of the Display/sensor assembly might be applied to improve the accuracy of pose measurement. - STORE result in ASSEMVARS
- PREDICT near term display/sensor assembly Position/Orientations using Kalman filters or other conventional predictive filtering technique
- STORE result in ASSEMFUTURE
- CHECK UI instrumentation
COMMENT COCODEX can have a number of physical interaction devices attached to the sensor/display assembly. These can include handles to facilitate grabbing, buttons, dials, triggers, and the like. - STORE values in UI-VARS
- COMMENT The usual use of COCODEX is foreseen to be where one, or perhaps a small number of local users are collaborating with a potentially larger number of people at an unbounded number of remote sites. In the special case where there is a minority of remote users and a majority of physically present users, the COCODEX-AS-AVATAR mode can be selected. This corresponds to a recent stream of research demonstrations in which a remote user “pilots” a physical local robot that local human users can interact with as if the remote human user was present in the position of the robot. When the COCODEX-AS-AVATAR mode is turned on, a designated remote user's head is tracked by the COCODEX sensor/display assembly instead of the head of a local user. The COCODEX assembly appears to “look around” with the head motion of the remote user, and with the remote user's face centered in the screen. This effect is described by other researchers who have implemented robotic display devices for this sole purpose. The originality of invention here is not the COCODEX-AS-AVATAR formulation, but the fact that it is available conveniently as an option from a device (COCODEX) that is designed primarily for other uses. Note that the converse is not true. Remote robot devices such as those referred to above are NOT able to function like COCODEX.
-
- CALCULATE the move for the arm that would place the sensor/display assembly in a position and orientation that matches as closely as possible the head position and orientation of the designated remote user (which implies that the assembly would be looking out from the remote user's perspective instead of inwards, towards the IDEAL zone, as would normally be the case)
- CALL FUNCTION MOVE_COCODEX_ARM with results of above calculation
- COMMENT This function prepares the local virtual world for graphical rendering. This can be accomplished using a conventional display-list architecture or similar structure. The subroutines below are in an approximate far-to-near order.
The elements of the TELE-LAYOUT are explained in the comments of this function. Note that while assembling the virtual world and rendering are separate steps in this pseudo-code, it is often more efficient in practice to render elements as they are ready instead of waiting for a single render phase. -
- IF the TELE-LAYOUT includes a simulation of a giant screen for a command control room or another type of wall-sized display
- THEN make sure it's in the display list
COMMENT These elements generally become the effective background of the scene from the user's perspective.
This brings to light another one of COCODEX's strengths. Dedicated display rooms are becoming increasingly common. There are three principal forms: Command/control rooms, in which many displays are present; CAVEs, in which the walls present a surrounding stereoscopic virtual environment; and Display Walls, in which a large image is generated from a tiling of smaller displays.
The disadvantages of dedicated rooms include real estate costs and scheduling bottlenecks. COCODEX can emulate much of the value of a dedicated room display with a portable desktop device that overcomes these problems.
- IF the TELE-LAYOUT includes augmented reality effects
-
- Make sure a calibrated 3D representation of the local physical environment is in the display list
COMMENT In effect the display simulates its own transparency. This is possible when there is data about the physical environment behind the COCODEX display/sensor assembly. This data can be gathered earlier by pointing the assembly in that direction, or there can be extra cameras pointing backwards, which can be additionally used for collision avoidance. The physical background should be rendered correctly to simulate transparency of the display to support an augmented reality effect. An alternative is to incorporate a display that is physically transparent but can convey the computer-generated imagery as an overlay. - IF the TELE-LAYOUT includes representations of the local physical environments at remote locations
- Make sure the remote physical environment is in the display list, according to specifications in the TELE-LAYOUT
COMMENT The areas of transition between the environments of remote collaborators as they appear to the local user must make visual sense. One of the advantages of COCODEX is that it provides correct lines of sight between an arbitrary number of participants in an almost unlimited variety of configurations. At one extreme, a large number of geographically dispersed participants can be organized into an audience looking at a lecturer. The lecturer can look into the audience and not see too much in the way of local environment for each audience member, because of lack of room. At the other extreme, two collaborators can see into each other's local environments with no transitional areas between environments, because each participant can only see one remote environment at a time. The greatest need for transitions will arise when a small number (between 3 and 12) of participants convene in a virtual shared space. Each participant can define whether their local environment as seen by others will include physical elements as captured by COCODEX sensors, virtual elements, or a combination of real and virtual elements. The capturing of the local physical environment in advance or in real time has been well described in earlier Tele-immersion research, as has the use of purely synthetic environments. What is appropriate for COCODEX is dynamic transitional areas, because previous tele-immersion systems imposed fixed geometries on the spatial relationships between collaborators, while COCODEX allows flexibility. The TELE-LAYOUT specifies the transition technique to be used. Some common techniques will be: Placing a virtual wall or partition between adjacent localities to prevent objects in either locality from touching; A blending or fading between localities; An alignment of elements of localities so that they make approximate sense when they are physically adjacent. At a minimum certain horizontal elements, such as tabletops and floors can be aligned, and some wall elements; and certain furniture items can be made to “match,” as in the case of two desk surfaces being merged into one larger desk where both participants are seated.
- IF the TELE-LAYOUT includes simulations of conventional 2D displays within the 3D virtual environment
- Make sure they are in the display list
COMMENT For instance, if a conventional computer (showing a 2D display with a web browser, for instance) is included in the TELE-LAYOUT, that display will be implemented as an animated texture mapped on the geometry of the virtual 2D display. Whenever the COCODEX display is brought into alignment with a virtual 2D display within the TELE-LAYOUT, the physical COCODEX display becomes an emulation of that 2D display.
- IF the TELE-LAYOUT includes 3D objects or data
- Make sure corresponding elements are in the display list
- IF the local user is grabbing the assembly
- THEN render the cut-plane through 3D objects or data as an enhanced 2D image aligned on the COCODEX display surface, where the assembly intersects a 3D object.
COMMENT The enhanced cut-plane rendering is optional. An example of such a rendering is that the transparency, brightness, or saturation of the cut-plane can be modified. It is sometimes desirable for the cut-plane to be transparent to enhance clarity of the user's sense of 6D placement in the 3D scene. The 3D components are then visible both in front of and behind the cut-plane. An opaque cut-plane can also be chosen without any 3D data visible in front of or behind it. Medical professionals generally make use of both modes of operation. A simple way to toggle between them in COCODEX is to grab and let go of the assembly, or make use of the UI-VARS to interpolate the two modes.
One of COCODEX's benefits is the physical manipulation of a 6D cut-plane through volumetric data. This is particularly useful for medical and certain scientific data. A long-standing problem in medical imaging is the difficulty of interpreting cut-plane imagery when the cut-plane can be rotated in arbitrary ways. By using physical manipulation of the display to change the orientation of a cut-plane, users will not have to rely on mental rotation (which most people find very difficult) to interpret the results.
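A rough sketch of extracting such a cut-plane follows. It assumes the arm reports the display's corner position and in-plane axes as numpy vectors, and it uses nearest-neighbor sampling purely for brevity; these are simplifying assumptions rather than the specified method.

```python
import numpy as np

def sample_cut_plane(volume, voxel_size, plane_origin, plane_u, plane_v,
                     width, height, resolution=256):
    """Extract a 2D cut-plane image from volumetric data along the plane of
    the display, given the display pose reported by the arm.

    volume       -- 3D numpy array of scalar data (e.g. a medical scan)
    voxel_size   -- edge length of one voxel in world units
    plane_origin -- 3-vector, world position of the display's lower-left corner
    plane_u/_v   -- unit vectors along the display's width and height
    width/height -- physical size of the display in world units
    Returns a (resolution, resolution) image; samples outside the volume are 0.
    """
    u = np.linspace(0, width, resolution)
    v = np.linspace(0, height, resolution)
    uu, vv = np.meshgrid(u, v)
    # World position of every pixel on the display surface.
    pts = (plane_origin
           + uu[..., None] * plane_u
           + vv[..., None] * plane_v)
    idx = np.round(pts / voxel_size).astype(int)      # nearest voxel indices
    image = np.zeros((resolution, resolution), dtype=volume.dtype)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    ii = idx[inside]
    image[inside] = volume[ii[:, 0], ii[:, 1], ii[:, 2]]
    return image
```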
- IF the TELE-LAYOUT includes remote participants
- FOR EACH remote participant
- CALL FUNCTION ASSEMBLE_COMPOUND_PORTRAIT
- IF a remote user is grabbing his or her sensor/display assembly
- Make sure the display list contains a representation of the location of the remote display frame and the average (of the two eyes) viewing frustum for that user.
COMMENT This is another interesting quality of COCODEX: it is easy to design user interface elements which indicate interest and activity of users to each other. One user can see where another's display is while grabbed, facilitating joint exploration of data.
- IF user interface actions are undertaken by a remote user whose display position is being displayed
- Make sure the display list contains representations of them, as defined by a given application or operating software for COCODEX
COMMENT For instance, the frame of the remote frustum will appear to brighten for a moment if the corresponding remote user clicks on a button in the user interface of the assembly. The frustum will also appear to cast momentary light on objects in the environment during operations on them.
- IF the TELE-LAYOUT includes a local virtual mirror
- Make sure the display list contains a mirror with a view of the user and local environment that reflects the data being sent to remote sites.
- IF local user is already engaged in a tele-immersion session with remote participants
- THEN
- IF ANOTHER station is serving as PREDICTIVE HUB for session
COMMENT Since there are significant unavoidable latencies between stations distributed over large geographic distances, a station situated roughly in between the other stations will in some cases be in the best position to receive the most recent updates from each locality and make informed predictions of near-future interactions in the shared world. This station, whether or not a local user is present, will be designated the PREDICTIVE HUB.
- THEN
- STREAM local data to HUB
COMMENT This includes almost all data mentioned in this pseudocode, though tremendous bandwidth can be saved by not sending unchanging data, which includes stationary elements in the local environment, like furniture.
The many streams of data are organized according to priority for low latency. The global variables above, the audio stream, and the portions of the Compound Portraits that are deemed high priority are the most latency-sensitive streams.
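A minimal sketch of this priority ordering, with placeholder stream names, priority numbers, and a stand-in transport callback, might look like the following.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class OutgoingStream:
    priority: int                 # 0 = most latency-sensitive (UI vars, audio, key face zones)
    name: str = field(compare=False)
    changed: bool = field(compare=False, default=True)
    payload: bytes = field(compare=False, default=b"")

def send_to_hub(streams, send: Callable[[str, bytes], None]):
    """Send local data to the PREDICTIVE HUB in priority order, skipping
    streams whose content has not changed (e.g. stationary furniture).
    'send' stands in for whatever transport a real station would use.
    """
    for stream in sorted(streams):            # lowest priority value goes first
        if stream.changed:
            send(stream.name, stream.payload)

# Example: latency-critical streams go out before bulk environment data.
streams = [
    OutgoingStream(0, "ui-vars"),
    OutgoingStream(0, "audio"),
    OutgoingStream(1, "high-priority-face-zones"),
    OutgoingStream(2, "wraparound-head-texture"),
    OutgoingStream(3, "local-environment", changed=False),  # unchanged, not re-sent
]
send_to_hub(streams, lambda name, data: print("sending", name))
```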
- ADJUST data streams as directed by HUB
COMMENT If HUB requests less data, or indicates an ability to receive more, adjustments can be made to the resolution of medium priority bit maps, wraparound head texture, and other variable streams. (See Compound Portrait functions below for explanation.)
- RECEIVE similar data from HUB for all remote users
COMMENT Data from other users goes through the HUB, which can change the data, since the HUB is charged with detecting collisions and other site-interaction events. For instance, in a virtual baseball game, the HUB computes when a bat hits a ball and sends resulting trajectories to participants.
- ELSE IF local station is functioning as PREDICTIVE HUB for session
- IF TELE-LAYOUT is already selected
- MERGE data from local and all remote stations
- CALCULATE potential collisions or other interactions between components of the scene in the predictive data from all sites
- REPORT potential interactions back to sites as needed by application
COMMENT This is the fastest way to detect and report interactions.
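As a hedged illustration of the hub's interaction detection, the following sketch merges object states contributed by different sites and reports coarse bounding-sphere collisions between them. The names, the cross-site restriction, and the bounding-sphere test are simplifying assumptions; a real hub would run this on predicted near-future states and follow up with finer tests.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    site: str          # which station contributed the object
    name: str
    x: float
    y: float
    z: float
    radius: float      # bounding-sphere radius used for coarse collision tests

def detect_cross_site_collisions(objects):
    """Report pairs of objects from different sites whose bounding spheres
    intersect in the merged predictive scene data."""
    collisions = []
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            if a.site == b.site:
                continue                      # only cross-site interactions in this sketch
            dist = math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
            if dist <= a.radius + b.radius:
                collisions.append((a, b))
    return collisions

# Example: a virtual bat from one site meeting a virtual ball from another.
merged = [SceneObject("site-A", "bat-tip", 0.0, 0.0, 1.0, 0.05),
          SceneObject("site-B", "ball", 0.02, 0.0, 1.0, 0.04)]
for a, b in detect_cross_site_collisions(merged):
    print(f"report interaction: {a.name} ({a.site}) <-> {b.name} ({b.site})")
```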
- MONITOR latencies for all stations, making use of timestamps
- IF a station displays high latency
- SEND request for smaller data streams
- IF a station displays low latency and is sending minimized data
- SEND request for larger data streams
- ELSE (no TELE-LAYOUT selected)
- CALL FUNCTION SETUP_TELE-LAYOUT
- ELSE IF NO station is serving as PREDICTIVE HUB
- SEND local data to all stations and receive data from all stations
- USE conventional semaphore techniques to negotiate collisions and other interaction events
- ELSE (user is not currently engaged in session with remote collaborators or interlocutors)
- CALL FUNCTION SETUP_TELE-LAYOUT
COMMENT The user interface for such things as starting new sessions, organizing the shared virtual environment, or adjusting one's appearance can either be in a conventional 2D display of a nearby computer, imbedded as a 3D user interface in the 3D COCODEX virtual world, or imbedded in a conventional 2D user interface found as a simulation within the virtual world.
- IF user has chosen to initiate a new multi-user session
- THEN
- All stations should ping each other and the one with the quickest and most reliable access to others becomes the HUB
COMMENT Users usually choose from preset TELE-LAYOUTS which blend their local environments, including desks and so on, into shared arrangements. For instance, one preset places all participants around a round table, while another places one participant at a lectern in front of an audience containing the others. One advantage of COCODEX is that it doesn't impose a scheme on the relative placement of participants in the virtual space. A TELE-LAYOUT also defines the AREA OF INTEREST. If a user looks away from the AREA OF INTEREST, COCODEX will stop tracking that user so that he or she can observe the local physical environment.
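The hub election step above (stations pinging each other and choosing the one with the quickest, most reliable access to the others) might be sketched as follows; the scoring formula and sample numbers are illustrative assumptions only.

```python
import statistics

def choose_predictive_hub(ping_samples):
    """Choose the PREDICTIVE HUB from round-trip-time samples.

    ping_samples maps each candidate station to a list of round-trip times
    (in ms) it measured to the other stations. The station with the lowest
    combination of mean latency and jitter is elected; the weighting here is
    arbitrary and only illustrative.
    """
    def score(samples):
        mean = statistics.mean(samples)
        jitter = statistics.pstdev(samples)      # reliability proxy
        return mean + 2.0 * jitter
    return min(ping_samples, key=lambda station: score(ping_samples[station]))

# Example with three stations; station "B" sits roughly between the others.
samples = {
    "A": [40, 42, 160, 165],      # A's round trips to B and C
    "B": [40, 41, 90, 95],        # B's round trips to A and C
    "C": [160, 162, 91, 92],      # C's round trips to A and B
}
print("PREDICTIVE HUB:", choose_predictive_hub(samples))
```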
- CALL FUNCTION SETUP_TELE-LAYOUT
- COMMENT: The function PREPARE_COMPOUND_PORTRAITURE is for preparing data to support visual display of the local user's face and other local elements both for remote collaborators and locally in a virtual mirror
This pseudocode describes one particular technique of user rendering, called “Compound Portraiture,” but while this choice is an aspect of this invention, and ideal for COCODEX, other user rendering strategies suitable for tele-immersion can be chosen instead.
Note that a corresponding data set for hands or other objects can hypothetically be defined, with corresponding similar control software throughout. The hand presents special challenges because portions of fingers can be obscured more often than portions of faces. This pseudocode will not address these special challenges.
- LOCAL DATA structures for compound portraiture:
- Streaming graph of textures, with each streaming texture associated with a point on the facial features prototype
- Highest priority facial zones are tied to small high resolution images (Examples of the highest priority facial zones include the corners of the mouth and eyes)
- Medium priority facial zones are tied to larger medium resolution images (Examples of the medium priority facial zones include the brow and nostrils. The choice of which feature should be considered high or medium priority will vary with implementations, according to the performance of available network resources. In an ideal situation with excellent network resources, the entire face can be treated as Highest Priority.)
- A wraparound streaming head texture of variable resolution, depending on network performance.
- A streaming 3D graph of facial feature points, including one or more predictive sets of points
- A streaming set of textures associated with peripheral elements of the user's head such as large hairdos or hats.
- The ORTHO-HALO, a set of orthogonal ring-shaped virtual objects surrounding the head, serving as projection surfaces for large items that surround the head but are not modeled accurately, such as large hairstyles or hats.
COMMENT All of the above are time-stamped.
- (End Definitions of Local Data)
- BEGIN
- GATHER highest available resolution image data from key points on face.
- FOR each of the highest priority facial zones
- Determine, using conventional trigonometry, how centered each camera was on top of the zone in the most recent image gathering cycle (excluding cameras that were occluded)
- IF a single camera was better positioned than others, select a portion of the image around the feature
- ELSE if two or more cameras were equally centered on top of a zone, use conventional image-based rendering techniques to merge a portion of each image around the feature into a single image of the feature
- FOR each of the medium priority facial zones, do the same as above, but use conventional image sampling techniques to reduce the resolution of the images of the features
COMMENT This resolution reduction is in anticipation of a need to reduce bandwidth.
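The per-zone camera selection described in the steps above can be sketched as follows; the angular-centering test and the tie threshold are illustrative assumptions rather than the specified method.

```python
import numpy as np

def pick_camera_for_zone(zone_pos, cameras, occluded):
    """Choose which camera's image to use for one high-priority facial zone.

    zone_pos -- 3-vector (numpy array), estimated 3D position of the facial zone
    cameras  -- list of (position, optical_axis) numpy-array pairs, axis unit-length
    occluded -- set of camera indices whose view of the zone is blocked
    Returns the index of the best-centered camera, or the two best if they are
    nearly tied (the caller would then merge the two views with image-based
    rendering).
    """
    angles = []
    for idx, (pos, axis) in enumerate(cameras):
        if idx in occluded:
            continue
        to_zone = zone_pos - pos
        to_zone = to_zone / np.linalg.norm(to_zone)
        # Smaller angle between the optical axis and the direction to the zone
        # means the zone sits closer to the center of that camera's image.
        angles.append((np.arccos(np.clip(axis @ to_zone, -1.0, 1.0)), idx))
    angles.sort()
    best_angle, best_idx = angles[0]
    if len(angles) > 1 and abs(angles[1][0] - best_angle) < np.radians(2.0):
        return [best_idx, angles[1][1]]       # nearly tied: merge both views
    return [best_idx]
```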
- USE conventional image-based techniques to create a wraparound texture of the user's head.
- USE conventional techniques such as image sequence subtraction to find elements of the scene that are moving with the head that lie outside of the area corresponding to the facial feature model. (These will be used as textures for the "ortho-halo".)
FUNCTION ASSEMBLE_COMPOUND_PORTRAIT
COMMENT This function assembles a representation of a remote participant from asynchronous streaming data as gathered by an instance of PREPARE_COMPOUND_PORTRAIT running on the remote participant's COCODEX.
- DISTORT a FACE-PROTOTYPE according to the time-matched values of the FACEPOSEFUTURE stream for that user
COMMENT Facial pose for remote participants is being predicted in order to reduce apparent latency in the visual channel.
- WRAP the wraparound texture on the distorted wireframe head
- BLEND in higher resolution streaming textures for the high and medium priority areas of the face, in the locations of the corresponding facial feature points
- ADD stylistic elements, such as shinier reflectance for eyeballs or lips.
- ADD "ortho-halo" element to handle large hair or hats
- Use image based techniques to render them approximately as they would be seen from the local viewer's angle of view
- Use transparency to make the boundaries between these elements and the larger environment ambiguous.
- ADD conventional image based or volumetric techniques to render torso, arms, or other visible parts of participants at correct viewing angle for local user.
- APPLY optional modifications that the local user might have chosen, such as virtual makeup, jewelry, and so on.
- APPLY filters, lighting, and other established techniques to soften portions of remote users that are less reliably rendered
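The FACEPOSEFUTURE prediction referenced at the start of this function can be illustrated with a simple linear extrapolation; a real implementation might use a more sophisticated predictor, and all names here are assumptions.

```python
import numpy as np

def predict_face_pose(feature_history, timestamps, lookahead):
    """Linearly extrapolate streamed facial feature points to a slightly
    future time so the remote portrait appears with less perceived latency.

    feature_history -- list of (N, 3) arrays of feature points, most recent last
    timestamps      -- matching capture times in seconds
    lookahead       -- how far ahead to predict, e.g. the measured network delay
    Returns an (N, 3) array standing in for one entry of the FACEPOSEFUTURE
    stream.
    """
    p0, p1 = feature_history[-2], feature_history[-1]
    t0, t1 = timestamps[-2], timestamps[-1]
    velocity = (p1 - p0) / (t1 - t0)          # per-feature velocity estimate
    return p1 + velocity * lookahead
```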
- IF UI-VARS indicate a modification to the viewing perspective
- THEN modify perspective accordingly in all following steps
COMMENT While the common mode of use of COCODEX is as a window into a 1-1 scaled virtual world, it is sometimes desirable to modify the viewing perspective. For instance, a spring-loaded macro/micro select lever attached to the assembly has no effect if it is not touched. As the lever is moved by the user's touch towards the macro position, the perspective of the virtual scene shown in the COCODEX display becomes wider and the position of the virtual head (from which points of view are derived) moves to being on a surrounding virtual sphere, looking into the scene. The further the lever is moved, the larger the reference sphere becomes. In this way a user can grab the assembly and move it to explore points of view on the virtual scene from an exterior perspective. In a similar way, moving the lever towards the micro direction magnifies the scene. In this case, rotating the screen moves the point-of-view among virtual inward-looking points of view on the surface of a sphere, as before, but with the effect of changing the power of a microscope lens as the sphere changes size; and moving the display position changes the virtual position of the center of the sphere. The position of the point-of-view can be adjusted in a way that includes momentum and acceleration by a different button or control, without any micro- or macro-zoom component, in which case physically moving the COCODEX window can have the effect of causing the point of view to race around within the virtual space, as if it were the windshield of a racecar. Turning the display turns the virtual racecar and moving the display forward and back applies forward and reverse power.
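A toy mapping from the lever position to an exterior viewing sphere and scene magnification is sketched below; the exponential mapping and the constants are arbitrary illustrative choices, not the specified behavior.

```python
import math

def lever_to_view_params(lever, base_radius=0.5, max_factor=20.0):
    """Map the spring-loaded macro/micro lever to exterior-view parameters.

    lever in [-1, 1]: 0 = untouched (ordinary 1:1 window), positive = macro
    (the viewpoint moves outward onto a growing reference sphere looking in),
    negative = micro (the scene is magnified, like raising microscope power).
    Returns (sphere_radius, scene_scale).
    """
    if lever >= 0:                     # macro: widen the view
        sphere_radius = base_radius * (1.0 + (max_factor - 1.0) * lever)
        scene_scale = 1.0
    else:                              # micro: magnify the scene
        sphere_radius = base_radius
        scene_scale = math.exp(-lever * math.log(max_factor))  # up to max_factor x
    return sphere_radius, scene_scale

print(lever_to_view_params(0.0))   # untouched lever: 1:1 window unchanged
print(lever_to_view_params(1.0))   # fully macro: large surrounding sphere
print(lever_to_view_params(-1.0))  # fully micro: 20x magnification
```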
- IF the display hardware is autostereo with two perspective viewing zones, such as a dual exit pupil display, or lenticular or parallax barrier displays
- Adjust the IDEAL viewing zone to be sufficiently small, so that the COCODEX arm will keep the transition between viewing zones effectively placed between the user's eyes
- IF a dual exit pupil display is used
- Anti-distort scene using conventional techniques to compensate for changing positions of optical components due to arm motion
COMMENT Dual perspective autostereo has traditionally required that users reduce head motion, and COCODEX offers a means around that limitation. In particular, dual exit pupil designs are made compact without restricting head motion
- ELSE IF display has >2 perspective zones, such as certain lenticular or parallax barrier displays
COMMENT There is available art about combining eye tracking with multiperspective autostereo. One advantage of COCODEX, however, is that you can make sure a user's eyes won't fall into undefined or pseudoscopic zones because you can move the display to avoid that orientation.
Note that the pseudocode below applies equally well to a single local user or a small number of local users, when there are enough viewing zones to give each eye for each user a unique view.
- FOR each eye
- Determine which viewing zone perspective is visible to the eye
- Render scene for that perspective as viewed from the precise eye position
- If an eye is predicted (by FACEFUTURE) to be about to cross into another viewing perspective
- Gradually (but fast enough to anticipate the crossing) bring the 6D rendering alignment of the adjacent perspective zone into alignment so that the user will not perceive a transition as the eye crosses between perspective zones
- Gradually let the 6D rendering alignment of the previously seen perspective drift back to a centered position.
- Gradual motion is to avoid visible “jumping” motion artifacts
- IF two eyes are projected to move into the same viewing zone, calculate how to move display to avoid the problem and do so.
- IF an eye is projected to move into an undefined, pseudoscopic, or otherwise undesirable or illegal viewing zone, calculate how to move the sensor/display assembly to avoid the problem and do so.
COMMENT If there is only one local user (two eyes), then these are not difficult calculations. The display is simply moved in order to bring the position of the eyes into separate legal viewing zones. If there are multiple local users, the calculation becomes more difficult. The assembly is moved so that the IDEAL zone containing all eyes is kept between illegal viewing zones, but there is also the possibility that local users are placed so that each eye sees a distinct legal view while an illegal zone lies between them.
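The per-eye zone assignment and anticipatory re-alignment described above can be sketched as follows, assuming equal-width viewing zones and a single lateral coordinate per eye; both are simplifications for illustration.

```python
def zone_index(eye_x, zone_width, num_zones):
    """Which viewing zone an eye at lateral offset eye_x (relative to the
    display axis) currently sees, assuming equal-width zones centered on the
    display (a simplification)."""
    idx = int((eye_x + num_zones * zone_width / 2) // zone_width)
    return max(0, min(num_zones - 1, idx))

def plan_zone_renderings(eye_now, eye_future, zone_width, num_zones, blend=0.3):
    """Decide the rendering alignment for the current viewing zone and, if a
    crossing is predicted by face tracking, for the adjacent zone as well.

    Returns a dict mapping zone index -> eye position to render that zone for.
    When a crossing is predicted, the adjacent zone is pre-aligned toward the
    predicted eye position so no jump is visible at the moment of crossing.
    """
    current = zone_index(eye_now, zone_width, num_zones)
    plan = {current: eye_now}
    future = zone_index(eye_future, zone_width, num_zones)
    if future != current:
        # 'blend' controls how gradually the adjacent zone's alignment moves
        # toward the predicted crossing point (illustrative value).
        plan[future] = (1.0 - blend) * eye_future + blend * eye_now
    return plan

# Example: an eye drifting toward the next zone to its right.
print(plan_zone_renderings(eye_now=0.028, eye_future=0.034,
                           zone_width=0.032, num_zones=8))
```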
- IF despite attempts to avoid problems, two eyes share a perspective for a period of time
- THEN gradually adjust the rendering of that perspective to be at the average of the ideal for the two eyes while the problem persists
COMMENT Because 3D eye position is known, it's possible to build a 3D calibration table for a particular lenticular display to correct for subpixel alignment problems.
- ELSE IF the display is conventional (no autostereo)
- Render from either dominant eye or mid-head perspective; user's choice
- COMMENT There is extensive work in 3D sound gathering and presentation, so COCODEX will have no shortage of audio subsystems that can be used. Therefore, this function will be simple.
- ISOLATE local User Voice
- CALL commercially available full duplex audio telephone subsystem to send voice to remote users
- PLACE sounds of remote users in spatially correct locations using means present in audio subsystem
- GATHER environmental sounds with microphone array
- STREAM to other users
- RENDER environmental sounds from other users with speaker array or binaural techniques for extreme near field stereo speakers
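As an illustration of spatially correct placement with near-field stereo speakers, the following sketch derives constant-power pan gains from a remote participant's position in the TELE-LAYOUT. A real system would delegate this to the chosen audio subsystem; the 2D geometry and names here are assumptions.

```python
import math

def stereo_gains_for_remote_voice(user_pos, user_facing, remote_pos):
    """Place a remote participant's voice at the spatially correct direction
    using simple constant-power stereo panning.

    user_pos, remote_pos -- (x, y) positions in the shared TELE-LAYOUT
    user_facing          -- user's facing direction in radians
    Returns (left_gain, right_gain).
    """
    dx = remote_pos[0] - user_pos[0]
    dy = remote_pos[1] - user_pos[1]
    azimuth = user_facing - math.atan2(dy, dx)          # positive = source to the user's right
    # Map azimuth to a pan value in [-1 (left), +1 (right)].
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    theta = (pan + 1.0) * math.pi / 4.0                 # constant-power pan law
    return math.cos(theta), math.sin(theta)

# A participant seated to the local user's right comes mostly from the right speaker.
print(stereo_gains_for_remote_voice((0, 0), math.pi / 2, (1, 1)))
```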
- COMMENT COCODEX requires a user interface to set up TELE-LAYOUTS, initiate and end calls, and perform the usual functions of a personal telecommunications or information processing tool. There is no requirement that these functions be performed exclusively with the use of COCODEX, however. All these functions can be performed on a conventional computer placed on the desk next to COCODEX, or simulated within a COCODEX TELE-LAYOUT. Existing virtual world design tools and 3D modeling products already provide the editing and visualization capabilities required, and must be extended to link with the variables defined above in order to provide output useful for this invention. Available tools are extensible to provide these links.
Claims (14)
1. A system, comprising: multiple input/output systems coupled together to provide a view of a common scene from perspectives of each of the systems, each system comprising:
a display/sensor assembly presenting the view to a viewer and sensing a user position and user viewpoint;
a robotic arm coupled to the assembly and providing display position and orientation information; and
a computer determining the view responsive to the user position and viewpoint, producing a display responsive to the position and viewpoint, comparing the user position to position range limits and producing robot motion control information to keep the user position within the range limits, the robotic arm moving and orienting the assembly responsive to the motion control information.
2. A system as recited in claim 1 , wherein each assembly includes a video sensor array capturing a multiple view image of a first user and the system displays the image of the first user via the assembly of a second user.
3. A system as recited in claim 2 , wherein the image displayed via the assembly of the second user comprises a compound portraiture of the face of the first user.
4. A system as recited in claim 1 , wherein each assembly includes a sound sensor array and a speaker array and said system captures a sound of a first user via the sound sensor array and projects the sound of the first user to a second user via the speaker array.
5. A system as recited in claim 1 , wherein the assembly can be moved by a hand of a user to a manual position and the computer adjusts the view of the common scene responsive to the manual position.
6. A system as recited in claim 1 , wherein the view of the common scene includes a cut plane view of objects in the scene.
7. A system as recited in claim 1 , wherein the view of the common scene comprises an autostereo three-dimensional view.
8. A system as recited in claim 1 , further comprising a full duplex communication system connecting the input/output systems.
9. A system as recited in claim 1 , wherein the arm is hollow and the view is projected through the arm.
10. An input/output interface, comprising:
a display providing a three dimensional view of a scene;
speakers attached to the display and providing a stereo sound;
tracking sensors attached to the display and tracking viewer head motion and eye position;
sound sensors attached to the display and detecting sound direction;
a handle attached to the display and allowing a user to control position and orientation of the display; and
an I/O control interface attached to the handle.
11. A process, comprising:
sensing a position of a user relative to a virtual scene; and
adjusting a view into the virtual scene responsive to the position using a computer.
12. A system, comprising:
an autostereo display;
a mechanical arm coupled to the display and providing display position and orientation information; and
a computer determining autostereo views responsive to the display position and viewpoint.
13. A system, comprising:
a display/sensor assembly presenting a view to a viewer and sensing a user position and user viewpoint;
a robotic arm coupled to the assembly and providing display position and orientation information; and
a computer determining the view responsive to the user position and viewpoint, producing a display responsive to the position and viewpoint, comparing the user position to position range limits of sensor and display components and producing robot motion control information to keep the user position within the range limits, the robotic arm moving and orienting the assembly responsive to the motion control information.
14. A system, comprising:
multiple input/output systems coupled together to provide a view of a common scene from perspectives of each of the systems, each system comprising:
a display/sensor assembly presenting the view to a viewer and sensing a user;
a mechanical arm coupled to the assembly and providing display position and orientation information; and
a computer determining the view responsive to the display position and orientation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/604,211 US20100039380A1 (en) | 2004-10-25 | 2009-10-22 | Movable Audio/Video Communication Interface System |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US62108504P | 2004-10-25 | 2004-10-25 | |
US11/255,920 US7626569B2 (en) | 2004-10-25 | 2005-10-24 | Movable audio/video communication interface system |
US12/604,211 US20100039380A1 (en) | 2004-10-25 | 2009-10-22 | Movable Audio/Video Communication Interface System |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/255,920 Continuation US7626569B2 (en) | 2004-10-25 | 2005-10-24 | Movable audio/video communication interface system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100039380A1 true US20100039380A1 (en) | 2010-02-18 |
Family
ID=36573616
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/255,920 Expired - Fee Related US7626569B2 (en) | 2004-10-25 | 2005-10-24 | Movable audio/video communication interface system |
US12/604,211 Abandoned US20100039380A1 (en) | 2004-10-25 | 2009-10-22 | Movable Audio/Video Communication Interface System |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/255,920 Expired - Fee Related US7626569B2 (en) | 2004-10-25 | 2005-10-24 | Movable audio/video communication interface system |
Country Status (1)
Country | Link |
---|---|
US (2) | US7626569B2 (en) |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080001866A1 (en) * | 2006-06-28 | 2008-01-03 | Martin Michael M | Control Display Positioning System |
US20090051699A1 (en) * | 2007-08-24 | 2009-02-26 | Videa, Llc | Perspective altering display system |
US20090172557A1 (en) * | 2008-01-02 | 2009-07-02 | International Business Machines Corporation | Gui screen sharing between real pcs in the real world and virtual pcs in the virtual world |
US20100079405A1 (en) * | 2008-09-30 | 2010-04-01 | Jeffrey Traer Bernstein | Touch Screen Device, Method, and Graphical User Interface for Moving On-Screen Objects Without Using a Cursor |
US20100239121A1 (en) * | 2007-07-18 | 2010-09-23 | Metaio Gmbh | Method and system for ascertaining the position and orientation of a camera relative to a real object |
US20110084983A1 (en) * | 2009-09-29 | 2011-04-14 | Wavelength & Resonance LLC | Systems and Methods for Interaction With a Virtual Environment |
US20110087350A1 (en) * | 2009-10-08 | 2011-04-14 | 3D M.T.P. Ltd | Methods and system for enabling printing three-dimensional object models |
WO2011101818A1 (en) * | 2010-02-21 | 2011-08-25 | Rafael Advanced Defense Systems Ltd. | Method and system for sequential viewing of two video streams |
WO2012011893A1 (en) * | 2010-07-20 | 2012-01-26 | Empire Technology Development Llc | Augmented reality proximity sensing |
US20120038738A1 (en) * | 2010-08-12 | 2012-02-16 | Alcatel-Lucent Usa, Incorporated | Gaze correcting apparatus, a method of videoconferencing and a videoconferencing system |
US20120075166A1 (en) * | 2010-09-29 | 2012-03-29 | Samsung Electronics Co. Ltd. | Actuated adaptive display systems |
WO2012047905A2 (en) * | 2010-10-04 | 2012-04-12 | Wavelength & Resonance Llc, Dba Oooii | Head and arm detection for virtual immersion systems and methods |
US20120098931A1 (en) * | 2010-10-26 | 2012-04-26 | Sony Corporation | 3d motion picture adaption system |
US20120127325A1 (en) * | 2010-11-23 | 2012-05-24 | Inventec Corporation | Web Camera Device and Operating Method thereof |
US20120223952A1 (en) * | 2011-03-01 | 2012-09-06 | Sony Computer Entertainment Inc. | Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof. |
DE102011015136A1 (en) * | 2011-03-25 | 2012-09-27 | Institut für Rundfunktechnik GmbH | Apparatus and method for determining a representation of digital objects in a three-dimensional presentation space |
US20120281114A1 (en) * | 2011-05-03 | 2012-11-08 | Ivi Media Llc | System, method and apparatus for providing an adaptive media experience |
US8421844B2 (en) | 2010-08-13 | 2013-04-16 | Alcatel Lucent | Apparatus for correcting gaze, a method of videoconferencing and a system therefor |
US8496218B2 (en) | 2011-06-08 | 2013-07-30 | Alcon Research, Ltd. | Display monitor guide |
US20140176424A1 (en) * | 2012-12-24 | 2014-06-26 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for adjustting display screen |
US20140180684A1 (en) * | 2012-12-20 | 2014-06-26 | Strubwerks, LLC | Systems, Methods, and Apparatus for Assigning Three-Dimensional Spatial Data to Sounds and Audio Files |
US20150009098A1 (en) * | 2011-02-03 | 2015-01-08 | Echostar Technologies L.L.C. | Apparatus, systems and methods for presenting displayed image information of a mobile media device on a large display and control of the mobile media device therefrom |
US20150085076A1 (en) * | 2013-09-24 | 2015-03-26 | Amazon Techologies, Inc. | Approaches for simulating three-dimensional views |
US20150153865A1 (en) * | 2008-12-08 | 2015-06-04 | Apple Inc. | Selective input signal rejection and modification |
US20150205994A1 (en) * | 2014-01-22 | 2015-07-23 | Samsung Electronics Co., Ltd. | Smart watch and control method thereof |
WO2015142956A1 (en) * | 2014-03-17 | 2015-09-24 | Intuitive Surgical Operations, Inc. | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
CN105163063A (en) * | 2015-06-23 | 2015-12-16 | 中山明杰自动化科技有限公司 | Machine image processing system |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9437038B1 (en) | 2013-09-26 | 2016-09-06 | Amazon Technologies, Inc. | Simulating three-dimensional views using depth relationships among planes of content |
US9604361B2 (en) | 2014-02-05 | 2017-03-28 | Abb Schweiz Ag | System and method for defining motions of a plurality of robots cooperatively performing a show |
US9891732B2 (en) | 2008-01-04 | 2018-02-13 | Apple Inc. | Selective rejection of touch contacts in an edge region of a touch surface |
US10254544B1 (en) * | 2015-05-13 | 2019-04-09 | Rockwell Collins, Inc. | Head tracking accuracy and reducing latency in dynamic environments |
CN109807892A (en) * | 2019-02-19 | 2019-05-28 | 宁波凯德科技服务有限公司 | A kind of Automobile Welding robot motion planning model |
DE102018201336A1 (en) * | 2018-01-29 | 2019-08-01 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Virtual Reality conference system |
US10388225B2 (en) * | 2016-09-30 | 2019-08-20 | Lg Display Co., Ltd. | Organic light emitting display device and method of controlling same |
CN110720982A (en) * | 2019-10-29 | 2020-01-24 | 京东方科技集团股份有限公司 | Augmented reality system, control method and device based on augmented reality |
CN111895940A (en) * | 2020-04-26 | 2020-11-06 | 鸿富锦精密电子(成都)有限公司 | Calibration file generation method, system, computer device and storage medium |
WO2021113612A1 (en) * | 2019-12-04 | 2021-06-10 | Black-I Robotics, Inc. | Robotic arm system |
US11326888B2 (en) * | 2018-07-25 | 2022-05-10 | Uatc, Llc | Generation of polar occlusion maps for autonomous vehicles |
US11379060B2 (en) | 2004-08-25 | 2022-07-05 | Apple Inc. | Wide touchpad on a portable computer |
Families Citing this family (210)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8797260B2 (en) | 2002-07-27 | 2014-08-05 | Sony Computer Entertainment Inc. | Inertially trackable hand-held controller |
US7623115B2 (en) * | 2002-07-27 | 2009-11-24 | Sony Computer Entertainment Inc. | Method and apparatus for light input device |
GB0410415D0 (en) * | 2004-05-11 | 2004-06-16 | Bamford Excavators Ltd | Operator display system |
WO2006081198A2 (en) * | 2005-01-25 | 2006-08-03 | The Board Of Trustees Of The University Of Illinois | Compact haptic and augmented virtual reality system |
US8933967B2 (en) * | 2005-07-14 | 2015-01-13 | Charles D. Huston | System and method for creating and sharing an event using a social network |
KR101249988B1 (en) * | 2006-01-27 | 2013-04-01 | 삼성전자주식회사 | Apparatus and method for displaying image according to the position of user |
US8433157B2 (en) * | 2006-05-04 | 2013-04-30 | Thomson Licensing | System and method for three-dimensional object reconstruction from two-dimensional images |
US8488895B2 (en) * | 2006-05-31 | 2013-07-16 | Indiana University Research And Technology Corp. | Laser scanning digital camera with pupil periphery illumination and potential for multiply scattered light imaging |
DE102006031799B3 (en) * | 2006-07-06 | 2008-01-31 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method for autostereoscopic display of image information with adaptation to changes in the head position of the viewer |
EP2069889A2 (en) * | 2006-08-03 | 2009-06-17 | France Telecom | Image capture and haptic input device |
US20080097176A1 (en) * | 2006-09-29 | 2008-04-24 | Doug Music | User interface and identification in a medical device systems and methods |
US8265793B2 (en) * | 2007-03-20 | 2012-09-11 | Irobot Corporation | Mobile robot for telecommunication |
EP1972416B1 (en) * | 2007-03-23 | 2018-04-25 | Honda Research Institute Europe GmbH | Robots with occlusion avoidance functionality |
EP1972415B1 (en) * | 2007-03-23 | 2019-01-02 | Honda Research Institute Europe GmbH | Robots with collision avoidance functionality |
EP1974869A1 (en) * | 2007-03-26 | 2008-10-01 | Honda Research Institute Europe GmbH | Apparatus and method for generating and controlling the motion of a robot |
US7840668B1 (en) * | 2007-05-24 | 2010-11-23 | Avaya Inc. | Method and apparatus for managing communication between participants in a virtual environment |
JP4696099B2 (en) * | 2007-08-07 | 2011-06-08 | 日立オムロンターミナルソリューションズ株式会社 | Display image converter |
JP4998156B2 (en) * | 2007-08-30 | 2012-08-15 | ソニー株式会社 | Information presenting system, information presenting apparatus, information presenting method, program, and recording medium recording the program |
JP2009077230A (en) * | 2007-09-21 | 2009-04-09 | Seiko Epson Corp | Image processor, micro computer and electronic equipment |
JP5228716B2 (en) * | 2007-10-04 | 2013-07-03 | 日産自動車株式会社 | Information presentation system |
US8117364B2 (en) * | 2007-11-13 | 2012-02-14 | Microsoft Corporation | Enhanced protocol and architecture for low bandwidth force feedback game controller |
CN101470446B (en) * | 2007-12-27 | 2011-06-08 | 佛山普立华科技有限公司 | Display equipment and method for automatically regulating display direction |
US8384718B2 (en) * | 2008-01-10 | 2013-02-26 | Sony Corporation | System and method for navigating a 3D graphical user interface |
US8327277B2 (en) * | 2008-01-14 | 2012-12-04 | Microsoft Corporation | Techniques to automatically manage overlapping objects |
US10875182B2 (en) * | 2008-03-20 | 2020-12-29 | Teladoc Health, Inc. | Remote presence system mounted to operating room hardware |
US7840638B2 (en) * | 2008-06-27 | 2010-11-23 | Microsoft Corporation | Participant positioning in multimedia conferencing |
US20100057860A1 (en) * | 2008-08-29 | 2010-03-04 | Fry Donna M | Confirmation and acknowledgement of transmission reception |
WO2010025199A1 (en) * | 2008-09-01 | 2010-03-04 | Mitsubishi Digital Electronics America, Inc. | Systems and methods to enhance television viewing |
US8639666B2 (en) * | 2008-09-05 | 2014-01-28 | Cast Group Of Companies Inc. | System and method for real-time environment tracking and coordination |
US8508475B2 (en) * | 2008-10-24 | 2013-08-13 | Microsoft Corporation | User interface elements positioned for display |
US9037468B2 (en) * | 2008-10-27 | 2015-05-19 | Sony Computer Entertainment Inc. | Sound localization for user in motion |
US20100142755A1 (en) * | 2008-11-26 | 2010-06-10 | Perfect Shape Cosmetics, Inc. | Method, System, and Computer Program Product for Providing Cosmetic Application Instructions Using Arc Lines |
KR101590331B1 (en) * | 2009-01-20 | 2016-02-01 | 삼성전자 주식회사 | Mobile display apparatus robot have mobile display apparatus and display method thereof |
KR101496909B1 (en) * | 2009-01-22 | 2015-02-27 | 삼성전자 주식회사 | Robot |
KR101496910B1 (en) * | 2009-01-22 | 2015-02-27 | 삼성전자 주식회사 | Robot |
US8271888B2 (en) * | 2009-01-23 | 2012-09-18 | International Business Machines Corporation | Three-dimensional virtual world accessible for the blind |
US8320588B2 (en) * | 2009-02-10 | 2012-11-27 | Mcpherson Jerome Aby | Microphone mover |
FR2942096B1 (en) * | 2009-02-11 | 2016-09-02 | Arkamys | METHOD FOR POSITIONING A SOUND OBJECT IN A 3D SOUND ENVIRONMENT, AUDIO MEDIUM IMPLEMENTING THE METHOD, AND ASSOCIATED TEST PLATFORM |
US8896527B2 (en) * | 2009-04-07 | 2014-11-25 | Samsung Electronics Co., Ltd. | Multi-resolution pointing system |
US8161398B2 (en) * | 2009-05-08 | 2012-04-17 | International Business Machines Corporation | Assistive group setting management in a virtual world |
US20100295782A1 (en) | 2009-05-21 | 2010-11-25 | Yehuda Binder | System and method for control based on face ore hand gesture detection |
US20100325214A1 (en) * | 2009-06-18 | 2010-12-23 | Microsoft Corporation | Predictive Collaboration |
WO2011058584A1 (en) * | 2009-11-10 | 2011-05-19 | Selex Sistemi Integrati S.P.A. | Avatar-based virtual collaborative assistance |
US8596591B2 (en) * | 2009-11-13 | 2013-12-03 | Ergotron, Inc. | Vertical spring lift systems |
US20110123030A1 (en) * | 2009-11-24 | 2011-05-26 | Sharp Laboratories Of America, Inc. | Dynamic spatial audio zones configuration |
KR101657168B1 (en) * | 2009-12-01 | 2016-09-19 | 삼성전자주식회사 | Display method and apparatus based on user's potion |
US9179106B2 (en) * | 2009-12-28 | 2015-11-03 | Canon Kabushiki Kaisha | Measurement system, image correction method, and computer program |
US8406571B2 (en) * | 2010-02-04 | 2013-03-26 | Yahoo! Inc. | Automatic super-resolution transformation for images |
US20110202845A1 (en) * | 2010-02-17 | 2011-08-18 | Anthony Jon Mountjoy | System and method for generating and distributing three dimensional interactive content |
US8670017B2 (en) | 2010-03-04 | 2014-03-11 | Intouch Technologies, Inc. | Remote presence system including a cart that supports a robot face and an overhead camera |
US9955209B2 (en) * | 2010-04-14 | 2018-04-24 | Alcatel-Lucent Usa Inc. | Immersive viewer, a method of providing scenes on a display and an immersive viewing system |
US8447070B1 (en) * | 2010-04-19 | 2013-05-21 | Amazon Technologies, Inc. | Approaches for device location and communication |
US9294716B2 (en) | 2010-04-30 | 2016-03-22 | Alcatel Lucent | Method and system for controlling an imaging system |
US8788096B1 (en) | 2010-05-17 | 2014-07-22 | Anybots 2.0, Inc. | Self-balancing robot having a shaft-mounted head |
US8963987B2 (en) | 2010-05-27 | 2015-02-24 | Microsoft Corporation | Non-linguistic signal detection and feedback |
US8670018B2 (en) * | 2010-05-27 | 2014-03-11 | Microsoft Corporation | Detecting reactions and providing feedback to an interaction |
CN102713822A (en) * | 2010-06-16 | 2012-10-03 | 松下电器产业株式会社 | Information input device, information input method and programme |
US8537157B2 (en) * | 2010-06-21 | 2013-09-17 | Verizon Patent And Licensing Inc. | Three-dimensional shape user interface for media content delivery systems and methods |
US9132352B1 (en) | 2010-06-24 | 2015-09-15 | Gregory S. Rabin | Interactive system and method for rendering an object |
US9760123B2 (en) * | 2010-08-06 | 2017-09-12 | Dynavox Systems Llc | Speech generation device with a projected display and optical inputs |
KR101729556B1 (en) * | 2010-08-09 | 2017-04-24 | 엘지전자 주식회사 | A system, an apparatus and a method for displaying a 3-dimensional image and an apparatus for tracking a location |
KR101695819B1 (en) * | 2010-08-16 | 2017-01-13 | 엘지전자 주식회사 | A apparatus and a method for displaying a 3-dimensional image |
US8704879B1 (en) | 2010-08-31 | 2014-04-22 | Nintendo Co., Ltd. | Eye tracking enabling 3D viewing on conventional 2D display |
US8730332B2 (en) | 2010-09-29 | 2014-05-20 | Digitaloptics Corporation | Systems and methods for ergonomic measurement |
US8754925B2 (en) | 2010-09-30 | 2014-06-17 | Alcatel Lucent | Audio source locator and tracker, a method of directing a camera to view an audio source and a video conferencing terminal |
CA2720886A1 (en) * | 2010-11-12 | 2012-05-12 | Crosswing Inc. | Customizable virtual presence system |
KR20120053587A (en) * | 2010-11-18 | 2012-05-29 | 삼성전자주식회사 | Display apparatus and sound control method of the same |
US20120154511A1 (en) * | 2010-12-20 | 2012-06-21 | Shi-Ping Hsu | Systems and methods for providing geographically distributed creative design |
US8930019B2 (en) * | 2010-12-30 | 2015-01-06 | Irobot Corporation | Mobile human interface robot |
US8902156B2 (en) * | 2011-01-14 | 2014-12-02 | International Business Machines Corporation | Intelligent real-time display selection in a multi-display computer system |
US20120192088A1 (en) * | 2011-01-20 | 2012-07-26 | Avaya Inc. | Method and system for physical mapping in a virtual world |
US9001029B2 (en) * | 2011-02-15 | 2015-04-07 | Basf Se | Detector for optically detecting at least one object |
US8451344B1 (en) * | 2011-03-24 | 2013-05-28 | Amazon Technologies, Inc. | Electronic devices with side viewing capability |
JP5785753B2 (en) * | 2011-03-25 | 2015-09-30 | 京セラ株式会社 | Electronic device, control method, and control program |
JP5766479B2 (en) * | 2011-03-25 | 2015-08-19 | 京セラ株式会社 | Electronic device, control method, and control program |
US8913005B2 (en) | 2011-04-08 | 2014-12-16 | Fotonation Limited | Methods and systems for ergonomic feedback using an image analysis module |
WO2012140525A1 (en) * | 2011-04-12 | 2012-10-18 | International Business Machines Corporation | Translating user interface sounds into 3d audio space |
KR101806891B1 (en) * | 2011-04-12 | 2017-12-08 | 엘지전자 주식회사 | Mobile terminal and control method for mobile terminal |
KR101859099B1 (en) * | 2011-05-31 | 2018-06-28 | 엘지전자 주식회사 | Mobile device and control method for the same |
EP2716029A1 (en) * | 2011-05-31 | 2014-04-09 | Promptcam Limited | Apparatus and method |
JP5880916B2 (en) | 2011-06-03 | 2016-03-09 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US9032042B2 (en) | 2011-06-27 | 2015-05-12 | Microsoft Technology Licensing, Llc | Audio presentation of condensed spatial contextual information |
US8885882B1 (en) | 2011-07-14 | 2014-11-11 | The Research Foundation For The State University Of New York | Real time eye tracking for human computer interaction |
US10585472B2 (en) | 2011-08-12 | 2020-03-10 | Sony Interactive Entertainment Inc. | Wireless head mounted display with differential rendering and sound localization |
US10209771B2 (en) | 2016-09-30 | 2019-02-19 | Sony Interactive Entertainment Inc. | Predictive RF beamforming for head mounted display |
DE102011112617A1 (en) * | 2011-09-08 | 2013-03-14 | Eads Deutschland Gmbh | Cooperative 3D workplace |
US9766441B2 (en) * | 2011-09-22 | 2017-09-19 | Digital Surgicals Pte. Ltd. | Surgical stereo vision systems and methods for microsurgery |
US9008487B2 (en) | 2011-12-06 | 2015-04-14 | Alcatel Lucent | Spatial bookmarking |
US8958569B2 (en) | 2011-12-17 | 2015-02-17 | Microsoft Technology Licensing, Llc | Selective spatial audio communication |
US9225891B2 (en) | 2012-02-09 | 2015-12-29 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling display apparatus thereof |
US9299118B1 (en) * | 2012-04-18 | 2016-03-29 | The Boeing Company | Method and apparatus for inspecting countersinks using composite images from different light sources |
US8965576B2 (en) | 2012-06-21 | 2015-02-24 | Rethink Robotics, Inc. | User interfaces for robot training |
US10176635B2 (en) | 2012-06-28 | 2019-01-08 | Microsoft Technology Licensing, Llc | Saving augmented realities |
EP2685732A1 (en) * | 2012-07-12 | 2014-01-15 | ESSILOR INTERNATIONAL (Compagnie Générale d'Optique) | Stereoscopic pictures generation |
US20140024889A1 (en) * | 2012-07-17 | 2014-01-23 | Wilkes University | Gaze Contingent Control System for a Robotic Laparoscope Holder |
JP5964190B2 (en) * | 2012-09-27 | 2016-08-03 | 京セラ株式会社 | Terminal device |
CA2886910A1 (en) * | 2012-10-01 | 2014-04-10 | Ilya Polyakov | Robotic stand and systems and methods for controlling the stand during videoconference |
WO2014059374A1 (en) * | 2012-10-11 | 2014-04-17 | Imsi Design, Llc | Method for fine-tuning the physical position and orientation on an electronic device |
US8972061B2 (en) | 2012-11-02 | 2015-03-03 | Irobot Corporation | Autonomous coverage robot |
CN102981616B (en) * | 2012-11-06 | 2017-09-22 | 中兴通讯股份有限公司 | The recognition methods of object and system and computer in augmented reality |
US9229477B2 (en) * | 2012-12-11 | 2016-01-05 | Dell Products L.P. | Multi-function information handling system with multi-orientation stand |
US9280179B2 (en) | 2012-12-11 | 2016-03-08 | Dell Products L.P. | Multi-function information handling system tablet with multi-directional cooling |
JP6289498B2 (en) * | 2012-12-14 | 2018-03-07 | ビステオン グローバル テクノロジーズ インコーポレイテッド | System and method for automatically adjusting the angle of a three-dimensional display in a vehicle |
US20140168264A1 (en) | 2012-12-19 | 2014-06-19 | Lockheed Martin Corporation | System, method and computer program product for real-time alignment of an augmented reality device |
CN104969029B (en) | 2012-12-19 | 2018-11-02 | 巴斯夫欧洲公司 | Detector for at least one object of optical detection |
US9539723B2 (en) | 2013-03-13 | 2017-01-10 | Double Robotics, Inc. | Accessory robot for mobile device |
US9266445B2 (en) | 2013-03-14 | 2016-02-23 | Boosted Inc. | Dynamic control for light electric vehicles |
CN103197889B (en) * | 2013-04-03 | 2017-02-08 | 锤子科技(北京)有限公司 | Brightness adjusting method and device and electronic device |
US9786246B2 (en) | 2013-04-22 | 2017-10-10 | Ar Tables, Llc | Apparatus for hands-free augmented reality viewing |
WO2014198629A1 (en) | 2013-06-13 | 2014-12-18 | Basf Se | Detector for optically detecting at least one object |
US9741954B2 (en) | 2013-06-13 | 2017-08-22 | Basf Se | Optical detector and method for manufacturing the same |
AU2014280335B2 (en) | 2013-06-13 | 2018-03-22 | Basf Se | Detector for optically detecting an orientation of at least one object |
US9025863B2 (en) * | 2013-06-27 | 2015-05-05 | Intel Corporation | Depth camera system with machine learning for recognition of patches within a structured light pattern |
EP3036558B1 (en) | 2013-08-19 | 2020-12-16 | Basf Se | Detector for determining a position of at least one object |
JP6403776B2 (en) | 2013-08-19 | 2018-10-10 | ビーエーエスエフ ソシエタス・ヨーロピアBasf Se | Optical detector |
US9207764B2 (en) | 2013-09-18 | 2015-12-08 | Immersion Corporation | Orientation adjustable multi-channel haptic device |
US9672649B2 (en) * | 2013-11-04 | 2017-06-06 | At&T Intellectual Property I, Lp | System and method for enabling mirror video chat using a wearable display device |
US10251008B2 (en) | 2013-11-22 | 2019-04-02 | Apple Inc. | Handsfree beam pattern configuration |
CN104679397A (en) * | 2013-11-29 | 2015-06-03 | 英业达科技有限公司 | Display and control method thereof |
JP5956479B2 (en) * | 2014-01-29 | 2016-07-27 | 株式会社東芝 | Display device and gaze estimation device |
WO2015116179A1 (en) | 2014-01-31 | 2015-08-06 | Empire Technology Development, Llc | Augmented reality skin manager |
EP3100240B1 (en) | 2014-01-31 | 2018-10-31 | Empire Technology Development LLC | Evaluation of augmented reality skins |
EP3100256A4 (en) | 2014-01-31 | 2017-06-28 | Empire Technology Development LLC | Augmented reality skin evaluation |
US10192359B2 (en) * | 2014-01-31 | 2019-01-29 | Empire Technology Development, Llc | Subject selected augmented reality skin |
US10031527B2 (en) * | 2014-02-03 | 2018-07-24 | Husqvarna Ab | Obstacle detection for a robotic working tool |
US9270943B2 (en) | 2014-03-31 | 2016-02-23 | Futurewei Technologies, Inc. | System and method for augmented reality-enabled interactions and collaboration |
US9910505B2 (en) * | 2014-06-17 | 2018-03-06 | Amazon Technologies, Inc. | Motion control for managing content |
GB2528060B (en) * | 2014-07-08 | 2016-08-03 | Ibm | Peer to peer audio video device communication |
KR102397527B1 (en) | 2014-07-08 | 2022-05-13 | 바스프 에스이 | Detector for determining a position of at least one object |
CN104240606B (en) * | 2014-08-22 | 2017-06-16 | 京东方科技集团股份有限公司 | The adjusting method of display device and display device viewing angle |
KR20160025922A (en) * | 2014-08-28 | 2016-03-09 | 삼성전자주식회사 | Method and apparatus for image processing |
US10250813B2 (en) * | 2014-09-03 | 2019-04-02 | Fuji Xerox Co., Ltd. | Methods and systems for sharing views |
EP3201567A4 (en) | 2014-09-29 | 2018-06-06 | Basf Se | Detector for optically determining a position of at least one object |
KR102497704B1 (en) | 2014-12-09 | 2023-02-09 | 바스프 에스이 | Optical detector |
US9704043B2 (en) | 2014-12-16 | 2017-07-11 | Irobot Corporation | Systems and methods for capturing images and annotating the captured images with information |
US10775505B2 (en) | 2015-01-30 | 2020-09-15 | Trinamix Gmbh | Detector for an optical detection of at least one object |
US10684485B2 (en) | 2015-03-06 | 2020-06-16 | Sony Interactive Entertainment Inc. | Tracking system for head mounted display |
US10296086B2 (en) * | 2015-03-20 | 2019-05-21 | Sony Interactive Entertainment Inc. | Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments |
US9788118B2 (en) * | 2015-03-27 | 2017-10-10 | Thales Avionics, Inc. | Spatial systems including eye tracking capabilities and related methods |
US10062208B2 (en) * | 2015-04-09 | 2018-08-28 | Cinemoi North America, LLC | Systems and methods to provide interactive virtual environments |
DE102015005505B3 (en) * | 2015-04-30 | 2016-06-16 | Ziehm Imaging Gmbh | Manually adjustable monitor mount for a flat screen of a mobile diagnostic device |
US9641800B2 (en) * | 2015-05-29 | 2017-05-02 | Intel Corporation | Method and apparatus to present three-dimensional video on a two-dimensional display driven by user interaction |
US9530426B1 (en) * | 2015-06-24 | 2016-12-27 | Microsoft Technology Licensing, Llc | Filtering sounds for conferencing applications |
EP3112965A1 (en) * | 2015-07-02 | 2017-01-04 | Accenture Global Services Limited | Robotic process automation |
US10955936B2 (en) | 2015-07-17 | 2021-03-23 | Trinamix Gmbh | Detector for optically detecting at least one object |
US9818228B2 (en) | 2015-08-07 | 2017-11-14 | Microsoft Technology Licensing, Llc | Mixed reality social interaction |
US9922463B2 (en) | 2015-08-07 | 2018-03-20 | Microsoft Technology Licensing, Llc | Virtually visualizing energy |
WO2017046121A1 (en) | 2015-09-14 | 2017-03-23 | Trinamix Gmbh | 3d camera |
FR3041804B1 (en) * | 2015-09-24 | 2021-11-12 | Dassault Aviat | VIRTUAL THREE-DIMENSIONAL SIMULATION SYSTEM SUITABLE TO GENERATE A VIRTUAL ENVIRONMENT GATHERING A PLURALITY OF USERS AND RELATED PROCESS |
JP6298432B2 (en) * | 2015-10-19 | 2018-03-20 | 株式会社コロプラ | Image generation apparatus, image generation method, and image generation program |
CN105242787B (en) * | 2015-10-22 | 2019-01-11 | 京东方科技集团股份有限公司 | A kind of display device and its adjusting method |
DE102015014119A1 (en) * | 2015-11-04 | 2017-05-18 | Thomas Tennagels | Adaptive visualization system and visualization method |
US10812778B1 (en) * | 2015-11-09 | 2020-10-20 | Cognex Corporation | System and method for calibrating one or more 3D sensors mounted on a moving manipulator |
US10757394B1 (en) | 2015-11-09 | 2020-08-25 | Cognex Corporation | System and method for calibrating a plurality of 3D sensors with respect to a motion conveyance |
US11562502B2 (en) | 2015-11-09 | 2023-01-24 | Cognex Corporation | System and method for calibrating a plurality of 3D sensors with respect to a motion conveyance |
US9479732B1 (en) * | 2015-11-10 | 2016-10-25 | Irobot Corporation | Immersive video teleconferencing robot |
US10134188B2 (en) * | 2015-12-21 | 2018-11-20 | Intel Corporation | Body-centric mobile point-of-view augmented and virtual reality |
US10021373B2 (en) | 2016-01-11 | 2018-07-10 | Microsoft Technology Licensing, Llc | Distributing video among multiple display zones |
US10591728B2 (en) * | 2016-03-02 | 2020-03-17 | Mentor Acquisition One, Llc | Optical systems for head-worn computers |
CN105892647B (en) * | 2016-03-23 | 2018-09-25 | 京东方科技集团股份有限公司 | A kind of display screen method of adjustment, its device and display device |
KR102537543B1 (en) * | 2016-03-24 | 2023-05-26 | 삼성전자주식회사 | Intelligent electronic device and operating method thereof |
US20170285739A1 (en) * | 2016-04-04 | 2017-10-05 | International Business Machines Corporation | Methods and Apparatus for Repositioning a Computer Display Based on Eye Position |
US10078333B1 (en) * | 2016-04-17 | 2018-09-18 | X Development Llc | Efficient mapping of robot environment |
KR101860370B1 (en) * | 2016-05-23 | 2018-05-24 | 주식회사 토비스 | a Public HMD apparatus and a game machine having the same |
US10242643B2 (en) | 2016-07-18 | 2019-03-26 | Microsoft Technology Licensing, Llc | Constrained head-mounted display communication |
US11211513B2 (en) | 2016-07-29 | 2021-12-28 | Trinamix Gmbh | Optical sensor and detector for an optical detection |
KR102575104B1 (en) | 2016-10-25 | 2023-09-07 | 트리나미엑스 게엠베하 | Infrared optical detector with integrated filter |
US11428787B2 (en) | 2016-10-25 | 2022-08-30 | Trinamix Gmbh | Detector for an optical detection of at least one object |
US11860292B2 (en) | 2016-11-17 | 2024-01-02 | Trinamix Gmbh | Detector and methods for authenticating at least one object |
EP4239371A3 (en) | 2016-11-17 | 2023-11-08 | trinamiX GmbH | Detector for optically detecting at least one object |
US9805306B1 (en) | 2016-11-23 | 2017-10-31 | Accenture Global Solutions Limited | Cognitive robotics analyzer |
CN108231073B (en) * | 2016-12-16 | 2021-02-05 | 深圳富泰宏精密工业有限公司 | Voice control device, system and control method |
CN106476019B (en) * | 2016-12-27 | 2019-04-16 | 深圳市普云智能科技有限公司 | Intelligent robot |
US10231682B2 (en) * | 2017-01-28 | 2019-03-19 | Radiographic Paddle, LLC | Apparatuses for manipulating a sensor |
US10782668B2 (en) * | 2017-03-16 | 2020-09-22 | Siemens Aktiengesellschaft | Development of control applications in augmented reality environment |
US11163379B2 (en) * | 2017-04-05 | 2021-11-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Illuminating an environment for localisation |
JP7204667B2 (en) | 2017-04-20 | 2023-01-16 | トリナミクス ゲゼルシャフト ミット ベシュレンクテル ハフツング | photodetector |
US10223821B2 (en) * | 2017-04-25 | 2019-03-05 | Beyond Imagination Inc. | Multi-user and multi-surrogate virtual encounters |
CN107219889A (en) * | 2017-05-25 | 2017-09-29 | 京东方科技集团股份有限公司 | Eye-protecting display device and display screen method of adjustment |
JP6572943B2 (en) * | 2017-06-23 | 2019-09-11 | カシオ計算機株式会社 | Robot, robot control method and program |
US10235192B2 (en) | 2017-06-23 | 2019-03-19 | Accenture Global Solutions Limited | Self-learning robotic process automation |
WO2019002199A1 (en) | 2017-06-26 | 2019-01-03 | Trinamix Gmbh | Detector for determining a position of at least one object |
CN107454157A (en) * | 2017-07-26 | 2017-12-08 | 上海与德通讯技术有限公司 | Robot interactive method and system |
US11027432B2 (en) * | 2017-09-06 | 2021-06-08 | Stryker Corporation | Techniques for controlling position of an end effector of a robotic device relative to a virtual constraint |
US10560326B2 (en) * | 2017-09-22 | 2020-02-11 | Webroot Inc. | State-based entity behavior analysis |
EP3691840A1 (en) * | 2017-10-06 | 2020-08-12 | Moog Inc. | Teleoperation systems, method, apparatus, and computer-readable medium |
CA2983780C (en) | 2017-10-25 | 2020-07-14 | Synaptive Medical (Barbados) Inc. | Surgical imaging sensor and display unit, and surgical navigation system associated therewith |
US10423821B2 (en) * | 2017-10-25 | 2019-09-24 | Microsoft Technology Licensing, Llc | Automated profile image generation based on scheduled video conferences |
US10009690B1 (en) * | 2017-12-08 | 2018-06-26 | Glen A. Norris | Dummy head for electronic calls |
US11560289B2 (en) * | 2017-12-12 | 2023-01-24 | Otis Elevator Company | Inspection and maintenance system for elevators |
CN107908008A (en) * | 2017-12-28 | 2018-04-13 | 许峰 | Self-moving AR display screen |
US20190236976A1 (en) * | 2018-01-31 | 2019-08-01 | Rnd64 Limited | Intelligent personal assistant device |
US10618443B2 (en) * | 2018-02-01 | 2020-04-14 | GM Global Technology Operations LLC | Method and apparatus that adjust audio output according to head restraint position |
US10375632B1 (en) * | 2018-02-06 | 2019-08-06 | Google Llc | Power management for electromagnetic position tracking systems |
DE102018108855A1 (en) * | 2018-04-13 | 2019-10-17 | Stabilus Gmbh | Positioning device, display device and method for automatically positioning a display device |
CN208967398U (en) * | 2018-07-06 | 2019-06-11 | 合肥京东方光电科技有限公司 | Display system and display device pedestal |
US10764660B2 (en) * | 2018-08-02 | 2020-09-01 | Igt | Electronic gaming machine and method with selectable sound beams |
CN110912960B (en) * | 2018-09-18 | 2023-04-28 | 斑马智行网络(香港)有限公司 | Data processing method, device and machine-readable medium |
JP7325173B2 (en) * | 2018-10-06 | 2023-08-14 | シスメックス株式会社 | REMOTE SUPPORT METHOD FOR SURGERY ASSIST ROBOT, AND REMOTE SUPPORT SYSTEM |
US11027430B2 (en) * | 2018-10-12 | 2021-06-08 | Toyota Research Institute, Inc. | Systems and methods for latency compensation in robotic teleoperation |
US11557297B2 (en) * | 2018-11-09 | 2023-01-17 | Embodied, Inc. | Systems and methods for adaptive human-machine interaction and automatic behavioral assessment |
US11023095B2 (en) | 2019-07-12 | 2021-06-01 | Cinemoi North America, LLC | Providing a first person view in a virtual world using a lens |
US11294432B2 (en) * | 2020-02-28 | 2022-04-05 | International Business Machines Corporation | Dynamically aligning a digital display |
US11645328B2 (en) * | 2020-03-17 | 2023-05-09 | Adobe Inc. | 3D-aware image search |
CN112068532B (en) * | 2020-09-10 | 2022-04-15 | 中车大连电力牵引研发中心有限公司 | Test bed and test method for network control system of multi-locomotive |
CN112256044B (en) * | 2020-12-23 | 2021-03-12 | 炬星科技(深圳)有限公司 | Method, device and storage medium for reducing waiting time of human-computer interaction |
US11134217B1 (en) | 2021-01-11 | 2021-09-28 | Surendra Goel | System that provides video conferencing with accent modification and multiple video overlaying |
CN115145303B (en) * | 2022-03-10 | 2023-07-07 | 重庆大学 | Heavy-load hydraulic arm auxiliary control system based on visual hearing enhancement feedback |
US20240144588A1 (en) * | 2022-11-02 | 2024-05-02 | Rovi Guides, Inc. | Systems and methods for emulating a user device in a virtual environment |
WO2024125465A1 (en) * | 2022-12-13 | 2024-06-20 | 深圳市信控科技有限公司 | Viewing-angle following display system, operating system, reconstruction sensing control system, and control method |
Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US218457A (en) * | 1879-08-12 | Improvement in stilts | ||
US218458A (en) * | 1879-08-12 | Improvement in knitting-machines | ||
US1072014A (en) * | 1913-03-07 | 1913-09-02 | Anton C Kudla | Trolley stand and pole. |
US5347306A (en) * | 1993-12-17 | 1994-09-13 | Mitsubishi Electric Research Laboratories, Inc. | Animated electronic meeting place |
US5736982A (en) * | 1994-08-03 | 1998-04-07 | Nippon Telegraph And Telephone Corporation | Virtual space apparatus with avatars and speech |
US5742264A (en) * | 1995-01-24 | 1998-04-21 | Matsushita Electric Industrial Co., Ltd. | Head-mounted display |
US5990865A (en) * | 1997-01-06 | 1999-11-23 | Gard; Matthew Davis | Computer interface device |
US6034653A (en) * | 1997-08-01 | 2000-03-07 | Colorado Microdisplay, Inc. | Head-set display device |
US6222939B1 (en) * | 1996-06-25 | 2001-04-24 | Eyematic Interfaces, Inc. | Labeled bunch graphs for image analysis |
US6272231B1 (en) * | 1998-11-06 | 2001-08-07 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
US6301370B1 (en) * | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US20020060686A1 (en) * | 1996-08-29 | 2002-05-23 | Sanyo Electric Co., Ltd. | Texture information assignment method, object extraction method, three-dimensional model generating method, and apparatus thereof |
US20020075286A1 (en) * | 2000-11-17 | 2002-06-20 | Hiroki Yonezawa | Image generating system and method and storage medium |
US6455595B1 (en) * | 2000-07-24 | 2002-09-24 | Chevron U.S.A. Inc. | Methods for optimizing fischer-tropsch synthesis |
US6492986B1 (en) * | 1997-06-02 | 2002-12-10 | The Trustees Of The University Of Pennsylvania | Method for human face shape and motion estimation based on integrating optical flow and deformable models |
US20030035001A1 (en) * | 2001-08-15 | 2003-02-20 | Van Geest Bartolomeus Wilhelmus Damianus | 3D video conferencing |
US20030067536A1 (en) * | 2001-10-04 | 2003-04-10 | National Research Council Of Canada | Method and system for stereo videoconferencing |
US6580811B2 (en) * | 1998-04-13 | 2003-06-17 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
US6714661B2 (en) * | 1998-11-06 | 2004-03-30 | Nevengineering, Inc. | Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image |
US20040155902A1 (en) * | 2001-09-14 | 2004-08-12 | Dempski Kelly L. | Lab window collaboration |
US20040210603A1 (en) * | 2003-04-17 | 2004-10-21 | John Roston | Remote language interpretation system and method |
US20050007445A1 (en) * | 2003-07-11 | 2005-01-13 | Foote Jonathan T. | Telepresence system and method for video teleconferencing |
US20050068294A1 (en) * | 2003-08-27 | 2005-03-31 | Linix Cheng | Interface apparatus combining display panel and shaft |
US20050110867A1 (en) * | 2003-11-26 | 2005-05-26 | Karsten Schulz | Video conferencing system with physical cues |
US20060100642A1 (en) * | 2002-09-25 | 2006-05-11 | Guang-Zhong Yang | Control of robotic manipulation |
- 2005-10-24: US application US11/255,920, granted as US7626569B2 (en), status: not active (Expired - Fee Related)
- 2009-10-22: US application US12/604,211, published as US20100039380A1 (en), status: not active (Abandoned)
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US218458A (en) * | 1879-08-12 | Improvement in knitting-machines | ||
US218457A (en) * | 1879-08-12 | Improvement in stilts | ||
US1072014A (en) * | 1913-03-07 | 1913-09-02 | Anton C Kudla | Trolley stand and pole. |
US5347306A (en) * | 1993-12-17 | 1994-09-13 | Mitsubishi Electric Research Laboratories, Inc. | Animated electronic meeting place |
US5736982A (en) * | 1994-08-03 | 1998-04-07 | Nippon Telegraph And Telephone Corporation | Virtual space apparatus with avatars and speech |
US5742264A (en) * | 1995-01-24 | 1998-04-21 | Matsushita Electric Industrial Co., Ltd. | Head-mounted display |
US6222939B1 (en) * | 1996-06-25 | 2001-04-24 | Eyematic Interfaces, Inc. | Labeled bunch graphs for image analysis |
US6563950B1 (en) * | 1996-06-25 | 2003-05-13 | Eyematic Interfaces, Inc. | Labeled bunch graphs for image analysis |
US20020060686A1 (en) * | 1996-08-29 | 2002-05-23 | Sanyo Electric Co., Ltd. | Texture information assignment method, object extraction method, three-dimensional model generating method, and apparatus thereof |
US5990865A (en) * | 1997-01-06 | 1999-11-23 | Gard; Matthew Davis | Computer interface device |
US6492986B1 (en) * | 1997-06-02 | 2002-12-10 | The Trustees Of The University Of Pennsylvania | Method for human face shape and motion estimation based on integrating optical flow and deformable models |
US6034653A (en) * | 1997-08-01 | 2000-03-07 | Colorado Microdisplay, Inc. | Head-set display device |
US6580811B2 (en) * | 1998-04-13 | 2003-06-17 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
US6301370B1 (en) * | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US6272231B1 (en) * | 1998-11-06 | 2001-08-07 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
US6714661B2 (en) * | 1998-11-06 | 2004-03-30 | Nevengineering, Inc. | Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image |
US6455595B1 (en) * | 2000-07-24 | 2002-09-24 | Chevron U.S.A. Inc. | Methods for optimizing fischer-tropsch synthesis |
US20020075286A1 (en) * | 2000-11-17 | 2002-06-20 | Hiroki Yonezawa | Image generating system and method and storage medium |
US20030035001A1 (en) * | 2001-08-15 | 2003-02-20 | Van Geest Bartolomeus Wilhelmus Damianus | 3D video conferencing |
US20040155902A1 (en) * | 2001-09-14 | 2004-08-12 | Dempski Kelly L. | Lab window collaboration |
US20030067536A1 (en) * | 2001-10-04 | 2003-04-10 | National Research Council Of Canada | Method and system for stereo videoconferencing |
US20060100642A1 (en) * | 2002-09-25 | 2006-05-11 | Guang-Zhong Yang | Control of robotic manipulation |
US20040210603A1 (en) * | 2003-04-17 | 2004-10-21 | John Roston | Remote language interpretation system and method |
US20050007445A1 (en) * | 2003-07-11 | 2005-01-13 | Foote Jonathan T. | Telepresence system and method for video teleconferencing |
US20050068294A1 (en) * | 2003-08-27 | 2005-03-31 | Linix Cheng | Interface apparatus combining display panel and shaft |
US20050110867A1 (en) * | 2003-11-26 | 2005-05-26 | Karsten Schulz | Video conferencing system with physical cues |
Cited By (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11379060B2 (en) | 2004-08-25 | 2022-07-05 | Apple Inc. | Wide touchpad on a portable computer |
US20080001866A1 (en) * | 2006-06-28 | 2008-01-03 | Martin Michael M | Control Display Positioning System |
US8310468B2 (en) | 2006-06-28 | 2012-11-13 | Novartis Ag | Control display positioning system |
US20100239121A1 (en) * | 2007-07-18 | 2010-09-23 | Metaio Gmbh | Method and system for ascertaining the position and orientation of a camera relative to a real object |
US9008371B2 (en) * | 2007-07-18 | 2015-04-14 | Metaio Gmbh | Method and system for ascertaining the position and orientation of a camera relative to a real object |
US20090051699A1 (en) * | 2007-08-24 | 2009-02-26 | Videa, Llc | Perspective altering display system |
US10063848B2 (en) * | 2007-08-24 | 2018-08-28 | John G. Posa | Perspective altering display system |
US20090172557A1 (en) * | 2008-01-02 | 2009-07-02 | International Business Machines Corporation | Gui screen sharing between real pcs in the real world and virtual pcs in the virtual world |
US11449224B2 (en) | 2008-01-04 | 2022-09-20 | Apple Inc. | Selective rejection of touch contacts in an edge region of a touch surface |
US11886699B2 (en) | 2008-01-04 | 2024-01-30 | Apple Inc. | Selective rejection of touch contacts in an edge region of a touch surface |
US9891732B2 (en) | 2008-01-04 | 2018-02-13 | Apple Inc. | Selective rejection of touch contacts in an edge region of a touch surface |
US10747428B2 (en) | 2008-01-04 | 2020-08-18 | Apple Inc. | Selective rejection of touch contacts in an edge region of a touch surface |
US9606715B2 (en) | 2008-09-30 | 2017-03-28 | Apple Inc. | Touch screen device, method, and graphical user interface for moving on-screen objects without using a cursor |
US10209877B2 (en) | 2008-09-30 | 2019-02-19 | Apple Inc. | Touch screen device, method, and graphical user interface for moving on-screen objects without using a cursor |
US8284170B2 (en) * | 2008-09-30 | 2012-10-09 | Apple Inc. | Touch screen device, method, and graphical user interface for moving on-screen objects without using a cursor |
US20100079405A1 (en) * | 2008-09-30 | 2010-04-01 | Jeffrey Traer Bernstein | Touch Screen Device, Method, and Graphical User Interface for Moving On-Screen Objects Without Using a Cursor |
US10452174B2 (en) | 2008-12-08 | 2019-10-22 | Apple Inc. | Selective input signal rejection and modification |
US20150153865A1 (en) * | 2008-12-08 | 2015-06-04 | Apple Inc. | Selective input signal rejection and modification |
US9632608B2 (en) * | 2008-12-08 | 2017-04-25 | Apple Inc. | Selective input signal rejection and modification |
US20110084983A1 (en) * | 2009-09-29 | 2011-04-14 | Wavelength & Resonance LLC | Systems and Methods for Interaction With a Virtual Environment |
US8175734B2 (en) | 2009-10-08 | 2012-05-08 | 3D M. T. P. Ltd. | Methods and system for enabling printing three-dimensional object models |
US20110087350A1 (en) * | 2009-10-08 | 2011-04-14 | 3D M.T.P. Ltd | Methods and system for enabling printing three-dimensional object models |
WO2011101818A1 (en) * | 2010-02-21 | 2011-08-25 | Rafael Advanced Defense Systems Ltd. | Method and system for sequential viewing of two video streams |
US8982245B2 (en) | 2010-02-21 | 2015-03-17 | Rafael Advanced Defense Systems Ltd. | Method and system for sequential viewing of two video streams |
WO2012011893A1 (en) * | 2010-07-20 | 2012-01-26 | Empire Technology Development Llc | Augmented reality proximity sensing |
US9606612B2 (en) | 2010-07-20 | 2017-03-28 | Empire Technology Development Llc | Augmented reality proximity sensing |
US10437309B2 (en) | 2010-07-20 | 2019-10-08 | Empire Technology Development Llc | Augmented reality proximity sensing |
US20120038738A1 (en) * | 2010-08-12 | 2012-02-16 | Alcatel-Lucent Usa, Incorporated | Gaze correcting apparatus, a method of videoconferencing and a videoconferencing system |
US8421844B2 (en) | 2010-08-13 | 2013-04-16 | Alcatel Lucent | Apparatus for correcting gaze, a method of videoconferencing and a system therefor |
US20120075166A1 (en) * | 2010-09-29 | 2012-03-29 | Samsung Electronics Co. Ltd. | Actuated adaptive display systems |
WO2012047905A3 (en) * | 2010-10-04 | 2014-04-03 | Wavelength & Resonance Llc, Dba Oooii | Head and arm detection for virtual immersion systems and methods |
WO2012047905A2 (en) * | 2010-10-04 | 2012-04-12 | Wavelength & Resonance Llc, Dba Oooii | Head and arm detection for virtual immersion systems and methods |
US20120098931A1 (en) * | 2010-10-26 | 2012-04-26 | Sony Corporation | 3d motion picture adaption system |
US20120127325A1 (en) * | 2010-11-23 | 2012-05-24 | Inventec Corporation | Web Camera Device and Operating Method thereof |
US9569159B2 (en) * | 2011-02-03 | 2017-02-14 | Echostar Technologies L.L.C. | Apparatus, systems and methods for presenting displayed image information of a mobile media device on a large display and control of the mobile media device therefrom |
US20150009098A1 (en) * | 2011-02-03 | 2015-01-08 | Echostar Technologies L.L.C. | Apparatus, systems and methods for presenting displayed image information of a mobile media device on a large display and control of the mobile media device therefrom |
US8830244B2 (en) * | 2011-03-01 | 2014-09-09 | Sony Corporation | Information processing device capable of displaying a character representing a user, and information processing method thereof |
US20120223952A1 (en) * | 2011-03-01 | 2012-09-06 | Sony Computer Entertainment Inc. | Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof. |
DE102011015136A1 (en) * | 2011-03-25 | 2012-09-27 | Institut für Rundfunktechnik GmbH | Apparatus and method for determining a representation of digital objects in a three-dimensional presentation space |
US20120281114A1 (en) * | 2011-05-03 | 2012-11-08 | Ivi Media Llc | System, method and apparatus for providing an adaptive media experience |
US8496218B2 (en) | 2011-06-08 | 2013-07-30 | Alcon Research, Ltd. | Display monitor guide |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US20140180684A1 (en) * | 2012-12-20 | 2014-06-26 | Strubwerks, LLC | Systems, Methods, and Apparatus for Assigning Three-Dimensional Spatial Data to Sounds and Audio Files |
US9983846B2 (en) | 2012-12-20 | 2018-05-29 | Strubwerks, LLC | Systems, methods, and apparatus for recording three-dimensional audio and associated data |
US10725726B2 (en) * | 2012-12-20 | 2020-07-28 | Strubwerks, LLC | Systems, methods, and apparatus for assigning three-dimensional spatial data to sounds and audio files |
US20140176424A1 (en) * | 2012-12-24 | 2014-06-26 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for adjusting display screen |
US9076364B2 (en) * | 2012-12-24 | 2015-07-07 | Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. | Electronic device and method for adjusting display screen |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9591295B2 (en) * | 2013-09-24 | 2017-03-07 | Amazon Technologies, Inc. | Approaches for simulating three-dimensional views |
US20150085076A1 (en) * | 2013-09-24 | 2015-03-26 | Amazon Technologies, Inc. | Approaches for simulating three-dimensional views |
US9437038B1 (en) | 2013-09-26 | 2016-09-06 | Amazon Technologies, Inc. | Simulating three-dimensional views using depth relationships among planes of content |
US20150205994A1 (en) * | 2014-01-22 | 2015-07-23 | Samsung Electronics Co., Ltd. | Smart watch and control method thereof |
US9604361B2 (en) | 2014-02-05 | 2017-03-28 | Abb Schweiz Ag | System and method for defining motions of a plurality of robots cooperatively performing a show |
WO2015142956A1 (en) * | 2014-03-17 | 2015-09-24 | Intuitive Surgical Operations, Inc. | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
CN106470634A (en) * | 2014-03-17 | 2017-03-01 | 直观外科手术操作公司 | System and method for the outer instruction of screen of the apparatus in remote manipulation medical system |
US11903665B2 (en) | 2014-03-17 | 2024-02-20 | Intuitive Surgical Operations, Inc. | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
KR20160133515A (en) * | 2014-03-17 | 2016-11-22 | 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
KR102363661B1 (en) | 2014-03-17 | 2022-02-17 | 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
US11317979B2 (en) | 2014-03-17 | 2022-05-03 | Intuitive Surgical Operations, Inc. | Systems and methods for offscreen indication of instruments in a teleoperational medical system |
US10254544B1 (en) * | 2015-05-13 | 2019-04-09 | Rockwell Collins, Inc. | Head tracking accuracy and reducing latency in dynamic environments |
CN105163063A (en) * | 2015-06-23 | 2015-12-16 | 中山明杰自动化科技有限公司 | Machine image processing system |
US10388225B2 (en) * | 2016-09-30 | 2019-08-20 | Lg Display Co., Ltd. | Organic light emitting display device and method of controlling same |
DE102018201336A1 (en) * | 2018-01-29 | 2019-08-01 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Virtual Reality conference system |
US11326888B2 (en) * | 2018-07-25 | 2022-05-10 | Uatc, Llc | Generation of polar occlusion maps for autonomous vehicles |
CN109807892A (en) * | 2019-02-19 | 2019-05-28 | 宁波凯德科技服务有限公司 | Automobile welding robot motion planning model |
CN110720982A (en) * | 2019-10-29 | 2020-01-24 | 京东方科技集团股份有限公司 | Augmented reality system, control method and device based on augmented reality |
WO2021113612A1 (en) * | 2019-12-04 | 2021-06-10 | Black-I Robotics, Inc. | Robotic arm system |
CN111895940A (en) * | 2020-04-26 | 2020-11-06 | 鸿富锦精密电子(成都)有限公司 | Calibration file generation method, system, computer device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US7626569B2 (en) | 2009-12-01 |
US20060119572A1 (en) | 2006-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7626569B2 (en) | Movable audio/video communication interface system | |
US20240005808A1 (en) | Individual viewing in a shared space | |
US20230109054A1 (en) | System and method for generating a virtual reality scene based on individual asynchronous motion capture recordings | |
US20210256261A1 (en) | 3d object annotation | |
US10521951B2 (en) | 3D digital painting | |
US7812815B2 (en) | Compact haptic and augmented virtual reality system | |
Hua et al. | SCAPE: supporting stereoscopic collaboration in augmented and projective environments | |
US11032537B2 (en) | Movable display for viewing and interacting with computer generated environments | |
Handa et al. | Immersive technology–uses, challenges and opportunities | |
US20060250391A1 (en) | Three dimensional horizontal perspective workstation | |
US12014455B2 (en) | Audiovisual presence transitions in a collaborative reality environment | |
JP2023514572A (en) | session manager | |
JPH1118025A (en) | Image display device | |
CN107810634A (en) | Display for three-dimensional augmented reality | |
JP7558268B2 (en) | Non-uniform Stereo Rendering | |
JP2024150589A (en) | Communication terminal equipment | |
JP7547501B2 (en) | VR video space generation system | |
JP2023095862A (en) | Program and information processing method | |
JP3939444B2 (en) | Video display device | |
WO2023248832A1 (en) | Remote viewing system and on-site imaging system | |
WO2021153413A1 (en) | Information processing device, information processing system, and information processing method | |
Zhang et al. | Think Fast: Rapid Localization of Teleoperator Gaze in 360° Hosted Telepresence | |
JP2004258287A (en) | Video display system | |
WO2024064278A1 (en) | Devices, methods, and graphical user interfaces for interacting with extended reality experiences | |
WO2024197130A1 (en) | Devices, methods, and graphical user interfaces for capturing media with a camera application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: GRAPHICS PROPERTIES HOLDINGS, INC., CALIFORNIA; Free format text: CHANGE OF NAME; ASSIGNOR: SILICON GRAPHICS, INC.; REEL/FRAME: 023637/0776; Effective date: 20090603 |
AS | Assignment | Owner name: RPX CORPORATION, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: GRAPHICS PROPERTIES HOLDINGS, INC.; REEL/FRAME: 029564/0799; Effective date: 20121224 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |