
US9270943B2 - System and method for augmented reality-enabled interactions and collaboration - Google Patents


Info

Publication number
US9270943B2
Authority
US
United States
Prior art keywords
data
user
remote
local user
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/231,375
Other versions
US20150281649A1 (en)
Inventor
Jana Ehmann
Liang Zhou
Onur G. Guleryuz
Fengjun Lv
Fengqing Zhu
Naveen Dhar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FutureWei Technologies Inc
Original Assignee
FutureWei Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FutureWei Technologies Inc filed Critical FutureWei Technologies Inc
Priority to US14/231,375 (US9270943B2)
Assigned to FUTUREWEI TECHNOLOGIES, INC. (assignment of assignors interest; see document for details). Assignors: ZHOU, LIANG; GULERYUZ, ONUR G.; DHAR, NAVEEN; EHMANN, JANA; LV, FENGJUN; ZHU, FENGQING
Priority to EP15773862.6A (EP3055994A4)
Priority to PCT/CN2015/074237 (WO2015149616A1)
Priority to CN201580009875.0A (CN106165404B)
Priority to EP20199890.3A (EP3780590A1)
Publication of US20150281649A1
Application granted
Publication of US9270943B2
Legal status: Active
Adjusted expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • H04N13/0037
    • H04N13/0059
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/005Network, LAN, Remote Access, Distributed System
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/016Exploded view
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2012Colour editing, changing, or manipulating; Use of colour codes
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/02Networking aspects
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/20Details of the management of multiple sources of image data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0077Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0092Image segmentation from stereoscopic image signals

Definitions

  • Remote collaboration technologies, such as video conferencing software, are used to conference multiple users from remote locations together by way of simultaneous two-way transmissions.
  • many conventional systems for performing such tasks are unable to establish communication environments in which participants are able to enjoy a sense of shared presence within the same physical workspace.
  • collaborations and interactions performed over a communications network between remote users can be a difficult task. Accordingly, a need exists for a solution that provides participants of collaborative sessions performed over communication networks with the sensation of sharing a same physical workspace with each other in a manner that also improves user experience during such events.
  • Embodiments of the present invention provide a novel system and/or method for performing over-the-network collaborations and interactions between remote end-users.
  • Embodiments of the present invention produce the perceived effect of each user sharing a same physical workspace while each person is actually located in separate physical environments. In this manner, embodiments of the present invention allow for more seamless interactions between users while relieving them of the burden of using common computer peripheral devices such as mice, keyboards, and other hardware often used to perform such interactions.
  • FIG. 1A depicts an exemplary hardware configuration implemented on a client device for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 1B depicts exemplary components resident in memory executed by a client device for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 2 depicts an exemplary local media data computing module for capturing real-world information in real-time from a local environment during performance of augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 3 depicts an exemplary remote media data computing module for processing data received from remote client devices over a communications network during performance of augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 4 depicts an exemplary object-based virtual space composition module for generating a virtualized workspace display for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 5 depicts an exemplary multi-client real-time communication for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 6A is a flowchart of an exemplary computer-implemented method for generating local media data during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • FIG. 6B is a flowchart of an exemplary computer-implemented method of generating configurational data for creating a virtual workspace display for a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • FIG. 6C is a flowchart of an exemplary computer-implemented method of contemporaneously rendering a virtual workspace display and detecting gesture input during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • FIG. 7A depicts an exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 7B depicts another exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • FIG. 7C depicts yet another exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
  • embodiments of the present invention provide a system and/or method for performing augmented reality-enabled interactions and collaborations.
  • FIG. 1A depicts an exemplary hardware configuration used by various embodiments of the present invention. Although specific components are disclosed in FIG. 1A , it should be appreciated that such components are exemplary. That is, embodiments of the present invention are well suited to having various other hardware components or variations of the components recited in FIG. 1A . It is appreciated that the hardware components in FIG. 1A can operate with other components than those presented, and that not all of the hardware components described in FIG. 1A are required to achieve the goals of the present invention.
  • Client device 101 can be implemented as an electronic device capable of communicating with other remote computer systems over a communications network.
  • Client device 101 can be implemented as, for example, a digital camera, cell phone camera, portable electronic device (e.g., audio device, entertainment device, handheld device), webcam, video device (e.g., camcorder) and the like.
  • Components of client device 101 can comprise respective functionality to determine and configure respective optical properties and settings including, but not limited to, focus, exposure, color or white balance, and areas of interest (e.g., via a focus motor, aperture control, etc.).
  • components of client device 101 can be coupled via internal communications bus 105 and receive/transmit image data for further processing over such communications bus.
  • client device 101 can comprise sensors 100 , computer storage medium 135 , optional graphics system 141 , multiplexer 260 , processor 110 , and optional display device 111 .
  • Sensors 100 can include a plurality of sensors arranged in a manner that captures different forms of real-world information in real-time from a localized environment external to client device 101 .
  • Optional graphics system 141 can include a graphics processor (not pictured) operable to process instructions from applications resident in computer readable storage medium 135 and to communicate data with processor 110 via internal bus 105 . Data can be communicated in this fashion for rendering the data on optional display device 111 using frame memory buffer(s).
  • optional graphics system 141 can generate pixel data for output images from rendering commands and may be configured as multiple virtual graphic processors that are used in parallel (concurrently) by a number of applications executing in parallel.
  • Multiplexer 260 includes the functionality to transmit data both locally and over a communications network. As such, multiplexer 260 can multiplex outbound data communicated from client device 101 as well as de-multiplex inbound data received by client device 101 .
  • computer readable storage medium 135 can be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. Portions of computer readable storage medium 135 , when executed, facilitate efficient execution of memory operations or requests for groups of threads.
  • FIG. 1B depicts exemplary computer storage medium components used by various embodiments of the present invention. Although specific components are disclosed in FIG. 1B , it should be appreciated that such computer storage medium components are exemplary. That is, embodiments of the present invention are well suited to having various other components or variations of the computer storage medium components recited in FIG. 1B . It is appreciated that the components in FIG. 1B can operate with other components than those presented, and that not all of the computer storage medium components described in FIG. 1B are required to achieve the goals of the present invention.
  • computer readable storage medium 135 can include an operating system (e.g., operating system 112 ). Operating system 112 can be loaded into processor 110 when client device 101 is initialized. Also, upon execution by processor 110 , operating system 112 can be configured to supply a programmatic interface to client device 101 . Furthermore, as illustrated in FIG. 1B , computer readable storage medium 135 can include local media data computing module 200 , remote media data computing module 300 and object-based virtual space composition module 400 , which can provide instructions to processor 110 for processing via internal bus 105 . Accordingly, the functionality of local media data computing module 200 , remote media data computing module 300 and object-based virtual space composition module 400 will now be discussed in greater detail.
  • FIG. 2 describes the functionality of local media data computing module 200 in greater detail in accordance with embodiments of the present invention.
  • sensors 100 includes a set of sensors (e.g., S 1 , S 2 , S 3 , S 4 , etc.) arranged in a manner that captures different forms of real-world information in real-time from a localized environment external to client device 101 .
  • different sensors within sensors 100 can capture various forms of external data such as video (e.g., RGB data), depth information, infrared reflection data, thermal data, etc.
  • client device 101 can acquire a set of readings from different sensors within sensors 100 at any given time in the form of data maps.
  • Sensor data enhancement module 210 includes the functionality to pre-process data received via sensors 100 before being passed on to other modules within client device 101 (e.g., context extraction 220 , object-of-interest extraction 230 , user configuration detection 240 , etc.). For example, raw data obtained by each of the different sensors within sensors 100 may not necessarily correspond to a same spatial coordinate system. As such, sensor data enhancement module 210 can perform alignment procedures such that each measurement obtained by sensors within sensors 100 can be harmonized into one unified coordinate system. In this manner, information acquired from the different sensors can be combined and analyzed jointly by other modules within client device 101 .
  • sensor data enhancement module 210 can calibrate an appropriate transformation matrix that maps each sensor's data into a referent coordinate system (a coordinate-alignment sketch appears after this list).
  • the referent coordinate system created by sensor data enhancement module 210 may be the intrinsic coordinate system of one of the sensors of sensors 100 (e.g., video sensor) or a new coordinate system that is not associated with any of the sensors' respective coordinate systems.
  • the resultant set of transforms applied to raw sensor data acquired by a sensor acquiring color may be linear transforms or nonlinear transforms.
  • data obtained from sensors 100 can be noisy.
  • data maps can contain points at which the values are not known or defined, either due to the imperfections of a particular sensor or as a result of re-aligning the data from different viewpoints in space.
  • sensor data enhancement module 210 can also perform corrections to values of signals corrupted by noise or where the values of signals are not defined at all.
  • the output data of sensor data enhancement module 210 can be in the form of updated measurement maps (e.g., denoted as (x, y, z, r, g, b, ir, t . . . ) in FIG. 2 ) which can then be passed to other components within client device 101 for further processing.
  • Object-of-interest extraction module 230 includes the functionality to segment a local user and/or any other object of interest (e.g., various physical objects that the local user wants to present to the remote users, physical documents relevant for the collaboration, etc.) based on data received via sensor data enhancement module 210 during a current collaborative session (e.g., teleconference, telepresence, etc.).
  • Object-of-interest extraction module 230 can detect objects of interest by using external data gathered via sensors 100 (e.g., RGB data, infrared data, thermal data) or by combining the different sources and processing them jointly.
  • object-of-interest extraction module 230 can apply different computer-implemented RGB segmentation procedures, such as watershed, mean shift, etc., to detect users and/or objects.
  • the resultant output produced by object-of-interest extraction module 230 (e.g., (x,y,z,r,g,b,m)) can include depth data (e.g., coordinates (x,y,z)), RGB map data (e.g., values (r,g,b)), and object-of-interest data (m).
  • Context extraction module 220 includes the functionality to automatically extract high-level information concerning local users within their respective environments from data received via sensor data enhancement module 210 .
  • context extraction module 220 can use computer-implemented procedures to analyze data received from sensor data enhancement module 210 concerning a local user's body temperature and/or determine a user's current mood (e.g., angry, bored, etc.). As such, based on this data, context extraction module 220 can inferentially determine whether the user is actively engaged within a current collaborative session.
  • context extraction module 220 can analyze the facial expressions, posture and movement of a local user to determine user engagement. Determinations made by context extraction module 220 can be sent as context data to the multiplexer 260 , which further transmits the data both locally and over a communications network. In this manner, context data may be made available to the remote participants of a current collaborative session or it can affect the way the data is presented to the local user locally.
  • User configuration detection module 240 includes the functionality to use data processed by object-of-interest extraction module 230 to determine the presence of a recognized gesture performed by a detected user and/or object. For example, in one embodiment, user configuration detection module 240 can detect and extract a subset of points associated with a detected user's hand. As such, user configuration detection module 240 can then further classify and label points of the hand as a finger or palm. Hand features can be detected and computed based on the available configurations known to configuration alphabet 250, such as hand pose, finger pose, relative motion between hands, etc.
  • user configuration detection module 240 can detect in-air gestures, such as, for example, “hand waving” or “sweeping to the right.” In this manner, user configuration detection module 240 can use a configuration database to determine how to translate a detected configuration (hand pose, finger pose, motion, etc.) into a detected in-air gesture. The extracted hand features and, if detected, information about the in-air gesture can then be sent to object-based virtual space composition module 400 (e.g., see FIG. 4) for further processing (a gesture-detection sketch appears after this list).
  • FIG. 3 describes the functionality of remote media data computing module 300 in greater detail in accordance with embodiments of the present invention.
  • Remote media data computing module 300 includes the functionality to receive multiplexed data from remote client device peers (e.g., local media data generated by remote client devices in a manner similar to client device 101 ) and de-multiplex the inbound data via de-multiplexer 330 .
  • Data can be de-multiplexed into remote collaboration parameters (that include remote context data) and remote texture data, which includes depth (x, y, z), texture (r, g, b) and/or object-of-interest (m) data from the remote peers' physical environments. As such, this information can then be distributed to different components within client device 101 for further processing.
  • Artifact reduction module 320 includes the functionality to receive remote texture data from de-multiplexer 330 and minimize the appearance of segmentation errors to create a more visually pleasing rendering of remote user environments.
  • the blending of the segmented user and/or the background of the user can be accomplished through computer-implemented procedures involving contour-hatching textures. Further information and details regarding segmentation procedures may be found with reference to U.S. Patent Publication. No. US 2013/0265382 A1 entitled “VISUAL CONDITIONING FOR AUGMENTED-REALITY-ASSISTED VIDEO CONFERENCING,” which was filed on Dec. 31, 2012 by inventors Onur G. Guleryuz and Antonius Kalker, which is incorporated herein by reference in its entirety. These procedures can wrap the user boundaries and reduce the appearance of segmentation imperfections.
  • Artifact reduction module 320 can also determine the regions within remote user environments that need to be masked, based on potential estimated errors of a given subject's segmentation boundary. Additionally, artifact reduction module 320 can perform various optimization procedures that may include, but are not limited to, adjusting the lighting of the user's visuals, changing the contrast, performing color correction, etc. As such, refined remote texture data can be forwarded to the object-based virtual space composition module 400 and/or virtual space generation module 310 for further processing.
  • Virtual space generation module 310 includes the functionality to configure the appearance of a virtual workspace for a current collaborative session. For instance, based on a set of pre-determined system settings, virtual space generation module 310 can select a room size or room type (e.g., conference room, lecture hall, etc.) and insert and/or position virtual furniture within the room selected. In this manner, virtualized chairs, desks, tables, etc. can be rendered to give the effect of each participant being seated in the same physical environment during a session. Also, within this virtualized environment, other relevant objects such as boards, slides, presentation screens, etc. that are necessary for the collaborative session can also be included within the virtualized workspace.
  • virtual space generation module 310 can enable users to be rendered in a manner that hides the differences within their respective native physical environments during a current collaborative session. Furthermore, virtual space generation module 310 can adjust the appearance of the virtual workspace such that users from various different remote environments can be rendered in a more visually pleasing fashion. For example, subjects of interest that are further away from their respective cameras can appear disproportionately smaller than those subjects that are closer to their respective cameras. As such, virtual space generation module 310 can adjust the appearance of subjects by utilizing the depth information about each subject participating in a collaborative session as well as other objects of interest. In this manner, virtual space generation module 310 can be configured to select a scale to render the appearance of users such that they can fit within the dimensions of a given display based on a pre-determined layout conformity metric (a depth-based scaling sketch appears after this list).
  • virtual space generation module 310 can also ensure that the color, lighting, contrast, etc. of the virtual workspace form a more visually pleasing combination with the appearances of each user, for example by adjusting the colors of certain components within the virtual workspace (e.g., walls, backgrounds, furniture, etc.).
  • maximization of the layout conformity metric and the color conformity metric can result in a number of different virtual environments.
  • virtual space generation module 310 can generate an optimal virtual environment for a given task/collaboration session for any number of users. Accordingly, results generated by virtual space generation module 310 can be communicated to object-based virtual space composition module 400 for further processing.
  • FIG. 4 describes the functionality of object-based virtual space composition module 400 in greater detail in accordance with embodiments of the present invention.
  • Collaboration application module 410 includes the functionality to receive local media data from local media data computing module 200 , as well as any remote collaboration parameters (e.g., gesture data, type status indicator data) from remote media data computing module 300 . Based on the data received, collaboration application module 410 can perform various functions that enable a user to interact with other participants during a current collaboration.
  • collaboration application module 410 includes the functionality to process gesture data received via user configuration detection module 240 and/or determine whether a local user or a remote user wishes to manipulate a particular object rendered on their respective display screens during a current collaboration session. In this manner, collaboration application module 410 can serve as a gesture control interface that enables participants of a collaborative session to freely manipulate digital media objects (e.g., slide presentations, documents, etc.) rendered on their respective display screens, without a specific user maintaining complete control over the entire collaboration session.
  • collaboration application module 410 can be configured to perform in-air gesture detection and/or control collaboration objects. In this manner, collaboration application module 410 can translate detected hand gestures, such as swiping (e.g., swiping the hand to the right) and determine a corresponding action to be performed in response to the gesture detected (e.g., returning to a previous slide in response to detecting the hand swipe gesture).
  • collaboration application module 410 can be configured to detect touch input provided by a user via a touch sensitive display panel which expresses the user's desire to manipulate an object currently rendered on the user's local display screen. Manipulation of on-screen data can involve at least one participant and one digital media object.
  • collaboration application module 410 can be configured to recognize permissions set for a given collaborative session (e.g., which user is the owner of a particular collaborative process, which user is allowed to manipulate certain media objects, etc.). As such, collaboration application module 410 can enable multiple users to control the same object and/or different objects rendered on their local display screens.
  • object-based virtual space rendering module 420 can render the virtual workspace display using data received from remote client devices and data generated locally (e.g., presentation data, context data, data generated by collaboration application module 410 , etc.). In this manner, object-based virtual space rendering module 420 can feed virtual space parameters to a local graphics system for rendering a display to a user (e.g., via optional display device 111 ). As such, the resultant virtual workspace display generated by object-based virtual space rendering module 420 enables a local user to perceive the effect of sharing a common physical workspace with all remote users participating in a current collaborative session.
  • FIG. 5 depicts an exemplary multi-client, real-time communication in accordance with embodiments of the present invention.
  • FIG. 5 depicts two client devices (e.g., client devices 101 and 101 - 1 ) exchanging information over a communication network during the performance of a collaborative session.
  • client devices 101 and 101 - 1 can each include a set of sensors 100 that are capable of capturing information from their respective local environments.
  • local media data computing modules 200 and 200-1 can analyze their respective local data while remote media data computing modules 300 and 300-1 analyze the data received from each other.
  • object-based virtual space composition modules 400 and 400 - 1 can combine their respective local and remote data for the final presentation to their respective local users for the duration of a collaborative session.
  • FIG. 6A is a flowchart of an exemplary computer-implemented method for generating local media data during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • a local client device actively captures external data from within its localized physical environment using a set of sensors coupled to the device.
  • Data gathered from the sensors include different forms of real-world information (e.g., RGB data, depth information, infrared reflection data, thermal data) collected in real-time.
  • the object-of-interest module of the local client device performs segmentation procedures to detect an end-user and/or other objects of interest based on the data gathered during step 801 .
  • the object-of-interest module generates resultant output in the form of data maps which includes the location of the detected end-user and/or objects.
  • the context extraction module of the local client device extracts high-level data associated with the end-user (e.g., user mood, body temperature, facial expressions, posture, movement).
  • the user configuration module of the local client device receives data map information from the object-of-interest module to determine the presence of a recognized gesture (e.g., hand gesture) performed by a detected user or object.
  • at step 805, data produced during steps 803 and/or 804 is packaged as local media data and communicated to the object-based virtual space composition module of the local client device for further processing.
  • at step 806, the local media data generated during step 805 is multiplexed and communicated to other remote client devices engaged within the current collaborative session over the communications network.
  • FIG. 6B is a flowchart of an exemplary computer-implemented method of generating configurational data for creating a virtual workspace display for a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • the remote media data computing module of the local client device receives and de-multiplexes media data received from the remote client devices.
  • Media data received from the remote client devices includes context data, collaborative data and/or sensor data (e.g., RGB data, depth information, infrared reflections, thermal data) gathered by the remote client devices in real-time.
  • the artifact reduction module of the local client device performs segmentation correction procedures on data (e.g., RGB data) received during step 901 .
  • the virtual space generation module of the local client device uses data received during steps 901 and 902 to generate configurational data for creating a virtual workspace display for participants of the collaborative session.
  • the data includes configurational data for creating a virtual room furnished with virtual furniture and/or other virtualized objects.
  • the virtual space generation module adjusts and/or scales RGB data received during step 902 in a manner designed to render each remote user in a consistent and uniform manner on the local client device, irrespective of each remote user's current physical surroundings and/or distance from the user's camera.
  • at step 904, data generated by the virtual space generation module during step 903 is communicated to the local client device's object-based virtual space composition module for further processing.
  • FIG. 6C is a flowchart of an exemplary computer-implemented method of contemporaneously rendering a virtual workspace display and detecting gesture input during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
  • the object-based virtual space composition module of the local client device receives the local media data generated during step 805 and data generated by the virtual space generation module during step 904 to render a computer-generated virtual workspace display for each end-user participating in the collaboration session.
  • the object-based virtual space rendering module of each end-user's local client device renders the virtual workspace in a manner that enables each participant in the session to perceive the effect of sharing a common physical workspace with each other.
  • the collaboration application module of each client device engaged in the collaboration session waits to receive gesture data (e.g., in-air gestures, touch input) from its respective end-user via the user configuration detection module of that end-user's client device.
  • a collaboration application module receives gesture data from a respective user configuration detection module and determines whether the gesture recognized by the user configuration detection module is a command by an end-user to manipulate an object currently rendered on each participant's local display screen.
  • the gesture is determined by the collaboration application module as being indicative of a user expressing a desire to manipulate an object currently rendered on her screen, and therefore, the collaboration application enables the user to control and manipulate the object.
  • the action performed on the object by the user is rendered on the display screens of all users participating in the collaborative session in real-time. Additionally, the system continues to wait for gesture data, as detailed in step 1002 .
  • FIG. 7A depicts an exemplary slide presentation performed during a collaborative session in accordance with embodiments of the present invention.
  • FIG. 7A simultaneously presents both a local user's view and a remote user's view of a virtualized workspace display generated by embodiments of the present invention (e.g., virtualized workspace display 305 ) for the slide presentation.
  • subject 601 can participate in a collaborative session over a communications network with other remote participants using similar client devices.
  • embodiments of the present invention can encode and transmit their respective local collaboration application data in the manner described herein.
  • this data can include, but is not limited to, the spatial positioning of slides presented, display scale data, virtual pointer position data, control state data, etc. to the client devices of all remote users viewing the presentation (e.g., during Times 1 through 3 ).
  • FIGS. 7B and 7C depict an exemplary telepresence session performed in accordance with embodiments of the present invention.
  • subject 602 can be a user participating in a collaborative session with several remote users (e.g., via client device 101 ) over a communications network.
  • subject 602 can participate in the session from physical location 603 , which can be a hotel room, office room, etc. that is physically separated from other participants.
  • FIG. 7C depicts an exemplary virtualized workspace environment generated during a collaborative session in accordance with embodiments of the present invention.
  • embodiments of the present invention render virtualized workspace displays 305 - 1 , 305 - 2 , and 305 - 3 in a manner that enables each participant in the collaborative session (including subject 602 ) to perceive the effect of sharing a common physical workspace with each other.
  • virtualized workspace displays 305 - 1 , 305 - 2 , and 305 - 3 include a background or “virtual room” that can be furnished with virtual furniture and/or other virtualized objects.
  • virtualized workspace displays 305 - 1 , 305 - 2 , and 305 - 3 can be adjusted and/or scaled in a manner designed to render each remote user in a consistent and uniform manner, irrespective of each user's current physical surroundings and/or distance from the user's camera.
  • embodiments of the present invention allow users to set up the layout of media objects in the shared virtual workspace depending on the type of interaction or collaboration. For instance, users can select a 2-dimensional shared conference space with a simple background for visual interaction or a 3-dimensional shared conference space for visual interaction with media object collaboration.
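
The coordinate-alignment sketch referenced above illustrates how per-sensor measurements might be mapped into one referent coordinate system, as described for sensor data enhancement module 210. This is a minimal Python sketch, not the patent's implementation; the helper name, 4x4 matrices, and sample values are assumptions.

```python
import numpy as np

def align_to_referent(points, transform):
    """Map Nx3 sensor-space points into the referent coordinate system
    using a 4x4 homogeneous transform (hypothetical helper)."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ transform.T)[:, :3]

# Assumed per-sensor calibration matrices (e.g., from an offline calibration step).
depth_to_referent = np.eye(4)                        # depth sensor already in the referent frame
ir_to_referent = np.array([[1.0, 0.0, 0.0, 0.02],    # IR sensor offset ~2 cm along x
                           [0.0, 1.0, 0.0, 0.00],
                           [0.0, 0.0, 1.0, 0.00],
                           [0.0, 0.0, 0.0, 1.00]])

ir_points = np.random.rand(100, 3)                   # stand-in for raw IR sample positions
aligned_ir = align_to_referent(ir_points, ir_to_referent)
print(aligned_ir.shape)                              # (100, 3), now in the shared frame
```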
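
The gesture-detection sketch referenced above shows one way a configuration alphabet could translate detected hand configurations into in-air gestures such as "hand waving" or "sweeping to the right." The pose labels, thresholds, and function names below are hypothetical; the patent does not specify this logic.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HandFeatures:
    pose: str    # e.g., "open_palm", "fist" (labels are illustrative)
    dx: float    # horizontal hand displacement between frames
    dy: float    # vertical hand displacement between frames

# Hypothetical configuration alphabet: (pose, motion test) -> gesture label.
CONFIGURATION_ALPHABET: list[tuple[str, Callable[[HandFeatures], bool], str]] = [
    ("open_palm", lambda f: f.dx > 0.3, "sweep_right"),
    ("open_palm", lambda f: abs(f.dy) > 0.3, "hand_wave"),
]

def detect_gesture(features: HandFeatures) -> Optional[str]:
    """Translate a detected hand configuration into an in-air gesture,
    or return None when no alphabet entry matches."""
    for pose, motion_test, gesture in CONFIGURATION_ALPHABET:
        if features.pose == pose and motion_test(features):
            return gesture
    return None

print(detect_gesture(HandFeatures(pose="open_palm", dx=0.5, dy=0.0)))  # sweep_right
```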
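
The depth-based scaling sketch referenced above approximates how virtual space generation module 310 could normalize the apparent size of near and far participants. It assumes apparent size shrinks linearly with distance from the camera; the reference distance and function name are illustrative, not taken from the patent.

```python
def render_scale(subject_depth_m: float, reference_depth_m: float = 1.5) -> float:
    """Scale factor that re-renders a subject as if positioned at the reference
    distance: a farther subject is scaled up, a nearer one is scaled down."""
    return subject_depth_m / reference_depth_m

# Example: two remote subjects at different distances from their own cameras.
for name, depth_m in [("subject_A", 0.9), ("subject_B", 2.4)]:
    print(name, round(render_scale(depth_m), 2))   # 0.6 and 1.6
```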

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention provide a novel system and/or method for performing over-the-network collaborations and interactions between remote end-users. Embodiments of the present invention produce the perceived effect of each user sharing a same physical workspace while each person is actually located in separate physical environments. In this manner, embodiments of the present invention allow for more seamless interactions between users while relieving them of the burden of using common computer peripheral devices such as mice, keyboards, and other hardware often used to perform such interactions.

Description

BACKGROUND OF THE INVENTION
Remote collaboration technologies, such as video conferencing software, are used to conference multiple users from remote locations together by way of simultaneous two-way transmissions. However, many conventional systems for performing such tasks are unable to establish communication environments in which participants are able to enjoy a sense of shared presence within the same physical workspace. As such, collaborations and interactions performed over a communications network between remote users can be a difficult task. Accordingly, a need exists for a solution that provides participants of collaborative sessions performed over communication networks with the sensation of sharing a same physical workspace with each other in a manner that also improves user experience during such events.
SUMMARY OF THE INVENTION
Embodiments of the present invention provide a novel system and/or method for performing over-the-network collaborations and interactions between remote end-users. Embodiments of the present invention produce the perceived effect of each user sharing a same physical workspace while each person is actually located in separate physical environments. In this manner, embodiments of the present invention allow for more seamless interactions between users while relieving them of the burden of using common computer peripheral devices such as mice, keyboards, and other hardware often used to perform such interactions.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and form a part of this specification and in which like numerals depict like elements, illustrate embodiments of the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1A depicts an exemplary hardware configuration implemented on a client device for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 1B depicts exemplary components resident in memory executed by a client device for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 2 depicts an exemplary local media data computing module for capturing real-world information in real-time from a local environment during performance of augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 3 depicts an exemplary remote media data computing module for processing data received from remote client devices over a communications network during performance of augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 4 depicts an exemplary object-based virtual space composition module for generating a virtualized workspace display for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 5 depicts an exemplary multi-client real-time communication for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 6A is a flowchart of an exemplary computer-implemented method for generating local media data during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
FIG. 6B is a flowchart of an exemplary computer-implemented method of generating configurational data for creating a virtual workspace display for a collaborative session performed over a communications network in accordance with embodiments of the present invention.
FIG. 6C is a flowchart of an exemplary computer-implemented method of contemporaneously rendering a virtual workspace display and detecting gesture input during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
FIG. 7A depicts an exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 7B depicts another exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
FIG. 7C depicts yet another exemplary use case for performing augmented reality-enabled interactions and collaborations in accordance with embodiments of the present invention.
DETAILED DESCRIPTION
Reference will now be made in detail to the various embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. While described in conjunction with these embodiments, it will be understood that they are not intended to limit the disclosure to these embodiments. On the contrary, the disclosure is intended to cover alternatives, modifications and equivalents, which can be included within the spirit and scope of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present disclosure, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood that the present disclosure can be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.
Reference will now be made in detail to the preferred embodiments of the claimed subject matter, a system and method for augmented reality-enabled interactions and collaboration, examples of which are illustrated in the accompanying drawings. While the claimed subject matter will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit these embodiments. On the contrary, the claimed subject matter is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope as defined by the appended claims.
Furthermore, in the following detailed descriptions of embodiments of the claimed subject matter, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. However, it will be recognized by one of ordinary skill in the art that the claimed subject matter may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to obscure unnecessarily aspects of the claimed subject matter.
Some portions of the detailed descriptions which follow are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer generated step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present claimed subject matter, discussions utilizing terms such as “capturing”, “receiving”, “rendering” or the like, refer to the action and processes of a computer system or integrated circuit, or similar electronic computing device, including an embedded system, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Accordingly, embodiments of the present invention provide a system and/or method for performing augmented reality-enabled interactions and collaborations.
Exemplary Client Device for Performing Augmented Reality-Enabled Interactions and Collaborations
FIG. 1A depicts an exemplary hardware configuration used by various embodiments of the present invention. Although specific components are disclosed in FIG. 1A, it should be appreciated that such components are exemplary. That is, embodiments of the present invention are well suited to having various other hardware components or variations of the components recited in FIG. 1A. It is appreciated that the hardware components in FIG. 1A can operate with other components than those presented, and that not all of the hardware components described in FIG. 1A are required to achieve the goals of the present invention.
Client device 101 can be implemented as an electronic device capable of communicating with other remote computer systems over a communications network. Client device 101 can be implemented as, for example, a digital camera, cell phone camera, portable electronic device (e.g., audio device, entertainment device, handheld device), webcam, video device (e.g., camcorder) and the like. Components of client device 101 can comprise respective functionality to determine and configure respective optical properties and settings including, but not limited to, focus, exposure, color or white balance, and areas of interest (e.g., via a focus motor, aperture control, etc.). Furthermore, components of client device 101 can be coupled via internal communications bus 105 and receive/transmit image data for further processing over such communications bus.
In its most basic hardware configuration, client device 101 can comprise sensors 100, computer storage medium 135, optional graphics system 141, multiplexer 260, processor 110, and optional display device 111.
Sensors 100 can include a plurality of sensors arranged in a manner that captures different forms of real-world information in real-time from a localized environment external to client device 101. Optional graphics system 141 can include a graphics processor (not pictured) operable to process instructions from applications resident in computer readable storage medium 135 and to communicate data with processor 110 via internal bus 105. Data can be communicated in this fashion for rendering the data on optional display device 111 using frame memory buffer(s).
In this manner, optional graphics system 141 can generate pixel data for output images from rendering commands and may be configured as multiple virtual graphic processors that are used in parallel (concurrently) by a number of applications executing in parallel. Multiplexer 260 includes the functionality to transmit data both locally and over a communications network. As such, multiplexer 260 can multiplex outbound data communicated from client device 101 as well as de-multiplex inbound data received by client device 101. Depending on the exact configuration and type of client device, computer readable storage medium 135 can be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. Portions of computer readable storage medium 135, when executed, facilitate efficient execution of memory operations or requests for groups of threads.
FIG. 1B depicts exemplary computer storage medium components used by various embodiments of the present invention. Although specific components are disclosed in FIG. 1B, it should be appreciated that such computer storage medium components are exemplary. That is, embodiments of the present invention are well suited to having various other components or variations of the computer storage medium components recited in FIG. 1B. It is appreciated that the components in FIG. 1B can operate with other components than those presented, and that not all of the computer storage medium components described in FIG. 1B are required to achieve the goals of the present invention.
As depicted in FIG. 1B, computer readable storage medium 135 can include an operating system (e.g., operating system 112). Operating system 112 can be loaded into processor 110 when client device 101 is initialized. Also, upon execution by processor 110, operating system 112 can be configured to supply a programmatic interface to client device 101. Furthermore, as illustrated in FIG. 1B, computer readable storage medium 135 can include local media data computing module 200, remote media data computing module 300 and object-based virtual space composition module 400, which can provide instructions to processor 110 for processing via internal bus 105. Accordingly, the functionality of local media data computing module 200, remote media data computing module 300 and object-based virtual space composition module 400 will now be discussed in greater detail.
FIG. 2 describes the functionality of local media data computing module 200 in greater detail in accordance with embodiments of the present invention. As illustrated in FIG. 2, sensors 100 include a set of sensors (e.g., S1, S2, S3, S4, etc.) arranged in a manner that captures different forms of real-world information in real-time from a localized environment external to client device 101. As such, different sensors within sensors 100 can capture various forms of external data such as video (e.g., RGB data), depth information, infrared reflection data, thermal data, etc. For example, an exemplary set of data gathered by sensors 100 at time t_i may be depicted as:
(X, Y, R, G, B) for texture (image) data;
(X′, Y′, Z′) for depth data;
(X″, Y″, IR″) for infrared data;
(X′″, Y′″, T′″) for thermal data
where X and Y represent spatial coordinates and prime marks denote different coordinate systems; R, G, and B values each represent a respective color channel value (e.g., red, green and blue channels, respectively); Z represents a depth value; IR represents infrared values; and T represents thermal data. In this manner, client device 101 can acquire a set of readings from different sensors within sensors 100 at any given time in the form of data maps.
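By way of illustration only, the per-sensor data maps described above might be represented in code roughly as follows; the array shapes, field names, and the use of Python/NumPy are assumptions for exposition and are not part of the disclosed system.

```python
# Illustrative representation of the per-sensor data maps gathered at one
# time instant t_i. Array shapes, dtypes, and names are assumptions.
import numpy as np

H, W = 480, 640   # assumed per-sensor resolution (real sensors may differ)

sensor_frame = {
    "texture": np.zeros((H, W, 3), dtype=np.uint8),    # (X, Y, R, G, B): one RGB triple per pixel
    "depth": np.zeros((H, W), dtype=np.float32),       # (X', Y', Z'): depth map in its own coordinate system
    "infrared": np.zeros((H, W), dtype=np.float32),    # (X'', Y'', IR''): infrared reflection values
    "thermal": np.zeros((H, W), dtype=np.float32),     # (X''', Y''', T'''): thermal readings
}
```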
Sensor data enhancement module 210 includes the functionality to pre-process data received via sensors 100 before being passed on to other modules within client device 101 (e.g., context extraction 220, object-of-interest extraction 230, user configuration detection 240, etc.). For example, raw data obtained by each of the different sensors within sensors 100 may not necessarily correspond to a same spatial coordinate system. As such, sensor data enhancement module 210 can perform alignment procedures such that each measurement obtained by sensors within sensors 100 can be harmonized into one unified coordinate system. In this manner, information acquired from the different sensors can be combined and analyzed jointly by other modules within client device 101.
For example, during alignment procedures, sensor data enhancement module 210 can apply calibrated transformation matrices that map each sensor's data into a referent coordinate system. In one instance, the referent coordinate system created by sensor data enhancement module 210 may be the intrinsic coordinate system of one of the sensors of sensors 100 (e.g., the video sensor) or a new coordinate system that is not associated with any of the sensors' respective coordinate systems. For example, the resultant set of transforms applied to the raw data acquired by each sensor may be depicted as:
(X*, Y*, R*, G*, B*)=Trgb (X, Y, R, G, B) for texture (image) data;
(X*, Y*, Z*)=Tz (X′, Y′, Z′) for depth data;
(X*, Y*, (IR)*)=Tir (X″, Y″, IR″) for infrared data;
(X*, Y*, T*)=Tt (X′″, Y′″, T′″) for thermal data
where the transforms Trgb, Tz, Tir, and Tt have been previously determined by registration procedures for each sensor of sensors 100. Transforms T can be affine transforms (i.e., T(v) = Av + b, where v is the input vector to be transformed, A is a matrix, and b is another vector), linear transforms, or nonlinear transforms. After the performance of alignment procedures, each point in the referent coordinate system, described by (X*, Y*), should have associated values from all the input sensors.
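As a rough sketch of the alignment step, the affine mapping T(v) = Av + b can be applied to each sensor's pixel coordinates to place them in the referent system; the particular matrices, offsets, and resolution below are illustrative assumptions, not calibration results from the disclosed system.

```python
# Illustrative sketch of applying a previously calibrated affine transform
# T(v) = A v + b to map one sensor's coordinates into the referent system.
import numpy as np

def align_points(points_xy, A, b):
    """Map an (N, 2) array of sensor coordinates into the referent system."""
    return points_xy @ A.T + b          # row-wise A v + b

# Example: map the depth sensor's pixel grid into the referent (e.g., video) frame.
A_depth = np.array([[1.02, 0.00],
                    [0.00, 1.02]])      # assumed scale correction
b_depth = np.array([3.5, -2.0])         # assumed translation offset in pixels

yy, xx = np.mgrid[0:480, 0:640]
depth_coords = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(np.float64)
aligned_xy = align_points(depth_coords, A_depth, b_depth)   # (X*, Y*) per depth sample
```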
In certain scenarios, data obtained from sensors 100 can be noisy. Additionally, data maps can contain points at which the values are not known or defined, either due to the imperfections of a particular sensor or as a result of re-aligning the data from different viewpoints in space. As such, sensor data enhancement module 210 can also correct signal values corrupted by noise and estimate values at points where the signal is not defined at all. Accordingly, the output data of sensor data enhancement module 210 can be in the form of updated measurement maps (e.g., denoted as (x, y, z, r, g, b, ir, t . . . ) in FIG. 2), which can then be passed to other components within client device 101 for further processing.
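One simple, hypothetical way to perform such corrections is to seed undefined samples with a robust estimate and then apply a light median filter; the disclosure does not prescribe this particular method.

```python
# Hypothetical correction pass: seed undefined depth samples with the median
# of the valid ones, then lightly median-filter the whole map.
import numpy as np
from scipy.ndimage import median_filter

def enhance_depth(depth, valid_mask, size=5):
    """depth: HxW float map; valid_mask: HxW bool map of defined samples."""
    filled = depth.astype(np.float32).copy()
    filled[~valid_mask] = np.median(depth[valid_mask])   # crude hole fill
    return median_filter(filled, size=size)              # light denoising
```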
Object-of-interest extraction module 230 includes the functionality to segment a local user and/or any other object of interest (e.g., various physical objects that the local user wants to present to the remote users, physical documents relevant for the collaboration, etc.) based on data received via sensor data enhancement module 210 during a current collaborative session (e.g., teleconference, telepresence, etc.). Object-of-interest extraction module 230 can detect objects of interest by using external data gathered via sensors 100 (e.g., RGB data, infrared data, thermal data) or by combining the different sources and processing them jointly. In this manner, object-of-interest extraction module 230 can apply different computer-implemented RGB segmentation procedures, such as watershed, mean shift, etc., to detect users and/or objects. As illustrated in FIG. 2, the resultant output produced by object-of-interest extraction module 230 (e.g., (x,y,z,r,g,b,m)) can include depth data (e.g., coordinates (x,y,z)) and/or RGB map data (e.g., coordinates (r,g,b)), along with an object-of-interest data map (m). For example, further information and details regarding RGB segmentation procedures may be found with reference to U.S. Provisional Application No. 61/869,574 entitled “TEMPORALLY COHERENT SEGMENTATION OF RGBt VOLUMES WITH AID OF NOISY OR INCOMPLETE AUXILIARY DATA,” which was filed on Aug. 23, 2013 by inventor Jana Ehmann, which is incorporated herein by reference in its entirety. This result can then be forwarded to multiplexer 260, as well as to the user configuration detection module 240 for further processing.
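A minimal sketch of producing an object-of-interest map m by jointly thresholding depth and color appears below; the thresholds are illustrative assumptions and stand in for the richer watershed, mean-shift, and RGBt segmentation procedures referenced above.

```python
# Minimal illustrative segmentation: mark pixels whose depth lies in an
# assumed "user" range and whose color is not degenerate, producing the
# object-of-interest map m. Thresholds are assumptions.
import numpy as np

def object_of_interest_mask(depth, rgb, near=0.5, far=1.5):
    """depth in meters (HxW), rgb as HxWx3 uint8 -> m as HxW uint8 (1 = of interest)."""
    in_range = (depth > near) & (depth < far)       # subject assumed to sit near the camera
    lit = rgb.mean(axis=2) > 10                     # discard unlit/invalid pixels
    return (in_range & lit).astype(np.uint8)
```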
Context extraction module 220 includes the functionality to automatically extract high-level information concerning local users within their respective environments from data received via sensor data enhancement module 210. For instance, context extraction module 220 can use computer-implemented procedures to analyze data received from sensor data enhancement module 210 concerning a local user's body temperature and/or determine a user's current mood (e.g., angry, bored, etc.). As such, based on this data, context extraction module 220 can inferentially determine whether the user is actively engaged within a current collaborative session.
In another example, context extraction module 220 can analyze the facial expressions, posture and movement of a local user to determine user engagement. Determinations made by context extraction module 220 can be sent as context data to the multiplexer 260, which further transmits the data both locally and over a communications network. In this manner, context data may be made available to the remote participants of a current collaborative session or it can affect the way the data is presented to the local user locally.
User configuration detection module 240 includes the functionality to use data processed by object-of-interest extraction module 230 to determine the presence of a recognized gesture performed by a detected user and/or object. For example, in one embodiment, user configuration detection module 240 can detect and extract a subset of points associated with a detected user's hand. As such, user configuration detection module 240 can then further classify and label points of the hand as a finger or palm. Hand features can be detected and computed based on the available configurations known to configuration alphabet 250, such as hand pose, finger pose, relative motion between hands, etc. Additionally, user configuration detection module 240 can detect in-air gestures, such as, for example, “hand waving” or “sweeping to the right.” In this manner, user configuration detection module 240 can use a configuration database to determine how to translate a detected configuration (hand pose, finger pose, motion, etc.) into a detected in-air gesture. The extracted hand features and, if detected, information about the in-air gesture can then be sent to object-based virtual space composition module 400 (e.g., see FIG. 4) for further processing.
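The following sketch illustrates how a detected hand configuration plus recent palm motion might be translated into an in-air gesture against a small configuration alphabet; the alphabet entries, thresholds, and pixel units are assumptions for illustration only.

```python
# Hypothetical configuration alphabet: named hand configurations and the
# finger-count ranges that characterize them (illustrative values).
CONFIGURATION_ALPHABET = {
    "open_palm": range(4, 6),
    "fist": range(0, 1),
}

def classify_configuration(finger_count):
    for name, allowed in CONFIGURATION_ALPHABET.items():
        if finger_count in allowed:
            return name
    return "other"

def detect_gesture(finger_count, palm_track):
    """palm_track: recent (x, y) palm centroids, oldest first (pixel units assumed)."""
    config = classify_configuration(finger_count)
    dx = palm_track[-1][0] - palm_track[0][0]
    dy = palm_track[-1][1] - palm_track[0][1]
    xs = [p[0] for p in palm_track]
    if config == "open_palm" and dx > 80 and abs(dy) < 40:
        return "sweeping to the right"
    if config == "open_palm" and (max(xs) - min(xs)) > 120 and abs(dx) < 40:
        return "hand waving"   # large side-to-side motion that ends near where it started
    return None
```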
FIG. 3 describes the functionality of remote media data computing module 300 in greater detail in accordance with embodiments of the present invention. Remote media data computing module 300 includes the functionality to receive multiplexed data from remote client device peers (e.g., local media data generated by remote client devices in a manner similar to client device 101) and de-multiplex the inbound data via de-multiplexer 330. Data can be de-multiplexed into remote collaboration parameters (that include remote context data) and remote texture data, which includes depth (x, y, z), texture (r, g, b) and/or object-of-interest (m) data from the remote peers' physical environments. As such, this information can then be distributed to different components within client device 101 for further processing.
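A minimal sketch of the de-multiplexing step appears below; the payload layout is an assumed dictionary structure, not a wire format defined by the disclosure.

```python
# Illustrative de-multiplexing of an inbound payload from a remote peer into
# remote collaboration parameters (including context data) and remote texture
# data (x, y, z, r, g, b, m). The dictionary layout is an assumption.
def demultiplex(payload):
    collaboration_params = {
        "context": payload.get("context", {}),       # e.g., remote user engagement cues
        "collaboration": payload.get("collab", {}),  # e.g., slide position, control state
    }
    remote_texture = payload.get("texture", {})      # depth, RGB, and object-of-interest maps
    return collaboration_params, remote_texture
```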
Artifact reduction module 320 includes the functionality to receive remote texture data from de-multiplexer 330 and minimize the appearance of segmentation errors to create a more visually pleasing rendering of remote user environments. In order to increase the appeal of the subject's rendering in the virtual space and to hide segmentation artifacts such as noisy boundaries, missing regions, etc., the blending of the segmented user and/or the background of the user can be accomplished through computer-implemented procedures involving contour-hatching textures. Further information and details regarding segmentation procedures may be found with reference to U.S. Patent Publication No. US 2013/0265382 A1 entitled “VISUAL CONDITIONING FOR AUGMENTED-REALITY-ASSISTED VIDEO CONFERENCING,” which was filed on Dec. 31, 2012 by inventors Onur G. Guleryuz and Antonius Kalker, which is incorporated herein by reference in its entirety. These procedures can wrap the user boundaries and reduce the appearance of segmentation imperfections.
Artifact reduction module 320 can also determine the regions within remote user environments that need to be masked, based on potential estimated errors of a given subject's segmentation boundary. Additionally, artifact reduction module 320 can perform various optimization procedures that may include, but are not limited to, adjusting the lighting of the user's visuals, changing the contrast, performing color correction, etc. As such, refined remote texture data can be forwarded to the object-based virtual space composition module 400 and/or virtual space generation module 310 for further processing.
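One hypothetical way to mask unreliable segmentation boundaries before rendering is to erode the object-of-interest mask and feather its edge into an alpha matte, as sketched below; this stands in for, and does not reproduce, the contour-hatching procedures referenced above.

```python
# Hypothetical boundary cleanup: erode the object-of-interest mask to trim
# unreliable border pixels, then blur it into a soft alpha matte for blending.
import numpy as np
from scipy.ndimage import binary_erosion, gaussian_filter

def feathered_alpha(mask, erode_iter=2, blur_sigma=2.0):
    """mask: HxW bool/uint8 object map -> float alpha matte in [0, 1]."""
    core = binary_erosion(mask.astype(bool), iterations=erode_iter)
    return gaussian_filter(core.astype(np.float32), sigma=blur_sigma)
```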
Virtual space generation module 310 includes the functionality to configure the appearance of a virtual workspace for a current collaborative session. For instance, based on a set of pre-determined system settings, virtual space generation module 310 can select a room size or room type (e.g., conference room, lecture hall, etc.) and insert and/or position virtual furniture within the room selected. In this manner, virtualized chairs, desks, tables, etc. can be rendered to give the effect of each participant being seated in the same physical environment during a session. Also, within this virtualized environment, other relevant objects such as boards, slides, presentation screens, etc. that are necessary for the collaborative session can also be included within the virtualized workspace.
Additionally, virtual space generation module 310 can enable users to be rendered in a manner that hides the differences within their respective native physical environments during a current collaborative session. Furthermore, virtual space generation module 310 can adjust the appearance of the virtual workspace such that users from various different remote environments can be rendered in a more visually pleasing fashion. For example, subjects of interest that are further away from their respective cameras can appear disproportionally smaller than those subjects that are closer to their respective cameras. As such, virtual space generation module 310 can adjust the appearance of subjects by utilizing the depth information about each subject participating in a collaborative session as well as other objects of interest. In this manner, virtual space generation module 310 can be configured to select a scale to render the appearance of users such that they can fit within the dimensions of a given display based on a pre-determined layout conformity metric.
Furthermore, virtual space generation module 310 can also ensure that the color, lighting, contrast, etc. of the virtual workspace forms a more visually pleasing combination with the appearances of each user. The colors of certain components within the virtual workspace (e.g., walls, backgrounds, furniture, etc.) can be adjusted in accordance with a pre-determined color conformity metric that measures the pleasantness of the composite renderings of the virtual workspace as well as the participants of a collaboration session. As such, maximization of the layout conformity metric and the color conformity metric can result in a number of different virtual environments. Accordingly, virtual space generation module 310 can generate an optimal virtual environment for a given task/collaboration session for any number of users. The results generated by virtual space generation module 310 can then be communicated to object-based virtual space composition module 400 for further processing.
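The depth-based scaling idea can be sketched as follows: each remote subject is scaled in proportion to its distance from its own camera so that participants render at comparable sizes. The reference depth and clamp limits below are assumed values, and the actual layout and color conformity metrics are not reproduced here.

```python
# Illustrative depth-based scaling: enlarge subjects in proportion to their
# distance from their own camera so all participants render at comparable sizes.
def subject_scale(median_depth_m, reference_depth_m=1.0, min_s=0.5, max_s=2.0):
    scale = median_depth_m / reference_depth_m
    return max(min_s, min(max_s, scale))

# Example: a participant seated about 2 m from the camera is rendered at
# roughly twice the scale of one seated about 1 m away.
print(subject_scale(2.0))   # -> 2.0
```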
FIG. 4 describes the functionality of object-based virtual space composition module 400 in greater detail in accordance with embodiments of the present invention. Collaboration application module 410 includes the functionality to receive local media data from local media data computing module 200, as well as any remote collaboration parameters (e.g., gesture data, type status indicator data) from remote media data computing module 300. Based on the data received, collaboration application module 410 can perform various functions that enable a user to interact with other participants during a current collaboration.
For instance, collaboration application module 410 includes the functionality to process gesture data received via user configuration detection module 240 and/or determine whether a local user or a remote user wishes to manipulate a particular object rendered on their respective display screens during a current collaboration session. In this manner, collaboration application module 410 can serve as a gesture control interface that enables participants of a collaborative session to freely manipulate digital media objects (e.g., slide presentations, documents, etc.) rendered on their respective display screens, without a specific user maintaining complete control over the entire collaboration session.
For example, collaboration application module 410 can be configured to perform in-air gesture detection and/or control collaboration objects. In this manner, collaboration application module 410 can translate detected hand gestures, such as swiping (e.g., swiping the hand to the right) and determine a corresponding action to be performed in response to the gesture detected (e.g., returning to a previous slide in response to detecting the hand swipe gesture). In one embodiment, collaboration application module 410 can be configured to detect touch input provided by a user via a touch sensitive display panel which expresses the user's desire to manipulate an object currently rendered on the user's local display screen. Manipulation of on-screen data can involve at least one participant and one digital media object. Additionally, collaboration application module 410 can be configured to recognize permissions set for a given collaborative session (e.g., which user is the owner of a particular collaborative process, which user is allowed to manipulate certain media objects, etc.). As such, collaboration application module 410 can enable multiple users to control the same object and/or different objects rendered on their local display screens.
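A sketch of dispatching detected gestures to collaboration actions while honoring per-object permissions is shown below; the gesture-to-action table and permission model are assumptions for illustration, with the right-swipe mapping following the "return to previous slide" example given above.

```python
# Illustrative gesture dispatch with per-object permissions.
GESTURE_ACTIONS = {
    "sweep_right": "previous_slide",   # matches the example above
    "sweep_left": "next_slide",        # assumed complementary gesture
}

def handle_gesture(user_id, gesture, target_object, permissions):
    """Return the action to broadcast to all participants, or None."""
    if gesture not in GESTURE_ACTIONS:
        return None
    if user_id not in permissions.get(target_object, set()):
        return None                    # user lacks rights on this media object
    return GESTURE_ACTIONS[gesture]

# Example: two users permitted to control the same slide deck.
permissions = {"slides": {"alice", "bob"}}
print(handle_gesture("alice", "sweep_right", "slides", permissions))  # -> previous_slide
```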
With the assistance of a local graphics system (e.g., optional graphics system 141), object-based virtual space rendering module 420 can render the virtual workspace display using data received from remote client devices and data generated locally (e.g., presentation data, context data, data generated by collaboration application module 410, etc.). In this manner, object-based virtual space rendering module 420 can feed virtual space parameters to a local graphics system for rendering a display to a user (e.g., via optional display device 111). As such, the resultant virtual workspace display generated by object-based virtual space rendering module 420 enables a local user to perceive the effect of sharing a common physical workspace with all remote users participating in a current collaborative session.
FIG. 5 depicts an exemplary multi-client, real-time communication in accordance with embodiments of the present invention. FIG. 5 depicts two client devices (e.g., client devices 101 and 101-1) exchanging information over a communication network during the performance of a collaborative session. Accordingly, as illustrated in FIG. 5, client devices 101 and 101-1 can each include a set of sensors 100 that are capable of capturing information from their respective local environments. In a manner described herein, local media data computing modules 200 and 200-1 can analyze their respective local data while remote media data computing modules 300 and 300-1 analyze the data received from each other. Accordingly, in a manner described herein, object-based virtual space composition modules 400 and 400-1 can combine their respective local and remote data for the final presentation to their respective local users for the duration of a collaborative session.
Exemplary Method for Performing Augmented Reality-Enabled Interactions and Collaborations
FIG. 6A is a flowchart of an exemplary computer-implemented method for generating local media data during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
At step 801, during a collaborative session with other remote client devices over a communication network, a local client device actively captures external data from within its localized physical environment using a set of sensors coupled to the device. Data gathered from the sensors include different forms of real-world information (e.g., RGB data, depth information, infrared reflection data, thermal data) collected in real-time.
At step 802, the object-of-interest module of the local client device performs segmentation procedures to detect an end-user and/or other objects of interest based on the data gathered during step 801. The object-of-interest module generates resultant output in the form of data maps which include the location of the detected end-user and/or objects.
At step 803, the context extraction module of the local client device extracts high-level data associated with the end-user (e.g., user mood, body temperature, facial expressions, posture, movement).
At step 804, the user configuration module of the local client device receives data map information from the object-of-interest module to determine the presence of a recognized gesture (e.g., hand gesture) performed by a detected user or object.
At step 805, data produced during step 803 and/or 804 is packaged as local media data and communicated to the object-based virtual space composition module of the local client device for further processing.
At step 806, the local media generated during step 805 is multiplexed and communicated to other remote client devices engaged within the current collaborative session over the communication network.
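Steps 801 through 806 can be summarized as a simple pipeline, sketched below with a hypothetical client interface; the method names are assumptions that mirror the modules described above, not an API defined by the disclosure.

```python
# Hypothetical sketch of steps 801-806; `client` is an assumed object whose
# methods mirror the modules described above (sensors 100, modules 210-250, 260).
def local_media_pipeline(client):
    raw = client.capture_sensor_data()                       # step 801: raw RGB/depth/IR/thermal maps
    enhanced = client.enhance_sensor_data(raw)                # alignment, denoising, hole filling
    objects = client.extract_objects_of_interest(enhanced)    # step 802: user/object masks
    context = client.extract_context(enhanced)                # step 803: mood, posture, engagement
    gesture = client.detect_user_configuration(objects)       # step 804: recognized gesture, if any
    local_media = {"objects": objects, "context": context, "gesture": gesture}  # step 805
    client.multiplex_and_send(local_media)                    # step 806: transmit to remote peers
    return local_media
```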
FIG. 6B is a flowchart of an exemplary computer-implemented method of generating configurational data for creating a virtual workspace display for a collaborative session performed over a communications network in accordance with embodiments of the present invention.
At step 901, during a collaborative session with other remote client devices over a communication network, the remote media data computing module of the local client device receives and de-multiplexes media data received from the remote client devices. Media data received from the remote client devices includes context data, collaborative data and/or sensor data (e.g., RGB data, depth information, infrared reflections, thermal data) gathered by the remote client devices in real-time.
At step 902, the artifact reduction module of the local client device performs segmentation correction procedures on data (e.g., RGB data) received during step 901.
At step 903, using data received during steps 901 and 902, the virtual space generation module of the local client device generates configurational data for creating a virtual workspace display for participants of the collaborative session. The data includes configurational data for creating a virtual room furnished with virtual furniture and/or other virtualized objects. Additionally, the virtual space generation module adjusts and/or scales RGB data received during step 902 in a manner designed to render each remote user in a consistent and uniform manner on the local client device, irrespective of each remote user's current physical surroundings and/or distance from the user's camera.
At step 904, data generated by the virtual space generation module during step 903 is communicated to the local client device's object-based virtual space composition module for further processing.
FIG. 6C is a flowchart of an exemplary computer-implemented method of contemporaneously rendering a virtual workspace display and detecting gesture input during a collaborative session performed over a communications network in accordance with embodiments of the present invention.
At step 1001, the object-based virtual space composition module of the local client device receives the local media data generated during step 805 and data generated by the virtual space generation module during step 904 to render a computer-generated virtual workspace display for each end-user participating in the collaboration session. Using their respective local graphics systems, the object-based virtual space rendering modules of each end-user's local display device render the virtual workspace in a manner that enables each participant in the session to perceive the effect of sharing a common physical workspace with each other.
At step 1002, the collaboration application modules of each client device engaged in the collaboration session wait to receive gesture data (e.g., in-air gestures, touch input) from their respective end-users via the user configuration detection module of each end-user's respective client device.
At step 1003, a collaboration application module receives gesture data from a respective user configuration detection module and determines whether the gesture recognized by the user configuration detection module is a command by an end-user to manipulate an object currently rendered on each participant's local display screen.
At step 1004, a determination is made by the collaboration application module as to whether the gesture data received during step 1003 is indicative of a user expressing a desire to manipulate an object currently rendered on her screen. If the gesture is determined by the collaboration application module as not being indicative of a user expressing a desire to manipulate an object currently rendered on her screen, then the collaboration application modules of each client device engaged in the collaboration session continue waiting for gesture data, as detailed in step 1002. If the gesture is determined by the collaboration application module as being indicative of a user expressing a desire to manipulate an object currently rendered on her screen, then the collaboration application enables the user to manipulate the object, as detailed in step 1005.
At step 1005, the gesture is determined by the collaboration application module as being indicative of a user expressing a desire to manipulate an object currently rendered on her screen, and therefore, the collaboration application enables the user to control and manipulate the object. The action performed on the object by the user is rendered on the display screens of all users participating in the collaborative session in real-time. Additionally, the system continues to wait for gesture data, as detailed in step 1002.
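Steps 1002 through 1005 amount to a wait-and-dispatch loop, sketched below; the queue-based structure and object interfaces are assumptions for illustration.

```python
# Hypothetical sketch of steps 1002-1005; `collaboration_app` and `renderer`
# are assumed objects standing in for modules 410 and 420.
import queue

def gesture_loop(gesture_queue, collaboration_app, renderer):
    while True:
        gesture = gesture_queue.get()                    # step 1002: block until gesture data arrives
        action = collaboration_app.interpret(gesture)    # step 1003: is it a manipulation command?
        if action is None:                               # step 1004: no -> keep waiting
            continue
        renderer.apply_to_all_participants(action)       # step 1005: render the manipulation for every user
```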
Exemplary Use Cases for Performing Augmented Reality-Enabled Interactions and Collaborations
FIG. 7A depicts an exemplary slide presentation performed during a collaborative session in accordance with embodiments of the present invention. FIG. 7A simultaneously presents both a local user's view and a remote user's view of a virtualized workspace display generated by embodiments of the present invention (e.g., virtualized workspace display 305) for the slide presentation. As illustrated in FIG. 7A, using a device similar to client device 101, subject 601 can participate in a collaborative session over a communication network with other remote participants using similar client devices. As such, the client devices can encode and transmit their respective local collaboration application data to the client devices of all remote users viewing the presentation (e.g., during Times 1 through 3) in the manner described herein. For example, this data can include, but is not limited to, the spatial positioning of slides presented, display scale data, virtual pointer position data, control state data, etc.
FIGS. 7B and 7C depict an exemplary telepresence session performed in accordance with embodiments of the present invention. With reference to FIG. 7B, subject 602 can be a user participating in a collaborative session with several remote users (e.g., via client device 101) over a communications network. As illustrated in FIG. 7B, subject 602 can participate in the session from physical location 603, which can be a hotel room, office room, etc. that is physically separated from other participants.
FIG. 7C depicts an exemplary virtualized workspace environment generated during a collaborative session in accordance with embodiments of the present invention. As depicted in FIG. 7C, embodiments of the present invention render virtualized workspace displays 305-1, 305-2, and 305-3 in a manner that enables each participant in the collaborative session (including subject 602) to perceive the effect of sharing a common physical workspace with each other. As such, virtualized workspace displays 305-1, 305-2, and 305-3 include a background or “virtual room” that can be furnished with virtual furniture and/or other virtualized objects. Additionally, virtualized workspace displays 305-1, 305-2, and 305-3 can be adjusted and/or scaled in a manner designed to render each remote user in a consistent and uniform manner, irrespective of each user's current physical surroundings and/or distance from the user's camera. Furthermore, embodiments of the present invention allow users to set up the layout of media objects in the shared virtual workspace depending on the type of interaction or collaboration. For instance, users can select a 2-dimensional shared conference space with a simple background for visual interaction or a 3-dimensional shared conference space for visual interaction with media object collaboration.
In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicant to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (16)

What is claimed is:
1. An apparatus comprising:
a sensor operable to capture a first set of sensor data concerning a local user's physical environment;
a receiver operable to receive a second set of sensor data over a communications network concerning a remote user's physical environment; a processor;
a computer readable storage medium storing computer-readable instructions that when executed by the processor cause the processor to detect and extract a subset of points associated with an input received from the remote user over the communication network or the local user; translate a detected configuration of the subset of points into a detected in-air gesture; send information about the detected in-air gesture to a processor; and
the processor operable to process the in-air gesture to manipulate an object currently rendered on a virtual workspace display which produces a sharing room to the local user and the remote user; wherein to manipulate an object further comprises to furnish the virtualized workspace with virtualized objects.
2. The apparatus as described in claim 1, wherein the first and second sets of sensor data comprise coordinate data gathered from a plurality of different sensors, and said sensor is further operable to unify said coordinate data into a common spatial coordinate system for generating said virtual workspace display.
3. The apparatus as described in claim 2, wherein said sensor is further operable to convert said coordinate data into a spatial coordinate system recognized by a specific sensor of said plurality of different sensors.
4. The apparatus as described in claim 2, wherein said coordinate data comprises: RGB data, depth information, infrared reflection data and thermal data.
5. The apparatus as described in claim 1, wherein said processor performs computer-implemented segmentation procedures on said first set of sensor data to detect said local user within said local user's physical environment.
6. The apparatus as described in claim 1, wherein said processor performs computer-implemented segmentation procedures on said first set of sensor data to detect an object of interest located within said local user's physical environment.
7. The apparatus as described in claim 1, wherein said virtual workspace display produces a perceived effect of said local user and a plurality of remote users sharing said same physical room.
8. The apparatus as described in claim 7, wherein said processor adjusts said virtual workspace display according to a pre-determined lay-out conformity metric to render said plurality of remote users in a uniform manner.
9. A method of interacting over a network, said method comprising:
capturing a first set of sensor data concerning a local user's physical environment;
receiving a second set of sensor data over said communications network concerning a remote user's physical environment; detecting and extracting a subset of points associated with an input received from the remote user over the communication network or the local user;
translating a detected configuration of the subset of points into a detected in-air gesture; and
processing the in-air gesture to manipulate an object currently rendered on a virtual workspace display which produces a sharing room to the local user and the remote user; wherein to manipulate an object further comprises to furnish the virtualized workspace with virtualized objects.
10. The method as described in claim 9, wherein the first and second sets of sensor data comprise coordinate data gathered from a plurality of different sensors; and
said capturing further comprises unifying said coordinate data into a common spatial coordinate system for generating said virtual workspace display.
11. The method as described in claim 10, wherein said capturing further comprises converting said coordinate data into a spatial coordinate system recognized by a specific sensor of said plurality of different sensors.
12. The method as described in claim 10, wherein said coordinate data comprises: RGB data, depth information, infrared reflection data and thermal data.
13. The method as described in claim 9, wherein said capturing further comprises performing computer-implemented segmentation procedures on said first set of sensor data to detect said local user within said local user's physical environment.
14. The method as described in claim 9, wherein said capturing further comprises performing computer-implemented segmentation procedures on said first set of sensor data to detect an object of interest located within said local user's physical environment.
15. The method as described in claim 9, wherein said virtual workspace display produces a perceived effect of said local user and a plurality of remote users sharing said same physical room.
16. The method as described in claim 15, wherein said rendering further comprises adjusting said virtual workspace display according to a pre-determined lay-out conformity metric to render said plurality of remote users in a uniform manner.
US14/231,375 2014-03-31 2014-03-31 System and method for augmented reality-enabled interactions and collaboration Active 2034-06-04 US9270943B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/231,375 US9270943B2 (en) 2014-03-31 2014-03-31 System and method for augmented reality-enabled interactions and collaboration
EP15773862.6A EP3055994A4 (en) 2014-03-31 2015-03-13 System and method for augmented reality-enabled interactions and collaboration
PCT/CN2015/074237 WO2015149616A1 (en) 2014-03-31 2015-03-13 System and method for augmented reality-enabled interactions and collaboration
CN201580009875.0A CN106165404B (en) 2014-03-31 2015-03-13 The system and method for supporting interaction and the cooperation of augmented reality
EP20199890.3A EP3780590A1 (en) 2014-03-31 2015-03-13 A system and method for augmented reality-enabled interactions and collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/231,375 US9270943B2 (en) 2014-03-31 2014-03-31 System and method for augmented reality-enabled interactions and collaboration

Publications (2)

Publication Number Publication Date
US20150281649A1 US20150281649A1 (en) 2015-10-01
US9270943B2 true US9270943B2 (en) 2016-02-23

Family

ID=54192217

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/231,375 Active 2034-06-04 US9270943B2 (en) 2014-03-31 2014-03-31 System and method for augmented reality-enabled interactions and collaboration

Country Status (4)

Country Link
US (1) US9270943B2 (en)
EP (2) EP3055994A4 (en)
CN (1) CN106165404B (en)
WO (1) WO2015149616A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10701318B2 (en) 2015-08-14 2020-06-30 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
WO2017172528A1 (en) 2016-04-01 2017-10-05 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
US10499997B2 (en) 2017-01-03 2019-12-10 Mako Surgical Corp. Systems and methods for surgical navigation
WO2018226508A1 (en) * 2017-06-09 2018-12-13 Pcms Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
US11467992B1 (en) 2020-09-24 2022-10-11 Amazon Technologies, Inc. Memory access operation in distributed computing system
US11354258B1 (en) 2020-09-30 2022-06-07 Amazon Technologies, Inc. Control plane operation at distributed computing system
US20240007590A1 (en) * 2020-09-30 2024-01-04 Beijing Zitiao Network Technology Co., Ltd. Image processing method and apparatus, and electronic device, and computer readable medium
WO2022120255A1 (en) * 2020-12-04 2022-06-09 VR-EDU, Inc. Virtual information board for collaborative information sharing
US20220264055A1 (en) * 2021-02-12 2022-08-18 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V Video Conference Apparatus, Video Conference Method and Computer Program Using a Spatial Virtual Reality Environment
CN113868455A (en) * 2021-10-21 2021-12-31 联想(北京)有限公司 Information processing method, information processing device, electronic equipment and storage medium
WO2023191773A1 (en) * 2022-03-29 2023-10-05 Hewlett-Packard Development Company, L.P. Interactive regions of audiovisual signals
US11825237B1 (en) * 2022-05-27 2023-11-21 Motorola Mobility Llc Segmented video preview controls by remote participants in a video communication session
US12019943B2 (en) 2022-05-27 2024-06-25 Motorola Mobility Llc Function based selective segmented video feed from a transmitting device to different participants on a video communication session

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215498B1 (en) * 1998-09-10 2001-04-10 Lionhearth Technologies, Inc. Virtual command post
US20040189701A1 (en) * 2003-03-25 2004-09-30 Badt Sig Harold System and method for facilitating interaction between an individual present at a physical location and a telecommuter
US9007427B2 (en) * 2011-12-14 2015-04-14 Verizon Patent And Licensing Inc. Method and system for providing virtual conferencing
US9077846B2 (en) * 2012-02-06 2015-07-07 Microsoft Technology Licensing, Llc Integrated interactive space

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030067536A1 (en) 2001-10-04 2003-04-10 National Research Council Of Canada Method and system for stereo videoconferencing
US7119829B2 (en) * 2003-07-31 2006-10-10 Dreamworks Animation Llc Virtual conference room
US20060119572A1 (en) 2004-10-25 2006-06-08 Jaron Lanier Movable audio/video communication interface system
US20080180519A1 (en) 2007-01-31 2008-07-31 Cok Ronald S Presentation control system
US20090033737A1 (en) * 2007-08-02 2009-02-05 Stuart Goose Method and System for Video Conferencing in a Virtual Environment
CN102263772A (en) 2010-05-28 2011-11-30 经典时空科技(北京)有限公司 Virtual conference system based on three-dimensional technology
US20130057642A1 (en) * 2011-09-07 2013-03-07 Cisco Technology, Inc. Video conferencing system, method, and computer program storage device
US20130265382A1 (en) 2012-04-09 2013-10-10 Futurewei Technologies, Inc. Visual Conditioning for Augmented-Reality-Assisted Video Conferencing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220028168A1 (en) * 2020-07-21 2022-01-27 International Business Machines Corporation Mobile device based vr control
US11393171B2 (en) * 2020-07-21 2022-07-19 International Business Machines Corporation Mobile device based VR content control

Also Published As

Publication number Publication date
EP3055994A4 (en) 2016-11-16
CN106165404A (en) 2016-11-23
EP3780590A1 (en) 2021-02-17
US20150281649A1 (en) 2015-10-01
EP3055994A1 (en) 2016-08-17
CN106165404B (en) 2019-10-22
WO2015149616A1 (en) 2015-10-08

Similar Documents

Publication Publication Date Title
US9270943B2 (en) System and method for augmented reality-enabled interactions and collaboration
US10554921B1 (en) Gaze-correct video conferencing systems and methods
US11100664B2 (en) Depth-aware photo editing
US20230206569A1 (en) Augmented reality conferencing system and method
US11023093B2 (en) Human-computer interface for computationally efficient placement and sizing of virtual objects in a three-dimensional representation of a real-world environment
US10122969B1 (en) Video capture systems and methods
EP3111636B1 (en) Telepresence experience
US8125510B2 (en) Remote workspace sharing
US20230206531A1 (en) Avatar display device, avatar generating device, and program
CN112243583B (en) Multi-endpoint mixed reality conference
US9979921B2 (en) Systems and methods for providing real-time composite video from multiple source devices
US11048464B2 (en) Synchronization and streaming of workspace contents with audio for collaborative virtual, augmented, and mixed reality (xR) applications
US10748341B2 (en) Terminal device, system, program and method for compositing a real-space image of a player into a virtual space
CN110168630A (en) Enhance video reality
JP6090917B2 (en) Subject image extraction and synthesis apparatus and method
KR20170014818A (en) System and method for multi-party video conferencing, and client apparatus for executing the same
US11887249B2 (en) Systems and methods for displaying stereoscopic rendered image data captured from multiple perspectives
KR102728245B1 (en) Avatar display device, avatar creation device and program
Van Broeck et al. Real-time 3D video communication in 3D virtual worlds: Technical realization of a new communication concept
WO2023075810A1 (en) System and method for extracting, transplanting live images for streaming blended, hyper-realistic reality
WO2024019713A1 (en) Copresence system
KR101540110B1 (en) System, method and computer-readable recording media for eye contact among users

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUTUREWEI TECHNOLOGIES, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EHMANN, JANA;ZHOU, LIANG;GULERYUZ, ONUR G.;AND OTHERS;SIGNING DATES FROM 20140331 TO 20140402;REEL/FRAME:032589/0885

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8