WO2024061462A1 - Rendering user avatar and digital object in extended reality based on user interactions with physical object - Google Patents
Rendering user avatar and digital object in extended reality based on user interactions with physical object
- Publication number
- WO2024061462A1 (PCT/EP2022/076308)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- physical object
- participant
- environment
- physical
- avatar
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
Definitions
- the present disclosure relates to rendering extended reality (XR) environments and associated XR rendering devices, and more particularly to rendering avatars in immersive XR environments displayed on XR participant devices.
- Immersive extended reality (XR) environments have been developed to enable a myriad of different types of user experiences for gaming, on-line meetings, co-creation of products, etc.
- Immersive XR environments can include virtual reality (VR) environments where human users see computer generated graphical renderings and can include augmented reality (AR) environments where users see a combination of computer generated graphical renderings overlaid on a view of the physical real-world through, e.g., see-through display screens.
- Example XR environment rendering devices include, without limitation, XR environment servers, XR headsets, gaming consoles, smartphones running an XR application, and tablet/laptop/desktop computers running an XR application.
- Oculus Quest is an example XR device and Google Glass is an example AR device.
- XR meeting applications are tools for native digital meetings and also useful as a thinking and planning space for oneself as well as having online meetings in a digital environment.
- Some XR meeting applications support AR devices, browsers, and VR devices.
- a participant using a browser may join via a desktop, tablet-PC or smartphone and share their view using a front-facing camera or a webcam.
- some XR meeting solutions have mobile application versions, e.g., Android and iOS, which allow a user to navigate in the virtual space on the screen or activate an augmented reality mode to display the meeting in their own surroundings.
- the XR meeting solutions introduce new features to online meetings that allow for new ways to share and create content etc.
- Today’s commonly and commercially available XR devices typically include a head-mounted display (HMD) and a pair of hand controllers, sometimes complemented in more advanced solutions by “foot controllers”.
- Immersive XR environments such as gaming environments and meeting environments, are often configured to display computer generated avatars which represent poses of human users in the immersive XR environments.
- a user may select and customize an avatar, e.g., its gender, clothing, hair style, etc., to represent that user for viewing by other users participating in the immersive XR environment.
- users can be unexpectedly disappointed with how their avatar is viewed by other participants as the user's avatar moves through an environment and/or transitions between different poses, such as standing, sitting, squatting, and lying.
- Some embodiments disclosed herein are directed to an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
- the XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations. Operations include determining the participant is interacting with a physical object in the participant’s physical environment. Operations also include identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with. Operations also include determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. Operations also include rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
- Some other related embodiments are directed to a corresponding method by an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
- the method includes determining the participant is interacting with a physical object in the participant’s physical environment.
- the method also includes identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with.
- the method also includes determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment.
- the method also includes rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
- Potential advantages of various embodiments include providing additional data to an XR application that may use physical object data as input for typical motion patterns/ranges of body parts in an immersive environment. Additionally, embodiments may provide and share textures of physical-to-digital object renderings with other VR meeting users, with user-managed constraints on whom, when, in which context, and for how long a digital object texture can be loaned for others’ renderings. Additionally, embodiments may reduce the computational resources consumed by rendering, because using the characteristics of identified physical objects can reduce the range of motion of body parts to be rendered and/or the range of motion of the physical object to be rendered.
- Figure 1 illustrates an XR system that includes a plurality of participant devices that communicate through networks with an XR rendering device to operate in accordance with some embodiments of the present disclosure
- Figure 2 illustrates an immersive XR environment with participants' avatars and a shared virtual presentation screen that are rendered with various poses within the XR environment, in accordance with some embodiments of the present disclosure
- Figure 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure
- Figure 4 illustrates an example of various operations which are performed by a user device and XR rendering device based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure.
- Figures 5 through 10 are flowcharts of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure
- Figure 11 is a block diagram of components of an XR rendering device that are configured to operate in accordance with some embodiments of the present disclosure.
- Figure 1 illustrates an XR system that includes a plurality of participant devices 110a-d that communicate through networks 120 with an XR rendering device 100 to operate in accordance with some embodiments of the present disclosure
- the XR rendering device 100 is configured to generate a graphical representation of an immersive XR environment (also called an "XR environment" for brevity) which is viewable from various perspectives of virtual poses of human participants in the XR environment through display screens of the various participant devices 110a-d.
- the illustrated devices include VR headsets 110a-c which can be worn by participants to view and navigate through the XR environment, and a participant electronic device 110d, such as a personal computer, laptop, tablet, smartphone, smart ring, or smart fabrics, which can be operated by a participant to view and navigate through the XR environment.
- the participants have associated avatars which are rendered in the XR environment to represent poses (e.g., location, body assembly orientation, etc.) of the participants relative to a coordinate system of the XR environment.
- the XR rendering device 100 may include a rendering module 102 that performs operations disclosed herein for determining a participant avatar posture based on the participant’s interactions with a physical object in the participant’s physical environment. The XR rendering device 100 then renders the participant avatar with the determined participant avatar posture for viewing by other participants through their respective devices, e.g., 110b-110d.
- the XR rendering device 100 is illustrated in Figure 1 as being a centralized network computing server separate from one or more of the participant devices, in some other embodiments the XR rendering device 100 is implemented as a component of one or more of the participant devices.
- one of the participant devices may be configured to perform operations of the XR rendering device in a centralized manner controlling rendering for or by other ones of the participant devices.
- each of the participant devices may be configured to perform at least some of the operations of the XR rendering device in a distributed decentralized manner with coordinated communications being performed between the distributed XR rendering devices (e.g., between software instances of XR rendering devices).
- FIG. 2 illustrates an immersive XR environment with avatars 200a-f that are graphically rendered with poses (e.g., at locations and with orientations) representing the present fields of view (FOVs) of associated human participants in the XR environment.
- streaming video from a camera of the participant device 110d is displayed in a virtual screen 230 instead of rendering an avatar to represent the participant.
- a shared virtual presentation screen 210 is also graphically rendered at a location within the XR environment, and can display pictures and/or video that are being presented for viewing by the participants in the XR environment.
- a virtual object 204 is graphically rendered in the XR environment.
- the virtual object 204 may be graphically rendered in the XR environment with any shape or size, and can represent any type of object (e.g., table, chair, object on table, door, window, television or computer, virtual appliance, animated vehicle, animated animal, etc.).
- the virtual object 204 may represent a physical object in the XR environment and may be animated to track movement and pose of the physical object within the XR environment responsive to movement input or physical interaction from the human participant.
- an XR rendering device can become constrained by its processing bandwidth limitations when attempting to simultaneously render in real-time each of the participants' avatars, the virtual screen, the shared virtual presentation screen 210, and the virtual objects 204 including room surfaces and other parts of the XR environment.
- FIG. 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure.
- each of the participants can define a participant avatar posture based on the participant’s interactions with a physical object in the participant’s physical environment.
- the participant avatar posture based on the participant’s interactions with the physical object may be stored as an attribute of the physical object on the participant's device.
- the participant avatar posture is used by the rendering circuit 300 of the XR rendering device 100 for rendering the respective avatars.
- an XR rendering device 100 of a first participant can define a participant avatar posture which is provided 310a to the first participant device, with a request that the participant avatar posture be applied to the avatar associated with the first participant for rendering.
- similarly, an XR rendering device 100 of a second participant can define a participant avatar posture which is provided 310b to the second participant device, with a request that the participant avatar posture be applied to the avatar associated with the second participant for rendering.
- other participants can similarly define participant avatar postures which are provided to the rendering devices 100 to control rendering related to the respective other participants.
- alternatively or additionally, the XR rendering device 100 can use the participant avatar postures that have been defined to provide participant avatar postures for other participants 314a, 314b, etc., which control the rendering operations performed by the respective participant devices.
- Various embodiments of the present disclosure describe determining, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings. Examples of physical objects which users may interact with include but are not limited to chairs, sofas, mugs, writing utensils, and electronic devices.
- the system should be able to identify the object, its characteristics, its main use, and its impact on the user's body posture, limb motion range, and height in relation to other users in the application, which is then displayed to the users in an immersive VR application.
- Various embodiments identify physical objects that a user in VR interacts with by means of image processing, object recognition and sensor inputs. Additionally, the various embodiments utilize data associated with objects to determine associated typical motion patterns of body parts, such as the upper body, lower body, limbs, hands, or fingers. Potential advantages of various embodiments include providing additional data to an XR application that may use physical object data as input for typical motion patterns/ranges of body parts in an immersive environment. Additionally, embodiments may provide and share textures of physical-to-digital object renderings with other VR meeting users, with user-managed constraints on whom, when, in which context, and for how long a digital object texture can be loaned for others’ renderings. Additionally, embodiments may reduce the computational resources consumed by rendering, because using the characteristics of identified physical objects can reduce the range of motion of body parts to be rendered and/or the range of motion of the physical object to be rendered.
- Embodiments describe a solution/method that determines, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings.
- Figure 5 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- an extended reality rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
- the XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations.
- Operations include determining 500 the participant is interacting with a physical object in the participant’s physical environment.
- Operations also include identifying 502 characteristics of the physical object in the participant’s physical environment which the participant is interacting with.
- Operations also include determining 504 a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment.
- Operations also include rendering 506 the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
- Figure 6 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- the operations further include to determine 600 a body part of the participant that is interacting with the physical object.
- the operations further include to determine 602 a predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
- An example scenario using the operations of Figure 6 can include the XR rendering device 100 determining 600 that the participant’s hand has picked up a coffee mug from a table.
- the determination 600 may be performed based on processing images from a front facing camera of the participant device 110a and/or a point cloud from a lidar sensor of the participant device 110a to identify locations of the hand and the coffee mug and the associated physical interaction of the hand holding the coffee mug.
- the XR rendering device 100 responsively determines 602 a predicted motion pattern of the hand holding the coffee mug.
- the predicted motion pattern may define a pathway along which the hand and coffee mug will travel, such as along an arc between the previous location of the coffee mug resting on the table and a mouth location on the participant’s avatar, and/or may define one or more limits on the range of predicted motion of the hand and coffee mug.
- the predicted motion pattern may optionally be scaled based on the participant’s defined attributes, such as gender, height, weight, age, and more particular anatomical measurements and/or other characteristics, e.g., wheelchair usage, etc.
- the XR rendering device 100 can then use the predicted motion pattern to render the avatar of the participant interacting with a virtual representation of the coffee mug.
- the XR rendering device 100 may render motion of the arm and the virtual representation of the coffee mug in a manner that is constrained by the predicted motion pattern so as to avoid rendering movements that would appear unnatural to other participants, e.g., which may have otherwise occurred if processing of a time sequence of images and/or point cloud data indicates erratic (e.g., jittery) movements that would have resulted in rendering erratic (unnatural) avatar movements.
- the physical object rendered in the immersive XR environment may be scaled for physical user attributes such as gender, height, age, weight, and anatomy attributes if available.
- the VR system may then display typical motions related to the physical object. The object motion pattern is adapted to generate a corresponding avatar motion pattern which controls the VR system's rendering of an avatar interacting with the VR digital representation of the object (e.g., the avatar's arm is moved according to the avatar motion pattern (hand, arm, torso, and head) so that the cup moves according to the object motion pattern).
- These operations may be selectively initiated responsive to the VR system identifying a known object that the person is interacting with, and which has a corresponding object and associated motion pattern in the database accessed by the VR system.
- the operations may be triggered by the user's eye gaze and/or observed motion correlating to a real-world object.
- the operation to render 506 the avatar of the participant interacting with the virtual object is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
- Figure 7 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- the operations further include initiating 700 a request for the user to input a typical motion pattern of the body part interacting with the physical object.
- the operations further include determining 702 the predicted motion pattern of the body part to be associated with the identified characteristics of the physical object, based on the user input of the typical motion pattern of the body part interacting with the physical object.
- the operations further include initiating the request for the user to input a typical motion pattern of the body part interacting with the physical object responsive to determining that no predicted motion pattern of the body part is associated with the identified characteristics of the physical object.
- Figure 4 illustrates an example of various operations which are performed by a user device 110 and an XR rendering device (server) 100 based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure.
- the user device 110 determines 400 that a user is engaged in an XR meeting application and is using an XR rendering device which is capable of capturing camera images and/or lidar point cloud data of a physical environment.
- the user device 110 displays 402 a rendering of the user’s avatar in the XR meeting, which may be obtained from the XR rendering server 100.
- a determination 404 and 406 is made, by the user device 110 or the XR rendering server 100, as to whether the user is interacting with a physical object, e.g., coffee cup, chair, handle of a door, etc. If the determination is “yes”, then the XR rendering device 100 operates to identify 408 characteristics of the physical object, e.g., type of object (cup, chair, door, etc.), physical size, shape, texture, color/pattern, etc.
- the XR rendering device 100 determines 410 the participant’s avatar posture based on the location of the physical participant relative to the physical object, and based on the characteristics of the physical object and characteristics of the participant, e.g., height, weight, gender, age, etc.
- the XR rendering device 100 determines 412 a body part of the participant, e.g., hand, which is interacting with the physical object, e.g., coffee cup, and the predicted motion pattern of the body part based on characteristics that have been associated with the physical object, such as the predicted motion of the hand and coffee cup being moved from a resting location on a table to the mouth of the participant’s avatar.
- the XR rendering device 100 then renders 414 graphical representations of the participant’s avatar interacting with a virtual object representation of the physical object, e.g., graphical representation of the coffee cup.
- the XR rendering device 100 communicates the rendered graphical representations to the user device 110, which responsively displays 416 the avatar interacting with the virtual object.
- a device or cloud server application may furthermore select digital object attributes according to resemblance, history, and/or context.
- the operation to identify 502 characteristics of the physical object includes processing image data from a camera arranged to capture images of the physical object, to identify at least one of: size of the physical object, form of the physical object, and texture of the physical object.
- the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed to render the avatar with a form, size, and/or texture defined based on the size of the physical object, the form of the physical object, and/or the texture of the physical object.
- the operations may use a recorded (stored) history of a previously used object, such as a recorded historical indication of previously used mugs, chairs, and shoes.
- the operations can record digital attributes of a previously used digital object, such as a chair, to be re-used in a future digital meeting session responsive to determining the user body posture indicates use of “same physical chair.”
- the user may have an opportunity to select among a set of previously imported (stored) objects of similar type, e.g., satisfying a defined similarity rule.
- Figure 8 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- the operations further include to compare 800 the processed image data of the physical object to historical virtual objects in a historical virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects with which the user has previously interacted in the immersive XR environment.
- the operations further include to select 802 one of the historical virtual objects in the historical virtual object repository based on similarity between the image data of the physical object and the one of the historical virtual objects.
- the operations further include to render 804 the virtual object in the immersive XR environment based on the selected one of the historical virtual objects.
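The following is a minimal, illustrative sketch of the matching step described in the operations above: a feature vector derived from the processed image data of the physical object is compared against stored historical virtual objects and the closest one is reused for rendering. The feature layout, distance metric, and threshold are assumptions made for this example and are not taken from the disclosure.

```python
# Sketch of similarity-based selection from a historical virtual object
# repository.  Features and threshold are invented for illustration.
import numpy as np


def select_historical_object(object_features: np.ndarray,
                             repository: dict,
                             max_distance: float = 0.5):
    """Return the key of the most similar stored object, or None if none is close."""
    best_key, best_dist = None, float("inf")
    for key, stored_features in repository.items():
        dist = float(np.linalg.norm(object_features - stored_features))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= max_distance else None


if __name__ == "__main__":
    # Features here are just (width_m, height_m, dominant_hue) for illustration.
    repo = {
        "blue mug": np.array([0.08, 0.10, 0.60]),
        "office chair": np.array([0.55, 0.95, 0.10]),
    }
    observed = np.array([0.09, 0.11, 0.58])
    print(select_historical_object(observed, repo))   # expected: "blue mug"
```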
- context of a digital meeting may be used, such as the digital meeting being private, for leisure, or for business purposes. For example, determining a type of sitting object and associated body postures expected in a digital business meeting may be separate from an expected and accepted sitting object and associated body postures in a private after work digital meeting.
- the operations further include to determine at least one of the following context parameters: XR rendering device location data; time data; date data; characteristic of a background noise component; and sensor data indicating a sensed type of physical object or environmental parameter.
- the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the context parameters.
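As a rough illustration of how such context parameters might influence rendering, the sketch below selects a seating rendering from a meeting type and the local time. The context rules and names are invented for this example and are not taken from the disclosure.

```python
# Sketch of using context parameters (meeting type, time of day) to choose
# which sitting-object rendering and posture is acceptable.
from datetime import datetime


def select_seating_rendering(meeting_type: str, local_time: datetime) -> str:
    """Pick a rendering style from simple, assumed context rules."""
    if meeting_type == "business":
        return "office_chair_upright"
    if meeting_type == "private" and local_time.hour >= 17:
        return "sofa_relaxed"
    return "generic_chair"


if __name__ == "__main__":
    print(select_seating_rendering("business", datetime(2022, 9, 21, 10, 0)))
    print(select_seating_rendering("private", datetime(2022, 9, 21, 19, 30)))
```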
- operations by a participating XR device, a managing cloud server, or XR meeting application may select digital object attributes according to resemblance, history, and/or context as discussed above.
- the identification, selection, and rendering application may provide to a database an instance of digital object attributes, such as physical form factor or textures, associated with the associated digital object.
- a second user may also search or match for a second digital object among the set that corresponds to the first user's digital object and attributes.
- the first user may in these operations further associate the provided digital object with a lifetime or persistence value indicating how long the digital object and attributes may be accessible to other participants.
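A minimal sketch of how a provided digital object, its lifetime/persistence value, and an audience constraint might be represented and checked before another participant adopts it; the field names and the rule itself are assumptions for illustration only.

```python
# Sketch of a shared digital-object entry with a persistence value and an
# audience constraint; not an actual data model from the disclosure.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class SharedObjectEntry:
    owner_id: str
    texture_id: str
    allowed_groups: tuple          # e.g. ("family", "friends")
    expires_at: datetime           # lifetime / persistence value


def may_adopt(entry: SharedObjectEntry, requester_group: str, now: datetime) -> bool:
    """Check whether another participant may reuse the shared object texture."""
    return requester_group in entry.allowed_groups and now < entry.expires_at


if __name__ == "__main__":
    entry = SharedObjectEntry(
        owner_id="participant-1",
        texture_id="mug-texture-42",
        allowed_groups=("family", "friends"),
        expires_at=datetime(2022, 9, 21, 12, 0) + timedelta(hours=1),
    )
    print(may_adopt(entry, "friends", datetime(2022, 9, 21, 12, 30)))   # True
    print(may_adopt(entry, "business", datetime(2022, 9, 21, 12, 30)))  # False
```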
- Figure 9 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- the operations further include to compare 900 the processed image data of the physical object to predefined virtual objects in a predefined virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects which are predefined in the immersive XR environment.
- the operations further include to select 902 one of the predefined virtual objects in the predefined virtual object repository based on similarity between the image data of the physical object and the one of the predefined virtual objects.
- the operations further include to render 904 the virtual object in the immersive XR environment based on the selected one of the predefined virtual objects.
- Figure 10 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
- the operations further include to obtain 1000 from another XR rendering device a predicted motion pattern of a body part that is defined as being associated with the identified characteristics of the physical object.
- the operation to render the avatar of the participant interacting with the virtual object is performed based on the determined participant avatar posture and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
- the identification/selection/rendering application may, within the set of digital objects available for rendering in the digital meeting, provide to a database (or similar) an instance of digital object attributes, such as physical form factor and textures, associated with the user's “own digital object.”
- a second user, in the step of finding a matching digital object from the set of digital objects available for rendering in the digital meeting, may now also search for/match the corresponding second digital object among the set, which also includes the first user's digital object and attributes.
- the first user may in these aspects further associate the provided digital object with a lifetime or persistence value indicating how long the object may be accessible for other participants to adopt; for example, “duration of ongoing meeting,” “for meetings where first user is present,” “today,” etc.
- the first user may also specify which other users, and in which context, may use the “own object” in their XR renderings, for example “only for family and friends” or “only for business colleagues but not external meeting participants.”
- Example XR Rendering Device Configuration.
- FIG 11 is a block diagram of components of an XR rendering device 100 that are configured to operate in accordance with some embodiments of the present disclosure.
- the XR rendering device 100 can include at least one processor circuit 1100 (processor), at least one memory 1110 (memory), at least one network interface 1120 (network interface), and a display device 1130.
- the processor 1100 is operationally connected to these various components.
- the memory 1110 stores executable instructions 1112 that are executed by the processor 1100 to perform operations.
- the processor 1100 may include one or more data processing circuits, such as a general purpose and/or special purpose processor (e.g., microprocessor and/or digital signal processor), which may be collocated or distributed across one or more data networks.
- the processor 1100 is configured to execute the instructions 1112 in the memory 1110, described below as a computer readable medium, to perform some or all of the operations and methods for one or more of the embodiments disclosed herein for an XR rendering device.
- the XR rendering device may be separate from and communicatively connected to the participant devices, or may be at least partially integrated within one or more of the participant devices.
- the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof.
- the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item.
- the common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation.
- Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits.
- These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
An extended reality rendering device renders an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars, representing the group of participants, which are rendered in the immersive XR environment. The device performs operations. Operations include determining the participant is interacting with a physical object in the participant's physical environment. Operations also include identifying characteristics of the physical object in the participant's physical environment which the participant is interacting with. Operations also include determining a participant avatar posture based on the participant's interactions with the physical object in the participant's physical environment. Operations also include rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
Description
RENDERING USER AVATAR AND DIGITAL OBJECT IN EXTENDED REALITY BASED ON USER INTERACTIONS WITH PHYSICAL OBJECT
TECHNICAL FIELD
[0001] The present disclosure relates to rendering extended reality (XR) environments and associated XR rendering devices, and more particularly to rendering avatars in immersive XR environments displayed on XR participant devices.
BACKGROUND
[0002] Immersive extended reality (XR) environments have been developed to enable a myriad of different types of user experiences for gaming, on-line meetings, co-creation of products, etc. Immersive XR environments (also referred to as "XR environments") can include virtual reality (VR) environments where human users see computer generated graphical renderings and can include augmented reality (AR) environments where users see a combination of computer generated graphical renderings overlaid on a view of the physical real-world through, e.g., see-through display screens.
[0003] Example XR environment rendering devices include, without limitation, XR environment servers, XR headsets, gaming consoles, smartphones running an XR application, and tablet/laptop/desktop computers running an XR application. Oculus Quest is an example XR device and Google Glass is an example AR device.
[0004] XR meeting applications are tools for native digital meetings and are also useful as a thinking and planning space for oneself as well as for having online meetings in a digital environment. Some XR meeting applications support AR devices, browsers, and VR devices. A participant using a browser may join via a desktop, tablet-PC or smartphone and share their view using a front-facing camera or a webcam. Also, some XR meeting solutions have mobile application versions, e.g., Android and iOS, which allow a user to navigate in the virtual space on the screen or activate an augmented reality mode to display the meeting in their own surroundings. The XR meeting solutions introduce new features to online meetings that allow for new ways to share and create content, etc. Today’s commonly and commercially available XR devices typically include a head-mounted display (HMD) and a pair of hand controllers, sometimes complemented in more advanced solutions by “foot controllers”.
[0005] Immersive XR environments, such as gaming environments and meeting environments, are often configured to display computer generated avatars which represent poses of human users in the immersive XR environments. A user may select and customize
an avatar, e.g., its gender, clothing, hair style, etc., to represent that user for viewing by other users participating in the immersive XR environment. Although some user customization of avatars is provided, users can be unexpectedly disappointed with how their avatar is viewed by other participants as the user's avatar moves through an environment and/or transitions between different poses, such as standing, sitting, squatting, and lying.
SUMMARY
[0006] Some embodiments disclosed herein are directed to an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment. The XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations. Operations include determining the participant is interacting with a physical object in the participant’s physical environment. Operations also include identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with. Operations also include determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. Operations also include rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
[0007] Some other related embodiments are directed to a corresponding method by an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment. The method includes determining the participant is interacting with a physical object in the participant’s physical environment. The method also includes identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with. The method also includes determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. The method also includes rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
[0008] Potential advantages of various embodiments include providing additional data to an XR application that may use physical object data as input for typical motion patterns/ranges of body parts in an immersive environment. Additionally, embodiments may provide and share textures of physical-to-digital object renderings with other VR meeting users, with user-managed constraints on whom, when, in which context, and for how long a digital object texture can be loaned for others’ renderings. Additionally, embodiments may reduce the computational resources consumed by rendering, because using the characteristics of identified physical objects can reduce the range of motion of body parts to be rendered and/or the range of motion of the physical object to be rendered.
[0009] Other XR rendering devices, methods, and computer program products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional XR rendering devices, methods, and computer program products be included within this description and protected by the accompanying claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying drawings. In the drawings:
[0011] Figure 1 illustrates an XR system that includes a plurality of participant devices that communicate through networks with an XR rendering device to operate in accordance with some embodiments of the present disclosure;
[0012] Figure 2 illustrates an immersive XR environment with participants' avatars and a shared virtual presentation screen that are rendered with various poses within the XR environment, in accordance with some embodiments of the present disclosure;
[0013] Figure 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure;
[0014] Figure 4 illustrates an example of various operations which are performed by a user device and XR rendering device based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure.
[0015] Figures 5 through 10 are flowcharts of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure; and
[0016] Figure 11 is a block diagram of components of an XR rendering device that are configured to operate in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION
[0017] Inventive concepts will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of various present inventive concepts to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present/used in another embodiment.
[0018] Figure 1 illustrates an XR system that includes a plurality of participant devices 110a-d that communicate through networks 120 with an XR rendering device 100 to operate in accordance with some embodiments of the present disclosure. The XR rendering device 100 is configured to generate a graphical representation of an immersive XR environment (also called an "XR environment" for brevity) which is viewable from various perspectives of virtual poses of human participants in the XR environment through display screens of the various participant devices 110a-d. For example, the illustrated devices include VR headsets 110a-c which can be worn by participants to view and navigate through the XR environment, and a participant electronic device 110d, such as a personal computer, laptop, tablet, smartphone, smart ring, or smart fabrics, which can be operated by a participant to view and navigate through the XR environment. The participants have associated avatars which are rendered in the XR environment to represent poses (e.g., location, body assembly orientation, etc.) of the participants relative to a coordinate system of the XR environment.
[0019] The XR rendering device 100 may include a rendering module 102 that performs operations disclosed herein for determining a participant avatar posture based on the participant’s interactions with a physical object in the participant’s physical environment. The XR rendering device 100 then renders the participant avatar with the determined participant avatar posture for viewing by other participants through their respective devices, e.g., 110b-110d.
[0020] Although the XR rendering device 100 is illustrated in Figure 1 as being a centralized network computing server separate from one or more of the participant devices, in some other embodiments the XR rendering device 100 is implemented as a component of one or more of the participant devices. For example, one of the participant devices may be configured to perform operations of the XR rendering device in a centralized manner
controlling rendering for or by other ones of the participant devices. Alternatively, each of the participant devices may be configured to perform at least some of the operations of the XR rendering device in a distributed decentralized manner with coordinated communications being performed between the distributed XR rendering devices (e.g., between software instances of XR rendering devices).
[0021] Figure 2 illustrates an immersive XR environment with avatars 200a-f that are graphically rendered with poses (e.g., at locations and with orientations) representing the present fields of view (FOVs) of associated human participants in the XR environment. In the illustrated example, streaming video from a camera of the participant device 110d (personal computer) is displayed in a virtual screen 230 instead of rendering an avatar to represent the participant. A shared virtual presentation screen 210 is also graphically rendered at a location within the XR environment, and can display pictures and/or video that are being presented for viewing by the participants in the XR environment. A virtual object 204 is graphically rendered in the XR environment. The virtual object 204 may be graphically rendered in the XR environment with any shape or size, and can represent any type of object (e.g., table, chair, object on table, door, window, television or computer, virtual appliance, animated vehicle, animated animal, etc.). The virtual object 204 may represent a physical object in the XR environment and may be animated to track movement and pose of the physical object within the XR environment responsive to movement input or physical interaction from the human participant.
[0022] In a multi-participant XR environment scenario such as illustrated in Figure 2, an XR rendering device (e.g., an XR environment server or a participant device 110a) can become constrained by its processing bandwidth limitations when attempting to simultaneously render in real-time each of the participants' avatars, the virtual screen, the shared virtual presentation screen 210, and the virtual objects 204 including room surfaces and other parts of the XR environment.
[0023] Existing XR rendering environments can have undesirable operations for how avatars are rendered (e.g., when operating with a hand(s)-and-headset (head-mounted display)-only sensor setup), such as how a participant’s avatar's legs-feet are attached to a torso, and/or how a transition of a physical person from standing to sitting is represented through the rendering of the person's avatar in the XR environment. For example, when a physical person transitions from a standing position to sitting on a chair in a real room, this physical movement can trigger a corresponding change in height of the person's avatar responsive to the sensed person's height changing.
[0024] Figure 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure.
[0025] Referring to Figure 3, each of the participants can define a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. The participant avatar posture based on the participant’s interactions with the physical object may be stored as an attribute of the physical object on the participant's device. The participant avatar posture is used by the rendering circuit 300 of the XR rendering device 100 for rendering the respective avatars. For example, an XR rendering device 100 of a first participant can define a participant avatar posture which is provided 310a to the first participant device, with a request that the participant avatar posture be applied to the avatar associated with the first participant for rendering. Similarly, an XR rendering device 100 of a second participant can define a participant avatar posture which is provided 310b to the second participant device, with a request that the participant avatar posture be applied to the avatar associated with the second participant for rendering. Other participants can similarly define participant avatar postures which are provided to the rendering devices 100 to control rendering related to the respective other participants. Alternatively or additionally, the XR rendering device 100 can use the participant avatar postures that have been defined to provide participant avatar postures for other participants 314a, 314b, etc., which control the rendering operations performed by the respective participant devices.
[0026] Various embodiments of the present disclosure describe determining, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings. Examples of physical objects which users may interact with include but are not limited to chairs, sofas, mugs, writing utensils, and electronic devices.
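Referring back to the Figure 3 data flow above, the following sketch shows one way a posture update derived from an interaction with a physical object might be packaged and distributed to the other participants' devices (in the spirit of the provisions 310a/310b). The message format and identifiers are assumptions made for this example, not details taken from the disclosure.

```python
# Sketch of distributing a participant avatar posture, stored as an attribute
# of the physical object it was derived from, to other participants' devices.
from dataclasses import dataclass, asdict
import json


@dataclass
class PostureUpdate:
    participant_id: str
    physical_object: str      # object the posture was derived from
    posture: str              # e.g. "seated", "standing_holding_object"


def distribute_posture(update: PostureUpdate, participant_ids: list) -> dict:
    """Build one message per other participant device (310a/310b style)."""
    payload = json.dumps(asdict(update))
    return {pid: payload for pid in participant_ids if pid != update.participant_id}


if __name__ == "__main__":
    update = PostureUpdate("participant-1", "office chair", "seated")
    print(distribute_posture(update, ["participant-1", "participant-2", "participant-3"]))
```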
[0027] In an example, if a user grabs a physical item in a certain way then the system should be able to identify the object, its characteristics, main use, its impact on user body posture, limb motion range and user height in relation to other users in the application which then is displayed to the users in an immersive VR application.
[0028] Various embodiments identify physical objects that a user in VR interacts with by means of image processing, object recognition and sensor inputs. Additionally, the various embodiments utilize data associated to objects to determine associated typical motion patterns of body parts, such as upper body, lower body, limbs or hand or fingers.
[0029] Potential advantages of various embodiments include providing additional data to an XR application that may use physical object data as input for typical motion patterns/ranges of body parts in an immersive environment. Additionally, embodiments may provide and share textures of physical-to-digital object renderings with other VR meeting users, with user-managed constraints on whom, when, in which context, and for how long a digital object texture can be loaned for others’ renderings. Additionally, embodiments may reduce the computational resources consumed by rendering, because using the characteristics of identified physical objects can reduce the range of motion of body parts to be rendered and/or the range of motion of the physical object to be rendered.
[0030] Embodiments describe a solution/method that determines, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings.
[0031] Figure 5 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
[0032] Referring to Figures 1, 2, and 5, in some embodiments, an extended reality rendering device is provided for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment. The XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations. Operations include determining 500 the participant is interacting with a physical object in the participant’s physical environment. Operations also include identifying 502 characteristics of the physical object in the participant’s physical environment which the participant is interacting with. Operations also include determining 504 a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. Operations also include rendering 506 the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
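A minimal Python sketch of how operations 500-506 could be chained is shown below. The function and field names are assumptions for illustration, and a real device would derive the sensor frame from camera images and/or lidar point clouds rather than from a pre-labelled dictionary.

```python
# Illustrative sketch of operations 500-506; names and data structures are
# assumptions for the example, not taken from the disclosure.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PhysicalObjectCharacteristics:
    object_type: str                     # e.g. "coffee mug", "chair"
    size_m: float                        # approximate bounding size in metres
    texture_id: Optional[str] = None     # texture captured from camera images


@dataclass
class AvatarPosture:
    pose_name: str                       # e.g. "seated", "standing_holding_object"
    joint_angles: dict = field(default_factory=dict)


def detect_interaction(frame: dict) -> Optional[str]:
    """Operation 500: decide whether the participant touches a physical object."""
    return frame.get("touched_object")


def identify_characteristics(obj_label: str, frame: dict) -> PhysicalObjectCharacteristics:
    """Operation 502: identify type, size and texture of the physical object."""
    return PhysicalObjectCharacteristics(
        object_type=obj_label,
        size_m=frame.get("object_size_m", 0.1),
        texture_id=frame.get("object_texture"),
    )


def determine_posture(chars: PhysicalObjectCharacteristics, frame: dict) -> AvatarPosture:
    """Operation 504: map the interaction onto an avatar posture."""
    if chars.object_type == "chair":
        return AvatarPosture("seated", {"knee": 90.0, "hip": 90.0})
    return AvatarPosture("standing_holding_object", {"elbow": 70.0})


def render(chars: PhysicalObjectCharacteristics, posture: AvatarPosture) -> None:
    """Operation 506: hand the avatar posture and virtual object to the renderer."""
    print(f"render avatar pose={posture.pose_name} with virtual {chars.object_type}")


if __name__ == "__main__":
    sensor_frame = {"touched_object": "coffee mug", "object_size_m": 0.09}
    label = detect_interaction(sensor_frame)
    if label is not None:
        chars = identify_characteristics(label, sensor_frame)
        posture = determine_posture(chars, sensor_frame)
        render(chars, posture)
```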
[0033] Figure 6 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure. In some embodiments, the operations further include to determine 600 a body part of the participant that is interacting with the physical object. The operations further include to determine 602 a
predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
[0034] An example scenario using the operations of Figure 6 can include the XR rendering device 100 determining 600 that the participant’s hand has picked up a coffee mug from a table. The determination 600 may be performed based on processing images from a front-facing camera of the participant device 110a and/or a point cloud from a lidar sensor of the participant device 110a to identify locations of the hand and the coffee mug and the associated physical interaction of the hand holding the coffee mug. The XR rendering device 100 responsively determines 602 a predicted motion pattern of the hand holding the coffee mug. The predicted motion pattern may define a pathway along which the hand and coffee mug will travel, such as along an arc between the previous location of the coffee mug resting on the table and a mouth location on the participant’s avatar, and/or may define one or more limits on the range of predicted motion of the hand and coffee mug. The predicted motion pattern may optionally be scaled based on the participant’s defined attributes, such as gender, height, weight, age, and more particular anatomical measurements and/or other characteristics, e.g., wheelchair usage, etc. The XR rendering device 100 can then use the predicted motion pattern to render the avatar of the participant interacting with a virtual representation of the coffee mug. For example, the XR rendering device 100 may render motion of the arm and the virtual representation of the coffee mug in a manner that is constrained by the predicted motion pattern so as to avoid rendering movements that would appear unnatural to other participants, e.g., which may have otherwise occurred if processing of a time sequence of images and/or point cloud data indicates erratic (e.g., jittery) movements that would have resulted in rendering erratic (unnatural) avatar movements.
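The sketch below illustrates one way such a predicted motion pattern could be used to constrain jittery hand tracking: noisy samples are pulled toward the nearest point on a predicted arc from the mug's resting position to the avatar's mouth. The Bezier arc and blend factor are assumptions made for this example.

```python
# Sketch of constraining noisy hand tracking to a predicted motion pattern
# (an arc from the mug's resting position to the avatar's mouth).
import numpy as np


def predicted_arc(start: np.ndarray, end: np.ndarray, lift: float = 0.15, steps: int = 50) -> np.ndarray:
    """Quadratic Bezier arc between start and end, lifted at the midpoint."""
    mid = (start + end) / 2.0
    mid[2] += lift                      # raise the control point (z is up)
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) ** 2 * start + 2 * (1 - t) * t * mid + t ** 2 * end


def constrain_to_pattern(measured: np.ndarray, pattern: np.ndarray, blend: float = 0.7) -> np.ndarray:
    """Pull a jittery measured hand position toward the nearest point on the arc."""
    nearest = pattern[np.argmin(np.linalg.norm(pattern - measured, axis=1))]
    return blend * nearest + (1.0 - blend) * measured


if __name__ == "__main__":
    mug_on_table = np.array([0.4, 0.3, 0.75])
    avatar_mouth = np.array([0.1, 0.0, 1.45])
    arc = predicted_arc(mug_on_table, avatar_mouth)

    noisy_hand = np.array([0.27, 0.18, 1.02])        # jittery tracker sample
    smoothed = constrain_to_pattern(noisy_hand, arc)
    print("rendered hand position:", np.round(smoothed, 3))
```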
[0035] In some embodiments, the physical object rendered in the immersive XR environment may be scaled for physical user attributes such as gender, height, age, weight, and anatomy attributes, if available.
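A minimal sketch of such scaling might apply a single factor derived from the participant's height relative to a nominal reference; the 1.75 m reference value and the choice of height as the scaling attribute are assumptions for illustration only.

```python
def scale_motion_pattern(waypoints, participant_height_m, reference_height_m=1.75):
    """Scale pathway waypoints by the ratio of participant to reference height."""
    factor = participant_height_m / reference_height_m
    return [tuple(coord * factor for coord in point) for point in waypoints]

# Example: a two-waypoint pathway scaled for a 1.60 m participant.
print(scale_motion_pattern([(0.4, 0.75, 0.3), (0.1, 1.35, 0.1)], 1.60))
```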
[0036] The VR system may then display typical motions related to the physical object. The object motion pattern can be adapted to generate a corresponding avatar motion pattern which controls the VR system’s rendering of an avatar interacting with the VR digital representation of the object (e.g., the avatar arm is moved according to an avatar motion pattern for the hand, arm, torso, and head so that the cup moves according to the object motion pattern). These operations may be selectively initiated responsive to the VR system identifying a known object that the person is interacting with, and which has a corresponding object and associated motion pattern in the database accessed by the VR system. The operations may be triggered by the user's eye gaze and/or observed motion correlating to a real-world object.
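One way to read the selective initiation above is as a lookup of a stored motion pattern keyed by the recognized object type, gated by an eye-gaze or observed-motion trigger. The dictionary-based "database" and the trigger test in the sketch below are illustrative assumptions only.

```python
# Hypothetical pattern database: object type -> named avatar motion pattern.
MOTION_PATTERN_DB = {
    "cup": "hand_to_mouth_arc",
    "chair": "sit_down",
    "door_handle": "reach_and_pull",
}

def maybe_start_object_motion(object_type, gaze_on_object, motion_toward_object):
    """Return the avatar motion pattern to apply, or None.

    Selection is initiated only for a known object with a stored pattern, and
    only when the user's eye gaze and/or observed motion correlates with the
    real-world object, mirroring the trigger conditions described above.
    """
    if object_type not in MOTION_PATTERN_DB:
        return None
    if not (gaze_on_object or motion_toward_object):
        return None
    return MOTION_PATTERN_DB[object_type]

# Example: the user looks at a cup and reaches for it.
print(maybe_start_object_motion("cup", gaze_on_object=True, motion_toward_object=True))
```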
[0037] In some embodiments, the operation to render 506 the avatar of the participant interacting with the virtual object is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
[0038] Figure 7 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure. In some embodiments, the operations further include initiating 700 a request for the user to input a typical motion pattern of the body part interacting with the physical object. The operations further include determining 702 the predicted motion pattern of the body part to be associated with the identified characteristics of the physical object, based on the user input of the typical motion pattern of the body part interacting with the physical object.
[0039] In some embodiments, the operations further include initiating the request for the user to input a typical motion pattern of the body part interacting with the physical object responsive to determining that no predicted motion pattern of the body part is associated with the identified characteristics of the physical object.
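The fallback of Figure 7 could, again purely as an illustration, be organized as below: the request for the user to demonstrate a typical motion is issued only when no predicted pattern is already associated with the object's characteristics. The recording helper passed in as request_user_demo is hypothetical.

```python
def get_predicted_motion_pattern(object_type, pattern_db, request_user_demo):
    """Operations 700-702 (sketch): fall back to a user-provided motion pattern.

    pattern_db maps object characteristics (here simplified to an object type)
    to a stored motion pattern; request_user_demo is a callable that prompts
    the user to perform and record a typical motion with the physical object.
    """
    pattern = pattern_db.get(object_type)
    if pattern is None:
        # No predicted motion pattern is associated with these characteristics,
        # so request that the user input a typical one (operation 700).
        pattern = request_user_demo(object_type)
        # Associate it with the object characteristics for later use (702).
        pattern_db[object_type] = pattern
    return pattern

# Example with a stubbed recording step.
db = {}
demo = lambda obj: ["waypoint_1", "waypoint_2"]  # placeholder for a recorded motion
print(get_predicted_motion_pattern("stapler", db, demo))
```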
[0040] Figure 4 illustrates an example of various operations which are performed by a user device 110 and an XR rendering device (server) 100 based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure. Referring to Figure 4, the user device 110 determines 400 that a user is engaged in an XR meeting application and is using an XR rendering device which is capable of capturing camera images and/or lidar point cloud data of a physical environment. The user device 110 displays 402 a rendering of the user’s avatar in the XR meeting, which may be obtained from the XR rendering server 100. A determination 404 and 406 is made, by the user device 110 or the XR rendering server 100, as to whether the user is interacting with a physical object, e.g., a coffee cup, a chair, the handle of a door, etc. If the determination is “yes”, then the XR rendering device 100 operates to identify 408 characteristics of the physical object, e.g., type of object (cup, chair, door, etc.), physical size, shape, texture, color/pattern, etc.
[0041] The XR rendering device 100 determines 410 the participant’s avatar posture based on the location of the physical participant relative to the physical object, and based on the characteristics of the physical object and characteristics of the participant, e.g., height, weight, gender, age, etc. The XR rendering device 100 determines 412 a body part of the participant, e.g., a hand, which is interacting with the physical object, e.g., a coffee cup, and the predicted motion pattern of the body part based on characteristics that have been associated with the physical object, such as the predicted motion of the hand and coffee cup being moved from a resting location on a table to the mouth of the participant’s avatar. The XR rendering device 100 then renders 414 graphical representations of the participant’s avatar interacting with a virtual object representation of the physical object, e.g., a graphical representation of the coffee cup. The XR rendering device 100 communicates the rendered graphical representations to the user device 110, which responsively displays 416 the avatar interacting with the virtual object.
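Read as a client/server exchange, the Figure 4 flow could be paraphrased as in the sketch below. The split of work between the device and the server is one possible arrangement, and the method names on the duck-typed user_device and server objects are assumptions rather than an interface defined by the disclosure.

```python
def meeting_frame(user_device, server, sensor_frame):
    """Sketch of the Figure 4 flow between user device 110 and server 100."""
    # 400/402: the device is in an XR meeting and displays the user's avatar.
    user_device.display(server.current_avatar_rendering())

    # 404/406: is the user interacting with a physical object?
    interaction = user_device.detect_physical_interaction(sensor_frame)
    if interaction is None:
        return

    # 408-412: identify object characteristics, determine the avatar posture,
    # the interacting body part, and the predicted motion pattern.
    characteristics = server.identify_characteristics(interaction)
    posture = server.determine_posture(interaction, characteristics)
    pattern = server.predict_motion_pattern(interaction, characteristics)

    # 414/416: the server renders the avatar with the virtual object, and the
    # device displays the result.
    user_device.display(server.render(posture, characteristics, pattern))
```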
[0042] The device or a cloud server application may furthermore select digital object attributes according to resemblance, history, and/or context.
[0043] In some embodiments, the operation to identify 502 characteristics of the physical object includes processing image data from a camera arranged to capture images of the physical object, to identify at least one of: size of the physical object, form of the physical object, and texture of the physical object.
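A toy illustration of deriving such characteristics from image data follows. The bounding-box heuristic for size and the aspect-ratio heuristic for form are assumptions made for the example (a segmentation step producing the mask is presumed but not shown), not the claimed technique.

```python
def characteristics_from_mask(mask, meters_per_pixel):
    """Estimate coarse size and form of a segmented physical object.

    mask is a 2D list of 0/1 values marking object pixels. The heuristics
    here are deliberately simple stand-ins for operation 502.
    """
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not rows or not cols:
        return None
    height_px = max(rows) - min(rows) + 1
    width_px = max(cols) - min(cols) + 1
    size_m = max(height_px, width_px) * meters_per_pixel
    aspect = height_px / width_px
    form = "tall" if aspect > 1.5 else "flat" if aspect < 0.67 else "roughly_square"
    return {"size_m": round(size_m, 3), "form": form}

# Example: a 3x2-pixel blob observed at 1 cm per pixel.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0]]
print(characteristics_from_mask(mask, meters_per_pixel=0.01))
```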
[0044] Regarding resemblance, the selected digital object may resemble the physical object counterpart that the user is currently grabbing for, in terms of form, size, and texture.
[0045] In some embodiments, the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed to render the avatar with a form, size, and/or texture defined based on the size of the physical object, the form of the physical object, and/or the texture of the physical object.
[0046] The operations may use a recorded (stored) history of a previously used object, such as a recorded historical indication of previously used mugs, chairs, and shoes. In a real-world example, the operations can record digital attributes of a previously used digital object, such as a chair, to be re-used in a future digital meeting session responsive to determining that the user’s body posture indicates use of the same physical chair. The user may have an opportunity to select among a set of previously imported (stored) objects of similar type, e.g., satisfying a defined similarity rule.
[0047] Figure 8 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure. In some embodiments, the operations further include to compare 800 the processed image data of the physical object to historical virtual objects in a historical virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects with which the user has previously interacted in the immersive XR environment. The operations further include to select 802 one of the historical virtual objects in the historical virtual object repository based on similarity between the image data of the physical object and the one of the historical virtual objects. The operations further include to render 804 the virtual object in the immersive XR environment based on the selected one of the historical virtual objects.
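Operations 800-804 could be sketched as a nearest-neighbour lookup over stored object descriptors, as below. The three-component descriptor (size plus small categorical codes for form and texture) and the Euclidean distance are assumptions for illustration; any suitable similarity rule could stand in their place.

```python
import math

# Hypothetical historical repository: descriptors of virtual objects the user
# has previously interacted with (size in meters; form/texture as category ids).
HISTORICAL_OBJECTS = [
    {"name": "office_mug",   "size": 0.10, "form": 1, "texture": 2},
    {"name": "desk_chair",   "size": 1.00, "form": 3, "texture": 1},
    {"name": "meeting_door", "size": 2.00, "form": 2, "texture": 3},
]

def select_historical_object(observed, repository):
    """Operations 800-802: pick the stored virtual object whose descriptor is
    most similar to the descriptor derived from the physical object's image data."""
    def distance(entry):
        return math.sqrt((entry["size"] - observed["size"]) ** 2
                         + (entry["form"] - observed["form"]) ** 2
                         + (entry["texture"] - observed["texture"]) ** 2)
    return min(repository, key=distance)

# Operation 804 would then render the virtual object from this selection.
observed = {"size": 0.12, "form": 1, "texture": 2}
print(select_historical_object(observed, HISTORICAL_OBJECTS)["name"])  # office_mug
```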
[0048] Regarding context, the context of a digital meeting may be used, such as whether the digital meeting is private, for leisure, or for business purposes. For example, the type of sitting object and associated body postures expected in a digital business meeting may differ from the sitting object and associated body postures expected and accepted in a private after-work digital meeting.
[0049] In some embodiments, the operations further include to determine at least one of the following context parameters: XR rendering device location data; time data; date data; characteristic of a background noise component; and sensor data indicating a sensed type of physical object or environmental parameter.
[0050] In some embodiments, the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the context parameters.
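As one non-limiting reading, the context parameters of paragraph [0049] could feed a simple rule that selects which variant of a sitting object to render. The thresholds, labels, and the business-versus-leisure rule below are illustrative assumptions only.

```python
def pick_chair_variant(location, local_hour, is_weekend, background_noise_db):
    """Choose a virtual sitting-object variant from context parameters.

    An office location during working hours with low background noise is
    taken to indicate a business context; otherwise a leisure context is
    assumed. All categories and thresholds are assumptions for this sketch.
    """
    business_hours = 8 <= local_hour < 18 and not is_weekend
    quiet = background_noise_db < 45
    if location == "office" and business_hours and quiet:
        return "office_chair"
    return "lounge_chair"

print(pick_chair_variant("office", local_hour=10, is_weekend=False, background_noise_db=38))  # office_chair
print(pick_chair_variant("home", local_hour=20, is_weekend=True, background_noise_db=55))     # lounge_chair
```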
[0051] Various operations for performing texture sharing are now discussed.
[0052] In some further embodiments in the context of a multi-user XR meeting, operations by a participating XR device, a managing cloud server, or XR meeting application may select digital object attributes according to resemblance, history, and/or context as discussed above.
[0053] In some further embodiments, the identification, selection, and rendering application may provide to a database an instance of digital object attributes, such as physical form factor or textures, associated with the corresponding digital object.
[0054] In some embodiments, a second user may also search or match for a second digital object among the set that corresponds to the first user's digital object and attributes. The first user may in these operations further associate the provided digital object with a lifetime or persistence value indicative of how long the digital object and attributes may be accessible to other participants.
[0055] Figure 9 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure. In some embodiments, the operations further include to compare 900 the processed image data of the physical object to predefined virtual objects in a predefined virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects which are predefined in the immersive XR environment. The operations further include to select 902 one of the predefined virtual objects in the predefined virtual object repository based on similarity between the image data of the physical object and the one of the predefined virtual objects. The operations further include to render 904 the virtual object in the immersive XR environment based on the selected one of the predefined virtual objects.

[0056] Figure 10 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure. In some embodiments, the operations further include to obtain 1000 from another XR rendering device a predicted motion pattern of a body part that is defined as being associated with the identified characteristics of the physical object. The operation to render the avatar of the participant interacting with the virtual object is performed based on the determined participant avatar posture and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
[0057] The selection of digital object attributes by other users for their respective digital object renderings is now discussed.
[0058] With respect to the set of digital objects available for rendering in the digital meeting, the identification/selection/rendering application may provide to a database (or similar) an instance of digital object attributes, such as physical form factor and textures, associated with the user’s “own digital object.”
[0059] In that aspect, a second user, in the step of finding a matching digital object from the set of digital objects available for rendering in the digital meeting, may now also search for or match the corresponding second digital object among the set, which also includes the first user’s digital object and attributes.
[0060] The first user may in these aspects further associate the provided digital object with a lifetime or persistence value indicative of how long the object may be accessible for other participants to adopt; for example, “duration of ongoing meeting,” “for meetings where the first user is present,” “today,” etc.
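The lifetime or persistence value could be evaluated as a small policy check, sketched below. The three policy strings mirror the examples just given; their exact semantics here, and the inputs used to evaluate them, are assumptions for illustration.

```python
import datetime

def shared_object_accessible(policy, now, meeting_ongoing, first_user_present, provided_on):
    """Evaluate a persistence/lifetime value attached to a shared digital object."""
    if policy == "duration_of_ongoing_meeting":
        return meeting_ongoing
    if policy == "meetings_where_first_user_is_present":
        return first_user_present
    if policy == "today":
        return now.date() == provided_on.date()
    return False  # unknown policy: default to not accessible

now = datetime.datetime(2022, 9, 22, 15, 0)
provided = datetime.datetime(2022, 9, 22, 9, 0)
print(shared_object_accessible("today", now, meeting_ongoing=False,
                               first_user_present=False, provided_on=provided))  # True
```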
[0061] The first user may also specify which other users, and in which contexts, the “own object” may be used in other users’ XR renderings, for example “only for family and friends,” or “only for business colleagues but not external meeting participants.”
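These user and context restrictions could be captured by a small allow-list check, sketched below under the assumption that each requesting participant carries a relationship tag; the rule names mirror the examples above, while the tag and context labels are assumptions.

```python
def may_adopt_shared_object(sharing_rule, requester_relationship, meeting_context):
    """Check whether another participant may adopt the first user's 'own object'."""
    if sharing_rule == "only_family_and_friends":
        return requester_relationship in {"family", "friend"}
    if sharing_rule == "only_business_colleagues_internal":
        return (requester_relationship == "colleague"
                and meeting_context != "external_meeting")
    return False  # unknown rule: default to not shareable

print(may_adopt_shared_object("only_family_and_friends", "friend", "leisure"))      # True
print(may_adopt_shared_object("only_business_colleagues_internal", "colleague",
                              "external_meeting"))                                  # False
```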
[0062] Example XR Rendering Device Configuration.

[0063] Figure 11 is a block diagram of components of an XR rendering device 100 that are configured to operate in accordance with some embodiments of the present disclosure.
The XR rendering device 100 can include at least one processor circuit 1100 (processor), at least one memory 1110 (memory), at least one network interface 1120 (network interface), and a display device 1130. The processor 1100 is operationally connected to these various components. The memory 1110 stores executable instructions 1112 that are executed by the processor 1100 to perform operations. The processor 1100 may include one or more data processing circuits, such as a general purpose and/or special purpose processor (e.g., microprocessor and/or digital signal processor), which may be collocated or distributed across one or more data networks. The processor 1100 is configured to execute the instructions 1112 in the memory 1110, described below as a computer readable medium, to perform some or all of the operations and methods for one or more of the embodiments disclosed herein for an XR rendering device. As explained above, the XR rendering device may be separate from and communicatively connected to the participant devices, or may be at least partially integrated within one or more of the participant devices.
[0064] Further Definitions and Embodiments:
[0065] In the above description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0066] When an element is referred to as being "connected", "coupled", "responsive", or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected", "directly coupled", "directly responsive", or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, "coupled", "connected", "responsive", or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term "and/or" includes any and all combinations of one or more of the associated listed items.
[0067] It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus, a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification.
[0068] As used herein, the terms "comprise", "comprising", "comprises", "include", "including", "includes", "have", "has", "having", or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation "e.g.", which derives from the Latin phrase "exempli gratia," may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation "i.e.", which derives from the Latin phrase "id est," may be used to specify a particular item from a more general recitation.
[0069] Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
[0070] These computer program instructions may also be stored in a tangible computer- readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable
medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as "circuitry," "a module" or variants thereof.
[0071] It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
[0072] Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts. Thus, to the maximum extent allowed by law, the scope of present inventive concepts is to be determined by the broadest permissible interpretation of the present disclosure including the following examples of embodiments and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
Claims
1. An extended reality, XR, rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment, the XR rendering device comprising: at least one processor; and at least one memory storing instructions executable by the at least one processor to perform operations to: determine the participant is interacting with a physical object in the participant’s physical environment; identify characteristics of the physical object in the participant’s physical environment which the participant is interacting with; determine a participant avatar posture based on participant’s interactions with the physical object in the participant’s physical environment; and render the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
2. The XR rendering device of Claim 1, wherein the operations further comprise to: determine a body part of the participant that is interacting with the physical object; and determine a predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
3. The XR rendering device of Claim 2, wherein the operation to render the avatar of the participant interacting with the virtual object is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
4. The XR rendering device of any of Claims 2 to 3, wherein the operations further comprise to: initiate a request for the user to input a typical motion pattern of the body part interacting with the physical object; and
determine the predicted motion pattern of the body part to be associated with the identified characteristics of the physical object, based on the user input of the typical motion pattern of the body part interacting with the physical object.
5. The XR rendering device of Claim 4, wherein the operations further comprise to: initiate the request for the user to input a typical motion pattern of the body part interacting with the physical object responsive to determining that no predicted motion pattern of the body part is associated with the identified characteristics of the physical object.
6. The XR rendering device of any of Claims 1 to 5, wherein the operation to identify characteristics of the physical object comprises to: process image data from a camera arranged to capture images of the physical object, to identify at least one of: size of the physical object, form of the physical object, and texture of the physical object.
7. The XR rendering device of Claim 6, wherein the operation to render the avatar of the participant interacting with the virtual object in the immersive XR environment is performed to render the avatar with a form, size, and/or texture defined based on the size of the physical object, the form of the physical object, and/or the texture of the physical object.
8. The XR rendering device of any of Claims 6 to 7, wherein the operations further comprise to: compare the processed image data of the physical object to historical virtual objects in a historical virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects with which the user has previously interacted in the immersive XR environment; select one of the historical virtual objects in the historical virtual object repository based on similarity between the image data of the physical object and the one of the historical virtual objects; and render the virtual object in the immersive XR environment based on the selected one of the historical virtual objects.
9. The XR rendering device of any of Claims 1 to 8, wherein the operations further comprise to: determine at least one of the following context parameters:
XR rendering device location data; time data; date data; characteristic of a background noise component; and sensor data indicating a sensed type of physical object or environmental parameter.
10. The XR rendering device of Claim 9, wherein the operation to render the avatar of the participant interacting with the virtual object in the immersive XR environment is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the context parameters.
11. The XR rendering device of any of Claims 1 to 10, wherein the operations further comprise to: compare the processed image data of the physical object to predefined virtual objects in a predefined virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects which are predefined in the immersive XR environment; select one of the predefined virtual objects in the predefined virtual object repository based on similarity between the image data of the physical object and the one of the predefined virtual objects; and render the virtual object in the immersive XR environment based on the selected one of the predefined virtual objects.
12. The XR rendering device of any of Claims 1 to 11, wherein the operations further comprise to: obtain from another XR rendering device a predicted motion pattern of a body part that is defined as being associated with the identified characteristics of the physical object, wherein the operation to render the avatar of the participant interacting with the virtual object is performed based on the determined participant avatar posture
and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
13. A method by an extended reality, XR, rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment, the method comprising: determining (500) the participant is interacting with a physical object in the participant’s physical environment; identifying (502) characteristics of the physical object in the participant’s physical environment which the participant is interacting with; determining (504) a participant avatar posture based on participant’s interactions with the physical object in the participant’s physical environment; and rendering (506) the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
14. The method of Claim 13, further comprising: determining (600) a body part of the participant that is interacting with the physical object; and determining (602) a predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
15. The method of Claim 14, wherein the operation to render (506) the avatar of the participant interacting with the virtual object is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
16. The method of any of Claims 14 to 15, further comprising: initiating (700) a request for the user to input a typical motion pattern of the body part interacting with the physical object; and
determining (702) the predicted motion pattern of the body part to be associated with the identified characteristics of the physical object, based on the user input of the typical motion pattern of the body part interacting with the physical object.
17. The method of Claim 16, further comprising: initiating the request for the user to input a typical motion pattern of the body part interacting with the physical object responsive to determining that no predicted motion pattern of the body part is associated with the identified characteristics of the physical object.
18. The method of any of Claims 13 to 17, wherein the operation to identify (502) characteristics of the physical object comprises: processing image data from a camera arranged to capture images of the physical object, to identify at least one of: size of the physical object, form of the physical object, and texture of the physical object.
19. The method of Claim 18, wherein the operation to render (506) the avatar of the participant interacting with the virtual object in the immersive XR environment is performed to render the avatar with a form, size, and/or texture defined based on the size of the physical object, the form of the physical object, and/or the texture of the physical object.
20. The method of any of Claims 18 to 19, further comprising: comparing (800) the processed image data of the physical object to historical virtual objects in a historical virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects with which the user has previously interacted in the immersive XR environment; selecting (802) one of the historical virtual objects in the historical virtual object repository based on similarity between the image data of the physical object and the one of the historical virtual objects; and rendering (804) the virtual object in the immersive XR environment based on the selected one of the historical virtual objects.
21. The method of any of Claims 13 to 20, further comprising: determining at least one of the following context parameters:
XR rendering device location data; time data; date data; characteristic of a background noise component; and sensor data indicating a sensed type of physical object or environmental parameter.
22. The method of Claim 21, wherein the operation to render (506) the avatar of the participant interacting with the virtual object in the immersive XR environment is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the context parameters.
23. The method of any of Claims 13 to 22, further comprising:
comparing (900) the processed image data of the physical object to predefined virtual objects in a predefined virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects which are predefined in the immersive XR environment;
selecting (902) one of the predefined virtual objects in the predefined virtual object repository based on similarity between the image data of the physical object and the one of the predefined virtual objects; and rendering (904) the virtual object in the immersive XR environment based on the selected one of the predefined virtual objects.
24. The method of any of Claims 13 to 23, further comprising: obtaining (1000) from another XR rendering device a predicted motion pattern of a body part that is defined as being associated with the identified characteristics of the physical object, wherein the operation to render the avatar of the participant interacting with the virtual object is performed based on the determined participant avatar posture and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.