WO2020250106A1 - A system and a method for teleportation for enhanced audio-visual interaction in mixed reality (mr) using a head mounted device (hmd) - Google Patents
- Publication number
- WO2020250106A1 (PCT/IB2020/055366)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- hmd
- dynamic
- audio
- location
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
Definitions
- Embodiments of the present invention generally relate to mixed reality-based telepresence technologies, and more particularly to a system and a method for teleportation for enhanced audio-visual interaction in Mixed Reality (MR) using a Head Mounted Device (HMD).
- An object of the present invention is to provide a system and a method for teleportation for enhanced audio-visual interaction in Mixed Reality (MR) using a Head Mounted Device (HMD).
- Another object of the present invention is to utilise a fusion of depth sensors (such as phase-based or time-of-flight based sensors) and thermal cameras to capture the shape and exact motion of a subject.
- Yet another object of the present invention is to generate dynamic spline-based 3D models without colour information, which may be easily and efficiently transmitted via wireless communication networks.
- Yet another object of the present invention is to separately receive and combine the colour and texture information at a receiver HMD to increase the overall efficiency of the system, decrease the latency and reduce the computation and network load by eliminating the requirement of transmitting 3D volumetric data, point cloud or meshes for every instance.
- Yet another object of the present invention is to enable one-to-one, one-to- many and many-to-many teleportation sessions seamlessly along with sensory feedback like touch and odour as well.
- Yet another object of the present invention is to capture and stream rapid dynamic movements with minimal latency as a very simple data stream, without requiring any high-speed colour camera.
- Yet another object of the present invention is to increase data efficiency of the system and reduce the high computation, network as well as bandwidth requirements.
- Yet another object of the present invention is to utilise one or more depth sensors and one or more thermal cameras for scanning so as to enable the system to work in low light or no light conditions as well.
- a system for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device comprises, but not limited to, a first scanning zone present at a first location having one or more depth sensors to capture depth data of a first subject; one or more thermal cameras to capture thermal imaging data of the first subject; one or more microphones to capture audio data of the first subject; a network module to establish a wireless communication network; and a processing module connected with the one or more depth sensors, the one or more thermal cameras, one or more microphones and the network module.
- the system further comprises a mixed reality-based first HMD present at a second location, connected with the first scanning zone via the wireless communication network.
- the processing module is configured to receive dynamic depth data, dynamic thermal imaging data and the audio data of the first subject captured by the one or more depth sensors, the one or more thermal cameras and the one or more microphones respectively in the first scanning zone; generate a first dynamic spline-based 3D model of the first subject based on the received dynamic depth data and the dynamic thermal imaging data thereby replicating a shape, a size, an orientation and dynamic movements of the first subject in real-time and sync the audio data of the first subject with the dynamic movements of the first dynamic spline-based 3D model; and send the first dynamic spline-based 3D model of the first subject along with synced audio data via the wireless communication network to the first HMD.
- the first HMD is configured to receive a single coloured image of the first subject from an external source; process the coloured image for extracting colour and texture information of the first subject; receive the first dynamic spline-based 3D model of the first subject along with the synced audio data via the wireless communication network from the first processing module; process and apply the colour and texture information on the first dynamic spline- based 3D model of the first subject to generate a first dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the first subject in real-time; and display the first dynamic holographic projection of the first subject in a mixed reality space of the first HMD and play the synced audio of the first subject in an audio unit of the first HMD in real-time, thereby teleporting the first subject from the first location to the second location.
- the first scanning zone further comprises one or more odour sensors connected with the processing module and the processing module is further configured to receive an odour data of the first subject captured by the one or more odour sensors and send the odour data via the wireless communication network.
- the first HMD is further configured to receive and generate an odour replicating the odour of the first subject based on the received odour data using one or more odour generators, thereby adding more realism to the first dynamic holographic projection of the first subject.
- the first scanning zone may include one or more haptic sensors.
- the system further comprises a second scanning zone at the second location of the first HMD, configured to generate and send a second dynamic spline-based 3D model of the second subject along with synced audio data via the wireless communication network; and a second HMD at the first location of the first subject.
- the second HMD is configured to receive and process the coloured image of the second subject for extracting colour and texture information of the second subject; receive the second dynamic spline-based 3D model along with synced audio data via the wireless communication network; process and apply the colour and texture information on the second dynamic spline-based 3D model of the second subject to generate a second dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the second subject in real time; and display the second dynamic holographic projection of the second subject in a mixed reality space of the second HMD and play the synced audio of the second subject in an audio unit of the second HMD in real-time, thereby teleporting the subject from the second location to the first location of the first HMD and enabling a two-way communication between the first HMD and the second HMD.
- the first subject and the second subject are selected from living and non-living things such as humans, plants, objects or a combination thereof.
- the one or more depth sensors are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers.
- the Head mounted devices may include one or more haptic sensors.
- the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
- the method comprises receiving dynamic depth data, dynamic thermal imaging data and the audio data of a first subject present at a first location, from one or more depth sensors, one or more thermal cameras and one or more microphones respectively; generating a first dynamic spline-based 3D model of the first subject based on the dynamic depth data and the dynamic thermal imaging data being received, thereby replicating a shape, a size, an orientation and dynamic movements of the first subject in real time and syncing the audio data of the first subject with the dynamic movements of the first dynamic spline-based 3D model; sending the first dynamic spline-based 3D model of the first subject along with synced audio data via a wireless communication network to a first HMD; receiving a coloured image of the first subject from an external source at the first HMD present at a second location; processing the coloured image for extracting colour and texture information of the first subject; receiving the first dynamic spline-based 3D model of the first subject along with the synced audio data via the wireless communication network at the first HMD; processing and applying the colour and texture information on the first dynamic spline-based 3D model of the first subject to generate a first dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the first subject in real-time; and displaying the first dynamic holographic projection of the first subject in a mixed reality space of the first HMD and playing the synced audio of the first subject in an audio unit of the first HMD in real-time, thereby teleporting the first subject from the first location to the second location.
- the method further comprises receiving an odour data of the first subject from the one or more odour sensors, at the first HMD; and generating an odour replicating the odour of the first subject based on the received odour data using one or more odour generators in the first HMD, thereby adding more realism to the holographic projection of the first subject.
- the method further comprises a step of enabling a two-way teleportation by generating and sending a second dynamic spline-based 3D model of a second subject present at the first location, along with synced audio data via the wireless communication network; receiving and processing the coloured image of the second subject at a second HMD present at the first location, for extracting colour and texture information of the second subject; receiving the second dynamic spline-based 3D model along with synced audio data via the wireless communication network at the second HMD; processing and applying the colour and texture information on the second dynamic spline-based 3D model of the second subject to generate a second dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the second subject in real-time; and displaying the second dynamic holographic projection of the second subject in a mixed reality space of the second HMD and playing the synced audio of the second subject in an audio unit of the second HMD in real-time, thereby teleporting the subject from the second location to the first location of the first HMD and enabling a two-way communication between the first HMD and the second HMD.
- the first subject and the second subject are selected from living and non-living things such as humans, plants, objects or a combination thereof.
- the one or more depth sensors are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers.
- the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
- FIG. 1 illustrates a system for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), in accordance with an embodiment of the present invention.
- FIG. 2 illustrates a method for teleportation for enhanced Audio-Visual Interaction in the Mixed Reality (MR) using the Head Mounted Device (HMD), in accordance with an embodiment of the present invention
- FIG. 3A-3B illustrate information flow diagrams of a one-way teleportation using the system of FIG. 1 and the method of FIG. 2, in accordance with an embodiment of the present invention.
- FIG. 4 illustrates exemplary implementation of a two-way teleportation using the system of Fig. 1 and the method of Fig. 2, in accordance with another embodiment of the present invention.
- When compositions or an element or a group of elements are preceded with the transitional phrase "comprising", it is understood that the same composition, element or group of elements is also contemplated with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
- The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawings correspond to like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
- Figure 1 illustrates a system (100) for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), in accordance with an embodiment of the present invention.
- the system (100) comprises a first scanning zone (102) present at a first location, connected with a first HMD (120) present at a second location, via a wireless communication network (118).
- the first location and the second location may be closely or remotely located with respect to each other. In that sense, the first location and the second location may be, but not limited to, different areas of a same building or located in different cities, different states or even different countries.
- the first scanning zone (102) is adapted to scan or capture data associated with a first subject (116).
- the first subject (116) is selected from living and non-living things such as, but not limited to, humans, plants, objects or a combination thereof.
- the first subject (116) may be a human being holding a book or a flower in his/her hand in the first scanning zone (102). Therefore, the first scanning zone (102) comprises, but is not limited to, one or more depth sensors (104), one or more thermal cameras (106), one or more microphones (108), a network module (110) and a processing module (114) connected with each of the one or more depth sensors (104), the one or more thermal cameras (106), the one or more microphones (108) and the network module (110).
- the first scanning zone also comprises one or more odour sensors (112).
- the one or more depth sensors (104) are configured to capture depth data of the first subject (116). The one or more depth sensors (104) are selected from one or more of, but not limited to, Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers. In one embodiment, these sensors may be used in combination and disposed at multiple angles and locations to cover and capture depth data of every aspect of the first subject (116). In another embodiment, only LIDARs may be used. In general, Light Detection and Ranging (LIDAR) is commonly used for 3D sensing.
- the LIDAR is used for measuring distances (ranging) by illuminating a target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3D representations of the target. Herein, the one or more depth sensors continuously capture the depth data, covering even the smallest movements of the first subject (116), such as breathing.
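- The disclosure does not prescribe a particular ranging computation. As an illustrative sketch only, the basic time-of-flight relation behind such sensors converts a measured round-trip time of the laser pulse into a distance; the function name and units below are assumptions, not part of the disclosure:

```python
from scipy import constants  # provides the speed of light, constants.c

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance implied by a time-of-flight measurement: the laser
    pulse travels to the target and back, so halve the path length."""
    return constants.c * round_trip_time_s / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 metres.
print(tof_distance_m(10e-9))  # ~1.499
```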
- one or more thermal cameras (106) are configured to detect the dynamic thermal imaging data of the first subject (116) in the scanning zone.
- the one or more thermal cameras (106) also continuously capture the thermal imaging data, covering every aspect of the first subject (116).
- a thermal camera detects radiation in the long-infrared range of the electromagnetic spectrum and produces images of that radiation, called thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero according to the black body radiation law, thermal imaging data makes it possible to see the first subject (116) with or without visible illumination.
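- As a minimal, purely illustrative sketch (not part of the disclosure): many radiometric thermal cores report counts that are linear in temperature, which a receiver might convert to degrees Celsius with a sensor-specific gain and offset. The calibration constants below are hypothetical:

```python
import numpy as np

def counts_to_celsius(raw_counts: np.ndarray,
                      gain: float = 0.01,
                      offset: float = -273.15) -> np.ndarray:
    """Hypothetical radiometric conversion for a thermal camera whose
    raw counts are linear in temperature (here, centikelvin counts):
    temperature_C = counts * gain + offset."""
    return raw_counts * gain + offset

# e.g. 30985 centikelvin counts -> ~36.7 degC, a typical skin temperature
print(counts_to_celsius(np.array([30985.0])))
```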
- the present invention does not require any coloured image or volumetric data to be continuously captured or transmitted from the first scanning zone (102) for the teleportation implementation.
- the present invention only uses a fusion of the one or more depth sensors (104) (such as phase-based or time-of-flight sensors) and the one or more thermal cameras (106) to capture the shape and exact motion of the first subject (116).
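- The fusion algorithm itself is not detailed in the disclosure. One plausible sketch, assuming co-registered depth and thermal frames of identical resolution, isolates the subject by keeping only depth pixels whose thermal reading falls inside a body-temperature band; the function name, temperature band and NaN convention are assumptions:

```python
import numpy as np

def segment_subject(depth_map: np.ndarray,
                    thermal_map_c: np.ndarray,
                    body_temp_c=(28.0, 40.0)) -> np.ndarray:
    """Keep only depth pixels whose co-registered thermal reading lies
    in a body-temperature band, separating the warm subject from the
    cooler background; background pixels become NaN."""
    mask = (thermal_map_c >= body_temp_c[0]) & (thermal_map_c <= body_temp_c[1])
    return np.where(mask, depth_map, np.nan)
```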
- the one or more microphones (108) are configured to capture audio data of the first subject (116).
- the one or more microphones (108) capture binaural audio along with the motion of the first subject (116), as well as 3D stereo sound.
- the first scanning zone (102) may also implement acoustic source localization and background noise cancellation techniques to further enhance the experience.
- the one or more odour sensors (112) are connected with the processing module (114). The one or more odour sensors (112) are configured to capture odour data of the first subject (116) within the first scanning zone (102).
- the processing module (114) is envisaged to include computing capabilities such as a memory unit configured to store machine-readable instructions.
- the machine-readable instructions may be loaded into the memory unit from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and Flash Drives. Alternately, the machine-readable instructions may be loaded in a form of a computer software program into the memory unit.
- the memory unit in that manner may be selected from a group comprising EPROM, EEPROM and Flash memory.
- the processing module (114) includes a processor operably connected with the memory unit.
- the processor is one of, but not limited to, a general-purpose processor, an application specific integrated circuit (ASIC) and a field-programmable gate array (FPGA).
- the processing module (114) may be a part of a dedicated computing device or may be a microprocessor that may be a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory and provides results as output.
- the processing module (114) may further implement artificial intelligence and machine learning based technologies for, but not limited to, data analysis, collating data and presentation of data in real-time.
- Also connected with the processing module (114) is the network module (110). Further, the network module (110) is configured to establish a wireless communication network (118) to enable wireless communication between the first scanning zone (102) and the first HMD (120).
- the network module (110) may include one or more of, but not limited to, a WiFi module and a GSM/GPRS module. Therefore, the wireless communication network (118) may be, but is not limited to, a wireless intranet network, WiFi internet or a GSM/GPRS based 2G, 3G, 4G, LTE or 5G communication network.
- a data repository may also be connected with the system (100). The data repository may be, but is not limited to, a local or a cloud-based storage.
- the system (100) comprises the Mixed Reality based first Head Mounted Device (HMD) (120) connected with the first scanning zone (102) via the wireless communication network (118).
- the first HMD (120) may be envisaged to include capabilities of generating an augmented reality environment, a mixed reality environment and an extended reality environment that let a user interact with digital content within the environment generated in the first HMD (120).
- the first HMD (120) is envisaged to be worn by the user and therefore, may be provided with, but not limited to, one or more bands, straps and locks for mounting on the head; or may even be provided as smart glasses to be worn just like spectacles.
- the first HMD (120) may include components (not shown) selected from, but not limited to, an optical unit having one or more lenses, one or more reflective mirrors & a display unit; a sensing unit having one or more sensors & an image acquisition device; an audio unit comprising one or more speakers and one or more microphones; a user interface; a wireless communication module and a microprocessor.
- the optical unit is envisaged to provide a high resolution and wider field of view.
- the display unit may comprise a Liquid Crystal on Silicon (LCoS) display and a visor.
- the one or more sensors may be selected from, but not limited to, an RGB sensor, a depth sensor, an eye tracking sensor, an EM sensor, an ambient light sensor, an accelerometer, a gyroscope and a magnetometer.
- the image acquisition device is selected from one or more of, but not limited to, omnidirectional cameras, wide angle stereo vision camera, RGB-D camera, digital cameras, thermal cameras, Infrared cameras and night vision cameras.
- the one or more microphones in the audio unit are configured to capture binaural audio along with the motion of the user, as well as 3D stereo sound with acoustic source localization with the help of an IMU.
- the audio unit may also implement background noise cancellation techniques to further enhance the experience.
- the one or more speakers may have an audio projection mechanism that projects sound directly to the concha of an ear of the user and reaches an ear canal after multiple reflections.
- the first HMD (120) may further include one or more ports configured to enable a wired connection between one or more external sources and the first HMD (120).
- the one or more ports may be, but are not limited to, micro-USB ports, USB Type-C ports and HDMI ports.
- the wireless communication module is configured to connect with the wireless communication network (118) to enable wireless communication between the first scanning zone (102) and the first HMD (120). Additionally, it may also connect the first HMD (120) with other available wireless networks for sending and receiving information wirelessly.
- the first HMD (120) includes a microprocessor that may be a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory and provides results as output.
- the microprocessor may contain both combinational logic and sequential digital logic.
- the microprocessor is the brain of the first HMD (120) and is configured to facilitate operation of each of the components of the first HMD (120).
- the first HMD (120) may also implement artificial intelligence and machine learning based technologies for, but not limited to, data analysis, collating data and presentation of data in real-time.
- FIG. 2 illustrates a method (200) for teleportation for enhanced Audio-Visual Interaction in the Mixed Reality (MR) using the Head Mounted Device (HMD), in accordance with an embodiment of the present invention.
- the method (200) starts at step 210, by receiving dynamic depth data, dynamic thermal imaging data and the audio data of the first subject (116) present at a first location, from the one or more depth sensors (104), the one or more thermal cameras (106) and the one or more microphones (108) respectively.
- FIGs 3A-3B illustrate information flow diagrams of a one-way teleportation using the system (100) of FIG. 1 and the method (200) of FIG. 2, in accordance with an embodiment of the present invention.
- the first subject (116) is a man standing in the first scanning zone (102) at the first location.
- the one or more depth sensors (104), the one or more thermal cameras (106) and the one or more microphones (108) capture the dynamic depth data, the dynamic thermal imaging data and the audio data respectively, and send the same to the processing module (114).
- the processing module (114) generates a first dynamic spline-based 3D model of the first subject (116) based on the dynamic depth data and the dynamic thermal imaging data being received.
- the first dynamic spline-based 3D model comprises a plurality of splines and mathematical curves along with their parameters, representing the curves and surfaces of the first subject (116), wherein each spline is controlled by a plurality of control points.
- the plurality of control points enable the plurality of splines to form/take a shape and orientation of the first subject (116) in a three-dimensional space.
- the first dynamic spline-based 3D model replicates a shape, a size, an orientation and dynamic movements of the first subject (116) in real-time.
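- The disclosure does not fix a particular spline family. As a hedged sketch, a cubic B-spline fit (here via SciPy's splprep/splev) shows how a sampled body contour reduces to a compact parameter set of knots, control-point coefficients and degree that the receiver can re-evaluate at any resolution; the randomly generated contour below is only a stand-in for real scan data:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Stand-in for one body contour sampled from the fused depth/thermal
# scan: 50 points in 3D (x, y, z), in metres.
rng = np.random.default_rng(0)
scan_points = np.cumsum(rng.normal(size=(3, 50)), axis=1) * 0.01

# Fit a cubic B-spline. `tck` packs the knots, the control-point
# coefficients and the degree: a compact parametric description.
tck, u = splprep(scan_points, s=0.001, k=3)

# Only `tck` (a few dozen floats) would need to be transmitted; the
# receiver re-evaluates the smooth curve at whatever density it needs.
x, y, z = splev(np.linspace(0, 1, 500), tck)
```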
- the processing module (114) synchronizes (hereinafter referred to as "syncs" or "synced") the audio data of the first subject (116) with the dynamic movements of the first dynamic spline-based 3D model.
- the audio data comprises a voice of the man in figure 3A, so the voice can be synced with the lip movements in real time.
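- One conceivable realization of this sync step (purely a sketch; the pairing strategy, function name and skew tolerance are assumptions) pairs each spline-model frame with the audio chunk whose capture timestamp is nearest:

```python
import bisect

def sync_audio_to_frames(audio_chunks, model_frames, max_skew_s=0.020):
    """Pair each spline-model frame with the audio chunk whose capture
    timestamp is closest, keeping voice and lip movement aligned to
    within `max_skew_s` seconds. Both inputs are lists of
    (timestamp_s, payload) tuples sorted by timestamp."""
    if not audio_chunks:
        return []
    audio_times = [t for t, _ in audio_chunks]
    paired = []
    for t_frame, frame in model_frames:
        i = bisect.bisect_left(audio_times, t_frame)
        # consider the chunks just before and just after the frame time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(audio_chunks)]
        j = min(candidates, key=lambda k: abs(audio_times[k] - t_frame))
        if abs(audio_times[j] - t_frame) <= max_skew_s:
            paired.append((frame, audio_chunks[j][1]))
    return paired
```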
- the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data is sent via the wireless communication network (118) to the first HMD (120).
- the first dynamic spline-based 3D model is generated and transmitted more easily and efficiently without colour or RGB cameras, unlike the prior art. This increases the overall efficiency of the system (100), minimizes the latency and reduces the computation and network load, because prior teleportation systems are focused on transmitting 3D volumetric data, point clouds or meshes for every instance (which might not be feasible even with the high-speed data networks presently available to the general public).
- instead of streaming millions of voxels or 3D points at every instance, the processing module (114) only streams the parameters of the mathematical curves to denote changes.
- the encrypted parameters also act as a much better way to protect user privacy against data breaches as compared to images and appearance-based volumetric data. Therefore, the network bandwidth and processing requirements of the present invention are much lower than those of presently available state-of-the-art teleportation systems for mixed reality headsets.
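- A hedged sketch of such a parameter-only update (the movement threshold, packet layout and function name are assumptions, not the patented encoding): per frame, only the control points that actually moved are sent.

```python
import numpy as np

def frame_delta(prev_ctrl: np.ndarray,
                curr_ctrl: np.ndarray,
                threshold_m: float = 0.001):
    """Return only the indices and new values of control points that
    moved more than `threshold_m` metres since the previous frame --
    a handful of floats rather than a dense point cloud. Inputs are
    (N, 3) arrays of spline control points."""
    moved = np.linalg.norm(curr_ctrl - prev_ctrl, axis=1) > threshold_m
    idx = np.nonzero(moved)[0]
    return idx, curr_ctrl[idx]

# For a model with a few thousand control points, a typical update is
# kilobytes per frame instead of the megabytes a voxel stream needs.
```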
- a single coloured image of the first subject (116) is received from an external source at the first HMD (120) present at the second location.
- the generated first dynamic spline-based 3D model does not include any colour information of the first subject (116), so the coloured image of the first subject (116) is separately received at the first HMD (120) from the external source.
- the external source may be, but not limited to, an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
- the received coloured image may be scanned by the image acquisition device of the first HMD (120).
- the coloured image is directly received in the first HMD (120) in one of the formats such as, but not limited to, .png, .jpg, .jpeg, .gif etc.
- more than one coloured image, or even videos, may be received for processing and extracting the colour and texture information of the first subject.
- the first HMD (120) processes the coloured image for extracting colour and texture information of the first subject (116).
- the colour and texture information may include physical and visual characteristics defining the complete appearance of the first subject (116), including, but not limited to, visual features, skin, colours etc. of the first subject (116).
- the colour and texture information may include skin texture, colour, facial features, hair colour, eye colour, dress colour etc.
- physical characteristics such as, but not limited to, size, shape, structure etc. of the first subject (116) are also extracted from the coloured image.
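- The extraction pipeline is not specified in the disclosure. A minimal sketch using OpenCV (the GrabCut seeding rectangle and the coarse colour summary are assumptions) might separate the subject from the background of the single image and keep its pixels as the texture source:

```python
import cv2  # OpenCV
import numpy as np

def extract_appearance(image_path: str):
    """Hypothetical one-time appearance extraction at the receiver:
    segment the subject from the single coloured image and keep a
    texture image plus a coarse mean-colour summary."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    # GrabCut foreground extraction seeded with a rough full-frame
    # rectangle; a production system would use a person detector.
    mask = np.zeros(bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    rect = (10, 10, bgr.shape[1] - 20, bgr.shape[0] - 20)
    cv2.grabCut(bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    texture = bgr * fg[:, :, None]             # subject pixels only
    mean_colour = cv2.mean(bgr, mask=fg)[:3]   # coarse colour summary (B, G, R)
    return texture, mean_colour
```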
- the steps 240 and 250 may be separately carried out by the first HMD (120), even before steps 210-230 or even after the next step of the present method (200), without departing from the scope of the present invention.
- the coloured photo may be sent and processed before the teleportation session.
- the first HMD (120) receives the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data via the wireless communication network (118). The same has been illustrated in figure 3B.
- the first HMD (120) processes and applies the colour and texture information on the first dynamic spline-based 3D model of the first subject (116). This generates a first dynamic holographic projection (320) that replicates an appearance, the audio and the dynamic movements of the first subject (116) in real time.
- the first dynamic holographic projection (320) may be understood as a realistic holographic reconstruction of the first subject (116) created with minimal latency in real-time.
- the coloured image received at step 240 may not be a current image of the person, and the person may be wearing different clothes in the coloured image than those actually worn during the teleportation session. So, before applying the extracted colour and texture information, the first HMD (120) compares the physical characteristics, including shape, size, body structure etc., of the first subject (116) extracted from the coloured image with the physical characteristics, including shape, size, body structure, face structure etc., of the first dynamic spline-based 3D model of the first subject (116).
- if the compared physical characteristics match to a predetermined level, say 95% and above, the first HMD (120) processes and applies the colour and texture information from the coloured image on the first dynamic spline-based 3D model of the first subject (116).
- such a feature is advantageous in a scenario where the first subject (116) needs to be present for the teleportation session but does not have formal attire at hand; he/she may then simply send a coloured image of himself/herself wearing formals, and the first dynamic holographic projection (320) of the first subject (116) would be generated wearing the clothes worn in the coloured image.
- the audio and body movements are still transmitted in real-time.
- the predetermined level of above 95% prevents any possible misuse of the feature. Apart from this, several other measures may easily be implemented for preventing any misuse.
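- A sketch of how such a gate might be computed (the feature set, the ratio-based similarity and the function name are illustrative assumptions): compare coarse body measurements extracted from the image against the same measurements taken from the live spline model, and allow the texture transfer only when all of them agree to the predetermined level.

```python
def texture_transfer_allowed(img_features: dict,
                             model_features: dict,
                             threshold: float = 0.95) -> bool:
    """Gate the texture transfer: every coarse measurement (e.g.
    height, shoulder width) extracted from the coloured image must
    agree with the live spline model to within `threshold`."""
    for key, img_value in img_features.items():
        model_value = model_features[key]
        similarity = min(img_value, model_value) / max(img_value, model_value)
        if similarity < threshold:
            return False
    return True

# e.g. 1.78 m vs 1.80 m height and 0.46 m vs 0.45 m shoulder width
# both agree above 0.95, so the transfer would be allowed here.
print(texture_transfer_allowed({"height_m": 1.78, "shoulder_m": 0.46},
                               {"height_m": 1.80, "shoulder_m": 0.45}))
```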
- the single coloured image can be received by any medium before the start of the teleportation session.
- the AI-based system efficiently adds the colour and texture information from the single coloured image to this data to create the holographic projection. This is a one-time activity.
- once the spline-based 3D model and holographic projection are generated, only changes in the orientation and movements of the subject are applied to the spline-based 3D model and the resultant holographic projection.
- the generated first dynamic holographic projection (320) of the first subject (116) is displayed in a mixed reality (310) space of the first HMD (120). Simultaneously, as the first subject (116) speaks or there is audio from the first scanning zone (102), the same is played in synchronization with the first subject (116), in the audio unit of the first HMD (120) in real-time. In this manner, the first subject (116) is easily and efficiently teleported from the first location to the second location where the first HMD (120) is present. The same can be seen in figure 3B, as the first dynamic holographic projection (320) of the man is displayed in the mixed reality (310) space of the first HMD (120).
- a major advantage of the present invention is that it captures and streams rapid dynamic movements with minimal latency as a very simple data stream, and no high-speed colour camera is required. Additionally, rapid dynamic movements like small vibrations can also be captured and replicated remotely in the holographic projection in real time. For example, the present invention captures and replicates even the smallest dynamic movements, as subtle as breathing, that add the next level of realism to the teleported holographic projections.
- the first HMD (120) may receive the odour data of the first subject (116), captured by the one or more odour sensors (112), from the first scanning zone (102). Accordingly, the first HMD (120) generates an odour replicating the odour of the first subject (116) during the teleportation session, based on the received odour data, using one or more odour generators in the first HMD (120). For example, if the first subject (116) is a person wearing a perfume, the scent of the perfume may be replicated by the first HMD (120) during the teleportation, to add the realism of a physical meeting.
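- The odour encoding is left open by the disclosure. A deliberately speculative sketch (the compound names, channel model and function name are all assumptions) would quantise the transmitted sensor reading into drive levels for a handful of odour-generator cartridges:

```python
def odour_drive_levels(sensor_reading: dict, num_channels: int = 4) -> dict:
    """Map a transmitted odour reading (compound name -> normalised
    intensity) onto at most `num_channels` generator cartridges,
    clamping each drive level to [0, 1]. Entirely hypothetical."""
    strongest = sorted(sensor_reading.items(), key=lambda kv: -kv[1])
    return {name: max(0.0, min(1.0, level))
            for name, level in strongest[:num_channels]}

# e.g. odour_drive_levels({"citrus": 0.8, "musk": 0.3, "woody": 0.1})
```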
- The above-mentioned implementation of figure 2 and figures 3A-3B is an example of a one-way teleportation, where the first subject (116) in the first scanning zone (102) at the first location is teleported to the first HMD (120) at the second location, where the user wearing the first HMD (120) can see the first dynamic holographic projection (320) of the first subject (116) in the respective mixed reality space (310). So, the user can feel the realistic holographic presence of the first subject (116), but the first subject (116) does not see/feel the user of the first HMD (120). Therefore, the present invention also provides another implementation involving a two-way teleportation using the system (100) and the same methodology of transmitting and generating the dynamic holographic projection.
- Figure 4 illustrates exemplary implementation of a two-way teleportation using the system (100), in accordance with another embodiment of the present invention.
- this example may continue from the previously mentioned example, where it was assumed that the first subject (116) is a "man" present at the first location, standing in the first scanning zone (102), and the first HMD (120) present at the second location received the first dynamic holographic projection (320) of the first subject (116) (i.e. the man).
- the first HMD (120) (at the second location) is worn by a "woman", and she is the second subject (404) in this case.
- the woman sees the first dynamic holographic projection (320) of the man in the respective mixed reality (310) space of the first HMD (120). Additionally, the woman (the second subject (404)) is also being scanned in a second scanning zone (402) present at the second location. Further, the second scanning zone (402) is connected with a second HMD (406), worn by the first subject (116) (i.e. the man) present at the first location, via the same wireless communication network (118).
- the second scanning zone (402) and the first scanning zone (102) are envisaged to have the same or similar components and functionalities. The same may be said for the first HMD (120) and the second HMD (406).
- a second dynamic holographic projection (410) of the second subject (404) is generated and displayed in a respective mixed reality (408) space of the second HMD (406) worn by the man.
- in this manner, the first subject (116), i.e. the man, and the second subject (404), i.e. the woman, can see and interact with each other's dynamic holographic projections in real-time.
- the system (100) may also include a haptic feedback mechanism configured to generate a realistic touch and grab feel during interactions between the first/second subjects and the first/second dynamic holographic projections in the mixed reality space.
- as the first subject (116), i.e. the man, sees and interacts with the second dynamic holographic projection (410) of the second subject (404), i.e. the woman, in the mixed reality space (408) of the second HMD (406) at the first location, and the second subject (404), i.e. the woman, sees and interacts with the first dynamic holographic projection (320) of the first subject (116), i.e. the man, in the mixed reality space (310) of the first HMD (120) at the second location, a two-way teleportation is realised using the present invention.
- such features raise the level of realism offered by the present invention above that of any existing teleportation system.
- one-way or two-way teleportation sessions involving multiple subjects, multiple scanning zones and multiple HMDs at multiple locations may also be easily carried out using the present system (100) and method (200).
- the first dynamic holographic projection (320) may be generated and displayed at multiple HMDs at multiple locations at the same time. So, users of each of the multiple HMDs will be able to see and interact with the first dynamic holographic projection (320) of the first subject (116) in the respective mixed reality spaces of their HMDs. This would be similar to a live 3D broadcast. Such an implementation would be beneficial for scenarios where a person would otherwise have to travel long distances to give the same presentations, seminars etc. to different audiences present at different places.
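- A hedged sketch of such a one-to-many fan-out (the queue-per-HMD design and all names are assumptions): because each update is only a small set of spline parameters, duplicating the stream per receiver is cheap.

```python
import json
from queue import Queue

def broadcast_frame(frame: dict, hmd_queues: list) -> None:
    """Push the same compact spline-parameter frame to every
    subscribed HMD session; the per-receiver cost is a small JSON
    payload rather than duplicated volumetric data."""
    payload = json.dumps(frame)   # e.g. {"indices": [...], "points": [...]}
    for q in hmd_queues:          # one Queue per receiving HMD
        q.put(payload)

# Usage sketch: three HMDs subscribed to one presenter.
queues = [Queue() for _ in range(3)]
broadcast_frame({"indices": [4, 7],
                 "points": [[0.10, 0.20, 0.30], [0.00, 0.10, 0.20]]}, queues)
```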
- the present system (100) and method (200) may also be implemented for group teleportation. This may be similar to a group call but with a real life-like experience.
- multiple dynamic spline-based 3D models and multiple coloured images (separately) of multiple subjects involved in the teleportation session may be received at the multiple HMDs via the wireless communication network (118).
- multiple dynamic holographic projections of the multiple subjects may be generated and received at each of the HMDs and displayed in the respective mixed reality spaces. This enables each user to experience a realistic group meeting or gathering experience.
- the system (100) may enable the users of the HMDs to change a background of the mixed reality to another location, such as a beach, a mountainside, or any location in any part of the world.
- the respective audio units of the respective HMDs may play 3D stereo or spatial sound effects, and the respective one or more odour generators of the respective HMDs may produce an odour based on the selected location.
- the respective audio units may play sounds of the sea and waves, and odours such as that of dry/wet sand or other odours common in a beach area may be produced.
- the system (100) may have audio-visual and odour data of multiple locations pre-stored in the data repository. Such features make group meetings and get-togethers more fun and enjoyable.
- the present invention offers a number of advantages. Firstly, the present invention provides a simple, cost-effective and easy-to-use solution to the problems of the prior art. Further, the present invention eliminates the barriers of high bandwidth and extremely fast data speed requirements for teleportation using the prior art, which may even be beyond feasible limits. The present invention achieves the above-mentioned solution by using dynamic spline-based 3D models that are generated and transmitted more easily and efficiently without colour or RGB cameras. This increases the overall efficiency of the system, decreases the latency and reduces the computation and network load, because the prior art is focused on transmitting 3D volumetric data, point clouds or meshes for every instance.
- instead of streaming millions of voxels or 3D points at every instance, the processing module only streams the parameters of the mathematical curves to denote changes.
- the encrypted parameters also act as a much better way to protect user privacy against data breaches as compared to images and appearance-based volumetric data.
- a major advantage of the present invention is that it captures and streams rapid dynamic movements with minimal latency as a very simple data stream, and no high-speed colour camera is required. Additionally, rapid dynamic movements like small vibrations can also be captured and replicated remotely in the holographic projection in real time. For example, the present invention captures and replicates even the smallest dynamic movements, as subtle as breathing, that add the next level of realism to the teleported holographic projections. Furthermore, as no colour cameras are required and the present invention uses one or more depth sensors and one or more thermal cameras, the present invention can work in low or no light conditions as well. The system can work seamlessly even at 1 lux illumination.
- the present invention finds a number of applications in multiple fields.
- the present invention can simply transform the calling experience to the next level and replace the presently popular video calling functionality.
- the present invention can completely change the present way of how meetings, seminars and events take place.
- a person can not only attend events but also address events/meetings without being physically present.
- the system and the method enable a person to be present at multiple places at the same time.
- such a system and method would save a lot of money, effort and resources for people who have to travel long distances, such as across countries and states, to attend one or two events.
- it can transform the field of education, as more guest lectures from renowned teachers, lecturers and professors may be arranged in schools and colleges.
- the present invention encourages such beneficial collaborations across the world, thereby making the world smaller and bringing people closer.
- the wireless communication network used in the system can be a short-range or a long-range communication network, and may be a wired or wireless communication network.
- the communication interface includes, but not limited to, a serial communication interface, a parallel communication interface or a combination thereof.
- the Head Mounted Devices may also include more components such as, but not limited to, a Graphics Processing Unit (GPU) or any other graphics generation and processing module.
- the GPU may be a single-chip processor primarily used to manage and boost the performance of video and graphics such as 2-D or 3-D graphics, texture mapping, hardware overlays etc.
- the GPU may be selected from, but not limited to, NVIDIA, AMD, Intel and ARM for real time 3D imaging.
- the term "module" refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language, such as, for example, Java, C, Python or assembly.
- One or more software instructions in the modules may be embedded in firmware, such as an EPROM.
- modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors.
- the modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
- any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof.
- the techniques of the present disclosure might be implemented using a variety of technologies.
- the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium.
- Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media.
- Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Optics & Photonics (AREA)
- Computer Hardware Design (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A system (100) for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD) comprises a first scanning zone (102) present at a first location and a mixed reality-based first HMD (120) present at a second location, connected with the first scanning zone (102) via the wireless communication network (118). The first scanning zone (102) comprises one or more depth sensors (104) to capture depth data of a first subject (116); one or more thermal cameras (106) to capture thermal imaging data of the first subject (116); one or more microphones (108) to capture audio data of the first subject (116); a network module (110) to establish a wireless communication network (118); and a processing module (114) connected with the one or more depth sensors (104), the one or more thermal cameras (106), one or more microphones (108) and the network module (110).
Description
A SYSTEM AND A METHOD FOR TELEPORTATION FOR ENHANCED AUDIO-
VISUAL INTERACTION IN MIXED REALITY (MR) USING A HEAD MOUNTED
DEVICE (HMD)
FIELD OF THE INVENTION
[0001] Embodiments of the present invention generally relate to mixed reality-based telepresence technologies, and more particularly to a system and a method for teleportation for enhanced audio-visual interaction in Mixed Reality (MR) using a Head Mounted Device (HMD).
BACKGROUND OF THE INVENTION
[0002] The most intuitive modality for sensing the presence of a living or non-living entity incorporates the usage of the auditory and visual senses of humans. Despite huge innovations in the telecommunication industry over the past decades, from the rise of mobile phones to the emergence of video conferencing, these technologies are far from delivering an experience close to physical co-presence. For example, despite a myriad of telecommunication technologies, huge amounts of money are spent annually on travelling to meet people around the globe. Indeed, virtual human telepresence has been cited as key in battling carbon emissions in the future. Yet, despite progress towards deploying feasible telepresence technology, a great deal of time, money and CO2 is spent on getting on planes to meet face-to-face. Much of the subtlety of face-to-face co-located communication (eye contact, body language, physical presence) is still lost in even high-end audio and video conferencing. There is still a clear gap between even the highest fidelity telecommunication tools available today and physically being there.
[0003] Teleporting a living entity with its accurate real-world pose to interact with the surroundings of another dimension of space remains a challenging task due to the need for sophisticated hardware. The existing solutions towards human telepresence involve a computational overhead which makes them less feasible to be deployed for public usage. The major constraint towards deployment of such an invention for public usage lies in the sophisticated hardware setup, which requires the availability of multiple high-speed colour cameras and microphone devices around the subject to completely reconstruct a dense 3D model of the human body with an exact representational pose.
[0004] The presently available teleportation systems involve transmitting 3D volumetric data, point clouds or meshes for every instance via a wireless network such as the internet, which increases the latency as well as the computation and network load on the system, thereby decreasing the overall efficiency of the system. Millions of voxels or 3D points are required to be streamed at every instance, which requires huge network bandwidth and processing that is not feasible at all times, especially for the general public, who mostly use 3G, LTE or 4G networks.
[0005] Therefore, there is a need in the art for a system and a method for teleportation for enhanced audio-visual interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), which bridges the gap between people located all around the globe and enables them to remotely interact with each other seamlessly, in an altogether new kind of experience, using presently available internet speeds.
OBJECT OF THE INVENTION
[0006] An object of the present invention is to provide a system and a method for teleportation for enhanced audio-visual interaction in Mixed Reality (MR) using a Head Mounted Device (HMD).
[0007] Another object of the present invention is to utilise a fusion of depth sensors (such as phase-based or time-of-flight based sensors) and thermal cameras to capture the shape and exact motion of a subject.
[0008] Yet another object of the present invention is to generate dynamic spline-based 3D models without colour information, which may be easily and efficiently transmitted via wireless communication networks.
[0009] Yet another object of the present invention is to separately receive and combine the colour and texture information at a receiver HMD to increase the overall efficiency of the system, decrease the latency and reduce the computation and network load by eliminating the requirement of transmitting 3D volumetric data, point cloud or meshes for every instance.
[0010] Yet another object of the present invention is to enable one-to-one, one-to- many and many-to-many teleportation sessions seamlessly along with sensory feedback like touch and odour as well.
[0011] Yet another object of the present invention is to capture and stream rapid dynamic movements with minimal latency as a very simple data stream, without requiring any high-speed colour camera.
[0012] Yet another object of the present invention is to increase data efficiency of the system and reduce the high computation, network as well as bandwidth requirements.
[0013] Yet another object of the present invention is to utilise one or more depth sensors and one or more thermal cameras for scanning so as to enable the system to work in low light or no light conditions as well.
SUMMARY OF THE INVENTION
[0014] According to a first aspect of the invention, there is provided a system for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD). The system comprises, but not limited to, a first scanning zone present at a first location having one or more depth sensors to capture depth data of a first subject; one or more thermal cameras to capture thermal imaging data of the first subject; one or more microphones to capture audio data of the first subject; a network module to establish a wireless communication network; and a processing module connected with the one or more depth sensors, the one or more thermal cameras, one or more microphones and the network module. The system further comprises a mixed reality-based first HMD present at a second location, connected with the first scanning zone via the wireless communication network.
[0015] Furthermore, the processing module is configured to receive dynamic depth data, dynamic thermal imaging data and the audio data of the first subject captured by the one or more depth sensors, the one or more thermal cameras and the one or more microphones respectively in the first scanning zone; generate a first dynamic spline-based 3D model of the first subject based on the received dynamic depth data and the dynamic thermal imaging data thereby replicating a shape, a size, an orientation and dynamic movements of the first subject in real-time and sync the audio data of the first subject with the dynamic movements of the first
dynamic spline-based 3D model; and send the first dynamic spline-based 3D model of the first subject along with synced audio data via the wireless communication network to the first HMD. Additionally, the first HMD is configured to receive a single coloured image of the first subject from an external source; process the coloured image for extracting colour and texture information of the first subject; receive the first dynamic spline-based 3D model of the first subject along with the synced audio data via the wireless communication network from the first processing module; process and apply the colour and texture information on the first dynamic spline- based 3D model of the first subject to generate a first dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the first subject in real-time; and display the first dynamic holographic projection of the first subject in a mixed reality space of the first HMD and play the synced audio of the first subject in an audio unit of the first HMD in real-time, thereby teleporting the first subject from the first location to the second location.
[0016] In accordance with an embodiment of the present invention, the first scanning zone further comprises one or more odour sensors connected with the processing module and the processing module is further configured to receive an odour data of the first subject captured by the one or more odour sensors and send the odour data via the wireless communication network. Moreover, the first HMD is further configured to receive and generate an odour replicating the odour of the first subject based on the received odour data using one or more odour generators, thereby adding more realism to the first dynamic holographic projection of the first subject. In another aspect, the first scanning zone may include one or more haptic sensors.
[0017] In accordance with an embodiment of the present invention, the system further comprises a second scanning zone at the second location of the first HMD, configured to generate and send a second dynamic spline-based 3D model of the second subject along with synced audio data via the wireless communication network; and a second HMD at the first location of the first subject. Further, the second HMD is configured to receive and process the coloured image of the second subject for extracting colour and texture information of the second subject; receive the second dynamic spline-based 3D model along with synced audio data via the wireless communication network; process and apply the colour and texture information on the second dynamic spline-based 3D model of the second subject to generate a second dynamic holographic projection that replicates an
appearance, the audio and the dynamic movements of the second subject in real time; and display the second dynamic holographic projection of the second subject in a mixed reality space of the second HMD and play the synced audio of the second subject in an audio unit of the second HMD in real-time, thereby teleporting the subject from the second location to the first location of the first HMD and enabling a two-way communication between the first HMD and the second HMD.
[0018] In accordance with an embodiment of the present invention, the first subject and the second subject are selected from living and non-living things such as humans, plants, objects or a combination thereof.
[0019] In accordance with an embodiment of the present invention, the one or more depth sensors are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers.
[0020] In another aspect, the Head Mounted Devices (HMDs) may include one or more haptic sensors.
[0021] In accordance with an embodiment of the present invention, the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
[0022] According to a second aspect of the present invention, there is provided a method for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD). The method comprises receiving dynamic depth data, dynamic thermal imaging data and the audio data of a first subject present at a first location, from one or more depth sensors, one or more thermal cameras and one or more microphones respectively; generating a first dynamic spline-based 3D model of the first subject based on the dynamic depth data and the dynamic thermal imaging data being received, thereby replicating a shape, a size, an orientation and dynamic movements of the first subject in real-time and syncing the audio data of the first subject with the dynamic movements of the first dynamic spline-based 3D model; sending the first dynamic spline-based 3D model of the first subject along with synced audio data via a wireless communication network to a first HMD; receiving a coloured image of the first subject from an external source at the first HMD present at a second location; processing the coloured image for extracting colour and texture information of the first subject; receiving the first dynamic spline-based 3D model of the first subject along with the synced audio
data via the wireless communication network at the first HMD; processing and applying the colour and texture information on the first dynamic spline-based 3D model of the first subject to generate a first dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the first subject in real-time; and displaying the first dynamic holographic projection of the first subject in a mixed reality space of the first HMD and playing the synced audio of the first subject in an audio unit of the first HMD in real-time, thereby teleporting the first subject from the first location to the second location of the first HMD.
[0023] In accordance with an embodiment of the present invention, the method further comprises receiving odour data of the first subject from the one or more odour sensors, at the first HMD; and generating an odour replicating the odour of the first subject based on the received odour data using one or more odour generators in the first HMD, thereby adding more realism to the holographic projection of the first subject.
[0024] In accordance with an embodiment of the present invention, the method further comprises a step of enabling a two-way teleportation by generating and sending a second dynamic spline-based 3D model of a second subject present at the second location, along with synced audio data via the wireless communication network; receiving and processing the coloured image of the second subject at a second HMD present at the first location, for extracting colour and texture information of the second subject; receiving the second dynamic spline-based 3D model along with synced audio data via the wireless communication network at the second HMD; processing and applying the colour and texture information on the second dynamic spline-based 3D model of the second subject to generate a second dynamic holographic projection that replicates an appearance, the audio and the dynamic movements of the second subject in real-time; and displaying the second dynamic holographic projection of the second subject in a mixed reality space of the second HMD and playing the synced audio of the second subject in an audio unit of the second HMD in real-time, thereby teleporting the second subject from the second location to the first location of the first subject and enabling a two-way communication between the first HMD and the second HMD.
[0025] In accordance with an embodiment of the present invention, the first subject and the second subject are selected from living and non-living things such as humans, plants, objects or a combination thereof.
[0026] In accordance with an embodiment of the present invention, the one or more depth sensors are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers.
[0027] In accordance with an embodiment of the present invention, the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0029] These and other features, benefits and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
[0030] Fig. 1 illustrates a system for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), in accordance with an embodiment of the present invention;
[0031] Fig. 2 illustrates a method for teleportation for enhanced Audio-Visual Interaction in the Mixed Reality (MR) using the Head Mounted Device (HMD), in accordance with an embodiment of the present invention;
[0032] Figs. 3A-3B illustrate information flow diagrams of a one-way teleportation using the system of Fig. 1 and the method of Fig. 2, in accordance with an embodiment of the present invention; and
[0033] Fig. 4 illustrates an exemplary implementation of a two-way teleportation using the system of Fig. 1 and the method of Fig. 2, in accordance with another embodiment of the present invention.
DETAILED DESCRIPTION OF DRAWINGS
[0034] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to) rather than the mandatory sense (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0035] In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
[0036] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[0037] Figure 1 illustrates a system (100) for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), in accordance with an embodiment of the present invention. As shown in figure 1, the system (100) comprises a first scanning zone (102) present at a first location, connected with a first HMD (120) present at a second location, via a wireless communication network (118). Herein, the first location and the second location may be closely or remotely located with respect to each other. In that sense, the first location and the second location may be, but are not limited to, different areas of a same building, or located in different cities, different states or even different countries.
[0038] The first scanning zone (102) is adapted to scan or capture data associated with a first subject (116). The first subject (116) is selected from living and non-living things such as, but not limited to, humans, plants, objects or a combination thereof. For example, the first subject (116) may be a human being holding a book or a flower in his/her hand in the first scanning zone (102). Accordingly, the first scanning zone (102) comprises, but is not limited to, one or more depth sensors (104), one or more thermal cameras (106), one or more microphones (108), a network module (110) and a processing module (114) connected with each of the one or more depth sensors (104), the one or more thermal cameras (106), the one or more microphones (108) and the network module (110). In one embodiment, the first scanning zone also comprises one or more odour sensors (112).
[0039] Herein, the one or more depth sensors (104) are configured to capture depth data of the first subject (116). The one or more depth sensors (104) are selected from one or more of, but not limited to, Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers. In one embodiment, these sensors may be used in combination and disposed at multiple angles and locations to cover and capture depth data of every aspect of the first subject (116). In another embodiment, only LIDARs may be used. In general, Light Detection and Ranging (LIDAR) is commonly used for 3D sensing. A LIDAR measures distances (ranging) by illuminating a target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3D representations of the target. Herein, the one or more depth sensors continuously capture the depth data, covering even the smallest movements of the first subject (116), such as breathing.
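By way of illustration only (the constant and the 20 ns example value below are illustrative assumptions, not values taken from this specification), the following minimal Python sketch shows how a pulse-based LIDAR converts a laser round-trip time into a range measurement:

```python
# Hypothetical sketch: converting a laser round-trip time into a range.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_range(round_trip_time_s: float) -> float:
    """Range to the target from a laser pulse round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half the total optical path length.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(tof_range(20e-9))  # a 20 ns return corresponds to ~3 metres
```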
[0040] Further, the one or more thermal cameras (106) are configured to detect the dynamic thermal imaging data of the first subject (116) in the scanning zone. The one or more thermal cameras (106) also continuously capture the thermal imaging data, covering every aspect of the first subject (116). In general, a thermal camera detects radiation in the long-infrared range of the electromagnetic spectrum and produces images of that radiation, called thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero according to the black body radiation law, thermal imaging data makes it possible to see the first subject (116) with or without visible illumination.
[0041] It will be appreciated by a skilled addressee that there are no colour or RGB cameras or sensors to capture colour information in the first scanning zone (102). This is because the present invention does not require any coloured image or volumetric data to be continuously captured or transmitted from the first scanning zone (102) for the teleportation implementation. The present invention only uses a fusion of one or more depth sensors (104) (such as phase-based or time-of-flight based sensors) and one or more thermal cameras (106) to capture the shape and exact motion of the first subject (116).
[0042] Furthermore, the one or more microphones (108) are configured to capture audio data of the first subject (116). The one or more microphones (108) capture binaural audio along with the motion of the first subject (116) and 3D stereo sound. In one embodiment, the first scanning zone (102) may also implement acoustic source localization and background noise cancellation techniques to further enhance the experience. In accordance with an embodiment, the one or more odour sensors (112) are connected with the processing module (114). The one or more odour sensors (112) are configured to capture odour data of the first subject (116) within the first scanning zone (102).
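As an illustrative aside (the function, sample buffers and bearing formula below are assumptions, not part of the claimed system), one common ingredient of acoustic source localization is estimating the time difference of arrival (TDOA) between two microphones; a minimal Python sketch, assuming two time-aligned sample buffers, is:

```python
import numpy as np

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate: int) -> float:
    """Estimate the arrival-time offset (seconds) between two channels
    by locating the peak of their cross-correlation."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)  # offset in samples
    return lag / sample_rate

# With microphone spacing d and speed of sound c, the source bearing
# theta follows (approximately) from sin(theta) = tdoa * c / d.
```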
[0043] In addition, the processing module (114) is envisaged to include computing capabilities such as a memory unit configured to store machine readable instructions. The machine-readable instructions may be loaded into the memory unit from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and Flash Drives. Alternately, the machine-readable instructions may be loaded in the form of a computer software program into the memory unit. The memory unit in that manner may be selected from a group comprising EPROM, EEPROM and Flash memory. Further, the processing module (114) includes a processor operably connected with the memory unit. In various embodiments, the processor is one of, but not limited to, a general-purpose processor, an application specific integrated circuit (ASIC) and a field-programmable gate array (FPGA). In one embodiment, the processing module (114) may be a part of a dedicated computing device or may be a microprocessor, that is, a multipurpose, clock driven, register based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory and provides results as output.
[0044] The processing module (114) may further implement artificial intelligence and machine learning based technologies for, but not limited to, data analysis, collating data and presentation of data in real-time. Also connected with the processing module (114) is the network module (110). The network module (110) is configured to establish a wireless communication network (118) to enable wireless communication between the first scanning zone (102) and the first HMD (120). In that sense, the network module (110) may include one or more of, but not limited to, a WiFi module and a GSM/GPRS module. Therefore, the wireless communication network (118) may be, but is not limited to, a wireless intranet network, WiFi internet or a GSM/GPRS based 2G, 3G, 4G, LTE or 5G communication network. In accordance with an embodiment of the present invention, a data repository may also be connected with the system (100). The data repository may be, but is not limited to, a local or a cloud-based storage.
[0045] Further, as shown in figure 1, the system (100) comprises the Mixed Reality based first Head Mounted Device (HMD) (120) connected with the first scanning zone (102) via the wireless communication network (118). The first HMD (120) may be envisaged to include capabilities of generating an augmented reality environment, a mixed reality environment and an extended reality environment that lets a user interact with digital content within the environment generated in the first HMD (120). The first HMD (120) is envisaged to be worn by the user and therefore, may be provided with, but not limited to, one or more bands, straps and locks for mounting on the head; or may even be provided as smart glasses to be worn just like spectacles. It will be understood by a person skilled in the art that the below-mentioned components of the first HMD (120) and their description should be considered as exemplary and not in a strict sense. The first HMD (120) may include components (not shown) selected from, but not limited to, an optical unit having one or more lenses, one or more reflective mirrors & a display unit; a sensing unit having one or more sensors & an image acquisition device; an audio unit comprising one or more speakers and one or more microphones; a user interface; a wireless communication module and a microprocessor.
[0046] In accordance with an embodiment of the present invention, the optical unit is envisaged to provide a high resolution and wider field of view. The display unit may comprise a Liquid Crystal on Silicon (LCoS) display and a visor. In accordance with an embodiment of the present invention, the one or more sensors may be selected from, but not limited to, an RGB sensor, a depth sensor, an eye tracking sensor, an EM sensor, an ambient light sensor, an accelerometer, a gyroscope and a magnetometer.
[0047] Furthermore, the image acquisition device is selected from one or more of, but not limited to, omnidirectional cameras, wide angle stereo vision cameras, RGB-D cameras, digital cameras, thermal cameras, infrared cameras and night vision cameras. In accordance with an embodiment of the present invention, the one or more microphones in the audio unit are configured to capture binaural audio along with the motion of the user and 3D stereo sound with acoustic source localization with the help of an IMU. The audio unit may also implement background noise cancellation techniques to further enhance the experience. Furthermore, the one or more speakers may have an audio projection mechanism that projects sound directly to the concha of an ear of the user, reaching the ear canal after multiple reflections.
[0048] In accordance with an embodiment of the present invention, one or more ports are configured to enable a wired connection between one or more external sources and the first HMD (120). The one or more ports may be, but are not limited to, micro-USB ports, USB Type-C ports and HDMI ports. Further, the wireless communication module is configured to connect with the wireless communication network (118) to enable wireless communication between the first scanning zone (102) and the first HMD (120). Additionally, it may also connect the first HMD (120) with other available wireless networks for sending and receiving information wirelessly.
[0049] Further, the first HMD (120) includes a microprocessor that may be a multipurpose, clock driven, register based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory and provides results as output. The microprocessor may contain both combinational logic and sequential digital logic. The microprocessor is the brain of the first HMD (120) and is configured to facilitate operation of each of the components of the first HMD (120). The first HMD (120) may also implement artificial intelligence and machine learning based technologies for, but not limited to, data analysis, collating data and presentation of data in real-time.
[0050] Figure 2 illustrates a method (200) for teleportation for enhanced Audio-Visual Interaction in the Mixed Reality (MR) using the Head Mounted Device (HMD), in accordance with an embodiment of the present invention. It will be appreciated by a skilled addressee that the steps of the method (200) are not limited to a particular order and the order followed here is to be considered as exemplary only. The method (200) starts at step 210, by receiving dynamic depth data, dynamic thermal imaging data and the audio data of the first subject (116) present at a first location, from the one or more depth sensors (104), the one or more thermal cameras (106) and the one or more microphones (108) respectively. The term "dynamic" herein denotes continuously changing data in accordance with an activity of the first subject (116). The method (200) will be understood more clearly by way of an example. Figures 3A-3B illustrate information flow diagrams of a one-way teleportation using the system (100) of Fig. 1 and the method (200) of Fig. 2, in accordance with an embodiment of the present invention. For the example shown in figure 3A, it is assumed that the first subject (116) is a man standing in the first scanning zone (102) at the first location. So, the one or more depth sensors (104), the one or more thermal cameras (106) and the one or more microphones (108) capture the dynamic depth data, the dynamic thermal imaging data and the audio data respectively, and send the same to the processing module (114).
[0051] Then, at step 220, the processing module (114) generates a first dynamic spline-based 3D model of the first subject (116) based on the dynamic depth data and the dynamic thermal imaging data being received. The first dynamic spline-based 3D model comprises a plurality of splines and mathematical curves, along with their parameters, representing the curves and surfaces of the first subject (116), wherein each spline is controlled by a plurality of control points. The plurality of control points enable the plurality of splines to take the shape and orientation of the first subject (116) in a three-dimensional space. In this manner, the first dynamic spline-based 3D model replicates a shape, a size, an orientation and dynamic movements of the first subject (116) in real-time. Further, the processing module (114) synchronizes (hereinafter referred to as "syncs" or "synced") the audio data of the first subject (116) with the dynamic movements of the first dynamic spline-based 3D model. For example, the audio data comprises the voice of the man in figure 3A, so the voice can be synced with his lip movements in real-time.
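For illustration only (the specification does not prescribe a particular spline family, so a uniform cubic B-spline is assumed here), the following Python sketch evaluates one curve segment from four control points, showing how a small set of control points fully determines the geometry:

```python
import numpy as np

# Basis matrix of a uniform cubic B-spline segment; an assumed,
# standard choice, not one mandated by the specification.
B_SPLINE_BASIS = (1.0 / 6.0) * np.array([
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 1.0,  4.0,  1.0, 0.0],
])

def spline_point(control_points: np.ndarray, u: float) -> np.ndarray:
    """Point on the segment defined by four 3D control points, u in [0, 1]."""
    monomials = np.array([u ** 3, u ** 2, u, 1.0])
    return monomials @ B_SPLINE_BASIS @ control_points  # (4,) @ (4,4) @ (4,3)

# Moving one control point deforms the curve only locally, which is why
# streaming control-point updates is enough to replicate motion.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
print(spline_point(pts, 0.5))
```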
[0052] Returning to figure 2, at step 230, the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data is sent via the wireless communication network (118) to the first HMD (120). It will be appreciated by a skilled addressee that the first dynamic spline-based 3D model is generated and transmitted more easily and efficiently without colour or RGB cameras, unlike the prior art. This increases the overall efficiency of the system (100), minimizes the latency and reduces the computation and network load, because prior teleportation systems are focused on transmitting 3D volumetric data, point clouds or meshes for every instance (which might not be feasible even with the high speed data networks presently available to the general public). Hence, instead of streaming millions of voxels or 3D points at every instance, the processing module (114) only streams the parameters of the mathematical curves to denote changes. The encrypted parameters also act as a much better way to protect user privacy against data breaches as compared to images and appearance-based volumetric data. Therefore, the network bandwidth and processing requirements of the present invention are much less than those of presently available state of the art teleportation systems for mixed reality headsets.
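As a hedged illustration of this bandwidth argument (the message layout, field names and JSON encoding below are assumptions, not the claimed protocol), a per-frame update carrying only the changed control-point parameters might look like:

```python
import json

def frame_update(frame_id: int, changed: dict) -> bytes:
    """Serialise {control_point_index: (x, y, z)} for one frame.

    Only the control points that moved since the previous frame are
    included, so a typical frame is a few hundred bytes rather than
    the megabytes needed for a voxel or mesh snapshot.
    """
    payload = {
        "frame": frame_id,
        "deltas": [[index, *xyz] for index, xyz in changed.items()],
    }
    return json.dumps(payload).encode("utf-8")

update = frame_update(42, {7: (0.10, 1.52, 0.33)})  # one moved point
print(len(update), "bytes")
```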
[0053] Further, at step 240, a single coloured image of the first subject (116) is received from an external source at the first HMD (120) present at the second location. In an alternative aspect, more than one coloured image of the first subject (116) may be received. As the generated first dynamic spline-based 3D model does not include any colour of the first subject (116), the coloured image of the first subject (116) is separately received at the first HMD (120) from the external source. The external source may be, but is not limited to, an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC. In one embodiment, the received coloured image may be scanned by the image acquisition device of the first HMD (120). In another embodiment, the coloured image is directly received in the first HMD (120) in one of the formats such as, but not limited to, .png, .jpg, .jpeg, .gif etc. In yet another embodiment, more than one coloured image or even videos may be received for processing and extracting colour and texture information of the first subject.
[0054] Then, at step 250, the first HMD (120) processes the coloured image for extracting colour and texture information of the first subject (116). The colour and texture information may include physical and visual characteristics defining the complete appearance of the first subject (116) including, but not limited to, visual features, skin, colours etc. of the first subject (116). For example, if the first subject (116) is a human, the colour and texture information may include skin texture, colour, facial features, hair colour, eye colour, dress colour etc. In one embodiment, physical characteristics such as, but not limited to, the size, shape, structure etc. of the first subject (116) are also extracted from the coloured image. However, it will be appreciated by a skilled addressee that steps 240 and 250 may be separately carried out by the first HMD (120), even before steps 210-230 or even after the next step of the present method (200), without departing from the scope of the present invention. For example, the coloured photo may be sent and processed before the teleportation session.
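Purely as an illustrative sketch (the specification does not fix an extraction pipeline; the file name, central-crop heuristic and OpenCV usage below are assumptions), one naive way to pull coarse colour information from the single received image is:

```python
import cv2
import numpy as np

image = cv2.imread("first_subject.jpg")        # assumed file name; BGR
assert image is not None, "expects first_subject.jpg on disk"
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)   # perceptual colour space

# Crude heuristic: treat the central region of the photograph as the
# subject and take its mean colour as a global tint cue. A real
# pipeline would segment the subject and build a texture map instead.
h, w = lab.shape[:2]
centre = lab[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
mean_colour = centre.reshape(-1, 3).mean(axis=0)
print(mean_colour)  # average L, a, b of the assumed subject region
```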
[0055] Further, at step 260, the first HMD (120) receives the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data via the wireless communication network (118). The same has been illustrated in figure 3B. After that, at step 270, the first HMD (120) processes and applies the colour and texture information on the first dynamic spline-based 3D model of the first subject (116). This generates a first dynamic holographic projection (320) that replicates an appearance, the audio and the dynamic movements of the first subject (116) in real-time. The first dynamic holographic projection (320) may be understood as a realistic holographic reconstruction of the first subject (116) created with minimal latency in real-time.
[0056] In one embodiment of the present invention, where the first subject (116) is a person, the coloured image received at step 240 may not be a current image of the person, and the person may be wearing clothes in the coloured image other than those actually worn during the teleportation session. So, the first HMD (120), before applying the extracted colour and texture information, compares the physical characteristics including the shape, size, body structure etc. of the first subject (116) extracted from the coloured image with the physical characteristics including the shape, size, body structure, face structure etc. of the first dynamic spline-based 3D model of the first subject (116). If the physical characteristics match up to a predetermined level, say 95% and above, then the first HMD (120) processes and applies the colour and texture information from the coloured image on the first dynamic spline-based 3D model of the first subject (116). Such a feature is advantageous in a scenario where the first subject (116) needs to be present for the teleportation session but does not have formal attire at hand: he/she simply sends a coloured image of himself/herself wearing formal attire, and the first dynamic holographic projection (320) of the first subject (116) would be generated wearing the clothes worn in the coloured image. The audio and body movements are still transmitted in real-time. Additionally, the predetermined level of 95% and above prevents possible misuse of the feature. Apart from this, several other measures may easily be implemented for preventing any misuse.
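A minimal sketch of this gating step, assuming the compared characteristics are reduced to named scalar measurements (the feature names, the ratio-based similarity and the helper itself are hypothetical; only the 95% threshold comes from the text):

```python
MATCH_THRESHOLD = 0.95  # the "predetermined level" stated above

def characteristics_match(photo: dict, model: dict) -> bool:
    """Compare measurements shared by the photograph and the live
    spline model (e.g. height, shoulder width), each a positive float."""
    keys = photo.keys() & model.keys()
    if not keys:
        return False
    ratios = [min(photo[k], model[k]) / max(photo[k], model[k]) for k in keys]
    return sum(ratios) / len(ratios) >= MATCH_THRESHOLD

# Texture is applied only when the gate passes.
photo = {"height_m": 1.78, "shoulder_w_m": 0.46}
model = {"height_m": 1.80, "shoulder_w_m": 0.47}
print(characteristics_match(photo, model))  # True: ~98% agreement
```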
[0057] To summarise steps 240-270 in general terms, the single coloured image can be received by any medium before the start of the teleportation session. Once the shape, size, orientation and movement data is received from the remote location during the teleportation session, the AI-based system (100) efficiently adds the colour and texture information from the single coloured image to this data to create the holographic projection. This is a one-time activity. Once the spline-based 3D model and holographic projection are generated, only changes in the orientation and movements of the subject are applied to the spline-based 3D model and the resultant holographic projection. In an alternative aspect, there may be more than one coloured image.
[0058] Next, at step 280, the generated first dynamic holographic projection (320) of the first subject (116) is displayed in the mixed reality (310) space of the first HMD (120). Simultaneously, as the first subject (116) speaks or there is audio from the first scanning zone (102), the same is played in synchronization with the first subject (116) in the audio unit of the first HMD (120) in real-time. In this manner, the first subject (116) is easily and efficiently teleported from the first location to the second location where the first HMD (120) is present. The same can be seen in figure 3B, where the first dynamic holographic projection (320) of the man is being displayed in the mixed reality (310) space of the first HMD (120). A major advantage of the present invention is that it captures and streams rapid dynamic movements with minimal latency as a very simple data stream, and no high-speed colour camera is required. Additionally, rapid dynamic movements like small vibrations can also be captured and replicated remotely in the holographic projection in real-time. For example, the present invention captures and replicates even dynamic movements as subtle as breathing, which adds the next level of realism to the teleported holographic projections.
[0059] Further, to add more realism to the experience, the first HMD (120) may receive the odour data of the first subject (116) captured by the one or more odour sensors (112) in the first scanning zone (102). Accordingly, the first HMD (120) generates an odour replicating the odour of the first subject (116) during the teleportation session, based on the received odour data, using one or more odour generators in the first HMD (120). For example, if the first subject (116) is a person wearing a perfume, the scent of the perfume may be replicated by the first HMD (120) during the teleportation, to add the realism of a physical meeting.
[0060] The above-mentioned implementation of figure 2 and figures 3A-3B is an example of a one-way teleportation, where the first subject (116) from the first scanning zone (102) at the first location is teleported to the first HMD (120) at the second location, and the user wearing the first HMD (120) can see the first dynamic holographic projection (320) of the first subject (116) in the respective mixed reality space (310). So, the user can feel the realistic holographic presence of the first subject (116), but the first subject (116) does not see/feel the user of the
first HMD (120). Therefore, the present invention also provides another implementation involving a two-way teleportation using the system (100) and the same methodology of transmitting and generating the dynamic holographic projection.
[0061] Figure 4 illustrates an exemplary implementation of a two-way teleportation using the system (100), in accordance with another embodiment of the present invention. As shown in figure 4, this example continues from the previous example, where it was assumed that the first subject (116) is a "man" present at the first location, standing in the first scanning zone (102), and the first HMD (120) present at the second location received the first dynamic holographic projection (320) of the first subject (116) (i.e. the man). Continuing from there, it is assumed that the first HMD (120) (at the second location) is worn by a "woman" and she is the second subject (404) in this case. So, the woman is seeing the first dynamic holographic projection (320) of the man in the respective mixed reality (310) space of the first HMD (120). Additionally, the woman (the second subject (404)) is also being scanned at a second scanning zone (402) present at the second location. Further, the second scanning zone (402) is connected with a second HMD (406), which is worn by the first subject (116) (i.e. the man) present at the first location, via the same wireless communication network (118).
[0062] It will be understood by a skilled addressee that the second scanning zone (402) and the first scanning zone (102) are envisaged to have the same or similar components and functionalities. The same may be said for the first HMD (120) and the second HMD (406).
[0063] So, following the same steps as followed for the generation and display of the first dynamic holographic projection (320) of the first subject (116) (i.e. the man), a second dynamic holographic projection (410) of the second subject (404) (i.e. the woman) is generated and displayed in the respective mixed reality (408) space of the second HMD (406) worn by the man. In simple terms, the first subject (116) (i.e. the man) is teleported to the second location and the second subject (404) (i.e. the woman) is teleported to the first location, thereby providing a realistic experience of a physical meeting. Furthermore, apart from the respective one or more odour sensors (112) and generators, the system (100) may also include a haptic feedback mechanism configured to generate a realistic touch and grab feel during interactions
between the first/second subjects and the first/second dynamic holographic projections in the mixed reality space.
[0064] For example, as shown in figure 4, the first subject (116) (i.e. the man) can feel the handshake (apart from the odour) with the second dynamic holographic projection (410) of the second subject (404) (i.e. the woman) in the respective mixed reality space (408) of the second HMD (406) at the first location. At the same time, the second subject (404) (i.e. the woman) can feel the handshake (apart from the odour) with the first dynamic holographic projection (320) of the first subject (116) (i.e. the man) in the respective mixed reality space (310) of the first HMD (120) at the second location. In this manner, a two-way teleportation is realised using the present invention. Such features raise the level of realism offered by the present invention above that of any existing teleportation system.
[0065] However, it will be appreciated by a skilled addressee that one-way or two-way teleportations involving multiple subjects, multiple scanning zones and multiple HMDs at multiple locations may also be easily carried out using the present system (100) and method (200). For example, in a one-way teleportation, the first dynamic holographic projection (320) may be generated and displayed at multiple HMDs at multiple locations at the same time. So, users of each of the multiple HMDs will be able to see and interact with the first dynamic holographic projection (320) of the first subject (116) in the respective mixed reality spaces of their HMDs. This would be similar to a live 3D broadcast. Such an implementation would be beneficial for scenarios where a person has to travel long distances to give the same presentations, seminars etc. to different audiences present at different places.
[0066] In accordance with an embodiment of the present invention, the present system (100) and method (200) may also be implemented for group teleportation. This may be similar to a group call but with a real life-like experience. In this scenario, multiple dynamic spline-based 3D models and multiple coloured images (separately) of the multiple subjects involved in the teleportation session may be received at the multiple HMDs via the wireless communication network (118). Accordingly, multiple dynamic holographic projections of the multiple subjects may be generated at each of the HMDs and displayed in the respective mixed reality spaces. This enables each user to experience a realistic group meeting or gathering. Additionally, the system (100) may enable the users of the HMDs to change the background of the mixed reality to another location such as a beach, a mountainside, or any other location in any part of the world. The respective audio units of the respective HMDs may play 3D stereo or spatial sound effects and the respective one or more odour generators of the respective HMDs may produce an odour based on the selected location.
[0067] For example, if the location selected is a beach, the respective audio units may play sounds of the sea and waves, and odours such as that of dry/wet sand or other odours common in a beach area may be produced. For this implementation, the system (100) has audio-visual and odour data of multiple locations pre-stored in the data repository. Such features make group meetings and get-togethers more fun and enjoyable.
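One plausible organisation of such pre-stored ambience data is sketched below with entirely made-up asset names and schema (the specification only requires that audio-visual and odour data be retrievable per location):

```python
# Hypothetical per-location ambience records; every file name and
# odour label here is an illustrative assumption.
LOCATION_AMBIENCE = {
    "beach": {
        "visuals": "beach_environment.glb",
        "audio": ["sea_waves.ogg", "gulls.ogg"],
        "odours": ["wet_sand", "salt_air"],
    },
    "mountain": {
        "visuals": "mountain_environment.glb",
        "audio": ["wind.ogg", "birdsong.ogg"],
        "odours": ["pine"],
    },
}

def ambience_for(location: str) -> dict:
    """Look up the pre-stored assets for a user-selected background."""
    return LOCATION_AMBIENCE[location]

print(ambience_for("beach")["odours"])  # ['wet_sand', 'salt_air']
```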
[0068] The present invention offers a number of advantages. Firstly, the present invention provides a simple, cost-effective and easy to use solution to the problems of the prior art. Further, the present invention eliminates the barriers of the high bandwidth and extremely fast data speed requirements for teleportation using the prior art, which may even be beyond feasible limits. The present invention achieves this by using dynamic spline-based 3D models that are generated and transmitted more easily and efficiently without colour or RGB cameras. This increases the overall efficiency of the system, decreases the latency and reduces the computation and network load, because the prior art is focused on transmitting 3D volumetric data, point clouds or meshes for every instance. Hence, instead of streaming millions of voxels or 3D points at every instance, the processing module only streams the parameters of the mathematical curves to denote changes. The encrypted parameters also act as a much better way to protect user privacy against data breaches as compared to images and appearance-based volumetric data.
[0069] A major advantage of the present invention is that it captures and streams rapid dynamic movements with minimal latency as a very simple data stream, and no high-speed colour camera is required. Additionally, rapid dynamic movements like small vibrations can also be captured and replicated remotely in the holographic projection in real-time. For example, the present invention captures and replicates even dynamic movements as subtle as breathing, which adds the next level of realism to the teleported holographic projections. Furthermore, as no colour cameras are required and the present invention uses one or more depth sensors and one or more thermal cameras, the present invention can work in low
or no light conditions as well. The system can work seamlessly even at 1 Lux illumination.
[0070] Also, the present invention finds a number of applications in multiple fields. The present invention can simply transform the calling experience to the next level and replace the presently popular video calling functionality. Moreover, the present invention can completely change the present way in which meetings, seminars and events take place. With the present invention, a person can not only attend events but also address events/meetings without being physically present. Also, the system and the method enable a person to be present at multiple places at the same time. Such a system and method would save a lot of money, effort and resources for people who have to travel long distances, such as across countries and states, to attend one or two events. Similarly, it can transform the field of education, as more guest lectures from renowned teachers, lecturers and professors may be arranged in schools and colleges. The present invention encourages such beneficial collaborations across the world, thereby making the world smaller and bringing people closer.
[0071] The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments explained herein above. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
[0072] Further, one would appreciate that the communication network used in the system can be a short-range and/or a long-range, wired or wireless communication network. The communication interface includes, but is not limited to, a serial communication interface, a parallel communication interface or a combination thereof.
[0073] The Head Mounted Devices (HMDs) referred herein may also include more components such as, but not limited to, a Graphics Processing Unit (GPU) or any other graphics generation and processing module. The GPU may be a single-chip processor primarily used to manage and boost the performance of video and graphics such as 2-D or 3-D graphics, texture mapping, hardware overlays etc. The GPU may be selected from, but not limited to, NVIDIA, AMD, Intel and ARM for real time 3D imaging.
[0074] In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language such as, for example, Java, C, Python or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
[0075] Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
[0076] It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0077] Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to cover all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.
Claims
1. A system (100) for teleportation for enhanced Audio-Visual Interaction in Mixed Reality (MR) using a Head Mounted Device (HMD), the system (100) comprising:
a first scanning zone (102) present at a first location having:
one or more depth sensors (104) to capture depth data of a first subject (116);
one or more thermal cameras (106) to capture thermal imaging data of the first subject (116);
one or more microphones (108) to capture audio data of the first subject (116);
a network module (110) to establish a wireless communication network (118); and
a processing module (114) connected with the one or more depth sensors (104), the one or more thermal cameras (106), the one or more microphones (108) and the network module (110); and
a mixed reality-based first HMD (120) present at a second location, connected with the first scanning zone (102) via the wireless communication network (118);
wherein the processing module (114) is configured to:
receive dynamic depth data, dynamic thermal imaging data and the audio data of the first subject (116) captured by the one or more depth sensors (104), the one or more thermal cameras (106) and the one or more microphones (108) respectively in the first scanning zone (102); generate a first dynamic spline-based 3D model of the first subject (116) based on the received dynamic depth data and the dynamic thermal imaging data thereby replicating a shape, a size, an orientation and dynamic movements of the first subject (116) in real-time and sync the audio data of the first subject (116) with the dynamic movements of the first dynamic spline-based 3D model; and
send the first dynamic spline-based 3D model of the first subject (116) along with synced audio data via the wireless communication network (118) to the first HMD (120);
wherein the first HMD (120) is configured to:
receive a single coloured image of the first subject (116) from an external source;
process the coloured image for extracting colour and texture information of the first subject (116);
receive the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data via the wireless communication network (118) from the first processing module (114);
process and apply the colour and texture information on the first dynamic spline-based 3D model of the first subject (116) to generate a first dynamic holographic projection (320) that replicates an appearance, the audio and the dynamic movements of the first subject (116) in real-time; and
display the first dynamic holographic projection (320) of the first subject (116) in a mixed reality (310) space of the first HMD (120) and play the synced audio of the first subject (116) in an audio unit of the first HMD (120) in real-time, thereby teleporting the first subject (116) from the first location to the second location.
2. The system (100) as claimed in claim 1, wherein the first scanning zone (102) further comprises one or more odour sensors (112) connected with the processing module (114), and the processing module (114) is further configured to receive odour data of the first subject (116) captured by the one or more odour sensors (112) and send the odour data via the wireless communication network (118); and
wherein the first HMD (120) is further configured to receive the odour data and generate an odour replicating the odour of the first subject (116) based on the received odour data using one or more odour generators, thereby adding more realism to the first dynamic holographic projection (320) of the first subject (116).
3. The system (100) as claimed in claim 1, wherein the system (100) further comprises:
a second scanning zone (402) at the second location of the first HMD (120), configured to generate and send a second dynamic spline-based 3D model of a second subject (404) along with synced audio data via the wireless communication network (118); and
a second HMD (406) at the first location of the first subject (116), the second HMD (406) configured to:
receive and process the coloured image of the second subject (404) for extracting colour and texture information of the second subject (404); receive the second dynamic spline-based 3D model along with synced audio data via the wireless communication network (118);
process and apply the colour and texture information on the second dynamic spline-based 3D model of the second subject (404) to generate a second dynamic holographic projection (410) that replicates an appearance, the audio and the dynamic movements of the second subject (404) in real-time; and
display the second dynamic holographic projection (410) of the second subject (404) in a mixed reality (408) space of the second HMD (406) and play the synced audio of the second subject (404) in an audio unit of the second HMD (406) in real-time, thereby teleporting the second subject (404) from the second location to the first location of the first subject (116) and enabling a two-way communication between the first HMD (120) and the second HMD (406).
4. The system (100) as claimed in claim 3, wherein the first subject (116) and the second subject (404) are selected from living and non-living things such as humans, plants, objects or a combination thereof.
5. The system (100) as claimed in claim 1, wherein the one or more depth sensors (104) are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors and lasers.
6. The system (100) as claimed in claim 1, wherein the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
7. A method (200) for teleportation for enhanced Audio-Visual Interaction in
Mixed Reality (MR) using a Head Mounted Device (HMD), the method (200) comprising:
receiving dynamic depth data, dynamic thermal imaging data and the audio data of a first subject (116) present at a first location, from one or more depth sensors (104), one or more thermal cameras (106) and one or more microphones (108) respectively;
generating a first dynamic spline-based 3D model of the first subject (116) based on the dynamic depth data and the dynamic thermal imaging data being received, thereby replicating a shape, a size, an orientation and dynamic movements of the first subject (116) in real-time and syncing the audio data of the first subject (116) with the dynamic movements of the first dynamic spline-based 3D model;
sending the first dynamic spline-based 3D model of the first subject (116) along with synced audio data via a wireless communication network (118) to a first HMD (120);
receiving a coloured image of the first subject (116) from an external source at the first HMD (120) present at a second location;
processing the coloured image for extracting colour and texture information of the first subject (116);
receiving the first dynamic spline-based 3D model of the first subject (116) along with the synced audio data via the wireless communication network (118) at the first HMD (120);
processing and applying the colour and texture information on the first dynamic spline-based 3D model of the first subject (116) to generate a first dynamic holographic projection (320) that replicates an appearance, the audio and the dynamic movements of the first subject (116) in real-time; and
displaying the first dynamic holographic projection (320) of the first subject (116) in a mixed reality (310) space of the first HMD (120) and playing the
synced audio of the first subject (116) in an audio unit of the first HMD (120) in real-time, thereby teleporting the first subject (116) from the first location to the second location of the first HMD (120).
8. The method (200) as claimed in claim 7, further comprising:
receiving odour data of the first subject (116) from the one or more odour sensors (112), at the first HMD (120); and
generating an odour replicating the odour of the first subject (116) based on the received odour data using one or more odour generators in the first HMD (120), thereby adding more realism to the holographic projection of the first subject (116).
9. The method (200) as claimed in claim 7, further comprising a step of enabling a two-way teleportation by:
generating and sending a second dynamic spline-based 3D model of a second subject (404) present at the second location, along with synced audio data via the wireless communication network (118);
receiving and processing the coloured image of the second subject (404) at a second HMD (406) present at the first location, for extracting colour and texture information of the second subject (404);
receiving the second dynamic spline-based 3D model along with synced audio data via the wireless communication network (118) at the second HMD (406);
processing and applying the colour and texture information on the second dynamic spline-based 3D model of the second subject (404) to generate a second dynamic holographic projection (410) that replicates an appearance, the audio and the dynamic movements of the second subject (404) in real-time; and
displaying the second dynamic holographic projection (410) of the second subject (404) in a mixed reality (408) space of the second HMD (406) and playing the synced audio of the second subject (404) in an audio unit of the second HMD (406) in real-time, thereby teleporting the second subject (404) from the second location to the first location of the first subject (116) and enabling a two-way communication between the first HMD (120) and the second HMD (406).
10. The method (200) as claimed in claim 9, wherein the first subject (116) and the second subject (404) are selected from living and non-living things such as humans, plants, objects or a combination thereof.
11. The method (200) as claimed in claim 7, wherein the one or more depth sensors (104) are selected from one or more of Time of Flight (ToF) sensors, LIDARs, RADARs, ultrasonic sensors, infrared sensors, and lasers.
12. The method (200) as claimed in claim 7, wherein the external source for receiving the coloured image is selected from an external communication network, external storage, cloud storage, or a computing device such as a smartphone, a tablet, a laptop or a desktop PC.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201921000754 | 2019-06-08 | |
IN201921000754 | 2019-06-08 | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020250106A1 (en) | 2020-12-17
Family
ID=73780980
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2020/055366 WO2020250106A1 (en) | 2019-06-08 | 2020-06-08 | A system and a method for teleportation for enhanced audio-visual interaction in mixed reality (mr) using a head mounted device (hmd) |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020250106A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634346A (en) * | 2020-12-21 | 2021-04-09 | Shanghai Shadow Creator Information Technology Co., Ltd. | AR (augmented reality) glasses-based real object size acquisition method and system
CN114822324A (en) * | 2021-01-29 | 2022-07-29 | Shaanxi Hongxing Shanshan Network Technology Co., Ltd. | Holographic article display system and holographic article display method
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104238738A (en) * | 2013-06-07 | 2014-12-24 | Sony Computer Entertainment America LLC | Systems and Methods for Generating an Augmented Virtual Reality Scene Within A Head Mounted System
WO2017096351A1 (en) * | 2015-12-03 | 2017-06-08 | Google Inc. | Teleportation in an augmented and/or virtual reality environment |
WO2017223134A1 (en) * | 2016-06-21 | 2017-12-28 | Blue Goji Llc | Multiple electronic control and tracking devices for mixed-reality interaction |
US20190066387A1 (en) * | 2017-08-31 | 2019-02-28 | Disney Enterprises, Inc. | Collaborative multi-modal mixed-reality system and methods leveraging reconfigurable tangible user interfaces for the production of immersive, cinematic, and interactive content |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US11818506B2 (en) | Circumstances based 3D representations of participants of virtual 3D communications | |
US9479736B1 (en) | Rendered audiovisual communication | |
Orts-Escolano et al. | Holoportation: Virtual 3D teleportation in real-time | |
KR102005106B1 (en) | System and method for augmented and virtual reality | |
WO2018153267A1 (en) | Group video session method and network device | |
US8928659B2 (en) | Telepresence systems with viewer perspective adjustment | |
US20240312212A1 (en) | Real-time video dimensional transformations of video for presentation in mixed reality-based virtual spaces | |
JP2015184689A (en) | Moving image generation device and program | |
US11302063B2 (en) | 3D conversations in an artificial reality environment | |
US10955911B2 (en) | Gazed virtual object identification module, a system for implementing gaze translucency, and a related method | |
US20180336069A1 (en) | Systems and methods for a hardware agnostic virtual experience | |
EP3087727B1 (en) | An emotion based self-portrait mechanism | |
WO2020250106A1 (en) | A system and a method for teleportation for enhanced audio-visual interaction in mixed reality (mr) using a head mounted device (hmd) | |
CN112272296B (en) | Video illumination using depth and virtual light | |
CN116530078A (en) | 3D video conferencing system and method for displaying stereo-rendered image data acquired from multiple perspectives | |
WO2017124871A1 (en) | Method and apparatus for presenting multimedia information | |
US12106413B1 (en) | Joint autoencoder for identity and expression | |
US20240331317A1 (en) | Information processing device, information processing system and method | |
US11830182B1 (en) | Machine learning-based blood flow tracking | |
US20230298250A1 (en) | Stereoscopic features in virtual reality | |
TW202347266A (en) | Systems and methods of image processing for privacy management | |
CN112954139A (en) | Wedding celebration photographic system based on VR virtual reality technology |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20823178; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20823178; Country of ref document: EP; Kind code of ref document: A1