
CN111615832B - Method and apparatus for generating a synthetic reality reconstruction of planar video content


Info

Publication number
CN111615832B
Authority
CN
China
Prior art keywords
scene
video content
implementations
performer
actionable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980008675.1A
Other languages
Chinese (zh)
Other versions
CN111615832A (en)
Inventor
I·M·里克特
D·乌尔布莱特
J-D·E·纳米亚斯
O·埃尔阿菲菲
P·迈耶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Inc filed Critical Apple Inc
Priority to CN202211357526.6A (published as CN115564900A)
Publication of CN111615832A
Application granted
Publication of CN111615832B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

In one implementation, a method includes: identifying a first scene performer within a scene associated with a portion of video content; synthesizing a scene depiction of the scene corresponding to the trajectory of the first scene performer within settings associated with the scene and the actions performed by the first scene performer; and generating a corresponding Synthetic Reality (SR) reconstruction of the scene by driving a first digital asset associated with the first scene performer in accordance with the scene depiction of the scene.

Description

Method and apparatus for generating a synthetic reality reconstruction of planar video content
Technical Field
The present disclosure relates generally to Synthetic Reality (SR) and, in particular, to systems, methods, and apparatus for generating SR reconstructions of planar video content.
Background
Virtual Reality (VR) and Augmented Reality (AR) are becoming increasingly popular due to their remarkable ability to change a user's perception of the world. For example, VR and AR are used for learning purposes, gaming purposes, content creation purposes, social media and interaction purposes, and the like. These technologies differ in the user's perception of his/her presence. VR transposes the user into a virtual space such that his/her VR perception is different from his/her real-world perception. In contrast, AR takes the user's real-world perception and adds something to it.
These technologies are becoming increasingly common due to, for example, the miniaturization of hardware components, improvements in hardware performance, and improvements in software efficiency. As one example, a user may experience AR content overlaid on a real-time video feed of the user's setting on a handheld display (e.g., an AR-enabled mobile phone or tablet with video pass-through). As another example, a user may experience AR content by wearing a Head Mounted Device (HMD) or head-mounted enclosure that still allows the user to see his/her surroundings (e.g., glasses with optical see-through). As another example, a user may experience VR content by using an HMD that encloses the user's field of view and is tethered to a computer.
Drawings
So that the present disclosure can be understood by those of ordinary skill in the art, a more particular description may be had by reference to certain illustrative implementations, some of which are illustrated in the accompanying drawings.
Fig. 1A is a block diagram of an exemplary operating architecture, according to some implementations.
FIG. 1B is a block diagram of another exemplary operating architecture, according to some implementations.
Fig. 2 is a block diagram of an example controller according to some implementations.
FIG. 3 is a block diagram of an example electronic device, according to some implementations.
Fig. 4 is a block diagram of a Synthetic Reality (SR) content generation architecture, according to some implementations.
FIG. 5 illustrates a scene understanding spectrum according to some implementations.
FIG. 6 illustrates an exemplary SR content generation context, according to some implementations.
Fig. 7 is a flowchart representation of a method of generating an SR reconstruction of planar video content according to some implementations.
Fig. 8 is a flowchart representation of another method of generating an SR reconstruction of planar video content according to some implementations.
In accordance with common practice, the various features shown in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. Additionally, some of the figures may not depict all of the components of a given system, method, or apparatus. Finally, throughout the specification and drawings, like reference numerals may be used to refer to like features.
Disclosure of Invention
Various implementations disclosed herein include apparatus, systems, and methods for generating Synthetic Reality (SR) content from flat video content. According to some implementations, the method is performed at a device that includes a non-transitory memory and one or more processors coupled with the non-transitory memory. The method comprises the following steps: identifying a first scene actor within a scene associated with a portion of video content; synthesizing a scene depiction of the scene, the scene depiction corresponding to a trajectory of the first scene performer within settings associated with the scene and an action performed by the first scene performer; and generating a corresponding SR reconstruction for the scene by driving a first digital asset associated with a first scene performer in accordance with the scene depiction for the scene.
According to some implementations, an apparatus includes one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in a non-transitory memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing, or causing the performance of, any of the methods described herein. According to some implementations, a non-transitory computer-readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform, or cause performance of, any of the methods described herein. According to some implementations, an apparatus includes: one or more processors, non-transitory memory, and means for performing or causing performance of any of the methods described herein.
Detailed Description
Numerous details are described in order to provide a thorough understanding of example implementations shown in the drawings. The drawings, however, illustrate only some example aspects of the disclosure and therefore should not be considered limiting. It will be understood by those of ordinary skill in the art that other effective aspects and/or variations do not include all of the specific details described herein. In other instances, well-known systems, methods, components, devices, and circuits have not been described in detail so as not to obscure more pertinent aspects of the example implementations described herein.
A physical environment refers to a world that an individual can perceive and/or interact with without the aid of an electronic system. A physical environment (e.g., a physical forest) includes physical elements (e.g., physical trees, physical structures, and physical animals). The individual may interact directly with and/or perceive the physical environment, such as through touch, vision, smell, hearing, and taste.
In contrast, a Synthetic Reality (SR) environment refers to a fully or partially computer-created environment that an individual is able to perceive and/or interact with via an electronic system. In SR, a subset of the individual's movements is monitored, and in response thereto, one or more properties of one or more virtual objects in the SR environment are changed in a manner that complies with one or more laws of physics. For example, the SR system may detect that an individual steps forward and, in response, adjust the graphics and audio presented to the individual in a manner similar to how such scenery and sounds would change in a physical environment. Modification of one or more properties of one or more virtual objects in the SR environment may also be made in response to a representation of movement (e.g., audio instructions).
The individual may interact with and/or perceive an SR object using any one of his/her senses, including touch, smell, sight, taste, and sound. For example, an individual may interact with and/or perceive an auditory object that creates a multi-dimensional (e.g., three-dimensional) or spatial auditory environment and/or implements auditory transparency. A multi-dimensional or spatial auditory environment provides an individual with the perception of discrete auditory sources in a multi-dimensional space. Auditory transparency selectively combines sound from the physical environment with or without computer-created audio. In some SR environments, an individual may only interact with and/or perceive auditory objects.
One example of an SR is Virtual Reality (VR). A VR environment refers to a simulated environment designed to include only computer-created sensory inputs for at least one sensation. A VR environment includes a plurality of virtual objects that an individual may interact with and/or perceive. The individual may interact with and/or perceive the virtual object in the VR environment by simulating a subset of the individual's actions within the computer-created environment and/or by simulating the individual or its presence within the computer-created environment.
Another example of an SR is Mixed Reality (MR). An MR environment refers to a simulated environment designed to integrate computer-created sensory inputs (e.g., virtual objects) with sensory inputs from a physical environment or representations thereof. On the reality spectrum, a mixed reality environment falls between, but does not include, a VR environment at one end and a fully physical environment at the other end.
In some MR environments, computer-created sensory inputs may adapt to changes in sensory inputs from the physical environment. In addition, some electronic systems for presenting MR environments may monitor orientation and/or position relative to a physical environment to enable virtual objects to interact with real objects (i.e., physical elements or representations thereof from the physical environment). For example, the system may monitor motion such that the virtual plant appears to be stationary relative to the physical building.
One example of mixed reality is Augmented Reality (AR). An AR environment refers to a simulated environment in which at least one virtual object is superimposed on a physical environment or representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of a physical environment, which are representations of the physical environment. The system combines the image or video with the virtual object and displays the combination on the opaque display. An individual uses the system to indirectly view the physical environment via an image or video of the physical environment and to observe virtual objects superimposed over the physical environment. When the system captures images of the physical environment using one or more image sensors, and uses those images to render the AR environment on an opaque display, the displayed images are referred to as video passthrough. Alternatively, an electronic system for displaying an AR environment may have a transparent or translucent display through which an individual may directly view the physical environment. The system may display the virtual object on a transparent or translucent display such that an individual uses the system to view the virtual object superimposed over the physical environment. As another example, the system may include a projection system that projects the virtual object into the physical environment. The virtual object may be projected, for example, on a physical surface or as a hologram, such that an individual uses the system to view the virtual object superimposed over a physical environment.
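As a purely illustrative aside (not part of the described implementations), the video pass-through case above amounts to compositing rendered virtual content over each captured camera frame before display; the following Python sketch assumes a hypothetical 720p frame and a simple alpha blend:

import numpy as np

def composite_passthrough(camera_frame: np.ndarray, virtual_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered virtual object (RGBA) over a camera frame (RGB)."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * camera_frame
    return blended.astype(np.uint8)

# Hypothetical 720p camera frame and a mostly transparent virtual layer with one opaque patch
# (e.g., standing in for the AR cylinder 109 overlaid on the video feed).
frame = np.full((720, 1280, 3), 128, dtype=np.uint8)
virtual = np.zeros((720, 1280, 4), dtype=np.uint8)
virtual[100:200, 100:200] = (0, 255, 0, 255)
display_image = composite_passthrough(frame, virtual)
print(display_image.shape, display_image[150, 150])   # (720, 1280, 3) [  0 255   0]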
An augmented reality environment may also refer to a simulated environment in which a representation of a physical environment is altered by computer-created sensory information. For example, a portion of the representation of the physical environment may be graphically altered (e.g., enlarged) such that the altered portion is representative of, but not a faithful reproduction of, the originally captured images. As another example, in providing video pass-through, the system may alter at least one of the sensor images to impose a particular viewpoint that is different from the viewpoint captured by the one or more image sensors. As another example, the representation of the physical environment may be altered by graphically blurring or eliminating portions thereof.
Another example of mixed reality is Augmented Virtuality (AV). An AV environment refers to a simulated environment in which a computer-created environment or a virtual environment incorporates at least one sensory input from a physical environment. The one or more sensory inputs from the physical environment may be a representation of at least one characteristic of the physical environment. For example, a virtual object may take on a color of a physical element captured by the one or more imaging sensors. As another example, the virtual object may exhibit characteristics consistent with actual weather conditions in the physical environment, as identified via weather-related imaging sensors and/or online weather data. In another example, an augmented virtuality forest may have virtual trees and structures, but the animals may have features that are accurately reproduced from images taken of physical animals.
Many electronic systems enable individuals to interact with and/or perceive various SR environments. One example includes a head-mounted system. The head-mounted system may have an opaque display and one or more speakers. Alternatively, the head-mounted system may be designed to receive an external display (e.g., a smartphone). The head-mounted system may have one or more imaging sensors and/or microphones for capturing images/video of the physical environment and/or capturing audio of the physical environment, respectively. The head mounted system may also have a transparent or translucent display. A transparent or translucent display may incorporate a substrate through which light representing an image is directed to an individual's eye. The display may incorporate LEDs, OLEDs, digital light projectors, laser scanning light sources, liquid crystal on silicon, or any combination of these technologies. The substrate transmitting light may be an optical waveguide, an optical combiner, an optical reflector, a holographic substrate or any combination of these substrates. In one embodiment, a transparent or translucent display may be selectively switched between an opaque state and a transparent or translucent state. As another example, the electronic system may be a projection-based system. Projection-based systems may use retinal projections to project images onto the individual's retina. Alternatively, the projection system may also project the virtual object into the physical environment (e.g., onto a physical surface or as a hologram). Other examples of SR systems include head-up displays, automotive windshields capable of displaying graphics, windows capable of displaying graphics, lenses capable of displaying graphics, headphones or earpieces, speaker arrangements, input mechanisms (e.g., controllers with or without haptic feedback), tablets, smart phones, and desktop or laptop computers.
The user may wish to experience video content (e.g., a television episode or movie) as if he/she were in a scene with characters. In other words, the user wishes to view the video content as an SR experience, rather than simply viewing the video content on a television or other display device.
SR content is often carefully created in advance and accessed by a user from a library of available SR content. Implementations disclosed herein include methods of generating an SR reconstruction of video content on demand by utilizing digital assets. As such, flat video content can be seamlessly and quickly transformed into an SR experience.
Fig. 1A is a block diagram of an exemplary operating architecture 100A, according to some implementations. While related features are illustrated, those of ordinary skill in the art will recognize from the present disclosure that various other features are not illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, the operating architecture 100A includes an electronic device 120.
In some implementations, the electronic device 120 is configured to present an SR experience to the user. In some implementations, the electronic device 120 includes a suitable combination of software, firmware, and/or hardware. The electronic device 120 is described in more detail below with reference to fig. 3. According to some implementations, when the user 150 is physically present within the physical environment 103, the electronic device 120 presents a Synthetic Reality (SR) experience to the user, where the physical environment 103 includes the table 107 within the field of view 111 of the electronic device 120. Thus, in some implementations, the user 150 holds the electronic device 120 in his/her hand. In some implementations, in presenting an Augmented Reality (AR) experience, the electronic device 120 is configured to present AR content (e.g., the AR cylinder 109) and enable video pass-through of the physical environment 103 (e.g., including the table 107) on the display 122.
Fig. 1B is a block diagram of an exemplary operating architecture 100B, according to some implementations. While relevant features are shown, those of ordinary skill in the art will recognize from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, the operating architecture 100B includes a controller 110 and an electronic device 120.
In some implementations, the controller 110 is configured to manage and coordinate the user's SR experience. In some implementations, the controller 110 includes a suitable combination of software, firmware, and/or hardware. The controller 110 is described in more detail below with reference to fig. 2. In some implementations, the controller 110 is a computing device that is local or remote with respect to the physical environment 105. For example, the controller 110 is a local server located within the physical environment 105. In another example, the controller 110 is a remote server (e.g., a cloud server, a central server, etc.) located outside the physical environment 105. In some implementations, the controller 110 is communicatively coupled with the electronic device 120 via one or more wired or wireless communication channels 144 (e.g., Bluetooth, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.).
In some implementations, the electronic device 120 is configured to present an SR experience to the user 150. In some implementations, the electronic device 120 includes a suitable combination of software, firmware, and/or hardware. The electronic device 120 is described in more detail below with reference to fig. 3. In some implementations, the functionality of the controller 110 and/or the display device 130 is provided by the electronic device 120 and/or integrated with the electronic device 120.
According to some implementations, the electronic device 120 presents a Synthetic Reality (SR) experience to the user 150 when the user 150 is virtually and/or physically present within the physical environment 105. In some implementations, in presenting an Augmented Reality (AR) experience, the electronic device 120 is configured to present AR content and enable optical see-through of the physical environment 105. In some implementations, in presenting a Virtual Reality (VR) experience, the electronic device 120 is configured to present VR content and optionally enable video pass-through of the physical environment 105.
In some implementations, the user 150 wears the electronic device 120, such as a Head Mounted Device (HMD), on his/her head. Accordingly, the electronic device 120 includes one or more displays provided to display SR content. For example, the electronic device 120 encompasses the field of view of the user 150. As another example, the electronic device 120 slides into or is otherwise attached to a head-mounted enclosure. In some implementations, the electronic device 120 is replaced with an SR chamber, enclosure, or room configured to present SR content, in which the user 150 does not wear the electronic device 120.
Fig. 2 is a block diagram of an example of the controller 110 according to some implementations. While some specific features are shown, those skilled in the art will appreciate from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the particular implementations disclosed herein. To this end, as non-limiting examples, in some implementations, the controller 110 includes one or more processing units 202 (e.g., a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), a Central Processing Unit (CPU), a processing core, etc.), one or more input/output (I/O) devices 206, one or more communication interfaces 208 (e.g., Universal Serial Bus (USB), IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Global Positioning System (GPS), Infrared (IR), Bluetooth, ZigBee, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 210, a memory 220, and one or more communication buses 204 for interconnecting these and various other components.
In some implementations, the one or more communication buses 204 include circuitry to interconnect system components and control communications between system components. In some implementations, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a trackpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and the like.
The memory 220 includes high-speed random access memory, such as Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Double Data Rate Random Access Memory (DDR RAM), or other random access solid state memory devices. In some implementations, the memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 220 optionally includes one or more storage devices located remotely from the one or more processing units 202. The memory 220 includes a non-transitory computer-readable storage medium. In some implementations, the memory 220 or the non-transitory computer-readable storage medium of the memory 220 stores programs, modules, and data structures, or a subset thereof, including an optional operating system 230, a Synthetic Reality (SR) experience engine 240, and an SR content generator 250.
Operating system 230 includes processes for handling various underlying system services and for performing hardware related tasks.
In some implementations, the SR experience engine 240 is configured to manage and coordinate one or more SR experiences for one or more users (e.g., a single SR experience for one or more users, or multiple SR experiences for respective groups of one or more users). To this end, in various implementations, the SR experience engine 240 includes a data obtainer 242, a mapper and locator engine 244, a coordinator 246, and a data transmitter 248.
In some implementations, the data obtainer 242 is configured to obtain data (e.g., presentation data, user interaction data, sensor data, location data, etc.) from at least one of a sensor in the physical environment 105, a sensor associated with the controller 110, and the electronic device 120. To this end, in various implementations, the data obtainer 242 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the mapper and locator engine 244 is configured to map the physical environment 105 and track the position/location of at least the electronic device 120 relative to the physical environment 105. To this end, in various implementations, the mapper and locator engine 244 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the coordinator 246 is configured to manage and coordinate the SR experience that the electronic device 120 presents to the user. To this end, in various implementations, the coordinator 246 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
In some implementations, the data transmitter 248 is configured to transmit data (e.g., presentation data, location data, etc.) at least to the electronic device 120. To this end, in various implementations, the data transmitter 248 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the SR content generator 250 is configured to generate an SR reconstruction of a scene from the video content. To this end, in various implementations, the SR content generator 250 includes an ingester 252 and a reconstruction engine 254.
In some implementations, the ingester 252 is configured to obtain video content (e.g., a two-dimensional or "flat" AVI, FLV, WMV, MOV, MP4, or similar file associated with a television episode or movie). In some implementations, the ingester 252 is further configured to perform a scene understanding process and a scene parsing process on a scene in order to synthesize a scene depiction of the scene (e.g., a portion of the video content associated with scene settings, key frames, etc.). The ingester 252 is discussed in more detail below with reference to fig. 4.
In some implementations, the reconstruction engine 254 is configured to obtain digital assets associated with scenes within the video content (e.g., character point clouds, item/object point clouds, scene setting point clouds, video game models, item/object models, scene setting models, etc.). In some implementations, the reconstruction engine 254 is further configured to instantiate a main line for each scene performer within a scene. In some implementations, the reconstruction engine 254 is further configured to drive the digital assets according to the scene depiction to generate an SR reconstruction of the scene.
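By way of illustration only, the division of labor between the ingester 252 and the reconstruction engine 254 might be organized as in the following Python sketch; the class and method names are hypothetical placeholders rather than the actual implementation:

class Ingester:
    """Stands in for the ingester 252: scene understanding plus scene parsing."""
    def synthesize_depiction(self, video_scene):
        # Placeholder output; a real implementation would fill in per-performer
        # trajectories and time-ordered actions.
        return {"trajectories": {}, "actions": {}}

class ReconstructionEngine:
    """Stands in for the reconstruction engine 254: drives assets per the depiction."""
    def reconstruct(self, depiction, assets):
        return {"depiction": depiction, "driven_assets": sorted(assets)}

class SRContentGenerator:
    """Mirrors the split shown in FIG. 2: an ingester feeding a reconstruction engine."""
    def __init__(self):
        self.ingester = Ingester()
        self.reconstruction_engine = ReconstructionEngine()

    def generate(self, video_scene, assets):
        depiction = self.ingester.synthesize_depiction(video_scene)
        return self.reconstruction_engine.reconstruct(depiction, assets)

print(SRContentGenerator().generate(video_scene="scene_01", assets={"performer_A": "point_cloud"}))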
Although the SR experience engine 240 and the SR content generator 250 are illustrated as residing on a single device (e.g., the controller 110), it should be understood that in other implementations, any combination of the SR experience engine 240 and the SR content generator 250 may be located in separate computing devices.
Moreover, FIG. 2 serves more as a functional description of the various features present in a particular embodiment, as opposed to the structural schematic of the specific implementations described herein. As one of ordinary skill in the art will recognize, the items displayed separately may be combined, and some items may be separated. For example, some of the functional blocks shown separately in fig. 2 may be implemented in a single module, and various functions of a single functional block may be implemented by one or more functional blocks in various implementations. The actual number of modules and the division of particular functions and how features are allocated therein will vary from embodiment to embodiment and, in some implementations, will depend in part on the particular combination of hardware, software, and/or firmware selected for a particular embodiment.
Fig. 3 is a block diagram of an example of an electronic device 120 according to some implementations. While some specific features are shown, those skilled in the art will recognize from this disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the particular implementations disclosed herein. For this purpose, as a non-limiting example, in some implementations, the electronic device 120 includes one or more processing units 302 (e.g., a microprocessor, ASIC, FPGA, GPU, CPU, processing core, etc.), one or more input/output (I/O) devices and sensors 306, one or more communication interfaces 308 (e.g., USB, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 310, one or more displays 312, one or more optional internally and/or externally facing image sensors 314, memory 320, and one or more communication buses 304 for interconnecting these and various other components.
In some implementations, the one or more communication buses 304 include circuitry to interconnect and control communications between system components. In some implementations, the one or more I/O devices and sensors 306 include an Inertial Measurement Unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., a blood pressure monitor, a heart rate monitor, a blood oxygen sensor, a blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, a heating and/or cooling unit, a skin-shearing engine, one or more depth sensors (e.g., structured light, time of flight, etc.), and the like.
In some implementations, the one or more displays 312 are configured to present an SR experience to a user. In some implementations, the one or more displays 312 are also configured to present flat video content (e.g., a two-dimensional or "flat" AVI, FLV, WMV, MOV, MP4, or similar file associated with a television episode or movie, or a live video passthrough of the physical environment 105) to the user. In some implementations, the one or more displays 312 correspond to holographic, digital Light Processing (DLP), liquid Crystal Displays (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistors (OLET), organic Light Emitting Diodes (OLED), surface conduction electron emitter displays (SED), field Emission Displays (FED), quantum dot light emitting diodes (QD-LED), micro-electro-mechanical systems (MEMS), and/or similar display types. In some implementations, the one or more displays 312 correspond to diffractive, reflective, polarizing, holographic, etc. waveguide displays. For example, the electronic device 120 includes a single SR display. As another example, the electronic device 120 includes an SR display for each eye of the user. In some implementations, the one or more displays 312 are capable of presenting AR and VR content. In some implementations, the one or more displays 312 are capable of presenting AR or VR content. In some implementations, the one or more optional image sensors 314 correspond to one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), IR image sensors, event-based cameras, and the like.
The memory 320 comprises high speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some implementations, the memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 320 optionally includes one or more storage devices located remotely from the one or more processing units 302. The memory 320 includes a non-transitory computer-readable storage medium. In some implementations, the memory 320 or a non-transitory computer-readable storage medium of the memory 320 stores programs, modules, and data structures, or a subset thereof, including an optional operating system 330 and an SR presentation engine 340.
Operating system 330 includes processes for handling various basic system services and for performing hardware dependent tasks. In some implementations, the SR presentation engine 340 is configured to present SR content to a user via the one or more displays 312. For this purpose, in various implementations, the SR presentation engine 340 includes a data obtainer 342, an SR presenter 344, an interaction processor 346, and a data transmitter 350.
In some implementations, the data obtainer 342 is configured to obtain data (e.g., presentation data, user interaction data, sensor data, location data, etc.) from at least one of a sensor in the physical environment 105, a sensor associated with the electronic device 120, and the controller 110. To this end, in various implementations, the data obtainer 342 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the SR presenter 344 is configured to present and update SR content via the one or more displays 312. To this end, in various implementations, the SR presenter 344 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the interaction processor 346 is configured to detect and interpret user interactions with presented SR content. To this end, in various implementations, the interaction processor 346 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
In some implementations, the data transmitter 350 is configured to transmit data (e.g., presentation data, location data, user interaction data, etc.) to at least the controller 110. To this end, in various implementations, the data transmitter 350 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
Although the data obtainer 342, the SR presenter 344, the interaction processor 346, and the data transmitter 350 are illustrated as residing on a single device (e.g., the electronic device 120), it should be understood that in other implementations, any combination of the data obtainer 342, the SR presenter 344, the interaction processor 346, and the data transmitter 350 may be located in separate computing devices.
In addition, FIG. 3 serves more as a functional description of the various features present in a particular embodiment, rather than as a structural schematic of the specific implementations described herein. As one of ordinary skill in the art will recognize, the items displayed separately may be combined, and some items may be separated. For example, some of the functional blocks shown separately in fig. 3 may be implemented in a single module, and various functions of a single functional block may be implemented in various implementations by one or more functional blocks. The actual number of modules and the division of particular functions and how features are allocated therein will vary from embodiment to embodiment and, in some implementations, will depend in part on the particular combination of hardware, software, and/or firmware selected for a particular embodiment.
Fig. 4 illustrates an exemplary SR content generation architecture 400 according to some implementations. While relevant features are shown, those of ordinary skill in the art will recognize from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, and by way of non-limiting example, the SR content generation architecture 400 includes the SR content generator 250, the SR content generator 250 generating an SR reconstruction 440 of a scene within the video content 402 by driving the digital asset 404 (e.g., a video game model of a character/actor, a point cloud of characters/actors, etc.) according to the scene depiction 420. As shown in fig. 2, the SR content generator 250 includes an ingester 252 and a reconstruction engine 254.
In some implementations, the ingester 252 is configured to obtain the video content 402 in response to a request (e.g., a command from a user). For example, the SR content generator 250 obtains a request from a user to view an SR reconstruction of specified video content (e.g., a television episode or movie). Continuing the example, in response to obtaining the request, the SR content generator 250 or a component thereof (e.g., the ingester 252) obtains (e.g., receives or retrieves) the video content 402 from a local or remote repository (e.g., a remote server, a third-party content provider, etc.). In some implementations, the ingester 252 is further configured to perform a scene understanding process and a scene parsing process on a scene in order to synthesize a scene depiction 420 of a particular scene within the video content 402 (e.g., a portion of the video content associated with scene settings, key frames, etc.).
To this end, in some implementations, the ingester 252 includes a scene understanding engine 412 and a scene parsing engine 414. In some implementations, the scene understanding engine 412 is configured to perform a scene understanding process on scenes in the video content 402. In some implementations, as part of the scene understanding process, the scene understanding engine 412 identifies scene performers, actionable objects, and non-actionable environment elements and infrastructure within the scene. For example, a scene performer corresponds to an entity (e.g., a humanoid, an animal, a robot, etc.) within the scene that affects the plot associated with the scene. For example, an actionable object corresponds to an environmental element within the scene (e.g., a tool, a drink container, movable furniture such as a chair, etc.) that is acted upon by a scene performer. For example, non-actionable environment elements and infrastructure correspond to environmental elements within the scene that are not acted upon by a scene performer (e.g., carpet, fixed furniture, walls, etc.). Scene performers, actionable objects, and non-actionable environment elements and infrastructure are described in more detail below with reference to FIG. 5.
In some implementations, the scene understanding engine 412 identifies scene performers within the scene based on facial, skeletal, and/or humanoid recognition techniques. In some implementations, the scene understanding engine 412 identifies scene performers within the scene based on object recognition and/or classification techniques. In some implementations, the scene understanding engine 412 identifies actionable objects and non-actionable environment elements and infrastructure within the scene based on object recognition and/or classification techniques.
In some implementations, as part of the scene understanding process, the scene understanding engine 412 also determines the spatial relationships between the scene performers, actionable objects, and non-actionable environment elements and infrastructure in the scene. For example, the scene understanding engine 412 creates a three-dimensional map of the settings associated with the scene and positions the scene performers, actionable objects, and non-actionable environment elements and infrastructure relative to the three-dimensional map.
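For illustration, the output of such a scene understanding pass might be represented as an inventory of labeled, categorized scene elements positioned on the three-dimensional map of the settings; the following Python sketch uses hypothetical names and toy coordinates:

from dataclasses import dataclass
from typing import Tuple

@dataclass
class SceneElement:
    label: str
    category: str                         # "scene_performer", "actionable_object", or "non_actionable"
    position: Tuple[float, float, float]  # meters, relative to the origin of the settings' 3D map

scene_map = [
    SceneElement("performer_A", "scene_performer", (1.0, 0.0, 2.0)),
    SceneElement("coffee_cup", "actionable_object", (1.2, 0.9, 2.1)),
    SceneElement("chair_a", "actionable_object", (1.0, 0.0, 2.3)),
    SceneElement("wall_north", "non_actionable", (0.0, 0.0, 5.0)),
]

# Simple spatial-relationship query: which elements are within reach of the performer?
performer = scene_map[0]
within_reach = [
    e.label for e in scene_map
    if e is not performer
    and sum((a - b) ** 2 for a, b in zip(e.position, performer.position)) ** 0.5 < 1.5
]
print(within_reach)   # ['coffee_cup', 'chair_a']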
In some implementations, the scene parsing engine 414 is configured to perform a scene parsing process on scenes in the video content 402. In some implementations, as part of the scene parsing process, the scene parsing engine 414 determines a sequence of actions for each scene performer within the scene. For example, the action sequence associated with a first scene performer of a scene includes the following time-ordered sequence of actions: enter the door, sit down in chair A, pick up the coffee cup, drink from the coffee cup, put down the coffee cup, stand up, speak to a second scene performer, wave, walk around the table, and exit through the door. In some implementations, as part of the scene parsing process, the scene parsing engine 414 also determines a trajectory for each scene performer within the scene. For example, the trajectory associated with the first scene performer within the scene includes a route or path taken by the first scene performer relative to the three-dimensional map of the settings associated with the scene.
In some implementations, the ingester 252 utilizes external data related to the video content 402 (e.g., existing scene summaries, scene action sequences, scene information, etc.) when performing the scene understanding and scene parsing processes. In some implementations, the ingester 252 is configured to synthesize a scene depiction 420 that includes the sequence of actions and the trajectory of each scene performer relative to the three-dimensional map of the settings associated with the scene.
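Continuing the coffee-cup example, a hypothetical encoding of the scene depiction 420 pairs a time-ordered action sequence with a trajectory for each scene performer, both expressed against the three-dimensional map of the settings; all names in this Python sketch are illustrative:

from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PerformerTrack:
    actions: List[Tuple[float, str]]      # (timestamp in seconds, action label), time-ordered
    trajectory: List[Tuple[float, Vec3]]  # (timestamp in seconds, position on the settings' 3D map)

@dataclass
class SceneDepiction:
    setting_id: str
    performers: Dict[str, PerformerTrack]

depiction = SceneDepiction(
    setting_id="kitchen_set",
    performers={
        "performer_A": PerformerTrack(
            actions=[(0.0, "enter door"), (2.0, "sit down in chair A"), (4.0, "pick up coffee cup"),
                     (6.0, "drink from coffee cup"), (8.0, "put down coffee cup")],
            trajectory=[(0.0, (0.0, 0.0, 0.0)), (2.0, (1.0, 0.0, 2.3)), (8.0, (1.0, 0.0, 2.3))],
        ),
    },
)
print(len(depiction.performers["performer_A"].actions))   # 5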
In some implementations, the reconstruction engine 254 is configured to retrieve the digital assets 404 associated with the video content 402 in response to the request. For example, the SR content generator 250 obtains a request from a user to view an SR reconstruction of specified video content (e.g., a television episode or movie). Continuing with the example, in response to obtaining the request, the SR content generator 250 or a component thereof (e.g., the reconstruction engine 254) obtains (e.g., receives or retrieves) the digital assets 404 from a local or remote repository (e.g., a remote server, a third-party asset provider, etc.). For example, the digital assets 404 include point clouds associated with scene performers (e.g., characters or actors) within the video content 402, video game models associated with scene performers (e.g., characters or actors) within the video content 402, and so forth. In another example, the digital assets 404 include point clouds, models, etc. associated with items and/or objects (e.g., furniture, household items, appliances, tools, food, etc.). In yet another example, the digital assets 404 include point clouds, models, etc. associated with the settings associated with a scene.
In some implementations, if a point cloud or video game model for a scene performer is not available, the reconstruction engine 254 is configured to generate a model of the scene performer based on the video content 402 and/or other external data associated with the scene performer (e.g., other video content, images, dimensions, etc. associated with the scene performer).
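A hypothetical asset lookup with this fallback behavior might look like the following Python sketch; asset_repository and build_model_from_footage are illustrative stand-ins for whatever repository and model-generation routine a given implementation uses:

def build_model_from_footage(performer_id, footage):
    # Placeholder generation step; a real implementation would fit a 3D model to the
    # performer using the video content and any available external data.
    return {"performer": performer_id, "source": "generated", "frames_used": len(footage)}

def obtain_digital_asset(performer_id, asset_repository, footage):
    """Return an existing point cloud / video game model, or build one from the footage."""
    asset = asset_repository.get(performer_id)
    if asset is not None:
        return asset
    return build_model_from_footage(performer_id, footage)

print(obtain_digital_asset("performer_A", asset_repository={}, footage=[0, 1, 2]))
# {'performer': 'performer_A', 'source': 'generated', 'frames_used': 3}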
In some implementations, the reconstruction engine 254 includes a scene setting generator 432, a mainline processor 434, and a digital asset driver 436. In some implementations, the scene setting generator 432 is configured to generate an SR reconstruction of the scene settings associated with the scene. In some implementations, the scene setting generator 432 generates the SR reconstruction of the scene settings based at least in part on the digital assets 404. In some implementations, the scene setting generator 432 generates the SR reconstruction of the scene settings at least in part by synthesizing a three-dimensional model of the scene settings associated with the scene based on the identified environmental elements and infrastructure within the scene.
In some implementations, the mainline processor 434 is configured to instantiate and manage a main line for each scene performer within the scene. In some implementations, the mainline processor 434 is also configured to instantiate and manage a main line for each actionable object within the scene.
In some implementations, the digital asset driver 436 is configured to drive the digital asset (e.g., a video game asset or point cloud) of each scene performer according to the scene depiction 420. In some implementations, the digital asset driver 436 drives the digital assets using natural speech, natural biodynamics/motion, and similar techniques. For example, the facial features (e.g., lips, mouth, cheeks, etc.) of the digital asset of a respective scene performer are synchronized with the voice track of the respective scene performer.
In some implementations, the reconstruction engine 254 is configured to generate an SR reconstruction 440 of the scene by driving the digital asset 404 according to the scene depiction 420 within an SR reconstruction of settings associated with the scene. In some implementations, the SR reconstruction 440 is provided to an SR rendering pipeline 450 for rendering to a user. In some implementations, the SR reconstruction 440 is rendered by the controller 110 and transmitted to the electronic device 120 as presentation data, where the SR reconstruction 440 is presented via the one or more displays 312.
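As a rough, non-normative sketch of the driving step, a performer's action sequence and trajectory from the scene depiction 420 can be merged into time-stamped keyframes that are handed to the rendering pipeline; the Python below uses illustrative names only:

def drive_digital_asset(asset_id, actions, trajectory):
    """Merge a performer's time-ordered actions and trajectory into per-timestamp keyframes."""
    keyframes = []
    for t, position in trajectory:
        # Pick the most recent action whose start time is at or before t (default "idle").
        current = "idle"
        for start, label in actions:
            if start <= t:
                current = label
        keyframes.append({"asset": asset_id, "t": t, "position": position, "action": current})
    return keyframes

actions = [(0.0, "enter door"), (2.0, "sit down in chair A"), (4.0, "pick up coffee cup")]
trajectory = [(0.0, (0.0, 0.0, 0.0)), (2.0, (1.0, 0.0, 2.3)), (5.0, (1.0, 0.0, 2.3))]
for keyframe in drive_digital_asset("performer_A_model", actions, trajectory):
    print(keyframe)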
Fig. 5 illustrates an exemplary scene understanding spectrum 500 according to some implementations. While related features are illustrated, those of ordinary skill in the art will recognize from the present disclosure that various other features are not illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, the scene understanding spectrum 500 includes a spectrum of scene elements identified within a scene by the SR content generator 250 in fig. 2 and 4, or a component thereof (e.g., the scene understanding engine 412 in fig. 4), ordered based on the degree of dynamics (e.g., movement, speech, etc.) of the scene elements within the scene.
In some implementations, as part of the scene understanding process, the scene understanding engine 412 identifies non-actionable environment elements and infrastructure 502, actionable objects 504, and scene performers 506 within the scene. For example, a scene performer 506 corresponds to an entity (e.g., a humanoid, an animal, a robot, etc.) within the scene that affects the plot associated with the scene. For example, an actionable object 504 corresponds to an environmental element within the scene (e.g., tools, drink containers, movable furniture, etc.) that is acted upon by a scene performer. For example, the non-actionable environment elements and infrastructure 502 correspond to environment elements within the scene that are not acted upon by the scene performers 506 (e.g., carpet, fixed furniture, walls, etc.). Thus, a scene performer 506 may move significantly within the scene, generate audible noise or speech, or act on actionable objects 504 (e.g., a ball, a steering wheel, a sword, etc.), which are in turn set in motion within the scene, whereas the non-actionable environment elements and infrastructure 502 are static scene elements that are not altered by the actions of the scene performers 506.
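One hypothetical way to realize this spectrum in code is to bucket detected scene elements by how dynamic they are observed to be over the scene, using motion and speech as simple proxies; the thresholds and names below are illustrative:

from enum import Enum

class SceneElementKind(Enum):
    NON_ACTIONABLE = 0     # static infrastructure: carpet, fixed furniture, walls
    ACTIONABLE_OBJECT = 1  # moves only when acted upon: coffee cup, chair, ball, sword
    SCENE_PERFORMER = 2    # moves and/or speaks on its own: humanoid, animal, robot

def classify(observed_motion: float, self_initiated: bool, produces_speech: bool) -> SceneElementKind:
    """Toy classifier keyed to the degree of dynamics described for FIG. 5."""
    if produces_speech or (observed_motion > 0.0 and self_initiated):
        return SceneElementKind.SCENE_PERFORMER
    if observed_motion > 0.0:
        return SceneElementKind.ACTIONABLE_OBJECT
    return SceneElementKind.NON_ACTIONABLE

print(classify(observed_motion=1.2, self_initiated=True, produces_speech=False))    # SCENE_PERFORMER
print(classify(observed_motion=0.4, self_initiated=False, produces_speech=False))   # ACTIONABLE_OBJECT
print(classify(observed_motion=0.0, self_initiated=False, produces_speech=False))   # NON_ACTIONABLE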
FIG. 6 illustrates an exemplary SR content generation context 600 according to some implementations. While related features are illustrated, those of ordinary skill in the art will recognize from the present disclosure that various other features are not illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, in the SR content generation context 600, video content 402 (e.g., a flat TV episode or movie) and digital assets 404 (e.g., a point cloud or video game model associated with a scene performer in the video content 402) are provided as inputs to the SR content generator 250 in fig. 2 and 4. As described above with reference to fig. 4, the SR content generator 250 synthesizes a scene depiction 420 for a particular scene within the video content 402 and generates an SR reconstruction 440 of the scene by driving the digital asset 404 according to the scene depiction 420.
Fig. 7 is a flowchart representation of a method 700 of generating SR reconstruction of planar video content according to some implementations. In various implementations, the method 700 is performed by a device having one or more processors and non-transitory memory (e.g., the controller 110 of fig. 1B and 2, the electronic device 120 of fig. 1A-1B and 3, or a suitable combination thereof) or components thereof (e.g., the SR content generator 250 of fig. 2 and 4). In some implementations, the method 700 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 700 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., memory). Briefly, in some cases, method 700 includes: identifying a first scene performer within a scene associated with a portion of video content; synthesizing a scene depiction of the scene corresponding to the trajectory of the first scene performer within settings associated with the scene and the action performed by the first scene performer; and generating a corresponding SR reconstruction for the scene by driving a first digital asset associated with a first scene performer in accordance with the scene depiction for the scene.
As shown at block 7-1, the method 700 includes identifying a first scene performer (e.g., a person or object to be associated with a main line) within a scene associated with a portion of video content. In some implementations, as part of the scene understanding process, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412 in fig. 4) identifies the first scene performer. In some implementations, the first scene performer corresponds to a humanoid character, a robotic character, an animal, a vehicle, etc. (e.g., an entity that performs an action and/or accomplishes a goal). In some implementations, as part of the scene understanding process, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412 in fig. 4) identifies one or more other scene performers within the scene. For example, the SR content generator 250 or its components (e.g., the scene understanding engine 412 in fig. 4) perform the scene understanding process on a per-scene basis, based on key frames or the like. The process associated with identifying a scene performer is described in more detail above with reference to fig. 4.
As shown in block 7-2, the method 700 includes synthesizing a scene depiction of the scene that corresponds to the trajectory of the first scene performer within the settings associated with the scene and the actions performed by the first scene performer. In some implementations, the scene depiction is generated by an image description/parsing process whereby, first, the SR content generator 250 performs object/humanoid recognition on each frame. Next, the SR content generator 250 determines the spatial relationships (e.g., depth) between the identified objects/humanoids and the scene/settings. The SR content generator 250 then instantiates a main line for each identified object/humanoid. Next, the SR content generator 250 generates a scene depiction (e.g., a transcript) of the video content that tracks the main lines. In some implementations, the scene depiction includes a sequence of actions and a trajectory for each scene performer within the scene. The process associated with synthesizing the scene depiction is described in more detail above with reference to FIG. 4.
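A hypothetical per-key-frame loop along these lines might accumulate a main line (time-ordered positions) for each identified performer before the scene depiction is emitted; detect_humanoids below is a stand-in for whatever recognition technique an implementation uses:

def detect_humanoids(frame):
    # Stand-in for object/humanoid recognition on one key frame; returns (id, position) pairs.
    return frame["detections"]

def build_main_lines(key_frames):
    """Accumulate a time-ordered main line (trajectory) for each identified performer."""
    main_lines = {}
    for frame in key_frames:
        for performer_id, position in detect_humanoids(frame):
            main_lines.setdefault(performer_id, []).append((frame["t"], position))
    return main_lines

key_frames = [
    {"t": 0.0, "detections": [("performer_A", (0.0, 0.0, 0.0))]},
    {"t": 2.0, "detections": [("performer_A", (1.0, 0.0, 2.3)), ("performer_B", (3.0, 0.0, 1.0))]},
]
print(build_main_lines(key_frames))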
As shown in block 7-3, method 700 includes generating a corresponding SR reconstruction for the scene by driving a first digital asset associated with a first scene performer in accordance with the scene depiction for the scene. In some implementations, the SR content generator 250 also generates an SR reconstruction of the scene with other digital assets associated with the scene settings, objects, and so on. The SR reconstruction process is described in more detail above with reference to fig. 4.
In some implementations, the digital assets correspond to video game models of scene performers (e.g., characters/actors) in the video content. In some implementations, the digital assets correspond to skinned point clouds associated with scene performers (e.g., characters/actors) in the video content. In some implementations, the digital assets correspond to models of the settings associated with the scene and of objects within those settings.
Fig. 8 is a flowchart representation of a method 800 of generating an SR reconstruction of planar video content according to some implementations. In various implementations, the method 800 is performed by a device having one or more processors and non-transitory memory (e.g., the controller 110 of fig. 1B and 2, the electronic device 120 of fig. 1A-1B and 3, or a suitable combination thereof) or components thereof (e.g., the SR content generator 250 of fig. 2 and 4). In some implementations, the method 800 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 800 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., memory). Briefly, in some cases, method 800 includes: detecting a trigger for generating SR content based on specified video content; obtaining the video content; performing a scene understanding process on the video content; performing a scene parsing process on the video content to synthesize a scene depiction; obtaining digital assets associated with the video content; generating an SR reconstruction of the scene by driving the digital assets according to the scene depiction; and presenting the SR reconstruction.
As shown at block 8-1, the method 800 includes detecting a trigger for generating SR content based on video content. In some implementations, the SR content generator 250 or a component thereof obtains a request from a user to view an SR reconstruction of specified video content (e.g., a television episode or movie). Thus, the request from the user to view an SR reconstruction of the specified video content corresponds to a trigger for generating SR content based on the specified video content.
As shown in block 8-2, method 800 includes obtaining video content. In some implementations, the SR content generator 250 or a component thereof obtains (e.g., receives or retrieves) video content. For example, the SR content generator 250 obtains video content from a local or remote repository (e.g., a remote server, a third party content provider, etc.).
In some implementations, the SR content generator 250 or a component thereof obtains (e.g., receives or retrieves) audio content instead of, or in addition to, the video content. For example, the audio content corresponds to an audio track or audio portion associated with the video content. Thus, in some implementations, the SR content generator 250 creates the SR reconstruction of the video content based at least in part on the video content, the associated audio content, and/or external data associated with the video content (e.g., pictures of actors in the video content, height and other measurements of those actors, various views of settings and objects such as plan, side, and perspective views, and the like). In another example, the audio content corresponds to an audiobook, a radio play, or the like. Accordingly, in some implementations, the SR content generator 250 creates an SR reconstruction of the audio content based at least in part on the audio content and external data associated with the audio content (e.g., pictures of characters in the audio content, height and other measurements of those characters, various views of settings and objects, and the like).
In some implementations, the SR content generator 250 or a component thereof obtains (e.g., receives or retrieves) textual content instead of, or in addition to, the video content. For example, the textual content corresponds to a transcript or script associated with the video content. Thus, in some implementations, the SR content generator 250 creates the SR reconstruction of the video content based at least in part on the video content, the associated textual content, and/or external data associated with the video content (e.g., pictures of actors in the video content, height and other measurements of those actors, various views of settings and objects such as plan, side, and perspective views, and the like).
As shown in block 8-3, method 800 includes performing a scene understanding process on the video content. In some implementations, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412) performs a scene understanding process on the video content. The scene understanding process is described in more detail above with reference to fig. 4.
In some implementations, as shown in block 8-3a, the method 800 includes identifying scene performers within the scene. In some implementations, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412) identifies scene performers, actionable objects, and non-actionable environment elements and infrastructure within the scene.
For example, a scene performer corresponds to an entity (e.g., a humanoid, an animal, a robot, a vehicle, etc.) within a scene that affects the plot associated with the scene. For example, actionable objects correspond to environment elements within a scene (e.g., tools, toys, drinking containers, furniture, etc.) that are acted upon by a scene performer. For example, non-actionable environment elements and infrastructure correspond to environment elements within a scene that are not acted upon by a scene performer (e.g., carpet, walls, etc.). Scene performers, actionable objects, and non-actionable environment elements and infrastructure are described in more detail above with reference to fig. 5.
In some implementations, the scene understanding engine 412 identifies scene performers within the scene based on facial, skeletal, and/or humanoid recognition techniques. In some implementations, the scene understanding engine 412 identifies scene performers within the scene based on object recognition and/or classification techniques. In some implementations, the scene understanding engine 412 identifies actionable objects and non-actionable environment elements and infrastructure within the scene based on object recognition and/or classification techniques.
In some implementations, as shown in block 8-3b, the method 800 includes determining spatial relationships between the scene performers and the setting associated with the scene. For example, a scene corresponds to a theatrical scene or a predefined portion of the video content. In some implementations, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412) determines the spatial relationships between scene performers, actionable objects, and non-actionable environment elements and infrastructure in the scene. For example, the scene understanding engine 412 creates a three-dimensional map of the setting associated with the scene and positions the scene performers, actionable objects, and non-actionable environment elements and infrastructure relative to the three-dimensional map. Thus, for example, the SR content generator 250 determines the position of a scene performer in the depth dimension relative to the setting associated with the scene.
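For illustration only, a toy three-dimensional map that positions scene performers and environment elements in a shared coordinate frame, including the depth dimension; the SceneMap structure and the coordinates are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneMap:
    """Toy three-dimensional map of a setting; units and axes are illustrative."""
    entries: Dict[str, Vec3]

    def place(self, name: str, position: Vec3) -> None:
        self.entries[name] = position

    def relative_offset(self, a: str, b: str) -> Vec3:
        ax, ay, az = self.entries[a]
        bx, by, bz = self.entries[b]
        return (bx - ax, by - ay, bz - az)

# Usage: position performers and environment elements relative to the same map.
scene_map = SceneMap(entries={})
scene_map.place("sofa", (0.0, 0.0, 0.0))          # environment element as reference point
scene_map.place("performer_a", (1.2, 0.0, 2.5))   # includes the depth dimension
print(scene_map.relative_offset("sofa", "performer_a"))
```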
In some implementations, the SR content generator 250 or a component thereof (e.g., the scene understanding engine 412) identifies at least one environment element (e.g., furniture, etc.) within the scene and determines a spatial relationship between at least the first scene performer and the at least one environment element. In some implementations, the walls/outer dimensions of the scene are ignored. Instead, environment elements such as tables or sofas are used as reference points for the scene depiction. Thus, the SR reconstruction shows the scene with its environment elements but without the accompanying walls.
As shown at block 8-4, the method 800 includes performing a scene parsing process on the video content to synthesize a scene depiction. In some implementations, the scene depiction includes an overall micro-script of the scene or a per-performer sequence of actions and interactions within the scene (e.g., for character A: pick up a cup, drink from the cup, put down the cup, look at character B, speak with character B, stand up from a chair, walk out of the room). In some implementations, the SR content generator 250 or a component thereof (e.g., the scene parsing engine 414) performs the scene parsing process on the video content to generate the scene depiction. The scene parsing process is described in more detail above with reference to fig. 4.
In some implementations, as shown in block 8-4a, the method 800 includes determining a sequence of actions for each scene performer. In some implementations, the SR content generator 250 or a component thereof (e.g., the scene parsing engine 414) determines a sequence of actions for each scene performer within the scene. For example, the action sequence associated with a first scene performer within a scene includes the following time-ordered sequence of actions: enter through the door, sit down in chair A, pick up the coffee cup, drink from the coffee cup, put down the coffee cup, stand up, speak to the second scene performer, wave, walk around the table, and exit through the door.
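For illustration only, the time-ordered action sequence from the example above expressed as plain data; the verb and target labels are arbitrary and not a defined vocabulary.

```python
from collections import namedtuple

Action = namedtuple("Action", ["time", "verb", "target"])

# Illustrative encoding of the first scene performer's action sequence.
first_performer_actions = [
    Action(0, "enter", "door"),
    Action(1, "sit_down", "chair_a"),
    Action(2, "pick_up", "coffee_cup"),
    Action(3, "drink_from", "coffee_cup"),
    Action(4, "put_down", "coffee_cup"),
    Action(5, "stand_up", None),
    Action(6, "speak_to", "second_performer"),
    Action(7, "wave", None),
    Action(8, "walk_around", "table"),
    Action(9, "exit", "door"),
]
```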
In some implementations, as shown in block 8-4b, the method 800 includes determining a trajectory for each scene performer. In some implementations, the SR content generator 250 or a component thereof (e.g., the scene parsing engine 414) also determines a trajectory for each scene performer within the scene. For example, the trajectory associated with a first scene performer within a scene corresponds to the route or path taken by the first scene performer relative to the three-dimensional map of the setting associated with the scene.
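For illustration only, a trajectory for the first scene performer expressed as timestamped waypoints in the coordinate frame of the setting's three-dimensional map, together with a simple interpolation helper; the waypoint values are invented for the sketch.

```python
# Illustrative trajectory: (time, x, y, z) waypoints in the setting's map frame.
first_performer_trajectory = [
    (0.0, 4.0, 0.0, 1.0),   # at the door
    (1.5, 2.5, 0.0, 2.0),   # approaching chair A
    (3.0, 2.0, 0.0, 2.2),   # seated at the table
    (8.0, 0.5, 0.0, 3.0),   # walking around the table
    (9.5, 4.0, 0.0, 1.0),   # back at the door
]

def position_at(trajectory, t):
    """Linear interpolation between waypoints; a stand-in for real path fitting.
    Times outside the waypoint range fall back to the last waypoint."""
    for (t0, *p0), (t1, *p1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
    return tuple(trajectory[-1][1:])

print(position_at(first_performer_trajectory, 2.0))
```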
As shown at block 8-5, the method 800 includes obtaining digital assets associated with the video content. In some implementations, the digital assets are received or retrieved from a library of assets associated with the video content. In some implementations, the digital assets correspond to pre-existing video game models of the scene performers (e.g., objects and/or humanoid characters) in the scene. In some implementations, the digital assets correspond to pre-existing skinned point clouds of the scene performers. In some implementations, the digital assets correspond to pre-existing models of the setting (e.g., the bridge of a spacecraft, the interior of a car, an apartment living room, NYC Times Square, etc.).
In some implementations, the digital assets are generated on-the-fly based at least in part on the video content and external data associated with the video content. In some implementations, the external data associated with the video content corresponds to pictures of actors, height and other measurements of actors, various views of settings and objects (e.g., plan, side, perspective, etc. views), and so on.
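For illustration only, a minimal sketch of the asset acquisition step combining both paths described above (library retrieval with an on-the-fly fallback); `asset_library` and `build_from_external` are hypothetical interfaces the disclosure leaves unspecified.

```python
def obtain_digital_asset(performer_label, asset_library, build_from_external):
    """Prefer a pre-existing asset (e.g., a video game model or skinned point
    cloud) from the library; otherwise fall back to on-the-fly generation
    from external data such as pictures, measurements, and reference views."""
    asset = asset_library.get(performer_label)
    if asset is None:
        asset = build_from_external(performer_label)
    return asset
```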
As shown at block 8-6, the method 800 includes generating an SR reconstruction of the scene by driving the digital assets according to the scene depiction. In some implementations, the SR content generator 250 or a component thereof (e.g., the reconstruction engine 254) generates the SR reconstruction of the scene by driving the digital assets according to the scene depiction. The SR reconstruction process is described in more detail above with reference to fig. 4.
In some implementations, generating the SR reconstruction of the scene includes instantiating a main line for each scene performer. For example, each main line corresponds to a sequence of actions for a person or object in the scene. As one example, for a first scene performer in the scene, the main line comprises the following sequence of actions: the first scene performer sits down on a chair, eats a meal, stands up, walks to a sofa, and sits down on the sofa. As another example, for a vase object in the scene, the main line comprises the following sequence of actions: the vase is picked up, thrown against the wall, breaks into pieces, and falls onto the floor.
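For illustration only, the sketch below encodes the two example main lines above as plain data and replays each one step-by-step; the `drive` callable is a hypothetical stand-in for the asset-animation call of the rendering pipeline.

```python
# Illustrative main lines for two entities in a scene; each is an ordered
# list of (verb, target) steps mirroring the chair/sofa and vase examples.
main_lines = {
    "first_performer": [("sit_down", "chair"), ("eat", "meal"), ("stand_up", None),
                        ("walk_to", "sofa"), ("sit_down", "sofa")],
    "vase":            [("picked_up", None), ("thrown_against", "wall"),
                        ("breaks_into_pieces", None), ("falls_onto", "floor")],
}

def reconstruct_scene(main_lines, drive):
    """Drive each entity's digital asset along its main line, one step at a time."""
    for entity, steps in main_lines.items():
        for step_index, (verb, target) in enumerate(steps):
            drive(entity, step_index, verb, target)
```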
As shown at block 8-7, the method 800 includes presenting the SR reconstruction. For example, referring to fig. 4, the reconstruction engine 254 provides the SR reconstruction 440 to the SR rendering pipeline 450 for presentation to the user. As one example, referring to fig. 1B-4, the SR reconstruction 440 is rendered by the controller 110 and transmitted to the electronic device 120 as presentation data, where the SR reconstruction 440 is presented via the one or more displays 312. For example, a user of the electronic device 120 can experience the SR reconstruction 440 as if he/she were in the action (e.g., a first-person experience). In another example, the user of the electronic device 120 can experience the SR reconstruction 440 as if he/she were looking down on the action from a bird's-eye view (e.g., a third-person experience). In this example, the SR reconstruction 440 may be presented as if it were occurring within the physical environment 105 (e.g., a three-dimensional projection of the video content appears on a flat surface within the physical environment 105).
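For illustration only, a toy viewpoint selector for the first-person and third-person presentation modes described above; the camera offsets and the returned pose format are arbitrary assumptions, not part of the rendering pipeline defined by the disclosure.

```python
def select_viewpoint(mode, performer_position, setting_center):
    """Return an illustrative camera pose for presenting the SR reconstruction.

    "first_person" anchors the camera at the chosen performer's position;
    "third_person" places it above the setting for a bird's-eye view.
    """
    x, y, z = performer_position
    if mode == "first_person":
        return {"position": (x, y + 1.7, z), "look_at": setting_center}
    if mode == "third_person":
        cx, cy, cz = setting_center
        return {"position": (cx, cy + 6.0, cz), "look_at": setting_center}
    raise ValueError(f"unknown presentation mode: {mode}")

print(select_viewpoint("third_person", (1.0, 0.0, 2.0), (0.0, 0.0, 0.0)))
```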
While various aspects of the implementations described above are described within the scope of the appended claims, it should be apparent that various features of the implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the present embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be interpreted to mean "when the prerequisite is true" or "in response to a determination" or "according to a determination" or "in response to a detection" that the prerequisite is true, depending on the context. Similarly, the phrase "if it is determined that [ the prerequisite is true ]" or "if [ the prerequisite is true ]" or "when [ the prerequisite is true ]" is interpreted to mean "upon determining that the prerequisite is true" or "in response to determining" or "according to determining that the prerequisite is true" or "upon detecting that the prerequisite is true" or "in response to detecting" that the prerequisite is true, depending on context.

Claims (20)

1. A method of generating a synthesized reality (SR) reconstruction, comprising:
at a computing system comprising non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices:
detecting, via the one or more input devices, a request to view an SR reconstruction of at least a portion of pre-existing video content; and
in response to detecting the request:
disambiguating between one or more actionable items, one or more non-actionable environment elements, and a first scene performer within a scene associated with a portion of the pre-existing video content;
synthesizing a scene depiction of the scene, the scene depiction comprising a trajectory of the first scene performer relative to the one or more non-actionable environment elements within a physical environment associated with the scene and an action performed by the first scene performer with the one or more actionable items; and
generating a corresponding SR reconstruction for the scene by driving a first digital asset associated with the first scene performer in accordance with the scene depiction for the scene.
2. The method of claim 1, further comprising:
determining a spatial relationship between the first scene performer and the physical environment associated with the scene.
3. The method of claim 1, further comprising:
determining a spatial relationship between at least the first scene performer and at least one non-actionable environment element within the scene.
4. The method of any of claims 1 to 3, further comprising:
identifying one or more other scene performers within the scene.
5. The method of any of claims 1-3, wherein the first scene performer corresponds to one of a humanoid, an animal, a vehicle, or a robot.
6. The method of any of claims 1-3, wherein the first digital asset is obtained from a library of digital assets associated with the pre-existing video content.
7. The method of any of claims 1-3, wherein the first digital asset is generated on-the-fly based at least in part on the pre-existing video content and external data associated with the pre-existing video content.
8. The method of any of claims 1-3, wherein the first digital asset corresponds to a pre-existing video game model of the first scene performer in the scene.
9. The method of any of claims 1-3, wherein the first digital asset corresponds to a pre-existing skinned point cloud of the first scene performer in the scene.
10. The method of any of claims 1-3, wherein a second digital asset corresponds to a pre-existing model of the physical environment associated with the scene.
11. The method of any of claims 1-3, wherein synthesizing the scene depiction of the scene comprises determining a transcript of the scene, the transcript comprising a sequence of actions for each scene performer in the scene.
12. The method of any of claims 1 to 3, further comprising:
causing presentation of the SR reconstruction of the scene via the display device.
13. A computing system, comprising:
one or more processors;
a non-transitory memory;
an interface for communicating with a display device and one or more input devices; and
one or more programs stored in the non-transitory memory that, when executed by the one or more processors, cause the computing system to:
detecting, via the one or more input devices, a request to view a synthesized reality (SR) reconstruction of at least a portion of pre-existing video content; and
in response to detecting the request:
disambiguating between one or more actionable items, one or more non-actionable environment elements, and a first scene performer within a scene associated with a portion of the pre-existing video content;
synthesizing a scene depiction of the scene, the scene depiction comprising a trajectory of the first scene performer relative to the one or more non-actionable environment elements within a physical environment associated with the scene and an action performed by the first scene performer with the one or more actionable items; and
generating a corresponding SR reconstruction for the scene by driving a first digital asset associated with the first scene performer in accordance with the scene depiction for the scene.
14. The computing system of claim 13, wherein the one or more programs further cause the computing system to:
determining a spatial relationship between at least the first scene performer and at least one non-actionable environment element within the scene.
15. The computing system of any of claims 13 to 14, wherein the first digital asset is obtained from a library of digital assets associated with the pre-existing video content.
16. The computing system of any of claims 13 to 14, wherein the first digital asset is generated on-the-fly based at least in part on the pre-existing video content and external data associated with the pre-existing video content.
17. A non-transitory memory storing one or more programs that, when executed by one or more processors of a computing system with an interface for communicating with a display device and one or more input devices, cause the computing system to:
detecting, via the one or more input devices, a request to view a synthesized reality (SR) reconstruction of at least a portion of pre-existing video content; and
in response to detecting the request:
disambiguating between one or more actionable items, one or more non-actionable environment elements, and a first scene performer within a scene associated with a portion of the pre-existing video content;
synthesizing a scene depiction of the scene, the scene depiction comprising a trajectory of the first scene performer relative to the one or more non-actionable environment elements within a physical environment associated with the scene and an action performed by the first scene performer with the one or more actionable items; and
generating a corresponding SR reconstruction of the scene by driving a first digital asset associated with the first scene performer in accordance with the scene depiction of the scene.
18. The non-transitory memory of claim 17, wherein the one or more programs further cause the computing system to:
determining a spatial relationship between at least the first scene performer and at least one non-actionable environment element within the scene.
19. The non-transitory memory of any one of claims 17-18, wherein the first digital asset is obtained from a library of digital assets associated with the pre-existing video content.
20. The non-transitory memory of any one of claims 17-18, wherein the first digital asset is generated on-the-fly based at least in part on the pre-existing video content and external data associated with the pre-existing video content.
CN201980008675.1A 2018-01-22 2019-01-18 Method and apparatus for generating a composite reality reconstruction of planar video content Active CN111615832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211357526.6A CN115564900A (en) 2018-01-22 2019-01-18 Method and apparatus for generating a synthetic reality reconstruction of planar video content

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201862620334P 2018-01-22 2018-01-22
US62/620,334 2018-01-22
US201862734061P 2018-09-20 2018-09-20
US62/734,061 2018-09-20
PCT/US2019/014260 WO2019143984A1 (en) 2018-01-22 2019-01-18 Method and device for generating a synthesized reality reconstruction of flat video content

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211357526.6A Division CN115564900A (en) 2018-01-22 2019-01-18 Method and apparatus for generating a synthetic reality reconstruction of planar video content

Publications (2)

Publication Number Publication Date
CN111615832A CN111615832A (en) 2020-09-01
CN111615832B true CN111615832B (en) 2022-10-25

Family

ID=66334527

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201980008675.1A Active CN111615832B (en) 2018-01-22 2019-01-18 Method and apparatus for generating a composite reality reconstruction of planar video content
CN202211357526.6A Pending CN115564900A (en) 2018-01-22 2019-01-18 Method and apparatus for generating a synthetic reality reconstruction of planar video content

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211357526.6A Pending CN115564900A (en) 2018-01-22 2019-01-18 Method and apparatus for generating a synthetic reality reconstruction of planar video content

Country Status (4)

Country Link
US (1) US11386653B2 (en)
EP (2) EP4462796A1 (en)
CN (2) CN111615832B (en)
WO (1) WO2019143984A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2034441A1 (en) * 2007-09-05 2009-03-11 Sony Corporation System and method for communicating a representation of a scene
CN101593349A (en) * 2009-06-26 2009-12-02 福州华映视讯有限公司 Bidimensional image is converted to the method for 3-dimensional image
CN101631257A (en) * 2009-08-06 2010-01-20 中兴通讯股份有限公司 Method and device for realizing three-dimensional playing of two-dimensional video code stream
CN101917636A (en) * 2010-04-13 2010-12-15 上海易维视科技有限公司 Method and system for converting two-dimensional video of complex scene into three-dimensional video
CN106060522A (en) * 2016-06-29 2016-10-26 努比亚技术有限公司 Video image processing device and method
CN106792151A (en) * 2016-12-29 2017-05-31 上海漂视网络科技有限公司 A kind of virtual reality panoramic video player method
KR101754700B1 (en) * 2016-05-17 2017-07-19 주식회사 카이 OTT service system and method for changing video mode based on 2D video and VR video
CN108140263A (en) * 2015-12-21 2018-06-08 大连新锐天地传媒有限公司 AR display systems and method applied to image or video

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW452748B (en) * 1999-01-26 2001-09-01 Ibm Description of video contents based on objects by using spatio-temporal features and sequential of outlines
CN1581959A (en) * 2003-08-11 2005-02-16 本·库特奈尔 Simulation of attendance at a live event
EP2230629A3 (en) * 2008-07-16 2012-11-21 Verint Systems Inc. A system and method for capturing, storing, analyzing and displaying data relating to the movements of objects
US20140267228A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Mapping augmented reality experience to various environments
US9438878B2 (en) * 2013-05-01 2016-09-06 Legend3D, Inc. Method of converting 2D video to 3D video using 3D object models
US10026228B2 (en) * 2015-02-25 2018-07-17 Intel Corporation Scene modification for augmented reality using markers with parameters
KR20170011190A (en) * 2015-07-21 2017-02-02 엘지전자 주식회사 Mobile terminal and control method thereof
US10416667B2 (en) * 2016-02-03 2019-09-17 Sony Corporation System and method for utilization of multiple-camera network to capture static and/or motion scenes
CN106803283A (en) * 2016-12-29 2017-06-06 东莞新吉凯氏测量技术有限公司 Interactive three-dimensional panorama multimedium virtual exhibiting method based on entity museum
WO2018144315A1 (en) * 2017-02-01 2018-08-09 Pcms Holdings, Inc. System and method for augmented reality content delivery in pre-captured environments


Also Published As

Publication number Publication date
CN115564900A (en) 2023-01-03
CN111615832A (en) 2020-09-01
WO2019143984A1 (en) 2019-07-25
EP4462796A1 (en) 2024-11-13
EP3744108B1 (en) 2024-09-25
US11386653B2 (en) 2022-07-12
EP3744108A1 (en) 2020-12-02
US20200387712A1 (en) 2020-12-10

Similar Documents

Publication Publication Date Title
CN111273766B (en) Method, apparatus and system for generating an affordance linked to a simulated reality representation of an item
CN111602104B (en) Method and apparatus for presenting synthetic reality content in association with identified objects
US20230324985A1 (en) Techniques for switching between immersion levels
CN110715647A (en) Object detection using multiple three-dimensional scans
CN110633617A (en) Plane detection using semantic segmentation
US12039659B2 (en) Method and device for tailoring a synthesized reality experience to a physical setting
CN112189183A (en) Method and apparatus for presenting audio and synthetic reality experiences
US11468611B1 (en) Method and device for supplementing a virtual environment
CN111602105B (en) Method and apparatus for presenting synthetic reality accompanying content
CN113678173A (en) Method and apparatus for graph-based placement of virtual objects
CN112987914A (en) Method and apparatus for content placement
CN111615832B (en) Method and apparatus for generating a composite reality reconstruction of planar video content
CN112654951A (en) Mobile head portrait based on real world data
CN112740280A (en) Computationally efficient model selection
CN118450313A (en) Method and apparatus for sound processing of synthetic reality scenes
US20240013487A1 (en) Method and device for generating a synthesized reality reconstruction of flat video content
CN112639889A (en) Content event mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant