WO2018175680A1 - Uhd holographic filming and computer generated video process - Google Patents
- Publication number
- WO2018175680A1 (PCT/US2018/023695)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- holographic
- performance
- remote
- venue
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 76
- 230000008569 process Effects 0.000 title description 60
- 238000009877 rendering Methods 0.000 claims abstract description 12
- 238000005516 engineering process Methods 0.000 claims description 20
- 238000004519 manufacturing process Methods 0.000 claims description 20
- 230000001360 synchronised effect Effects 0.000 claims description 13
- 238000012545 processing Methods 0.000 claims description 11
- 230000001815 facial effect Effects 0.000 claims description 8
- 230000000694 effects Effects 0.000 claims description 6
- 230000008921 facial expression Effects 0.000 claims 2
- 238000010586 diagram Methods 0.000 description 25
- 230000006870 function Effects 0.000 description 21
- 241000282414 Homo sapiens Species 0.000 description 16
- 238000013461 design Methods 0.000 description 7
- 230000008451 emotion Effects 0.000 description 7
- 238000004891 communication Methods 0.000 description 6
- 241000282412 Homo Species 0.000 description 4
- 230000003068 static effect Effects 0.000 description 4
- 238000004590 computer program Methods 0.000 description 3
- 239000011521 glass Substances 0.000 description 3
- 230000010365 information processing Effects 0.000 description 3
- 230000003340 mental effect Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000003491 array Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 230000014509 gene expression Effects 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000000712 assembly Effects 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000004821 distillation Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000001343 mnemonic effect Effects 0.000 description 1
- 239000003607 modifier Substances 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000003252 repetitive effect Effects 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000035807 sensation Effects 0.000 description 1
- 230000000153 supplemental effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03H—HOLOGRAPHIC PROCESSES OR APPARATUS
- G03H1/00—Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
- G03H1/0005—Adaptation of holography to specific applications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03H—HOLOGRAPHIC PROCESSES OR APPARATUS
- G03H1/00—Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
- G03H1/0005—Adaptation of holography to specific applications
- G03H2001/0088—Adaptation of holography to specific applications for video-holography, i.e. integrating hologram acquisition, transmission and display
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03H—HOLOGRAPHIC PROCESSES OR APPARATUS
- G03H2226/00—Electro-optic or electronic components relating to digital holography
- G03H2226/04—Transmission or communication means, e.g. internet protocol
Definitions
- the present invention is related to increasing the holographic video resolution from HD to Ultra High Definition (UHD).
- the present embodiments are related to the live or pre-recorded events and a UHD filming process for various types of venues, stages, individuals, performers, athletes, and/or UHD computer generated stages, backgrounds and/or celebrity look alike video images, and creating a UHD life-like holographic video. More particularly, the embodiments are related to capturing separate UHD video elements of individual real-life venues or a Chroma Keying and/or black screen technology venue, adding separate elements filmed or computer generated separately from each other, then combining two or more elements into one UHD video and sending them through a data network connection to remote holographic imaging devices and/or holographic venues.
- This new process can also create a more life-like/realistic illusion of an event by combining two or more UHD elements together such as venues, live or prerecorded individuals, performers, athletes, venue equipment, venue back drops/backgrounds, venue props and/or computer generated individuals, live or legacy celebrities/artists, athletes, back drops/backgrounds, equipment, props, text, and sending the UHD video through a data network connection where the UHD video is viewed on a local or remote holographic device and/or at remote venues anywhere around the world, and/or the ability to send multiple UHD holographic videos that can be synchronized together and simultaneously sent through one or more data network connections to a holographic device and/or a venue with two or more holographic projection type transparent screen(s) to receive the synchronized images from two or more projectors positioned to create multiple layers of 3D holographic optical illusions, also to optionally be seen anywhere around the world.
- live or recorded audio and real-life holographic imagery can be combined with computer generated holographic imagery of a person or persons such as live or deceased individuals, celebrities, athletes, and/or combine multiple persons and re-create a person(s), animate the emotion, movements, and voice of a person(s), create UHD holographic video images of a person(s), and combine the computer generated person(s) with real-life holographic environments such as public or private venues, venue equipment, venue backgrounds, back drops, and venue props to provide a more realistic "life-like" UHD holographic video image of the person(s) behaving like a real "live" person with all the same facial emotions, movements, singing, and talking, such as but not limited to performing on stage, entertaining at public venues, sporting events, at home, or in an office.
- the embodiments can also enable the re-creation of various past performances or events and/or the creation of present events by using our computer generated UHD person(s) video combined with the real-life UHD holographic environment video to create one or more UHD holographic video(s), wherein either a singular UHD holographic video and/or multiple UHD holographic videos can be sent through one or more data network connections to a remote holographic device or venue with at least one holographic projection type transparent screen to receive the image from a projector and to create a 3D holographic optical illusion on the screen anywhere around the world.
- UHD holographic videos can be synchronized together and simultaneously sent through one or more data network connections to holographic devices or venues with two or more holographic projection type transparent screen(s) to receive the synchronized images from two or more projectors positioned to create multiple layers of 3D holographic optical illusions for spectators anywhere around the world.
- This process can also allow for local and worldwide viewing of local and worldwide UHD holographic events being held past or present, and where anyone can also watch the UHD holographic event in 2D video, save a 2D video, and/or send 2D video to a friend.
- the present embodiments are also related to the holographic video capturing and creating process, bringing higher video resolution than current HD technology and increasing holographic video to at least 2K, 4K, 8K, or more UHD (Ultra High Definition) by using one or more cameras with one or multiple camera perspectives of venues, live or prerecorded individuals, performers, athletes, venue equipment, venue back drops/backgrounds, venue props, and/or adding computer generated images of a person or persons and real-life holographic environments such as public or private venues, venue equipment, venue backgrounds, back drops, and venue props to provide a more realistic "life-like" UHD holographic video image of the person(s) behaving like a real person with all the same facial emotions, movements, singing, and talking, then creating multiple UHD videos and either combining them into one holographic performance sent to holographic devices and/or various holographic venues and/or stage designs, or creating two or more UHD videos that can be synchronized together and simultaneously sent to holographic devices and/or various holographic venues and/or stage designs.
- a method can include providing more than one camera disposed around a performance of more than one individual, capturing various perspectives of video representing each individual's activity at the performance, and access to a production server for receiving the various perspectives captured by the more than one camera and for processing the various perspectives for rendering on at least one screen by remote clients, wherein the processing includes the synchronization of the various perspectives into a data file.
- At least two videos of varying resolution can be captured at the performance featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images, combined into a data file in a production server, sent, including a combination of the at least two video elements at varying resolutions, through a data network to a remote client, and rendered on a display.
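As a concrete sketch of the data file described above, the following Python fragment bundles two separately captured video elements of differing resolution into one manifest. The field names and element labels are illustrative assumptions, not terms defined by the patent.

```python
# Hypothetical sketch of a "data file" bundling two or more separately
# captured video elements of differing resolution for delivery to a
# remote client. All field names here are illustrative assumptions.
import json

def build_performance_file(elements):
    """Bundle separately captured video elements into one manifest."""
    return json.dumps({
        "type": "holographic-performance",
        "elements": [
            {
                "element_id": e["id"],
                "source": e["source"],          # e.g. a camera feed or CGI render
                "resolution": e["resolution"],  # per-element: HQ, HD, 4K, 8K ...
                "start_timecode": e.get("start_timecode", "00:00:00:00"),
            }
            for e in elements
        ],
    }, indent=2)

manifest = build_performance_file([
    {"id": "performer-1", "source": "camera-A", "resolution": "3840x2160"},
    {"id": "background", "source": "cgi-stage", "resolution": "1920x1080"},
])
```

A production server would fill such a manifest from its ingest pipeline before transmission; the remote client reads it to know which element goes to which screen.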
- FIG. 1 illustrates a diagram of a standard filming process with a camera in a normal upright position and showing the camera field of view as a horizontal rectangular image being captured in accordance with a feature of the embodiments;
- FIG. 2 illustrates a diagram of an unconventional filming process with a camera rotated to a ninety-degree angle and showing the camera field of view as a vertical rectangular image being captured in accordance with a feature of the embodiments;
- FIG. 3 illustrates a diagram showing multiple unconventional filming processes with a camera rotated to a ninety-degree angle and showing the camera field of view as a vertical rectangular image being captured, and a standard filming process with a camera in a normal upright position capturing a background, and combining all elements into one UHD video in accordance with a feature of the embodiments;
- FIG. 4 illustrates a diagram of a standard filming process with a camera or multiple cameras at different heights during the recording process, in conjunction with the different heights at which the viewing audience may be sitting to watch the holographic image, in accordance with a feature of the embodiments;
- FIG. 5 illustrates a diagram of a standard filming process with a camera in a normal upright position and showing the live band and stage being captured at one time in accordance with a feature of the embodiments;
- FIG. 6 illustrates a diagram of a multiple unconventional filming process with multiple cameras rotated to a ninety-degree angle and capturing each individual performer and some equipment at the same time, filmed separately using a Chroma Keying and/or black screen technology stage, in accordance with a feature of the embodiments;
- FIG. 7 illustrates a diagram of a multiple unconventional filming process that captured each individual performer and equipment using Chroma Keying and/or black screen technology, a choice of either a live stage or a computer generated background, and combining all elements into one UHD video in accordance with a feature of the embodiments;
- FIG. 8 illustrates a diagram of both a live stage background and the performers, and capturing each individual performer and equipment using Chroma Keying and/or black screen technology, and combining a live stage background or a computer generated background with the performers into one video and sending this UHD video through a network connection to many different types of venues and/or devices in accordance with a feature of the embodiments;
- FIG. 9 illustrates the process for creating a computer generated person from existing film or video footage by capturing the face and/or head of that person at multiple angles and re-creating a facial or head image into a life-like computer generated image and taking that image to create one facial or head video with real-life emotions, movements, singing, and/or talking, and the ability to make said image younger or older in accordance with a feature of the embodiments;
- FIG. 10 illustrates the process for creating a computer generated body of a person from a body scan of a person and taking that image and adding movement to said image by means of a computer automated movement program, and/or an actual human wearing body sensors to manipulate the computer generated body by means of software, and/or a camera aimed at an actual human with software that can identify and detect particular body parts and movement of said body parts, and adding a face to said body image identified in FIG. 9 to create one UHD video with real-life emotions, body movements, dancing, singing, sitting, and/or talking, or other such required life-like re-creations of a person(s), in accordance with a feature of the embodiments;
- FIG. 11 illustrates the process for creating a computer generated body or bodies from FIG. 10 and adding them to a real-life stage or background, and/or a computer generated stage and/or background, and creating one or more UHD videos that can either be combined into one holographic performance sent to holographic devices and/or various holographic venues and/or stage designs, or create two or more UHD videos that can be synchronized together and simultaneously sent to holographic devices and/or various holographic venues and/or stage designs in accordance with a feature of the embodiments; and
- FIG. 12 illustrates just one of many processes for compressing and encrypting said video from FIGS. 1-11 in accordance with a feature of the embodiments.
- the term "one or more" as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures, or characteristics in a plural sense.
- terms such as "a," "an," or "the," again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
- the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
- the term “step” can be utilized interchangeably with “instruction” or "operation.”
- FIG. 1, labeled as "Prior Art," illustrates a diagram of a standard filming process 1 with a camera 2 in a normal upright position, showing the subject matter, in this case a person being filmed, and the camera field of view as a horizontal rectangular image being captured at 1080 pixels high and 1920 pixels wide.
- In this standard field of view recording ratio, the width is roughly one and a half times larger (wider) than the height.
- For the field of view to capture the subject matter, in this case a person standing with no other subject matter in the image, it has a minimum height required to fit just the subject matter into the video; for a standing person approximately six feet tall, the width of the camera field of view will therefore be nine feet three inches, which means the subject matter being filmed occupies only thirty-six percent of the camera's field of view.
- The amount of excess data being wasted during a recording is sixty-four percent; we will call these "wasted recording pixels."
- These "wasted recording pixels" use up sixty-four percent of the video resolution when there is no other subject matter in the camera's field of view, regardless of the size of the subject matter being filmed.
- HD is equivalent to approximately four hundred thirty-four gigabytes, of which two hundred seventy-seven gigabytes are "wasted recording pixels."
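The arithmetic above can be checked with a short sketch. The field-of-view widths (9.25 ft upright, 4 ft rotated, for a six-foot subject) follow the description above, while the 2.5 ft subject width is an assumed value for illustration; the exact percentages depend on the subject's actual width.

```python
# Illustrative check of the "wasted recording pixels" argument above.
# FOV widths follow the patent's figures for a six-foot subject; the
# 2.5 ft subject width is an assumption for this example only.
def subject_coverage(subject_width_ft, fov_width_ft):
    """Fraction of the frame occupied by a subject that fills the frame height."""
    return subject_width_ft / fov_width_ft

upright = subject_coverage(2.5, 9.25)  # camera upright: 9'3" wide field of view
rotated = subject_coverage(2.5, 4.0)   # camera rotated 90 degrees: 4' wide field

# Rotating the camera more than doubles the useful fraction of each frame.
assert rotated > 2 * upright
```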
- FIG. 2 illustrates a diagram of an unconventional filming process 3 with a camera 4 rotated to a ninety-degree angle, showing the subject matter, in this case a person being filmed, and the camera field of view as a vertical rectangular image being captured at 1920 pixels high and 1080 pixels wide.
- This unconventional filming process 3 field of view, capturing the subject matter, in this case a person standing with no other subject matter in the image, can have a minimum height required to fit just the subject matter into the video; for a standing person approximately six feet tall, the width of the camera field of view can be four feet, which means that the subject matter being filmed is about sixty-four percent of the camera's field of view.
- FIG. 3 illustrates a diagram showing multiple unconventional filming processes 3 with six cameras 4A-4F, each rotated to a ninety-degree angle, and showing six different subject matter elements being filmed, in this case four people and two pieces of equipment individually filmed, with the camera field of view as a vertical rectangular image being captured, creating six different video elements. Also illustrated is a standard filming process 1 with a camera 2 in a normal upright position that can capture a background 5 as another element, and then combining all elements into one video 50.
- HQ: High Quality
- HD: High Definition
- UHD: Ultra High Definition
- the unconventional filming process 3 and the standard filming process 1 can now be combined to create multiple individual videos using the same or multiple video resolutions for each element and to create a video that can be configured to maximize the holographic image experience being viewed by an audience at a holographic venue.
- In the case illustrated in FIG. 3, there are seven individual videos, which can be combined into a presentation from the two, three, four, five, or six separate videos. Having two, three, four, five, or six separate videos can allow a holographic venue with two or more transparent screens, and two, three, four, five, or six separate projector/light sources, the ability to project one or multiple images onto one transparent screen and one or multiple images onto a second, third, or more transparent screens.
- The six individual video elements, which can be recorded at one or various video resolutions using the unconventional filming process 3, can be combined with one video of the background 5 from the standard filming process 1, which can be recorded at various video resolutions; these videos can then be synchronized together and simultaneously sent through a data network connection to holographic venues with two or more holographic projection type transparent screens and two or more projector/light sources for rendering.
- A producer can also create a singular holographic video with multiple video elements using the unconventional filming process 3 and the standard filming process 1, recorded individually at one or various video resolutions, which can be combined to create a singular holographic video sent through one or more data network connections to holographic venues with only one holographic projection type transparent screen and one projector/light source.
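One way the synchronization of separately recorded elements could work is to align every element to a master clock before simultaneous transmission, as in this hypothetical sketch; the stream names, projector assignments, and latencies are invented for illustration:

```python
# Hypothetical sketch of synchronizing several video elements to one
# master clock before simultaneous transmission to a multi-projector
# venue. Stream names and latency figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    projector: int   # which projector/light source renders this element
    offset_ms: int   # measured latency of this element's delivery path

def synchronized_start(streams, master_start_ms):
    """Return per-stream start times so all frames reach the screens in step."""
    worst = max(s.offset_ms for s in streams)
    # Delay the faster paths so every stream lands on its screen together.
    return {s.name: master_start_ms + (worst - s.offset_ms) for s in streams}

starts = synchronized_start(
    [Stream("performers", 1, 40), Stream("background", 2, 10)],
    master_start_ms=1_000,
)
```

The slowest path sets the pace: every other stream is held back by the difference in latency, so layered images stay in lockstep across screens.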
- FIG. 4 illustrates a diagram of what appears as the standard filming process 1 with a camera 2A or multiple cameras 2A-2C positioned at different heights during the recording process, in conjunction with the different heights that members of a viewing audience 7A-7C may be sitting at to watch the holographic image. In reference to FIG. 4 and the standard filming process 1 with a camera 2, we can also use the same technology found in FIG. 2, the unconventional filming process 3 with a camera 4 rotated to a ninety-degree angle.
- FIG. 4 illustrates that, using multiple cameras 2 or 4, producers can now film the holographic video at multiple heights at the same time. Using this process, producers can now provide holographic videos matched to the many different types of holographic venues.
- holographic venues can be defined again as an example as height one, two, or three.
- holographic venues receiving said holographic videos can identify their holographic venue as, for example, the venue with a "two" in height, and when this holographic venue receives the holographic video image, the only holographic image it can receive will be that of height "two."
- This same process can be used for holographic venues at height "one” and "three.”
- Now one live event can be filmed with three different cameras at the same time, and receiving holographic venues can choose the best holographic video based on their venue requirements.
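The venue-side selection step might look like the following sketch, where each venue declares its height class and receives only the matching feed; the feed labels and the mapping are hypothetical.

```python
# Illustrative sketch of height-based feed selection: the event is filmed
# at three camera heights at once, and each receiving venue picks the feed
# matching its declared height. Feed labels are invented for this example.
FEEDS = {1: "camera-2A-low", 2: "camera-2B-mid", 3: "camera-2C-high"}

def feed_for_venue(venue_height):
    """Return the single camera feed a venue of this height class should render."""
    if venue_height not in FEEDS:
        raise ValueError(f"no feed recorded for height {venue_height}")
    return FEEDS[venue_height]
```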
- FIG. 5 illustrates a diagram of a standard filming process 1 with a camera 2 in a normal upright position and showing the live band and stage 8 being captured at one time.
- This holographic video filming process can be filmed in various holographic video resolutions such as High Quality (HQ), HD, or UHD.
- This one camera technology can be filmed at different heights, referring to the detailed description of FIG. 4.
- In order to maximize the holographic experience of the viewer at a holographic venue, we can record the holographic video at one or more heights to provide the best holographic experience for holographic venues watched at viewing angles similar to those at which the holographic video images were recorded.
- FIG. 6 illustrates a diagram of what has now been taught to be a multiple unconventional filming process with multiple cameras 4A-4F rotated to a ninety-degree angle and simultaneously capturing each individual element of performers 10, 11, 13, and 15, and some equipment 12 and 14, of a live event taking place on a stage 9, which can use a Chroma Keying and/or black screen technology stage 9.
- FIG. 7 illustrates a diagram of a multiple unconventional filming process that is used to capture each individual performer 10, 11, 13, and 15, and equipment 12 and 14, using Chroma Keying and/or black screen technology, and combining said elements 10, 11, 13, and 15, and equipment 12 and 14, into one holographic video 18.
- One or more of the individual elements, in this case individual performers 10, 11, 13, and 15, and equipment 12 and 14, can be captured at one or more various holographic video resolutions such as High Quality (HQ), HD, or UHD.
- FIG. 8 illustrates a diagram of both a live stage background 5 and the performers and equipment 18, capturing each individual performer and equipment 18 using Chroma Keying and/or black screen technology, combining either a live stage background 5 and/or a computer generated background 8 with the performers and equipment 16 into one or more holographic videos 50 and 60, and combining these holographic videos using the process identified in FIGS. 2 to 6, with variations of these elements at various holographic video resolutions such as High Quality (HQ), HD, or UHD, and then combining one or more of these elements using the process identified in FIGS. 2 to 6.
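Chroma Keying itself, referenced throughout, replaces pixels close to a key colour with the corresponding background pixel. The toy per-pixel version below, using NumPy, shows only the keying rule; the tolerance, colours, and tiny 2x2 "frames" are illustrative, and real production tooling operates on full video frames.

```python
# A toy chroma-key composite of a performer element over a background
# element, per-pixel in RGB. The key colour and tolerance are assumptions.
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), tol=60):
    """Replace pixels near the key colour with the background pixel."""
    fg = foreground.astype(np.int16)
    dist = np.abs(fg - np.array(key, dtype=np.int16)).sum(axis=-1)
    mask = dist < tol                     # True where the green screen shows
    out = foreground.copy()
    out[mask] = background[mask]
    return out

green = np.zeros((2, 2, 3), dtype=np.uint8)
green[...] = (0, 255, 0)                  # a 2x2 all-green "frame"
green[0, 0] = (180, 30, 30)               # one "performer" pixel
stage = np.full((2, 2, 3), 200, dtype=np.uint8)
composited = chroma_key(green, stage)
```

Green pixels are swapped for the stage background while the performer pixel survives, which is the element-combination step the figures describe.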
- FIG. 9 illustrates another of the various systems and processes possible in accordance with the embodiments for creating a computer generated person from existing film 25 or video footage 26 by capturing the face 27 and/or head 28 of one or more persons and/or multiple faces at multiple angles 29, creating a facial or head mapping image 30 into a computer generated face image 31 of the face image 27 captured, and identifying particular facial components 32 to create one or more facial or head videos with real-life emotions 33, movements 33, singing 34, and/or talking 35, and the ability to manipulate the image to make the person depicted look younger or older 36.
- These same facial holographic videos can be combined with a computer generated body to create life-like animated holographic videos at various holographic video resolutions such as High Quality (HQ), HD, or UHD.
- FIG. 10 illustrates the process for creating a computer generated body of a person from a body scanner 37, creating a computer generated body image 38, taking that image 38 and adding movement to said image 38 by means of a computer automated movement program 39, and creating a computer generated body with movement 40, or an actual human wearing body sensors on the hand 41 and the body 42 to manipulate the computer generated body 38 by means of software to create a computer generated body 40 with movement.
- Aiming camera 2 and/or 3 at an actual human 44, with software that can identify and detect particular body parts and movement of said body parts 44, a face 27 can be added to said body image 40 identified in FIG. 9.
- FIG. 11 illustrates a process for creating a computer generated body 40 or bodies 48 from FIG. 10 and adding them to a real-life and/or computer generated stage and/or background, creating High Quality (HQ), HD, or UHD resolution videos that can be combined into one or more holographic performances 50 and/or 60 and sent through one or more data network connections to holographic devices and/or various holographic venues and/or stage designs.
- Two or more created High Quality (HQ), HD, or UHD resolution videos can be synchronized together and simultaneously sent through a data network connection to holographic venues with two or more holographic projection type transparent screens and two or more projector/light sources.
- FIG. 12 illustrates just one of many processes that can be used for compressing/decompressing High Quality (HQ), HD, or UHD resolution videos, and/or the encrypting of said video from FIGS. 1-11.
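A minimal compress-then-scramble pipeline in the spirit of FIG. 12 can be sketched with Python's zlib. The XOR keystream below is only a stand-in for a real cipher, not the patent's method; production use would substitute authenticated encryption (e.g., AES-GCM from a library such as `cryptography`) at that step.

```python
# One of many possible compress-then-encrypt pipelines for the FIG. 12
# step. zlib handles compression; the SHA-256 XOR keystream is a toy
# placeholder where real authenticated encryption would sit.
import hashlib
import zlib

def _keystream(key, n):
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def pack(video_bytes, key):
    """Compress the video payload, then scramble it with the keystream."""
    compressed = zlib.compress(video_bytes, level=9)
    stream = _keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, stream))

def unpack(blob, key):
    """Reverse the scrambling, then decompress back to the original bytes."""
    stream = _keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, stream)))

frame = b"UHD frame data " * 100
blob = pack(frame, b"secret-key")
assert unpack(blob, b"secret-key") == frame
```

Compressing before encrypting matters: ciphertext is incompressible, so reversing the order would forfeit the bandwidth savings the network transmission depends on.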
- a holographic image stage can provide unlimited scenes, props, and/or backgrounds with or without a celebrity performer. This will give the local celebrity, performer, or singer the feel and sensation of being on a well-designed stage like the one the show "The Voice" uses for its contestants. Also, being able to inject the local celebrity, performer, or singer into the same holographic image/stage as the celebrity and/or stage design holographic image will now create a holographic video of both the local performer and the stage combined, as if they were standing right there on that stage.
- Producers can produce a real-time professional stage experience, with or without the celebrity, viewed by a local audience.
- The present system can also enable connecting and broadcasting any live or recorded celebrity or performer event from the local location to anywhere around the world; a local or remote viewer using a smartphone, tablet, PC, or TV monitor can now watch the holographic performance and/or event.
- This process can allow for local and worldwide viewing of local and worldwide events/contests, where anyone can save the video, send it to a friend, and/or vote for their favorite celebrity, performer, or singer who may be in a contest.
- Capturing two or more UHD video elements of the same event featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images separately, then combining two or more of these elements and sending them through a data network connection, where a remote person may now attend a live or recorded event or performance using a holographic device and/or venue to project said performance, creating a viewable UHD life-like holographic image for a remote audience, has not been attempted until the system and processes the present inventors have described herein.
- program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular data types and instructions.
- module may refer to a collection of routines and data structures that performs a particular task or implements a particular data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module.
- the term module may also simply refer to an application, such as a computer program designed to assist in the performance of a specific task, such as word processing, accounting, inventory management, etc.
- FIGS. 1-12 are thus intended as examples and not as architectural limitations of disclosed embodiments. Additionally, such embodiments are not limited to any particular application or computing or data processing environment. Instead, those skilled in the art will appreciate that the disclosed approach may be advantageously applied to a variety of systems and application software. Moreover, the disclosed embodiments can be embodied on a variety of different computing platforms, including Macintosh, UNIX, LINUX, and the like.
- a high-level programming language is a programming language with strong abstraction, e.g., multiple levels of abstraction, from the details of the sequential organizations, states, inputs, outputs, etc., of the machines that a high-level programming language actually specifies. In order to facilitate human comprehension, in many instances, high-level programming languages resemble or even share symbols with natural languages.
- the hardware used in the computational machines typically consists of some type of ordered matter (e.g., traditional electronic devices (e.g., transistors), deoxyribonucleic acid (DNA), quantum devices, mechanical switches, optics, fluidics, pneumatics, optical devices (e.g., optical interference devices), molecules, etc.) that are arranged to form logic gates.
- Logic gates are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to change physical state in order to create a physical reality of Boolean logic.
- Logic gates may be arranged to form logic circuits, which are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to create a physical reality of certain logical functions.
- Types of logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), computer memory devices, etc., each type of which may be combined to form yet other types of physical devices, such as a central processing unit (CPU)— the best known of which is the microprocessor.
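The composition described above — simple gates combining into circuits such as ALUs — can be illustrated with a short sketch. This is not part of the disclosure; it simply demonstrates, in ordinary code, how a half adder (a tiny arithmetic building block) can be built from a single gate type, NAND:

```python
# Illustrative only: building a half adder entirely from NAND gates,
# showing how logic gates compose into logic circuits.
def nand(a: int, b: int) -> int:
    """A NAND gate over 1-bit inputs."""
    return 0 if (a and b) else 1

def half_adder(a: int, b: int):
    """Sum and carry bits for two 1-bit inputs, using only NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND built from two NANDs
    return s, c

# half_adder(1, 1) yields (0, 1): sum bit 0, carry bit 1.
```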
- a modern microprocessor will often contain more than one hundred million logic gates in its many logic circuits (and often more than a billion transistors).
- the logic circuits forming the microprocessor are arranged to provide a micro architecture that will carry out the instructions defined by that microprocessor's defined Instruction Set Architecture.
- the Instruction Set Architecture is the part of the microprocessor architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external Input/Output.
- the Instruction Set Architecture includes a specification of the machine language that can be used by programmers to use/control the microprocessor. Since the machine language instructions are such that they may be executed directly by the microprocessor, typically they consist of strings of binary digits, or bits. For example, a typical machine language instruction might be many bits long (e.g., 32, 64, or 128 bit strings are currently common). A typical machine language instruction might take the form "11110000101011110000111100111111" (a 32 bit instruction).
- the binary number "1" (e.g., logical "1") in a machine language instruction specifies around +5 volts applied to a specific "wire" (e.g., metallic traces on a printed circuit board) and the binary number "0" (e.g., logical "0") in a machine language instruction specifies around -5 volts applied to a specific "wire."
- machine language instructions also select out and activate specific groupings of logic gates from the millions of logic gates of the more general machine.
- Machine language is typically incomprehensible to most humans (e.g., the above example was just ONE instruction, and some personal computers execute more than two billion instructions every second).
- a compiler is a device that takes a statement that is more comprehensible to a human than either machine or assembly language, such as "add 2+2 and output the result," and translates that human understandable statement into a complicated, tedious, and immense machine language code (e.g., millions of 32, 64, or 128 bit length strings). Compilers thus translate high-level programming language into machine language. This compiled machine language, as described above, is then used as the technical description.
- any such operational/functional technical descriptions may be understood as operations made into physical reality by (a) one or more interconnected physical machines, (b) interconnected logic gates configured to create one or more physical machine(s) representative of sequential/combinatorial logic(s), (c) interconnected ordered components making up logic gates (e.g., interconnected electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.) that create physical reality representative of logic(s), or (d) virtually any combination of the foregoing.
- any physical object which has a stable, measurable, and changeable state may be used to construct a machine based on the above technical description. Charles Babbage, for example, constructed the first computer out of wood, powered by cranking a handle.
- the logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations.
- the logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one skilled in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.
- At least a portion of the devices or processes described herein can be integrated into an information processing system.
- An information processing system generally includes one or more of a system unit housing, a video display device, memory, such as volatile or non-volatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), or control systems including feedback loops and control motors (e.g., feedback for detecting position or velocity, control motors for moving or adjusting components or quantities).
- If an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation that is implemented in one or more machines or articles of manufacture; or, yet again alternatively, the implementer may opt for some combination of hardware, software, firmware, etc., in one or more machines or articles of manufacture.
- any two components so associated can also be viewed as being "operably connected," "interconnected," or "operably coupled" to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably coupleable" to each other to achieve the desired functionality.
- examples of operably coupleable include, but are not limited to, physically mateable, physically interacting components, wirelessly interactable, wirelessly interacting components, logically interacting, logically interactable components, etc.
- one or more components may be referred to herein as “configured to,” “configurable to,” “operable/operative to,” “adapted/adaptable,” “able to,” “conformable/conformed to,” etc.
- Such terms can generally encompass active-state components, or inactive-state components, or standby-state components, unless context requires otherwise.
- Non-limiting examples of a signal-bearing medium include the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).
- e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Abstract
Systems and methods for capturing two or more video elements of the same event featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images separately, combining two or more of these elements into a data file, and sending them through a data network connection, where a remote client and an attending audience can view a performance that can include three dimensional holographic performances on a display that can include more than one screen. Live or recorded events or performances can be used to generate a data file, and rendering can include the use of holographic devices and screens at a remote venue of a performance, wherein viewable life-like holographic images of a performance can be rendered and presented to remote audiences.
Description
UHD HOLOGRAPHIC FILMING AND COMPUTER GENERATED VIDEO PROCESS
TECHNICAL FIELD
[0001] The present invention is related to increasing holographic video resolution from HD to Ultra High Definition (UHD). The present embodiments are related to live or pre-recorded events and a UHD filming process for various types of venues, stages, individuals, performers, athletes, and/or UHD computer generated stages, backgrounds, and/or celebrity look-alike video images, and to creating UHD life-like holographic video. More particularly, the embodiments are related to capturing separate UHD video elements of individual real-life venues or Chroma Keying and/or black screen technology venues, adding separate elements filmed or computer generated separately from each other, and then combining two or more elements into one UHD video and sending them through a data network connection to remote holographic imaging devices and/or holographic venues.
BACKGROUND OF THE INVENTION
[0002] Live festivals and events are attended worldwide, featuring everyone from the well-known celebrity to the locally recognized entertainer or professional. These performances are limited to individuals who are actually attending the event, or to watching the live or pre-recorded performance remotely on a 2D device such as a TV/monitor, a computer, a tablet, and/or a smartphone, often in only HD 1080 resolution.
[0003] In addition, computer generated 3D imagery is being used to create backgrounds, celebrities, gaming programs, movies, and cartoons in, for example, HD 1080 resolution, watched through 3D systems such as virtual reality glasses, movie theaters, and 3D TV/monitors. These remote venues are very limited, however, because the participating users for the most part must be using or wearing some sort of 3D-enhancing glasses/headgear device to be able to view the 3D image.
SUMMARY OF THE EMBODIMENTS
[0004] The following summary is provided to facilitate a better understanding of the inventive features unique to the disclosed embodiments and is not intended to be a full description. A full appreciation of the various aspects of the disclosed embodiments can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
[0005] What is needed is a way to improve the quality of the holographic experience by revolutionizing the holographic video capturing and creating process, bringing higher video resolution than currently available HD technology, increasing holographic video to at least 2K, 4K, 8K, or more in UHD (Ultra High Definition), and sending this UHD video through a data network connection to remote holographic devices and/or holographic venues in order to provide viewing audiences with a more realistic viewing experience of live or pre-recorded events. This new process can also create a more life-like/realistic illusion of an event by combining two or more UHD elements together, such as venues, live or pre-recorded individuals, performers, athletes, venue equipment, venue back drops/backgrounds, venue props, and/or computer generated individuals, live or legacy celebrities/artists, athletes, back drops/backgrounds, equipment, props, and text, and sending the UHD video through a data network connection where the UHD video is viewed on a local or remote holographic device and/or at remote venues anywhere around the world. The process can also send multiple UHD holographic videos that can be synchronized together and simultaneously sent through one or more data network connections to a holographic device and/or a venue with two or more holographic projection type transparent screens, which receive the synchronized images from two or more projectors positioned to create multiple layers of 3D holographic optical illusions, optionally viewable anywhere around the world.
[0006] With the present embodiments, live or recorded audio and real-life holographic imagery can be combined with computer generated holographic imagery of a person or persons, such as live or deceased individuals, celebrities, and athletes; multiple persons can also be combined to re-create a person(s), animating the emotion, movements, and voice of the person(s) to create UHD holographic video images. The computer
generated person(s) can then be combined with real-life holographic environments such as public or private venues, venue equipment, venue backgrounds, back drops, and venue props to provide a more realistic "life-like" UHD holographic video image of the person(s) behaving like a real "live" person, with all the same facial emotions, movements, singing, and talking, such as but not limited to performing on stage, entertaining at public venues, at sporting events, at home, or in an office.
[0007] The embodiments can also enable the re-creation of various past performances or events and/or the creation of present events by using the computer generated UHD person(s) video combined with the real-life UHD holographic environment video to create one or more UHD holographic videos, wherein either a singular UHD holographic video and/or multiple UHD holographic videos can be sent through one or more data network connections to a remote holographic device or venue with at least one holographic projection type transparent screen to receive the image from a projector and to create a 3D holographic optical illusion on the screen anywhere around the world. Multiple UHD holographic videos can be synchronized together and simultaneously sent through one or more data network connections to holographic devices or venues with two or more holographic projection type transparent screens to receive the synchronized images from two or more projectors positioned to create multiple layers of 3D holographic optical illusions for spectators anywhere around the world. This process can also allow for local and worldwide viewing of local and worldwide UHD holographic events being held, past or present, where anyone can also watch the UHD holographic event in 2D video, save a 2D video, and/or send 2D video to a friend.
[0008] The present embodiments are also related to the holographic video capturing and creating process, bringing higher video resolution than current HD technology and increasing holographic video to at least 2K, 4K, 8K, or more UHD (Ultra High Definition) by using one or more cameras with one or multiple camera perspectives of venues, live or pre-recorded individuals, performers, athletes, venue equipment, venue back drops/backgrounds, and venue props, and/or adding computer generated images of a person or persons and real-life holographic environments such as public or private venues, venue equipment, venue backgrounds, back drops, and venue props to provide a more realistic "life-like" UHD holographic video image of the person(s) behaving like a real person with all the same facial emotions, movements, singing, and talking; then creating multiple UHD videos and either combining them into one holographic performance sent to holographic devices and/or various holographic venues and/or stage designs, or creating two or more UHD videos that can be synchronized together and simultaneously sent to holographic devices and/or various holographic venues and/or stage designs.
[0009] A method can include providing more than one camera disposed around a performance of more than one individual, capturing various perspectives of video representing each individual's activity at the performance, and access to a production server for receiving the various perspectives captured by the more than one camera and for processing the various perspectives for rendering on at least one screen by remote clients, wherein the processing includes the synchronization of the various perspectives into a data file. At least two videos of varying resolution can be captured at the performance featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images, combined into a data file in a production server, sent, including a combination of the at least two video elements at varying resolution, through a data network to a remote client, and rendered on a display.
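As an illustration of the synchronization step described above, a minimal sketch follows. All type names, fields, and the timecode convention are hypothetical, since the specification does not define a concrete file format:

```python
# Hypothetical sketch of grouping separately captured video elements
# (possibly at mixed resolutions) by a shared timecode into one "data
# file" structure for delivery to remote clients. Names and fields are
# illustrative assumptions, not from the specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoElement:
    name: str
    width: int            # pixels
    height: int           # pixels
    start_timecode: str   # e.g., "00:00:00:00", one clock shared by all cameras

@dataclass
class PerformanceFile:
    elements: List[VideoElement] = field(default_factory=list)

    def add(self, elem: VideoElement) -> None:
        self.elements.append(elem)

    def is_synchronized(self) -> bool:
        # "Synchronized" here simply means every element shares one timecode.
        return len({e.start_timecode for e in self.elements}) <= 1

show = PerformanceFile()
show.add(VideoElement("performer_cam_1", 1080, 1920, "00:00:00:00"))
show.add(VideoElement("performer_cam_2", 1080, 1920, "00:00:00:00"))
show.add(VideoElement("background_cam", 3840, 2160, "00:00:00:00"))
```

A production server along these lines could then serialize the grouped elements as one file for the remote client, with per-element resolutions chosen independently.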
BRIEF DESCRIPTIONS OF DRAWINGS
[0010] The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description of the invention, serve to explain the principles of the present invention.
[0011] FIG. 1 illustrates a diagram of a standard filming process with a camera in a normal upright position and showing the camera field of view as a rectangular horizontal image being captured in accordance with a feature of the embodiments;
[0012] FIG. 2 illustrates a diagram of an unconventional filming process with a camera rotated to a ninety-degree angle and showing the camera field of view as a rectangular vertical image being captured in accordance with a feature of the embodiments;
[0013] FIG. 3 illustrates a diagram showing multiple unconventional filming processes with cameras rotated to a ninety-degree angle, showing the camera field of view as a rectangular vertical image being captured, and a standard filming process with a camera in a normal upright position capturing a background, combining all elements into one UHD video in accordance with a feature of the embodiments;
[0014] FIG. 4 illustrates a diagram of a standard filming process with a camera or multiple cameras at different heights during the recording process, in conjunction with the different heights at which the viewing audience may be sitting to watch the holographic image, in accordance with a feature of the embodiments;
[0015] FIG. 5 illustrates a diagram of a standard filming process with a camera in a normal upright position and showing the live band and stage being captured at one time in accordance with a feature of the embodiments;
[0016] FIG. 6 illustrates a diagram of a multiple unconventional filming process with multiple cameras rotated to a ninety-degree angle and capturing each individual performer and some equipment at the same time, filmed separately using a Chroma Keying and/or black screen technology stage, in accordance with a feature of the embodiments;
[0017] FIG. 7 illustrates a diagram of a multiple unconventional filming process that captured each individual performer and equipment using Chroma Keying and/or black screen technology, a choice of either a live stage or a computer generated background, and combining all elements into one UHD video in accordance with a feature of the embodiments;
[0018] FIG. 8 illustrates a diagram of both a live stage background and the performers, capturing each individual performer and equipment using Chroma Keying and/or black
screen technology, and combining a live stage background or a computer generated background with the performers into one video and sending this UHD video through a network connection to many different types of venues and/or devices in accordance with a feature of the embodiments;
[0019] FIG. 9 illustrates the process for creating a computer generated person from existing film or video footage by capturing the face and/or head of that person at multiple angles and re-creating a facial or head image into a life-like computer generated image, and taking that image to create one facial or head video with real-life emotions, movements, singing, and/or talking, and the ability to make said image younger or older, in accordance with a feature of the embodiments;
[0020] FIG. 10 illustrates the process for creating a computer generated body of a person from a body scan of a person, and taking that image and adding movement to said image by means of a computer automated movement program, and/or an actual human wearing body sensors to manipulate the computer generated body by means of software, and/or a camera aimed at an actual human with software that can identify and detect particular body parts and movement of said body parts, and adding a face to said body image identified in FIG. 9 to create one UHD video with real-life emotions, body movements, dancing, singing, sitting, and/or talking, or other such required life-like re-creations of a person(s), in accordance with a feature of the embodiments;
[0021] FIG. 11 illustrates the process for creating a computer generated body or bodies from FIG. 10 and adding them to a real-life stage or background and/or a computer generated stage and/or background, and creating one or more UHD videos that can either be combined into one holographic performance sent to holographic devices and/or various holographic venues and/or stage designs, or created as two or more UHD videos synchronized together and simultaneously sent to holographic devices and/or various holographic venues and/or stage designs, in accordance with a feature of the embodiments; and
[0022] FIG. 12 illustrates just one of many processes for compressing and encrypting said video from FIGS. 1-11 in accordance with a feature of the embodiments.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0023] The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate one or more embodiments and are not intended to limit the scope thereof.
[0024] Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware, or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be interpreted in a limiting sense.
[0025] Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, phrases such as "in one embodiment" or "in an example embodiment" and variations thereof as utilized herein do not necessarily refer to the same embodiment, and the phrase "in another embodiment" or "in another example embodiment" and variations thereof as utilized herein may or may not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part.
[0026] In general, terminology may be understood, at least in part, from usage in context. For example, terms such as "and," "or," or "and/or" as used herein may include a variety of meanings that may depend, at least in part, upon the context in which such terms are used. Typically, "or" if used to associate a list, such as A, B, or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B, or C, here used in the exclusive sense. In addition, the term "one or more" as used herein, depending at least in part upon context,
may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures, or characteristics in a plural sense. Similarly, terms such as "a," "an," or "the," again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term "based on" may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context. Additionally, the term "step" can be utilized interchangeably with "instruction" or "operation."
[0027] FIG. 1, labeled as "Prior art," illustrates a diagram of a standard filming process 1 with a camera 2 in a normal upright position, showing the subject matter, in this case a person being filmed, and the camera field of view as a rectangular horizontal image being captured at 1080 pixels high and 1920 pixels wide. In this standard field of view recording ratio, the width is about one and a half times larger (wider) than the height. To fit just the subject matter into the video, in this case a standing person approximately six feet tall with no other subject matter in the image, the field of view has a minimum required height of about six feet, and the width of the camera field of view will therefore be about nine feet three inches. This means that the subject matter being filmed occupies only about thirty-six percent of the camera's field of view. The amount of excess data wasted during a recording is about sixty-four percent; we will call this "wasted recording pixels." These "wasted recording pixels" use up sixty-four percent of video resolution when there is no other subject matter in the camera's field of view, regardless of the size of the subject matter being filmed. A standard 1 hour 1080 video using standard filming process 1, with a camera 2 in a normal upright position filming a person at either 24 fps (cinema) or 25 fps (TV/PAL), at approximately 121.5 MB/sec for 8-bit uncompressed 1080 29.97i HD, is equivalent to approximately four hundred thirty four gigabytes, of which approximately two hundred seventy seven gigabytes are "wasted recording pixels."
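As a rough cross-check (not part of the original disclosure), the arithmetic in the paragraph above can be reproduced directly; the data rate and percentages are the document's own rounded figures, and the small differences from the stated gigabyte totals come from that rounding:

```python
# Reproducing the landscape-orientation "wasted recording pixels"
# arithmetic using the figures stated in the text (rounded values).
BYTES_PER_SEC = 121.5e6   # ~121.5 MB/sec, 8-bit uncompressed 1080 29.97i HD
SECONDS = 3600            # a 1 hour recording

subject_fraction = 0.36                   # subject covers ~36% of the frame
wasted_fraction = 1.0 - subject_fraction  # ~64% "wasted recording pixels"

total_gb = BYTES_PER_SEC * SECONDS / 1e9  # ~437 GB (text rounds to ~434 GB)
wasted_gb = total_gb * wasted_fraction    # ~280 GB (text rounds to ~277 GB)
```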
[0028] FIG. 2 illustrates a diagram of an unconventional filming process 3 with a camera 4 rotated to a ninety-degree angle, showing the subject matter, in this case a person being filmed, and the camera field of view as a rectangular vertical image being captured at 1920 pixels high and 1080 pixels wide. This unconventional filming process 3 field of view, to capture
the subject matter, in this case a standing person with no other subject matter in the image, can have a minimum height that is required to fit just the subject matter into the video; in this case, for a standing person approximately six feet tall, the width of the camera field of view can be four feet, which means that the subject matter being filmed occupies about sixty-four percent of the camera's field of view. The amount of excess data wasted during a recording can now be only about thirty-six percent; this can be called "minimized wasted recording pixels." This new unconventional filming process 3 can thus cut the amount of "wasted recording pixels" roughly in half. These "minimized wasted recording pixels" use up approximately thirty-six percent of video resolution regardless of the size of the subject matter being filmed. To achieve the same 1080 video resolution of the same subject matter as in FIG. 1, in this case a person, the unconventional filming process 3 with a camera 4 rotated to a ninety-degree angle filming a person in a 1 hour 1080 video at either 24 fps (cinema) or 25 fps (TV/PAL), at approximately 121.5 MB/sec for 8-bit uncompressed 1080 29.97i HD, is now equivalent to approximately one hundred fifty seven gigabytes, of which only about eighty three gigabytes are "wasted recording pixels." As can be appreciated, this process can allow for faster and more efficient compression of videos, faster and more efficient streaming speeds, and/or can allow for Ultra High Definition (UHD) resolutions such as 2K, 4K, 8K, or even higher for the subject matter (in this illustrated and exemplary case, a person). This process can enable holographic images to be recorded at much higher resolutions to enhance the overall viewing experience of holographic videos.
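The ~36% landscape and ~64% portrait coverage figures above are mutually consistent for a subject whose bounding box fills the frame height and is roughly 690 pixels wide; that width is a hypothetical value chosen here only to reproduce the document's stated percentages:

```python
# Fraction of the frame occupied by a full-height subject in landscape
# (1920x1080) vs portrait (1080x1920) framing. The 690-pixel subject
# width is a hypothetical value that reproduces the text's ~36%/~64%.
def subject_fraction(frame_w: int, frame_h: int, subj_w: int) -> float:
    """Share of frame pixels inside the subject's bounding box,
    assuming the subject fills the full frame height."""
    return (subj_w * frame_h) / (frame_w * frame_h)

SUBJ_W = 690  # hypothetical pixel width of a standing person

landscape = subject_fraction(1920, 1080, SUBJ_W)  # ~0.36 (so ~64% wasted)
portrait = subject_fraction(1080, 1920, SUBJ_W)   # ~0.64 (so ~36% wasted)
```

Rotating the camera changes only which frame dimension the subject's width is measured against, which is the entire source of the claimed savings.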
[0029] FIG. 3 illustrates a diagram showing multiple unconventional filming processes 3 with six cameras 4A-4F, each rotated to a ninety-degree angle, and showing six different subject matter elements being filmed, in this case four people and two pieces of equipment being individually filmed, with each camera field of view being a vertical rectangular image, creating six different video elements. Also illustrated is a standard filming process 1 with a camera 2 in a normal upright position that can capture a background 5 as another element, and then combining all elements into one video 50. In this example using the unconventional filming process 3, six individual elements are being filmed; a variety of video resolutions can be used such as High Quality (HQ), HD, or UHD resolutions for one or more of the individual subject matter elements, therefore enabling a producer to better control the
overall data file size of the video being recorded. In the case using the standard filming process 1 and capturing a background 5, a producer can also record the background 5 at various resolutions such as High Quality (HQ), HD, or UHD, again being able to control the overall data file size of the background 5 video being recorded. With this process the unconventional filming process 3 and the standard filming process 1 can now be combined to create multiple individual videos using the same or multiple video resolutions for each element and to create a video that can be configured to maximize the holographic image experience being viewed by an audience at a holographic venue. With this system and process the individual videos, in the case illustrated in FIG. 3 seven individual videos, can be combined into a presentation from two, three, four, five, or six separate videos. Having two, three, four, five, or six separate videos can allow a holographic venue with two or more transparent screens, and two, three, four, five, or six separate projector/light sources, the ability to project one or multiple images onto one transparent screen and project one or multiple images onto a second, third, or more transparent screens. This creates the perception of depth for the viewing audience. In the case of FIG. 3, as just one example, the six individual video elements that can be recorded at one or at various video resolutions using the unconventional filming process 3 are combined with one video of the standard filming process 1 of the background 5 that can be recorded at various video resolutions, and these videos can then be synchronized together and can then be simultaneously sent through a data network connection to holographic venues with two or more holographic projection type transparent screens and two or more projector/light sources for rendering. With this same filming process in FIG.
3, a producer can also create a singular holographic video with multiple video elements using the unconventional filming process 3 and the standard filming process 1 that can be recorded individually at one or various video resolutions, and can be combined to create a singular holographic video that can be sent through one or more data network connections to holographic venues with only one holographic projection type transparent screen and one projector/light source.
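The element-combination scheme just described can be sketched in code. The bitrate figures, element names, and screen assignments below are illustrative assumptions; the sketch only shows how per-element resolution choices let a producer budget the combined stream and route synchronized elements to two or more transparent screens:

```python
from dataclasses import dataclass

# Assumed, illustrative per-resolution stream bitrates in Mbps.
RES_BITRATE_MBPS = {"HQ": 8, "HD": 12, "UHD": 45}

@dataclass
class VideoElement:
    name: str
    resolution: str   # "HQ", "HD", or "UHD"
    screen: int       # which transparent screen / projector it targets

def total_bitrate(elements):
    """Aggregate bitrate a venue must receive for all synchronized elements."""
    return sum(RES_BITRATE_MBPS[e.resolution] for e in elements)

def by_screen(elements):
    """Group synchronized elements by the transparent screen they target."""
    groups = {}
    for e in elements:
        groups.setdefault(e.screen, []).append(e.name)
    return groups

elements = [
    VideoElement("performer-1", "UHD", 1),
    VideoElement("performer-2", "UHD", 1),
    VideoElement("equipment-1", "HQ", 1),
    VideoElement("background", "HD", 2),
]
print(total_bitrate(elements), "Mbps total")
print(by_screen(elements))
```

Dropping one performer from UHD to HQ, for example, immediately shows up as a smaller aggregate bitrate, which is the data-budget control the paragraph describes.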
[0030] FIG. 4 illustrates a diagram of what appears as the standard filming process 1 with a camera 2A or multiple cameras 2A-2C positioned at different heights during the recording process in conjunction with the different heights that members of a viewing
audience 7A-7C may be sitting at to watch the holographic image. In reference to FIG. 4 and the standard filming process 1 with a camera 2, we can also use the same technology found in FIG. 2, the unconventional filming process 3 with a camera 4 rotated to a ninety-degree angle. Here, in order to maximize the holographic experience of the viewer at a holographic venue, we can record the holographic video at one or more heights to provide the best holographic experience for holographic venues where it is watched at a viewing angle similar to that at which the holographic video images were recorded. In addition, there can be various types of holographic venues, such as one with a built-in holographic stage that is raised above the viewing audience, which we call height one, and/or at the same height as the viewing audience, which can be referred to as height two, and/or below the viewing audience, which can be referred to as height three. FIG. 4 illustrates that using multiple cameras 2 or 4, producers can now film the holographic video at multiple heights at the same time. In using this process producers can now provide holographic videos that correspond with the many different types of holographic venues. These various holographic venues can again be defined, as an example, as height one, two, or three. As an example, holographic venues receiving said holographic videos can identify their holographic venue as, let's say, the venue with a "two" in height, and when this holographic venue receives the holographic video image, the only holographic image it can receive will be that of height "two." This same process can be used for holographic venues at heights "one" and "three." Now a single live event can be filmed with three different cameras at the same time, and receiving holographic venues can choose the best holographic video based on their venue requirements.
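The height-matching step described above amounts to a simple lookup: the venue declares its stage height and receives only the feed recorded at the corresponding camera height. A minimal sketch, with hypothetical feed names:

```python
# Hypothetical mapping of venue stage height to the camera feed recorded
# at the matching viewing angle (heights "one", "two", "three" as above).
FEEDS = {1: "camera-high", 2: "camera-eye-level", 3: "camera-low"}

def select_feed(venue_height: int) -> str:
    """Return the holographic feed recorded at the viewing angle that
    matches the venue's declared stage height (1, 2, or 3)."""
    if venue_height not in FEEDS:
        raise ValueError("venue must declare height 1, 2, or 3")
    return FEEDS[venue_height]

print(select_feed(2))  # a "height two" venue gets the eye-level recording
```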
[0031] FIG. 5 illustrates a diagram of a standard filming process 1 with a camera 2 in a normal upright position and showing the live band and stage 8 being captured at one time. This holographic video filming process can be filmed in various holographic video resolutions such as High Quality (HQ), HD, or UHD resolutions. This one-camera technology can be filmed at different heights, referring to the detailed description of FIG. 4. In order to maximize the holographic experience of the viewer at a holographic venue, we can record the holographic video at one or more heights to provide the best holographic experience for holographic venues being watched at viewing angles similar to those at which the holographic video images were recorded.
[0032] FIG. 6 illustrates a diagram of what has now been taught to be a multiple unconventional filming process with multiple cameras 4A-4F being rotated to a ninety-degree angle and simultaneously capturing each individual element of performers 10, 11, 13, and 15, and some equipment 12 and 14, of a live event taking place on a stage 9, which stage 9 can use Chroma Keying and/or black screen technology.
[0033] FIG. 7 illustrates a diagram of a multiple unconventional filming process that is used to capture each individual performer 10, 11, 13, and 15, and equipment 12 and 14, using Chroma Keying and/or black screen technology, and combining said elements of performers 10, 11, 13, and 15, and equipment 12 and 14, which can use Chroma Keying and/or black screen technology, into one holographic video 16. Again referring to FIG. 3, one or more of the individual elements, in this case individual performers 10, 11, 13, and 15, and equipment 12 and 14, can be captured at one or more various holographic video resolutions such as High Quality (HQ), HD, or UHD resolutions. In addition, with our process, you have a choice of adding a holographic video of either a live stage 5 and/or a computer generated background 6 and combining all the said elements into one UHD video using a live stage 50 and a computer generated stage 60, with or without text, and/or combining multiple variations of these elements into various holographic video resolutions such as High Quality (HQ), HD, or UHD resolutions, and then combining one or more of these elements so they can be sent through one or more data network connections to holographic venues as identified in FIG. 3.
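The Chroma Keying step referenced here can be illustrated with a toy per-pixel key. Real production keyers work in other color spaces with soft mattes and spill suppression; this sketch only shows the basic idea of replacing key-colored pixels with the chosen background, using an assumed green key and tolerance:

```python
def chroma_key(pixel, key=(0, 255, 0), tol=60):
    """True if an RGB pixel is close enough to the key color to be
    treated as transparent (simple Euclidean-distance matte)."""
    r, g, b = pixel
    kr, kg, kb = key
    return (r - kr) ** 2 + (g - kg) ** 2 + (b - kb) ** 2 <= tol ** 2

def composite(fg_frame, bg_frame):
    """Overlay a foreground frame on a background, replacing keyed pixels."""
    return [
        [bg if chroma_key(fg) else fg for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(fg_frame, bg_frame)
    ]

fg = [[(0, 255, 0), (200, 30, 30)]]   # one green (keyed) pixel, one red pixel
bg = [[(10, 10, 10), (10, 10, 10)]]   # dark background frame
print(composite(fg, bg))
```

The green pixel is replaced by the background while the performer's (red) pixel survives, which is how the individually keyed elements can be laid over either a live stage or a computer generated one.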
[0034] FIG. 8 illustrates a diagram of both a live stage background 5 and the performers and equipment 16, and capturing each individual performer and piece of equipment 16 using Chroma Keying and/or black screen technology, and combining either a live stage background 5 and/or a computer generated background 6 with the performers and equipment 16 into one or more holographic videos 50 and 60, and combining these holographic videos using the process identified in FIGS. 2 to 6, with variations of these elements in various holographic video resolutions such as High Quality (HQ), HD, or UHD resolutions, and then combining one or more of these elements using the process identified in FIGS. 2 to 6 so they can be sent through one or more data network connections 18 to
holographic venues 17, large 19 or smaller 20 mobile stages, and/or sending said singular holographic video through a network connection to many different devices designed to receive holographic video images such as smartphones 21, tablets, laptop computers 22, virtual reality glasses 23, or other headgear 24 designed to receive holographic video images.
[0035] FIG. 9 illustrates another of the various systems and processes that are possible in accordance with the embodiments for creating a computer generated person from existing film 25 or video footage 26 by capturing the face 27 and/or head 28 of one or more persons and/or multiple faces at multiple angles 29 and creating a facial or head mapping image 30 into a computer generated face image 31 of the face image 27 captured, and identifying particular facial components 32 to create one or more facial or head videos with real-life emotions 33, movements 33, singing 34, and/or talking 35, and the ability to manipulate the image to make the person depicted look younger or older 36. With the process of FIG. 9, in keeping with the holographic process of various holographic video resolutions, these same facial holographic videos can be combined with a computer generated body to create life-like animated holographic videos in various holographic video resolutions such as High Quality (HQ), HD, or UHD resolutions.
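One small piece of the face/head mapping described above, selecting the archival capture whose head pose best matches the pose needed at render time, can be sketched as follows; the data layout and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class FaceCapture:
    """Hypothetical record: 2-D landmarks extracted from a frame of
    archival footage, keyed by a coarse head yaw angle at capture time."""
    angle_deg: float
    landmarks: dict = field(default_factory=dict)  # name -> (x, y)

def nearest_capture(captures, target_angle):
    """Pick the capture whose head pose is closest to the angle needed
    by the computer generated head at render time."""
    return min(captures, key=lambda c: abs(c.angle_deg - target_angle))

captures = [FaceCapture(0.0, {"mouth_l": (0.40, 0.7)}),
            FaceCapture(30.0, {"mouth_l": (0.45, 0.7)}),
            FaceCapture(-30.0, {"mouth_l": (0.35, 0.7)})]
print(nearest_capture(captures, 20.0).angle_deg)
```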
[0036] FIG. 10 illustrates the process for creating a computer generated body of a person from a body scanner 37 of a person, creating a computer generated body image 38, taking that image 38 and adding movement to said image 38 by means of a computer automated movement program 39, and creating a computer generated body with movement 40, or an actual human wearing body sensors on the hand 41 and the body 42 to manipulate the computer generated body 38 by means of software to create a computer generated body 40 with movement. Aiming camera 2 and/or 4 at an actual human 44 with software that can identify and detect particular body parts and movement of said body parts 44, a face 27 can be added to said body image 40 identified in FIG. 9 to create a computer generated body 43 with face 27 in one or more High Quality (HQ), HD, or UHD resolution videos of a computer generated person 43 with real-life emotions, body movements, dancing, singing, sitting, talking, and/or other such required life-like human characteristics in the re-creation of a person or persons.
[0037] FIG. 11 illustrates a process for creating a computer generated body 40 or bodies 48 from FIG. 10 and adding them to a real-life stage or background 5, and/or a computer generated stage and/or background 6, and creating one or more High Quality (HQ), HD, or UHD resolution videos that can be combined into one or more holographic performances 50 and/or 60 that can be sent through one or more data network connections to holographic devices and/or various holographic venues and/or stage designs. Two or more created High Quality (HQ), HD, or UHD resolution videos can be synchronized together and simultaneously sent through a data network connection to holographic venues with two or more holographic projection type transparent screens and two or more projector/light sources.
[0038] FIG. 12 illustrates just one of many processes that can be used for compressing/decompressing High Quality (HQ), HD, or UHD resolution videos, and/or for encrypting said video from FIGS. 1-11.
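As a minimal sketch of the compress/decompress step, a lossless round trip can be shown with Python's zlib. An actual holographic pipeline would use a video codec (e.g., H.264/H.265) and a real cipher for the encryption step; zlib stands in here purely for illustration:

```python
import zlib

def compress_video_bytes(data: bytes, level: int = 6) -> bytes:
    """Lossless compression pass; a real pipeline would use a video codec
    rather than zlib, which is shown only as a sketch."""
    return zlib.compress(data, level)

def decompress_video_bytes(blob: bytes) -> bytes:
    """Exact inverse of the compression pass."""
    return zlib.decompress(blob)

frame = bytes([0, 0, 0] * 1000)  # a flat black frame compresses very well
blob = compress_video_bytes(frame)
assert decompress_video_bytes(blob) == frame
print(len(frame), "->", len(blob), "bytes")
```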
[0039] The system and processes described herein can create the illusion that celebrities or performers are performing their song live on this mobile stage. Because local venues have limited access to backdrops and/or stage props, a holographic image stage can provide unlimited scenes, props, and/or backgrounds with or without a celebrity performer. This will provide the local celebrity or performer/singer the feel and sensation of being on a well-designed stage like the show "The Voice" uses for its contestants. Also, being able to insert the local celebrity or performer/singer into the same holographic image/stage as the celebrity and/or stage design holographic image will now create a holographic video of both the local performer and the stage combined, as if they were standing right there on that stage. With the present system and processes, producers can produce a real-time professional stage experience, with or without the celebrity, viewed by a local audience. The present system can also enable connecting and broadcasting any live or recorded event featuring celebrities or performers from the local location to anywhere around the world; now a local or remote viewer using their smartphone, tablet, PC, or TV monitor can watch the holographic performance and/or event. This process can allow for local and worldwide viewing of local and worldwide
events/contests to be held, where anyone can save the video and send it to a friend and/or vote for their favorite celebrity or performer/singer who may be in a contest. Capturing two or more UHD video elements of the same events featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images separately, then combining two or more of these elements and sending them through a data network connection, where a remote person may now attend a live or recorded event or performance using a holographic device and/or venue to project said performance, creating a viewable UHD life-like holographic image for a remote audience, has not been attempted until the system and processes the present inventors have described herein.
[0040] The foregoing discussion was intended to provide a brief, general description of suitable computing environments in which the system and method may be implemented. Although not required, the disclosed embodiments can be described in the general context of computer-executable instructions, such as program modules, being executed by a single computer. In most instances, a "module" can constitute a software application, but can also be implemented as both software and hardware (i.e., a combination of software and hardware).
[0041] Generally, program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular data types and instructions. Moreover, those skilled in the art will appreciate that the disclosed method and system may be practiced with other computer system configurations, such as, for example, hand-held devices, multi-processor systems, data networks, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, servers, and the like.
[0042] Note that the term module as utilized herein may refer to a collection of routines and data structures that perform a particular task or implement a particular data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module. The term module may also simply refer to an application, such as a computer program designed to assist in the performance of a specific task, such as word processing, accounting, inventory management, etc.
[0043] FIGS. 1-12 are thus intended as examples and not as architectural limitations of the disclosed embodiments. Additionally, such embodiments are not limited to any particular application or computing or data processing environment. Instead, those skilled in the art will appreciate that the disclosed approach may be advantageously applied to a variety of systems and application software. Moreover, the disclosed embodiments can be embodied on a variety of different computing platforms, including Macintosh, UNIX, LINUX, and the like.
[0044] The claims, description, and drawings of this application may describe one or more of the instant technologies in operational/functional language, for example, as a set of operations to be performed by a computer. Such operational/functional description in most instances can be understood as specifically configured hardware (e.g., because a general purpose computer in effect becomes a special-purpose computer once it is programmed to perform particular functions pursuant to instructions from program software). Note that the data-processing system or apparatus discussed herein may be implemented as a special-purpose computer in some example embodiments. In some example embodiments, the data-processing system or apparatus can be programmed to perform the aforementioned particular instructions, thereby becoming in effect a special-purpose computer.
[0045] Importantly, although the operational/functional descriptions described herein are understandable by the human mind, they are not abstract ideas of the operations/functions divorced from computational implementation of those operations/functions. Rather, the operations/functions represent a specification for the massively complex computational machines or other means. As discussed in detail below, the operational/functional language must be read in its proper technological context, i.e., as concrete specifications for physical implementations.
[0046] The logical operations/functions described herein can be a distillation of machine specifications or other physical mechanisms specified by the operations/functions such that the otherwise inscrutable machine specifications may be comprehensible to the human mind. The distillation also allows one skilled in the art to adapt the operational/functional description of the technology across many different specific vendors' hardware configurations or platforms, without being limited to specific vendors' hardware configurations or platforms.
[0047] Some of the present technical description (e.g., detailed description, drawings, claims, etc.) may be set forth in terms of logical operations/functions. As described in more detail in the following paragraphs, these logical operations/functions are not representations of abstract ideas, but rather are representative of static or sequenced specifications of various hardware elements. Differently stated, unless context dictates otherwise, the logical operations/functions are representative of static or sequenced specifications of various hardware elements. This is true because tools available to implement technical disclosures set forth in operational/functional formats, such as a high-level programming language (e.g., C, Java, Visual Basic, etc.) or the Very high speed Hardware Description Language ("VHDL," a language that uses text to describe logic circuits), are generators of static or sequenced specifications of various hardware configurations. This fact is sometimes obscured by the broad term "software," but, as shown by the following explanation, what is termed "software" is shorthand for a massively complex specification of ordered-matter elements. The term "ordered-matter elements" may refer to physical components of computation, such as assemblies of electronic logic gates, molecular computing logic constituents, quantum computing mechanisms, etc.
[0048] For example, a high-level programming language is a programming language with strong abstraction, e.g., multiple levels of abstraction, from the details of the sequential organizations, states, inputs, outputs, etc., of the machines that a high-level programming language actually specifies. In order to facilitate human comprehension, in many instances, high-level programming languages resemble or even share symbols with natural languages.
[0049] It has been argued that because high-level programming languages use strong abstraction (e.g., that they may resemble or share symbols with natural languages), they are therefore a "purely mental construct" (e.g., that "software," a computer program or computer programming, is somehow an ineffable mental construct, because at a high level of abstraction it can be conceived and understood in the human mind). This argument has been used to characterize technical description in the form of functions/operations as somehow "abstract ideas." In fact, in the technological arts (e.g., the information and communication technologies) this is not true.
[0050] The fact that high-level programming languages use strong abstraction to facilitate human understanding should not be taken as an indication that what is expressed is an abstract idea. In an example embodiment, if a high-level programming language is the tool used to implement a technical disclosure in the form of functions/operations, it can be understood that, far from being abstract, imprecise, "fuzzy," or "mental" in any significant semantic sense, such a tool is instead a near incomprehensibly precise sequential specification of specific computational machines, the parts of which are built up by activating/selecting such parts from typically more general computational machines over time (e.g., clocked time). This fact is sometimes obscured by the superficial similarities between high-level programming languages and natural languages. These superficial similarities also may cause a glossing over of the fact that high-level programming language implementations ultimately perform valuable work by creating/controlling many different computational machines.
[0051] The many different computational machines that a high-level programming language specifies are almost unimaginably complex. At base, the hardware used in the computational machines typically consists of some type of ordered matter (e.g., traditional electronic devices (e.g., transistors), deoxyribonucleic acid (DNA), quantum devices, mechanical switches, optics, fluidics, pneumatics, optical devices (e.g., optical interference devices), molecules, etc.) that are arranged to form logic gates. Logic gates are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to change physical state in order to create a physical reality of Boolean logic.
[0052] Logic gates may be arranged to form logic circuits, which are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to create a physical reality of certain logical functions. Types of logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), computer memory devices, etc., each type of which may be combined to form yet other types of physical devices, such as a central processing unit (CPU), the best known of which is the microprocessor. A modern microprocessor will often contain more than one hundred million logic gates in its many logic circuits (and often more than a billion transistors).
[0053] The logic circuits forming the microprocessor are arranged to provide a microarchitecture that will carry out the instructions defined by that microprocessor's Instruction Set Architecture. The Instruction Set Architecture is the part of the microprocessor architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external Input/Output.
[0054] The Instruction Set Architecture includes a specification of the machine language that can be used by programmers to use/control the microprocessor. Since the machine language instructions are such that they may be executed directly by the microprocessor, typically they consist of strings of binary digits, or bits. For example, a typical machine language instruction might be many bits long (e.g., 32, 64, or 128 bit strings are currently common). A typical machine language instruction might take the form "11110000101011110000111100111111" (a 32 bit instruction).
[0055] It is significant here that, although the machine language instructions are written as sequences of binary digits, in actuality those binary digits specify physical reality. For example, if certain semiconductors are used to make the operations of Boolean logic a physical reality, the apparently mathematical bits "1" and "0" in a machine language instruction actually constitute a shorthand that specifies the application of specific voltages to specific wires. For example, in some semiconductor technologies, the binary number "1" (e.g., logical "1") in a machine language instruction specifies around +5 volts applied to a
specific "wire" (e.g., metallic traces on a printed circuit board) and the binary number "0" (e.g., logical "0") in a machine language instruction specifies around -5 volts applied to a specific "wire." In addition to specifying voltages of the machines' configuration, such machine language instructions also select out and activate specific groupings of logic gates from the millions of logic gates of the more general machine. Thus, far from abstract mathematical expressions, machine language instruction programs, even though written as a string of zeros and ones, specify many, many constructed physical machines or physical machine states.
[0056] Machine language is typically incomprehensible by most humans (e.g., the above example was just ONE instruction, and some personal computers execute more than two billion instructions every second).
[0057] Thus, programs written in machine language, which may be tens of millions of machine language instructions long, are incomprehensible. In view of this, early assembly languages were developed that used mnemonic codes to refer to machine language instructions, rather than using the machine language instructions' numeric values directly (e.g., for performing a multiplication operation, programmers coded the abbreviation "mult," which represents the binary number "011000" in MIPS machine code). While assembly languages were initially a great aid to humans controlling the microprocessors to perform work, in time the complexity of the work that needed to be done by the humans outstripped the ability of humans to control the microprocessors using merely assembly languages.
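The "mult"-to-binary mapping mentioned here can be made concrete with a toy encoder for the MIPS R-type instruction format, in which "mult" is selected by the 6-bit funct field 0b011000. The register numbers and field layout follow the published MIPS I encoding, but the helper is only an illustrative sketch (it leaves opcode, rd, and shamt at zero, which is correct for mult but not general):

```python
# In the MIPS R-type format, "mult" is identified by funct = 0b011000.
MULT_FUNCT = 0b011000
assert MULT_FUNCT == 24  # "011000" read as a binary number

def encode_r_type(rs: int, rt: int, funct: int) -> int:
    """Assemble a minimal MIPS R-type word: opcode, rd, and shamt are
    left at zero (mult writes its result to HI/LO, not a register)."""
    return (rs << 21) | (rt << 16) | funct

word = encode_r_type(4, 5, MULT_FUNCT)  # mult $a0, $a1
print(f"{word:032b}")                   # the 32-bit machine instruction
```

The mnemonic "mult $a0, $a1" thus stands for a specific 32-bit pattern, which is exactly the human-readable shorthand the paragraph describes.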
[0058] At this point, it was noted that the same tasks needed to be done over and over, and the machine language necessary to do those repetitive tasks was the same. In view of this, compilers were created. A compiler is a device that takes a statement that is more comprehensible to a human than either machine or assembly language, such as "add 2+2 and output the result," and translates that human understandable statement into a complicated, tedious, and immense machine language code (e.g., millions of 32, 64, or 128 bit length strings). Compilers thus translate high-level programming language into machine language.
[0059] This compiled machine language, as described above, is then used as the technical specification which sequentially constructs and causes the interoperation of many different computational machines such that humanly useful, tangible, and concrete work is done. For example, as indicated above, such machine language, the compiled version of the higher-level language, functions as a technical specification which selects out hardware logic gates, specifies voltage levels, voltage transition timings, etc., such that the humanly useful work is accomplished by the hardware.
[0060] Thus, a functional/operational technical description, when viewed by one skilled in the art, is far from an abstract idea. Rather, such a functional/operational technical description, when understood through the tools available in the art such as those just described, is instead understood to be a humanly understandable representation of a hardware specification, the complexity and specificity of which far exceeds the comprehension of most any one human. Accordingly, any such operational/functional technical descriptions may be understood as operations made into physical reality by (a) one or more interconnected physical machines, (b) interconnected logic gates configured to create one or more physical machine(s) representative of sequential/combinatorial logic(s), (c) interconnected ordered components making up logic gates (e.g., interconnected electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.) that create physical reality representative of logic(s), or (d) virtually any combination of the foregoing. Indeed, any physical object which has a stable, measurable, and changeable state may be used to construct a machine based on the above technical description. Charles Babbage, for example, constructed the first computer out of wood, powered by cranking a handle.
[0061] Thus, far from being understood as an abstract idea, it can be recognized that a functional/operational technical description is a humanly understandable representation of one or more almost unimaginably complex and time-sequenced hardware instantiations. The fact that functional/operational technical descriptions might lend themselves readily to high-level computing languages (or high-level block diagrams for that matter) that share some words, structures, phrases, etc., with natural language simply cannot be taken as an indication that such functional/operational technical descriptions are abstract ideas, or mere
expressions of abstract ideas. In fact, as outlined herein, in the technological arts this is simply not true. When viewed through the tools available to those skilled in the art, such functional/operational technical descriptions are seen as specifying hardware configurations of almost unimaginable complexity.
[0062] As outlined above, the reason for the use of functional/operational technical descriptions is at least twofold. First, the use of functional/operational technical descriptions allows near-infinitely complex machines and machine operations arising from interconnected hardware elements to be described in a manner that the human mind can process (e.g., by mimicking natural language and logical narrative flow). Second, the use of functional/operational technical descriptions assists the person skilled in the art in understanding the described subject matter by providing a description that is more or less independent of any specific vendor's piece(s) of hardware.
[0063] The use of functional/operational technical descriptions assists the person skilled in the art in understanding the described subject matter since, as is evident from the above discussion, one could easily, although not quickly, transcribe the technical descriptions set forth in this document as trillions of ones and zeroes, billions of single lines of assembly-level machine code, millions of logic gates, thousands of gate arrays, or any number of intermediate levels of abstraction. However, if any such low-level technical descriptions were to replace the present technical description, a person skilled in the art could encounter undue difficulty in implementing the disclosure, because such a low-level technical description would likely add complexity without a corresponding benefit (e.g., by describing the subject matter utilizing the conventions of one or more vendor-specific pieces of hardware). Thus, the use of functional/operational technical descriptions assists those skilled in the art by separating the technical descriptions from the conventions of any vendor-specific piece of hardware.
[0064] In view of the foregoing, the logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical
operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one skilled in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[0065] At least a portion of the devices or processes described herein can be integrated into an information processing system. An information processing system generally includes one or more of a system unit housing, a video display device, memory, such as volatile or non-volatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), or control systems including feedback loops and control motors (e.g., feedback for detecting position or velocity, control motors for moving or adjusting components or quantities). An information processing system can be implemented utilizing suitable commercially available components, such as those typically found in data computing/communication or network computing/communication systems.
[0066] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes or systems or other technologies described herein can be effected (e.g., hardware, software, firmware, etc., in one or more machines or articles of manufacture), and that the preferred vehicle will vary with the context in which the processes, systems, other technologies, etc., are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation that is implemented in one or more machines or articles of manufacture; or, yet again alternatively, the implementer may opt for some combination of hardware, software, firmware, etc., in one or more machines or articles of manufacture. Hence, there
are several possible vehicles by which the processes, devices, other technologies, etc., described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. In an embodiment, optical aspects of implementations will typically employ optically oriented hardware, software, firmware, etc., in one or more machines or articles of manufacture.
[0067] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact, many other architectures can be implemented that achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," "interconnected," or "operably coupled" to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably coupleable" to each other to achieve the desired functionality. Specific examples of operably coupleable include, but are not limited to, physically mateable, physically interacting components, wirelessly interactable, wirelessly interacting components, logically interacting, logically interactable components, etc.
[0068] In an example embodiment, one or more components may be referred to herein as "configured to," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc. Such terms (e.g., "configured to") can generally encompass active-state components, or inactive-state components, or standby-state components, unless context requires otherwise.
[0069] The foregoing detailed description has set forth various embodiments of the devices or processes via the use of block diagrams, flowcharts, or examples, insofar as
such block diagrams, flowcharts, or examples contain one or more functions or operations, it will be understood by the reader that each function or operation within such block diagrams, flowcharts, or examples can be implemented, individually or collectively, by a wide range of hardware, software, firmware in one or more machines or articles of manufacture, or virtually any combination thereof. Further, the use of "Start," "End," or "Stop" blocks in the block diagrams is not intended to indicate a limitation on the beginning or end of any functions in the diagram. Such flowcharts or diagrams may be incorporated into other flowcharts or diagrams where additional functions are performed before or after the functions shown in the diagrams of this application. In an embodiment, several portions of the subject matter described herein are implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and designing the circuitry or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal-bearing medium used to actually carry out the distribution.
Non-limiting examples of a signal-bearing medium include the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).
[0070] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to the reader that, based upon the teachings herein, changes and modifications can be made without departing from the subject matter
described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. In general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims), are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). Further, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc."
is used, in general such a construction is intended in the sense of the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense of the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Typically a disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the
terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will typically be understood to include the possibilities of "A" or "B" or "A and B."
[0071] With respect to the appended claims, the operations recited therein generally may be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations might be performed in orders other than those that are illustrated, or may be performed concurrently. Examples of such alternate orderings include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.
[0072] It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
Claims
What is claimed is:
1. A method to improve entertainment, comprising:
capturing at least two videos of varying resolution at a performance featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images;
combining the at least two video elements of varying resolution into a data file in a production server;
sending the data file including a combination of the at least two video elements at varying resolution through a data network to a remote client; and
rendering the data file comprising the at least two video elements combined by the production server on a display.
2. The method of claim 1, wherein the display is at least one display screen and more than one camera is used for projection of images representing each of the at least two videos on at least one screen at a remote venue, wherein remote persons attending the remote venue can view the performance including life-like holographic images created from the data file on at least one display screen.
3. The method of claim 1, further comprising at least one remote client receiving through the data network the data file including the various perspectives produced by the production server for rendering of the various perspectives in synchronous as a representation of the performance on at least one screen associated with the at least one remote client.
4. The method of claim 1, wherein the performance is at least one of a live event, a pre-recorded event, a computer generated video, and is sent from a production server through a data network connection to at least one remote client for rendering on at least one display screen as holographic images.
5. The method of claim 1, wherein at least one remote client is located at a remote venue and the display includes at least two transparent screens each for displaying video of the various perspectives from the performance as holographic images.
6. The method of claim 1, wherein the display is at least one display screen and more than one projector is used for projection of images representing each of the at least two videos on at least one screen at a remote venue, wherein remote persons attending the remote venue can view the performance including life-like holographic images created from the data file on at least one display screen.
7. A method to improve entertainment, comprising:
providing more than one camera disposed around a performance of more than one individual capturing various perspectives of video representing each individual activity at the performance, and access to a production server for receiving the various perspectives captured by the more than one camera and for processing the various perspectives for rendering on at least one screen by remote clients, wherein the processing includes the synchronization of the various perspectives into a data file;
capturing at least two videos of varying resolution at the performance featuring individuals, entertainers, professionals, venues, stages, and/or computer generated holographic video images;
combining the at least two video elements of varying resolution into a data file in a production server;
sending the data file including a combination of the at least two video elements at varying resolution through a data network to a remote client; and
rendering the data file comprising the at least two video elements combined by the production server on a display.
8. The method of claim 7, wherein the display is a display screen and more than one camera is used for projection of images representing each of the at least two videos on at least one screen at a remote venue, wherein remote persons attending the remote venue can view the performance including life-like holographic images created from videos stored in the data file.
9. The method of claim 8, wherein the display comprises at least one screen and the method further comprises rendering the data file comprising the at least two video elements combined by the production server using more than one camera to project images on at least one screen at a remote venue, wherein remote persons attending the remote venue can view a performance including life-like holographic images created from the data file.
10. The method of claim 8, wherein the remote client includes more than one projector that is located at a live entertainment venue and the at least one screen includes at least two transparent screens each for displaying video of the various perspectives from the performance as holographic images produced on the at least two transparent screens by the at least one projector.
11. A system for creating video including individually captured video elements of various resolutions captured by more than one camera at a performance, comprising:
more than one camera disposed around a performance of more than one individual capturing various perspectives of video representing each individual activity at the performance; and
a production server receiving the various perspectives captured by the more than one camera and processing the various perspectives for rendering on a screen by remote clients, wherein the processing includes synchronizing the various perspectives into a data file.
12. The system of claim 11, further comprising at least one remote client receiving through the data network the data file including the various perspectives produced by the production server for rendering of the various perspectives in synchronous as a representation of the performance on at least one screen associated with the at least one remote client.
13. The system of claim 11, wherein video elements of the various perspectives are synchronized together and simultaneously streamed from a local server and sent through at least one data network to a remote server.
14. The system of claim 11, wherein the performance is at least one of a live event, a pre-recorded event, a computer generated video, and is sent from a production server through a data network connection to at least one remote client for rendering on at least one display screen as holographic images.
15. The system of claim 12, wherein at least one remote client is located at a remote venue and the at least one screen includes at least two transparent screens each for displaying video of the various perspectives from the performance as holographic images.
16. The system of claim 15, wherein the holographic images are computer generated holographic video images representing life-sized video images of performers.
17. The system of claim 16, wherein the life-sized video images are of at least one legacy artist or celebrity who is dead.
18. The system of claim 16, wherein a computer generated video image is of at least one of a backdrop, background, props, equipment, lighting, or special effects from the performance.
19. The system of claim 11, wherein the performance is a live performance occurring at a live venue and the live venue includes at least one of a Chroma Keying technology stage, a black screen technology stage, an entertainment venue, or a sports venue.
20. The system of claim 17, wherein the legacy artist or celebrity that is dead is generated by the production server with access to video files including movements and facial expressions that are combined with a computer generated body captured by a scanner, and wherein the video produced by the production server is synchronized together with a pre-recorded audio or a song associated with the legacy artist or celebrity that is dead to create one life-like video where the audio or song and the computer generated video is synchronized so the facial movements including the lips and facial expressions, and body movements all coincide with the audio or song.
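The claimed pipeline — capture several video perspectives at differing resolutions, synchronize them into one data file on a production server, and hand that file to a remote client for rendering — can be sketched in miniature as follows. This is an illustrative sketch only, not the patented implementation: every class, field, and method name (`VideoElement`, `ProductionServer.combine`, `RemoteClient.render`, etc.) is hypothetical, and "synchronization" is reduced to a deterministic ordering in place of real timestamp alignment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class VideoElement:
    """One captured perspective; names and fields are illustrative."""
    camera_id: str
    resolution: Tuple[int, int]             # (width, height), e.g. (3840, 2160) for UHD
    frames: List[bytes] = field(default_factory=list)


@dataclass
class DataFile:
    """The combined, synchronized set of perspectives sent to remote clients."""
    elements: List[VideoElement] = field(default_factory=list)


class ProductionServer:
    def combine(self, elements: List[VideoElement]) -> DataFile:
        # Stand-in for synchronization: order perspectives deterministically
        # by camera id. A real system would align timestamps/frame counts.
        return DataFile(elements=sorted(elements, key=lambda e: e.camera_id))


class RemoteClient:
    def render(self, data_file: DataFile) -> List[str]:
        # Stand-in for projecting each perspective onto a display screen
        # (e.g., transparent screens at a remote venue).
        return [
            f"render {e.camera_id} at {e.resolution[0]}x{e.resolution[1]}"
            for e in data_file.elements
        ]


if __name__ == "__main__":
    captured = [
        VideoElement("cam-b", (1920, 1080)),   # HD perspective
        VideoElement("cam-a", (3840, 2160)),   # UHD perspective
    ]
    data_file = ProductionServer().combine(captured)
    for line in RemoteClient().render(data_file):
        print(line)
```

The sketch only traces the data flow of claims 1, 7, and 11 (capture, combine, transmit, render); the holographic projection, transparent-screen, and audio-synchronization limitations of the dependent claims are outside its scope.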
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762475979P | 2017-03-24 | 2017-03-24 | |
US62/475,979 | 2017-03-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018175680A1 true WO2018175680A1 (en) | 2018-09-27 |
Family
ID=63584701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/023695 WO2018175680A1 (en) | 2017-03-24 | 2018-03-22 | Uhd holographic filming and computer generated video process |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018175680A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110975307A (en) * | 2019-12-18 | 2020-04-10 | 青岛博海数字创意研究院 | Immersive naked eye 3D stage deduction system |
CN113259544A (en) * | 2021-06-15 | 2021-08-13 | 大爱全息(北京)科技有限公司 | Remote interactive holographic demonstration system and method |
CN113821104A (en) * | 2021-09-17 | 2021-12-21 | 武汉虹信技术服务有限责任公司 | Visual interactive system based on holographic projection |
CN114245101A (en) * | 2021-12-14 | 2022-03-25 | 中科星宇天文科技研究院(北京)有限公司 | Three-dimensional screen display system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080060034A1 (en) * | 2006-02-13 | 2008-03-06 | Geoffrey Egnal | System and method to combine multiple video streams |
US20090237492A1 (en) * | 2008-03-18 | 2009-09-24 | Invism, Inc. | Enhanced stereoscopic immersive video recording and viewing |
US20100000003A1 (en) * | 2008-07-07 | 2010-01-07 | O harry | Upper garment with pockets |
US20110050848A1 (en) * | 2007-06-29 | 2011-03-03 | Janos Rohaly | Synchronized views of video data and three-dimensional model data |
US20140043485A1 (en) * | 2012-08-10 | 2014-02-13 | Logitech Europe S.A. | Wireless video camera and connection methods including multiple video streams |
US20170034501A1 (en) * | 2015-07-31 | 2017-02-02 | Hsni, Llc | Virtual three dimensional video creation and management system and method |
Non-Patent Citations (1)
Title |
---|
HAIKUO: "The development, special traits and potential of holographic display technology", BACHELOR'S THESIS, 2015, XP055543456, Retrieved from the Internet <URL:https://www.theseus.fi/bitstream/handle/10024/97045/Zhou%20HaikuoThesis.pdf?sequence=1> [retrieved on 20180525] * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18770597 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18770597 Country of ref document: EP Kind code of ref document: A1 |