CN114640838B - Picture synthesis method and device, electronic equipment and readable storage medium - Google Patents
- Publication number: CN114640838B (application CN202210255128.7A)
- Authority: CN (China)
- Legal status: Active
Classifications
- H04N13/327: Stereoscopic video systems; multi-view video systems; image reproducers; calibration thereof
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- H04N13/332: Image reproducers; displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/363: Image reproducers using image projection screens
- H04N13/366: Image reproducers using viewer tracking
- G06T2207/20221: Indexing scheme for image analysis or enhancement; image fusion; image merging
Abstract
The embodiment of the invention provides a picture synthesis method, a picture synthesis device, an electronic device, and a readable storage medium. The method comprises: acquiring positioning information of a camera at the rendering time of a first synthesized picture, and re-projecting the first synthesized picture according to that positioning information; acquiring positioning information of the camera at the actual shooting time of the first synthesized picture, and re-capturing, from the re-projected first synthesized picture, a virtual picture for synthesis according to that positioning information; and synthesizing the captured virtual picture with the picture actually shot by the camera at the rendering time to obtain a second synthesized picture. Because the shooting time at which the virtual picture is re-captured and the rendering time of the actually shot picture used for synthesis differ by the same fixed delay, the method solves the misalignment that arises when pictures rendered and shot from different camera positions are synthesized directly, so that the content and the position of the pictures to be synthesized are aligned simultaneously even in the presence of display delay.
Description
Technical Field
The present invention relates to the field of multimedia technology, and in particular to a picture synthesis method, a picture synthesis apparatus, an electronic device, and a computer-readable storage medium.
Background
XR (Extended Reality) is a technique that tracks a video camera and renders images in real time, so that a display medium (an LED (Light-Emitting Diode) display screen or a projection dome) and the virtual scene beyond the display medium follow the camera's viewing angle in real time and are combined with the real picture shot in front of the camera lens, creating an impression of unbounded space.
At present, performances based on large on-site LED screens are becoming increasingly popular. Among them is an XR-based mode in which the content beyond the large screen is rendered according to the camera's tracked position and then superimposed on the camera's shot, thereby extending the boundary of the stage to create an immersive performance experience. This mode hinges on combining the virtual and real parts: the virtual picture is normally synthesized with the actually shot picture captured at the shooting time. However, when the camera moves continuously, the camera position at which each picture is rendered differs from the camera position at which the corresponding real picture is shot, so in live performances of all kinds the virtual and real parts cannot be perfectly aligned.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a picture synthesis method and apparatus, an electronic device, and a readable storage medium, so that the content and the position of the pictures to be synthesized are aligned simultaneously under display delay, making it theoretically possible to align the virtual and real parts perfectly. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided a picture synthesis method, the method including:
acquiring a first synthesized picture and positioning information of a camera at the rendering time of the first synthesized picture, and re-projecting the first synthesized picture according to the positioning information of the camera at the rendering time;
acquiring positioning information of the camera at the actual shooting moment of the first synthesized picture, and recapturing a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting moment;
and synthesizing the captured virtual picture and the actual shooting picture of the camera at the rendering moment to obtain a second synthesized picture.
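Read in order, the three steps are: re-project, re-capture, synthesize. The sketch below illustrates that flow under the simplifying assumption of a planar LED screen, so that both the re-projection and the re-capture reduce to homography warps; all names (warp, the H_* matrices, screen_mask) are illustrative and are not part of the patent.

```python
import numpy as np

def warp(image: np.ndarray, H: np.ndarray) -> np.ndarray:
    # Nearest-neighbour warp; H maps output pixel coordinates to input ones.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ pts
    sx, sy = (src[:2] / src[2]).round().astype(int)
    out = np.zeros_like(image)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out

def second_composite(first_composite, H_render_from_screen, H_screen_from_shoot,
                     real_shot_at_render, screen_mask):
    # Step 1: re-project the first synthesized picture onto the virtual
    # screen, using the camera pose at the rendering time.
    screen_texture = warp(first_composite, H_render_from_screen)
    # Step 2: re-capture the virtual picture from the re-projected screen,
    # using the camera pose at the actual shooting time.
    virtual = warp(screen_texture, H_screen_from_shoot)
    # Step 3: keep the real shot where the physical screen (and the actors)
    # appear; use the re-captured virtual picture everywhere else.
    return np.where(screen_mask[..., None], real_shot_at_render, virtual)
```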
Optionally, the first synthesized picture includes synthesized results corresponding to different rendering moments, and the re-projecting the first synthesized picture according to positioning information of the camera at the rendering moments includes:
Obtaining a synthesis result at any rendering moment and positioning information of a camera at the rendering moment; the synthesis result comprises an actual shot picture and a virtual picture;
and adopting the positioning information of the camera at the rendering moment to carry out reprojection operation on the actual shooting picture and the virtual picture corresponding to the rendering moment on a preset virtual screen.
Optionally, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the adopting the positioning information of the camera at the rendering time carries out a reprojection operation on an actual shooting picture and a virtual picture corresponding to the rendering time on a preset virtual screen, including:
and projecting the actual shot picture and the virtual picture corresponding to the rendering moment onto a preset virtual screen according to the position information and the projection position change information of the camera at the rendering moment.
Optionally, the re-projected first synthesized picture includes an actually shot picture and a virtual picture; the re-capturing a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting moment comprises the following steps:
acquiring positioning information of the camera at the moment of actually shooting, within the re-projected first synthesized picture, the same frame of picture as the picture actually shot by the camera at the rendering moment;
and capturing the virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment the actually shot picture was actually taken.
Optionally, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the capturing the virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment the actually shot picture was actually taken includes:
capturing, from the re-projected first synthesized picture, the virtual picture that conforms to the position information and the projection position change information at the moment the actually shot picture was actually taken.
Optionally, the synthesizing the captured virtual picture and the picture actually shot by the camera at the rendering moment to obtain a second synthesized picture includes:
acquiring the actually shot picture contained in the first synthesized picture at the camera's rendering moment, and synthesizing the captured virtual picture with the same frame of that actually shot picture within the re-projected first synthesized picture, to obtain a re-synthesized second synthesized picture.
Optionally, the rendering time of the camera has a display delay relative to the synthesis time of the synthesized picture, the actual shooting time of the camera has a positioning delay relative to the synthesis time of the synthesized picture, and the rendering time of the camera has a fixed delay relative to the actual shooting time of the camera;
the step of synthesizing the captured virtual picture and the actual shooting picture of the camera at the rendering moment to obtain a second synthesized picture comprises the following steps:
synthesizing the captured virtual picture with the actually shot picture of the synthesized picture that precedes the captured virtual picture by the fixed delay, within the re-projected first synthesized picture, to obtain a re-synthesized second synthesized picture.
In a second aspect of the present invention, there is also provided a picture synthesizing apparatus, the apparatus including:
the re-projection module is used for acquiring positioning information of the camera at the rendering time of the first synthesized picture aiming at the first synthesized picture, and re-projecting the first synthesized picture according to the positioning information of the camera at the rendering time;
the recapture module is used for acquiring the positioning information of the camera at the actual shooting moment of the first synthesized picture, and recapturing the virtual picture used for synthesis from the first synthesized picture subjected to the reprojection according to the positioning information of the camera at the actual shooting moment;
And the picture synthesis module is used for synthesizing the captured virtual picture and the actual shooting picture of the camera at the rendering moment to obtain a second synthesized picture.
Optionally, the first composite screen includes composite results corresponding to different rendering moments, and the re-projection module includes:
the synthesis result acquisition sub-module is used for acquiring a synthesis result at any rendering moment and positioning information of the camera at the rendering moment; the synthesis result comprises an actual shot picture and a virtual picture; and the re-projection sub-module is used for carrying out re-projection operation on the actual shooting picture and the virtual picture corresponding to the rendering moment on a preset virtual screen by adopting the positioning information of the camera at the rendering moment.
Optionally, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the re-projection submodule includes:
and the re-projection unit is used for projecting the actual shooting picture and the virtual picture corresponding to the rendering moment onto a preset virtual screen according to the position information and the projection position change information of the camera at the rendering moment.
Optionally, the re-projected first synthesized picture includes an actually shot picture and a virtual picture; the recapture module includes:
a positioning information acquisition sub-module, configured to acquire positioning information of the camera at the moment of actually shooting, within the re-projected first synthesized picture, the same frame of picture as the picture actually shot by the camera at the rendering moment;
and a re-capturing sub-module, configured to capture the virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment the actually shot picture was actually taken.
Optionally, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the recapturing submodule includes:
and the re-capturing unit is used for capturing a virtual picture which accords with the position information and the projection position change information when the actual shooting picture is actually shot from the first synthesized picture of the re-projection.
Optionally, the picture synthesis module includes:
and the first picture synthesis sub-module, configured to acquire the actually shot picture contained in the first synthesized picture at the camera's rendering moment, and to synthesize the captured virtual picture with the same frame of that actually shot picture within the re-projected first synthesized picture, obtaining a re-synthesized second synthesized picture.
Optionally, the rendering time of the camera has a display delay relative to the synthesis time of the synthesized picture, the actual shooting time of the camera has a positioning delay relative to the synthesis time of the synthesized picture, and the rendering time of the camera has a fixed delay relative to the actual shooting time of the camera; the picture synthesis module includes:
and the second picture synthesis sub-module, configured to synthesize the captured virtual picture with the actually shot picture of the synthesized picture that precedes the captured virtual picture by the fixed delay, within the re-projected first synthesized picture, obtaining a re-synthesized second synthesized picture.
In yet another aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is used for storing a computer program; and the processor is configured to implement the steps of any of the above picture synthesis methods when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the steps of the picture composition method as set forth in any one of the preceding claims.
In a further aspect of the invention there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the picture composition method as claimed in any one of the preceding claims.
According to the picture synthesis method provided by the embodiments of the invention, the synthesized picture is re-projected based on the camera's positioning information at the rendering time, and the virtual picture in the re-projected picture is re-captured based on the camera's positioning information at the shooting time, yielding a new synthesis background. The shooting time at which the virtual picture is re-captured and the rendering time of the actually shot picture used for synthesis then differ by the same fixed delay, which resolves the misalignment that arises when pictures rendered and shot from different camera positions, i.e. different machine positions, are synthesized directly. The content and the position of the pictures to be synthesized are thus aligned simultaneously under display delay, making it theoretically possible to align the virtual and real parts perfectly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flow chart of an XR rendering method according to an embodiment of the invention;
fig. 2A and 2B are schematic diagrams of the synthesis result in the related art;
FIG. 3 is a flowchart illustrating steps of an embodiment of a method for synthesizing pictures according to the present invention;
FIG. 4 is a schematic diagram of different machine positions in a virtual synthesis environment according to an embodiment of the present invention;
FIG. 5 is a system block diagram of picture composition according to an embodiment of the present invention;
FIG. 6 is a block diagram illustrating an exemplary embodiment of a picture synthesizing apparatus according to the present invention;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
XR-based performances center on combining the virtual and real parts. Aligning them is essentially a matter of accurately computing how the LED screen images onto the camera; that accuracy is affected by the camera's intrinsic parameters, distortion, positioning accuracy, stage modeling accuracy, and so on. In addition, when the camera keeps moving, the camera position when each picture is rendered differs from the camera position when it is actually shot. This delay means the virtual and real parts cannot be perfectly aligned even if all the above parameters reach theoretical accuracy.
In the XR-based performance mode, live performances of all kinds suffer from virtual and real parts that cannot be perfectly aligned. Referring to fig. 1, which shows a flow diagram of the XR performance mode provided by an embodiment of the invention, the display medium (LED / projection dome) and the virtual scene beyond it track the camera's viewing angle in real time through camera tracking and real-time rendering, and the virtual scene is then synthesized with the real picture shot in front of the camera lens. The process involves rendering by the renderer, shooting by the tracked camera, and synthesis by the compositor, corresponding respectively to the rendering time, the shooting time, and the synthesis time. The rendering time of the renderer has a display delay relative to the synthesis time of the compositor, and the shooting time of the camera has a positioning delay relative to the synthesis time, so a delay also exists between the rendering time and the shooting time. Because of the display delay, the positioning delay, and the delay between rendering and shooting, the camera's positioning information differs from moment to moment; in other words, the positioning information carries a delay difference.
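As a toy illustration of these delay relations, each frame can be time-stamped; the millisecond values below are invented for the example and do not come from the patent.

```python
# Invented latencies for illustration only.
RENDER_TO_SHOOT_MS = 40      # fixed delay: the rendered frame reaches the LED
                             # screen and is photographed 40 ms after rendering
SHOOT_TO_COMPOSITE_MS = 25   # positioning delay: the tracked pose of the shot
                             # reaches the compositor 25 ms after shooting

def paired_times(t_render_ms: int) -> tuple[int, int]:
    """Shooting and synthesis timestamps for the frame rendered at t_render_ms."""
    t_shoot = t_render_ms + RENDER_TO_SHOOT_MS
    t_composite = t_shoot + SHOOT_TO_COMPOSITE_MS
    return t_shoot, t_composite

print(paired_times(0))   # -> (40, 65)
```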
Specifically, after the renderer renders a frame, the frame takes two processing paths, as shown in fig. 1. On path 1, a projection is computed from the camera position and the rendered picture is displayed on the LED screen (a delay exists here); once on screen it is photographed by the camera, and the shot, which contains the real objects on stage, including the actors, is transmitted to the compositor (a further delay exists here). By then the camera position at the shooting moment differs from its position at the renderer's rendering moment. On path 2, the rendered picture is sent directly to the compositor (with a smaller delay than on path 1) and, after delay calculation, is superimposed on the same frame arriving via path 1 to produce the XR synthesis result. Path 2 also carries delay: even though the rendered picture goes straight to the compositor, it must be superimposed on the path-1 shot of that same frame, so the mismatch between the camera positions at the shooting and rendering moments persists.
However, even when the static case is error-free and the post-rendering processing of paths 1 and 2 is simulated in theory, misalignment remains. As shown in fig. 2A, when the same frame is used for synthesis, the synthesized virtual and real parts are severely misaligned; as shown in fig. 2B, when the result rendered at the shooting time is used for synthesis, the virtual-real alignment is still off and the picture contents may not even match. Because the cameras at the rendering time and at the shooting time in paths 1 and 2 are at different positions, the synthesized results cannot align while the camera keeps moving, even in a theoretical simulation.
One of the core ideas of the embodiments of the invention is therefore to re-project the synthesized picture based on the camera's positioning information at the rendering time, and to re-capture the virtual picture from the re-projected picture based on the camera's positioning information at the shooting time, obtaining a new synthesis background. The shooting time of the re-captured virtual picture then has the same fixed delay relative to the rendering time of the actually shot picture used for synthesis, which resolves the misalignment caused by the rendering and shooting cameras being at different positions.
Referring to fig. 3, a flowchart illustrating steps of an embodiment of a picture synthesizing method according to the present invention may specifically include the following steps:
step 301, obtaining positioning information of a camera at a rendering time of a first synthesized picture, and re-projecting the first synthesized picture according to the positioning information of the camera at the rendering time;
To address the misalignment of the virtual and real parts, suppose first that the screen displaying the renderer's picture is so large that the camera can never film past its edges. Then, no matter how fast the camera moves, the picture's perspective may be wrong, that is, the spatial positions, contours, and sense of depth of objects projected from differing viewpoints may not match what a viewer expects, but since the camera never shoots beyond the screen, no virtual-real alignment is needed and misalignment cannot occur. Building on this assumption of a sufficiently large screen, and given small static calibration errors (intrinsics, positioning, modeling, and the like), an embodiment of the invention uses extended-screen re-projection to simulate, in a virtual synthesis environment, the shooting process under this hypothetical oversized screen, and superimposes the camera's actual shot onto it without needing the alignment step.
In an embodiment of the invention, to simulate shooting under the hypothetical oversized screen in the virtual synthesis environment and superimpose the camera's actually shot picture to obtain the XR synthesized picture, an extended-screen re-projection operation is required. Chiefly, the first synthesized picture is re-projected onto the extended screen for further processing.
The first synthesized picture to be re-projected is a previously synthesized picture, and the extended screen it is projected onto belongs to the virtual synthesis environment of the no-alignment-needed assumption; this extended screen appears as the extended LED shown in fig. 4.
In practical applications, the further processing of the first synthesized picture consists mainly in re-projecting it so as to remove the rendering delay present in related synthesis techniques.
To remove the rendering delay of related synthesis techniques, the positioning information of the camera at the rendering time of the first synthesized picture can be acquired, and the first synthesized picture re-projected according to it. As shown in fig. 4, a virtual camera with two machine positions can be set up in the virtual synthesis environment: position A is responsible for re-projecting the synthesized picture, chiefly re-projecting the same-frame content as in the first synthesized picture, while position B is responsible for re-capturing from it, so that the content and the position of the pictures to be synthesized are aligned simultaneously under display delay, improving rendering efficiency and avoiding repeated computation. It should be noted that the placements of positions A and B in fig. 4 merely illustrate virtual camera positions in the virtual synthesis environment; they are set according to the actual situation and do not limit the embodiments of the invention.
Specifically, the first synthesized picture re-projected in the virtual synthesis environment may include synthesis results corresponding to different rendering moments, each comprising an actually shot picture (i.e. the shot of the picture displayed on screen) and a virtual picture. The synthesis result of any rendering moment and the camera's positioning information at that moment, for example machine position A in fig. 4, can then be acquired, and the actually shot picture and virtual picture for that moment re-projected, based on position A, onto a preset virtual screen, that is, re-projected onto the virtual screen in the virtual synthesis environment.
The positioning information of the camera at the rendering time can be represented as the camera's machine position at that time, for example position A, and may include the camera's position information, projection position change information, displacement information of camera movement, and the like. The re-projection of the first synthesized picture is concretely realized by projecting the actually shot picture and the virtual picture of that rendering moment onto the preset virtual screen according to the camera's position information and projection position change information at the rendering moment.
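One concrete way to realize this projection, offered here only as an assumed sketch: if the preset virtual screen is modelled as the world plane z = 0 and the camera has intrinsics K and world-to-camera pose (R, t), the plane-to-image mapping is the classic homography H = K [r1 r2 t], and re-projection uses its inverse. The intrinsics and pose values below are invented.

```python
import numpy as np

def screen_from_camera_homography(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    # Image pixels -> coordinates on the virtual screen (the world plane z = 0).
    # Inverse of the plane-to-image homography H = K [r1 r2 t], where r1, r2
    # are the first two columns of R and (R, t) is the world-to-camera pose.
    H_img_from_plane = K @ np.column_stack([R[:, 0], R[:, 1], t])
    return np.linalg.inv(H_img_from_plane)

# Assumed intrinsics and pose of machine position A at the rendering moment:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R_render = np.eye(3)                   # orientation at the rendering moment
t_render = np.array([0.0, 0.0, 5.0])   # 5 units in front of the screen plane
H_reproject = screen_from_camera_homography(K, R_render, t_render)
```

In this sketch, the projection position change information would enter as the frame-to-frame update of (R, t).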
Step 302, recapturing a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting time of the first synthesized picture;
After the first synthesized picture has been re-projected according to the camera's positioning information at the rendering time, removing the rendering delay of related synthesis techniques, the virtual picture serving as the new synthesis background can be re-captured. Chiefly, the virtual picture is captured according to the camera's positioning information at the actual shooting time of the first synthesized picture, so that at the subsequent synthesis the rendered virtual picture in the first synthesized picture can be aligned using the camera's position at the shooting time, guaranteeing virtual-real alignment in the new synthesized picture.
Re-capturing the virtual picture for synthesis from the first synthesized picture according to the camera's positioning information at its actual shooting time can be represented by machine position B in fig. 4: position B is responsible for re-capturing the synthesized picture, chiefly re-capture at the matching machine position, that is, capturing the virtual picture using the same positioning information as at the actual shooting time of the actually shot picture contained in the first synthesized picture.
In practical applications, the re-projected first synthesized picture may include an actually shot picture (i.e. the shot of the picture displayed on screen) and a virtual picture. The positioning information of the camera at the moment of actually shooting, within the re-projected first synthesized picture, the same frame as the camera's rendering-time shot, for example machine position B in fig. 4, can then be acquired, and position B used to capture the virtual picture from the re-projected first synthesized picture as the background of a new synthesized picture.
Likewise, the positioning information of the camera at the actual shooting time can be represented as the camera's machine position at that time, for example position B, and may include the camera's position information, projection position change information, displacement information of camera movement, and the like.
The re-capture of the virtual picture is concretely realized by capturing, from the re-projected first synthesized picture, the virtual picture that conforms to the position information and projection position change information at the moment the actually shot picture was actually taken. Specifically, as shown in fig. 5, the virtual synthesis environment involves rendering by the renderer, shooting by the tracked camera, and synthesis by the compositor, corresponding to the rendering, shooting, and synthesis times; the rendering time has a display delay relative to the synthesis time of the synthesized picture, and the actual shooting time has a positioning delay relative to it.
As fig. 5 shows, when the camera at the rendering time and the camera at the actual shooting time are at different machine positions, the display delay differs from the positioning delay. Consequently, whether the subsequent virtual-real alignment uses the same frame of the actually shot picture contained in the first synthesized picture or the content rendered at the actual shooting time, the virtual picture can still be re-captured: starting from the synthesis result re-projected for a given rendering moment, and once the actually shot picture needed for synthesis is determined, the capture is performed using the camera's positioning information at that picture's actual shooting time.
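Continuing the assumed homography sketch from the re-projection step, re-capture is the forward direction of the same plane mapping, evaluated with the pose tracked at the actual shooting moment (machine position B); the pose values are again invented.

```python
import numpy as np

def camera_from_screen_homography(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    # Virtual-screen (world plane z = 0) coordinates -> pixels of the camera
    # with world-to-camera pose (R, t): the plane-to-image mapping K [r1 r2 t].
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R_shoot = np.eye(3)
t_shoot = np.array([0.3, 0.0, 5.0])    # the camera has moved since rendering
H_recapture = camera_from_screen_homography(K, R_shoot, t_shoot)

# Chaining re-projection and re-capture maps pixels of the rendering-time
# composite directly to pixels of the shooting-time view:
# H_total = H_recapture @ H_reproject   (H_reproject from the earlier sketch)
```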
Step 303, the captured virtual picture and the actual shot picture of the camera at the rendering time are combined to obtain a second combined picture.
After the virtual picture serving as the background has been re-captured, the picture actually shot by the camera can be superimposed on it. In practical applications, the camera's actually shot picture is the picture determined for synthesis during re-projection, that is, the same frame as the actually shot on-screen picture. When synthesizing the new picture, the captured virtual picture is synthesized with that same frame, yielding the newly synthesized picture and completing XR synthesis under perfect alignment.
In an embodiment of the present invention, the picture determined for synthesis during re-projection may be the actually shot picture contained in the first synthesized picture at the camera's rendering time; the camera positioning information of the captured same-frame picture corresponds to the same machine position as the camera's positioning information at the actual shooting time.
Synthesizing the second, i.e. newly synthesized, picture can then be expressed as synthesizing the captured virtual picture with the frame, in the re-projected first synthesized picture, that is the same frame as the actually shot picture contained in the first synthesized picture at the camera's rendering time, obtaining the re-synthesized second synthesized picture.
It should be noted that the re-projected first synthesized picture and the original first synthesized picture are affected by time, chiefly by the difference between the camera's positioning information at the rendering time and at the shooting time. The essence of the scheme is that, once the first synthesized picture has been re-projected according to the rendering-time positioning information, a virtual picture conforming to the shooting-time positioning information of the actually shot part of the re-projected picture can be re-captured and combined with the rendering-time actually shot part, realizing virtual-real alignment. The same frame of picture is the actually shot part of the synthesized picture corresponding to the camera's rendering-time positioning information, i.e. the actually shot picture; but because, in this scheme, the original and the re-projected first synthesized pictures exist in different synthesis environments, the rendering-time actually shot part must be re-synthesized, and its content is in fact taken from the re-projected first synthesized picture. The same frame of the actually shot picture contained in the first synthesized picture at the camera's rendering time can thus be understood as that actually shot picture itself.
In a preferred embodiment, as shown in fig. 5, the virtual synthesis environment involves rendering by the renderer, shooting by the tracked camera, and synthesis by the compositor, corresponding to the rendering, shooting, and synthesis times. The rendering time has a display delay relative to the synthesis time of the synthesized picture, and the actual shooting time has a positioning delay relative to it; equivalently, the rendering time of the camera has a fixed delay relative to the actual shooting time of the camera.
Then, for the virtual picture serving as the new synthesis background, a fixed delay, for example the difference between the display delay and the positioning delay, exists between the actual shooting time on which its capture is based and the rendering time of the actually shot picture used for synthesis. The camera positioning information of the captured virtual picture corresponds to the same machine position as that of the actually shot picture in the synthesized picture preceding it by the fixed delay. The captured virtual picture can therefore be synthesized with the actually shot picture of the synthesized picture that precedes it by the fixed delay within the re-projected first synthesized picture, obtaining the re-synthesized second synthesized picture and realizing virtual-real alignment. It should be noted that embodiments of the invention can greatly improve the synthesis quality of the XR portion of a virtual performance and remove various constraints imposed on creative teams for technical reasons (machine-position limits, camera-move speed, lens switching, and so on); they are applicable to scenarios such as virtual performances and virtual shooting, and the scheme improves rendering efficiency and avoids repetitive computation, without this limiting the embodiments of the invention.
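The fixed-delay pairing can be made concrete with a small buffering scheme; the class below is an illustrative assumption, not the patent's implementation, and measures the fixed delay in whole frames.

```python
from collections import deque

class DelayMatcher:
    """Pairs each re-captured virtual picture with the real shot rendered
    `fixed_delay_frames` earlier (a toy model of the fixed delay above)."""

    def __init__(self, fixed_delay_frames: int):
        self.delay = fixed_delay_frames
        self.real_shots = deque()

    def push_real_shot(self, frame) -> None:
        # Real shots arrive in rendering order.
        self.real_shots.append(frame)

    def match(self, virtual_picture):
        # The virtual picture captured now belongs with the real shot from
        # `self.delay` frames ago; until then, no pair is ready.
        if len(self.real_shots) > self.delay:
            real = self.real_shots.popleft()
            return virtual_picture, real   # pair ready for synthesis
        return None
```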
In the embodiments of the invention, the new synthesis background is obtained by re-projecting the synthesized picture based on the camera's positioning information at the rendering moment and re-capturing the virtual picture from the re-projected picture based on the camera's positioning information at the shooting moment. The shooting time at which the virtual picture is re-captured and the rendering time of the actually shot picture used for synthesis then differ by the same fixed delay, which resolves the misalignment that arises when pictures rendered and shot from different camera positions, i.e. different machine positions, are synthesized directly; the content and the position of the pictures to be synthesized are aligned simultaneously under display delay, making it theoretically possible to align the virtual and real parts perfectly.
It should be noted that, for simplicity of description, the method embodiments are presented as a series of acts, but those skilled in the art will understand that the embodiments are not limited by the order of the acts described, since some steps may be performed in other orders or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the embodiments of the invention.
Referring to fig. 6, a block diagram illustrating an embodiment of a picture synthesizing apparatus according to the present invention may specifically include the following modules:
a re-projection module 601, configured to acquire, for a first synthesized picture, positioning information of the camera at the rendering time of the first synthesized picture, and to re-project the first synthesized picture according to the positioning information of the camera at the rendering time;
a recapture module 602, configured to acquire positioning information of the camera at the actual shooting time of the first synthesized picture, and to re-capture a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting time;
and a picture synthesis module 603, configured to synthesize the captured virtual picture with an actual shot picture of the camera at the rendering time to obtain a second synthesized picture.
In one embodiment of the present invention, the first composite screen includes composite results corresponding to different rendering moments, and the re-projection module 601 may include the following sub-modules:
the synthesis result acquisition sub-module is used for acquiring a synthesis result at any rendering moment and positioning information of the camera at the rendering moment; the synthesis result comprises an actual shot picture and a virtual picture;
And the re-projection sub-module is used for carrying out re-projection operation on the actual shooting picture and the virtual picture corresponding to the rendering moment on a preset virtual screen by adopting the positioning information of the camera at the rendering moment.
In an embodiment of the present invention, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the re-projection sub-module may include the following units:
and the re-projection unit is used for projecting the actual shooting picture and the virtual picture corresponding to the rendering moment onto a preset virtual screen according to the position information and the projection position change information of the camera at the rendering moment.
In one embodiment of the present invention, the re-projected first synthesized picture includes an actually shot picture and a virtual picture; the recapture module 602 may include the following sub-modules:
a positioning information acquisition sub-module, configured to acquire positioning information of the camera at the moment of actually shooting, within the re-projected first synthesized picture, the same frame of picture as the picture actually shot by the camera at the rendering moment;
and a re-capturing sub-module, configured to capture the virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment the actually shot picture was actually taken.
In one embodiment of the present invention, the positioning information of the camera at the rendering time includes position information and projection position change information of the camera, and the recapturing sub-module may include the following units:
and the re-capturing unit is used for capturing a virtual picture which accords with the position information and the projection position change information when the actual shooting picture is actually shot from the first synthesized picture of the re-projection.
In one embodiment of the present invention, the picture synthesis module 603 may include the following sub-module:
the first picture synthesis sub-module, configured to acquire the actually shot picture contained in the first synthesized picture at the camera's rendering moment, and to synthesize the captured virtual picture with the same frame of that actually shot picture within the re-projected first synthesized picture, obtaining a re-synthesized second synthesized picture.
In one embodiment of the present invention, the rendering time of the camera has a display delay relative to the synthesis time of the synthesized picture, the actual shooting time of the camera has a positioning delay relative to the synthesis time of the synthesized picture, and the rendering time of the camera has a fixed delay relative to the actual shooting time of the camera;
The picture composition module 603 may include the following sub-modules:
and the second picture synthesis sub-module, configured to synthesize the captured virtual picture with the actually shot picture of the synthesized picture that precedes the captured virtual picture by the fixed delay, within the re-projected first synthesized picture, obtaining a re-synthesized second synthesized picture.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703, and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 communicate with one another through the communication bus 704;
a memory 703 for storing a computer program;
the processor 701 is configured to execute the program stored in the memory 703, and implement the following steps:
acquiring a first synthesized picture and positioning information of a camera at the rendering time of the first synthesized picture, and re-projecting the first synthesized picture according to the positioning information of the camera at the rendering time;
Acquiring positioning information of the camera at the actual shooting moment of the first synthesized picture, and recapturing a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting moment;
and synthesizing the captured virtual picture and the actual shooting picture of the camera at the rendering moment to obtain a second synthesized picture.
Specifically, the first synthesized picture re-projected in the virtual synthesis environment may include synthesis results corresponding to different rendering moments. The synthesis result of any rendering moment can be acquired, and the positioning information of the camera at that rendering moment used to re-project the actually shot picture and the virtual picture corresponding to that rendering moment onto the preset virtual screen.
In the embodiment of the invention, the re-projected first synthesized picture may include an actually shot picture (i.e. the shot of the picture displayed on screen) and a virtual picture. Positioning information of the camera at the moment of actually shooting, within the re-projected first synthesized picture, the same frame of picture as the picture actually shot by the camera at the rendering moment can be acquired and used to capture the virtual picture from the re-projected first synthesized picture, taking the captured virtual picture as the background of a new synthesized picture. The captured virtual picture is then synthesized with the actually shot picture, i.e. the second synthesized picture is synthesized, completing XR synthesis with the virtual and real parts aligned.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, e.g., at least one magnetic-disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the steps of the picture composition method of any of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the picture composition method of any of the above embodiments.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized wholly or partly as a computer program product, which comprises one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., Solid State Disk (SSD)), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply since they are substantially similar to the method embodiments; for relevant points, reference may be made to the corresponding parts of the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.
Claims (9)
1. A picture synthesis method, the method comprising:
acquiring a first synthesized picture and positioning information of a camera at the rendering time of the first synthesized picture, and re-projecting the first synthesized picture according to the positioning information of the camera at the rendering time; the first synthesized picture comprises synthesized results corresponding to different rendering moments, and the synthesized results comprise an actual shooting picture and a virtual picture;
acquiring positioning information of the camera at the actual shooting moment of the first synthesized picture, and recapturing a virtual picture for synthesis from the re-projected first synthesized picture according to the positioning information of the camera at the actual shooting moment;
synthesizing the captured virtual picture with the actually shot picture of the camera at the rendering moment to obtain a second synthesized picture, wherein the second synthesized picture is obtained by synthesizing the captured virtual picture with the actually shot picture in the synthesized picture that precedes the captured virtual picture by a fixed delay within the re-projected first synthesized picture.
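By way of illustration only, the following is a minimal sketch of the three steps recited in claim 1 under a deliberately simplified model: the camera motion between the rendering moment and the actual shooting moment is reduced to integer pixel offsets, and the synthesis step is a mask select. The helper names (`reproject`, `second_composite`) and the offset model are hypothetical assumptions for clarity, not the patented implementation.

```python
# Illustrative sketch only: the helpers and the pixel-offset motion model are
# hypothetical stand-ins for the claimed re-projection / recapture / synthesis.
import numpy as np


def reproject(picture: np.ndarray, offset_xy: tuple[int, int]) -> np.ndarray:
    """Shift the picture onto the 'virtual screen' by the camera's pixel offset."""
    dx, dy = offset_xy
    return np.roll(np.roll(picture, dy, axis=0), dx, axis=1)


def second_composite(virtual: np.ndarray,
                     real: np.ndarray,
                     render_offset: tuple[int, int],
                     shoot_offset: tuple[int, int],
                     mask: np.ndarray) -> np.ndarray:
    # Step 1: re-project the virtual layer using the camera pose at rendering time.
    screen = reproject(virtual, render_offset)
    # Step 2: recapture the virtual picture using the camera pose at the actual
    # shooting moment (shift back out of screen space).
    recaptured = reproject(screen, (-shoot_offset[0], -shoot_offset[1]))
    # Step 3: synthesize the recaptured virtual picture with the actually shot
    # picture to obtain the second synthesized picture.
    return np.where(mask[..., None], recaptured, real)


h, w = 720, 1280
virtual = np.full((h, w, 3), 200, dtype=np.uint8)   # stand-in rendered layer
real = np.zeros((h, w, 3), dtype=np.uint8)          # stand-in camera frame
mask = np.zeros((h, w), dtype=bool)
mask[100:300, 100:300] = True                       # where the virtual layer shows
out = second_composite(virtual, real, (8, 0), (5, 0), mask)
```

The point of the shift-then-unshift structure is that the recaptured virtual layer ends up aligned with the camera pose at which the real frame was actually shot, compensating the fixed rendering delay.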
2. The method of claim 1, wherein the re-projecting the first synthesized picture according to the positioning information of the camera at the rendering moment comprises:
obtaining the synthesis result at any rendering moment and the positioning information of the camera at that rendering moment; and re-projecting, by using the positioning information of the camera at the rendering moment, the actually shot picture and the virtual picture corresponding to that rendering moment onto a preset virtual screen.
3. The method according to claim 2, wherein the positioning information of the camera at the rendering moment comprises position information and projection position change information of the camera, and the re-projecting the actually shot picture and the virtual picture corresponding to the rendering moment onto the preset virtual screen by using the positioning information of the camera at the rendering moment comprises:
projecting the actually shot picture and the virtual picture corresponding to the rendering moment onto the preset virtual screen according to the position information and the projection position change information of the camera at the rendering moment.
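As an illustration of this re-projection step, a common way to warp a picture onto a virtual screen when the pose change is (approximately) a pure rotation is the homography H = K·R·K⁻¹ built from the camera intrinsics K and the rotation R. The sketch below uses OpenCV with made-up intrinsics and a made-up rotation; the claim itself does not prescribe this formula.

```python
# A pure-rotation re-projection via the homography H = K @ R @ inv(K).
# K (intrinsics) and R (rotation between poses) are example values, not
# parameters taken from the patent.
import cv2
import numpy as np


def reproject_rotation(picture: np.ndarray, K: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Warp `picture` as it would appear after the camera rotates by R (3x3)."""
    H = K @ R @ np.linalg.inv(K)  # plane-induced homography for pure rotation
    h, w = picture.shape[:2]
    return cv2.warpPerspective(picture, H, (w, h))


# Example: a 2-degree yaw between the rendering pose and the screen pose.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
yaw = np.deg2rad(2.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
warped = reproject_rotation(frame, K, R)
```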
4. The method of claim 1, wherein the re-projected first synthesized picture comprises an actually shot picture and a virtual picture, and the recapturing, from the re-projected first synthesized picture, the virtual picture for synthesis according to the positioning information of the camera at the actual shooting moment comprises:
acquiring positioning information of the camera at the moment when the frame identical to the actually shot picture of the camera at the rendering moment was actually shot in the re-projected first synthesized picture;
and capturing a virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment when that actually shot picture was actually shot.
5. The method according to claim 4, wherein the positioning information of the camera at the rendering moment comprises position information and projection position change information of the camera, and the capturing the virtual picture from the re-projected first synthesized picture by using the positioning information of the camera at the moment of actual shooting comprises:
capturing, from the re-projected first synthesized picture, a virtual picture that conforms to the position information and the projection position change information at the moment when the actually shot picture was actually shot.
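One plausible reading of this capture step is extracting, from the re-projected virtual screen, the window that the camera at the actual shooting pose would see. The sketch below reduces the projection position change information to an integer pixel offset of a fixed-size window, an assumption made purely for clarity.

```python
# Capture-window sketch: the projection position change is modelled as an
# integer pixel offset of a fixed-size window on the re-projected virtual screen.
import numpy as np


def capture_virtual(screen: np.ndarray,
                    change_xy: tuple[int, int],
                    size_hw: tuple[int, int]) -> np.ndarray:
    """Extract the window of the virtual screen seen from the shooting pose."""
    dx, dy = change_xy
    ch, cw = size_hw
    sh, sw = screen.shape[:2]
    # Centre the window, then offset it by the projection position change,
    # clamping so the window stays inside the screen.
    y0 = int(np.clip((sh - ch) // 2 + dy, 0, sh - ch))
    x0 = int(np.clip((sw - cw) // 2 + dx, 0, sw - cw))
    return screen[y0:y0 + ch, x0:x0 + cw]


screen = np.zeros((1080, 1920, 3), dtype=np.uint8)
captured = capture_virtual(screen, (12, -4), (720, 1280))  # 720p capture window
```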
6. The method according to claim 3, wherein the synthesizing the captured virtual picture with the actually shot picture of the camera at the rendering moment to obtain the second synthesized picture comprises:
acquiring the actually shot picture contained in the first synthesized picture at the rendering moment of the camera, and synthesizing the captured virtual picture with the frame in the re-projected first synthesized picture that is identical to that actually shot picture, so as to obtain the re-synthesized second synthesized picture.
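The final synthesis can be illustrated as an ordinary per-pixel alpha blend of the recaptured virtual layer over the matched actually shot frame; the alpha mask (e.g., produced by keying) is an assumed input here, and the claim does not specify the blending operator.

```python
# Per-pixel alpha blend of the recaptured virtual layer over the matched real frame.
import numpy as np


def synthesize(real: np.ndarray, virtual: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """alpha==1 keeps the virtual layer, alpha==0 keeps the actually shot pixel."""
    a = alpha[..., None].astype(np.float32)
    out = a * virtual.astype(np.float32) + (1.0 - a) * real.astype(np.float32)
    return out.astype(np.uint8)


real = np.zeros((720, 1280, 3), dtype=np.uint8)
virtual = np.full((720, 1280, 3), 255, dtype=np.uint8)
alpha = np.zeros((720, 1280), dtype=np.float32)
alpha[200:500, 300:900] = 1.0   # region where the virtual content is visible
second = synthesize(real, virtual, alpha)
```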
7. A picture synthesis apparatus, the apparatus comprising:
a re-projection module, configured to acquire, for a first synthesized picture, positioning information of a camera at the rendering moment of the first synthesized picture, and to re-project the first synthesized picture according to the positioning information of the camera at the rendering moment, wherein the first synthesized picture comprises synthesis results corresponding to different rendering moments, and each synthesis result comprises an actually shot picture and a virtual picture;
a recapture module, configured to acquire positioning information of the camera at the actual shooting moment of the first synthesized picture, and to recapture, from the re-projected first synthesized picture, a virtual picture for synthesis according to the positioning information of the camera at the actual shooting moment;
a picture synthesis module, configured to synthesize the captured virtual picture with the actually shot picture of the camera at the rendering moment to obtain a second synthesized picture, wherein the second synthesized picture is obtained by synthesizing the captured virtual picture with the actually shot picture in the synthesized picture that precedes the captured virtual picture by a fixed delay within the re-projected first synthesized picture.
8. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the steps of the picture synthesis method according to any one of claims 1-6 when executing the program stored in the memory.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the picture synthesis method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210255128.7A CN114640838B (en) | 2022-03-15 | 2022-03-15 | Picture synthesis method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114640838A (en) | 2022-06-17 |
CN114640838B (en) | 2023-08-25 |
Family
ID=81947557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210255128.7A Active CN114640838B (en) | 2022-03-15 | 2022-03-15 | Picture synthesis method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114640838B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11087549B2 (en) * | 2018-10-15 | 2021-08-10 | University Of Maryland, College Park | Methods and apparatuses for dynamic navigable 360 degree environments |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140108128A (en) * | 2013-02-28 | 2014-09-05 | 삼성전자주식회사 | Method and apparatus for providing augmented reality |
CN107223270A (en) * | 2016-12-28 | 2017-09-29 | 深圳前海达闼云端智能科技有限公司 | A kind of data display processing method and device |
WO2019080047A1 (en) * | 2017-10-26 | 2019-05-02 | 腾讯科技(深圳)有限公司 | Augmented reality image implementation method, device, terminal device and storage medium |
JP2020064592A (en) * | 2018-10-16 | 2020-04-23 | 株式会社ソニー・インタラクティブエンタテインメント | Image generator, image generation system, image generation method, and program |
CN110244840A (en) * | 2019-05-24 | 2019-09-17 | 华为技术有限公司 | Image processing method, relevant device and computer storage medium |
KR20210052884A (en) * | 2019-11-01 | 2021-05-11 | 에이엠셀 주식회사 | Personalized Video Production System and Method Using Chroma Key |
WO2021258994A1 (en) * | 2020-06-24 | 2021-12-30 | 腾讯科技(深圳)有限公司 | Method and apparatus for displaying virtual scene, and device and storage medium |
WO2022048373A1 (en) * | 2020-09-01 | 2022-03-10 | 华为技术有限公司 | Image processing method, mobile terminal, and storage medium |
CN112040092A (en) * | 2020-09-08 | 2020-12-04 | 杭州时光坐标影视传媒股份有限公司 | Real-time virtual scene LED shooting system and method |
CN114125301A (en) * | 2021-11-29 | 2022-03-01 | 卡莱特云科技股份有限公司 | Virtual reality technology shooting delay processing method and device |
Non-Patent Citations (1)
Title |
---|
High-Quality Virtual-Real Fusion Based on Automatic Alignment of Environment Maps; Xia Lin; Dong Zilong; Zhang Guofeng; Journal of Computer-Aided Design & Computer Graphics (No. 10); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10600157B2 (en) | Motion blur simulation | |
KR102013978B1 (en) | Method and apparatus for fusion of images | |
CN107945112B (en) | Panoramic image splicing method and device | |
CN110691175B (en) | Video processing method and device for simulating motion tracking of camera in studio | |
WO2023207452A1 (en) | Virtual reality-based video generation method and apparatus, device, and medium | |
CN111766951A (en) | Image display method and apparatus, computer system, and computer-readable storage medium | |
CN109246477B (en) | Panoramic video frame interpolation method and device | |
CN213461894U (en) | XR-augmented reality system | |
CN113596544A (en) | Video generation method and device, electronic equipment and storage medium | |
CN116524157A (en) | Augmented reality synthesis method, device, electronic equipment and storage medium | |
CN114072848A (en) | Rendering backboard | |
CN114640838B (en) | Picture synthesis method and device, electronic equipment and readable storage medium | |
CN108462860A (en) | A kind of method and projection arrangement for realizing projection process | |
KR102561903B1 (en) | AI-based XR content service method using cloud server | |
CN116506563A (en) | Virtual scene rendering method and device, electronic equipment and storage medium | |
CN112312041B (en) | Shooting-based image correction method and device, electronic equipment and storage medium | |
CN114520903A (en) | Rendering display method, device, storage medium and computer program product | |
US20220026849A1 (en) | Holographic imagery for on set eyeline reference | |
WO2024185483A1 (en) | Information processing device, information processing method, and recording medium | |
CN113870165B (en) | Image synthesis method, device, electronic equipment and storage medium | |
CN114461165B (en) | Virtual-real camera picture synchronization method, device and storage medium | |
CN111641779B (en) | Shooting control method and system and computing device | |
CN112995514B (en) | Method and equipment for acquiring photo object distance of industrial camera | |
CN116962651A (en) | Dynamic projection method, device, server and system for XR virtual shooting | |
KR20190114276A (en) | Method for processing composite video and apparatus for the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||