CN109598775B - Dynamic image synthesis method, device, terminal and storage medium - Google Patents
- Publication number
- CN109598775B (application CN201710922942.9A, filed as CN201710922942A)
- Authority
- CN
- China
- Prior art keywords
- image
- animation
- display control
- action
- live
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
All under G — Physics / G06 — Computing; calculating or counting / G06T — Image data processing or generation, in general:
- G06T19/006 — Mixed reality
- G06T13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T11/60 — Editing figures and text; combining figures or text
- G06T13/80 — 2D [Two Dimensional] animation, e.g. using sprites
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2200/24 — Indexing scheme involving graphical user interfaces [GUIs]
- G06T2207/10016 — Video; image sequence
- G06T2207/20221 — Image fusion; image merging
Abstract
The embodiment of the invention discloses a dynamic image synthesis method, apparatus, terminal, and storage medium. In the embodiment of the invention, a live-action image captured by the terminal is displayed, and a skeleton animation of an animation model is displayed on the live-action image; when a shooting instruction is received, the currently displayed target live-action image is intercepted, and the skeleton animation of the animation model is recorded to obtain a recorded skeleton animation; the target live-action image is then image-synthesized with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images, which are combined into a corresponding dynamic image. The scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into the corresponding dynamic image without the user performing a large number of repeated image adding and selecting operations, so the synthesis efficiency of dynamic images can be improved.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for synthesizing a dynamic image.
Background
At present, in order to increase user engagement, some products, such as social applications, include a virtual pet feature and design various actions for the virtual pet; these actions are played while the user uses the product, producing the animation effect of the virtual pet performing various actions.
Such products commonly provide a photographing application through which the user can invoke the terminal camera to take a picture. To increase the diversity and interest of photos, the photographing application provides an image synthesis function. Specifically, the user calls the camera to take a photo through the photographing application; after shooting, the terminal jumps to an image editing page, selects a static image to be added (for example, a static action image of the pet) according to the user's selection operation, and then synthesizes the captured photo with the selected image to produce a new static image.
As user demands grow, users want to combine the captured image with the motion animation of the virtual pet in the product. To do so, the user currently has to repeat the above single-static-image synthesis process through many operations, synthesizing multiple static images one by one before a dynamic image can be obtained.
It can be seen that the current dynamic image synthesis scheme requires a large number of repeated user operations, resulting in low dynamic image synthesis efficiency.
Disclosure of Invention
The embodiment of the invention provides a dynamic image synthesis method, a dynamic image synthesis device, a terminal and a storage medium, which can improve the synthesis efficiency of dynamic images.
The embodiment of the invention provides a dynamic image synthesis method, which comprises the following steps:
displaying a live-action image captured by a terminal, and displaying a skeleton animation of an animation model on the live-action image;
intercepting a currently displayed target live-action image when a shooting instruction is received, and recording skeleton animation of the animation model to obtain at least two frames of recorded skeleton animation;
respectively carrying out image synthesis on the target live-action image and each frame of image of the recorded skeleton animation to obtain at least two synthesized images;
and combining the at least two combined images into a corresponding dynamic image.
Correspondingly, an embodiment of the present invention further provides a dynamic image synthesizing apparatus, including:
the first display unit is used for displaying the real-scene image captured by the terminal and displaying the skeleton animation of the animation model on the real-scene image;
the capturing unit is used for capturing the currently displayed target live-action image when receiving the shooting instruction, and recording the skeleton animation of the animation model to obtain at least two frames of recorded skeleton animation;
the synthesizing unit is used for respectively carrying out image synthesis on the target live-action image and each frame of image of the recorded skeleton animation to obtain at least two synthesized images;
a combining unit for combining the at least two synthesized images into a corresponding moving image.
Correspondingly, the embodiment of the present invention further provides a terminal, which includes a memory and a processor, where the memory stores instructions, and the processor loads the instructions to execute the dynamic image synthesis method provided in any one of the embodiments of the present invention.
Correspondingly, the embodiment of the present invention further provides a storage medium, where the storage medium stores instructions, and the instructions, when executed by a processor, implement the steps of any of the methods provided in the embodiment of the present invention.
In the embodiment of the invention, the live-action image captured by the terminal is displayed, and the skeleton animation of the animation model is displayed on the live-action image; when a shooting instruction is received, the currently displayed target live-action image is intercepted, and the skeleton animation of the animation model is recorded to obtain a recorded skeleton animation; the target live-action image is then image-synthesized with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images, which are combined into a corresponding dynamic image. The scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into the corresponding dynamic image without the user performing a large number of repeated image adding and selecting operations, so the synthesis efficiency of dynamic images can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1a is a schematic view of a scene of a dynamic image synthesis system according to an embodiment of the present invention;
FIG. 1b is a schematic flow chart of a dynamic image synthesizing method according to an embodiment of the present invention;
FIG. 1c is a skeletal diagram of an animated model provided by an embodiment of the present invention;
FIG. 1d is a schematic diagram of an animation model dragging on a camera interface according to an embodiment of the present invention;
FIG. 1e is a diagram illustrating dragging of an animation model on an editing interface according to an embodiment of the present invention;
FIG. 2a is another schematic flow chart of a dynamic image synthesizing method according to an embodiment of the present invention;
FIG. 2b is a message page diagram of a social product provided by an embodiment of the present invention;
FIG. 2c is a schematic diagram of a home page of a pet according to an embodiment of the present invention;
FIG. 2d is a schematic view of a camera interface provided by an embodiment of the present invention;
FIG. 2e is a diagram of an image editing interface provided by an embodiment of the invention;
FIG. 3a is a schematic diagram of a first structure of a dynamic image synthesizing apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a second structure of a dynamic image synthesizing apparatus according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of a third structure of a dynamic image synthesizing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the present invention provides a dynamic image synthesizing system, which includes any of the dynamic image synthesizing apparatuses provided in the embodiments of the present invention, and the dynamic image synthesizing apparatus may be integrated in a terminal, and the terminal may be a mobile phone, a tablet computer, and other devices.
Referring to fig. 1a, an embodiment of the present invention provides a dynamic image synthesis system, including: the terminal is connected with the server through a network. Specifically, the dynamic image synthesis process is as follows:
the terminal displays the live-action image captured by the terminal and displays the skeleton animation of the animation model on the live-action image; intercepting a currently displayed target live-action image when a shooting instruction is received, and recording skeleton animation of the animation model to obtain recorded skeleton animation; respectively carrying out image synthesis on the target live-action image and each frame of image of the recorded skeleton animation to obtain a plurality of synthesized images; combining the plurality of combined images into a corresponding dynamic image; the terminal sends the dynamic image to the server or stores the dynamic image locally.
The details will be described below separately.
First embodiment
The embodiment will be described from the perspective of a dynamic image synthesizing apparatus, which is embodied in a terminal, and the terminal may be a mobile phone, a tablet computer, a notebook computer, or the like.
A dynamic image synthesis method comprises the steps described below.
As shown in fig. 1b, the specific flow of the dynamic image synthesizing method may be as follows:
101. and displaying the live-action image captured by the terminal, and displaying the skeleton animation of the animation model on the live-action image.
The live-action image may be an image captured by a camera of the terminal, that is, the live-action image may be captured by the camera of the terminal and the captured live-action image may be displayed in this embodiment.
The animation model is the subject of the animation, which can be set according to actual requirements; for example, it can be a pet such as a cat or a dog, a person, and the like.
The skeleton animation of the animated model may be an animation generated by controlling the position, rotation direction, and enlargement and reduction of the skeleton of the animated model. For example, animation can be generated by controlling the head, left hand, right hand, body, left foot, right foot, etc. of the skeleton of the animated model to move accordingly.
In this embodiment, the skeleton animation may be a Spine skeleton animation, or a skeleton animation based on another framework or runtime library.
In practical applications, the user can select a desired skeleton animation. For example, the terminal can display action icons for various skeleton actions of the animation model, where each skeleton action corresponds to one skeleton animation, and the user can select the skeleton animation to be displayed through the action icon. That is, the step of displaying the skeleton animation of the animation model on the live-action image may include:
receiving an animation display instruction triggered by a user through a bone action icon, wherein the animation display instruction indicates that a bone animation corresponding to the bone action icon is displayed;
and displaying the bone animation corresponding to the animation model on the live-action image according to the animation display instruction.
For example, referring to fig. 1c, a plurality of bone action icons are displayed at the bottom of the camera interface, and a user may select a corresponding bone animation by clicking a certain bone action icon or moving the bone action icon into a selection box (a circle-shaped photographing button in the figure), at this time, the terminal triggers generation of an animation display instruction, so that the dynamic image synthesis apparatus of this embodiment receives the instruction, and displays the bone animation corresponding to the animation model on the live-action image according to the instruction.
Optionally, in order to improve the diversity of the synthesized dynamic images, this embodiment may further set the display position of the animation model on the live-action image according to user requirements; for example, the animation model may be moved to a specified position on the live-action image, and a skeleton animation of the animation model may then be displayed at that position. Specifically, the method of this embodiment may further include: receiving a moving instruction for the animation model, where the instruction indicates the target position on the live-action image to which the animation model is to be moved, and moving the animation model to the target position according to the moving instruction. A skeleton animation of the animation model can then be displayed at the target position on the live-action image.
For example, referring to FIG. 1d, after the live-action image is displayed on the camera interface, the animated model (i.e., the pet) may be moved to the customized location, and then the user may select to play the corresponding skeletal animation of the animated model at that location by triggering the skeletal action icon, referring to FIG. 1c.
In order to improve the display effect of the live-action image and the skeleton animation and prevent their displays from interfering with each other, this embodiment may adopt two image display components to display the live-action image and the skeleton animation separately. That is, the step of "displaying the live-action image captured by the terminal and displaying the skeleton animation of the animated model on the live-action image" may include:
displaying the live-action image captured by the terminal through a first image display component of the terminal system;
and displaying the skeleton animation of the animation model through a second image display component of the terminal system, wherein the second image display component is superposed on the first image display component.
The first image display assembly and the second image display assembly can be the same in size and can be completely overlapped. In order to better display the skeleton animation, the background of the second image display component can be set to be transparent, so that the skeleton animation can be better displayed on the live-action image.
For example, when the terminal system is an Android system, the first image display component may be a TextureView and the second image display component may be a GLSurfaceView. A full-screen TextureView can be placed over the whole Activity to display the camera image, i.e., the live-action image; a GLSurfaceView of the same size is placed on top of the TextureView so that the two completely overlap, and is used for rendering the animation (for example, a Spine skeleton animation). The skeleton animation is displayed on the GLSurfaceView, whose background is set to be transparent so that the skeleton animation appears on top of the live-action image.
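The layering just described — an opaque camera layer beneath a transparent animation layer — amounts to painting whichever animation pixels are non-transparent over the live-action pixels. The following Python sketch models that behavior with pixels as (R, G, B, A) tuples; it is purely illustrative and is not the Android rendering pipeline.

```python
def over(top, bottom):
    """Paint a transparent-background top layer over an opaque bottom layer."""
    return [
        [t if t[3] > 0 else b for t, b in zip(trow, brow)]
        for trow, brow in zip(top, bottom)
    ]

# A 2x2 live-action layer (TextureView analogue) ...
camera = [[(10, 20, 30, 255), (10, 20, 30, 255)],
          [(10, 20, 30, 255), (10, 20, 30, 255)]]
# ... and an animation layer (GLSurfaceView analogue), transparent
# everywhere except one pixel of the skeleton animation.
anim = [[(0, 0, 0, 0), (255, 0, 0, 255)],
        [(0, 0, 0, 0), (0, 0, 0, 0)]]

frame = over(anim, camera)
```

Wherever the animation layer is transparent, the live-action pixel shows through, which is why the GLSurfaceView background must be set transparent.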
102. And when a shooting instruction is received, intercepting the currently displayed target live-action image, and recording the skeleton animation of the animation model to obtain at least two frames of recorded skeleton animation.
The shooting instruction can be triggered in various ways, such as by user operation, voice control, and the like. For example, referring to fig. 1c, when the user clicks the photo button on the camera interface, a shooting command is triggered, and the moving image synthesizing apparatus of the present embodiment receives the shooting command.
Optionally, the present embodiment may intercept the image by a display component displaying the image, for example, the step of "intercepting the currently displayed target live-action image and recording the skeleton animation of the animation model" may include:
and intercepting the currently displayed target live-action image through the first image display component, and recording the skeleton animation of the animation model through the second image display component.
For example, when a shooting instruction is received, the TextureView intercepts the currently displayed live-action image and writes it into memory, while the GLSurfaceView records the currently displayed skeleton animation.
Specifically, the second image display component may intercept the currently displayed skeleton animation at regular intervals to implement recording the animation. That is, the step of "recording the skeletal animation of the animated model through the second image display assembly" may include: and intercepting the bone animation image of the currently displayed animation model through a second image display component according to a preset time interval.
The preset time interval may be set according to actual requirements, for example 1 ms or 3 ms.
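Recording the animation by intercepting it at a preset interval means the snapshot timestamps are simply a regular sampling of the animation's duration. A minimal sketch of that sampling schedule (hypothetical helper, not the GLSurfaceView API):

```python
def capture_times(duration_ms, interval_ms):
    """Timestamps (in ms) at which the displayed skeleton animation
    is intercepted, given a preset capture interval."""
    return list(range(0, duration_ms, interval_ms))

# A 10 ms animation sampled every 3 ms yields four recorded frames.
frames = capture_times(10, 3)
```

A shorter interval yields more recorded frames and therefore a smoother synthesized dynamic image, at the cost of more compositing work later.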
Optionally, in order to facilitate the user to synthesize a dynamic image, the embodiment may display the captured live-action image and the recorded animation to the corresponding interface; in order to improve the display effect and quality of the live-action image and the recorded animation on the interface, the embodiment may display the image through the parent-child display control. That is, after recording the skeleton animation and before synthesizing the images, the synthesizing method of the embodiment may further include:
setting the target live-action image into a parent image display control, and setting the target frame image of the recorded skeleton animation into a child image display control of the parent image display control;
and displaying the target live-action image and the target frame image on corresponding interfaces through the parent image display control and the child image display control.
The interface may be set according to actual requirements, for example, the real-scene image and the frame image of the animation may be displayed on the image composition editing interface. In addition, the target frame image of the skeleton animation can be selected according to actual requirements, for example, the first frame image of the skeleton animation is selected as the target frame image, and the like.
In this embodiment, the parent-child image display control may be a parent-child ImageView in the android system.
Optionally, in order to restore the positional relationship between the bone animation and the target live-action image during image display and improve the accuracy and quality of animation image synthesis, this embodiment may record an animation offset position of the bone animation relative to the live-action image when a shooting instruction is received (for example, a user presses a shutter button on a camera interface to trigger a shooting instruction), and may adjust or set the relative position between the parent-child image display controls based on the animation offset position when a subsequent image is displayed on the interface. That is, the method of this embodiment may further include: and recording the animation offset position of the skeleton animation of the current animation model relative to the live-action image when receiving the shooting instruction.
At this time, the step of displaying the target live-action image and the target frame image on the corresponding interface through the parent image display control and the child image display control may include:
setting the offset position of the child image display control relative to the parent image display control according to the animation offset position;
and displaying the target live-action image and the target frame image on a corresponding interface according to the set parent image display control and the set child image display control.
The animation offset position may be the offset of the skeleton animation relative to, for example, the top-left or top-right corner of the live-action image; the reference point for the skeleton animation's offset on the live-action image may be set according to actual requirements, such as the center, top, or bottom of the live-action image.
Optionally, in order to make the display of the live-action image and the frame image fit the size of the terminal screen, after setting the offset position between the parent and child image display controls, the method of this embodiment may further scale the parent and child image display controls as a whole to adapt to the terminal screen, for example according to the size parameters (height and width) of the screen.
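The patent only states that the controls are "scaled as a whole"; under the natural assumption of a uniform, aspect-preserving scale, the child control's offset within the parent scales by the same factor. A hypothetical sketch of that computation:

```python
def fit_to_screen(parent_w, parent_h, child_offset, screen_w, screen_h):
    """Uniformly scale the parent image display control (and the child's
    offset inside it) to fit the screen while preserving aspect ratio.
    Assumed helper; not an Android API."""
    scale = min(screen_w / parent_w, screen_h / parent_h)
    ox, oy = child_offset
    return scale, (ox * scale, oy * scale)

# A 1000x2000 parent with the child offset at (100, 400),
# fitted onto a 500x1000 screen.
scale, offset = fit_to_screen(1000, 2000, (100, 400), 500, 1000)
```

Scaling the offset with the same factor is what keeps the animation model anchored to the same point of the live-action image after the layout shrinks or grows.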
103. And respectively carrying out image synthesis on the target live-action image and each frame of image of the recorded skeleton animation to obtain at least two synthesized images.
For example, the target live-action image may be used as a background image, and then each frame of image of the recorded skeleton animation is image-synthesized with the background image. For example, when 5 frames of the recorded skeleton animation are provided, 5 frame images may be respectively synthesized with the target live-action image.
Optionally, in order to enable the relative position between the skeleton animation and the live-action image in the synthesized dynamic image to be the same as that before the synthesis, and improve the accuracy of the dynamic image synthesis, in the embodiment, each frame image of the recorded skeleton animation may be image-synthesized with the background image according to the offset position of the current child image display control relative to the parent image display control (i.e., the offset position of the target frame image relative to the target live-action image).
In practical applications, the current offset position between the parent and child image display controls may be the same as, or different from, the offset position originally set according to the animation offset. For example, the user may change the offset position by dragging the animation model on the interface, in which case the current offset no longer matches the one set from the animation offset.
For example, after the target live-action image and the target frame image are displayed on the corresponding interface, a movement request input by a user is received, the movement request indicates a target position to which the sub-image display control needs to be moved, and the sub-image display control is moved to the target position according to the movement request. Referring to fig. 1e, a user may drag the animation model to a specified position on the editing interface, and the terminal moves the sub-image display control to the corresponding specified position in response to the user dragging.
At this time, the offset positions between parent and child image display controls are changed, and image synthesis can be performed based on the changed offset positions at the time of subsequent synthesis.
Optionally, in order to increase the diversity of the dynamic images and improve the user experience, an information display frame may be added to the interface in this embodiment, and specifically, the information display frame may be added to a corresponding position based on the size of the skeleton animation and the offset position of the animation; that is, the step of performing image synthesis on the target live-action image and each frame of image of the recorded skeleton animation respectively includes:
adding an information display frame at a corresponding position of the interface according to the animation offset position and the size of the skeleton animation;
displaying corresponding characters in the information display frame;
image interception is carried out on the information display frame to obtain an information display frame image;
and respectively carrying out image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation.
The size of the bone animation comprises size parameters such as width and height of the bone animation, for example, an information display frame can be added at a corresponding position of the editing interface according to the animation offset position and the width and height of the bone animation.
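The patent says the information display frame is placed "according to the animation offset position and the width and height of the skeleton animation" but does not state the exact rule. Assuming the common convention of a speech bubble centered above the model, the placement might be sketched as follows (the centering rule is an assumption, not the patent's specification):

```python
def bubble_position(anim_x, anim_y, anim_w, anim_h, bubble_w, bubble_h):
    """Place an information display frame (bubble) directly above the
    animation model, horizontally centered on it. anim_x/anim_y are the
    animation offset position; anim_w/anim_h its size."""
    x = anim_x + (anim_w - bubble_w) // 2  # center horizontally on the model
    y = anim_y - bubble_h                  # sit flush above the model's top edge
    return x, y

# A 60x40 bubble over a 100x150 model whose offset is (200, 300).
pos = bubble_position(200, 300, 100, 150, 60, 40)
```

Whatever rule is chosen, deriving the bubble position from the animation offset keeps the bubble attached to the model even after the user drags it.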
The information display frame may be in various forms, for example, a bubble or the like.
After the information display frame is added, the information input by the user can be acquired and displayed in the information display frame. The information displayed by the information display frame may include text information, picture information, and the like.
After the information is displayed in the information display frame, the method of the embodiment can independently perform image interception on the information display frame to obtain a corresponding information display frame image such as a bubble character image, and then perform image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation respectively.
The synthesis order of the information display frame image, the target live-action image, and the frame images of the skeleton animation can be set according to actual requirements. For example, to improve the synthesis quality of the dynamic image, the order in this embodiment may be live-action image -> skeleton animation -> information display frame image (e.g., live-action photo -> skeleton animation -> bubble text picture): the live-action image is first synthesized with the frame image of the skeleton animation, and the information display frame image is then synthesized onto the result. Specifically, the step of image-synthesizing the target live-action image and the information display frame image with each frame image of the recorded skeleton animation may include:
determining a frame image to be synthesized currently in the recorded skeleton animation;
taking the target live-action image as a background image;
synthesizing the background image and the frame image currently to be synthesized according to the offset position of the current child image display control relative to the parent image display control to obtain a synthesized skeleton animation image;
carrying out image synthesis on the information display frame image and the synthesized skeleton animation image;
determining whether the recorded skeleton animation has a frame image to be synthesized; if yes, returning to the step of determining the frame image to be synthesized currently in the recorded skeleton animation.
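The per-frame synthesis loop above can be sketched as follows. This is a minimal illustration, not the patented implementation: it uses desktop Java's `BufferedImage` and `Graphics2D` as stand-ins for Android's `Bitmap` and `Canvas`, and the class and parameter names are hypothetical. The draw order follows the live-action image -> skeleton animation -> information display frame sequence described above.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class FrameCompositor {
    /**
     * Composites each recorded animation frame onto the live-action background
     * at (offsetX, offsetY), then draws the info-frame (bubble) image on top.
     */
    public static List<BufferedImage> composite(BufferedImage background,
                                                List<BufferedImage> animationFrames,
                                                BufferedImage infoFrame,
                                                int offsetX, int offsetY,
                                                int infoX, int infoY) {
        List<BufferedImage> result = new ArrayList<>();
        for (BufferedImage frame : animationFrames) {
            // Copy the background so each synthesized image is independent.
            BufferedImage out = new BufferedImage(background.getWidth(),
                    background.getHeight(), BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = out.createGraphics();
            g.drawImage(background, 0, 0, null);          // live-action image first
            g.drawImage(frame, offsetX, offsetY, null);   // then the skeleton-animation frame
            g.drawImage(infoFrame, infoX, infoY, null);   // info display frame last
            g.dispose();
            result.add(out);
        }
        return result;
    }
}
```

On Android, the same loop would draw onto a mutable `Bitmap` via a `Canvas`, but the layering logic is identical.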
104. And combining the at least two combined images into a corresponding dynamic image.
After image synthesis, this embodiment obtains a set of continuous synthesized images, and the set of continuous synthesized images can then be combined into a dynamic image.
For example, the combination may be performed according to the capturing time of the skeleton animation frame corresponding to the synthesized image.
The dynamic image of the present embodiment may use the target live-action image as a background, and frame images of the skeleton animation are continuously displayed on the background.
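Ordering the synthesized images by the capture time of their underlying skeleton animation frames, as described above, can be sketched as follows. The class names are hypothetical, and the actual GIF or video encoding step is omitted.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DynamicImageBuilder {
    /** A synthesized frame tagged with the capture time of its animation frame. */
    public static class TimedFrame {
        public final long captureTimeMs;
        public final Object image;   // e.g. a Bitmap on Android
        public TimedFrame(long captureTimeMs, Object image) {
            this.captureTimeMs = captureTimeMs;
            this.image = image;
        }
    }

    /** Orders synthesized frames by capture time before encoding them as a dynamic image. */
    public static List<Object> orderForEncoding(List<TimedFrame> frames) {
        List<TimedFrame> sorted = new ArrayList<>(frames);
        sorted.sort(Comparator.comparingLong(f -> f.captureTimeMs));
        List<Object> images = new ArrayList<>();
        for (TimedFrame f : sorted) images.add(f.image);
        return images;
    }
}
```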
After obtaining the dynamic image, this embodiment can also store the dynamic image locally, or send the dynamic image to a server, which forwards the dynamic image to other user terminals.
As can be seen from the above, the embodiment of the present invention displays the live-action image captured by the terminal, and displays the skeleton animation of the animation model on the live-action image; intercepts the currently displayed target live-action image when a shooting instruction is received, and records the skeleton animation of the animation model to obtain the recorded skeleton animation; synthesizes the target live-action image with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images; and combines the plurality of synthesized images into a corresponding dynamic image. This scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into a corresponding dynamic image, without requiring the user to perform a large number of repeated image adding and selecting operations, so the synthesis efficiency of dynamic images can be improved.
In addition, the scheme of the embodiment of the invention provides a dynamic image synthesis technology combining animation and photographing, which is also a new photographing mode. It makes the user's photographing experience more vivid and rich, makes the pet camera function in the product more attractive to users, drives user activity and the number of photos uploaded, and lets users capture distinctive dynamic image effects.
Example II,
The method according to the first embodiment will be described in further detail below.
An embodiment of the present invention provides a dynamic image synthesis system, which includes a terminal and a server, and referring to fig. 1a, the terminal and the server are connected through a network.
The dynamic image synthesis method of the present invention will be further described below based on the dynamic image synthesis system shown above.
As shown in fig. 2a, a specific flow of a dynamic image synthesizing method may be as follows:
201. The terminal displays the captured live-action image through a first image display component of the system.
The live-action image can be an image captured by a camera of the terminal; that is, the live-action image can be captured by the camera on the terminal, and the captured live-action image is displayed by the first image display component.
The first image display component may be a TextureView in the Android system.
Referring to fig. 2b, when the user clicks the pet, a corresponding menu pops up; when the user clicks the pet home page icon, a pet home page interface is displayed, referring to fig. 2c. Then, when the user clicks the camera icon on the pet home page, the terminal enters a camera shooting interface, referring to fig. 2d. The terminal displays the live-action image captured by the camera on the camera shooting interface through the first image display component, and displays the animation model on the camera interface through the second image display component.
202. The terminal displays the skeleton animation of the animation model on the live-action image through a second image display component of the system.
The animation model is an object body of the animation, and the object body can be set according to actual requirements, for example, the object body can be a pet such as a cat and a dog, a person, and the like.
The skeleton animation of the animation model may be an animation generated by controlling the position, rotation, and scaling of the bones of the animation model. For example, the animation can be generated by controlling the head, left hand, right hand, body, left foot, right foot, and so on of the skeleton of the animation model to move accordingly.
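As a rough illustration of how controlling a bone's position, rotation, and scale produces motion, the following sketch maps a point defined in a bone's local space into world space. This is generic 2D skeletal math, not code from the patent or from the Spine runtime, and all names are hypothetical.

```java
public class BoneTransform {
    /**
     * Applies scale, then rotation (degrees), then translation to a point
     * given in the bone's local coordinate space; animating these three
     * parameters per frame is what moves the attached image.
     */
    public static double[] apply(double localX, double localY,
                                 double scale, double rotationDeg,
                                 double boneX, double boneY) {
        double r = Math.toRadians(rotationDeg);
        double sx = localX * scale, sy = localY * scale;
        double worldX = sx * Math.cos(r) - sy * Math.sin(r) + boneX;
        double worldY = sx * Math.sin(r) + sy * Math.cos(r) + boneY;
        return new double[] { worldX, worldY };
    }
}
```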
In this embodiment, the skeleton animation may be Spine skeleton animation, or may be skeleton animation based on other frameworks or runtime libraries.
The second image display component may be a GLSurfaceView in the Android system. In this embodiment, a full-screen TextureView can be placed over the whole Activity to display the camera image, i.e., the live-action image; meanwhile, a GLSurfaceView of the same size is placed on the TextureView and completely overlaps it. The GLSurfaceView is used for rendering the animation; Spine skeleton animation can be displayed on the GLSurfaceView, and its background is set to be transparent so that the skeleton animation is displayed over the live-action image.
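A layout of the kind described, with a full-screen TextureView under a same-size GLSurfaceView, might look like the following sketch. The view IDs are hypothetical; in code, the GLSurfaceView would additionally need a translucent surface (e.g. via `setZOrderOnTop(true)` and an RGBA `setEGLConfigChooser`) so the camera preview shows through.

```xml
<!-- Hypothetical layout sketch: camera preview below, transparent
     animation surface of the same size stacked on top of it. -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextureView
        android:id="@+id/camera_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <android.opengl.GLSurfaceView
        android:id="@+id/skeleton_animation"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</FrameLayout>
```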
In practical application, the terminal can receive an animation display instruction triggered by a user through a bone action icon, where the animation display instruction instructs display of the skeleton animation corresponding to the bone action icon, and the terminal displays the corresponding skeleton animation of the animation model on the live-action image according to the animation display instruction. Referring to fig. 1c, the user may drag the animation model to a designated position on the camera interface and select a corresponding skeleton animation by clicking a bone action icon.
203. When a shooting instruction is received, the terminal intercepts a currently displayed target live-action image through the first image display assembly, records the skeleton animation of the animation model through the second image display assembly to obtain the recorded skeleton animation, and records the animation offset position of the skeleton animation of the current animation model relative to the live-action image when the shooting instruction is received.
The shooting instruction can be triggered in various ways, for example by user operation or voice control. For example, referring to fig. 1c, when the user clicks the photo button on the camera interface, a shooting instruction is triggered, and the dynamic image synthesizing apparatus of this embodiment receives the shooting instruction.
For example, when a shooting instruction is received, the TextureView intercepts the currently displayed live-action image and writes it into memory, and the GLSurfaceView intercepts a skeleton animation image of the currently displayed animation model at a preset time interval to record the skeleton animation.
The preset time interval may be set according to actual requirements, and may be, for example, 0.11ms, 2ms, or the like.
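The recording step can be sketched, at its simplest, as collecting one frame per interval over the recording duration. This illustrative helper ignores real-time scheduling (on Android the captures would be driven by a timer or the render loop), and the names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class AnimationRecorder {
    /**
     * Captures one frame per interval over the given duration (both in ms).
     * captureFrame stands in for, e.g., a GLSurfaceView pixel read-back.
     */
    public static <T> List<T> record(Supplier<T> captureFrame,
                                     long intervalMs, long durationMs) {
        List<T> frames = new ArrayList<>();
        for (long t = 0; t < durationMs; t += intervalMs) {
            frames.add(captureFrame.get());
        }
        return frames;
    }
}
```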
204. And the terminal sets the target live-action image into a parent image display control and sets the target frame image of the recorded skeleton animation into a child image display control of the parent image display control.
In this embodiment, the parent-child image display control may be a parent-child ImageView in the android system.
205. And the terminal sets the offset position of the sub-image display control relative to the parent image display control according to the animation offset position, and displays a target live-action image and a target frame image on the image editing interface according to the set parent image display control and the sub-image display control.
In this implementation, the animation frame picture can be set into the child ImageView, and the picture can be restored to its offset position relative to the parent ImageView according to the previously recorded animation offset value.
206. And adding an information display frame at a corresponding position of the image editing interface by the terminal according to the animation offset position and the size of the skeleton animation.
The size of the skeleton animation includes size parameters such as its width and height; for example, an information display frame can be added at the corresponding position of the editing interface according to the animation offset position and the width and height of the skeleton animation. The information display frame may take various forms, for example, a bubble, a balloon, or the like.
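Deriving the information display frame's position from the animation offset and size can be sketched as follows, assuming the bubble is centered horizontally above the animation. The placement rule and all names are hypothetical, since the patent only states that the position is computed from the offset position and the width and height.

```java
import java.awt.Rectangle;

public class InfoFrameLayout {
    /**
     * Places an information display frame (e.g. a text bubble) directly
     * above the animation, horizontally centered over it. Pixel units.
     */
    public static Rectangle placeAbove(int animX, int animY,
                                       int animWidth, int animHeight,
                                       int bubbleWidth, int bubbleHeight) {
        int x = animX + (animWidth - bubbleWidth) / 2;  // center horizontally
        int y = animY - bubbleHeight;                   // sit on top of the animation
        return new Rectangle(x, y, bubbleWidth, bubbleHeight);
    }
}
```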
207. And the terminal displays corresponding information in the information display frame and intercepts the image of the information display frame to obtain an image of the information display frame.
The information displayed by the information display frame may include text information, picture information, and the like.
Referring to fig. 2e, a text bubble is added at a corresponding position on the image editing interface, and text information and the like input by the user may be displayed within the text bubble.
208. And the terminal carries out image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation respectively to obtain a synthesized image group.
Wherein the set of synthesized images includes a plurality of synthesized images, such as a set of consecutive synthesized images.
The synthesis order of the information display frame image, the target live-action image, and the frame images of the skeleton animation can be set according to actual requirements. For example, to improve the synthesis quality of the dynamic image, the order of image synthesis in this embodiment may be live-action image -> skeleton animation -> information display frame image, for example, live-action photo -> skeleton animation -> bubble-text picture.
For example, the image synthesis process of the terminal may include:
determining a frame image to be synthesized currently in the recorded skeleton animation;
taking the target live-action image as a background image;
synthesizing the background image and the frame image currently to be synthesized according to the offset position of the current child image display control relative to the parent image display control to obtain a synthesized skeleton animation image;
carrying out image synthesis on the information display frame image and the synthesized skeleton animation image;
determining whether the recorded skeleton animation has a frame image to be synthesized; if yes, returning to the step of determining the frame image to be synthesized currently in the recorded skeleton animation; if not, go to step 209.
For example, when the user clicks the "complete" button in fig. 2e, the terminal performs image synthesis on the target live-action image and the information display frame image, and each frame image of the recorded skeleton animation respectively, to obtain a synthesized image group.
209. And the terminal combines the images in the combined image group into a corresponding dynamic image.
For example, the combination may be performed according to the capturing time of the skeleton animation frame corresponding to the synthesized image.
The dynamic image of the present embodiment may use the target live-action image as a background, and frame images of the skeleton animation are continuously displayed on the background.
210. The terminal stores the dynamic image locally or uploads the dynamic image to the server.
For example, the terminal sends a moving image to the server, and the server forwards the moving image to other terminals.
As can be seen from the above, the embodiment of the present invention displays the live-action image captured by the terminal, and displays the skeleton animation of the animation model on the live-action image; when a shooting instruction is received, intercepts the currently displayed target live-action image, and records the skeleton animation of the animation model to obtain the recorded skeleton animation; synthesizes the target live-action image with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images; and combines the plurality of synthesized images into a corresponding dynamic image. This scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into a corresponding dynamic image, without requiring the user to perform a large number of repeated image adding and selecting operations, so the synthesis efficiency of dynamic images can be improved.
In addition, the scheme of the embodiment of the invention provides a dynamic image synthesis technology combining animation and photographing, which is also a new photographing mode. It makes the user's photographing experience more vivid and rich, makes the pet camera function in the product more attractive to users, drives user activity and the number of photos uploaded, and lets users capture distinctive dynamic image effects.
Example III,
In order to better implement the above method, an embodiment of the present invention further provides a dynamic image synthesizing apparatus, as shown in fig. 3a, which may include: a first display unit 301, an intercepting unit 302, a synthesizing unit 303, and a combining unit 304, as follows:
(1) A first display unit 301;
the first display unit 301 is configured to display a live-action image captured by the terminal, and display a skeleton animation of the animation model on the live-action image.
The live-action image may be an image captured by a camera of the terminal, that is, the embodiment may capture the live-action image by the camera on the terminal and display the captured live-action image.
The animation model is an object subject of the animation, and the object subject can be set according to actual requirements, for example, the object subject can be a pet such as a cat or a dog, a person, and the like.
In this embodiment, the skeleton animation may be Spine skeleton animation, or may be skeleton animation based on other frameworks or runtime libraries.
The first display unit 301 may be configured to: display the live-action image captured by the terminal through a first image display component of the terminal system; and display the skeleton animation of the animation model through a second image display component of the terminal system, where the second image display component is superimposed on the first image display component. For example, when the terminal system is an Android system, the first image display component may be a TextureView, and the second image display component may be a GLSurfaceView. Thus, a full-screen TextureView can be placed over the whole Activity to display the camera image, i.e., the live-action image; meanwhile, a GLSurfaceView of the same size is placed on the TextureView and completely overlaps it. The GLSurfaceView is used for rendering the animation; Spine skeleton animation can be displayed on the GLSurfaceView, and its background is set to be transparent so that the skeleton animation is displayed over the live-action image.
(2) An intercepting unit 302;
the intercepting unit 302 is configured to intercept the currently displayed target live-action image when receiving the shooting instruction, and record a bone animation of the animation model to obtain the recorded bone animation.
For example, the intercepting unit 302 may be configured to intercept the currently displayed target live-action image through the first image display component, and record the skeleton animation of the animation model through the second image display component.
The intercepting unit 302 may be configured to intercept the skeleton animation image of the currently displayed animation model through the second image display component at a preset time interval.
For example, when a shooting instruction is received, the TextureView intercepts the currently displayed live-action image and writes it into memory, and the GLSurfaceView records the currently displayed skeleton animation.
The preset time interval may be set according to actual requirements, and may be, for example, 1ms, 3ms, or the like.
(3) A synthesizing unit 303;
a synthesizing unit 303, configured to perform image synthesis on the target live-action image and each frame of image of the recorded skeleton animation, respectively, to obtain a plurality of synthesized images.
For example, the synthesizing unit 303 may be configured to use the target live-action image as a background image; and respectively carrying out image synthesis on each frame of image of the recorded skeletal animation and the background image according to the offset position of the current child image display control relative to the parent image display control.
For another example, the synthesizing unit 303 may be specifically configured to:
adding an information display frame at a corresponding position of the interface according to the animation offset position and the size of the skeleton animation;
displaying corresponding information in the information display frame;
image interception is carried out on the information display frame to obtain an information display frame image;
and respectively carrying out image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation.
The synthesizing unit 303 may be specifically configured to:
determining a frame image to be synthesized currently in the recorded skeleton animation;
taking the target live-action image as a background image;
synthesizing the background image and the frame image currently to be synthesized according to the offset position of the current child image display control relative to the parent image display control to obtain a synthesized skeleton animation image;
synthesizing the information display frame image and the synthesized bone animation image;
determining whether the recorded skeleton animation has a frame image to be synthesized; if yes, returning to the step of determining the frame image to be synthesized currently in the recorded skeleton animation.
(4) A combining unit 304;
a combining unit 304, configured to combine the plurality of combined images into a corresponding dynamic image.
For example, the combining unit 304 may combine the images according to the capturing time of the skeleton animation frames corresponding to the synthesized images.
Optionally, in order to facilitate the user to synthesize a dynamic image, the embodiment may display the captured live-action image and the recorded animation to the corresponding interface; in order to improve the display effect and quality of the live-action image and the recorded animation on the interface, the embodiment may display the image through the parent-child display control. Referring to fig. 3b, the moving image synthesizing apparatus of the present embodiment may further include: a second display unit 305;
the second display unit 305 is configured to, after the intercepting unit 302 records the animation and before the synthesizing unit 303 synthesizes the images, set the target live-action image into a parent image display control and set the target frame image of the recorded skeleton animation into a child image display control of the parent image display control;
and displaying the target live-action image and the target frame image on corresponding interfaces through the parent image display control and the child image display control.
Optionally, in order to restore the position relationship between the skeleton animation and the target live-action image when the images are displayed, and improve the accuracy and quality of the animation image synthesis, the embodiment may record the animation offset position of the skeleton animation relative to the live-action image when a shooting instruction is received (for example, a user presses a shutter button on a camera interface to trigger the shooting instruction), and may adjust or set the relative position between the parent-child image display controls based on the animation offset position when the subsequent images are displayed on the interface. Referring to fig. 3c, the moving image synthesizing apparatus of the present embodiment may further include: an offset recording unit 306;
an offset recording unit 306, configured to record an animation offset position of the skeletal animation of the current animation model relative to the live-action image when the shooting instruction is received;
at this time, the second display unit 305 may be configured to:
setting the offset position of the child image display control relative to the parent image display control according to the animation offset position;
and displaying the target live-action image and the target frame image on a corresponding interface according to the set parent image display control and the set child image display control.
The animation offset position may be the offset position of the skeleton animation relative to the upper-left or upper-right corner of the live-action image, and the offset reference of the skeleton animation on the live-action image can be set according to actual requirements, for example taking the center point, top, or bottom of the live-action image as the reference.
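Switching the offset reference, for example from the upper-left corner to the center point of the live-action image, is a simple coordinate translation. The following sketch uses hypothetical names and is only meant to illustrate the point above.

```java
public class OffsetReference {
    /**
     * Converts an animation offset measured from the live-action image's
     * upper-left corner into an offset measured from its center point.
     */
    public static int[] topLeftToCenter(int offsetX, int offsetY,
                                        int imageWidth, int imageHeight) {
        return new int[] { offsetX - imageWidth / 2, offsetY - imageHeight / 2 };
    }
}
```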
In specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily, and implemented as the same or several entities, and specific implementations of the above units may refer to the foregoing method embodiment, which is not described herein again.
The dynamic image synthesizing apparatus may specifically be integrated with a terminal, for example, integrated in the terminal in the form of a client, where the terminal may be a mobile phone, a tablet computer, or other devices.
As can be seen from the above, in the embodiment of the present invention, the first display unit 301 displays the live-action image captured by the terminal, and displays the skeleton animation of the animation model on the live-action image; when a shooting instruction is received, the intercepting unit 302 intercepts the currently displayed target live-action image and records the skeleton animation of the animation model to obtain the recorded skeleton animation; the synthesizing unit 303 synthesizes the target live-action image with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images; and the combining unit 304 combines the plurality of synthesized images into a corresponding dynamic image. This scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into a corresponding dynamic image, without requiring the user to perform a large number of repeated image adding and selecting operations, so the synthesis efficiency of dynamic images can be improved.
In addition, the scheme of the embodiment of the invention provides a dynamic image synthesis technology combining animation and photographing, which is a new photographing mode. It makes the user's photographing experience more vivid and rich, makes the pet camera function in the product more attractive to users, drives user activity and the number of photos uploaded, and lets users capture distinctive dynamic image effects.
Example IV,
In order to better implement the method, the embodiment of the invention also provides a terminal, which can be a mobile phone, a tablet computer and other equipment.
Referring to fig. 4, an embodiment of the present invention provides a terminal 400, which may include a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, a radio frequency (RF) circuit 403, a power supply 404, an input unit 405, and a display unit 406. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting, and the terminal may include more or fewer components than those shown, combine some components, or use a different arrangement of components. Wherein:
the processor 401 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402.
The RF circuit 403 may be used for receiving and transmitting signals during information transmission and reception, in particular for receiving downlink information from a base station and handing it to the one or more processors 401 for processing; in addition, uplink data is sent to the base station.
The terminal also includes a power supply 404 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 401 via a power management system that manages charging, discharging, and power consumption. The power supply 404 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The terminal may further include an input unit 405, and the input unit 405 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The terminal may further include a display unit 406, and the display unit 406 may be used to display information input by the user or provided to the user, as well as various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 406 may include a display panel, and optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
Specifically, in this embodiment, the processor 401 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
displaying a live-action image captured by a terminal, and displaying skeleton animation of an animation model on the live-action image;
when a shooting instruction is received, intercepting a currently displayed target live-action image, and recording skeleton animation of the animation model to obtain at least two frames of recorded skeleton animation;
respectively carrying out image synthesis on the target live-action image and each frame of image of the recorded skeleton animation to obtain at least two synthesized images;
and combining the at least two combined images into a corresponding dynamic image.
Optionally, displaying the live-action image captured by the terminal, and displaying the skeleton animation of the animated model on the live-action image, including:
displaying the live-action image captured by the terminal through a first image display component of the terminal system;
displaying a skeletal animation of the animated model through a second image display component of the terminal system, wherein the second image display component is superimposed on the first image display component;
intercepting a currently displayed target live-action image and recording skeleton animation of an animation model, wherein the method comprises the following steps:
and intercepting the currently displayed target live-action image through the first image display component, and recording the skeleton animation of the animation model through the second image display component.
Optionally, recording a skeletal animation of the animated model by the second image display assembly, comprising:
and intercepting the bone animation image of the currently displayed animation model through a second image display component according to a preset time interval.
Optionally, after recording the skeletal animation and before the image synthesis, the processor is further configured to perform the following steps:
setting the target live-action image into a parent image display control, and setting the target frame image of the recorded skeleton animation into a child image display control of the parent image display control;
and displaying the target live-action image and the target frame image on corresponding interfaces through the parent image display control and the child image display control.
Optionally, the processor is further configured to perform: recording the animation offset position of the skeleton animation of the current animation model relative to the live-action image when a shooting instruction is received;
at this time, the displaying the target live-action image and the target frame image on the corresponding interface through the parent image display control and the child image display control includes:
setting the offset position of the child image display control relative to the parent image display control according to the animation offset position;
and displaying the target live-action image and the target frame image on corresponding interfaces according to the set parent image display control and the set child image display control.
Optionally, image synthesizing the target live-action image with each frame of image of the recorded skeleton animation respectively includes:
taking the target live-action image as a background image;
and respectively carrying out image synthesis on each frame of image of the recorded skeleton animation and the background image according to the offset position of the current child image display control relative to the parent image display control.
Optionally, image synthesizing the target live-action image with each frame of image of the recorded skeleton animation respectively includes:
adding an information display frame at a corresponding position of the interface according to the animation offset position and the size of the skeletal animation;
displaying corresponding information in the information display frame;
capturing an image of the information display frame to obtain an information display frame image;
and carrying out image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation respectively.
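One plausible way to place the information display frame "at a corresponding position" from the animation offset position and the size of the skeletal animation; the layout rule used here (bubble centered above the animation) is an assumption for illustration, not specified by the patent:

```python
def bubble_rect(anim_offset, anim_size, bubble_size, margin=8):
    """Compute the rectangle of the info display frame (e.g. a speech
    bubble) from the animation's offset and size. All values are
    (x, y) / (w, h) pixel tuples; the placement rule is illustrative."""
    ax, ay = anim_offset
    aw, ah = anim_size
    bw, bh = bubble_size
    x = ax + (aw - bw) // 2        # center horizontally over the animation
    y = max(0, ay - bh - margin)   # sit just above it, clamped to the screen
    return (x, y, bw, bh)

print(bubble_rect((120, 300), (200, 200), (160, 60)))  # -> (140, 232, 160, 60)
```

Once the rectangle is known, the frame can be rendered there, captured as an image, and composited like any other layer.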
Optionally, image synthesizing the target live-action image and the information display frame image with each frame image of the recorded skeleton animation includes:
determining a frame image to be synthesized currently in the recorded skeleton animation;
taking the target live-action image as a background image;
synthesizing the background image and the frame image to be synthesized at present according to the offset position of the current sub-image display control relative to the parent image display control to obtain a synthesized skeleton animation image;
carrying out image synthesis on the information display frame image and the synthesized skeleton animation image;
determining whether the recorded skeleton animation has a frame image to be synthesized; and if so, returning to the step of determining the frame image to be synthesized currently in the recorded skeleton animation.
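The per-frame loop described in the steps above can be sketched with the same illustrative 2D-grid representation (the `paste` helper is a hypothetical stand-in for real bitmap compositing):

```python
def paste(dst, src, offset):
    """Copy non-transparent (non-None) pixels of `src` into a copy of
    `dst` at `offset` (x, y)."""
    out = [row[:] for row in dst]
    ox, oy = offset
    for y, row in enumerate(src):
        for x, px in enumerate(row):
            if px is not None:
                out[oy + y][ox + x] = px
    return out

def synthesize_all(background, frames, frame_offset, bubble, bubble_offset):
    """For each recorded animation frame: composite it onto the
    live-action background at the child control's offset, then overlay
    the info display frame image on the result."""
    composites = []
    for frame in frames:                  # the frame image to be synthesized currently
        img = paste(background, frame, frame_offset)
        img = paste(img, bubble, bubble_offset)
        composites.append(img)            # one synthesized image per animation frame
    return composites

bg = [[0] * 5 for _ in range(5)]
frames = [[[1]], [[2]]]                   # two 1x1 animation frames
bubble = [[9]]                            # 1x1 info display frame image
out = synthesize_all(bg, frames, (2, 2), bubble, (0, 0))
print(len(out), out[0][2][2], out[1][2][2], out[0][0][0])  # -> 2 1 2 9
```

Iterating until no frame remains to be synthesized yields one composite per recorded frame, which is exactly the input the final combining step needs.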
As can be seen from the above, the terminal in the embodiment of the present invention displays the live-action image captured by the terminal, and displays the skeleton animation of the animation model on the live-action image; when a shooting instruction is received, it captures the currently displayed target live-action image and records the skeleton animation of the animation model to obtain the recorded skeleton animation; it then synthesizes the target live-action image with each frame image of the recorded skeleton animation to obtain a plurality of synthesized images, and combines the plurality of synthesized images into a corresponding dynamic image. The scheme can automatically synthesize the live-action image and the skeleton animation of the animation model into a corresponding dynamic image without requiring the user to perform a large number of repeated image-adding and image-selecting operations, so the synthesis efficiency of dynamic images is improved.
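The final combining step, turning the synthesized images into a dynamic image, amounts to ordering the frames and assigning each a display duration matching the preset recording interval; a minimal sketch (structure and names are illustrative):

```python
def make_dynamic_image(composites, capture_interval_ms=100):
    """Order the synthesized images into the frame sequence of a
    dynamic image, each frame displayed for as long as the preset
    time interval used when recording the skeleton animation."""
    return [{"frame": img, "duration_ms": capture_interval_ms}
            for img in composites]

anim = make_dynamic_image(["img0", "img1", "img2"], capture_interval_ms=100)
print(len(anim), anim[0]["duration_ms"])   # -> 3 100
```

With a library such as Pillow, such a sequence could then be encoded as an animated GIF via `first.save(path, save_all=True, append_images=rest, duration=100, loop=0)`.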
In addition, the scheme of the embodiment of the invention provides a dynamic image synthesis technology that combines animation and photographing. As a new photographing mode, it makes the user's photographing experience more vivid and rich, makes the pet-camera function of the product more attractive to users, increases user activity and the number of uploaded photos, and lets users obtain distinctive dynamic-image photographing effects.

Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware executing the instructions of a program, and the program may be stored in a computer-readable storage medium. The storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The method, apparatus, terminal and storage medium for synthesizing a dynamic image according to the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (11)
1. A moving image synthesizing method comprising:
displaying a live-action image captured by a terminal, and displaying skeleton animation of an animation model on the live-action image;
when a shooting instruction is received, intercepting a currently displayed target live-action image, and recording a skeleton animation of the animation model to obtain the recorded skeleton animation, wherein the skeleton animation is generated after the skeleton of the animation model moves;
setting the target live-action image into a parent image display control, and setting the target frame image of the recorded skeleton animation into a child image display control of the parent image display control;
displaying the target live-action image and the target frame image on a corresponding interface of a display terminal through the parent image display control and the child image display control;
receiving a moving request input by a user, wherein the moving request indicates a target position to which the sub-image display control needs to be moved on the corresponding interface, and the sub-image display control is moved to the target position according to the moving request;
based on the moved sub-image display control, obtaining the offset position of the sub-image display control relative to the parent image display control;
adding an information display frame at a corresponding position of the corresponding interface according to the offset position and the size of the target frame image, wherein the form of the information display frame comprises a bubble or a balloon;
displaying corresponding information in the information display frame, wherein the information comprises character information and picture information;
capturing an image of the information display frame to obtain an information display frame image;
carrying out image synthesis on the target live-action image and the information display frame image with each frame image of the recorded skeleton animation respectively, to obtain synthesized images;
and combining the synthesized images into a corresponding dynamic image.
2. The dynamic image synthesizing method according to claim 1, wherein displaying the live-action image captured by the terminal and displaying the skeleton animation of the animated model on the live-action image comprises:
displaying the live-action image captured by the terminal through a first image display component of the terminal system;
displaying a skeletal animation of the animated model through a second image display component of the terminal system, wherein the second image display component is superimposed on the first image display component;
intercepting a currently displayed target live-action image and recording skeleton animation of an animation model, wherein the method comprises the following steps:
and intercepting the currently displayed target live-action image through the first image display component, and recording the skeleton animation of the animation model through the second image display component.
3. A dynamic image synthesis method according to claim 2, wherein recording a skeletal animation of the animated model by the second image display component comprises:
and capturing the skeleton animation images of the currently displayed animation model through the second image display component at preset time intervals.
4. A moving image synthesizing method according to claim 1, characterized by further comprising: recording the animation offset position of the skeleton animation of the current animation model relative to the live-action image when a shooting instruction is received;
displaying the target live-action image and the target frame image on a corresponding interface through the parent image display control and the child image display control, including:
setting the offset position of the child image display control relative to the parent image display control according to the animation offset position;
and displaying the target live-action image and the target frame image on corresponding interfaces according to the set parent image display control and the set child image display control.
5. The method for synthesizing a dynamic image according to claim 4, wherein image-synthesizing the target live-action image with each frame image of the recorded skeleton animation includes:
taking the target live-action image as a background image;
and respectively carrying out image synthesis on each frame image of the recorded skeleton animation and the background image according to the offset position of the current child image display control relative to the parent image display control.
6. The dynamic image synthesis method according to claim 1, wherein image-synthesizing the target live-action image and the information display frame image with each frame image of the recorded skeleton animation comprises:
determining a frame image to be synthesized currently in the recorded skeleton animation;
taking the target live-action image as a background image;
synthesizing the background image and the frame image to be synthesized at present according to the offset position of the current sub-image display control relative to the parent image display control to obtain a synthesized skeleton animation image;
carrying out image synthesis on the information display frame image and the synthesized skeleton animation image;
determining whether the recorded skeleton animation has a frame image to be synthesized; and if so, returning to the step of determining the frame image to be synthesized currently in the recorded skeletal animation.
7. A moving image synthesizing apparatus comprising:
the first display unit is used for displaying the real-scene image captured by the terminal and displaying the skeleton animation of the animation model on the real-scene image;
the capturing unit is used for capturing a currently displayed target live-action image when a shooting instruction is received, and recording skeleton animation of the animation model to obtain the recorded skeleton animation, wherein the skeleton animation is generated after the skeleton of the animation model moves;
the moving image synthesizing apparatus further includes: a second display unit;
the second display unit is used for setting the target live-action image into a parent image display control and setting the target frame image for recording the skeleton animation into a child image display control of the parent image display control after the animation is recorded by the intercepting unit and before the image is synthesized by the synthesizing unit;
displaying the target live-action image and the target frame image on a corresponding interface of a display terminal through the parent image display control and the child image display control;
receiving a movement request input by a user, wherein the movement request indicates a target position to which the sub-image display control needs to be moved on the corresponding interface, and the sub-image display control is moved to the target position according to the movement request;
the synthesis unit is used for obtaining the offset position of the sub-image display control relative to the parent image display control based on the moved sub-image display control;
adding an information display frame at a corresponding position of the corresponding interface according to the offset position and the size of the target frame image, wherein the form of the information display frame comprises a bubble or a balloon;
displaying corresponding information in the information display frame, wherein the information comprises character information and picture information;
capturing an image of the information display frame to obtain an information display frame image;
carrying out image synthesis on the target live-action image and the information display frame image and each frame image of the recorded skeleton animation respectively to obtain a synthesized image;
a combining unit, configured to combine the synthesized images into a corresponding dynamic image.
8. The moving image synthesizing apparatus according to claim 7, wherein the first display unit is configured to: displaying the live-action image captured by the terminal through a first image display component of the terminal system; displaying a skeletal animation of the animated model through a second image display component of the terminal system, wherein the second image display component is superimposed on the first image display component;
the intercepting unit is used for intercepting the currently displayed target live-action image through the first image display assembly and recording the skeleton animation of the animation model through the second image display assembly.
9. The moving image synthesizing apparatus according to claim 7, further comprising: an offset recording unit;
the offset recording unit is used for recording the animation offset position of the skeleton animation of the current animation model relative to the live-action image when a shooting instruction is received;
the second display unit is configured to:
setting the offset position of the child image display control relative to the parent image display control according to the animation offset position;
and displaying the target live-action image and the target frame image on a corresponding interface according to the set parent image display control and the set child image display control.
10. A terminal comprising a memory storing instructions and a processor loading the instructions to perform the steps of the dynamic image synthesizing method according to any one of claims 1 to 6.
11. A storage medium storing instructions which, when executed by a processor, implement the steps of the moving image synthesizing method according to any one of claims 1 to 6.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710922942.9A CN109598775B (en) | 2017-09-30 | 2017-09-30 | Dynamic image synthesis method, device, terminal and storage medium |
PCT/CN2018/105961 WO2019062571A1 (en) | 2017-09-30 | 2018-09-17 | Dynamic image synthesis method and device, terminal and storage medium |
US16/799,640 US11308674B2 (en) | 2017-09-30 | 2020-02-24 | Dynamic image compositing method and apparatus, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109598775A CN109598775A (en) | 2019-04-09 |
CN109598775B true CN109598775B (en) | 2023-03-31 |
Family
ID=65900725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710922942.9A Active CN109598775B (en) | 2017-09-30 | 2017-09-30 | Dynamic image synthesis method, device, terminal and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US11308674B2 (en) |
CN (1) | CN109598775B (en) |
WO (1) | WO2019062571A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175061B (en) * | 2019-05-20 | 2022-09-09 | 北京大米科技有限公司 | Animation-based interaction method and device and electronic equipment |
CN111640170B (en) * | 2020-04-17 | 2023-08-01 | 深圳市集慧技术有限公司 | Bone animation generation method, device, computer equipment and storage medium |
CN113538637A (en) * | 2020-04-21 | 2021-10-22 | 阿里巴巴集团控股有限公司 | Method, device, storage medium and processor for generating animation |
WO2022213088A1 (en) * | 2021-03-31 | 2022-10-06 | Snap Inc. | Customizable avatar generation system |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
CN114898022B (en) * | 2022-07-15 | 2022-11-01 | 杭州脸脸会网络技术有限公司 | Image generation method, image generation device, electronic device, and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001008064A (en) * | 1999-06-24 | 2001-01-12 | Casio Comput Co Ltd | Electronic camera and superimposition display information layout method |
CN105704507A (en) * | 2015-10-28 | 2016-06-22 | 北京七维视觉科技有限公司 | Method and device for synthesizing animation in video in real time |
CN106504304A (en) * | 2016-09-14 | 2017-03-15 | 厦门幻世网络科技有限公司 | A kind of method and device of animation compound |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3539553B2 (en) * | 2000-05-30 | 2004-07-07 | シャープ株式会社 | Animation creation method, animation creation device, and computer-readable recording medium recording animation creation program |
US7039569B1 (en) * | 2000-06-09 | 2006-05-02 | Haws Richard R | Automatic adaptive dimensioning for CAD software |
KR20080037263A (en) * | 2006-10-25 | 2008-04-30 | 고윤용 | Presentation method of story telling and manufacturing method of multimedia file using computer and computer input device and system for the same |
CN100507950C (en) * | 2006-12-12 | 2009-07-01 | 北京中星微电子有限公司 | Processing method and system for video cartoon background of digital camera apparatus |
US8106998B2 (en) * | 2007-08-31 | 2012-01-31 | Fujifilm Corporation | Image pickup apparatus and focusing condition displaying method |
CN101232598A (en) * | 2008-02-28 | 2008-07-30 | 北京中星微电子有限公司 | Equipment and method for displaying video image |
CN102915551A (en) * | 2011-08-04 | 2013-02-06 | 深圳光启高等理工研究院 | Video synthesis method and system |
US9041717B2 (en) * | 2011-09-12 | 2015-05-26 | Disney Enterprises, Inc. | Techniques for processing image data generated from three-dimensional graphic models |
JP5930693B2 (en) * | 2011-12-15 | 2016-06-08 | キヤノン株式会社 | Movie recording apparatus and control method thereof |
US9996516B2 (en) * | 2012-05-16 | 2018-06-12 | Rakuten, Inc. | Image processing device for determining a display position of an annotation |
US10032480B2 (en) * | 2013-10-24 | 2018-07-24 | Visible Ink Television Ltd. | Motion tracking system |
US9397972B2 (en) * | 2014-01-24 | 2016-07-19 | Mitii, Inc. | Animated delivery of electronic messages |
KR20170011065A (en) * | 2015-07-21 | 2017-02-02 | 고창용 | A system and method for composing real-time image and animation Image of subject |
EP3345160A4 (en) * | 2015-09-02 | 2019-06-05 | Thumbroll LLC | Camera system and method for aligning images and presenting a series of aligned images |
- 2017-09-30: CN application CN201710922942.9A filed; granted as patent CN109598775B (status: active)
- 2018-09-17: PCT application PCT/CN2018/105961 filed (published as WO2019062571A1)
- 2020-02-24: US application US16/799,640 filed; granted as US11308674B2 (status: active)
Also Published As
Publication number | Publication date |
---|---|
WO2019062571A1 (en) | 2019-04-04 |
US11308674B2 (en) | 2022-04-19 |
CN109598775A (en) | 2019-04-09 |
US20200193670A1 (en) | 2020-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109598775B (en) | Dynamic image synthesis method, device, terminal and storage medium | |
US11450350B2 (en) | Video recording method and apparatus, video playing method and apparatus, device, and storage medium | |
CN108924464B (en) | Video file generation method and device and storage medium | |
US10482660B2 (en) | System and method to integrate content in real time into a dynamic real-time 3-dimensional scene | |
US20240078703A1 (en) | Personalized scene image processing method, apparatus and storage medium | |
CN113766129B (en) | Video recording method, video recording device, electronic equipment and medium | |
CN104540012B (en) | Content shared method, apparatus and terminal | |
KR20190013308A (en) | Mobile terminal and method for controlling the same | |
CN112565911B (en) | Bullet screen display method, bullet screen generation device, bullet screen equipment and storage medium | |
CN107087137B (en) | Method and device for presenting video and terminal equipment | |
CN108055587A (en) | Sharing method, device, mobile terminal and the storage medium of image file | |
CN112532887B (en) | Shooting method, device, terminal and storage medium | |
CN114415907B (en) | Media resource display method, device, equipment and storage medium | |
CN112044064A (en) | Game skill display method, device, equipment and storage medium | |
WO2024051556A1 (en) | Wallpaper display method, electronic device and storage medium | |
CN110751707A (en) | Animation display method, animation display device, electronic equipment and storage medium | |
CN114697568B (en) | Special effect video determining method and device, electronic equipment and storage medium | |
CN112954201A (en) | Shooting control method and device and electronic equipment | |
CN108401173A (en) | Interactive terminal, method and the computer readable storage medium of mobile live streaming | |
CN114827686B (en) | Recorded data processing method and device and electronic equipment | |
CN114143455B (en) | Shooting method and device and electronic equipment | |
CN115967854A (en) | Photographing method and device and electronic equipment | |
CN114546228A (en) | Expression image sending method, device, equipment and medium | |
CN112887620A (en) | Video shooting method and device and electronic equipment | |
CN114520878B (en) | Video shooting method and device and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||