
WO2023051185A1 - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
WO2023051185A1
WO2023051185A1 (PCT/CN2022/117167; CN2022117167W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
special effect
video frame
model
image
Prior art date
Application number
PCT/CN2022/117167
Other languages
English (en)
French (fr)
Inventor
龙华
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023051185A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, for example, to an image processing method, device, electronic equipment, and storage medium.
  • the present disclosure provides an image processing method, device, electronic equipment, and storage medium, so as to enrich video shooting content and make it more engaging, thereby improving the user experience.
  • an embodiment of the present disclosure provides an image processing method, the method including:
  • an embodiment of the present disclosure further provides an image processing device, which includes:
  • the display video frame determination module is configured to, in response to detecting that the special effect display function is triggered, add a virtual model to the collected image to be processed to obtain a display video frame;
  • a three-dimensional special effect determination module configured to enlarge and display the image area of the virtual model in the display video frame, and, in response to detecting that a stop-zoom-in condition is met, process the virtual model into a target three-dimensional special effect model;
  • the target video frame determination module is configured to fuse the target three-dimensional special effect model with the target object in the image to be processed, and display the resulting target video frame.
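The three modules above can be sketched as plain classes. Everything here is an illustrative assumption for exposition (class names, method names, and the dictionary-based "frame" representation), not the disclosed implementation:

```python
class DisplayVideoFrameModule:
    """Adds a virtual model to the captured frame once the effect triggers."""
    def determine(self, raw_frame, virtual_model):
        # A display video frame is the raw frame plus the overlaid model.
        return {"frame": raw_frame, "model": virtual_model, "scale": 1.0}

class ThreeDEffectModule:
    """Enlarges the model region; swaps in the 3D model once zooming stops."""
    def enlarge(self, display_frame, step=0.1):
        display_frame["scale"] = round(display_frame["scale"] * (1 + step), 4)
        return display_frame

    def to_target_3d(self, display_frame):
        # The "_3d" suffix stands in for looking up the target 3D effect model.
        return {"frame": display_frame["frame"],
                "model": display_frame["model"] + "_3d"}

class TargetVideoFrameModule:
    """Fuses the target 3D model with the target object in the frame."""
    def fuse(self, effect_frame, target_object):
        return {"frame": effect_frame["frame"],
                "model": effect_frame["model"],
                "attached_to": target_object}
```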
  • an embodiment of the present disclosure further provides an electronic device, and the electronic device includes:
  • one or more processors;
  • storage means configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method according to any one of the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the image processing method described in any one of the embodiments of the present disclosure.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of adding virtual special effects to an image to be processed provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of enlarging an image area corresponding to a virtual special effect provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of processing a virtual special effect into a target 3D special effect and merging the target 3D special effect with a target object provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an image processing device provided by a third embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • the technical solution of the present disclosure can be applied to any scene where a video can be shot and played.
  • for example, in a video call scene, the special effects disclosed in this technical solution can be added to the calling user; in a live broadcast scene, they can be added to the host user; the solution can also be applied during short video shooting, being executed while the subject is photographed.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is applicable to any Internet-supported video display scene: special effects are added to the target object during video shooting, and video frames including the special effects are then displayed. The method can be executed by an image processing device, which may be implemented in software and/or hardware.
  • the electronic device may be a mobile terminal, a PC terminal, or a server.
  • an arbitrary image display scene is usually implemented by the client and the server in cooperation.
  • the method provided in this embodiment can be executed by the server, the client, or the cooperation of the client and the server.
  • the device for executing the image processing method may be integrated into application software supporting image processing functions, and the software may be installed in electronic equipment.
  • the electronic device may be a mobile terminal or a PC.
  • the application software may be a type of software for image/video processing; the specific applications will not be enumerated here, as any software capable of image/video processing may be used.
  • the display interface may include buttons for target objects and adding special effects.
  • when the user triggers the special effect button, at least one special effect to be added may pop up, and the user may select one of the multiple special effects to be added as the target special effect.
  • the server may determine to add the corresponding special effect to the object in the captured image.
  • the image to be processed may be an image collected based on application software, and in a specific application scenario, the image to be processed may be collected in real time or periodically.
  • the camera device collects images including the target object in the target scene in real time, and at this time, the image collected by the camera device may be used as an image to be processed.
  • the virtual model is a model added to the image to be processed, corresponding to the special effect to be displayed.
  • for example, if the special effect to be displayed is a black cat, the virtual model can be an Augmented Reality (AR) special effect model corresponding to a big black cat; see the content marked 1 in Figure 2.
  • Virtual models can be added anywhere in the image to be processed.
  • the image to be processed after adding the special effect model can be used as a display video frame.
  • the video frame corresponding to the triggering special effect display function can be used as the image to be processed, and a virtual model is added to the image to be processed during the video shooting process to obtain the display video frame.
  • the user can trigger a button for adding a special effect on the display interface, that is, the special effect display function is triggered.
  • multiple special effects to be added may be displayed, and the user may determine one special effect from the multiple special effects to be added.
  • the image corresponding to the triggering special effect display function can be used as the image to be processed, and the virtual model corresponding to the special effect can be added to the image to be processed to obtain a display video frame.
  • the area occupied by the virtual model in the image to be processed may be used as the image area of the virtual model.
  • the enlarged display may enlarge the virtual model by a preset ratio, for example enlarging the image area by 10% each time, or on a preset schedule, for example enlarging by 10% of the original size every 50 ms. The enlarged image area of the virtual model is then displayed as a video frame in the video.
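The stepwise enlargement schedule above can be sketched as follows; the numbers mirror the example in the text (10% of the original size every 50 ms), while the function name and return shape are assumptions:

```python
def zoom_schedule(step=0.10, interval_ms=50, ticks=5):
    """Return (time_ms, scale) pairs for stepwise enlargement: the model
    region grows by `step` of its original size every `interval_ms`."""
    return [(k * interval_ms, round(1.0 + step * k, 2))
            for k in range(ticks + 1)]
```

Each pair would drive one rendered video frame, so the viewer sees the virtual model grow smoothly.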
  • the zoom-in stop condition can be understood as a condition for no longer zooming in when the image area is zoomed in to a certain extent.
  • for the zoomed-in effect, please refer to the content marked 1 in Fig. 3.
  • the target 3D special effect model corresponds to the virtual model.
  • the 3D target special effect can be a big cat model as shown in Figure 4.
  • the big cat target 3D special effect model is a model close to a real pet.
  • many parts of the pet in the target 3D special effect model can move; for example, the pupils can rotate, the eyes can close, and the tail can wag.
  • the image area corresponding to the virtual model can be determined, and the image area can be enlarged and displayed.
  • the target three-dimensional special effect model corresponding to the virtual model may be determined.
  • a plurality of virtual special effects of pets and target three-dimensional special effects corresponding to the virtual special effects can be pre-stored in the server, which can be set according to actual needs in a specific application process.
  • the target display video frame may be the video frame obtained after the target three-dimensional special effect is added to the target object in the image to be processed.
  • the subject photographed in the image to be processed, for example a user, may be used as the target object.
  • Fusion processing may be the effect of fusing the target 3D special effect model onto the body of the target object so that the target 3D special effect model matches the target object. See FIG. 4 . Fusion processing may be that the target object embraces the target 3D special effect model.
  • the target three-dimensional model and the target object may be fused together to obtain a final displayed target video frame.
  • the display video frame corresponding to the stop-zoom-in condition can jump directly to the target video frame, i.e. a video frame jump occurs.
  • when recording a video, if the special effect display function is triggered, the image in the target scene can be captured (the screen-opening panorama effect), and the virtual special effect can be added at this point.
  • for example, the kitten special effect, i.e. the AR virtual special effect shown in Figure 2, is added. The enlarged display can enlarge the AR special effect itself, push the AR special effect toward the lens, or have the camera focus on and advance toward the AR special effect, i.e. focus on the kitten special effect, obtaining the video frame shown in Figure 3.
  • the target 3D special effect corresponding to the AR special effect can then be determined and fused with the target object to obtain and display the target video frame; see the effect shown in Figure 4.
  • in the technical solution of this embodiment, a virtual model is added to the collected image to be processed to obtain a display video frame; the image area of the virtual model in the display video frame is enlarged and displayed; when the stop-zoom-in condition is detected, the virtual model is processed into the target 3D special effect model, which is then fused with the target object in the image to be processed to obtain the finally displayed video frame. This improves the richness and interest of the video content, thereby improving the user experience.
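The end-to-end flow of the method can be sketched as below. All names, the string-based "frames", and the concrete thresholds are illustrative assumptions, not the disclosed implementation:

```python
def overlay(frame, model, scale):
    """Stand-in for compositing the virtual model onto a frame."""
    return f"{frame}+{model}@x{scale:.2f}"

def fuse(frame, model_3d):
    """Stand-in for fusing the target 3D model with the target object."""
    return f"{frame}+fused({model_3d})"

def image_processing_pipeline(frames, model="cat_ar",
                              stop_scale=1.5, step=0.25):
    """Add model -> enlarge step by step -> on the stop-zoom-in condition,
    swap to the target 3D model and fuse it into every later frame."""
    out, scale, swapped = [], 1.0, False
    for frame in frames:
        if not swapped and scale >= stop_scale:
            swapped = True                  # stop-zoom-in condition reached
        if swapped:
            out.append(fuse(frame, model + "_3d"))
        else:
            out.append(overlay(frame, model, scale))
            scale = round(scale + step, 2)  # enlarge for the next frame
    return out
```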
  • FIG. 5 is a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
  • technical terms that are the same as or corresponding to those in the foregoing embodiments will not be repeated here.
  • the method includes:
  • triggering the special effect display function includes at least one of the following conditions: detecting that the captured image to be processed includes target gesture information; detecting a triggering special effect display button.
  • the image to be processed including the target object can be collected in real time, and it can be determined whether the pose of the target object in the image matches the target pose information.
  • the target posture information is used to trigger special effect display.
  • the target gesture information may be pouting, a specific gesture, etc.
  • the specific gesture may be a fist gesture, a palm-up gesture, and the like. If the pose information of the target object in the image to be processed is detected to match the target pose information, the special effect display function is considered triggered. Alternatively, a special effect display button may be shown on the display interface; if the user triggers this button, it is understood that a corresponding special effect needs to be added to the target object in the image to be processed.
  • the posture information of the target object can be detected in real time or at intervals, and when the posture information of the detected target object is consistent with the preset posture information, or trigger the After the special effect display button, it can be considered that the target special effect needs to be added to the target object.
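The two trigger conditions (pose match, or button press) can be checked with a small predicate; the names and default pose are hypothetical:

```python
def effect_triggered(detected_pose, target_pose="palm_up",
                     button_pressed=False):
    """The effect display function fires on either condition:
    the detected pose matches the target pose, or the button is tapped."""
    return detected_pose == target_pose or button_pressed
```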
  • the target effect may be an avatar.
  • a virtual model may be added to the target object, that is, a special effect may be added.
  • adding a virtual model to the collected image to be processed to obtain a display video frame includes: adding the virtual model to the position to be displayed corresponding to the target object in the image to be processed to obtain the display video frame.
  • the position to be displayed may be any position of the body part of the target object.
  • the position to be displayed may be on the hand, on the arm, on the shoulder, on the top of the head, etc.
  • the position to be displayed may also be a position corresponding to the target posture information; for example, if the target posture information is gesture information, the position to be displayed may be a finger, a palm, and the like.
  • the determined virtual model can be added to the position to be displayed corresponding to the target object in the image to be processed, so as to obtain a video frame in which the virtual model is added for the target object.
  • for example, if the position to be displayed is the palm, the virtual model can be added to the palm of the target object, and the image of the virtual model on the palm can then be enlarged.
  • the zoom-in process can be zooming in on the virtual model, or zooming in on the lens, so that the image area corresponding to the virtual model can be zoomed in, so that the user can enjoy the enlarged image of the virtual model.
  • see Figure 3 for a schematic diagram of zooming in on the image area of the virtual model.
  • adding a virtual model to the collected image to be processed to obtain a display video frame includes: if the pose of the target object in the image to be processed matches the target pose information, adding the virtual model to the body part corresponding to the target pose information to obtain the display video frame.
  • when the target object in the image to be processed triggers the target pose information, special effects need to be added to the target object.
  • the target posture information may be a specific posture, for example, a posture with arms stretched forward and palms up.
  • the target gesture may be a pouting gesture or the like.
  • adding the virtual model to the target object may mean adding it to the palm, or to the pouting mouth.
  • the target pose information can be determined, and the virtual model can be added to the position to be displayed corresponding to the target pose information.
  • for example, if the target posture is a pouting posture, the virtual model may be added to the mouth; if the target posture information is a palm-up posture, the virtual model may be added to the palm.
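The pose-to-position rule can be expressed as a lookup table. The table entries follow the examples in the text (pout maps to mouth, palm-up maps to palm); the identifiers and the default anchor are hypothetical:

```python
# Hypothetical pose-to-anchor table for attaching the virtual model.
POSE_TO_ANCHOR = {
    "pout": "mouth",
    "palm_up": "palm",
    "fist": "hand",
}

def display_position(pose, default="shoulder"):
    """Pick the body part the virtual model should be attached to."""
    return POSE_TO_ANCHOR.get(pose, default)
```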
  • gradual enlargement can be understood as follows: if the initial image ratio is 30% and the final ratio is 100%, the image can be enlarged by 1% at a time.
  • the gradual magnification can be 30% to 50% in a single step.
  • the image area displaying the virtual model in the video frame may be gradually enlarged and displayed.
  • the gradual zoom-in display may be zoomed in proportionally, and may be zoomed in by 20% each time on the basis of the previous zoom-in.
  • processing the virtual model into a target three-dimensional special effect model includes: if it is detected that the zoom-in duration of the virtual model reaches a preset zoom-in duration threshold, processing the virtual model into a target three-dimensional special effect model; or, if it is detected that the enlargement ratio of the virtual model reaches a preset enlargement ratio threshold, processing the virtual model into a target three-dimensional special effect model.
  • the preset zoom-in duration threshold is preset, for example, the zoom-in duration threshold may be 1 second.
  • the target three-dimensional special effect model is a model corresponding to the virtual model.
  • the virtual model is an AR special effect model, and the target three-dimensional special effect model is an AR object corresponding to the AR special effect model.
  • the AR object can be a static model or a dynamic model.
  • the static model may be an AR real object.
  • the real object may be an object corresponding to the virtual model.
  • the virtual model is an AR pet model, and the AR object may be a real pet.
  • the dynamic model can be a model in which the AR pet can move, just like a real pet. It should be noted that the virtual model can be any virtual model, not limited to the pet model.
  • the preset magnification ratio threshold is preset, that is, the maximum magnification ratio.
  • the preset magnification ratio threshold is 300%, and if the magnification ratio of the image area reaches the preset magnification ratio threshold, the magnification may be stopped.
  • when zooming in on the image area of the virtual model, the image area can be enlarged step by step, and the actual zoom-in duration can be recorded during the process. When the actual zoom-in duration reaches the preset zoom-in duration threshold, the virtual model needs to be processed into a target three-dimensional special effect model.
  • alternatively, the enlargement ratio can be recorded, and when it is detected that the ratio reaches the preset enlargement ratio threshold, the virtual model needs to be processed into a target three-dimensional special effect model.
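The two alternative stop conditions can be checked together; the defaults mirror the examples in the text (a 1-second duration threshold, a 300% ratio threshold), while the function and parameter names are assumptions:

```python
def should_stop_zoom(elapsed_ms, current_ratio,
                     max_duration_ms=1000, max_ratio=3.0):
    """Stop enlarging once either threshold is hit: the preset zoom-in
    duration (e.g. 1 s) or the preset enlargement ratio (e.g. 300%)."""
    return elapsed_ms >= max_duration_ms or current_ratio >= max_ratio
```

Once this returns True, the virtual model would be swapped for the target 3D special effect model.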
  • the target display position may be any position on the pre-selected target image, for example, if the image to be processed includes a table, the ground, etc., the table and the ground may be used as the target display positions.
  • the target position may also be any position preset on the body of the target object, for example, the shoulder or the bosom.
  • the target three-dimensional special effect model may be placed at the preset target display position of the target object in the image to be processed, so as to obtain and display the target video frame. It is also possible to place the target three-dimensional special effect at any displayed position in the image to be processed; for example, if the virtual special effect is a bird special effect, the corresponding target three-dimensional special effect can be displayed in the sky in the image to be processed.
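Placing the model at a target display position amounts to computing where its bounding box goes in the frame. A minimal sketch, assuming pixel coordinates and a rectangular model region (all names are illustrative):

```python
def place_model(frame_size, model_size, anchor):
    """Return top-left coordinates that centre the model on the anchor
    point, clamped so the model stays fully inside the frame."""
    fw, fh = frame_size
    mw, mh = model_size
    ax, ay = anchor
    x = min(max(ax - mw // 2, 0), fw - mw)
    y = min(max(ay - mh // 2, 0), fh - mh)
    return x, y
```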
  • the target object can be a user, a pet, or any other subject.
  • for example, if the virtual special effect is a small cat AR special effect, the target 3D special effect is an AR life-like big cat. When the stop-zoom-in condition is detected, the AR big cat can be determined. The big cat has dynamic special effects: like a real cat, its paws, tail, eyes, and mouth can all move. The life-like big cat special effect can be placed in the arms of the target object so that the target object is holding the big cat; the video frame obtained at this point can be used as the display video frame, see Figure 4.
  • displaying the target video frame includes: transitioning from the video frame corresponding to the stop-zoom-in condition to the target video frame through a preset animation special effect, so as to display the target video frame.
  • the preset animation special effect may be a special effect inserted into a video frame, a special effect inserted as a transitional picture, or a fade-style special effect.
  • alternatively, the display may jump directly from the video frame corresponding to the zoom-in stop condition to the target video frame, so as to display the target video frame.
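A simple stand-in for the preset transition animation is a linear crossfade between the two frames; a direct jump is the degenerate case with a single step. Frames are modeled here as flat lists of pixel values, which is an assumption for illustration:

```python
def crossfade(frame_a, frame_b, steps=5):
    """Blend pixel values linearly from frame_a to frame_b over `steps`
    intermediate frames instead of jumping straight to the target frame."""
    out = []
    for i in range(1, steps + 1):
        t = i / steps
        out.append([round(a * (1 - t) + b * t, 2)
                    for a, b in zip(frame_a, frame_b)])
    return out
```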
  • when the special effect display function is triggered, a virtual model can be added to the target object in the collected image to be processed, and the image area corresponding to the virtual model can be enlarged and displayed.
  • when the zoom-in stop condition is reached, the virtual model can be processed into the target 3D special effect model.
  • the target 3D special effect model can be fused with the target object in the image to be processed to obtain the target special effect image.
  • FIG. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure. As shown in FIG. 6 , the device includes: a display video frame determination module 310 , a 3D special effect determination module 320 and a target video frame determination module 330 .
  • the display video frame determination module 310 is configured to add a virtual model to the collected image to be processed to obtain a display video frame when it is detected that the special effect display function is triggered;
  • the three-dimensional special effect determination module 320 is configured to enlarge and display the image area of the virtual model and, when it is detected that the zoom-in stop condition is reached, process the virtual model into a target three-dimensional special effect model;
  • the target video frame determination module 330 is configured to fuse the target three-dimensional special effect model with the target object in the image to be processed, so as to display the target video frame.
  • triggering the special effect display function includes at least one of the following conditions: detecting that the collected image to be processed includes target posture information; detecting a triggering special effect display button.
  • the display video frame determining module is configured to add the virtual model to the position to be displayed corresponding to the target object in the image to be processed, so as to obtain the display video frame.
  • the display video frame determination module is configured to, if the pose of the target object in the image to be processed matches the target posture information, add the virtual model to the body part corresponding to the target posture information, so as to obtain the display video frame.
  • the three-dimensional special effect determination module is configured to gradually enlarge and display the image area of the virtual model in the display video frame.
  • the three-dimensional special effect determination module is configured to process the virtual model into a target three-dimensional special effect model if it is detected that the zoom-in duration of the virtual model reaches the preset zoom-in duration threshold, or if it is detected that the enlargement ratio of the virtual model reaches the preset enlargement ratio threshold.
  • the target video frame determination module is set to:
  • the target three-dimensional special effect model is placed at the target display position of the target object in the image to be processed, the target video frame is obtained, and the target video frame is displayed.
  • the target video frame determination module is set to:
  • the video frame corresponding to when the zoom-in stop condition is reached is transitioned to the target video frame through a preset animation special effect, so as to display the target video frame.
  • the virtual model is an AR special effect model
  • the target three-dimensional special effect model is an AR object corresponding to the AR special effect model
  • the target three-dimensional special effect model is one of a static model or a dynamic model.
  • in the technical solution of this embodiment, a virtual model is added to the collected image to be processed to obtain a display video frame; the image area of the virtual model in the display video frame is enlarged and displayed; when the stop-zoom-in condition is detected, the virtual model is processed into the target 3D special effect model, which is then fused with the target object in the image to be processed to obtain the finally displayed video frame. This improves the richness and interest of the video content, thereby improving the user experience.
  • the image processing device provided by the embodiment of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the terminal equipment in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 400 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 406 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • The following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 406 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 7 shows electronic device 400 having various means, it should be understood that implementing or possessing all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 409, or from storage means 406, or from ROM 402.
  • when the computer program is executed by the processing device 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the electronic device provided by the embodiment of the present disclosure belongs to the same inventive concept as the image processing method provided by the above embodiments; technical details not described in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
  • An embodiment of the present disclosure provides a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the image processing method provided in the foregoing embodiments is implemented.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to electrical wires, optical fiber cables, radio frequency (RF), etc., or any suitable combination of the foregoing.
  • the client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device:
  • a virtual model is added to the collected image to be processed to obtain a display video frame
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet Service Provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides an image processing method, the method including:
  • a virtual model is added to the collected image to be processed to obtain a display video frame
  • Example 2 provides an image processing method, and the method further includes:
  • triggering the special effect display function includes at least one of the following conditions:
  • the collected images to be processed include target pose information
  • a key that triggers a special effect display is detected.
  • Example 3 provides an image processing method, and the method further includes:
  • adding a virtual model to the collected image to be processed to obtain a display video frame includes:
  • Example 4 provides an image processing method, and the method further includes:
  • adding a virtual model to the collected image to be processed to obtain a display video frame includes:
  • the target object in the image to be processed exhibits target pose information
  • Example 5 provides an image processing method, and the method further includes:
  • the enlarging and displaying the image area of the virtual model in the display video frame includes:
  • the image area of the virtual model in the display video frame is gradually enlarged and displayed.
  • Example 6 provides an image processing method, and the method further includes:
  • processing the virtual model as a target three-dimensional special effect model includes:
  • the virtual model is processed as a target three-dimensional special effect model.
  • Example 7 provides an image processing method, and the method further includes:
  • the fusion processing of the target three-dimensional special effect model and the target object in the image to be processed to display the target video frame includes:
  • the target three-dimensional special effect model is placed at the target display position of the target object in the image to be processed, the target video frame is obtained, and the target video frame is displayed.
  • Example 8 provides an image processing method, and the method further includes:
  • the displaying the target video frame includes:
  • the display transitions, through a preset animation special effect, from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
  • Example 9 provides an image processing method, and the method further includes:
  • the virtual model is an AR special effect model
  • the target 3D special effect model is an AR object corresponding to the AR special effect model
  • the target 3D special effect model is one of a static model or a dynamic model.
  • Example 10 provides an image processing device, including:
  • the display video frame determination module is configured to add a virtual model to the collected image to be processed to obtain a display video frame when a triggering special effect display function is detected;
  • the three-dimensional special effect determination module is configured to enlarge and display the image area of the virtual model in the display video frame, and process the virtual model as a target three-dimensional special effect model when it is detected that the zoom-in stop condition is reached;
  • the target video frame determination module is configured to fuse the target three-dimensional special effect model and the target object in the image to be processed, and display the target video frame.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an image processing method and apparatus, an electronic device, and a storage medium. The method includes: in response to determining that a trigger of a special effect display function is detected, adding a virtual model to a captured image to be processed to obtain a display video frame; enlarging the image area of the virtual model in the display video frame for display, and in response to determining that a zoom-in stop condition is detected, processing the virtual model into a target three-dimensional special effect model; and fusing the target three-dimensional special effect model with a target object in the image to be processed to display a target video frame.

Description

Image processing method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202111151627.3, filed with the China National Intellectual Property Administration on September 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of image processing technology, and for example to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of network technology, more and more applications have entered users' lives; for example, a range of short-video shooting software is deeply loved by users.
At present, when shooting videos or images with short-video software, special effect processing is often applied to make the videos more interesting. However, the special effects in the related art are limited in content and monotonous in display, resulting in a poor viewing and usage experience for users.
Summary
The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, so as to achieve the technical effect of enriching video content and making it more interesting, thereby improving user experience.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
in response to determining that a trigger of a special effect display function is detected, adding a virtual model to a captured image to be processed to obtain a display video frame;
enlarging the image area of the virtual model in the display video frame for display, and in response to determining that a zoom-in stop condition is detected, processing the virtual model into a target three-dimensional special effect model; and
fusing the target three-dimensional special effect model with a target object in the image to be processed, and displaying a target video frame.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
a display video frame determination module configured to, in response to determining that a trigger of a special effect display function is detected, add a virtual model to a captured image to be processed to obtain a display video frame;
a three-dimensional special effect determination module configured to enlarge the image area of the virtual model in the display video frame for display and, in response to determining that a zoom-in stop condition is detected, process the virtual model into a target three-dimensional special effect model; and
a target video frame determination module configured to fuse the target three-dimensional special effect model with the target object in the image to be processed, and display a target video frame.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are configured to perform the image processing method according to any embodiment of the present disclosure.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of adding a virtual special effect to an image to be processed according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of enlarging the image area corresponding to a virtual special effect according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a virtual special effect processed into a target three-dimensional special effect and fused with a target object according to an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules, or units. It should also be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Before the technical solution is introduced, application scenarios may be described by way of example. The technical solution of the present disclosure can be applied to any scenario in which video is shot and played. For example, in a video call, the special effects disclosed herein may be added for the call user; in a live-streaming scenario, the special effects may be added for the streamer; the technical solution may also be applied during short-video shooting and executed while the subject is being photographed.
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure. This embodiment is applicable to any Internet-supported video display scenario in which a special effect is added for a target object during video shooting and video frames including the special effect are then displayed. The method may be executed by an image processing apparatus, which may be implemented in the form of software and/or hardware, for example by an electronic device such as a mobile terminal, a PC, or a server. An image display scenario is usually implemented through the cooperation of a client and a server; the method provided in this embodiment may be executed by the server, by the client, or by the client and the server in cooperation.
S110: when a trigger of a special effect display function is detected, a virtual model is added to the captured image to be processed to obtain a display video frame.
It should be noted that the various applicable scenarios have been briefly described above and will not be elaborated here.
It should also be noted that the apparatus for executing the image processing method provided by the embodiments of the present disclosure may be integrated into application software that supports an image processing function, and the software may be installed in an electronic device. For example, the electronic device may be a mobile terminal or a PC. The application software may be a type of software for image/video processing; the specific application software is not enumerated here, as long as image/video processing can be implemented.
During short-video shooting, live streaming, and the like, the display interface may include the target object and a button for adding special effects. After the user triggers the special effect button, at least one special effect to be added may pop up, and the user may select one of the multiple candidate special effects as the target special effect. Alternatively, when a trigger on the control corresponding to adding a special effect is detected, the server may determine that a corresponding special effect is to be added for the object in the captured picture. The image to be processed may be an image captured by the application software; in a specific application scenario, the image to be processed may be captured in real time or periodically. For example, in a live-streaming or video-shooting scenario, a camera captures images of the target scene including the target object in real time, and the captured images may serve as the images to be processed. The virtual model is the model to be added, which may be the virtual model corresponding to the special effect to be displayed. For example, if the special effect to be displayed is a cat and the cat's color is black, the virtual model may be an augmented reality (AR) special effect model corresponding to a big black cat; see the content marked 1 in FIG. 2. The virtual model may be added at any position in the image to be processed. The image to be processed with the special effect model added may serve as the display video frame.
It should be noted that, in a video-shooting scenario, the video frame corresponding to the moment the special effect display function is triggered may serve as the image to be processed; during video shooting, the virtual model is added to the image to be processed to obtain the display video frame.
For example, in short-video shooting, a video call, or a live-streaming scenario, the user may trigger the special-effect-adding button on the display interface, thereby triggering the special effect display function. At this point, multiple candidate special effects may be displayed, and the user may select one of them. After the special effect is determined, the image corresponding to the trigger of the special effect display function may serve as the image to be processed, and the virtual model corresponding to the special effect may be added to the image to be processed to obtain the display video frame.
S120: the image area of the virtual model in the display video frame is enlarged for display, and when it is detected that a zoom-in stop condition is reached, the virtual model is processed into a target three-dimensional special effect model.
The area occupied by the virtual model in the image to be processed may serve as the image area of the virtual model. Enlarged display may mean enlarging the virtual model at a preset ratio, for example, enlarging the image area by ten percent each time, or enlarging the virtual model at preset time intervals, for example, enlarging it by ten percent on the previous basis every 50 ms. That is, the enlarged image area of the virtual model is displayed as video frames in the video. The zoom-in stop condition may be understood as the condition under which, once the image area has been enlarged to a certain extent, it is no longer enlarged. For the virtual model at the moment enlargement stops, see the content marked 1 in FIG. 3. The target three-dimensional special effect model corresponds to the virtual model; for example, the target three-dimensional special effect may be the big cat model shown in FIG. 4, which is close to a real pet: multiple parts of the pet in the target three-dimensional special effect model can move, for example, the pupils can rotate, the eyes can close, and the tail can wag.
For example, on the basis of the display video frame, the image area corresponding to the virtual model may be determined and enlarged for display. During the enlargement of the image area, if it is detected that the image area reaches the zoom-in stop condition, the target three-dimensional special effect model corresponding to the virtual model may be determined.
It should be noted that the server may pre-store virtual special effects of multiple pets and the target three-dimensional special effects corresponding to the virtual special effects, which may be set according to actual needs in specific applications.
S130: the target three-dimensional special effect model and the target object in the image to be processed are fused, and a target video frame is displayed.
The target video frame may be the video frame obtained after displaying the target three-dimensional special effect on the target object in the image to be processed. The image to be processed may contain a photographed subject, for example a user, who may serve as the target object in the image to be processed. The fusion processing may fuse the target three-dimensional special effect model onto the target object so that the target three-dimensional special effect model matches the target object; referring to FIG. 4, the fusion may show the target object holding the target three-dimensional special effect model.
For example, after the target three-dimensional model is obtained, it may be fused with the target object to obtain the finally displayed target video frame. At this point, the display may jump directly from the display video frame corresponding to the moment the zoom-in stop condition is satisfied to the target video frame, i.e., there is a jump in the video picture.
It can be understood that, when recording a video, if the special effect display function is triggered, an image of the target scene may be shot, i.e., a full-scene opening shot, at which point a virtual special effect may be added, for example a kitten special effect, i.e., the AR virtual special effect shown in FIG. 2. The image area corresponding to the AR special effect may be enlarged for display; the enlargement may enlarge the AR special effect itself, push the lens toward the AR special effect, or focus the lens on and push toward the AR special effect, i.e., focusing on the kitten special effect, to obtain the video frame shown in FIG. 3. When it is detected that the zoom-in stop condition is reached, the target three-dimensional special effect corresponding to the AR special effect may be determined and fused with the target object to obtain and display the target video frame; see the effect shown in FIG. 4.
In the technical solution of the embodiments of the present disclosure, when a trigger of the special effect display function is detected, a virtual model may be added to the captured image to be processed to obtain a display video frame; the image area of the virtual model in the display video frame is enlarged for display, and when it is detected that the zoom-in stop condition is reached, the virtual model is processed into a target three-dimensional special effect model, which is then fused with the target object in the image to be processed to obtain the finally displayed video frame. This improves the richness of the video picture and the interestingness of the video content, thereby improving the user experience.
FIG. 5 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure. On the basis of the foregoing embodiments, "adding a virtual model to the captured image to be processed to obtain a display video frame", "enlarging the image area of the virtual model in the display video frame for display", and "fusing the target three-dimensional special effect model with the target object in the image to be processed and displaying a target video frame" may be further refined; for the specific implementation, reference may be made to the detailed description of this technical solution. Technical terms identical or corresponding to those in the foregoing embodiments are not repeated here.
As shown in FIG. 5, the method includes:
S210: when a trigger of the special effect display function is detected, a virtual model is added to the captured image to be processed to obtain a display video frame.
In this embodiment, triggering the special effect display function includes at least one of the following conditions: detecting that the captured image to be processed includes target pose information; detecting that a special effect display button is triggered.
In a video-shooting or live-streaming scenario, the image to be processed including the target object may be captured in real time, and it may be determined whether the target object in the image to be processed exhibits target pose information. The target pose information is used to trigger the special effect display. For example, the target pose information may be pouted lips, a specific gesture, and the like; the specific gesture may be, for example, a fist gesture or a palm-facing pose. If it is detected that the pose information of the target object in the image to be processed is the target pose information, it is considered that the special effect display function has been triggered. Alternatively, a special effect display button may be provided on the display interface; if the user triggers the button, it may be understood that a corresponding special effect needs to be added for the target object in the image to be processed.
That is, in practical applications, the pose information of the target object may be detected in real time or at intervals, and when it is detected that the pose information of the target object is consistent with the preset pose information, or after the special effect display button on the display interface is triggered, it may be considered that a target special effect needs to be added for the target object. The target special effect may be a virtual model.
In this embodiment, when a trigger of the special effect display function is detected, a virtual model, i.e., a special effect, may be added for the target object. For example, adding a virtual model to the captured image to be processed to obtain a display video frame includes: adding the virtual model to a to-be-displayed position corresponding to the target object in the image to be processed to obtain the display video frame.
The to-be-displayed position may be any position on the body of the target object, for example, on the hand, the arm, the shoulder, or the top of the head. The to-be-displayed position may also be a position corresponding to the target pose information; for example, if the target pose information is gesture information, the to-be-displayed position may be a finger, the palm, and the like. After the virtual model is added to the to-be-displayed position in the image to be processed, the resulting image serves as the display video frame.
For example, after detecting that the special-effect-display button is triggered, the determined virtual model may be added to the to-be-displayed position corresponding to the target object in the image to be processed, thereby obtaining a video frame in which the virtual model is added for the target object.
Exemplarily, with continued reference to FIG. 2, the to-be-displayed position is the palm. After a trigger of the special effect display function is detected, the virtual model may be added onto the palm of the target object, and the image area of the virtual model on the palm is then enlarged. For example, the enlargement may enlarge the virtual model itself, or push the lens closer, so that the image area corresponding to the virtual model is enlarged, allowing the user to view the enlarged image of the virtual model; for a schematic diagram after enlarging the image area of the virtual model, see FIG. 3.
In this embodiment, adding a virtual model to the captured image to be processed to obtain a display video frame includes: if the target object in the image to be processed exhibits target pose information, adding the virtual model to the body part corresponding to the target pose information to obtain the display video frame.
If the target object in the image to be processed triggers the target pose information, it indicates that a special effect needs to be added for the target object. The target pose information may be a specific pose, for example, an arm extended forward with the palm facing up, or a pouting pose. In this case, adding the virtual model for the target object may mean adding the virtual model onto the palm or onto the pouted lips.
For example, if it is determined based on the target pose that the special effect display function is triggered, the target pose information may be determined and the virtual model may be added to the to-be-displayed position corresponding to the target pose information. For instance, if the target pose is a pouting pose, the virtual model is added to the mouth; if the target pose information is a palm-up pose, the virtual model may be added onto the palm.
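The pose-triggered placement described above can be sketched as a simple lookup from a detected pose to the body part that anchors the virtual model. This is only an illustrative sketch: the pose names, landmark keys, and the `anchor_for_pose` helper are assumptions made for the example, not part of the disclosed method; in practice the landmark positions would come from a pose-estimation step.

```python
# Hypothetical mapping from a detected target pose to the body part where the
# virtual model is anchored (pose names and landmark keys are illustrative).
POSE_TO_ANCHOR = {
    "pout": "mouth",         # pouting pose -> attach the model to the mouth
    "palm_up": "palm",       # palm-up pose -> attach the model to the palm
    "fist": "back_of_hand",  # fist gesture -> attach the model to the hand
}

def anchor_for_pose(pose_name, landmarks):
    """Return the (x, y) position where the virtual model should be placed,
    or None if the detected pose does not trigger the special effect."""
    part = POSE_TO_ANCHOR.get(pose_name)
    if part is None:
        return None
    return landmarks.get(part)

# landmarks would normally come from a pose-estimation model
landmarks = {"mouth": (320, 180), "palm": (410, 360)}
print(anchor_for_pose("palm_up", landmarks))  # (410, 360)
print(anchor_for_pose("wave", landmarks))     # None: pose does not trigger the effect
```

A real implementation would key the table by whatever pose classifier the application software uses; the point is that the trigger condition and the to-be-displayed position can be derived from the same detected pose.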
S220: the image area of the virtual model in the display video frame is gradually enlarged for display.
Gradual enlargement may be understood as follows: if the initial image scale is 30% and the final scale is 100%, the image may be enlarged at a rate of one percent per step; such enlargement may be understood as gradual enlargement. Of course, to achieve the best enlargement effect, the gradual enlargement may also be a single-step enlargement of thirty to fifty percent.
For example, the image area of the virtual model in the display video frame may be gradually enlarged for display. The gradual enlargement may be proportional, for example, enlarging by twenty percent each time on the basis of the previous enlargement.
S230: when it is detected that the zoom-in stop condition is reached, the virtual model is processed into a target three-dimensional special effect model.
In this embodiment, processing the virtual model into a target three-dimensional special effect model when it is detected that the zoom-in stop condition is reached includes: if it is detected that the duration of enlarging the virtual model reaches a preset enlargement duration threshold, processing the virtual model into the target three-dimensional special effect model; or, if it is detected that the enlargement ratio of the virtual model reaches a preset enlargement ratio threshold, processing the virtual model into the target three-dimensional special effect model.
The preset enlargement duration threshold is set in advance; for example, the enlargement duration threshold may be 1 second. The target three-dimensional special effect model is a model corresponding to the virtual model. The virtual model is an AR special effect model, and the target three-dimensional special effect model is the AR object corresponding to the AR special effect model. The AR object may be a static model or a dynamic model. The static model may be a stationary AR object corresponding to the virtual model; for example, if the virtual model is an AR pet model, the AR object may be a pet object. The dynamic model may be an AR pet model that can move, just like a real pet. It should be noted that the virtual model may be any virtual model and is not limited to a pet model. The preset enlargement ratio threshold is set in advance, i.e., the maximum enlargement ratio. For example, if the preset enlargement ratio threshold is 300%, enlargement may stop once the enlargement ratio of the image area reaches the threshold.
For example, when the image area of the virtual model is enlarged, the image area may be enlarged gradually, and the actual enlargement duration is recorded during enlargement. When the actual enlargement duration reaches the preset enlargement duration threshold, the virtual model needs to be processed into the target three-dimensional special effect model. Alternatively, during the gradual enlargement of the image area, the enlargement ratio may be recorded, and when it is detected that the enlargement ratio reaches the preset enlargement ratio threshold, the virtual model needs to be processed into the target three-dimensional special effect.
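The gradual enlargement of S220 together with the two zoom-in stop conditions of S230 can be sketched as a loop that enlarges the model step by step and stops when either the preset enlargement duration threshold or the preset enlargement ratio threshold is reached first. The step ratio, tick length, and threshold values below are illustrative assumptions, not values fixed by the disclosure.

```python
# Hypothetical sketch: enlarge by step_ratio on top of the previous scale each
# tick, recording elapsed time, and stop on whichever condition fires first.
def run_zoom(step_ratio=0.2, tick_ms=50,
             duration_threshold_ms=1000, ratio_threshold=3.0):
    """Return (final scale, elapsed ms, reason) when a stop condition is met."""
    scale = 1.0
    elapsed_ms = 0
    while True:
        scale *= (1.0 + step_ratio)  # proportional enlargement per tick
        elapsed_ms += tick_ms
        if elapsed_ms >= duration_threshold_ms:  # stop condition 1: duration
            return scale, elapsed_ms, "duration"
        if scale >= ratio_threshold:             # stop condition 2: ratio
            return scale, elapsed_ms, "ratio"

scale, elapsed, reason = run_zoom()
print(reason)  # 'ratio': the 300% ratio threshold is hit before the 1 s duration
```

With these assumed defaults, seven 20% steps (50 ms each) pass 300% at 350 ms, well before the 1-second duration threshold, so the ratio condition stops the zoom first; different parameters would make the duration condition fire instead.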
S240: the target three-dimensional special effect model is placed at the target display position of the target object in the image to be processed to obtain the target video frame, and the target video frame is displayed.
The target display position may be any preset position in the target image; for example, if the image to be processed includes a table, the ground, and the like, the table or the ground may serve as the target display position. The target position may also be any preset position on the target object, for example, the shoulder or the arms.
For example, the target three-dimensional special effect model may be placed at the preset target display position of the target object in the image to be processed to obtain the target video frame, which is then displayed. Alternatively, the target three-dimensional special effect may be placed at any position displayed in the image to be processed; for example, if the virtual special effect is a bird special effect, the target three-dimensional special effect corresponding to the bird special effect may be displayed in the sky in the image to be processed. The target object may be a user, a pet, or any other subject.
Exemplarily, the virtual special effect is a kitten AR special effect, and the target three-dimensional special effect is a big AR cat object. When the image area corresponding to the kitten AR special effect is enlarged for display and it is detected that the zoom-in stop condition is reached, the big AR cat object may be determined. The big cat is a dynamic special effect, i.e., the same as a real cat: its paws, tail, eyes, mouth, and so on can all move. The big cat object special effect may be placed in the arms of the target object, with the target object holding the big cat; the video frame obtained at this point may serve as the display video frame, see FIG. 4.
In this embodiment, displaying the target video frame includes: transitioning, through a preset animation special effect, from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
The preset animation special effect may be a special effect of inserting video frames, a special effect of inserting a transition picture, or a blur-flash special effect.
For example, after the target video frame is determined, the display may jump directly from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
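The frame-inserting animation special effect mentioned above can be sketched as a cross-fade that blends intermediate frames between the last zoomed frame and the target video frame instead of a hard jump. Modeling frames as flat lists of pixel intensities is an illustrative assumption; a real implementation would blend image buffers of equal size.

```python
# Hypothetical cross-fade transition: generate `steps` blended frames that
# move linearly from frame_a to frame_b (frames modeled as flat pixel lists).
def crossfade(frame_a, frame_b, steps=4):
    """Return intermediate frames blending frame_a into frame_b."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps  # blend weight for frame_b, from 1/steps up to 1.0
        frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
    return frames

last_zoom_frame = [0.0, 0.0, 0.0]
target_frame = [1.0, 1.0, 1.0]
mid_frames = crossfade(last_zoom_frame, target_frame)
print(mid_frames[1])  # halfway blend: [0.5, 0.5, 0.5]
```

The last generated frame equals the target video frame, so inserting these blended frames yields a smooth transition; the blur-flash or inserted-picture variants would replace the linear blend with a different per-step effect.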
In the technical solution of the embodiments of the present disclosure, when a trigger of the special effect display function is detected, a virtual model may be added for the target object in the captured image to be processed, and the image area corresponding to the virtual model is enlarged for display. When it is detected that the zoom-in stop condition is reached, the virtual model may be processed into a target three-dimensional special effect model, which is fused with the target object in the image to be processed to obtain an image with the target special effect. This avoids the monotonous special effects and poor picture richness of the related art, improves the richness of the special effect picture, makes the displayed video picture more attractive to users, and improves the user experience.
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes: a display video frame determination module 310, a three-dimensional special effect determination module 320, and a target video frame determination module 330.
The display video frame determination module 310 is configured to, when a trigger of the special effect display function is detected, add a virtual model to the captured image to be processed to obtain a display video frame; the three-dimensional special effect determination module 320 is configured to enlarge the image area of the virtual model in the display video frame for display and, when it is detected that the zoom-in stop condition is reached, process the virtual model into a target three-dimensional special effect model; the target video frame determination module 330 is configured to fuse the target three-dimensional special effect model with the target object in the image to be processed and display a target video frame.
On the basis of the above technical solution, triggering the special effect display function includes at least one of the following conditions: detecting that the captured image to be processed includes target pose information; detecting that the special effect display button is triggered.
On the basis of the above technical solution, the display video frame determination module is configured to add the virtual model to the to-be-displayed position corresponding to the target object in the image to be processed to obtain the display video frame.
On the basis of the above technical solution, the display video frame determination module is configured to, if the target object in the image to be processed exhibits target pose information, add the virtual model to the body part corresponding to the target pose information to obtain the display video frame.
On the basis of the above technical solution, the three-dimensional special effect determination module is configured to gradually enlarge the image area of the virtual model in the display video frame for display.
On the basis of the above technical solution, the three-dimensional special effect determination module is configured to, if it is detected that the duration of enlarging the virtual model reaches the preset enlargement duration threshold, process the virtual model into the target three-dimensional special effect model; or, if it is detected that the enlargement ratio of the virtual model reaches the preset enlargement ratio threshold, process the virtual model into the target three-dimensional special effect model.
On the basis of the above technical solution, the target video frame determination module is configured to:
place the target three-dimensional special effect model at the target display position of the target object in the image to be processed to obtain the target video frame, and display the target video frame.
On the basis of the above technical solution, the target video frame determination module is configured to:
transition, through a preset animation special effect, from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
On the basis of the above technical solution, the virtual model is an AR special effect model, the target three-dimensional special effect model is the AR object corresponding to the AR special effect model, and the target three-dimensional special effect model is one of a static model or a dynamic model.
In the technical solution of the embodiments of the present disclosure, when a trigger of the special effect display function is detected, a virtual model may be added to the captured image to be processed to obtain a display video frame; the image area of the virtual model in the display video frame is enlarged for display, and when it is detected that the zoom-in stop condition is reached, the virtual model is processed into a target three-dimensional special effect model, which is then fused with the target object in the image to be processed to obtain the finally displayed video frame. This improves the richness of the video picture and the interestingness of the video content, thereby improving the user experience.
The image processing apparatus provided by the embodiments of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method.
It should be noted that the multiple units and modules included in the above apparatus are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; in addition, the specific names of the multiple functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the embodiments of the present disclosure.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now to FIG. 7, it shows a schematic structural diagram of an electronic device (e.g., the terminal device or server in FIG. 7) 400 suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 406 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 406 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows the electronic device 400 with various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
According to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, installed from the storage device 406, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
The electronic device provided by the embodiment of the present disclosure belongs to the same concept as the image processing method provided by the above embodiments; for technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
An embodiment of the present disclosure provides a computer storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method provided by the above embodiments is implemented.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), a peer-to-peer network (e.g., an ad hoc peer-to-peer network), and any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and the one or more programs, when executed by the electronic device, cause the electronic device to:
when a trigger of a special effect display function is detected, add a virtual model to the captured image to be processed to obtain a display video frame;
enlarge the image area of the virtual model in the display video frame for display and, when it is detected that a zoom-in stop condition is reached, process the virtual model into a target three-dimensional special effect model; and
fuse the target three-dimensional special effect model with the target object in the image to be processed, and display a target video frame.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet Service Provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [Example 1] provides an image processing method, including:
when a trigger of a special effect display function is detected, adding a virtual model to the captured image to be processed to obtain a display video frame;
enlarging the image area of the virtual model in the display video frame for display and, when it is detected that a zoom-in stop condition is reached, processing the virtual model into a target three-dimensional special effect model; and
fusing the target three-dimensional special effect model with the target object in the image to be processed, and displaying a target video frame.
According to one or more embodiments of the present disclosure, [Example 2] provides an image processing method, further including:
for example, triggering the special effect display function includes at least one of the following conditions:
detecting that the captured image to be processed includes target pose information;
detecting that a special effect display button is triggered.
According to one or more embodiments of the present disclosure, [Example 3] provides an image processing method, further including:
for example, adding a virtual model to the captured image to be processed to obtain a display video frame includes:
adding the virtual model to a to-be-displayed position corresponding to the target object in the image to be processed to obtain the display video frame.
According to one or more embodiments of the present disclosure, [Example 4] provides an image processing method, further including:
for example, adding a virtual model to the captured image to be processed to obtain a display video frame includes:
if the target object in the image to be processed exhibits target pose information, adding the virtual model to the body part corresponding to the target pose information to obtain the display video frame.
According to one or more embodiments of the present disclosure, [Example 5] provides an image processing method, further including:
for example, enlarging the image area of the virtual model in the display video frame for display includes:
gradually enlarging the image area of the virtual model in the display video frame for display.
According to one or more embodiments of the present disclosure, [Example 6] provides an image processing method, further including:
for example, processing the virtual model into a target three-dimensional special effect model when it is detected that the zoom-in stop condition is reached includes:
if it is detected that the duration of enlarging the virtual model reaches a preset enlargement duration threshold, processing the virtual model into the target three-dimensional special effect model; or,
if it is detected that the enlargement ratio of the virtual model reaches a preset enlargement ratio threshold, processing the virtual model into the target three-dimensional special effect model.
According to one or more embodiments of the present disclosure, [Example 7] provides an image processing method, further including:
for example, fusing the target three-dimensional special effect model with the target object in the image to be processed and displaying a target video frame includes:
placing the target three-dimensional special effect model at the target display position of the target object in the image to be processed to obtain the target video frame, and displaying the target video frame.
According to one or more embodiments of the present disclosure, [Example 8] provides an image processing method, further including:
for example, displaying the target video frame includes:
transitioning, through a preset animation special effect, from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
According to one or more embodiments of the present disclosure, [Example 9] provides an image processing method, further including:
for example, the virtual model is an AR special effect model, the target three-dimensional special effect model is the AR object corresponding to the AR special effect model, and the target three-dimensional special effect model is one of a static model or a dynamic model.
According to one or more embodiments of the present disclosure, [Example 10] provides an image processing apparatus, including:
a display video frame determination module configured to, when a trigger of a special effect display function is detected, add a virtual model to the captured image to be processed to obtain a display video frame;
a three-dimensional special effect determination module configured to enlarge the image area of the virtual model in the display video frame for display and, when it is detected that a zoom-in stop condition is reached, process the virtual model into a target three-dimensional special effect model; and
a target video frame determination module configured to fuse the target three-dimensional special effect model with the target object in the image to be processed, and display a target video frame.
In addition, although various operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (12)

  1. An image processing method, comprising:
    in response to determining that a trigger of a special effect display function is detected, adding a virtual model to a captured image to be processed to obtain a display video frame;
    enlarging the image area of the virtual model in the display video frame for display, and in response to determining that a zoom-in stop condition is detected, processing the virtual model into a target three-dimensional special effect model; and
    fusing the target three-dimensional special effect model with a target object in the image to be processed, and displaying a target video frame.
  2. The method according to claim 1, wherein triggering the special effect display function comprises at least one of the following conditions:
    detecting that the captured image to be processed includes target pose information;
    detecting that a special effect display button is triggered.
  3. The method according to claim 1, wherein adding a virtual model to the captured image to be processed to obtain a display video frame comprises:
    adding the virtual model to a to-be-displayed position corresponding to the target object in the image to be processed to obtain the display video frame.
  4. The method according to claim 1, wherein adding a virtual model to the captured image to be processed to obtain a display video frame comprises:
    in response to determining that the target object in the image to be processed exhibits target pose information, adding the virtual model to the body part corresponding to the target pose information to obtain the display video frame.
  5. The method according to claim 1, wherein enlarging the image area of the virtual model in the display video frame for display comprises:
    gradually enlarging the image area of the virtual model in the display video frame for display.
  6. The method according to claim 1, wherein processing the virtual model into a target three-dimensional special effect model in response to determining that a zoom-in stop condition is detected comprises:
    in response to determining that the duration of enlarging the virtual model reaches a preset enlargement duration threshold, processing the virtual model into the target three-dimensional special effect model; or,
    in response to determining that the enlargement ratio of the virtual model reaches a preset enlargement ratio threshold, processing the virtual model into the target three-dimensional special effect model.
  7. The method according to claim 1, wherein fusing the target three-dimensional special effect model with the target object in the image to be processed and displaying a target video frame comprises:
    placing the target three-dimensional special effect model at the target display position of the target object in the image to be processed to obtain the target video frame, and displaying the target video frame.
  8. The method according to claim 7, wherein displaying the target video frame comprises:
    transitioning, through a preset animation special effect, from the video frame corresponding to the moment the zoom-in stop condition is reached to the target video frame, so as to display the target video frame.
  9. The method according to any one of claims 1-8, wherein the virtual model is an augmented reality (AR) special effect model, the target three-dimensional special effect model is the AR object corresponding to the AR special effect model, and the target three-dimensional special effect model is one of a static model or a dynamic model.
  10. An image processing apparatus, comprising:
    a display video frame determination module configured to, in response to determining that a trigger of a special effect display function is detected, add a virtual model to a captured image to be processed to obtain a display video frame;
    a three-dimensional special effect determination module configured to enlarge the image area of the virtual model in the display video frame for display and, in response to determining that a zoom-in stop condition is detected, process the virtual model into a target three-dimensional special effect model; and
    a target video frame determination module configured to fuse the target three-dimensional special effect model with the target object in the image to be processed, and display a target video frame.
  11. An electronic device, comprising:
    one or more processors; and
    a storage device configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1-9.
  12. A storage medium containing computer-executable instructions which, when executed by a computer processor, are configured to perform the image processing method according to any one of claims 1-9.
PCT/CN2022/117167 2021-09-29 2022-09-06 Image processing method and apparatus, electronic device, and storage medium WO2023051185A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111151627.3A CN113850746B (zh) 2021-09-29 Image processing method and apparatus, electronic device, and storage medium
CN202111151627.3 2021-09-29

Publications (1)

Publication Number Publication Date
WO2023051185A1 true WO2023051185A1 (zh) 2023-04-06

Family

ID=78977168

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117167 WO2023051185A1 (zh) 2021-09-29 2022-09-06 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113850746B (zh)
WO (1) WO2023051185A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025020813A1 (zh) * 2023-07-24 2025-01-30 北京字跳网络技术有限公司 Mic-linking method and apparatus

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744414B (zh) * 2021-09-06 2022-06-28 北京百度网讯科技有限公司 Image processing method and apparatus, device, and storage medium
CN113850746B (zh) * 2021-09-29 2024-11-22 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114331823A (zh) * 2021-12-29 2022-04-12 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114401443B (zh) * 2022-01-24 2023-09-01 脸萌有限公司 Special effect video processing method and apparatus, electronic device, and storage medium
CN116630488A (zh) * 2022-02-10 2023-08-22 北京字跳网络技术有限公司 Video image processing method and apparatus, electronic device, and storage medium
CN114677386A (zh) * 2022-03-25 2022-06-28 北京字跳网络技术有限公司 Special effect image processing method and apparatus, electronic device, and storage medium
CN114697703B (zh) * 2022-04-01 2024-03-22 北京字跳网络技术有限公司 Video data generation method and apparatus, electronic device, and storage medium
CN114900621B (zh) * 2022-04-29 2024-08-13 北京字跳网络技术有限公司 Special effect video determination method and apparatus, electronic device, and storage medium
CN114842120B (zh) * 2022-05-19 2024-08-20 北京字跳网络技术有限公司 Image rendering processing method and apparatus, device, and medium
CN116126182A (zh) * 2022-09-08 2023-05-16 北京字跳网络技术有限公司 Special effect processing method and apparatus, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394313A (zh) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generation method and apparatus
CN109727303A (zh) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer device, storage medium, and terminal
CN109803165A (zh) * 2019-02-01 2019-05-24 北京达佳互联信息技术有限公司 Video processing method and apparatus, terminal, and storage medium
CN112333491A (zh) * 2020-09-23 2021-02-05 字节跳动有限公司 Video processing method, display device, and storage medium
CN112544070A (zh) * 2020-03-02 2021-03-23 深圳市大疆创新科技有限公司 Video processing method and apparatus
US20210118236A1 (en) * 2019-10-15 2021-04-22 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for presenting augmented reality data, device and storage medium
CN113850746A (zh) * 2021-09-29 2021-12-28 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021952A (zh) * 2007-03-23 2007-08-22 北京中星微电子有限公司 Method and apparatus for implementing three-dimensional video special effects
US9754416B2 (en) * 2014-12-23 2017-09-05 Intel Corporation Systems and methods for contextually augmented video creation and sharing
CN112188074B (zh) * 2019-07-01 2022-08-05 北京小米移动软件有限公司 Image processing method and apparatus, electronic device, and readable storage medium
CN111225231B (zh) * 2020-02-25 2022-11-22 广州方硅信息技术有限公司 Virtual gift display method and apparatus, device, and storage medium
CN111565332A (zh) * 2020-04-27 2020-08-21 北京字节跳动网络技术有限公司 Video transmission method, electronic device, and computer-readable medium
CN111526412A (zh) * 2020-04-30 2020-08-11 广州华多网络科技有限公司 Panoramic live-streaming method and apparatus, device, and storage medium
CN111935491B (zh) * 2020-06-28 2023-04-07 百度在线网络技术(北京)有限公司 Special effect processing method and apparatus for live streaming, and server
CN111880709A (zh) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method and apparatus, computer device, and storage medium
CN112732152B (zh) * 2021-01-27 2022-05-24 腾讯科技(深圳)有限公司 Live-streaming processing method and apparatus, electronic device, and storage medium
CN113453034B (zh) * 2021-06-29 2023-07-25 上海商汤智能科技有限公司 Data display method and apparatus, electronic device, and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394313A (zh) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generation method and apparatus
CN109727303A (zh) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer device, storage medium, and terminal
CN109803165A (zh) * 2019-02-01 2019-05-24 北京达佳互联信息技术有限公司 Video processing method and apparatus, terminal, and storage medium
US20210118236A1 (en) * 2019-10-15 2021-04-22 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for presenting augmented reality data, device and storage medium
CN112544070A (zh) * 2020-03-02 2021-03-23 深圳市大疆创新科技有限公司 Video processing method and apparatus
CN112333491A (zh) * 2020-09-23 2021-02-05 字节跳动有限公司 Video processing method, display device, and storage medium
CN113850746A (zh) * 2021-09-29 2021-12-28 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025020813A1 (zh) * 2023-07-24 2025-01-30 北京字跳网络技术有限公司 Mic-linking method and apparatus

Also Published As

Publication number Publication date
CN113850746B (zh) 2024-11-22
CN113850746A (zh) 2021-12-28

Similar Documents

Publication Publication Date Title
WO2023051185A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2022171024A1 (zh) Image display method and apparatus, device, and medium
CN113076048B (zh) Video display method and apparatus, electronic device, and storage medium
CN110070496B (zh) Method and apparatus for generating image special effects, and hardware apparatus
WO2022100735A1 (zh) Video processing method and apparatus, electronic device, and storage medium
WO2023284708A1 (zh) Video processing method and apparatus, electronic device, and storage medium
WO2022188305A1 (zh) Information display method and apparatus, electronic device, storage medium, and computer program
CN112035046B (zh) Ranking list information display method and apparatus, electronic device, and storage medium
CN114245028B (zh) Image display method and apparatus, electronic device, and storage medium
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
US12019669B2 (en) Method, apparatus, device, readable storage medium and product for media content processing
CN115002359B (zh) Video processing method and apparatus, electronic device, and storage medium
CN114598815B (zh) Shooting method and apparatus, electronic device, and storage medium
WO2022037484A1 (zh) Image processing method and apparatus, device, and storage medium
WO2023040749A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2023169305A1 (zh) Special effect video generation method and apparatus, electronic device, and storage medium
WO2024165010A1 (zh) Information generation method, information display method, apparatus, device, and storage medium
US20220272283A1 (en) Image special effect processing method, apparatus, and electronic device, and computer-readable storage medium
CN114697568B (zh) Special effect video determination method and apparatus, electronic device, and storage medium
WO2021089002A1 (zh) Multimedia information processing method and apparatus, electronic device, and medium
CN115278107A (zh) Video processing method and apparatus, electronic device, and storage medium
WO2021027632A1 (zh) Image special effect processing method and apparatus, electronic device, and computer-readable storage medium
WO2023143240A1 (zh) Image processing method, apparatus, device, storage medium, and program product
WO2021073204A1 (zh) Object display method and apparatus, electronic device, and computer-readable storage medium
GB2600341A (en) Image special effect processing method and apparatus, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874576

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18695813

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/07/2024)

122 Ep: pct application non-entry in european phase

Ref document number: 22874576

Country of ref document: EP

Kind code of ref document: A1