
WO2022033272A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2022033272A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
processing progress
image
processing
target
Prior art date
Application number
PCT/CN2021/106910
Other languages
English (en)
French (fr)
Inventor
李钊
毛珊珊
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2022033272A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Definitions

  • the present disclosure relates to the field of multimedia technologies, and in particular, to an image processing method and an electronic device.
  • one or more pieces of video material need to be processed by a video processing application, for example by adding drawn graphics to a piece of video material or by splicing multiple pieces of video material into a complete video.
  • the present disclosure provides an image processing method and an electronic device, and the technical solutions of the present disclosure are as follows:
  • an image processing method comprising:
  • the target image frame is an image frame corresponding to the processing progress
  • the target image frame is displayed.
  • the determining a target image frame from the plurality of image frames according to the processing progress of the video material includes:
  • that image frame is determined to be the target image frame.
  • the determining a target image frame from the plurality of image frames according to the processing progress of the video material includes:
  • the processed image frame is determined as the target image frame.
  • the obtaining a plurality of image frames from the video material includes:
  • the plurality of image frames are determined from the plurality of reference image frames based on the quality information of the plurality of reference image frames, the quality information of the plurality of image frames conforming to the target condition.
  • the determining the quality information of the plurality of reference image frames comprises:
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the sharpness information, the color richness information, and the content information of the multiple reference image frames.
  • the displaying of the target image frame on the processing progress display interface includes:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the cropping the target image frame comprises:
  • the displaying of the target image frame on the processing progress display interface includes:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed on the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the displaying of the target image frame on the processing progress display interface includes:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is an image frame corresponding to the changed processing progress.
  • the displaying of the target image frame on the processing progress display interface includes:
  • the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • an image processing apparatus comprising:
  • an acquisition unit configured to acquire a plurality of image frames from the video material in response to a processing instruction for the video material
  • a determining unit configured to determine a target image frame from the plurality of image frames according to the processing progress of the video material, where the target image frame is an image frame corresponding to the processing progress;
  • the display unit is configured to display the target image frame on the processing progress display interface.
  • the determining unit is configured to, in response to starting to process any one of the plurality of image frames, determine that image frame as the target image frame.
  • the determining unit is configured to, in response to completion of processing any image frame among the plurality of image frames, determine the processed image frame as the target image frame.
  • the obtaining unit is configured to obtain a plurality of reference image frames from the video material; determine quality information of the plurality of reference image frames; and, based on the quality information of the plurality of reference image frames, determine the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames conforming to the target condition.
  • the obtaining unit is configured to obtain at least two items of the sharpness information, color richness information, and content information of the multiple reference image frames, and to fuse the obtained at least two items to obtain the quality information of the multiple reference image frames.
  • the display unit is configured to crop the target image frame, and to display the cropped target image frame on the image frame display area of the processing progress display interface.
  • the display unit is configured to determine a target area containing the target object, and to delete the part of the target image frame outside the target area.
  • the display unit is configured to, in response to a change in the processing progress, control the first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change; and, in response to the first image frame having completely moved out of the image frame display area, to display a second image frame on the image frame display area, the second image frame being the image frame corresponding to the changed processing progress.
  • the display unit is configured to, in response to a change in the processing progress, control the first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change; and, while the first image frame is moving, to control the second image frame to enter the image frame display area, the second image frame being the image frame corresponding to the changed processing progress.
  • the display unit is configured to, in response to a change in the processing progress, cancel the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change; and to display a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
  • an electronic device comprising:
  • one or more processors;
  • the processor is configured to perform the following steps:
  • the target image frame is an image frame corresponding to the processing progress
  • the target image frame is displayed.
  • the processor is configured to:
  • that image frame is determined to be the target image frame.
  • the processor is configured to:
  • the processed image frame is determined as the target image frame.
  • the processor is configured to:
  • the plurality of image frames are determined from the plurality of reference image frames based on the quality information of the plurality of reference image frames, the quality information of the plurality of image frames conforming to the target condition.
  • the processor is configured to:
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the sharpness information, the color richness information, and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed on the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is an image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • a non-volatile storage medium, wherein, when the program code in the storage medium is executed by the processor of an electronic device, the electronic device is enabled to perform the following steps:
  • the target image frame is an image frame corresponding to the processing progress
  • the target image frame is displayed.
  • the processor is configured to:
  • that image frame is determined to be the target image frame.
  • the processor is configured to:
  • the processor is configured to:
  • the plurality of image frames are determined from the plurality of reference image frames based on the quality information of the plurality of reference image frames, the quality information of the plurality of image frames conforming to the target condition.
  • the processor is configured to:
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the sharpness information, the color richness information, and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed on the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is an image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • a computer program product storing one or more program codes, which can be executed by a processor of an electronic device to perform the following steps:
  • a target image frame is determined from the plurality of image frames, and the target image frame is an image frame corresponding to the processing progress;
  • the target image frame is displayed.
  • the processor is configured to:
  • that image frame is determined to be the target image frame.
  • the processor is configured to:
  • the processor is configured to:
  • the plurality of image frames are determined from the plurality of reference image frames based on the quality information of the plurality of reference image frames, the quality information of the plurality of image frames conforming to the target condition.
  • the processor is configured to:
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the sharpness information, the color richness information, and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed on the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move outside the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is an image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is the image frame corresponding to the processing progress before the change;
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • in the process of processing the video material, the terminal can display to the user the image frame corresponding to the processing progress as the progress changes, so that the user can learn the processing progress of the video material by viewing the image frame.
  • the present disclosure displays the processing progress to the user through image frames, which is more vivid and intuitive, improves the efficiency of human-computer interaction, and reduces the user's perception of time cost while waiting.
  • FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an exemplary embodiment
  • Fig. 2 is a schematic diagram showing a video material selection interface according to an exemplary embodiment
  • FIG. 3 is a flowchart of an image processing method according to an exemplary embodiment
  • FIG. 4 is a flowchart of an image processing method according to an exemplary embodiment
  • FIG. 5 is a schematic diagram of a processing progress display interface according to an exemplary embodiment
  • FIG. 6 is a schematic diagram of a processing progress display interface according to an exemplary embodiment
  • FIG. 7 is a schematic diagram of a processing progress display interface according to an exemplary embodiment
  • FIG. 8 is a schematic diagram of a processing progress display interface according to an exemplary embodiment
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment.
  • FIG. 10 is a schematic structural diagram of a terminal according to an exemplary embodiment
  • Fig. 11 is a schematic structural diagram of a server according to an exemplary embodiment.
  • the user information involved in the present disclosure may be information authorized by the user or fully authorized by all parties.
  • FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an exemplary embodiment. As shown in FIG. 1 , it includes a terminal 101 and a server 102 .
  • the terminal 101 is at least one of a smartphone, a smart watch, a desktop computer, and a laptop computer.
  • An application program that supports video processing can be installed and run on the terminal 101, and the user can log in to the application program through the terminal 101 to process video. For example, the user selects multiple pieces of video on the terminal 101, and the multiple pieces of video are synthesized into one video through the application program.
  • the terminal 101 can be connected to the server 102 through a wireless network or a wired network.
  • the terminal 101 is one of multiple terminals, and this embodiment only takes the terminal 101 as an example for illustration.
  • the number of the above-mentioned terminals can be larger or smaller.
  • for example, there can be only a few terminals 101, or there can be dozens, hundreds, or more; the embodiments of the present disclosure do not limit the number and device types of the terminals 101.
  • the server 102 is at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 102 can be used to determine the quality information of the image frame, and can also be used to process the video sent by the terminal 101 .
  • the number of the foregoing servers 102 can be larger or smaller, which is not limited in this embodiment of the present disclosure.
  • the server 102 also includes other functional servers in order to provide more comprehensive and diverse services.
  • the image processing method provided by the present disclosure can be applied to a variety of scenarios. For ease of understanding, the application scenarios that may be involved in the present disclosure are first described.
  • the terminal is the terminal 101 in the above implementation environment, and the server is the server 102.
  • the image processing method provided by the present disclosure can be applied in the process of synthesizing multiple video materials.
  • the user wants to synthesize three video materials, video material A, video material B, and video material C, into one video D. The user then starts an application program supporting video processing through the terminal.
  • the user imports video materials A, B, and C into the application, and the above three video materials are synthesized through the application.
  • the terminal can display the image frames of video material A, video material B, and video material C on the processing progress display interface of the application, and the user can learn the current video material synthesis progress through the image frames.
  • 201 is a video material selection interface provided by an application supporting video material processing
  • 202 is a content option
  • “All” means to display all video materials and pictures
  • "Video" means to display all video materials
  • "Picture” Indicates that all pictures are displayed, and the user selects different content by clicking on different content options.
  • 203 is the cover of the video material to be processed; the total duration of the video material is displayed on the cover, and the number in the upper right corner of the cover indicates the order in which the video material was selected. The user clicks a cover to determine that video material for processing.
  • 204 is a time display box, which is used to display the total duration of the video material selected by the user.
  • 205 is the cover of the video material selected by the user; the user can deselect the video material by clicking the "X" in the upper right corner of the cover.
  • the user clicks the "one-click film output" button, and in response to the click operation, the terminal synthesizes the video material selected by the user through an application program that supports video processing.
  • the user can also click the "Next" button to select different filters and synthesis methods for the video material; the number after "Next" indicates the number of video materials selected by the user.
  • the image processing method provided by the present disclosure can be applied to the process of processing a single video material.
  • the user adds different display elements to different image frames of the video material, or processes the image frames differently, through an application program that supports video material processing, such as adding a line of subtitles to image frame A, adding a pattern to image frame B, sharpening image frame C, and mosaicking image frame D.
  • when the terminal performs the above-mentioned processing on the video material through the application program supporting video processing, if the above-mentioned image frame A, image frame B, image frame C, and image frame D are the target image frames, then, according to the progress of the video material processing, image frame A, image frame B, image frame C, and image frame D are displayed on the interface of the application program, and the user can learn the current video material processing progress by viewing the target image frames.
  • in addition to displaying image frame A, image frame B, image frame C, and image frame D on the processing progress display interface of the application, the terminal can also display the processed image frame A, image frame B, image frame C, and image frame D on that interface.
  • the execution body of the image processing method may be a terminal or a server.
  • the execution subject is the server
  • the user sends the video material to the server through the terminal
  • the server processes the video material, obtains the target image frame, and displays the target image frame through the terminal.
  • the following description process is performed by taking the execution subject as the terminal as an example.
  • Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 3 , the method includes the following steps.
  • in response to the processing instruction for the video material, the terminal acquires a plurality of image frames from the video material.
  • a video material includes multiple image frames.
  • the terminal determines a target image frame from a plurality of image frames according to the processing progress of the video material, where the target image frame is an image frame corresponding to the processing progress.
  • the processing progress is used to describe which image frame of the plurality of image frames the terminal has processed.
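The correspondence between the processing progress and the target image frame can be sketched as a small Python helper. This is an illustrative sketch only: the function name, and the representation of progress as a fraction in [0, 1], are assumptions, since the disclosure does not prescribe a specific mapping.

```python
def target_frame_index(progress, num_frames):
    """Map a processing-progress fraction (0.0-1.0) to the index of the
    image frame corresponding to that progress (hypothetical mapping)."""
    if not 0.0 <= progress <= 1.0:
        raise ValueError("progress must be in [0, 1]")
    if num_frames <= 0:
        raise ValueError("num_frames must be positive")
    # Clamp so that progress == 1.0 maps to the last frame, not num_frames.
    return min(int(progress * num_frames), num_frames - 1)
```

With 15 image frames, for example, a progress of 0.5 selects frame index 7, and a progress of 1.0 clamps to the last frame, index 14.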
  • the terminal displays the target image frame on the processing progress display interface.
  • in the process of processing the video material, the terminal can display the image frame corresponding to the processing progress to the user as the progress changes, so that the user can learn the processing progress of the video material by viewing the image frame.
  • the present disclosure displays the processing progress to the user through image frames, which is more vivid and intuitive, improves the efficiency of human-computer interaction, and reduces the user's perception of time cost while waiting.
  • determining the target image frame from the plurality of image frames includes:
  • any one of the image frames is determined to be the target image frame.
  • determining the target image frame from the plurality of image frames includes:
  • any one of the processed image frames is determined as the target image frame.
  • obtaining the plurality of image frames from the video footage includes:
  • Quality information for a plurality of reference image frames is determined.
  • a plurality of image frames whose quality information meets the target condition are determined from the plurality of reference image frames.
  • determining quality information for the plurality of reference image frames includes:
  • At least two items of the sharpness information, color richness information, and content information of a plurality of reference image frames are acquired.
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the sharpness information, the color richness information, and the content information of the multiple reference image frames.
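The fusion of per-frame metrics into quality information, and the selection of frames whose quality meets the target condition, could be sketched as follows. This is an illustrative Python implementation using simple stand-in metrics (a gradient-based sharpness proxy and a channel-spread color-richness proxy) and a weighted-sum fusion; the actual metrics, weights, and target condition are not specified by the disclosure. A frame is represented as a list of rows of (r, g, b) tuples.

```python
def sharpness(frame):
    """Sharpness proxy: mean absolute horizontal gradient of gray values."""
    total, count = 0.0, 0
    for row in frame:
        gray = [(r + g + b) / 3.0 for r, g, b in row]
        for left, right in zip(gray, gray[1:]):
            total += abs(right - left)
            count += 1
    return total / count if count else 0.0

def color_richness(frame):
    """Color-richness proxy: mean spread between each pixel's largest
    and smallest channel value."""
    pixels = [p for row in frame for p in row]
    return sum(max(p) - min(p) for p in pixels) / len(pixels)

def quality_scores(frames, w_sharp=0.5, w_color=0.5):
    """Fuse the two metrics into one score per frame as a weighted sum of
    min-max-normalised metrics. The weights are illustrative assumptions."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    s = norm([sharpness(f) for f in frames])
    c = norm([color_richness(f) for f in frames])
    return [w_sharp * si + w_color * ci for si, ci in zip(s, c)]

def select_frames(frames, k):
    """Keep the k frames with the highest fused quality, in temporal order."""
    scores = quality_scores(frames)
    keep = sorted(sorted(range(len(frames)), key=lambda i: scores[i])[-k:])
    return [frames[i] for i in keep]
```

Here the target condition is read as "the k highest-scoring frames"; a threshold on the fused score would work equally well.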
  • displaying the target image frame includes:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • cropping the target image frame includes:
  • the target area contains the target object.
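Cropping the target image frame down to the target area can be sketched as below. The frame is represented as a list of pixel rows and the box as (top, left, bottom, right) pixel coordinates; both representations and the clamping behaviour are illustrative assumptions, and in practice the target area would typically come from detecting the target object.

```python
def crop_to_target_area(frame, box):
    """Crop the frame to the target area, deleting everything outside it.
    `frame` is a list of pixel rows; `box` is (top, left, bottom, right)."""
    top, left, bottom, right = box
    h = len(frame)
    w = len(frame[0]) if h else 0
    # Clamp the box to the frame bounds before slicing.
    top, left = max(0, top), max(0, left)
    bottom, right = min(h, bottom), min(w, right)
    if top >= bottom or left >= right:
        raise ValueError("target area is empty after clamping")
    return [row[left:right] for row in frame[top:bottom]]
```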
  • displaying the target image frame includes:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, where the second image frame is an image corresponding to the changed processing progress frame.
  • displaying the target image frame includes:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • the second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • displaying the target image frame includes:
  • the display of the first image frame is canceled, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
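The display strategies above differ mainly in how the outgoing and incoming frames move. A minimal sketch of the two sliding variants, computing per-tick horizontal offsets, might look like this; the linear motion and the pixel-offset convention are assumptions, not part of the disclosure.

```python
def slide_offsets(t, area_width, concurrent=True):
    """Horizontal offsets (pixels) of the outgoing first frame and the
    incoming second frame at animation time t in [0, 1].

    concurrent=True:  the second frame slides in while the first slides out.
    concurrent=False: the second frame appears only once the first frame
                      has fully left the display area.
    """
    t = max(0.0, min(1.0, t))
    first_x = -t * area_width              # first frame moves left, out of the area
    if concurrent:
        second_x = (1.0 - t) * area_width  # second frame slides in from the right
    else:
        second_x = 0.0 if t >= 1.0 else area_width  # waits off-screen
    return first_x, second_x
```

At t = 0.5 with a 100-pixel display area, the concurrent variant places the first frame at -50 and the second at +50; the sequential variant keeps the second frame off-screen until the first has fully left.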
  • Fig. 4 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 4 , the method includes the following steps.
  • in response to the processing instruction for the video material, the terminal acquires a plurality of reference image frames from the video material.
  • the video material is the video material stored on the terminal, or the video material obtained by the terminal from the Internet, or the video material captured by the terminal in real time.
  • the number of video materials may be one or multiple, and the embodiments of the present disclosure do not limit the source and number of video materials.
  • in response to the processing instruction for the video material, the terminal obtains a reference image frame from the video material at every target time interval, obtaining a plurality of reference image frames.
  • the number of reference image frames is related to the total duration of the video material.
  • the terminal can acquire reference image frames from the video material at target time intervals, which can ensure that the reference image frames acquired by the terminal are evenly distributed in the video material.
  • when the terminal processes a video material, in response to a processing instruction for the video material, the terminal decodes the video material to obtain reference image frames in the video material. For example, the terminal obtains a reference image frame from the video material every M seconds, obtaining multiple reference image frames, where M is a positive integer. In some embodiments, the total duration of video material A is 30 seconds and the target time interval is 2 seconds; the terminal then acquires an image frame from video material A every 2 seconds, acquiring 15 image frames in total.
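The interval-based sampling described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function name is an assumption, and a real terminal would decode the frame at each timestamp rather than merely compute the timestamps.

```python
def sample_timestamps(total_duration_s, interval_s):
    """Timestamps (in seconds) at which reference image frames are
    grabbed: one frame every `interval_s` seconds of footage."""
    return list(range(interval_s, total_duration_s + 1, interval_s))

# Video material A: 30 s long, sampled every 2 s -> 15 reference frames.
timestamps = sample_timestamps(30, 2)
```

Sampling at fixed timestamps like this is what keeps the acquired reference frames evenly distributed across the material.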
  • when the terminal processes three video materials, in response to the processing instruction for the three video materials, the terminal decodes the three video materials to obtain reference image frames in the three video materials.
  • the terminal determines the number N of reference image frames to acquire according to the total duration of the three video materials and the target time interval M. According to the processing sequence of the three video materials, the terminal first obtains reference image frames from the first video material. In response to the number of reference image frames acquired from the first video material reaching the first number, the terminal acquires reference image frames from the second video material. In response to the number of reference image frames acquired from the second video material reaching the second number, the terminal acquires reference image frames from the third video material, until a third number of reference image frames has been acquired from the third video material.
  • the first quantity is the ratio of the total duration of the first video material to the target time interval M.
  • the second quantity is the ratio of the total duration of the second video material to the target time interval M.
  • the third quantity is the ratio of the total duration of the third video material to the target time interval M.
  • the terminal performs video material synthesis processing on video material A, video material B, and video material C.
  • the total duration of video material A is 30 seconds
  • the total duration of video material B is 20 seconds
  • the total duration of video material C is 10 seconds, and the target time interval is 2 seconds.
  • the composition sequence is video material A + video material B + video material C.
  • the terminal determines that the first quantity corresponding to video material A is 15, the second quantity corresponding to video material B is 10, and the third quantity corresponding to video material C is 5.
  • the terminal acquires 15 reference image frames from the video material A every 2 seconds.
  • the terminal acquires 10 reference image frames from the video material B every 2 seconds.
  • the terminal acquires 5 reference image frames from the video material C every 2 seconds, so that the terminal acquires a total of 30 reference image frames from the video material A, the video material B, and the video material C.
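The per-material quantities in the multi-material example follow directly from the ratio rule stated above. A minimal sketch (function name is mine; the durations are the ones consistent with the stated counts of 15, 10, and 5 at M = 2 s):

```python
def frames_per_material(durations_s, interval_s):
    """Per-material reference-frame count: the ratio of each material's
    total duration to the target time interval M."""
    return [duration // interval_s for duration in durations_s]

# Video materials A, B, C with target time interval M = 2 seconds.
counts = frames_per_material([30, 20, 10], 2)
```

The materials are then sampled in their composition order, so the terminal finishes material A's 15 frames before moving on to B and C.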
  • in response to the processing instruction for at least one video material, the terminal randomly obtains reference image frames from each of the at least one video material, obtaining multiple reference image frames. In this case, the number of reference image frames randomly obtained from each video material is related to the total duration of that video material.
  • because the terminal randomly acquires reference image frames from the at least one video material, the obtained reference image frames better reflect the overall characteristics of each video material.
  • when the terminal processes three video materials, in response to the processing instruction for the three video materials, the terminal decodes the three video materials to obtain the reference image frames in the three video materials.
  • the terminal determines the number of reference image frames obtained from the three video materials respectively according to the total duration of the three video materials.
  • the terminal randomly acquires reference image frames from the three video materials, respectively, to obtain multiple reference image frames.
  • the total duration of video material D is 10 seconds
  • the total duration of video material E is 5 seconds
  • the total duration of video material F is 15 seconds.
  • the terminal determines, according to the total duration of video material D, video material E, and video material F, that the number of reference image frames obtained from video material D is 2, the number obtained from video material E is 1, and the number obtained from video material F is 3.
  • the terminal randomly acquires two reference image frames from the video material D, one reference image frame from the video material E, and three reference image frames from the video material F.
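The duration-proportional allocation in the D/E/F example can be sketched as follows. The one-frame-per-5-seconds rate is inferred from the example's numbers (10 s → 2, 5 s → 1, 15 s → 3) and the function names are illustrative:

```python
import random

def proportional_counts(durations_s, seconds_per_frame=5):
    """Allocate reference-frame counts in proportion to each material's
    duration (rate inferred from the D/E/F example)."""
    return [duration // seconds_per_frame for duration in durations_s]

def random_timestamps(duration_s, count, rng=None):
    """Pick `count` distinct random whole-second timestamps to sample."""
    rng = rng or random.Random()
    return sorted(rng.sample(range(duration_s), count))

counts = proportional_counts([10, 5, 15])  # materials D, E, F
```

The random timestamps, unlike fixed-interval sampling, spread the frames unpredictably within each material while still keeping the count tied to duration.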
  • the terminal acquires at least two items of definition information, color richness information, and content information of multiple reference image frames.
  • the terminal acquiring at least two of the definition information, color richness information, and content information of the multiple reference image frames includes the following cases: the terminal acquires the definition information and color richness information of the multiple reference image frames; the terminal acquires the definition information and content information of the multiple reference image frames; the terminal acquires the color richness information and content information of the multiple reference image frames; or the terminal acquires the definition information, color richness information, and content information of the multiple reference image frames.
  • the terminal converts the reference image frame into a grayscale reference image frame, executes an objective function to process the grayscale values of the pixels of the grayscale reference image frame, and obtains the definition information of the reference image frame.
  • the terminal can directly execute the objective function to process the grayscale values of the pixels of the reference image frame to obtain the definition information of the reference image frame.
  • the objective function is a sharpness information acquisition function, such as a Brenner gradient function, a Tenengrad gradient function, a Laplacian gradient function, a grayscale variance (SMD) function, a grayscale variance product (SMD2) function, or an entropy function; the embodiment of the present disclosure does not limit the type of the objective function.
  • the terminal adopts any one of formula (1), formula (2), formula (3), or formula (4) to convert the color channel values of the pixels of the color reference image frame into the gray values of the pixels of the grayscale reference image frame.
  • Gray is the gray value of the pixel of the gray reference image frame
  • R is the red channel value of the pixel of the color reference image frame
  • G is the green channel value of the pixel of the color reference image frame
  • B is the blue channel value of the pixel of the color reference image frame.
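Formulas (1)–(4) themselves are not reproduced in this excerpt, so as one representative example of such a conversion, the following sketch uses the standard ITU-R BT.601 luminance weights; these particular weights are an assumption, not necessarily any of the disclosed formulas:

```python
def to_gray(r, g, b):
    """Convert one pixel's R, G, B channel values into a gray value using
    the common ITU-R BT.601 luminance weights (a representative example;
    the patent's formulas (1)-(4) are not shown in this excerpt)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the three weights sum to 1, a white pixel (255, 255, 255) maps to gray value 255 and a black pixel to 0.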
  • the terminal substitutes the gray value of the pixel point of the gray reference image frame into the Brenner gradient function to obtain the definition information of the reference image frame.
  • D(f) is the definition information of the reference image frame
  • (x, y) is the pixel point coordinate of the grayscale reference image frame
  • f(x, y) is the grayscale value of the pixel point of the grayscale reference image frame.
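The Brenner gradient described above sums the squared difference between each pixel and the pixel two columns to its right. A minimal sketch on a 2D list of gray values (function name is mine):

```python
def brenner(gray):
    """Brenner gradient: D(f) = sum over (x, y) of |f(x+2, y) - f(x, y)|^2,
    where f(x, y) is the gray value of the pixel at column x, row y."""
    height, width = len(gray), len(gray[0])
    return sum((gray[y][x + 2] - gray[y][x]) ** 2
               for y in range(height) for x in range(width - 2))
```

A perfectly flat (blurry-looking) image scores 0; the more abrupt the horizontal intensity changes, the higher the score, which is why the value serves as definition (sharpness) information.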
  • the terminal substitutes the gray value of the pixel point of the gray reference image frame into the Tenengrad gradient function to obtain the definition information of the reference image frame.
  • D(f) is the definition information of the reference image frame
  • (x, y) is the pixel coordinates of the gray reference image frame
  • G x (x, y) is the gradient value of the gray value of pixel (x, y) in the x direction, and G y (x, y) is the gradient value of the gray value of pixel (x, y) in the y direction.
  • T is the edge detection threshold.
  • the terminal determines the red channel value, the green channel value, and the blue channel value of the pixel points of the reference image frame.
  • the first parameter is obtained according to the difference between the red channel value and the green channel value of the pixel points of the reference image frame.
  • the second parameter is obtained according to the red channel value, the green channel value, and the blue channel value of the pixel points of the reference image frame.
  • the terminal determines the average value and standard deviation of the first parameter of different pixel points of the reference image frame, and determines the average value and standard deviation of the second parameter of different pixel points of the reference image frame.
  • the terminal obtains the color richness information of the reference image frame according to the average value and standard deviation of the first parameter and the average value and standard deviation of the second parameter.
  • the first parameter is denoted rg, and the second parameter is denoted yb.
  • the terminal adds the first parameters rg of the different pixel points of the reference image frame and divides the sum by the total number of pixels to obtain the average value rg mean of the first parameter, and also computes the standard deviation rg sta of the first parameter over the different pixel points.
  • the terminal adds the second parameters yb of the different pixel points of the reference image frame and divides the sum by the total number of pixels to obtain the average value yb mean of the second parameter, and also computes the standard deviation yb sta of the second parameter over the different pixel points.
  • the terminal adds the square of rg mean and the square of yb mean and takes the square root to obtain the third parameter a, and adds the square of rg sta and the square of yb sta and takes the square root to obtain the fourth parameter b.
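The color-richness computation above can be sketched as follows. This is an illustration under stated assumptions: rg is taken as R − G and yb as (R + G)/2 − B, consistent with the descriptions of the first and second parameters, and how a and b are finally fused into one richness score is not specified in this excerpt (one common choice in the literature is b + 0.3·a):

```python
from math import sqrt

def colorfulness_params(pixels):
    """pixels: iterable of (R, G, B) tuples. Returns (a, b), where a
    combines the channel-difference means and b the standard deviations,
    as described for the third and fourth parameters above."""
    rg = [r - g for r, g, _ in pixels]            # first parameter per pixel
    yb = [(r + g) / 2 - b for r, g, b in pixels]  # second parameter per pixel
    n = len(rg)
    rg_mean, yb_mean = sum(rg) / n, sum(yb) / n
    rg_sta = sqrt(sum((v - rg_mean) ** 2 for v in rg) / n)
    yb_sta = sqrt(sum((v - yb_mean) ** 2 for v in yb) / n)
    a = sqrt(rg_mean ** 2 + yb_mean ** 2)
    b = sqrt(rg_sta ** 2 + yb_sta ** 2)
    return a, b
```

For a purely gray image (R = G = B everywhere) both parameters are zero, matching the intuition that such a frame has no color richness.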
  • the terminal performs image recognition on the reference image frame to obtain target objects included in the reference image frame, where the target objects include human faces, pets, and buildings.
  • the terminal sets different weights for the reference image frames according to the recognized objects; for example, a weight of 1 is set for a reference image frame containing a human face, a weight of 0.8 for a reference image frame containing a pet, and a weight of 0.6 for a reference image frame containing a building. The terminal uses the weight to represent the content information of the reference image frame.
  • the terminal can also identify completeness information and repetition information of the reference image frame, where the completeness information is used to indicate whether a part of the reference image frame is missing, and the repetition information is used to indicate the degree of content repetition between the reference image frame and the other reference image frames.
  • the terminal fuses the weights corresponding to the objects included in the reference image frame with the completeness information and the repetition information of the reference image frame to obtain the content information of the reference image frame.
  • the terminal can preferentially determine, from the multiple reference image frames, target reference image frames that contain the target object, whose content is complete, and whose content repeats less.
  • the terminal fuses at least two items of definition information, color richness information, and content information of multiple reference image frames to obtain quality information of multiple reference image frames.
  • the terminal performs weighted summation on at least two items among the definition information, color richness information, and content information of the reference image frame to obtain the quality information of the reference image frame, where the weights corresponding to the definition information, the color richness information, and the content information of the reference image frame may be set according to the actual situation, which is not limited in this embodiment of the present disclosure.
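The weighted summation can be sketched as below. The weight values are placeholders (the disclosure explicitly leaves them to be set according to the actual situation), and the function name is mine:

```python
def quality_score(sharpness, colorfulness, content,
                  weights=(0.4, 0.3, 0.3)):
    """Weighted sum of the (assumed already-normalized) quality
    components; the weight values here are only placeholders."""
    w1, w2, w3 = weights
    return w1 * sharpness + w2 * colorfulness + w3 * content
```

If only two of the three items are used, the corresponding weight can simply be set to zero.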
  • the terminal determines, from the multiple reference image frames, multiple image frames whose quality information meets the target condition.
  • the quality information conforming to the target condition means that the quality information is greater than a quality information threshold, or that the quality information is among the K highest values among the multiple reference image frames, where K is a positive integer.
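The two target conditions (a threshold or top-K selection) can be sketched as one helper; the name and interface are illustrative:

```python
def select_frames(frames, quality, threshold=None, k=None):
    """Keep frames whose quality exceeds `threshold`, or the `k` frames
    with the highest quality (exactly one of the two modes is used)."""
    scored = list(zip(frames, quality))
    if threshold is not None:
        return [f for f, q in scored if q > threshold]
    return [f for f, _ in
            sorted(scored, key=lambda p: p[1], reverse=True)[:k]]
```

Either mode narrows the reference frames down to the multiple image frames from which the target image frame is later chosen.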
  • the terminal determines a target image frame from a plurality of image frames according to the processing progress of the video material.
  • in response to the current processing progress being that the terminal starts to process any image frame among the plurality of image frames, that image frame is determined as the target image frame. That is, in response to starting to process any one of the plurality of image frames, the terminal determines that image frame as the target image frame.
  • the terminal can determine the image frame being processed as the target image frame when processing at least one video material.
  • the user sets, through the video processing application, the graphics to be added to the video material and selects the image frames to which they are to be added, and then uses the video processing application to add the graphics to those image frames in the video material.
  • in the process of adding graphics to an image frame, the terminal determines that image frame as the target image frame; when the user sees the image frame, the user knows that the processing application is adding graphics to it, making the display of the processing progress more intuitive.
  • taking the processing as the terminal synthesizing multiple video materials as an example, after the user sets the video materials to be synthesized through the video processing application, the video materials are synthesized through the video processing application.
  • during synthesis, the terminal determines the image frame currently being synthesized as the target image frame; when the user sees the image frame, the user knows which image frame the processing application is currently synthesizing, making the display of the processing progress more intuitive.
  • the terminal determines the processed image frame as the target image frame. That is, in response to the completion of processing any one of the multiple image frames, the terminal determines the processed image frame as the target image frame.
  • the user can use the video processing application to add graphics to image frames in the video material.
  • the terminal determines that the image frame to which the graphics have been added is the target image frame.
  • the terminal determines the display time of multiple image frames during the processing according to the processing progress of the video material.
  • the terminal determines the display time corresponding to the processing progress, and determines the image frame corresponding to the display time as the target image frame.
  • the terminal can determine the image frame corresponding to the processing progress in real time during the processing process, the display of the processing progress is more intuitive, and the efficiency of human-computer interaction is higher.
  • the terminal determines the display time of the target image frame according to the number of image frames, the order of the target image frame among the multiple image frames, and the processing progress. In some embodiments, there are 100 image frames and the order of the target image frame among the 100 image frames is 28; then, when the progress of processing the video material reaches 28%, the terminal determines the image frame with order 28 as the target image frame. The order of the target image frame among the multiple image frames is determined according to the time at which the terminal acquired the target image frame.
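The mapping from processing progress to the frame order in that example can be sketched as follows (function name and the rounding/clamping choices are assumptions):

```python
def target_frame_order(progress_percent, num_frames):
    """Map the processing progress (in percent) to the order of the
    image frame to display: with 100 frames, 28% selects frame 28."""
    order = round(progress_percent / 100 * num_frames)
    return max(1, min(num_frames, order))
```

With 100 frames the mapping is one-to-one with whole-percent progress, so each 1% change in progress advances the display by exactly one frame.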
  • steps S401-S405 are described taking the terminal as the execution subject as an example; in other possible implementations, the server can also serve as the execution subject.
  • the target image frame can be sent to the terminal for display by the terminal.
  • the terminal displays the target image frame on the processing progress display interface.
  • the display duration of the target image frame is the target duration, wherein the target duration is determined by the terminal according to the progress of the processing.
  • the terminal sets the target duration to be the time taken for the processing progress to change by one step. For example, there are 100 image frames, and the processing progress corresponding to the currently displayed image frame is 1%; in response to the processing progress changing to 2%, the terminal displays the image frame corresponding to the 2% processing progress, and the target duration is the time taken for the processing progress to change by 1%.
  • the target duration can also be determined by the terminal according to the processing time of the image frame.
  • in response to the processing progress reaching 1%, the terminal displays the image frame corresponding to the 1% processing progress, and in response to the processing progress reaching 2%, the terminal displays the image frame corresponding to the 2% processing progress.
  • the time interval between displaying the image frame corresponding to the 1% processing progress and displaying the image frame corresponding to the 2% processing progress is the target duration.
  • the terminal can also determine the target duration in other ways, for example, determining the time interval between the completion of processing of the image frame corresponding to the 1% processing progress and the completion of processing of the image frame corresponding to the 2% processing progress as the target duration; this is not limited in this embodiment of the present disclosure.
  • the terminal can set a duration threshold for the target duration; in response to the target duration being greater than or equal to the duration threshold, the terminal displays the image frame according to the target duration, and in response to the target duration being less than the duration threshold, the terminal displays the image frame according to the duration threshold.
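This clamping rule reduces to taking the larger of the two durations. A minimal sketch (names are illustrative):

```python
def display_duration(target_duration_s, min_duration_s):
    """Show each frame for the target duration, but never for less than
    the duration threshold, so fast-changing progress stays readable."""
    return max(target_duration_s, min_duration_s)
```

Without the threshold, a burst of rapid progress changes could flash frames too quickly for the user to perceive.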
  • the terminal crops the target image frame, and displays the cropped target image frame on the image frame display area of the processing progress display interface.
  • the terminal determines a target area, and the target area includes a target object, where the target object is a human face, a pet, or a building; accordingly, the target area is an area including a human face, an area including a pet, or an area including a building.
  • the content of the target area is not limited in this embodiment of the present disclosure.
  • the terminal performs image recognition on the target image frame to obtain the target area in the target image frame.
  • the terminal deletes the part outside the target area, that is, the terminal deletes the part outside the target area in the target image frame. This can make the display size of the image frame corresponding to the processing progress more appropriate.
  • the processing progress information and the manual editing button can also be displayed on the processing progress display interface of the processing application.
  • the user can click the manual editing button to intervene in the processing process, for example, adding display elements to the target image frame, stopping the processing process, or adding other video materials to the video material being processed; such operations are not limited in this embodiment of the present disclosure.
  • 501 is the processing progress display interface of the processing application
  • 502 is the target image frame
  • 503 is the progress information of the processing
  • 504 is the manual editing button.
  • in response to a change in the processing progress, the terminal controls the first image frame to move out of the image frame display area of the processing progress display interface, where the first image frame is an image frame corresponding to the processing progress before the change.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
  • the display of the processing progress is more intuitive, and the efficiency of human-computer interaction is higher.
  • 601 is a processing progress display interface of a video processing application
  • 602 is a first image frame
  • 603 is an image frame display area
  • 604 is a second image frame.
  • the terminal moves the first image frame 602 out of the image frame display area 603 and displays the second image frame 604 in the image frame display area 603 .
  • in response to a change in the processing progress, the terminal controls the first image frame to move out of the image frame display area of the processing progress display interface, where the first image frame is an image frame corresponding to the processing progress before the change. While the first image frame is moving, the terminal controls the second image frame to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • 701 is a processing progress display interface of a video processing application
  • 702 is a first image frame
  • 703 is an image frame display area
  • 704 is a second image frame.
  • the terminal moves the first image frame 702 out of the image frame display area 703, and in the process of moving out the first image frame 702, controls the second image frame 704 to follow the first image frame 702 into the image frame display area 703 until the second image frame 704 completely enters the image frame display area 703.
  • in response to a change in the processing progress, the terminal cancels the display of the first image frame, where the first image frame is an image frame corresponding to the processing progress before the change.
  • on the image frame display area of the processing progress display interface, the terminal displays a second image frame, and the second image frame is an image frame corresponding to the changed processing progress.
  • 801 is a processing progress display interface of a video processing application
  • 802 is a first image frame
  • 803 is an image frame display area
  • 804 is a second image frame.
  • the terminal cancels the display of the first image frame 802 and displays the second image frame 804 on the processing progress display interface.
  • the terminal may combine at least one image frame to generate a Graphics Interchange Format (GIF) image and display the GIF image on the processing progress display interface of the application; the user can perceive the processing progress by viewing the GIF image.
  • in the process of processing the video material, the terminal can display the image frame corresponding to the processing progress to the user as the processing progress changes, so that the user can know the processing progress of the video material by viewing the image frame.
  • the present disclosure displays the processing progress to the user through image frames, which not only provides a more vivid and intuitive display effect and higher human-computer interaction efficiency, but also reduces the user's perception of the time cost of waiting.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment.
  • the apparatus includes an acquisition unit 901 , a determination unit 902 and a display unit 903 .
  • the obtaining unit 901 is configured to obtain a plurality of image frames from the video material in response to the processing instruction for the video material.
  • the determining unit 902 is configured to determine a target image frame from a plurality of image frames according to the processing progress of the video material, where the target image frame is an image frame corresponding to the processing progress.
  • the display unit 903 is configured to display the target image frame on the processing progress display interface.
  • the determining unit is configured to, in response to starting to process any one of the plurality of image frames, determine any one of the image frames as the target image frame.
  • the determining unit is configured to, in response to the completion of processing any image frame in the plurality of image frames, determine any image frame that has been processed as the target image frame.
  • the acquisition unit is configured to acquire a plurality of reference image frames from the video material. Quality information for a plurality of reference image frames is determined. Based on the quality information of the plurality of reference image frames, a plurality of image frames whose quality information meets the target condition are determined from the plurality of reference image frames.
  • the obtaining unit is configured to obtain at least two items of sharpness information, color richness information and content information of the plurality of reference image frames.
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the definition information, the color richness information and the content information of the multiple reference image frames.
  • the display unit is configured to crop the target image frame, and display the cropped target image frame on the image frame display area of the processing progress display interface.
  • the display unit is configured to determine a target area, the target area containing the target object. Delete the part outside the target area.
  • the display unit is configured to control the first image frame to move out of the image frame display area of the processing progress display interface in response to the change in the processing progress, where the first image frame is an image corresponding to the processing progress before the change frame.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
  • the display unit is configured to control the first image frame to move out of the image frame display area of the processing progress display interface in response to the change in the processing progress, where the first image frame is an image corresponding to the processing progress before the change frame. While the first image frame is moving, the second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the display unit is configured to, in response to a change in the processing progress, cancel the display of the first image frame, where the first image frame is an image frame corresponding to the processing progress before the change.
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • in the process of processing the video material, the terminal can display the image frame corresponding to the processing progress to the user as the processing progress changes, so that the user can know the processing progress of the video material by viewing the image frame.
  • the present disclosure displays the processing progress to the user through image frames, which not only provides a more vivid and intuitive display effect and higher human-computer interaction efficiency, but also reduces the user's perception of the time cost of waiting.
  • Embodiments of the present disclosure provide an electronic device, including one or more processors;
  • Memory for storing program code executable by the processor.
  • processor configured to perform the following steps:
  • a plurality of image frames are obtained from the video material.
  • a target image frame is determined from a plurality of image frames, and the target image frame is an image frame corresponding to the processing progress.
  • the target image frame is displayed.
  • the processor is configured to, in response to initiating processing of any one of the plurality of image frames, determine any one of the image frames as the target image frame.
  • the processor is configured to: in response to the completion of processing any one of the plurality of image frames, determine any processed image frame as the target image frame.
  • the processor is configured to obtain a plurality of reference image frames from the video material.
  • Quality information for a plurality of reference image frames is determined.
  • a plurality of image frames are determined from the plurality of reference image frames, and the quality information of the plurality of image frames conforms to the target condition.
  • the processor is configured to:
  • At least two items of definition information, color richness information, and content information of a plurality of reference image frames are acquired.
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the definition information, the color richness information and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • a target area is determined, and the target area contains the target object; the part outside the target area is deleted.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • the second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the display of the first image frame is canceled, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
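The two target-frame determination strategies above (update when processing of a frame begins, or only once a frame's processing completes) can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the function names are assumptions.

```python
# Illustrative sketch of the two target-frame selection strategies: the
# target image frame may be updated when processing of a frame starts, or
# only after processing of that frame has completed.

def target_on_start(frames, started_index):
    """Return the target frame as soon as frame `started_index` starts processing."""
    return frames[started_index]

def target_on_completion(frames, completed_indices):
    """Return the most recently completed frame, or None if none finished yet."""
    if not completed_indices:
        return None
    return frames[max(completed_indices)]
```

In the first variant the displayed frame leads the actual work slightly; in the second it lags, but the user only ever sees frames whose processing is known to be done.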
  • the electronic device may be implemented as a terminal.
  • the structure of the terminal will be described:
  • FIG. 10 is a schematic structural diagram of a terminal according to an exemplary embodiment.
  • the terminal 1000 may be a terminal used by a user.
  • the terminal 1000 can be: a smart phone, a tablet computer, a notebook computer or a desktop computer.
  • Terminal 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the terminal 1000 includes: a processor 1001 and a memory 1002 .
  • the processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1001 may be implemented in at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), and PLA (Programmable Logic Array, programmable logic array).
  • the processor 1001 may also include a main processor and a coprocessor.
  • the main processor, also called the CPU (Central Processing Unit, central processing unit), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1001 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1001 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. At least one piece of program code is stored in the memory 1002, and the at least one piece of program code is loaded and executed by the processor 1001 to implement the image processing methods provided by the above method embodiments. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices.
  • the terminal 1000 may optionally further include: a peripheral device interface 1003 and at least one peripheral device.
  • the processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1004 , a display screen 1005 , a camera assembly 1006 , an audio circuit 1007 , a positioning assembly 1008 and a power supply 1009 .
  • the peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002 .
  • in some embodiments, the processor 1001, the memory 1002, and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • radio frequency circuitry 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and the like.
  • the radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 1004 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in the present disclosure.
  • the display screen 1005 is used for displaying UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • when the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to acquire touch signals on or above its surface.
  • the touch signal can be input to the processor 1001 as a control signal for processing.
  • the display screen 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • in some embodiments, there is one display screen 1005, which is provided on the front panel of the terminal 1000; in other embodiments, there are at least two display screens 1005, which are respectively arranged on different surfaces of the terminal 1000 or adopt a folded design; in still other embodiments, the display screen 1005 is a flexible display screen disposed on a curved or folding surface of the terminal 1000. The display screen 1005 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1005 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • the camera assembly 1006 is used to capture images or video.
  • camera assembly 1006 includes a front-facing camera and a rear-facing camera.
  • the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • in some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize the background blur function, or the main camera and the wide-angle camera are integrated to realize panoramic shooting, VR (Virtual Reality, virtual reality) shooting, or other integrated shooting functions.
  • the camera assembly 1006 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 1007 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or to the radio frequency circuit 1004 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1007 may also include a headphone jack.
  • the positioning component 1008 is used to locate the current geographic location of the terminal 1000 to implement navigation or LBS (Location Based Service).
  • the positioning component 1008 may be a positioning component based on the GPS (Global Positioning System, global positioning system) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 1009 is used to power various components in the terminal 1000 .
  • the power source 1009 may be alternating current, direct current, disposable batteries or rechargeable batteries. Where the power source 1009 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology.
  • the terminal 1000 further includes one or more sensors 1010 .
  • the one or more sensors 1010 include, but are not limited to, an acceleration sensor 1011 , a gyro sensor 1012 , a pressure sensor 1013 , a fingerprint sensor 1014 , an optical sensor 1015 and a proximity sensor 1016 .
  • the acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1000 . In some embodiments, the acceleration sensor 1011 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011 .
  • the acceleration sensor 1011 can also be used for game or user movement data collection.
  • the gyroscope sensor 1012 can detect the body direction and rotation angle of the terminal 1000 , and the gyroscope sensor 1012 can cooperate with the acceleration sensor 1011 to collect 3D actions of the user on the terminal 1000 .
  • the processor 1001 can implement the following functions according to the data collected by the gyroscope sensor 1012 : motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1013 may be disposed on the side frame of the terminal 1000 and/or the lower layer of the display screen 1005 .
  • the processor 1001 can perform left and right hand identification or shortcut operations according to the holding signal collected by the pressure sensor 1013 .
  • the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005 .
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1014 may be provided on the front, back or side of the terminal 1000 . In the case where the terminal 1000 is provided with a physical button or a manufacturer's logo, the fingerprint sensor 1014 may be integrated with the physical button or the manufacturer's logo.
  • the optical sensor 1015 is used to collect ambient light intensity.
  • the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015 . In some embodiments, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015 .
  • the proximity sensor 1016, also called a distance sensor, is usually disposed on the front panel of the terminal 1000 .
  • the proximity sensor 1016 is used to collect the distance between the user and the front of the terminal 1000 .
  • when the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1016 detects that the distance gradually increases, the processor 1001 controls the display screen 1005 to switch from the off-screen state to the bright-screen state.
  • the structure shown in FIG. 10 does not constitute a limitation on the terminal 1000, which may include more or fewer components than those shown, combine some components, or adopt a different component arrangement.
  • the electronic device may be implemented as a server, and the structure of the server will be described below:
  • FIG. 11 is a schematic structural diagram of a server 1100 according to an exemplary embodiment.
  • the server 1100 may vary greatly due to different configurations or performance, and may include one or more processors (Central Processing Unit, CPU) 1101 and one or more memories 1102; the storage medium included in the memory 1102 may be a read-only memory (Read-Only Memory, ROM) 1103 or a random access memory (RAM) 1104.
  • the server may also have components for input and output, such as a wired or wireless network interface 1105 and an input/output interface 1106; the server 1100 may also include a mass storage device 1107 and other components for realizing device functions, which will not be repeated here.
  • a non-volatile storage medium including program code is also provided, such as a memory 1102 including program code, and the above program code can be executed by the processor 1101 of the server 1100 to perform the following steps:
  • a plurality of image frames are obtained from the video material.
  • a target image frame is determined from a plurality of image frames, and the target image frame is an image frame corresponding to the processing progress.
  • the target image frame is displayed.
  • the processor is configured to, in response to initiating processing of any one of the plurality of image frames, determine any one of the image frames as the target image frame.
  • the processor is configured to: in response to the completion of processing any one of the plurality of image frames, determine any processed image frame as the target image frame.
  • the processor is configured to obtain a plurality of reference image frames from the video material.
  • Quality information for a plurality of reference image frames is determined.
  • a plurality of image frames are determined from the plurality of reference image frames, and the quality information of the plurality of image frames conforms to the target condition.
  • the processor is configured to:
  • At least two items of definition information, color richness information, and content information of a plurality of reference image frames are acquired.
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the definition information, the color richness information and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • the target area contains the target object.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • the second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the display of the first image frame is canceled, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.
  • the storage medium is a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium is ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage devices, etc. .
  • a computer program product comprising one or more instructions executable by a processor of an electronic device to accomplish the following steps:
  • a plurality of image frames are obtained from the video material.
  • a target image frame is determined from a plurality of image frames, and the target image frame is an image frame corresponding to the processing progress.
  • the target image frame is displayed.
  • the processor is configured to, in response to initiating processing of any one of the plurality of image frames, determine any one of the image frames as the target image frame.
  • the processor is configured to: in response to the completion of processing any one of the plurality of image frames, determine any processed image frame as the target image frame.
  • the processor is configured to obtain a plurality of reference image frames from the video material.
  • Quality information for a plurality of reference image frames is determined.
  • a plurality of image frames are determined from the plurality of reference image frames, and the quality information of the plurality of image frames conforms to the target condition.
  • the processor is configured to:
  • At least two items of definition information, color richness information, and content information of a plurality of reference image frames are acquired.
  • the quality information of the multiple reference image frames is obtained by fusing at least two items of the definition information, the color richness information and the content information of the multiple reference image frames.
  • the processor is configured to:
  • the target image frame is cropped, and the cropped target image frame is displayed on the image frame display area of the processing progress display interface.
  • the processor is configured to:
  • the target area contains the target object.
  • the part outside the target area is deleted.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed on the image frame display area of the processing progress display interface, and the second image frame is an image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the first image frame is controlled to move out of the image frame display area of the processing progress display interface, and the first image frame is the image frame corresponding to the processing progress before the change.
  • the second image frame is controlled to enter the image frame display area of the processing progress display interface, and the second image frame is the image frame corresponding to the changed processing progress.
  • the processor is configured to:
  • the display of the first image frame is canceled, and the first image frame is the image frame corresponding to the processing progress before the change.
  • a second image frame is displayed, and the second image frame is an image frame corresponding to the changed processing progress.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method and an electronic device, and belongs to the field of multimedia technology. The method includes: in response to a processing instruction for a video material, obtaining a plurality of image frames from the video material; determining, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress; and displaying the target image frame on a processing progress display interface.

Description

Image processing method and electronic device
The present disclosure claims priority to Chinese patent application No. 202010814852.X filed on August 13, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of multimedia technology, and in particular to an image processing method and an electronic device.
Background
With the development of computer technology, more and more users watch videos for entertainment. For a video author, in the process of producing a video, one or more pieces of video material need to be processed through a video processing application, for example, adding drawn graphics to a piece of video material or splicing multiple pieces of video material into a complete video.
Summary
The present disclosure provides an image processing method and an electronic device. The technical solution of the present disclosure is as follows:
In one aspect, an image processing method is provided, including:
in response to a processing instruction for a video material, obtaining a plurality of image frames from the video material;
determining, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress;
displaying the target image frame on a processing progress display interface.
In some embodiments, determining the target image frame from the plurality of image frames according to the processing progress of the video material includes:
in response to starting to process any image frame of the plurality of image frames, determining the image frame as the target image frame.
In some embodiments, determining the target image frame from the plurality of image frames according to the processing progress of the video material includes:
in response to completing the processing of any image frame of the plurality of image frames, determining the processed image frame as the target image frame.
In some embodiments, obtaining the plurality of image frames from the video material includes:
obtaining a plurality of reference image frames from the video material;
determining quality information of the plurality of reference image frames;
determining, based on the quality information of the plurality of reference image frames, the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames meeting a target condition.
In some embodiments, determining the quality information of the plurality of reference image frames includes:
acquiring at least two of definition information, color richness information, and content information of the plurality of reference image frames;
fusing the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
cropping the target image frame, and displaying the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, cropping the target image frame includes:
determining a target area, the target area containing a target object;
deleting the part outside the target area.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
in response to the first image frame completely moving out of the image frame display area of the processing progress display interface, displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
while the first image frame is moving, controlling a second image frame to enter the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, canceling the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change;
displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In one aspect, an image processing apparatus is provided, including:
an obtaining unit configured to obtain, in response to a processing instruction for a video material, a plurality of image frames from the video material;
a determining unit configured to determine, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress;
a display unit configured to display the target image frame on a processing progress display interface.
In some embodiments, the determining unit is configured to, in response to starting to process any image frame of the plurality of image frames, determine the image frame as the target image frame.
In some embodiments, the determining unit is configured to, in response to completing the processing of any image frame of the plurality of image frames, determine the processed image frame as the target image frame.
In some embodiments, the obtaining unit is configured to: obtain a plurality of reference image frames from the video material; determine quality information of the plurality of reference image frames; and determine, based on the quality information of the plurality of reference image frames, the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames meeting a target condition.
In some embodiments, the obtaining unit is configured to: acquire at least two of definition information, color richness information, and content information of the plurality of reference image frames; and fuse the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, the display unit is configured to crop the target image frame and display the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, the display unit is configured to determine a target area containing a target object, and delete the part outside the target area.
In some embodiments, the display unit is configured to: in response to a change in the processing progress, control a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change; and in response to the first image frame completely moving out of the image frame display area, display a second image frame on the image frame display area, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the display unit is configured to: in response to a change in the processing progress, control a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change; and while the first image frame is moving, control a second image frame to enter the image frame display area, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the display unit is configured to: in response to a change in the processing progress, cancel the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change; and display a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In one aspect, an electronic device is provided, the electronic device including:
one or more processors;
a memory for storing program code executable by the processor;
wherein the processor is configured to perform the following steps:
in response to a processing instruction for a video material, obtaining a plurality of image frames from the video material;
determining, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress;
displaying the target image frame on a processing progress display interface.
In some embodiments, the processor is configured to perform the following step:
in response to starting to process any image frame of the plurality of image frames, determining the image frame as the target image frame.
In some embodiments, the processor is configured to perform the following step:
in response to completing the processing of any image frame of the plurality of image frames, determining the processed image frame as the target image frame.
In some embodiments, the processor is configured to perform the following steps:
obtaining a plurality of reference image frames from the video material;
determining quality information of the plurality of reference image frames;
determining, based on the quality information of the plurality of reference image frames, the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames meeting a target condition.
In some embodiments, the processor is configured to perform the following steps:
acquiring at least two of definition information, color richness information, and content information of the plurality of reference image frames;
fusing the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, the processor is configured to perform the following step:
cropping the target image frame, and displaying the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, the processor is configured to perform the following steps:
determining a target area, the target area containing a target object;
deleting the part outside the target area.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
in response to the first image frame completely moving out of the image frame display area of the processing progress display interface, displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
while the first image frame is moving, controlling a second image frame to enter the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, canceling the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change;
displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In one aspect, a non-volatile storage medium is provided. When the program code in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
in response to a processing instruction for a video material, obtaining a plurality of image frames from the video material;
determining, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress;
displaying the target image frame on a processing progress display interface.
In some embodiments, the processor is configured to perform the following step:
in response to starting to process any image frame of the plurality of image frames, determining the image frame as the target image frame.
In some embodiments, the processor is configured to perform the following step:
in response to completing the processing of any image frame of the plurality of image frames, determining the processed image frame as the target image frame.
In some embodiments, the processor is configured to perform the following steps:
obtaining a plurality of reference image frames from the video material;
determining quality information of the plurality of reference image frames;
determining, based on the quality information of the plurality of reference image frames, the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames meeting a target condition.
In some embodiments, the processor is configured to perform the following steps:
acquiring at least two of definition information, color richness information, and content information of the plurality of reference image frames;
fusing the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, the processor is configured to perform the following step:
cropping the target image frame, and displaying the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, the processor is configured to perform the following steps:
determining a target area, the target area containing a target object;
deleting the part outside the target area.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
in response to the first image frame completely moving out of the image frame display area of the processing progress display interface, displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
while the first image frame is moving, controlling a second image frame to enter the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, canceling the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change;
displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In one aspect, a computer program product is provided. The computer program product stores one or more pieces of program code, and the one or more pieces of program code can be executed by a processor of an electronic device to complete the following steps:
in response to a processing instruction for a video material, obtaining a plurality of image frames from the video material;
determining, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress;
displaying the target image frame on a processing progress display interface.
In some embodiments, the processor is configured to perform the following step:
in response to starting to process any image frame of the plurality of image frames, determining the image frame as the target image frame.
In some embodiments, the processor is configured to perform the following step:
in response to completing the processing of any image frame of the plurality of image frames, determining the processed image frame as the target image frame.
In some embodiments, the processor is configured to perform the following steps:
obtaining a plurality of reference image frames from the video material;
determining quality information of the plurality of reference image frames;
determining, based on the quality information of the plurality of reference image frames, the plurality of image frames from the plurality of reference image frames, the quality information of the plurality of image frames meeting a target condition.
In some embodiments, the processor is configured to perform the following steps:
acquiring at least two of definition information, color richness information, and content information of the plurality of reference image frames;
fusing the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, the processor is configured to perform the following step:
cropping the target image frame, and displaying the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, the processor is configured to perform the following steps:
determining a target area, the target area containing a target object;
deleting the part outside the target area.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
in response to the first image frame completely moving out of the image frame display area of the processing progress display interface, displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
while the first image frame is moving, controlling a second image frame to enter the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, the processor is configured to perform the following steps:
in response to a change in the processing progress, canceling the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change;
displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In the present disclosure, in the process of processing the video material, the terminal can show the user the image frame corresponding to the processing progress as the processing progress changes, so that the user can learn the processing progress of the video material by viewing the image frame. Compared with showing the processing progress to the user in the form of a progress bar or a percentage, the present disclosure shows the processing progress through image frames, which is not only more vivid and intuitive and more efficient in human-computer interaction, but also reduces the user's perception of the time cost while waiting.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a video material selection interface according to an exemplary embodiment;
FIG. 3 is a flowchart of an image processing method according to an exemplary embodiment;
FIG. 4 is a flowchart of an image processing method according to an exemplary embodiment;
FIG. 5 is a schematic diagram of a processing progress display interface according to an exemplary embodiment;
FIG. 6 is a schematic diagram of a processing progress display interface according to an exemplary embodiment;
FIG. 7 is a schematic diagram of a processing progress display interface according to an exemplary embodiment;
FIG. 8 is a schematic diagram of a processing progress display interface according to an exemplary embodiment;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 10 is a schematic structural diagram of a terminal according to an exemplary embodiment;
FIG. 11 is a schematic structural diagram of a server according to an exemplary embodiment.
Detailed Description
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein.
The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. On the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The user information involved in the present disclosure may be information authorized by the user or fully authorized by all parties.
FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an exemplary embodiment. As shown in FIG. 1, the environment includes a terminal 101 and a server 102.
In some embodiments, the terminal 101 is at least one of a smart phone, a smart watch, a desktop computer, a portable computer, a laptop computer, and the like. An application supporting video processing may be installed and run on the terminal 101, and the user can log in to the application through the terminal 101 to process videos; for example, the user selects multiple videos on the terminal 101 and synthesizes the multiple videos into one video through the application. The terminal 101 can be connected to the server 102 through a wireless or wired network.
In some embodiments, the terminal 101 is one of multiple terminals, and this embodiment only takes the terminal 101 as an example. Those skilled in the art will appreciate that the number of terminals can be larger or smaller. For example, there may be only a few terminals 101, or dozens or hundreds of terminals 101, or more. The embodiments of the present disclosure do not limit the number or device type of the terminals 101.
In some embodiments, the server 102 is at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. The server 102 can be used to determine the quality information of image frames, and can also be used to process videos sent by the terminal 101.
The number of the servers 102 can be larger or smaller, which is not limited in the embodiments of the present disclosure. Of course, in some embodiments, the server 102 also includes other functional servers to provide more comprehensive and diverse services.
The image processing method provided by the present disclosure can be applied to various scenarios. For ease of understanding, the application scenarios that may be involved in the present disclosure are described first. It should be noted that in the following description, the terminal is the terminal 101 in the above implementation environment, and the server is the server 102 in the above implementation environment.
The image processing method provided by the present disclosure can be applied to the process of synthesizing multiple video materials. In some embodiments, the user wants to synthesize three video materials, video material A, video material B, and video material C, into one video D. The user starts an application supporting video processing through the terminal, imports video materials A, B, and C into the application, and performs the synthesis of the three video materials through the application. In the process of synthesizing the video materials through the application, the terminal can display the image frames in video material A, video material B, and video material C on the processing progress display interface of the application, and the user can learn the current synthesis progress of the video materials through the image frames. Referring to FIG. 2, 201 is the video material selection interface provided by the application supporting video material processing, and 202 is the content option: "All" means displaying all video materials and pictures, "Video" means displaying all video materials, and "Picture" means displaying all pictures; the user selects different content by clicking different content options. 203 is the cover of a video material to be processed; the total duration of the video material is displayed on the cover, and the number in the upper right corner of the cover indicates the order in which the video material was selected. The user determines the video materials to be processed by selecting different video material covers. 204 is a time display box for displaying the total duration of the video materials selected by the user. 205 is the cover of a video material selected by the user; the user can deselect the video material by clicking the "×" in the upper right corner of the cover. After the selection is completed, the user clicks the "one-click generation" button, and in response to the click operation on the "one-click generation" button, the terminal synthesizes the video materials selected by the user through the application supporting video processing. Alternatively, after selecting the video materials, the user clicks the "Next" button to select different filters and synthesis modes for the video materials; the number after "Next" indicates the number of video materials selected by the user.
The image processing method provided by the present disclosure can also be applied to the process of processing a single video material. In some embodiments, through the application supporting video material processing, the user adds different display elements to different image frames of the video material, or performs different processing on the image frames; for example, a line of subtitles is added to image frame A, a pattern is added to image frame B, image frame C is sharpened, and image frame D is mosaicked. When the terminal performs the above processing on the video material through the application supporting video processing, in the case where image frame A, image frame B, image frame C, and image frame D are the target image frames, the terminal displays image frame A, image frame B, image frame C, and image frame D on the interface of the application according to the processing progress of the video material, and the user can learn the current processing progress of the video material by viewing the target image frames.
In some embodiments, in addition to displaying image frame A, image frame B, image frame C, and image frame D on the processing progress display interface of the application, the terminal can also display the processed image frame A, image frame B, image frame C, and image frame D on the processing progress display interface of the application.
In the embodiments of the present disclosure, the execution subject of the image processing method may be the terminal or the server. In the case where the execution subject is the server, during execution of the method, the user sends the video material to the server through the terminal, the server processes the video material to obtain the target image frame, and the target image frame is displayed through the terminal. For ease of understanding, the following description takes the terminal as the execution subject as an example.
FIG. 3 is a flowchart of an image processing method according to an exemplary embodiment. As shown in FIG. 3, the method includes the following steps.
S301: In response to a processing instruction for a video material, the terminal obtains a plurality of image frames from the video material.
In some embodiments, a video material includes a plurality of image frames.
S302: The terminal determines, according to the processing progress of the video material, a target image frame from the plurality of image frames, the target image frame being the image frame corresponding to the processing progress.
In some embodiments, the processing progress is used to describe which of the plurality of image frames the terminal has processed up to.
S303: The terminal displays the target image frame on a processing progress display interface.
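As a rough illustration of how a processing progress can map to one of the extracted frames in steps S301 to S303, consider the following sketch. The function name and the linear progress-to-index mapping are assumptions for illustration, not the claimed implementation.

```python
def pick_target_frame(frames, progress):
    """Map a processing progress in [0, 1] to the corresponding image frame.

    `frames` are the frames extracted from the video material (S301); the
    returned frame is the target image frame for the current progress (S302),
    which would then be drawn on the progress display interface (S303).
    """
    if not frames:
        raise ValueError("no frames extracted from the video material")
    index = min(int(progress * len(frames)), len(frames) - 1)
    return frames[index]
```

As the progress value advances, successive calls return successive frames, which is exactly the behavior the user observes on the progress display interface.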
In the present disclosure, in the process of processing the video material, the terminal can show the user the image frame corresponding to the processing progress as the processing progress changes, so that the user can learn the processing progress of the video material by viewing the image frame. Compared with showing the processing progress to the user in the form of a progress bar or a percentage, the present disclosure shows the processing progress through image frames, which is not only more vivid and intuitive and more efficient in human-computer interaction, but also reduces the user's perception of the time cost while waiting.
In some embodiments, determining the target image frame from the plurality of image frames according to the processing progress of the video material includes:
in response to starting to process any image frame of the plurality of image frames, determining the image frame as the target image frame.
In some embodiments, determining the target image frame from the plurality of image frames according to the processing progress of the video material includes:
in response to completing the processing of any image frame of the plurality of image frames, determining the processed image frame as the target image frame.
In some embodiments, obtaining the plurality of image frames from the video material includes:
obtaining a plurality of reference image frames from the video material;
determining quality information of the plurality of reference image frames;
determining, based on the quality information of the plurality of reference image frames, the plurality of image frames whose quality information meets a target condition from the plurality of reference image frames.
In some embodiments, determining the quality information of the plurality of reference image frames includes:
acquiring at least two of definition information, color richness information, and content information of the plurality of reference image frames;
fusing the at least two of the definition information, the color richness information, and the content information of the plurality of reference image frames to obtain the quality information of the plurality of reference image frames.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
cropping the target image frame, and displaying the cropped target image frame on an image frame display area of the processing progress display interface.
In some embodiments, cropping the target image frame includes:
determining a target area, the target area containing a target object;
deleting the part outside the target area.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
in response to the first image frame completely moving out of the image frame display area of the processing progress display interface, displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, controlling a first image frame to move out of the image frame display area of the processing progress display interface, the first image frame being the image frame corresponding to the processing progress before the change;
while the first image frame is moving, controlling a second image frame to enter the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
In some embodiments, displaying the target image frame on the processing progress display interface includes:
in response to a change in the processing progress, canceling the display of a first image frame, the first image frame being the image frame corresponding to the processing progress before the change;
displaying a second image frame on the image frame display area of the processing progress display interface, the second image frame being the image frame corresponding to the changed processing progress.
图4是根据一示例性实施例示出的一种图像处理方法的流程图,如图4所示,包括以下步骤。
S401、响应于对视频素材的处理指令,终端从视频素材中获取多个参考图像帧。
在一些实施例中,视频素材为终端上存储的视频素材,或者为终端从互联网上获取的视频素材,或者为终端实时拍摄的视频素材。视频素材的数量可以为一个,也可以为多个,本公开实施例对于视频素材的来源和数量不做限定。
在一些实施例中,响应于对视频素材的处理指令,终端从视频素材中,每隔目标时间间隔获取参考图像帧,得到多个参考图像帧,在这种情况下,参考图像帧的数量与视频素材的总时长相关。
在这种实现方式下,终端能够从视频素材中,每隔目标时间间隔获取参考图像帧,这样可以保证终端获取的参考图像帧均匀分布在视频素材中。
举例来说,在终端对一个视频素材进行处理的情况下,响应于对该视频素材的处理指令,终端对该视频素材进行解码,得到该视频素材中的参考图像帧。例如,终端每隔M秒,从该视频素材中获取一个参考图像帧,得到多个参考图像帧,M为正整数。在一些实施例中,视频素材A的总时长为30秒,目标时间间隔为2秒,那么终端每隔2秒,从视频素材A中获取图像帧,总共获取到15个图像帧。
在一些实施例中,在终端对三个视频素材进行处理的情况下,响应于对三个视频素材的处理指令,终端对三个视频素材进行解码,得到三个视频素材中的参考图像帧。终端根据三个视频素材的总时长以及目标时间间隔M,确定获取参考图像帧的数量N。终端根据三个视频素材的处理顺序,先从第一个视频素材中获取参考图像帧。响应于从第一个视频素材中获取的参考图像帧的数量达到第一数量,终端从第二个视频素材中获取参考图像帧。响应于从第二个视频素材中获取的参考图像帧的数量达到第二数量,终端从第三个视频素材中获取参考图像帧,总共从第三个视频素材中获取第三数量的参考图像帧。其中,第一数量为第一个视频素材的总时长与目标时间间隔M的比值,第二数量为第二个视频素材的总时长与目标时间间隔M的比值,第三数量为第三个视频素材的总时长与目标时间间隔M的比值。在一些实施例中,终端对视频素材A、视频素材B以及视频素材C进行视频素材合成处理,视频素材A的总时长为30秒、视频素材B的总时长为20秒、视频素材C的总时长为10秒,视频素材A、视频素材B以及视频素材C的合成顺序为视频素材A+视频素材B+视频素材C。在目标时间间隔为2秒的情况下,终端确定视频素材A对应的第一数量为15、视频素材B对应的第二数量为10、视频素材C对应的第三数量为5。终端每隔2秒,从视频素材A中获取到15个参考图像帧。终端每隔2秒,从视频素材B中获取到10个参考图像帧。终端每隔2秒,从视频素材C中获取到5个参考图像帧,这样终端总共从视频素材A、视频素材B以及视频素材C中获取了30个参考图像帧。
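上述按目标时间间隔分配参考图像帧数量的计算过程,可以用如下示意性的Python片段表示(函数名与变量名为本文举例假设,并非本公开的正式实现):

```python
def allocate_frame_counts(durations, interval):
    """durations为按处理顺序排列的各视频素材总时长(秒),interval为目标时间间隔M(秒),
    返回从每个视频素材中获取参考图像帧的数量,即各总时长与M的比值。"""
    return [duration // interval for duration in durations]

# 对应正文示例:视频素材A/B/C的总时长分别为30、20、10秒,目标时间间隔为2秒
counts = allocate_frame_counts([30, 20, 10], 2)
print(counts)       # [15, 10, 5]
print(sum(counts))  # 30,即总共获取30个参考图像帧
```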
在一些实施例中,响应于对至少一个视频素材的处理指令,终端分别从至少一个视频素材中随机获取参考图像帧,得到多个参考图像帧,在这种情况下,终端从每个视频素材中随机获取参考图像帧的数量与该视频素材的总时长相关。
在这种实现方式下,终端从至少一个视频素材中获取参考图像帧的方式为随机获取,这样得到的参考图像帧更能从整体上反映每一个视频素材的特点。
举例来说,在终端对三个视频素材进行处理的情况下,响应于对三个视频素材的处理指令,终端对三个视频素材进行解码,得到三个视频素材中的参考图像帧。终端分别根据三个视频素材的总时长,确定从三个视频素材中获取参考图像帧的数量。终端分别从三个视频素材中随机获取参考图像帧,得到多个参考图像帧。例如,存在视频素材D、视频素材E以及视频素材F三个视频素材,视频素材D的总时长为10秒、视频素材E的总时长为5秒、视频素材F的总时长为15秒,获取参考图像帧的数量为6,那么终端根据视频素材D、视频素材E以及视频素材F的总时长,确定从视频素材D中获取参考图像帧的数量为2、从视频素材E中获取参考图像帧的数量为1,从视频素材F中获取参考图像帧的数量为3。终端随机从视频素材D中获取2个参考图像帧、从视频素材E中获取1个参考图像帧,从视频素材F中获取3个参考图像帧。
S402、终端获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项。
其中,终端获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项包括以下几种情况,终端获取多个参考图像帧的清晰度信息和色彩丰富度信息;终端获取多个参考图像帧的清晰度信息以及内容信息;终端获取多个参考图像帧的色彩丰富度信息以及内容信息;终端获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息。
为了便于理解,在下述说明过程中,以参考图像帧的数量为一个为例进行说明。
对于清晰度信息来说,在一些实施例中,终端将参考图像帧转化为灰度参考图像帧,执行目标函数对灰度参考图像帧的像素点的灰度值进行处理,得到该参考图像帧的清晰度信息。当然,在参考图像帧原本就为灰度图像帧的情况下,终端能够直接执行目标函数,对参考图像帧的像素点的灰度值进行处理,得到该参考图像帧的清晰度信息。目标函数为清晰度信息获取函数,例如为布伦纳(Brenner)梯度函数、Tenengrad梯度函数、拉普拉斯(Laplacian)梯度函数、灰度方差(SMD)函数、灰度方差乘积(SMD2)函数以及熵函数等,本公开实施例对于目标函数的类型不做限定。
首先对终端将参考图像帧转化为灰度参考图像帧的方法进行说明。
在一些实施例中,终端采用公式(1)、公式(2)、公式(3)或者公式(4)中的任一个将彩色参考图像帧的像素点的颜色通道值转化为灰度参考图像帧的像素点的灰度值。
Gray=R*0.299+G*0.587+B*0.114    (1)
Gray=(R*299+G*587+B*114+500)/1000    (2)
Gray=(R^2.2*0.2973+G^2.2*0.6274+B^2.2*0.0753)^(1/2.2)    (3)
Gray=(R+B+G)/3    (4)
其中,Gray为灰度参考图像帧的像素点的灰度值,R为彩色参考图像帧的像素点的红色通道值,G为彩色参考图像帧的像素点的绿色通道值,B为彩色参考图像帧的像素点的蓝色通道值。
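上述公式(1)至公式(4)可以用如下示意性的Python片段表示(函数名为本文举例假设,输入为单个像素点的R、G、B通道值):

```python
def gray_weighted(r, g, b):
    # 公式(1):按人眼敏感度加权
    return r * 0.299 + g * 0.587 + b * 0.114

def gray_integer(r, g, b):
    # 公式(2):整数近似,避免浮点运算
    return (r * 299 + g * 587 + b * 114 + 500) // 1000

def gray_gamma(r, g, b):
    # 公式(3):考虑伽马校正的转化
    return (r ** 2.2 * 0.2973 + g ** 2.2 * 0.6274 + b ** 2.2 * 0.0753) ** (1 / 2.2)

def gray_average(r, g, b):
    # 公式(4):三通道简单平均
    return (r + g + b) / 3

# 白色像素点(255, 255, 255)在四个公式下的灰度值均为255
print(gray_weighted(255, 255, 255))  # 255.0
print(gray_integer(255, 255, 255))   # 255
```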
下面对终端执行目标函数对灰度参考图像帧进行处理,得到参考图像帧的清晰度信息的方法进行说明:
以目标函数为布伦纳(Brenner)梯度函数为例,参见公式(5),终端将灰度参考图像帧的像素点的灰度值代入Brenner梯度函数,得到参考图像帧的清晰度信息。
D(f)=Σ_x Σ_y |f(x+2,y)-f(x,y)|²    (5)
其中,D(f)为参考图像帧的清晰度信息,(x,y)为灰度参考图像帧的像素点坐标,f(x,y)为灰度参考图像帧的像素点的灰度值。
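公式(5)的Brenner梯度可以用如下示意性的Python片段表示(以二维列表表示灰度参考图像帧,函数名为举例假设):

```python
def brenner(gray):
    """公式(5)的示意实现:gray[y][x]为像素点(x, y)的灰度值,
    对相距2个像素点的灰度差的平方求和,得到清晰度信息D(f)。"""
    height = len(gray)
    width = len(gray[0])
    total = 0
    for y in range(height):
        for x in range(width - 2):
            total += (gray[y][x + 2] - gray[y][x]) ** 2
    return total

# 清晰图像(灰度变化剧烈)的得分高于模糊图像(灰度变化平缓)
sharp = [[0, 0, 255, 255]]
blurry = [[100, 110, 120, 130]]
print(brenner(sharp) > brenner(blurry))  # True
```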
以目标函数为Tenengrad梯度函数为例,参见公式(6),终端将灰度参考图像帧的像素点的灰度值代入Tenengrad梯度函数,得到参考图像帧的清晰度信息。
D(f)=Σ_x Σ_y G(x,y)²,其中G(x,y)=√(G_x(x,y)²+G_y(x,y)²),且G(x,y)>T    (6)
其中,D(f)为参考图像帧的清晰度信息,(x,y)为灰度参考图像帧的像素点坐标,G_x(x,y)为像素点(x,y)的灰度值在x方向的梯度值,G_y(x,y)为像素点(x,y)的灰度值在y方向的梯度值,T为边缘检测阈值。
对于色彩丰富度信息来说,在一些实施例中,终端确定参考图像帧的像素点的红色通道值、绿色通道值以及蓝色通道值,根据参考图像帧的像素点的红色通道值和绿色通道值的差值得到第一参数,根据参考图像帧的像素点的红色通道值、绿色通道值以及蓝色通道值得到第二参数。终端确定参考图像帧的不同像素点的第一参数的平均值和标准差,确定参考图像帧的不同像素点的第二参数的平均值和标准差。终端根据第一参数的平均值和标准差以及第二参数的平均值和标准差,得到参考图像帧的色彩丰富度信息。
在一些实施例中,以rg表示第一参数,以yb表示第二参数,那么第一参数rg=|R-G|,第二参数yb=|0.5(R+G)-B|。终端将参考图像帧的不同像素点的第一参数rg相加后除以参考图像帧中像素点的总数,得到第一参数rg的平均值rg_mean,并据此计算第一参数rg的标准差rg_std。同理,终端得到参考图像帧的不同像素点的第二参数yb的平均值yb_mean以及标准差yb_std。终端将rg_mean的平方与yb_mean的平方相加后开根号,得到第三参数a,将rg_std的平方与yb_std的平方相加后开根号,得到第四参数b。终端通过第三参数a和第四参数b,得到参考图像帧的色彩丰富度信息,色彩丰富度信息C=b+0.3a。
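上述色彩丰富度信息的计算可以用如下示意性的Python片段表示(函数名为举例假设,rg为红、绿通道值之差的绝对值,yb按正文定义):

```python
import math

def colorfulness(pixels):
    """pixels为(R, G, B)像素点列表,返回参考图像帧的色彩丰富度信息C=b+0.3a。"""
    rg = [abs(r - g) for r, g, b in pixels]
    yb = [abs(0.5 * (r + g) - b) for r, g, b in pixels]
    n = len(pixels)
    rg_mean = sum(rg) / n
    yb_mean = sum(yb) / n
    rg_std = math.sqrt(sum((v - rg_mean) ** 2 for v in rg) / n)
    yb_std = math.sqrt(sum((v - yb_mean) ** 2 for v in yb) / n)
    a = math.hypot(rg_mean, yb_mean)  # 第三参数a
    b = math.hypot(rg_std, yb_std)    # 第四参数b
    return b + 0.3 * a

# 纯色图像的标准差项为0,色彩丰富度仅由均值项贡献
print(colorfulness([(200, 100, 50)] * 4))  # 约为42.43
```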
对于内容信息来说,在一些实施例中,终端对参考图像帧进行图像识别,得到参考图像帧包括的目标对象,其中,目标对象包括人脸、宠物以及建筑物等。在一些实施例中,终端根据识别出的对象,为参考图像帧设置不同的权重,例如为包含人脸的参考图像帧设置权重1,为包含宠物的参考图像帧设置权重0.8,为包含建筑物的参考图像帧设置权重0.6等,终端采用权重来表示参考图像帧的内容信息。
另外,终端还能够识别参考图像帧的完整度信息以及重复度信息等,其中参考图像帧的完整度信息用于表示参考图像帧是否存在缺失的部分,参考图像帧的重复度信息用于表示参考图像帧中相同内容的数量。终端将上述参考图像帧包括的对象对应的权重与参考图像帧的完整度信息以及重复度信息进行融合,得到参考图像帧的内容信息。终端能够根据参考图像帧的内容信息,优先从多个参考图像帧中确定出包含目标对象、内容完整以及内容重复较少的参考图像帧。
S403、终端融合多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到多个参考图像帧的质量信息。
在一些实施例中,终端对参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项进行加权求和,得到参考图像帧的质量信息,其中,参考图像帧的清晰度信息、色彩丰富度信息以及内容信息对应的权重可以根据实际情况进行设置,本公开实施例对此不做限定。
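上述加权求和得到质量信息的过程,可以用如下示意性的Python片段表示(函数名与权重数值均为举例假设,权重可根据实际情况设置):

```python
def fuse_quality(metrics, weights):
    """对清晰度信息、色彩丰富度信息以及内容信息中的至少两项加权求和,得到质量信息。"""
    return sum(metrics[name] * weights[name] for name in metrics)

# 权重数值仅为举例,本公开实施例对此不做限定
weights = {"sharpness": 0.5, "colorfulness": 0.3, "content": 0.2}
metrics = {"sharpness": 0.8, "colorfulness": 0.6, "content": 1.0}
print(fuse_quality(metrics, weights))  # 约为0.78
```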
S404、终端基于多个参考图像帧的质量信息,从多个参考图像帧中确定出质量信息符合目标条件的多个图像帧。
其中,质量信息符合目标条件是指,质量信息大于质量信息阈值或者质量信息为多个参考图像帧中质量信息最高的K个,其中K为正整数。
在这种实现方式下,终端后续基于该图像帧进行显示时,能够达到更好的显示效果。
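“质量信息符合目标条件”的两种判定方式(大于质量信息阈值,或取质量信息最高的K个)可以用如下示意性的Python片段表示(函数名与参数名为举例假设):

```python
def select_frames(frames, scores, k=None, threshold=None):
    """从多个参考图像帧中确定质量信息符合目标条件的图像帧:
    质量信息大于质量信息阈值,或为质量信息最高的K个(K为正整数)。"""
    if threshold is not None:
        # 阈值判定:保留质量信息大于阈值的参考图像帧,顺序不变
        return [f for f, s in zip(frames, scores) if s > threshold]
    # top-K判定:按质量信息从高到低排序后取前K个
    ranked = sorted(zip(frames, scores), key=lambda pair: pair[1], reverse=True)
    return [f for f, _ in ranked[:k]]

frames = ["f1", "f2", "f3", "f4"]
scores = [0.9, 0.4, 0.7, 0.6]
print(select_frames(frames, scores, k=2))            # ['f1', 'f3']
print(select_frames(frames, scores, threshold=0.5))  # ['f1', 'f3', 'f4']
```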
S405、终端根据视频素材的处理进度,从多个图像帧中,确定目标图像帧。
在一些实施例中,响应于当前处理进度为终端开始对多个图像帧中的任一图像帧进行处理,将该图像帧确定为目标图像帧。也即是响应于开始处理多个图像帧中的任一图像帧,终端将该图像帧确定为目标图像帧。
在这种实现方式下,终端能够在对至少一个视频素材进行处理时,将正在被处理的图像帧确定为目标图像帧。
以处理为终端向视频素材中的图像帧添加图形为例,用户通过视频处理应用程序设置好待向视频素材中添加的图形以及选择好待添加图形的图像帧之后,通过视频处理应用程序向视频素材中的图像帧添加图形。响应于视频处理应用程序开始向任一图像帧添加图形,终端将该图像帧确定为目标图像帧,用户看到该图像帧可以得知处理应用程序正在向该图像帧中添加图形,处理进度的显示更加直观。
以处理为终端合成多个视频素材为例,用户通过视频处理应用程序设置好待合成的视频素材之后,通过视频处理应用程序进行视频素材的合成。响应于视频处理应用程序开始进行视频素材合成,终端确定当前正在合成的图像帧,将该图像帧确定为目标图像帧,用户看到该图像帧可以得知处理应用程序正在合成该图像帧,处理进度的显示更加直观。
在一些实施例中,响应于当前处理进度为对多个图像帧中的任一图像帧处理完毕,终端将处理完毕的图像帧确定为目标图像帧。也即是,响应于对多个图像帧中的任一图像帧处理完毕,终端将处理完毕的该图像帧确定为目标图像帧。
以处理为终端向视频素材中的图像帧添加图形为例,用户通过视频处理应用程序设置好待向视频素材中添加的图形以及选择好待添加图形的图像帧之后,可以通过视频处理应用程序向视频素材中的图像帧添加图形。响应于处理应用程序向任一图像帧中添加图形完毕,终端确定添加图形完毕的图像帧为目标图像帧。
在一些实施例中,在对至少一个视频素材的处理过程中,终端根据视频素材的处理进度,确定多个图像帧在处理过程中的显示时间。终端确定处理进度对应的显示时间,将该显示时间对应的图像帧确定为目标图像帧。
在这种实现方式下,终端能够在处理过程中,实时确定与处理进度对应的图像帧,处理的进度显示更加直观,人机交互的效率更高。
举例来说,终端根据图像帧的数量、目标图像帧在多个图像帧中的顺序以及处理的进度,确定目标图像帧的显示时间。在一些实施例中,存在100个图像帧,目标图像帧在100个图像帧中的顺序为28,那么终端在对视频素材进行处理的进度到达28%时,确定顺序为28的图像帧为目标图像帧,其中,目标图像帧在多个图像帧中的顺序根据终端获取目标图像帧的时间确定。
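上述处理进度与目标图像帧顺序的对应关系,可以用如下示意性的Python片段表示(函数名为举例假设):

```python
def frame_order_for_progress(progress_percent, total_frames):
    """根据处理进度百分比,确定与之对应的图像帧在多个图像帧中的顺序(从1开始计)。"""
    order = round(progress_percent / 100 * total_frames)
    # 将顺序限制在[1, total_frames]范围内
    return max(1, min(order, total_frames))

# 对应正文示例:共100个图像帧,处理进度到达28%时,顺序为28的图像帧为目标图像帧
print(frame_order_for_progress(28, 100))  # 28
```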
需要说明的是,上述步骤S401-S405是以执行主体为终端为例进行说明的,在其他可能的实施方式中,也能够由服务器作为执行主体来执行。在上述步骤S401-S405以服务器作为执行主体来执行的情况下,那么服务器执行完步骤S405之后,能够将目标图像帧发送给终端,通过终端进行显示。
S406、终端在处理进度显示界面上,显示目标图像帧。
其中,目标图像帧的显示时长为目标时长,目标时长由终端根据处理的进度确定。在一些实施例中,终端将目标时长设置为与处理百分比的变化间隔相同,例如存在100个图像帧,在当前显示图像帧对应的处理进度为1%的情况下,响应于处理进度变化为2%,终端显示与处理进度为2%对应的图像帧,目标时长也即是处理进度由1%变化为2%的时长。另外,目标时长也能够由终端根据图像帧的处理时间确定。在一些实施例中,响应于处理进度为1%对应的图像帧开始处理,终端显示处理进度为1%对应的图像帧;响应于处理进度为2%对应的图像帧开始处理,终端显示处理进度为2%对应的图像帧,处理进度为1%对应的图像帧和处理进度为2%对应的图像帧开始处理的时间间隔也即是目标时长。当然,终端也能够通过其他方式确定目标时长,例如将处理进度为1%对应的图像帧和处理进度为2%对应的图像帧处理完毕的时间间隔确定为目标时长,本公开实施例对此不做限定。还有,终端能够为目标时长设置时长阈值,响应于目标时长大于或等于时长阈值,终端按照目标时长对图像帧进行显示;响应于目标时长小于时长阈值,终端按照时长阈值对图像帧进行显示。
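上述目标时长与时长阈值的比较逻辑,可以用如下示意性的Python片段表示(函数名与阈值数值为举例假设):

```python
def display_duration(target_duration, threshold):
    """目标时长大于或等于时长阈值时按目标时长显示,否则按时长阈值显示。"""
    if target_duration >= threshold:
        return target_duration
    return threshold

# 例如时长阈值为0.5秒
print(display_duration(0.8, 0.5))  # 0.8,按目标时长显示
print(display_duration(0.2, 0.5))  # 0.5,按时长阈值显示
```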
在一些实施例中,终端裁剪目标图像帧,在处理进度显示界面的图像帧显示区域上,显示裁剪后的目标图像帧。
在一些实施例中,终端确定目标区域,目标区域包含目标对象,其中,目标对象为人脸、宠物或者建筑物,相应的,目标区域为包含人脸的区域,或包含宠物的区域,或包含建筑物的区域,本公开实施例对于目标区域的内容不做限定。比如,终端对目标图像帧进行图像识别,得到目标图像帧中的目标区域。终端删除目标区域以外的部分,也即是终端在目标图像帧中,删除目标区域以外的部分。这样可以使得处理进度对应的图像帧的显示尺寸更加合适。
另外,在终端对视频素材进行处理的过程中,也能够在处理应用程序的处理进度显示界面上显示处理的进度信息以及手动编辑按键,用户点击手动编辑按键能够实现对处理过程的介入,例如向目标图像帧中添加显示元素、停止处理过程、向正在处理的视频素材中添加其他视频素材等操作,本公开实施例对此不作限定。参见图5,501为处理应用程序的处理进度显示界面,502为目标图像帧,503为处理的进度信息,504为手动编辑按键。
在一些实施例中,响应于处理进度发生变化,终端控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。响应于第一图像帧完全移出处理进度显示界面的图像帧显示区域,在处理进度显示界面的图像帧显示区域上显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在这种实现方式下,处理进度的显示更加直观,人机交互的效率更高。
举例来说,参见图6,601为视频处理应用程序的处理进度显示界面,602为第一图像帧,603为图像帧显示区域,604为第二图像帧。响应于处理进度发生变化,例如从1%变化为2%,终端将第一图像帧602移出图像帧显示区域603,在图像帧显示区域603显示第二图像帧604。
在一些实施例中,响应于处理进度发生变化,终端控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。在第一图像帧移动的同时,终端控制第二图像帧进入处理进度显示界面的图像帧显示区域,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,参见图7,701为视频处理应用程序的处理进度显示界面,702为第一图像帧,703为图像帧显示区域,704为第二图像帧。响应于处理进度发生变化,例如从1%变化为2%,终端将第一图像帧702移出图像帧显示区域703,在移出第一图像帧702的过程中,控制第二图像帧704紧跟着第一图像帧702进入图像帧显示区域703,直至第二图像帧704完全进入图像帧显示区域703。
在一些实施例中,响应于处理进度发生变化,终端取消第一图像帧的显示,第一图像帧为变化前的处理进度对应的图像帧。在处理进度显示界面的图像帧显示区域上,终端显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在这种实现方式下,第二图像帧的显示过程更加直观。
举例来说,参见图8,801为视频处理应用程序的处理进度显示界面,802为第一图像帧,803为图像帧显示区域,804为第二图像帧。响应于处理进度发生变化,例如从1%变化为2%,终端将第一图像帧802取消显示,在处理进度显示界面上显示第二图像帧804。在一些实施例中,终端可以将至少一个图像帧进行组合,生成图形交换格式(Graphics Interchange Format,GIF)图像,在应用程序的处理进度显示界面上显示该GIF图像,用户可以通过观看该GIF图像来感知处理的进度。
本公开中,终端能够在对视频素材进行处理的过程中,随着处理进度的变化,向用户展示与处理进度对应的图像帧,从而使得用户能够通过查看该图像帧,得知视频素材的处理进度。相较于通过进度条或者百分比的形式向用户展示视频素材的处理进度来说,本公开通过图像帧来向用户展示处理进度,不但展示的效果更加生动直观,人机交互的效率更高,而且能够降低用户在等待过程中对时间成本的感知。
图9是根据一示例性实施例示出的一种图像处理装置的结构示意图。参照图9,该装置包括获取单元901、确定单元902和显示单元903。
获取单元901,被配置为响应于对视频素材的处理指令,从视频素材中获取多个图像帧。
确定单元902,被配置为根据视频素材的处理进度,从多个图像帧中确定目标图像帧,目标图像帧为与处理进度对应的图像帧。
显示单元903,被配置为在处理进度显示界面上,显示目标图像帧。
在一些实施例中,确定单元,被配置为响应于开始处理多个图像帧中的任一图像帧,将任一图像帧确定为目标图像帧。
在一些实施例中,确定单元,被配置为响应于对多个图像帧中的任一图像帧处理完毕,将处理完毕的任一图像帧确定为目标图像帧。
在一些实施例中,获取单元,被配置为从视频素材中获取多个参考图像帧。确定多个参考图像帧的质量信息。基于多个参考图像帧的质量信息,从多个参考图像帧中确定出质量信息符合目标条件的多个图像帧。
在一些实施例中,获取单元,被配置为获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项。融合多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到多个参考图像帧的质量信息。
在一些实施例中,显示单元,被配置为裁剪目标图像帧,在处理进度显示界面的图像帧显示区域上,显示裁剪后的目标图像帧。
在一些实施例中,显示单元,被配置为确定目标区域,目标区域包含目标对象。删除目标区域以外的部分。

在一些实施例中,显示单元,被配置为响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。响应于第一图像帧完全移出处理进度显示界面的图像帧显示区域,在处理进度显示界面的图像帧显示区域上显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,显示单元,被配置为响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。在第一图像帧移动的同时,控制第二图像帧进入处理进度显示界面的图像帧显示区域,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,显示单元,被配置为响应于处理进度发生变化,取消第一图像帧的显示,第一图像帧为变化前的处理进度对应的图像帧。在处理进度显示界面的图像帧显示区域上,显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
本公开中,终端能够在对视频素材进行处理的过程中,随着处理进度的变化,向用户展示与处理进度对应的图像帧,从而使得用户能够通过查看该图像帧,得知视频素材的处理进度。相较于通过进度条或者百分比的形式向用户展示视频素材的处理进度来说,本公开通过图像帧来向用户展示处理进度,不但展示的效果更加生动直观,人机交互的效率更高,而且能够降低用户在等待过程中对时间成本的感知。
本公开实施例提供了一种电子设备,包括一个或多个处理器;
用于存储该处理器可执行程序代码的存储器。
其中,该处理器被配置为下述步骤:
响应于对视频素材的处理指令,从视频素材中获取多个图像帧。
根据视频素材的处理进度,从多个图像帧中确定目标图像帧,目标图像帧为与处理进度对应的图像帧。
在处理进度显示界面上,显示目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于开始处理多个图像帧中的任一图像帧,将任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于对多个图像帧中的任一图像帧处理完毕,将处理完毕的任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:从视频素材中获取多个参考图像帧。
确定多个参考图像帧的质量信息。
基于多个参考图像帧的质量信息,从多个参考图像帧中确定多个图像帧,多个图像帧的质量信息符合目标条件。
在一些实施例中,处理器被配置为下述步骤:
获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项。
融合多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到多个参考图像帧的质量信息。
在一些实施例中,处理器被配置为下述步骤:
裁剪目标图像帧,在处理进度显示界面的图像帧显示区域上,显示裁剪后的目标图像帧。
在一些实施例中,处理器被配置为下述步骤:
确定目标区域,目标区域包含目标对象。
删除目标区域以外的部分。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
响应于第一图像帧完全移出处理进度显示界面的图像帧显示区域,在处理进度显示界面的图像帧显示区域上显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
在第一图像帧移动的同时,控制第二图像帧进入处理进度显示界面的图像帧显示区域,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,取消第一图像帧的显示,第一图像帧为变化前的处理进度对应的图像帧。
在处理进度显示界面的图像帧显示区域上,显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在本公开实施例中,电子设备可以实现为终端,首先对终端的结构进行说明:
图10是根据一示例性实施例示出的一种终端的结构示意图。该终端1000可以为用户所使用的终端。该终端1000可以是:智能手机、平板电脑、笔记本电脑或台式电脑。终端1000还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端1000包括有:处理器1001和存储器1002。
处理器1001可以包括一个或多个处理核心,例如4核心处理器、8核心处理器等。处理器1001可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1001也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1001可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器1001还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1002可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。该存储器1002中存储有至少一条程序代码,该至少一条程序代码由该处理器1001加载并执行以实现上述各个方法实施例提供的图像处理方法。存储器1002还可包括高速随机存取存储器,以及非易失性存储器,例如一个或多个磁盘存储设备、闪存存储设备。
在一些实施例中,终端1000还可选包括有:外围设备接口1003和至少一个外围设备。处理器1001、存储器1002和外围设备接口1003之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1003相连。在一些实施例中,外围设备包括:射频电路1004、显示屏1005、摄像头组件1006、音频电路1007、定位组件1008和电源1009中的至少一种。
外围设备接口1003可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1001和存储器1002。在一些实施例中,处理器1001、存储器1002和外围设备接口1003被集成在同一芯片或电路板上;在一些其他实施例中,处理器1001、存储器1002和外围设备接口1003中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路1004用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1004通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1004将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。在一些实施例中,射频电路1004包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1004可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:城域网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1004还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本公开对此不加以限定。
显示屏1005用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。在显示屏1005是触摸显示屏的情况下,显示屏1005还具有采集在显示屏1005的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1001进行处理。此时,显示屏1005还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1005可以为一个,设置在终端1000的前面板;在另一些实施例中,显示屏1005可以为至少两个,分别设置在终端1000的不同表面或呈折叠设计;在一些实施例中,显示屏1005可以是柔性显示屏,设置在终端1000的弯曲表面上或折叠面上。甚至,显示屏1005还可以设置成非矩形的不规则图形,也即异形屏。显示屏1005可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1006用于采集图像或视频。在一些实施例中,摄像头组件1006包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1006还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路1007可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1001进行处理,或者输入至射频电路1004以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端1000的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器用于将来自处理器1001或射频电路1004的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。在扬声器是压电陶瓷扬声器的情况下,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1007还可以包括耳机插孔。
定位组件1008用于定位终端1000的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1008可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统、俄罗斯的格洛纳斯系统或欧盟的伽利略系统的定位组件。
电源1009用于为终端1000中的各个组件进行供电。电源1009可以是交流电、直流电、一次性电池或可充电电池。在电源1009包括可充电电池的情况下,该可充电电池可以支持有线充电或无线充电。该可充电电池还可以用于支持快充技术。
在一些实施例中,终端1000还包括有一个或多个传感器1010。该一个或多个传感器1010包括但不限于:加速度传感器1011、陀螺仪传感器1012、压力传感器1013、指纹传感器1014、光学传感器1015以及接近传感器1016。
加速度传感器1011可以检测以终端1000建立的坐标系的三个坐标轴上的加速度大小。在一些实施例中,加速度传感器1011可以用于检测重力加速度在三个坐标轴上的分量。处理器1001可以根据加速度传感器1011采集的重力加速度信号,控制显示屏1005以横向视图或纵向视图进行用户界面的显示。加速度传感器1011还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器1012可以检测终端1000的机体方向及转动角度,陀螺仪传感器1012可以与加速度传感器1011协同采集用户对终端1000的3D动作。处理器1001根据陀螺仪传感器1012采集的数据,可以实现如下功能:动作感应(例如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器1013可以设置在终端1000的侧边框和/或显示屏1005的下层。在压力传感器1013设置在终端1000的侧边框的情况下,可以检测用户对终端1000的握持信号,由处理器1001根据压力传感器1013采集的握持信号进行左右手识别或快捷操作。在压力传感器1013设置在显示屏1005的下层的情况下,由处理器1001根据用户对显示屏1005的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器1014用于采集用户的指纹,由处理器1001根据指纹传感器1014采集到的指纹识别用户的身份,或者,由指纹传感器1014根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1001授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1014可以被设置在终端1000的正面、背面或侧面。在终端1000上设置有物理按键或厂商Logo的情况下,指纹传感器1014可以与物理按键或厂商Logo集成在一起。
光学传感器1015用于采集环境光强度。在一个实施例中,处理器1001可以根据光学传感器1015采集的环境光强度,控制显示屏1005的显示亮度。在一些实施例中,在环境光强度较高的情况下,调高显示屏1005的显示亮度;在环境光强度较低的情况下,调低显示屏1005的显示亮度。在另一个实施例中,处理器1001还可以根据光学传感器1015采集的环境光强度,动态调整摄像头组件1006的拍摄参数。
接近传感器1016,也称距离传感器,通常设置在终端1000的前面板。接近传感器1016用于采集用户与终端1000的正面之间的距离。在一个实施例中,在接近传感器1016检测到用户与终端1000的正面之间的距离逐渐变小的情况下,由处理器1001控制显示屏1005从亮屏状态切换为息屏状态;在接近传感器1016检测到用户与终端1000的正面之间的距离逐渐变大的情况下,由处理器1001控制显示屏1005从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图10中示出的结构并不构成对终端1000的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在本公开实施例中,电子设备可以实现为服务器,下面对服务器的结构进行说明:
图11是根据一示例性实施例示出的一种服务器1100的结构示意图,该服务器1100可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(Central Processing Units,CPU)1101和一个或一个以上的存储器1102,存储器1102包含的存储介质可以是只读存储器(Read-Only Memory,ROM)1103和随机存取存储器(RAM)1104。当然,该服务器还可以具有有线或无线网络接口1105、输入输出接口1106等部件,以便进行输入输出,该服务器1100还可以包括大容量存储设备1107,该服务器1100还可以包括其他用于实现设备功能的部件,在此不做赘述。
在示例性实施例中,还提供了一种包括程序代码的非易失性存储介质,例如包括程序代码的存储器1102,上述程序代码可由服务器1100的处理器1101执行,以完成下述步骤:
响应于对视频素材的处理指令,从视频素材中获取多个图像帧。
根据视频素材的处理进度,从多个图像帧中确定目标图像帧,目标图像帧为与处理进度对应的图像帧。
在处理进度显示界面上,显示目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于开始处理多个图像帧中的任一图像帧,将任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于对多个图像帧中的任一图像帧处理完毕,将处理完毕的任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:从视频素材中获取多个参考图像帧。
确定多个参考图像帧的质量信息。
基于多个参考图像帧的质量信息,从多个参考图像帧中确定多个图像帧,多个图像帧的质量信息符合目标条件。
在一些实施例中,处理器被配置为下述步骤:
获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项。
融合多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到多个参考图像帧的质量信息。
在一些实施例中,处理器被配置为下述步骤:
裁剪目标图像帧,在处理进度显示界面的图像帧显示区域上,显示裁剪后的目标图像帧。
在一些实施例中,处理器被配置为下述步骤:
确定目标区域,目标区域包含目标对象。
删除目标区域以外的部分。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
响应于第一图像帧完全移出处理进度显示界面的图像帧显示区域,在处理进度显示界面的图像帧显示区域上显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
在第一图像帧移动的同时,控制第二图像帧进入处理进度显示界面的图像帧显示区域,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,取消第一图像帧的显示,第一图像帧为变化前的处理进度对应的图像帧。
在处理进度显示界面的图像帧显示区域上,显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,存储介质是非临时性计算机可读存储介质,例如,非临时性计算机可读存储介质是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在示例性实施例中,还提供了一种计算机程序产品,包括一条或多条指令,该一条或多条指令可以由电子设备的处理器执行,以完成下述步骤:
响应于对视频素材的处理指令,从视频素材中获取多个图像帧。
根据视频素材的处理进度,从多个图像帧中确定目标图像帧,目标图像帧为与处理进度对应的图像帧。
在处理进度显示界面上,显示目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于开始处理多个图像帧中的任一图像帧,将任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:响应于对多个图像帧中的任一图像帧处理完毕,将处理完毕的任一图像帧确定为目标图像帧。
在一些实施例中,处理器被配置为下述步骤:从视频素材中获取多个参考图像帧。
确定多个参考图像帧的质量信息。
基于多个参考图像帧的质量信息,从多个参考图像帧中确定多个图像帧,多个图像帧的质量信息符合目标条件。
在一些实施例中,处理器被配置为下述步骤:
获取多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项。
融合多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到多个参考图像帧的质量信息。
在一些实施例中,处理器被配置为下述步骤:
裁剪目标图像帧,在处理进度显示界面的图像帧显示区域上,显示裁剪后的目标图像帧。
在一些实施例中,处理器被配置为下述步骤:
确定目标区域,目标区域包含目标对象。
删除目标区域以外的部分。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
响应于第一图像帧完全移出处理进度显示界面的图像帧显示区域,在处理进度显示界面的图像帧显示区域上显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,控制第一图像帧向处理进度显示界面的图像帧显示区域外移动,第一图像帧为变化前的处理进度对应的图像帧。
在第一图像帧移动的同时,控制第二图像帧进入处理进度显示界面的图像帧显示区域,第二图像帧为变化后的处理进度对应的图像帧。
在一些实施例中,处理器被配置为下述步骤:
响应于处理进度发生变化,取消第一图像帧的显示,第一图像帧为变化前的处理进度对应的图像帧。
在处理进度显示界面的图像帧显示区域上,显示第二图像帧,第二图像帧为变化后的处理进度对应的图像帧。
说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。

Claims (32)

  1. 一种图像处理方法,包括:
    响应于对视频素材的处理指令,从所述视频素材中获取多个图像帧;
    根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧,所述目标图像帧为与所述处理进度对应的图像帧;
    在处理进度显示界面上,显示所述目标图像帧。
  2. 根据权利要求1所述的图像处理方法,其中,所述根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧包括:
    响应于开始处理所述多个图像帧中的任一图像帧,将所述任一图像帧确定为所述目标图像帧。
  3. 根据权利要求1所述的图像处理方法,其中,所述根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧包括:
    响应于对所述多个图像帧中的任一图像帧处理完毕,将处理完毕的所述任一图像帧确定为所述目标图像帧。
  4. 根据权利要求1所述的图像处理方法,其中,所述从所述视频素材中获取多个图像帧包括:
    从所述视频素材中获取多个参考图像帧;
    确定所述多个参考图像帧的质量信息;
    基于所述多个参考图像帧的质量信息,从所述多个参考图像帧中确定所述多个图像帧,所述多个图像帧的质量信息符合目标条件。
  5. 根据权利要求4所述的图像处理方法,其中,所述确定所述多个参考图像帧的质量信息包括:
    获取所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项;
    融合所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到所述多个参考图像帧的质量信息。
  6. 根据权利要求1所述的图像处理方法,其中,所述在处理进度显示界面上,显示目标图像帧包括:
    裁剪所述目标图像帧,在所述处理进度显示界面的图像帧显示区域上,显示裁剪后的所述目标图像帧。
  7. 根据权利要求6所述的图像处理方法,其中,所述裁剪所述目标图像帧包括:
    确定目标区域,所述目标区域包含目标对象;
    删除所述目标区域以外的部分。
  8. 根据权利要求1所述的图像处理方法,其中,所述在处理进度显示界面上,显示目标图像帧包括:
    响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;
    响应于所述第一图像帧完全移出所述处理进度显示界面的图像帧显示区域,在所述处理进度显示界面的图像帧显示区域上显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  9. 根据权利要求1所述的图像处理方法,其中,所述在处理进度显示界面上,显示目标图像帧包括:
    响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;
    在所述第一图像帧移动的同时,控制第二图像帧进入所述处理进度显示界面的图像帧显示区域,所述第二图像帧为变化后的处理进度对应的图像帧。
  10. 根据权利要求1所述的图像处理方法,其中,所述在处理进度显示界面上,显示目标图像帧包括:
    响应于所述处理进度发生变化,取消第一图像帧的显示,所述第一图像帧为变化前的处理进度对应的图像帧;
    在所述处理进度显示界面的图像帧显示区域上,显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  11. 一种图像处理装置,包括:
    获取单元,被配置为响应于对视频素材的处理指令,从所述视频素材中获取多个图像帧;
    确定单元,被配置为根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧,所述目标图像帧为与所述处理进度对应的图像帧;
    显示单元,被配置为在处理进度显示界面上,显示所述目标图像帧。
  12. 根据权利要求11所述的图像处理装置,其中,所述确定单元,被配置为响应于开始处理所述多个图像帧中的任一图像帧,将所述任一图像帧确定为所述目标图像帧。
  13. 根据权利要求11所述的图像处理装置,其中,所述确定单元,被配置为响应于对所述多个图像帧中的任一图像帧处理完毕,将处理完毕的所述任一图像帧确定为所述目标图像帧。
  14. 根据权利要求11所述的图像处理装置,其中,所述获取单元,被配置为从所述视频素材中获取多个参考图像帧;确定所述多个参考图像帧的质量信息;基于所述多个参考图像帧的质量信息,从所述多个参考图像帧中确定所述多个图像帧,所述多个图像帧的质量信息符合目标条件。
  15. 根据权利要求14所述的图像处理装置,其中,所述获取单元,被配置为获取所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项;融合所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到所述多个参考图像帧的质量信息。
  16. 根据权利要求11所述的图像处理装置,其中,所述显示单元,被配置为裁剪所述目标图像帧,在所述处理进度显示界面的图像帧显示区域上,显示裁剪后的所述目标图像帧。
  17. 根据权利要求16所述的图像处理装置,其中,所述显示单元,被配置为确定目标区域,所述目标区域包含目标对象;删除所述目标区域以外的部分。
  18. 根据权利要求11所述的图像处理装置,其中,所述显示单元,被配置为响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;响应于所述第一图像帧完全移出所述处理进度显示界面的图像帧显示区域,在所述处理进度显示界面的图像帧显示区域上显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  19. 根据权利要求11所述的图像处理装置,其中,所述显示单元,被配置为响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;在所述第一图像帧移动的同时,控制第二图像帧进入所述处理进度显示界面的图像帧显示区域,所述第二图像帧为变化后的处理进度对应的图像帧。
  20. 根据权利要求11所述的图像处理装置,其中,所述显示单元,被配置为响应于所述处理进度发生变化,取消第一图像帧的显示,所述第一图像帧为变化前的处理进度对应的图像帧;在所述处理进度显示界面的图像帧显示区域上,显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  21. 一种电子设备,包括:
    处理器;
    用于存储所述处理器可执行程序代码的存储器;
    其中,所述处理器被配置为执行所述程序代码,以实现以下步骤:
    响应于对视频素材的处理指令,从所述视频素材中获取多个图像帧;
    根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧,所述目标图像帧为与所述处理进度对应的图像帧;
    在处理进度显示界面上,显示所述目标图像帧。
  22. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:响应于开始处理所述多个图像帧中的任一图像帧,将所述任一图像帧确定为所述目标图像帧。
  23. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:响应于对所述多个图像帧中的任一图像帧处理完毕,将处理完毕的所述任一图像帧确定为所述目标图像帧。
  24. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:从所述视频素材中获取多个参考图像帧;
    确定所述多个参考图像帧的质量信息;
    基于所述多个参考图像帧的质量信息,从所述多个参考图像帧中确定所述多个图像帧,所述多个图像帧的质量信息符合目标条件。
  25. 根据权利要求24所述的电子设备,其中,所述处理器被配置为下述步骤:
    获取所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项;
    融合所述多个参考图像帧的清晰度信息、色彩丰富度信息以及内容信息中的至少两项,得到所述多个参考图像帧的质量信息。
  26. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:
    裁剪所述目标图像帧,在所述处理进度显示界面的图像帧显示区域上,显示裁剪后的所述目标图像帧。
  27. 根据权利要求26所述的电子设备,其中,所述处理器被配置为下述步骤:
    确定目标区域,所述目标区域包含目标对象;
    删除所述目标区域以外的部分。
  28. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:
    响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;
    响应于所述第一图像帧完全移出所述处理进度显示界面的图像帧显示区域,在所述处理进度显示界面的图像帧显示区域上显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  29. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:
    响应于所述处理进度发生变化,控制第一图像帧向所述处理进度显示界面的图像帧显示区域外移动,所述第一图像帧为变化前的处理进度对应的图像帧;
    在所述第一图像帧移动的同时,控制第二图像帧进入所述处理进度显示界面的图像帧显示区域,所述第二图像帧为变化后的处理进度对应的图像帧。
  30. 根据权利要求21所述的电子设备,其中,所述处理器被配置为下述步骤:
    响应于所述处理进度发生变化,取消第一图像帧的显示,所述第一图像帧为变化前的处理进度对应的图像帧;
    在所述处理进度显示界面的图像帧显示区域上,显示第二图像帧,所述第二图像帧为变化后的处理进度对应的图像帧。
  31. 一种非易失性存储介质,当所述存储介质中的程序代码由电子设备的处理器执行时,使得电子设备能够执行下述步骤:
    响应于对视频素材的处理指令,从所述视频素材中获取多个图像帧;
    根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧,所述目标图像帧为与所述处理进度对应的图像帧;
    在处理进度显示界面上,显示目标图像帧。
  32. 一种计算机程序产品,所述计算机程序产品存储有一条或多条程序代码,所述一条或多条程序代码可以由电子设备的处理器执行,以完成下述步骤:
    响应于对视频素材的处理指令,从所述视频素材中获取多个图像帧;
    根据所述视频素材的处理进度,从所述多个图像帧中确定目标图像帧,所述目标图像帧为与所述处理进度对应的图像帧;
    在处理进度显示界面上,显示目标图像帧。
PCT/CN2021/106910 2020-08-13 2021-07-16 图像处理方法以及电子设备 WO2022033272A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010814852.XA CN111954058B (zh) 2020-08-13 2020-08-13 图像处理方法、装置、电子设备以及存储介质
CN202010814852.X 2020-08-13

Publications (1)

Publication Number Publication Date
WO2022033272A1 true WO2022033272A1 (zh) 2022-02-17

Family

ID=73343052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106910 WO2022033272A1 (zh) 2020-08-13 2021-07-16 图像处理方法以及电子设备

Country Status (2)

Country Link
CN (1) CN111954058B (zh)
WO (1) WO2022033272A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111954058B (zh) * 2020-08-13 2023-11-21 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备以及存储介质
CN112954450B (zh) * 2021-02-02 2022-06-17 北京字跳网络技术有限公司 视频处理方法、装置、电子设备和存储介质
CN113762058A (zh) * 2021-05-21 2021-12-07 腾讯科技(深圳)有限公司 一种视频合成方法、装置、计算机设备和存储介质
CN115484395B (zh) * 2021-06-16 2024-09-06 荣耀终端有限公司 一种视频处理方法及电子设备

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100104146A1 (en) * 2008-10-23 2010-04-29 Kabushiki Kaisha Toshiba Electronic apparatus and video processing method
CN103416073A (zh) * 2011-01-14 2013-11-27 谷歌公司 视频处理反馈
CN103905835A (zh) * 2012-12-27 2014-07-02 腾讯科技(北京)有限公司 一种视频播放器的进度预览方法、装置和系统
CN103986938A (zh) * 2014-06-03 2014-08-13 合一网络技术(北京)有限公司 基于视频播放的预览的方法和系统
CN106201838A (zh) * 2016-07-22 2016-12-07 传线网络科技(上海)有限公司 视频下载进度显示方法及装置
CN107908516A (zh) * 2017-12-04 2018-04-13 联想(北京)有限公司 一种数据显示方法及装置
CN108459831A (zh) * 2017-02-10 2018-08-28 佳能株式会社 信息处理设备和信息处理设备的控制方法
CN110971956A (zh) * 2018-09-30 2020-04-07 广州优视网络科技有限公司 视频帧预览方法和装置
CN111954058A (zh) * 2020-08-13 2020-11-17 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备以及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540881B (zh) * 2008-03-19 2011-04-13 华为技术有限公司 实现流媒体定位播放的方法、装置及系统
CN103577304B (zh) * 2012-08-10 2018-11-09 百度在线网络技术(北京)有限公司 一种代码动态分析的方法及装置
CN103702214B (zh) * 2013-12-10 2018-08-14 乐视网信息技术(北京)股份有限公司 一种视频播放方法和电子设备
CN105592363A (zh) * 2014-10-24 2016-05-18 腾讯科技(北京)有限公司 多媒体文件的播放方法及装置
CN104581381A (zh) * 2015-01-04 2015-04-29 浪潮软件股份有限公司 一种终端视频浏览辅助定位方法和装置

Also Published As

Publication number Publication date
CN111954058A (zh) 2020-11-17
CN111954058B (zh) 2023-11-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21855325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21855325

Country of ref document: EP

Kind code of ref document: A1