
CN114650437B - Video publishing method, device, equipment and storage medium

Video publishing method, device, equipment and storage medium

Info

Publication number
CN114650437B
CN114650437B
Authority
CN
China
Prior art keywords
video data
coding
video
encoding
modes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210247614.4A
Other languages
Chinese (zh)
Other versions
CN114650437A (en)
Inventor
王新宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigo Technology Pte Ltd
Original Assignee
Bigo Technology Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bigo Technology Pte Ltd filed Critical Bigo Technology Pte Ltd
Priority to CN202210247614.4A
Publication of CN114650437A
Priority to PCT/CN2023/081306 (WO2023174254A1)
Application granted
Publication of CN114650437B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 - Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309 - Reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218 - Reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 - Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784 - Data processing by the network
    • H04N21/64792 - Controlling the complexity of the content stream, e.g. by dropping packets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video publishing method, device, equipment and storage medium. The method includes the following steps: generating video data to be published to a service platform; predicting a first time consumption of encoding the video data according to each of a plurality of encoding modes; predicting a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform; for the same encoding mode, calculating, with reference to the first time consumption and the second time consumption, the total time consumption of publishing the video data to the service platform; selecting one of the encoding modes as a target encoding mode according to the total time consumptions; and encoding the video data according to the target encoding mode and transmitting it to the service platform. This embodiment can combine the advantages of different encoding modes, offers high flexibility, can optimize the time consumed to publish video data, and improves the efficiency of publishing video data.

Description

Video publishing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to a video publishing method, device, apparatus, and storage medium.
Background
Video data is widely used in business scenarios such as entertainment and daily life, and appears in various forms such as short videos, instant messaging messages and social messages.
After video data is generated, it can be uploaded to a service platform for publishing. Because the file size of video data is large, and in order to reduce the bandwidth it occupies, the video data can be encoded before being uploaded to the service platform according to encoding standards proposed by standardization organizations, so as to reduce its file size.
At present, there are multiple modes for encoding video data, but a given application generally uses only one of them, chosen according to the situation of the region in which it operates, which results in poor flexibility and a poor publishing effect.
Disclosure of Invention
The invention provides a video publishing method, a device, equipment and a storage medium, which are used for solving the problem of how to improve the flexibility of encoding video data when video data is published.
According to an aspect of the present invention, there is provided a video distribution method, including:
generating video data to be released to a service platform;
predicting a first time consumption of encoding the video data according to each of a plurality of encoding modes;
predicting a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform;
for the same coding mode, referring to the first time consumption and the second time consumption, calculating total time consumption for publishing the video data to the service platform;
selecting one of the coding modes as a target coding mode according to the total time consumption;
and encoding the video data according to the target encoding mode and transmitting the video data to the service platform.
According to another aspect of the present invention, there is provided a video distribution apparatus including:
the video data generation module is used for generating video data to be published to the service platform;
the first time-consuming prediction module is used for predicting a first time consumption of encoding the video data according to each of a plurality of encoding modes;
the second time-consuming prediction module is used for predicting a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform;
the total time consumption calculation module is used for calculating, for the same encoding mode and with reference to the first time consumption and the second time consumption, the total time consumption of publishing the video data to the service platform;
the coding mode defining module is used for selecting one of the coding modes as a target coding mode according to the total time consumption;
and the video publishing module is used for encoding the video data according to the target encoding mode and transmitting the video data to the service platform.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the video distribution method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing a computer program for causing a processor to implement the video distribution method according to any one of the embodiments of the present invention when executed.
In this embodiment, video data to be published to a service platform is generated; a first time consumption of encoding the video data according to each of a plurality of encoding modes is predicted; a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform is predicted; for the same encoding mode, the total time consumption of publishing the video data to the service platform is calculated with reference to the first time consumption and the second time consumption; one of the encoding modes is selected as a target encoding mode according to the plurality of total time consumptions; and the video data is encoded according to the target encoding mode and transmitted to the service platform. This embodiment can combine the advantages of different encoding modes: the time consumption of the two main operations in publishing, encoding and uploading, is considered comprehensively according to the actual situation, and a suitable encoding mode is selected to encode the video data. Flexibility is high, the time consumption of encoding and the time consumption of uploading are balanced, the time consumed to publish the video data can be optimized, the time the user waits for the video data to be published is reduced, the efficiency of publishing video data is improved, and the video resources of the service platform are enriched.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video publishing method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a video publishing method according to a second embodiment of the present invention;
Fig. 3 is a flowchart of a video publishing method according to a third embodiment of the present invention;
Fig. 4 is a flowchart of a video publishing method according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a video publishing device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device implementing a video distribution method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a video publishing method according to a first embodiment of the present invention. The method may be performed by a video publishing device, which may be implemented in hardware and/or software and may be configured in an electronic device, especially a mobile terminal. The method is suitable for predicting the encoding and uploading of video data under different encoding modes, so as to select a suitable encoding mode. As shown in fig. 1, the method includes:
step 101, generating video data to be released to a service platform.
This embodiment can be applied to different types of electronic devices. The operating systems of the electronic devices may include Android, iOS, HarmonyOS (HongMeng) and the like, and various application programs such as short video applications, instant messaging tools and browsers may be installed on these operating systems. These applications can collect original video data in different service scenarios through the camera and other components of the electronic device, the format of the original video data being YUV (where Y represents luminance and U and V represent chrominance) or the like.
The application program establishes a session with a service platform, which is a service-related system, and may be an independent server or a server cluster, such as a distributed system.
After the application program generates the video data, the video data is released to the service platform for being watched by the authorized user, and the authorized user can be the current user or other users except the current user.
The form of video data varies from service to service, and the users having viewing rights also vary, which is not limited in this embodiment.
For example, in a short video application, the generated video data is a short video, and the current user publishes the short video to a short video platform (service platform) for other users registered in the short video platform to view.
For another example, in the instant messaging tool, the generated video data is an instant messaging message and a social messaging message, the current user publishes the instant messaging message to an instant messaging platform (service platform) for viewing by other users who are in a session, the current user publishes the social messaging message to the instant messaging platform for viewing by other users who have a friend relationship with the current user, and so on.
In this embodiment, publishing the video data involves two operations: the first is to encode the video data, and the second is to upload the encoded video data to the service platform. Since the application performs the two operations consecutively, they are generally imperceptible to the user, i.e. the user cannot distinguish between them.
In this embodiment, a plurality of encoding modes may be preset. An encoding mode comprises a set of parameters for encoding video data, such as the encoding hardware (e.g. CPU (central processing unit), GPU (graphics processing unit), heterogeneous hardware) and the encoding standard (e.g. MPEG-4, H.264, H.265, VC-1). These parameters may be used as the specification for dividing the encoding modes, that is, the encoding modes are divided along the dimensions of these parameters, so that different encoding modes have different advantages and disadvantages in terms of encoding speed, code rate and so on.
In one example, the encoding mode includes soft encoding, hard encoding, soft encoding being a mode encoded in a program (CPU), hard encoding being a mode encoded in hardware (GPU, heterogeneous hardware).
In general, the coding speed of soft coding is slower and the coding takes longer, which is particularly noticeable on mobile terminals with limited resources.
On the other hand, soft coding has abundant parameters that can be set, allowing targeted coding adaptation to the service scenario; moreover, soft coding can use different code rates for pictures of different complexity, which saves code rate, so the time consumed to upload the video data is lower.
The encoding speed of hard encoding is high; in particular, when combined with hard decoding, encoding can be performed directly on textures, saving the conversion of data from the GPU to the CPU.
On the other hand, the image quality of hard coding is generally worse than that of soft coding at the same code rate, and different code rates cannot be set according to the scene, so some code rate is wasted and the time consumed to upload the video data is higher.
Of course, the above specifications for dividing encoding modes are merely examples. When implementing this embodiment, other specifications may be set according to the actual situation, for example, treating encoding by a program according to a specified standard as one encoding mode, treating encoding by hardware according to a specified standard as another encoding mode, or dividing soft coding or hard coding into finer-grained encoding modes according to different parameters; this embodiment is not limited in this respect. Besides the above specifications, those skilled in the art may adopt other specifications for dividing encoding modes according to actual needs, which is likewise not limited in this embodiment.
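As a purely illustrative sketch (the patent does not prescribe any data structure), the soft/hard split described above could be represented as configuration objects like the following; all names are assumptions introduced here:

```kotlin
// Illustrative only: EncodingHardware, EncodingMode and the candidate list are
// assumptions, not structures defined by the patent.
enum class EncodingHardware { CPU, GPU, HETEROGENEOUS }

data class EncodingMode(
    val name: String,               // e.g. "soft" or "hard"
    val hardware: EncodingHardware, // where the encoding runs
    val standard: String            // e.g. "H.264", "H.265"
)

// Two coarse-grained modes matching the soft/hard split in the text.
val softEncoding = EncodingMode("soft", EncodingHardware.CPU, "H.264")
val hardEncoding = EncodingMode("hard", EncodingHardware.GPU, "H.264")
val candidateModes = listOf(softEncoding, hardEncoding)
```

A finer-grained division, as the text notes, would simply add more entries to the candidate list, one per parameter combination.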
Step 102, predicting first time consuming encoding of the video data according to a plurality of encoding modes, respectively.
The resources (such as CPU, GPU, etc.) of different electronic devices are different, so that the time consumed for encoding video data in the same encoding manner in different electronic devices is also different.
Generally, a high-end electronic device has sufficient resources (CPU, GPU, etc.), so the various encoding modes (soft encoding, hard encoding, etc.) run fast and encoding takes little time, whereas a low-end electronic device has limited resources, so the various encoding modes run slowly and encoding takes more time.
In this embodiment, according to the resource situation of the current electronic device, the time consumed for calling corresponding resources (such as CPU, GPU, etc.) to encode video data in the current electronic device according to multiple encoding modes may be respectively predicted and recorded as the first time consumed.
In one embodiment of the present invention, step 102 may include the steps of:
Step 1021, divide the video data into coding-related categories.
In this embodiment, a plurality of categories related to encoding may be preset, and each category covers a plurality of pieces of video data. For each encoding mode, each category is associated with reference video parameters, which are parameters recorded when video data was historically encoded on the current electronic device by invoking the corresponding resources (such as the CPU, GPU, etc.) according to that encoding mode, and which reflect the encoding capability of the current electronic device for that encoding mode.
In this embodiment, the video data is classified in the encoded dimension, so that the number of samples (i.e., video data) for accumulating the training reference video parameters can be increased, and the accuracy of the reference video parameters can be improved, thereby improving the accuracy of predicting the first time consumption.
Further, the categories are divided according to one or more coding-related parameters of the video data, recorded as classification parameters, such as the resolution or the coding standard. An association between categories and classification parameters can be established in advance; then, for the current video data, the classification parameters are identified from the video data, the category associated with those classification parameters is looked up in the association, and the video data is classified into that category.
Taking resolution as an example, the resolution of the video data, such as 360P, 480P, 540P, 720P or 1080P, may be queried, and the video data is divided into the category configured for that resolution.
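For illustration only, a minimal sketch of such a resolution-to-category mapping, assuming the frame height in pixels is used as the classification parameter (the patent does not fix the exact boundaries):

```kotlin
// Hypothetical helper mapping a video's frame height to one of the preset
// resolution categories named in the text.
fun categoryForResolution(frameHeight: Int): String = when {
    frameHeight <= 360 -> "360P"
    frameHeight <= 480 -> "480P"
    frameHeight <= 540 -> "540P"
    frameHeight <= 720 -> "720P"
    else -> "1080P"
}
```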
Step 1022, for a plurality of encoding modes, respectively querying reference durations for encoding single-frame video data in the category in the reference video parameters.
For each coding mode of each category, a plurality of reference video parameters are recorded; one of these is the duration of encoding a single frame of video data, recorded as the reference duration. Given a category, the reference duration for encoding a single frame of video data in that category can then be queried in the reference video parameters for each coding mode.
Step 1023, for a plurality of encoding modes, respectively calculating products between the reference time length and the frame number of the video data to obtain first time consumption for encoding the video data.
For the current video data, the frame number of the current video data can be queried, and for a plurality of coding modes, under a given coding mode, the product between the reference time length and the frame number of the video data is calculated, so that the first time consumption for coding the video data by calling corresponding resources at the current electronic equipment according to the coding mode is obtained.
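As a minimal sketch of steps 1021-1023; the data-structure and function names are assumptions introduced here, not taken from the patent:

```kotlin
// Reference video parameters kept per (category, encoding mode); field names
// are illustrative assumptions.
data class ReferenceVideoParams(
    val perFrameEncodeMillis: Double, // reference duration for encoding one frame
    val referenceBitrateKbps: Double  // reference code rate of the encoded output
)

// First time consumption: reference per-frame duration multiplied by the
// number of frames in the video data.
fun predictFirstTimeConsumptionMillis(
    frameCount: Int,
    params: ReferenceVideoParams
): Double = params.perFrameEncodeMillis * frameCount
```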
Step 103, predicting second time consumption for transmitting the video data encoded according to the plurality of encoding modes to the service platform respectively.
The file sizes of the video data encoded according to different encoding modes are different, and the environments of the networks where the electronic devices are located are different, so that the time consumed for transmitting the video data encoded according to different encoding modes to the service platform in different electronic devices is different.
In general, the smaller the file size of the encoded video data and the better the network environment, the less time transmission takes; conversely, the larger the file size of the encoded video data and the worse the network environment, the more time transmission takes.
In this embodiment, the time consumed for transmitting the video data encoded according to the plurality of encoding modes to the service platform may be respectively predicted in the current electronic device, and may be denoted as the second time consumed.
In one embodiment of the present invention, step 103 may include the steps of:
step 1031, classifying the video data into coding-related categories.
In this embodiment, a plurality of categories related to encoding may be preset, and each category is associated with a reference video parameter.
Illustratively, the resolution of the video data is queried, and the video data is divided into the category configured for that resolution.
In this embodiment, the video data is classified in the encoded dimension, so that the number of samples (i.e., video data) for accumulating the training reference video parameters can be increased, and the accuracy of the reference video parameters can be improved, thereby improving the accuracy for predicting the second time consumption.
Step 1032, for the multiple coding modes, respectively querying the reference code rate for coding the video data in the category in the reference video parameters.
For each coding mode of each category, a plurality of reference video parameters are recorded, wherein one reference video parameter is the code rate of video data coding, and the code rate is recorded as the reference code rate.
Step 1033, detecting network state with the service platform.
An application in the electronic device establishes a session with the service platform, in which the network state between the application in the electronic device and the service platform, such as the network type (e.g. WiFi, mobile cellular), bandwidth and packet loss rate, can be detected in real time.
Step 1034, for a plurality of coding modes, calculating a second time consumption for transmitting the coded video data to the service platform according to the reference code rate in the network state.
The code rate is the number of data bits transmitted per unit time during data transmission, in kbps, i.e. kilobits per second. Given the network state, the second time consumption required to transmit the encoded video data to the service platform at the reference code rate can therefore be estimated.
In a specific implementation, if the network state includes the bandwidth, the total duration of the video data may be queried; for each coding mode, the product of the reference code rate and the total duration is calculated as the file size of the encoded video data, and the ratio of that file size to the bandwidth is calculated as the second time consumption of transmitting the encoded video data to the service platform.
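A minimal sketch of this estimate, assuming consistent units (kilobits and kilobits per second); the function name is an assumption:

```kotlin
// Second time consumption: estimated file size (reference bitrate x total
// duration) divided by the measured bandwidth.
fun predictSecondTimeConsumptionSeconds(
    totalDurationSeconds: Double,
    referenceBitrateKbps: Double,
    bandwidthKbps: Double
): Double {
    val estimatedFileSizeKbits = referenceBitrateKbps * totalDurationSeconds
    return estimatedFileSizeKbits / bandwidthKbps
}
```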
Step 104, for the same encoding mode, calculate the total time spent publishing the video data to the service platform with reference to the first time spent and the second time spent.
For a given encoding mode, under the condition of evaluating the first time consumption of encoding and the second time consumption of transmission, the total time consumption for publishing the video data to the service platform, namely, the time consumption for encoding the video data in the encoding mode in the electronic equipment and transmitting the encoded video data to the service platform, can be calculated by taking the first time consumption and the second time consumption as references.
In the process of encoding the video data according to an encoding mode in the electronic device and transmitting the encoded video data to the service platform, there may be other operations besides encoding and transmission, for example writing the encoded video data from the encoding buffer queue into the transmission buffer queue; these other operations take little time and can be ignored. The first time consumption of encoding and the second time consumption of transmission are the main costs, so the sum of the first time consumption and the second time consumption can be calculated as the total time consumption of publishing the video data to the service platform.
Step 105, selecting one of the coding modes as the target coding mode according to the total time consumption.
In this embodiment, screening rules may be designed in advance for publishing according to the specific service requirements of the service scenario. If the total time consumption of a certain coding mode satisfies the screening rules, indicating that the coding mode suits the current service scenario, that coding mode may be selected and recorded as the target coding mode.
By comparing the total time consumptions and selecting the coding mode corresponding to the smallest total time consumption as the target coding mode, the time the user spends waiting for the video data to be published can be reduced.
Of course, the above screening rule is merely an example. When implementing this embodiment, other screening rules may be set according to the actual situation; for example, among the n smallest total time consumptions (n being a positive integer), other factors (such as the resource usage state of the electronic device) may be added to select a suitable coding mode (such as one whose resources currently have a low occupancy rate) as the target coding mode, which is not limited in this embodiment. Besides the above screening rules, those skilled in the art may also adopt other screening rules according to actual needs, which is likewise not limited in this embodiment.
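To make steps 104-105 concrete under the simplest rule (smallest total time consumption wins), here is an illustrative, self-contained sketch; the type and field names, and the example numbers, are assumptions. The first and second time consumption values could come from helpers like the ones sketched in the earlier steps:

```kotlin
// Total time consumption is the sum of the predicted encoding time and the
// predicted upload time; the mode with the smallest total is chosen.
data class ModePrediction(
    val modeName: String,
    val firstTimeSeconds: Double,  // predicted encoding time
    val secondTimeSeconds: Double  // predicted upload time
) {
    val totalTimeSeconds: Double get() = firstTimeSeconds + secondTimeSeconds
}

fun selectTargetMode(predictions: List<ModePrediction>): ModePrediction =
    predictions.minByOrNull { it.totalTimeSeconds }
        ?: error("no candidate encoding modes")

// Example with made-up numbers: on a slow network, soft coding's smaller file
// wins even though its encoding step is slower.
fun main() {
    val soft = ModePrediction("soft", firstTimeSeconds = 6.0, secondTimeSeconds = 8.0)
    val hard = ModePrediction("hard", firstTimeSeconds = 2.0, secondTimeSeconds = 15.0)
    println(selectTargetMode(listOf(soft, hard)).modeName)  // prints "soft"
}
```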
And 106, encoding the video data according to the target encoding mode and transmitting the video data to a service platform.
In the electronic device, the corresponding resources (such as the CPU, GPU, etc.) are called to encode the video data according to the target encoding mode, and the encoded video data is transmitted to the service platform in the session, thereby publishing the video data to the service platform.
In this embodiment, video data to be published to a service platform is generated; a first time consumption of encoding the video data according to each of a plurality of encoding modes is predicted; a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform is predicted; for the same encoding mode, the total time consumption of publishing the video data to the service platform is calculated with reference to the first time consumption and the second time consumption; one of the encoding modes is selected as a target encoding mode according to the plurality of total time consumptions; and the video data is encoded according to the target encoding mode and transmitted to the service platform. This embodiment can combine the advantages of different encoding modes: the time consumption of the two main operations in publishing, encoding and uploading, is considered comprehensively according to the actual situation, and a suitable encoding mode is selected to encode the video data. Flexibility is high, the time consumption of encoding and the time consumption of uploading are balanced, the time consumed to publish the video data can be optimized, the time the user waits for the video data to be published is reduced, the efficiency of publishing video data is improved, and the video resources of the service platform are enriched.
For example, when the network state is good, the first time consumption of encoding accounts for a relatively large share of the total time consumption, and hard encoding may be selected to save on the first time consumption of encoding; when the network state is poor, the second time consumption of uploading accounts for a relatively large share of the total time consumption, and soft encoding may be selected to compress the file size of the video data as much as possible and save on the second time consumption of uploading.
For another example, the first time consumption of soft coding and the first time consumption of hard coding of different electronic devices are different, the CPU of some types of electronic devices is better, and the speed of soft coding and the speed of hard coding are both faster, so that the first time consumption of soft coding and the first time consumption of hard coding are both smaller, and at this time, the advantage of using soft coding is larger; some types of electronic devices have a relatively poor CPU, and hard-coded speeds may be significantly faster than soft-coded speeds, where the advantage of using hard-coding is greater.
Example two
Fig. 2 is a flowchart of a video publishing method according to a second embodiment of the present invention, in which an operation of updating a reference video parameter is added on the basis of the above embodiment. As shown in fig. 2, the method includes:
Step 201, generating video data to be published to a service platform.
Step 202, dividing video data into categories related to coding.
Wherein the category is associated with a reference video parameter.
Step 203, predicting the first time consumption of encoding the video data according to the plurality of encoding modes according to the reference video parameters.
In a specific implementation, for a plurality of coding modes, reference duration for coding the single-frame video data in the category can be respectively queried in the reference video parameters, and for a plurality of coding modes, products between the reference duration and the frame number of the video data are respectively calculated to obtain first time consumption for coding the video data.
Step 204, predicting the second time consumption of transmitting the video data encoded according to the plurality of encoding modes to the service platform according to the reference video parameters.
In a specific implementation, for each of the plurality of coding modes, the reference code rate for encoding video data in the category may be queried in the reference video parameters, the network state with the service platform is detected, and for each of the plurality of coding modes, the second time consumption of transmitting the encoded video data to the service platform at the reference code rate under that network state is calculated.
Step 205, for the same encoding mode, calculate the total time spent publishing the video data to the service platform with reference to the first time spent and the second time spent.
Step 206, selecting one of the coding modes as the target coding mode according to the total time consumption.
Step 207, encoding the video data according to the target encoding mode and transmitting the video data to the service platform.
And step 208, recording parameters of the video data after being encoded according to the target encoding mode as actual video parameters.
The video data is encoded in the electronic device according to the target encoding mode, and the parameters of the video data after encoding can be recorded as actual video parameters.
The actual video parameters include the duration of encoding a single frame of video data, recorded as the actual duration, and the code rate of the encoded video data, recorded as the actual code rate.
Further, for the actual duration, the total duration of encoding the entire video data may be recorded, and the number of frames of the video data may be queried, and the total duration divided by the number of frames may be obtained as the actual duration of encoding the single frame video data.
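As a trivial but concrete illustration of that division (names are assumptions):

```kotlin
// Actual per-frame encoding duration: total encoding time for the whole video
// divided by its number of frames.
fun actualPerFrameDurationMillis(totalEncodeMillis: Double, frameCount: Int): Double =
    totalEncodeMillis / frameCount
```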
Step 209, updating the reference video parameters according to the actual video parameters.
For a given category, the actual video parameters under a given coding mode (i.e., a target coding mode) are continuously accumulated, and the reference video parameters of the coding mode (i.e., the target coding mode) are updated through a large number of actual video parameters, so that the reference video parameters more reflect the actual coding capability of the electronic equipment using the coding mode (i.e., the target coding mode).
In one update, all actual video parameters of the history may be queried, including the current actual video parameters.
Weights are configured for the actual video parameters according to the timestamps at which they were recorded. Considering that the state of the electronic device changes over time, the weights are positively correlated with the timestamps: the larger the timestamp, i.e. the closer the recording time is to the present, the larger the weight; conversely, the smaller the timestamp, i.e. the farther the recording time is from the present, the smaller the weight. The weighted actual video parameters thus better reflect the current state of the electronic device.
Configuring the weights means calculating the product between each weight and the corresponding actual video parameter, so that the sum of the weighted actual video parameters (i.e. of the products) can be calculated as the new reference video parameter.
Further, if the actual video parameter is an actual duration of encoding the single-frame video data, a corresponding weight may be configured for the actual duration according to the timestamp, a product between the actual duration and the weight may be calculated, and a sum value between the products may be calculated as a new reference duration.
If the actual video parameter is the actual code rate of the video data, the corresponding weight can be configured for the actual code rate according to the time stamp, the product between the actual code rate and the weight is calculated, and the sum value between the products is calculated to be used as a new reference code rate.
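An illustrative sketch of this timestamp-weighted update, applicable to either the reference duration or the reference code rate; the linear, normalised weighting below is only one possible choice, since the text only requires weights positively correlated with the timestamps:

```kotlin
// One recorded actual video parameter (e.g. actual per-frame duration or
// actual code rate) together with the time it was recorded.
data class Sample(val value: Double, val timestampMillis: Long)

fun updateReferenceParameter(history: List<Sample>): Double {
    require(history.isNotEmpty()) { "need at least one recorded sample" }
    val earliest = history.minOf { it.timestampMillis }
    // Raw weights grow with recency; normalising them (an added assumption)
    // keeps the result on the same scale as the samples.
    val rawWeights = history.map { (it.timestampMillis - earliest + 1).toDouble() }
    val weightSum = rawWeights.sum()
    return history.zip(rawWeights).sumOf { (sample, w) -> sample.value * (w / weightSum) }
}
```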
In this embodiment, the actual video parameters of the actual video data encoding are recorded, so that the reference video parameters are updated, the reference video parameters are continuously accumulated, the state of the electronic device is continuously reflected, and the accuracy of the reference video parameters is maintained.
Example III
Fig. 3 is a flowchart of a video publishing method according to a third embodiment of the present invention, where an operation of adjusting a target coding mode is added on the basis of the above embodiment. As shown in fig. 3, the method includes:
step 301, generating video data to be published to a service platform.
Step 302, predicting a first time consumption of encoding the video data according to a plurality of encoding modes, respectively.
Step 303, predicting the second time consumption of transmitting the video data encoded according to the plurality of encoding modes to the service platform.
Step 304, for the same encoding mode, calculate the total time spent publishing the video data to the service platform with reference to the first time spent and the second time spent.
Step 305, selecting one of the encoding modes as the target encoding mode according to the total time consumption.
Step 306, counting the frequency of use of each coding mode.
In this embodiment, the frequency of use of each encoding mode may be counted, that is, the ratio between the number of times a certain encoding mode has been used and the number of times all encoding modes have been used when the electronic device has historically encoded video data; the statistics may or may not be kept per category, which is not limited in this embodiment.
Step 307, if the frequency of use of a certain coding scheme is greater than a preset first threshold, setting other coding schemes as target coding schemes.
If the frequency of use of a certain coding mode is greater than a preset first threshold, meaning that this coding mode is used very often, then in order to prevent the reference video parameters of the other coding modes from never being updated, one of the other coding modes may be set as the target coding mode. By actively reducing the proportion in which certain coding modes are used, the coding capability of the electronic device for different coding modes can be probed, providing a factual basis for the subsequent screening of coding modes.
For example, if the frequency of use of soft coding is greater than 90% (first threshold), hard coding may be set as the target coding scheme.
Step 308, if the frequency of use of a certain coding scheme is less than a preset second threshold, setting the coding scheme as the target coding scheme.
If the frequency of use of a certain coding mode is less than a preset second threshold, meaning that this coding mode is rarely used, then in order to prevent the reference video parameters of this coding mode from never being updated, this coding mode may be set as the target coding mode. By actively increasing the proportion in which certain coding modes are used, the coding capability of the electronic device for different coding modes can be probed, providing a factual basis for the subsequent screening of coding modes.
Wherein the first threshold is greater than the second threshold.
For example, if the frequency of use of the soft coding is less than 10% (second threshold), the soft coding may be set as the target coding scheme.
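A minimal sketch of this frequency-based override, using the 90%/10% thresholds from the examples above; the function and parameter names are assumptions:

```kotlin
// If one coding mode's usage share exceeds the first threshold, force some
// other mode so its reference parameters keep being refreshed; if a mode's
// share falls below the second threshold, force that mode instead.
fun adjustTargetMode(
    selectedMode: String,
    usageShare: Map<String, Double>,  // mode name -> fraction of past publishes
    firstThreshold: Double = 0.9,
    secondThreshold: Double = 0.1
): String {
    val overused = usageShare.entries.firstOrNull { it.value > firstThreshold }?.key
    if (overused != null) {
        // Any mode other than the dominant one; here simply the first such mode.
        return usageShare.keys.first { it != overused }
    }
    val underused = usageShare.entries.firstOrNull { it.value < secondThreshold }?.key
    return underused ?: selectedMode
}
```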
Step 309, encoding the video data according to the target encoding mode and transmitting the video data to the service platform.
Example IV
Fig. 4 is a flowchart of a video publishing method according to a fourth embodiment of the present invention, in which an operation of probing the reference video parameters is added on the basis of the above embodiments. As shown in fig. 4, the method includes:
step 401, receiving video data for a sounding reference signal as reference data.
In situations such as a cold start of the application (for example, first registration and use, or reinstallation and use after uninstallation) or no video data having been published for a long time, some categories of reference video parameters may be empty or long out of date and thus fail to reflect the real-time state of the electronic device. In this case, the service platform may push several pieces of video data for probing the coding modes to the electronic device, recorded as reference data.
Since the reference video parameters are associated with categories of video data, the reference data may belong to multiple categories or specified categories, such as video data at multiple resolutions, in order to accurately update the reference video parameters.
In general, the reference data is pushed when the application program is in an idle state, so as to avoid conflict with the operation of normal application program use by a user.
One or more idle conditions may be set in the application program, and when the one or more idle conditions are satisfied, it is determined that the application program is in an idle state, and the operation of normal use of the application program by the user is different for different service scenarios of different application programs, which is not limited in this embodiment.
For example, if the application is a short video application, the user normally uses the application to mainly watch the short video, download the short video, and issue the short video, and if the application does not watch the short video, download the short video, and issue the short video, the application may be considered to be in an idle state.
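A sketch of such an idle check for the short video example, purely illustrative (the patent leaves the concrete idle conditions open):

```kotlin
// The application is considered idle when the user is neither watching,
// downloading nor publishing short videos, as in the example above.
data class AppActivity(val watching: Boolean, val downloading: Boolean, val publishing: Boolean)

fun isIdle(activity: AppActivity): Boolean =
    !activity.watching && !activity.downloading && !activity.publishing
```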
Step 402, classifying the reference data into categories related to coding.
In this embodiment, a plurality of categories related to encoding may be preset, and each category is associated with a reference video parameter.
Illustratively, the resolution of the video data is queried, and the video data is divided into the category configured for that resolution.
Step 403, coding the reference data according to a plurality of coding modes.
In the electronic device, the reference data is encoded according to a plurality of encoding modes by calling corresponding resources (such as a CPU, a GPU and the like).
Step 404, recording parameters of the reference data after encoding according to a plurality of encoding modes as actual video parameters.
The reference data are encoded in the electronic device according to a plurality of encoding modes, and the parameters of the encoded reference data can be recorded as actual video parameters.
The actual video parameters include the duration of encoding a single frame of reference data, recorded as the actual duration, and the code rate of the encoded reference data, recorded as the actual code rate.
Step 405, updating the reference video parameters according to the actual video parameters.
In a specific implementation, querying all actual video parameters of the history record; configuring weights for the actual video parameters according to the time stamps of the recorded actual video parameters, wherein the weights are positively correlated with the time stamps; the sum between the actual video parameters after the configuration weights are calculated as new reference video parameters.
Step 406, generating video data to be published to the service platform.
Step 407, dividing the video data into coding-related categories.
Wherein the category is associated with a reference video parameter.
Step 408, predicting the first time consumption of encoding the video data according to the plurality of encoding modes according to the reference video parameters, respectively.
In a specific implementation, for a plurality of coding modes, reference duration for coding the single-frame video data in the category can be respectively queried in the reference video parameters, and for a plurality of coding modes, products between the reference duration and the frame number of the video data are respectively calculated to obtain first time consumption for coding the video data.
Step 409, predicting the second time consumption of transmitting the video data encoded according to the plurality of encoding modes to the service platform according to the reference video parameters.
In a specific implementation, for each of the plurality of coding modes, the reference code rate for encoding video data in the category may be queried in the reference video parameters, the network state with the service platform is detected, and for each of the plurality of coding modes, the second time consumption of transmitting the encoded video data to the service platform at the reference code rate under that network state is calculated.
Step 410, for the same encoding mode, calculate the total time spent publishing the video data to the service platform with reference to the first time spent and the second time spent.
Step 411, selecting one of the encoding modes as the target encoding mode according to the total time consumption.
Step 412, encoding the video data according to the target encoding mode and transmitting the video data to the service platform.
Example five
Fig. 5 is a schematic structural diagram of a video publishing device according to a fifth embodiment of the present invention. As shown in fig. 5, the apparatus includes:
A video data generating module 501, configured to generate video data to be published to a service platform;
A first time-consuming prediction module 502, configured to predict first time consuming for encoding the video data according to a plurality of encoding modes, respectively;
A second time-consuming prediction module 503, configured to predict second time consumption of transmitting the video data encoded according to the plurality of encoding modes to the service platform, respectively;
A total time consumption calculation module 504, configured to calculate, for the same encoding manner, a total time consumption of publishing the video data to the service platform with reference to the first time consumption and the second time consumption;
a coding mode defining module 505, configured to select one of the coding modes as a target coding mode according to a plurality of total time consumptions;
And the video publishing module 506 is configured to encode the video data according to the target encoding mode and transmit the encoded video data to the service platform.
In one embodiment of the present invention, the first time-consuming prediction module 502 includes:
A category classification module, configured to classify the video data into categories related to encoding, where the categories are associated with reference video parameters;
The reference time length inquiry module is used for inquiring the reference time length for encoding the video data of the single frame in the category in the reference video parameters for a plurality of encoding modes respectively;
and the frame duration accumulating module is used for respectively calculating products between the reference duration and the frame number of the video data for a plurality of coding modes to obtain first time consumption for coding the video data.
In one embodiment of the present invention, the second time-consuming prediction module 503 includes:
A category classification module, configured to classify the video data into categories related to encoding, where the categories are associated with reference video parameters;
The reference code rate query module is used for querying the reference code rate for encoding the video data in the category in the reference video parameters respectively for a plurality of encoding modes;
The network state detection module is used for detecting the network state between the network state detection module and the service platform;
and the network transmission calculation module is used for calculating second time consumption for transmitting the video data after coding to the service platform according to the reference code rate in the network state aiming at a plurality of coding modes.
In one embodiment of the present invention, the category classification module includes:
the resolution query module is used for querying the resolution of the video data;
and the resolution classification module is used for classifying the video data into the category configured for that resolution.
In one embodiment of the invention, the network state includes bandwidth;
The network transmission calculation module includes:
The total duration query module is used for querying the total duration of the video data;
The file size calculation module is used for calculating products between the reference code rate and the total duration respectively for a plurality of coding modes to be used as the file size of the video data after coding;
and the bandwidth calculation module is used for calculating, for each of the plurality of coding modes, the ratio between the file size and the bandwidth, as the second time consumption of transmitting the encoded video data to the service platform.
In one embodiment of the present invention, the encoding mode defining module 505 includes:
a total time consumption comparison module for comparing a plurality of the total time consumption;
And the extremum selecting module is used for selecting the coding mode corresponding to the total consumed time with the minimum value as a target coding mode.
In one embodiment of the present invention, further comprising:
The first actual parameter recording module is used for recording parameters of the video data after being encoded according to the target encoding mode as actual video parameters;
And the reference parameter updating module is used for updating the reference video parameters according to the actual video parameters.
In one embodiment of the present invention, the reference parameter updating module includes:
the video query module is used for querying all the actual video parameters of the history record;
the weight configuration module is used for configuring weights for the actual video parameters according to the time stamps for recording the actual video parameters, and the weights are positively correlated with the time stamps;
and the weight and calculation module is used for calculating the sum value between the actual video parameters after the weights are configured to serve as new reference video parameters.
In one embodiment of the present invention, further comprising:
the frequency-of-use statistics module is used for counting the frequency of use of each coding mode;
a first coding mode setting module, configured to set, if the frequency of use of a certain coding mode is greater than a preset first threshold, other coding modes as target coding modes;
a second coding mode setting module, configured to set a coding mode as a target coding mode if the frequency of use of a certain coding mode is less than a preset second threshold;
The first threshold is greater than the second threshold.
In one embodiment of the present invention, further comprising:
the reference data receiving module is used for receiving video data used for detecting the coding modes as reference data;
the reference data dividing module is used for dividing the reference data into coding-related categories, where the categories are associated with reference video parameters;
the reference video coding module is used for coding the reference data according to each of the plurality of coding modes;
the second actual parameter recording module is used for recording the parameters of the reference data encoded according to each of the plurality of coding modes as actual video parameters;
and the reference parameter updating module is used for updating the reference video parameters according to the actual video parameters.
In one embodiment of the present invention, the reference data dividing module includes:
the reference query module is used for querying the resolution of the reference data;
and the reference classification module is used for classifying the reference data into the category configured for the resolution.
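A sketch of this calibration flow, in which a reference clip is encoded with every candidate mode and the measured per-frame time and achieved code rate are appended to the history used for the reference parameter update; the `encode` callable and the recorded field names are placeholders assumed for illustration:

```python
import time

def probe_encoding_modes(reference_frames, fps, modes, encode, history):
    """Encode reference data with each candidate mode and record the actual
    per-frame encoding time and achieved code rate as actual video parameters.

    `encode(frames, mode)` is assumed to return the encoded size in bits;
    `history` maps mode -> list of recorded parameter dicts."""
    clip_seconds = len(reference_frames) / fps
    for mode in modes:
        start = time.monotonic()
        encoded_bits = encode(reference_frames, mode)
        elapsed = time.monotonic() - start
        history.setdefault(mode, []).append({
            "timestamp": time.time(),
            "seconds_per_frame": elapsed / len(reference_frames),
            "bits_per_second": encoded_bits / clip_seconds,
        })
    return history
```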
The video publishing device provided by the embodiments of the invention can execute the video publishing method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to execution of that method.
Example six
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention.
As shown in fig. 6, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12 and a Random Access Memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the video distribution method.
In some embodiments, the video distribution method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the video distribution method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the video distribution method in any other suitable way (e.g., by means of firmware).

Claims (12)

1. A video distribution method, comprising:
generating video data to be released to a service platform;
predicting a first time consumption of encoding the video data in each of a plurality of encoding modes;
predicting a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform;
for the same encoding mode, calculating a total time consumption of publishing the video data to the service platform with reference to the first time consumption and the second time consumption;
selecting one of the encoding modes as a target encoding mode according to the plurality of total time consumptions;
encoding the video data according to the target encoding mode and transmitting the video data to the service platform;
wherein, after selecting one of the encoding modes as the target encoding mode according to the plurality of total time consumptions, the method further comprises:
counting the frequency of use of each encoding mode;
if the frequency of use of a certain encoding mode is greater than a preset first threshold, setting another encoding mode as the target encoding mode;
if the frequency of use of a certain encoding mode is less than a preset second threshold, setting that encoding mode as the target encoding mode;
wherein the first threshold is greater than the second threshold.
2. The method of claim 1, wherein predicting the first time consumption of encoding the video data in each of the plurality of encoding modes comprises:
dividing the video data into categories related to coding, the categories being associated with reference video parameters;
for each of the plurality of encoding modes, querying, in the reference video parameters, the reference duration for encoding a single frame of the video data in the category;
and, for each of the plurality of encoding modes, calculating the product of the reference duration and the number of frames of the video data to obtain the first time consumption of encoding the video data.
3. The method of claim 1, wherein predicting the second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform comprises:
dividing the video data into categories related to coding, the categories being associated with reference video parameters;
for each of the plurality of encoding modes, querying, in the reference video parameters, the reference code rate for encoding the video data in the category;
detecting the network state of the connection to the service platform;
and, for each of the plurality of encoding modes, calculating the second time consumption of transmitting the encoded video data to the service platform at the reference code rate under the network state.
4. The method according to claim 2 or 3, wherein said dividing the video data into categories related to coding comprises:
querying the resolution of the video data;
and dividing the video data into the category configured for the resolution.
5. The method according to claim 3, wherein the network state comprises bandwidth;
and the calculating, for each of the plurality of encoding modes, the second time consumption of transmitting the encoded video data to the service platform at the reference code rate under the network state comprises:
querying the total duration of the video data;
for each of the plurality of encoding modes, calculating the product of the reference code rate and the total duration as the file size of the encoded video data;
and, for each of the plurality of encoding modes, calculating the ratio of the file size to the bandwidth as the second time consumption of transmitting the encoded video data to the service platform.
6. The method according to claim 1, wherein said selecting one of the encoding modes as a target encoding mode according to the plurality of total time consumptions comprises:
comparing the plurality of total time consumptions;
and selecting the encoding mode corresponding to the minimum total time consumption as the target encoding mode.
7. The method according to claim 2 or 3, further comprising:
recording the parameters of the video data encoded according to the target encoding mode as actual video parameters;
and updating the reference video parameters according to the actual video parameters.
8. The method of claim 7, wherein said updating the reference video parameters according to the actual video parameters comprises:
querying all the actual video parameters recorded in the history;
configuring weights for the actual video parameters according to the time stamps at which the actual video parameters were recorded, wherein the weights are positively correlated with the time stamps;
and calculating the weighted sum of the actual video parameters as the new reference video parameters.
9. The method according to any one of claims 1-3, 5-6, further comprising:
receiving video data used for detecting an encoding mode as reference data;
dividing the reference data into categories related to coding, wherein the categories are associated with reference video parameters;
encoding the reference data according to each of the plurality of encoding modes;
recording the parameters of the reference data encoded according to each of the plurality of encoding modes as actual video parameters;
and updating the reference video parameters according to the actual video parameters.
10. A video distribution apparatus, comprising:
the video data generation module is used for generating video data to be released to the service platform;
a first time consumption prediction module, used for predicting a first time consumption of encoding the video data according to each of a plurality of encoding modes;
a second time consumption prediction module, used for predicting a second time consumption of transmitting the video data encoded according to each of the plurality of encoding modes to the service platform;
a total time consumption calculation module, used for calculating, for the same encoding mode, a total time consumption of publishing the video data to the service platform with reference to the first time consumption and the second time consumption;
an encoding mode defining module, used for selecting one of the encoding modes as a target encoding mode according to the plurality of total time consumptions;
a video publishing module, used for encoding the video data according to the target encoding mode and transmitting the video data to the service platform;
a frequency-of-use statistics module, used for counting the frequency of use of each encoding mode;
a first encoding mode setting module, used for setting another encoding mode as the target encoding mode if the frequency of use of a certain encoding mode is greater than a preset first threshold;
a second encoding mode setting module, used for setting a certain encoding mode as the target encoding mode if its frequency of use is less than a preset second threshold;
wherein the first threshold is greater than the second threshold.
11. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the video distribution method of any one of claims 1-9.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed, causes a processor to implement the video distribution method according to any one of claims 1 to 9.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210247614.4A CN114650437B (en) 2022-03-14 2022-03-14 Video publishing method, device, equipment and storage medium
PCT/CN2023/081306 WO2023174254A1 (en) 2022-03-14 2023-03-14 Video posting method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210247614.4A CN114650437B (en) 2022-03-14 2022-03-14 Video publishing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114650437A (en) 2022-06-21
CN114650437B (en) 2024-04-16

Family

ID=81994423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210247614.4A Active CN114650437B (en) 2022-03-14 2022-03-14 Video publishing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114650437B (en)
WO (1) WO2023174254A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114650437B (en) * 2022-03-14 2024-04-16 百果园技术(新加坡)有限公司 Video publishing method, device, equipment and storage medium
CN116993839B (en) * 2023-09-26 2024-01-26 苏州元脑智能科技有限公司 Coding mode screening method and device, electronic equipment and storage medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10356406B2 (en) * 2016-01-19 2019-07-16 Google Llc Real-time video encoder rate control using dynamic resolution switching
CN113630604A (en) * 2020-05-09 2021-11-09 北京密境和风科技有限公司 Video data encoding method, device, equipment and storage medium
CN113938682A (en) * 2020-06-29 2022-01-14 北京金山云网络技术有限公司 Video coding method and device and electronic equipment
CN115412731B (en) * 2021-05-11 2024-08-23 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN114650437B (en) * 2022-03-14 2024-04-16 百果园技术(新加坡)有限公司 Video publishing method, device, equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012154157A1 (en) * 2011-05-06 2012-11-15 Google Inc. Apparatus and method for dynamically changing encoding scheme based on resource utilization
JP2012012933A (en) * 2011-08-29 2012-01-19 Sumitomo (Shi) Construction Machinery Co Ltd Shovel comprising motor generator for revolving
CN112019878A (en) * 2019-05-31 2020-12-01 广州市百果园信息技术有限公司 Video decoding and editing method, device, equipment and storage medium
CN110996164A (en) * 2020-01-02 2020-04-10 北京字节跳动网络技术有限公司 Video distribution method and device, electronic equipment and computer readable medium
CN112954400A (en) * 2020-08-19 2021-06-11 赵蒙 Deep learning-based data coding control method and system and big data platform
CN112040333A (en) * 2020-09-04 2020-12-04 北京达佳互联信息技术有限公司 Video distribution method, device, terminal and storage medium
CN112312135A (en) * 2020-10-23 2021-02-02 广州市百果园网络科技有限公司 Video publishing method and device, computer equipment and storage medium
CN112533065A (en) * 2020-12-11 2021-03-19 北京达佳互联信息技术有限公司 Method and device for publishing video, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Scalable Wireless Video Streaming over Real-Time Publish Subscribe Protocol (RTPS); B. Al-Madani et al.; 2013 IEEE/ACM 17th International Symposium on Distributed Simulation and Real Time Applications; 2013-12-23; pp. 221-230 *
Design and Implementation of a Short-Video Editing and Social Sharing System Based on Android; Wang Yitong; China Master's Theses Full-text Database, Information Science and Technology (Monthly), No. 02, 2017; full text *

Also Published As

Publication number Publication date
CN114650437A (en) 2022-06-21
WO2023174254A1 (en) 2023-09-21

Similar Documents

Publication Publication Date Title
CN114650437B (en) Video publishing method, device, equipment and storage medium
CN113542795B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN102685472B (en) Method, device and system of data transmission
CN112312135B (en) Video publishing method and device, computer equipment and storage medium
CN106604137B (en) Method and device for predicting video watching duration
CN114222194A (en) Video code stream adjusting method, device and system
CN112153415B (en) Video transcoding method, device, equipment and storage medium
CN111970565A (en) Video data processing method and device, electronic equipment and storage medium
CN115701709A (en) Video coding method and device, computer readable medium and electronic equipment
CN115589489B (en) Video transcoding method, device, equipment, storage medium and video on demand system
US20130286227A1 (en) Data Transfer Reduction During Video Broadcasts
CN114422792A (en) Video image compression method, device, equipment and storage medium
CN110912922A (en) Image transmission method and device, electronic equipment and storage medium
CN104994407A (en) Concentrated self-adaptive video transcoding method
CN116980662A (en) Streaming media playing method, streaming media playing device, electronic equipment, storage medium and program product
CN117676239A (en) Video transmission method, device, equipment and medium
CN111510715B (en) Video processing method, system, computer device and storage medium
Tao et al. Energy efficient video QoE optimization for dynamic adaptive HTTP streaming over wireless networks
CN116996649B (en) Screen projection method and device, storage medium and electronic equipment
Moldovan et al. Energy-efficient adaptation logic for http streaming in mobile networks
CN116016992A (en) Coding method and device
CN115103209B (en) Method for realizing multiple speed playback of monitoring video
CN118138801B (en) Video data processing method and device, electronic equipment and storage medium
CN118524246A (en) Method and device for controlling screen throwing
CN111314779B (en) Method and device for determining streaming media transmission quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant