
CN114629797B - Bandwidth prediction method, model generation method and device - Google Patents

Bandwidth prediction method, model generation method and device

Info

Publication number
CN114629797B
CN114629797B (Application CN202210238602.5A)
Authority
CN
China
Prior art keywords
sample
model
live broadcast
prediction model
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210238602.5A
Other languages
Chinese (zh)
Other versions
CN114629797A (en)
Inventor
吴蕾蕾
胡雨涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210238602.5A priority Critical patent/CN114629797B/en
Publication of CN114629797A publication Critical patent/CN114629797A/en
Application granted granted Critical
Publication of CN114629797B publication Critical patent/CN114629797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/2385 Channel allocation; Bandwidth allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application provide a bandwidth prediction method, a model generation method, and a device. The method includes: determining network information of the network used by a live broadcast end to transmit live broadcast data; determining, according to the network information, a target prediction model from prediction models that correspond to a plurality of levels, where each prediction model is obtained by aggregating at least one sub-model of the same level, and each sub-model is trained on sample available bandwidth and sample network information; and predicting the target available bandwidth of the live broadcast end using the target prediction model and the network information. The technical solution provided by the embodiments of the present application improves the accuracy of available bandwidth prediction.

Description

Bandwidth prediction method, model generation method and device
Technical Field
Embodiments of the present application relate to the technical field of data transmission, and in particular to a bandwidth prediction method, a model generation method, and a device.
Background
When two communicating parties stream data over a network, for example when live broadcast data is transmitted between a live broadcast end and a server in a live streaming scenario, varying network conditions cause transmission quality problems of different degrees, such as transmission delay and degraded playback quality. Predicting the available bandwidth and adjusting the encoding and/or transmission code rate based on it helps improve data transmission quality.
In the prior art, a rule is usually preconfigured that specifies the available bandwidths corresponding to different network conditions, and the available bandwidth is predicted from this preset rule. However, because of network heterogeneity and similar factors, such rule-based settings can make the predicted available bandwidth inaccurate.
Disclosure of Invention
Embodiments of the present application provide a bandwidth prediction method, a model generation method, and a computing device, which address the technical problem that available bandwidth prediction in the prior art is inaccurate.
In a first aspect, an embodiment of the present application provides a bandwidth prediction method, including:
determining network information of the network used by a live broadcast end to transmit live broadcast data;
determining, according to the network information, a target prediction model from prediction models that correspond to a plurality of levels, where each prediction model is obtained by aggregating at least one sub-model of the same level, and each sub-model is trained on sample available bandwidth and sample network information;
and predicting the available bandwidth using the target prediction model and the network information.
In a second aspect, an embodiment of the present application provides a method for generating a model, including:
training a plurality of sub-models using the sample available bandwidth and sample network information employed for live broadcast data transmission in each of a plurality of sample network environments;
determining a plurality of levels into which the plurality of sub-models are divided according to the network conditions of their respective sample network environments;
and aggregating at least one sub-model of the same level to obtain a prediction model;
where the prediction models corresponding to the plurality of levels are used to select a target prediction model according to the network information of a live broadcast end, so that the target available bandwidth of the live broadcast end can be predicted using the target prediction model.
In a third aspect, an embodiment of the present application provides a bandwidth prediction method, including:
determining network information of the network used by a client to stream data;
determining, according to the network information, a target prediction model from prediction models that correspond to a plurality of levels, where each prediction model is obtained by aggregating at least one sub-model of the same level, and each sub-model is trained on the sample available bandwidth and sample network information employed for streaming data in a sample network environment;
and predicting the target available bandwidth of the client using the target prediction model and the network information.
In a fourth aspect, an embodiment of the present application provides a method for generating a model, including:
training a plurality of sub-models using the sample available bandwidth and sample network information employed for data streaming in each of a plurality of sample network environments;
determining a plurality of levels into which the plurality of sub-models are divided according to the network conditions of their respective sample network environments;
and aggregating at least one sub-model of the same level to obtain a prediction model;
where the prediction models corresponding to the plurality of levels are used to select a target prediction model according to the network information of a client, so that the target available bandwidth of the client can be predicted using the target prediction model.
In a fifth aspect, an embodiment of the present application provides an electronic device including a storage component and a processing component, where the storage component stores one or more computer instructions that are invoked by the processing component to implement the bandwidth prediction method of the first aspect or of the third aspect.
In a sixth aspect, an embodiment of the present application provides a computing device including a storage component and a processing component, where the storage component stores one or more computer instructions that are invoked by the processing component to implement the bandwidth prediction method of the first or third aspect, or the model generation method of the second or fourth aspect.
In embodiments of the present application, a plurality of sub-models are trained using the sample available bandwidth and sample network information employed for data streaming in a plurality of sample network environments; the sub-models are divided into a plurality of levels according to the network conditions of their respective sample network environments; and at least one sub-model of the same level is aggregated into the prediction model for that level, yielding one prediction model per level. For a given client, a target prediction model is then selected from the per-level prediction models according to the client's network information, and the target available bandwidth for streaming data to or from that client is predicted using the target prediction model. By training multiple levels of prediction models for different network conditions in a machine learning manner, a suitable target prediction model can be chosen based on the client's network information, which improves the accuracy of available bandwidth prediction.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1a illustrates a flow chart of one embodiment of a model generation method provided herein;
FIG. 1b shows a system architecture diagram to which the technical solution of the embodiments of the present application is applied;
FIG. 2 illustrates a flow chart of one embodiment of a bandwidth prediction method provided herein;
FIG. 3 shows another system architecture diagram to which the technical solution of the embodiments of the present application is applied;
FIG. 4 is a flow chart illustrating yet another embodiment of a model generation method provided herein;
FIG. 5 is a flow chart illustrating yet another embodiment of a bandwidth prediction method provided herein;
FIG. 6 is a schematic diagram of an interaction process in a practical application according to an embodiment of the present application;
FIG. 7 is a schematic view showing the structure of an embodiment of a model generating apparatus provided in the present application;
FIG. 8 is a schematic diagram illustrating the construction of one embodiment of a bandwidth prediction apparatus provided herein;
FIG. 9 illustrates a schematic diagram of one embodiment of a computing device provided herein;
fig. 10 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings.
Some of the flows described in the specification, claims, and figures above contain operations that occur in a particular order. It should be understood that these operations may be performed out of the order in which they appear, or in parallel; sequence numbers such as 101 and 102 merely distinguish operations and do not by themselves impose an execution order. The flows may also include more or fewer operations, which may be executed sequentially or in parallel. Note that the terms "first" and "second" herein distinguish different messages, devices, modules, and so on; they do not denote an order, nor do they require that the "first" and "second" items be of different types.
The technical solution of the embodiments of the present application can be applied to any scenario in which two communicating parties stream data over a network, such as live streaming or video services. The two parties may be, for example, a client and a server, or any two parties that establish a connection over any type of network.
A live streaming scenario refers to live broadcast data transmission between a live broadcast end and a server: for example, the live broadcast end transmits recorded live data to the server in real time, or the server transmits requested live-room data to a viewer end in real time. A video service scenario refers to video data transmission between a video playing end and a server, for example the server transmitting stored video to the client, or between a video uploading end and the server, such as uploading a video to the server.
The streamed data may be any type of streaming media data, such as video, audio, images, or animations. The technical solution of the embodiments is therefore equally applicable to transmitting audio, image, animation, or other media stream data between a client and a server, and more generally to streaming data between any two communicating parties; the above are merely examples.
Because predicting the available transmission bandwidth through preset rules is not accurate enough, the inventors considered training a prediction model by machine learning, using the network information collected while streaming data in a sample network environment. Research showed, however, that a prediction model trained in a single sample network environment is applicable only to some networks and is difficult to deploy at scale. After further research, the inventors arrived at the technical solution of this application: in the embodiments, model training and aggregation learning are performed across a plurality of sample network environments to obtain prediction models at a plurality of levels, so that a suitable target prediction model can be selected for available bandwidth prediction according to the network information of the network over which the client streams data, improving prediction accuracy.
To facilitate understanding of the technical solution, technical terms that may appear in the embodiments of the present application are first explained:
Available bandwidth: the maximum transmission rate the current network can sustain without congestion.
Coding rate: the rate at which the data producer generates data.
Transmission code rate: the rate at which the data producer sends the produced data onto the network.
Reception rate: the rate at which the data receiver receives data.
Packet loss rate: the ratio of the number of lost packets to the number of packets transmitted.
Time delay: the time a data packet needs to travel from one end of the network to the other.
Delay jitter: the variation in time delay.
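For concreteness, the terms defined above can be computed from per-packet records roughly as in the following sketch; the `PacketRecord` structure and its field names are our own illustration and do not appear in the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PacketRecord:
    send_ts: float            # send timestamp, in seconds
    recv_ts: Optional[float]  # receive timestamp; None if the packet was lost
    size: int                 # payload size, in bytes

def network_metrics(records):
    """Compute the network parameters defined above from per-packet records."""
    received = [r for r in records if r.recv_ts is not None]
    loss_rate = 1 - len(received) / len(records)          # packet loss rate
    delays = [r.recv_ts - r.send_ts for r in received]
    delay = sum(delays) / len(delays)                     # mean one-way time delay
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0    # delay jitter
    span = max(r.recv_ts for r in received) - min(r.recv_ts for r in received)
    recv_rate = sum(r.size for r in received) / span if span > 0 else 0.0  # bytes/s
    return {"loss_rate": loss_rate, "delay": delay,
            "jitter": jitter, "recv_rate": recv_rate}
```

For instance, four packets of which one is lost yield a packet loss rate of 0.25, and the jitter is the mean absolute change between consecutive one-way delays.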
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Fig. 1a is a flowchart of one embodiment of a model generating method according to the embodiments of the present application, where the method may include the following steps:
101: a plurality of sub-models are trained using sample available bandwidth and sample network information employed for data streaming in a plurality of sample network environments.
The plurality of sample network environments may be screened from many actual network environments, for example selected based on the number of streaming sessions and the duration of each transmission; they may be chosen at any time according to actual requirements.
The network environment between each pair of communicating parties may be regarded as one sample network environment. Alternatively, because network instability and similar factors mean that the network conditions of each streaming session may differ, the network environment of each individual streaming session between a pair of communicating parties may be regarded as one sample network environment.
In practical applications, the two communicating parties may be a client and a server; the server may establish network connections with a plurality of clients and stream data over those connections. Fig. 1b shows a system architecture to which the technical solution of the embodiments of the present application may be applied; the system may include a plurality of clients 100 and a server 200. In a live streaming scenario, the client 100 may be a live broadcast end, specifically a host end or a viewer end; in a video service scenario, the client 100 may be a video playing end, a video uploading end, and so on.
The client 100 may run on an electronic device with network communication capability, such as a mobile phone, tablet computer, computer, or smart watch, and may take the form of a browser, an APP (Application), a web application such as an H5 (HyperText Markup Language, 5th edition) application, a light application (also called an applet or lightweight application), a cloud application, and so on. For ease of understanding, clients are mainly represented by device icons in Fig. 1b.
The server 200 may be a server providing various services, such as one supporting model training or one providing streaming data services. It may be a server of a distributed system, a server incorporating a blockchain, a cloud server, or an intelligent cloud computing server or intelligent cloud host using artificial intelligence technology. The numbers of clients and servers in Fig. 1b are merely illustrative; any number may be deployed according to actual needs.
Training the plurality of sub-models using the sample available bandwidth and sample network characteristics employed for live broadcast data transmission in the plurality of sample network environments may be implemented as: training the plurality of sub-models using the sample available bandwidth and sample network characteristics employed by a plurality of first sample clients during at least one data streaming session each.
For example, in a live streaming scenario, the server may select first sample live broadcast ends from among the live broadcast ends; the network environment between each first sample live broadcast end and the server, or the network environment of each individual live data transmission between them, may serve as a sample network environment. Likewise, in a video service scenario, the server may select first sample video playing ends from among the video playing ends, and the network environment between each first sample video playing end and the server, or of each individual video transmission, may serve as a sample network environment.
One sub-model may be trained in each sample network environment. The sub-models corresponding to the different sample network environments share the same model structure and the same initial parameters; because they are exposed to different network conditions, the model parameters obtained after training may differ.
The sample network information is obtained by probing the sample network environment while streaming data, and may include at least one network parameter of recently transmitted packets, such as packet loss rate, time delay, delay jitter, and reception rate. This application does not specifically limit how the packet loss rate, time delay, delay jitter, and reception rate are obtained.
The sample available bandwidth is predicted by the sub-model from the sample network information; data is then streamed at the sample available bandwidth, the sample network information is updated, and the model parameters of the sub-model are adjusted based on an evaluation of the updated sample network information, until the sub-model satisfies the training condition and training completes. In practice, the sub-models may be trained in real time during data streaming by reinforcement learning, with the initial parameter values of each sub-model preset or chosen at random. Alternatively, sample available bandwidths and sample network information may be prepared in advance and the sub-models trained by supervised learning.
The network information in the embodiments of the present application may include one or more network parameters such as bandwidth, packet loss rate, delay, delay jitter, and reception rate, and, depending on the practical application, other types of network parameters such as buffer size; this application places no specific limitation here.
102: and determining a plurality of levels of the sub-models which are divided according to the network conditions of the corresponding sample network environments.
The plurality of sub-models may be classified into a plurality of levels according to the quality of network conditions of the sample network environment. The network condition may be evaluated and judged by one or more network parameters in the sample network information, for example, the receiving rate may be preset to a receiving rate value range corresponding to different levels, or the levels may be classified according to the sorting results of the receiving rates corresponding to the multiple sample network environments, for example, it may be assumed that three levels are respectively good, medium and bad, where for example, a sub-model obtained by training a sample network environment with a receiving rate of 20% in the first 20% belongs to a good level, a sub-model obtained by training a sample network environment with a receiving rate of 20% -80% belongs to a medium level, and a sub-model obtained by training a sample network environment with a spitting rate of 20% in the last 20% belongs to a good level.
As one option, the network condition of a sample network environment may be determined from the sample network information detected during model training; that is, levels may be assigned from the sample network information collected while each sub-model trains, for example from the reception rate observed during training. In this case, after training the plurality of sub-models using the sample available bandwidth and sample network information for live data transmission in the plurality of sample network environments, the method may further include:
dividing the plurality of sub-models into a plurality of levels according to the network conditions of their respective sample network environments.
As another option, the network condition of a sample network environment may be determined from sample network information detected over a predetermined historical period before sub-model training. According to the network conditions of the different network environments, at least one network environment may be selected for each of the plurality of levels, the sub-models deployed there, and each trained during data streaming. In this case, before training the plurality of sub-models using the sample available bandwidth and sample network information for data streaming in the plurality of sample network environments, the method may further include:
determining the respective levels of the plurality of sample network environments according to their network information over a preset historical period;
and the training of the plurality of sub-models using the sample available bandwidth and sample network information for live data transmission in the plurality of sample network environments may include:
training sub-models of the plurality of levels using the sample available bandwidth and sample network information employed for data streaming in the sample network environments of the different levels.
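The percentile-based ranking described above (top 20% of reception rates good, bottom 20% poor) can be sketched as follows; the function name, thresholds, and level labels are illustrative only.

```python
def assign_levels(recv_rates, cut_good=0.2, cut_poor=0.8):
    """Assign each sample network environment a level by ranking its
    reception rate against the others (highest rate ranks first)."""
    order = sorted(range(len(recv_rates)),
                   key=lambda i: recv_rates[i], reverse=True)
    n = len(recv_rates)
    levels = [None] * n
    for rank, idx in enumerate(order):
        frac = rank / n          # fraction of environments ranked above this one
        if frac < cut_good:
            levels[idx] = "good"
        elif frac < cut_poor:
            levels[idx] = "medium"
        else:
            levels[idx] = "poor"
    return levels
```

The same ranking works whether the reception rates come from the training process itself (the first option above) or from a preset historical period (the second option).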
103: and carrying out aggregation treatment on at least one sub-model of the same level to obtain a prediction model.
The prediction models corresponding to the multiple levels are used for selecting a target prediction model according to the network information of the client so as to predict the target available bandwidth of the client by using the target prediction model.
And carrying out aggregation treatment on at least one sub-model of the same level to obtain a prediction model of the level, thereby obtaining a plurality of prediction models of the levels. The predictive models of the multiple levels can be solidified in the using process without real-time training.
In this embodiment, model training and aggregate learning are performed in a plurality of sample network environments, so as to obtain a plurality of levels of prediction models, so that an appropriate target prediction model can be selected according to network information of a client to perform available bandwidth prediction, and the accuracy of available bandwidth prediction is improved.
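The patent does not specify the aggregation rule. A common choice for combining identically structured models, as in federated averaging, is element-wise parameter averaging, which a minimal sketch might implement like this:

```python
def aggregate_level(submodels):
    """Average the parameters of the sub-models of one level.

    submodels: list of {param_name: [float, ...]} dicts with identical
    structure (same names and shapes), as guaranteed by the shared
    model structure described in the text. Averaging itself is our
    assumption, not a rule stated in the patent.
    """
    n = len(submodels)
    merged = {}
    for name in submodels[0]:
        columns = zip(*(m[name] for m in submodels))   # align each parameter position
        merged[name] = [sum(col) / n for col in columns]
    return merged
```

A frozen per-level prediction model could then be built as `{level: aggregate_level(models) for level, models in submodels_by_level.items()}`, with `submodels_by_level` grouping the trained sub-models by their assigned level.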
In some embodiments, training the plurality of sub-models using the sample available bandwidth and sample network characteristics employed by the plurality of first sample clients during at least one data streaming process, respectively, may include:
for any one first sample client, in the process of carrying out any one data streaming, iteratively executing the following training operation until the used sub-model meets the first training condition to obtain a trained sub-model; the training operation may include:
detecting sample network information of a first sample client;
according to the sample network information, adjusting model parameters of the used sub-model;
according to the sample network information, utilizing sub-model prediction to obtain available bandwidth of a sample;
the transmission available bandwidth of the data streaming is adjusted according to the sample available bandwidth, and the first sample client continues the data streaming using the adjusted transmission available bandwidth.
When the sub-model meets the first training condition, the model can be considered to have converged, yielding a fully trained sub-model. In one implementation, the first training condition may be that the number of iterations of the training operation reaches a preset number, or that the total training duration exceeds a preset duration; in another implementation, it may be that the currently detected sample network information meets a preset requirement.
Adjusting the model parameters may specifically involve evaluating the currently used available bandwidth according to the sample network information and adjusting it based on the evaluation result. For example, the currently used available bandwidth may be evaluated according to the currently detected sample network information to determine whether the network condition has improved or worsened, which in turn determines how the model parameters should be adjusted; the model may specifically be trained and adjusted in a reinforcement learning manner. Alternatively, an adjustment rule may be set in advance, and the model parameters may be adjusted according to the adjustment methods that the rule specifies for different pieces of network information.
With the above training approach, model training can be performed in real time during a client's actual data streaming. This real-time training yields a sub-model specific to a single client, which that client can use to predict the available bandwidth accurately. However, real-time training consumes considerable resources and cannot be extended to all clients. By adopting the technical scheme of the embodiments of the present application, the sub-models obtained by real-time training are aggregated into prediction models, which can serve as fixed models applicable to all clients; using a model of the appropriate level among the multiple levels simultaneously ensures the accuracy of available bandwidth prediction.
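The real-time training operation described above can be sketched as follows. This is an illustrative assumption, not the patent's actual model: the single-parameter "model", the heuristic update rule, and the `loss_rate`/`recv_rate` field names are all placeholders for demonstration.

```python
class SubModel:
    """Minimal stand-in for a bandwidth-prediction sub-model (hypothetical)."""

    def __init__(self):
        self.weight = 1.0  # single scalar "model parameter" for illustration

    def adjust(self, net_info):
        # Crude reinforcement-style update: shrink the estimate when packet
        # loss rises, grow it slowly when the network looks healthy.
        if net_info["loss_rate"] > 0.05:
            self.weight *= 0.9
        else:
            self.weight *= 1.02

    def predict(self, net_info):
        # Predicted available bandwidth scales the measured receive rate.
        return self.weight * net_info["recv_rate"]


def train_submodel(probe, max_iters=100):
    """Iterate the training operation until the first training condition
    (here simply an iteration cap) is met."""
    model = SubModel()
    bandwidth = None
    for _ in range(max_iters):
        net_info = probe()                   # 1) detect sample network information
        model.adjust(net_info)               # 2) adjust model parameters
        bandwidth = model.predict(net_info)  # 3) predict sample available bandwidth
        # 4) the caller would now continue streaming at `bandwidth`
    return model, bandwidth
```

In use, `probe` would be the client's actual network-detection routine; here a fixed dictionary stands in for a probe result.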
In some embodiments, at least one sub-model of the same level may be aggregated in a federated learning manner to obtain the prediction model of that level, thereby obtaining prediction models of multiple levels. The prediction models of the multiple levels can be fixed during use without real-time training.
As an alternative, aggregating at least one sub-model of the same level to obtain the prediction model may include:
weighting model parameters of at least one sub-model of the same level to obtain a first aggregation parameter;
an available bandwidth prediction model corresponding to the level is generated based on the first aggregation parameter.
For example, the model parameters of the at least one sub-model of the same level may be averaged, or weighted-averaged, to obtain the first aggregation parameter, where the weight of each sub-model's parameters may be preset, or may be set according to the network condition corresponding to that sub-model.
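A minimal sketch of this weighted aggregation, with plain Python lists standing in for real model parameter tensors (passing no weights gives the plain average):

```python
def aggregate_first_parameter(param_sets, weights=None):
    """Weighted-average the parameter vectors of same-level sub-models to
    obtain the first aggregation parameter."""
    n = len(param_sets)
    if weights is None:
        weights = [1.0 / n] * n          # plain average by default
    total = sum(weights)
    dim = len(param_sets[0])
    # Element-wise weighted mean across all sub-models' parameters.
    return [sum(w * p[i] for w, p in zip(weights, param_sets)) / total
            for i in range(dim)]
```

The level's prediction model would then be instantiated from the returned aggregation parameter.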
As another alternative, in the case that the server may establish network connection with a plurality of clients, the aggregating at least one sub-model of the same level to obtain the available bandwidth prediction model includes:
distributing a plurality of sub-models of the same level to a plurality of second sample clients, each of which retrains its received sub-model using the sample available bandwidth predicted by that sub-model and the correspondingly detected sample network information;
iteratively performing the following updating operation: weighting the model parameters updated by the plurality of second sample clients to obtain a second aggregation parameter; and updating the model parameters of the sub-models in the plurality of second sample clients with the second aggregation parameter, respectively;
and, when the plurality of sub-models meet the second training condition, generating the prediction model corresponding to the level based on the most recently obtained second aggregation parameter.
The sub-models are retrained in the same manner as the sub-model training described above; specifically, each sample client trains the sub-model it received.
The plurality of second sample clients may specifically be first sample clients corresponding to the plurality of sub-models of the same level, or may be clients selected randomly.
The second training condition may be, for example, that the number of iterations reaches a certain number of times or that the total retraining duration is greater than a certain duration, etc.
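The distribute–retrain–aggregate cycle above can be sketched as follows; a fixed round count stands in for the second training condition, plain averaging stands in for the weighting, and `local_update` stands in for each second sample client's retraining step — all are illustrative assumptions.

```python
def federated_rounds(client_params, local_update, rounds=5):
    """Each round: every second sample client updates its local parameters,
    the server averages them into a second aggregation parameter, then
    pushes that parameter back to overwrite every client's sub-model."""
    params = [list(p) for p in client_params]
    agg = None
    for _ in range(rounds):                      # second training condition:
        params = [local_update(p) for p in params]   # a fixed round count
        dim = len(params[0])
        agg = [sum(p[i] for p in params) / len(params) for i in range(dim)]
        params = [list(agg) for _ in params]     # update clients' sub-models
    return agg   # basis for generating the level's prediction model
```

With a trivial `local_update` that increments every parameter, each round raises the aggregate by one, which makes the round-by-round behavior easy to check.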
The prediction model obtained by training according to the embodiment shown in fig. 1a may be used to predict the available bandwidth during the data streaming process of any client, as shown in fig. 2, which is a flowchart of one embodiment of a bandwidth prediction method provided in the embodiment of the present application, where the method may include the following steps:
201: network information of a network used by a client to stream data is determined.
The client may be any client that establishes a network connection with the server, including a sample client that participates in training, and the like. The network information is obtained by network detection between the client and the server for streaming data, and may include at least one network parameter of packet loss rate, delay jitter, and receiving rate corresponding to a plurality of recently transmitted data packets. The method for obtaining the packet loss rate, the time delay jitter and the receiving rate is not particularly limited in this application.
The network information of the client may be periodically detected every certain time period. Of course, the network information of the client may also be detected in real time or after each adjustment of the available bandwidth of the transmission.
202: and determining a target prediction model from the prediction models respectively corresponding to the multiple levels according to the network information.
Wherein the predictive model for each level is obtained from at least one sub-model aggregation process for that level; each sub-model is obtained by training according to the available bandwidth of the sample and the sample network information adopted by the data streaming in the sample network environment, and the specific training mode of the sub-model and the specific obtaining mode of the prediction model can be detailed in any embodiment of the model generating method, and the detailed description is not repeated here.
The target prediction model may be determined from the prediction models respectively corresponding to the multiple levels, specifically in combination with one or more network parameters in the network information.
The technical solution in this embodiment may be applied to a system as shown in fig. 1b, and may be executed by the server, or, of course, by the client. When executed by the client, step 202 may consist of sending the network information to the server, thereby triggering the server to determine, according to the network information, a target prediction model from the prediction models corresponding to the multiple levels. Alternatively, the prediction models of the multiple levels may be preconfigured on the client; specifically, the server may issue the prediction models of the multiple levels to the client.
In addition, when executed by the server, the server may also issue the target prediction model to the client when the client streams data to the server, so that the client may perform the operation of step 203 using the target prediction model.
203: and predicting the available bandwidth of the target by using the target prediction model and the network information.
One or more network parameters in the network information, such as packet loss rate, delay jitter, bandwidth, and/or receiving rate obtained by detecting a recently transmitted data packet, may be specifically input into the target prediction model to predict and obtain the target available bandwidth.
Wherein the target available bandwidth may be used to indicate a coding rate and/or a transmission rate of the data. Thus, as a further embodiment, the method may further comprise:
adjusting the coding rate and/or the sending rate according to the target available bandwidth; the coding rate is used to code data and the sending rate is used to stream data.
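A minimal sketch of deriving the coding rate and sending rate from the predicted target available bandwidth; the 10% headroom and the bitrate floor are illustrative assumptions, not values from the text.

```python
def apply_target_bandwidth(target_bw, headroom=0.9, min_bitrate=300_000):
    """Derive an encoding bitrate and a sending rate (both in bit/s) from
    the predicted target available bandwidth."""
    send_rate = target_bw
    # Encode slightly below the available bandwidth to leave room for
    # retransmissions and protocol overhead, but never below a floor.
    encode_rate = max(int(target_bw * headroom), min_bitrate)
    return encode_rate, send_rate
```

The client would feed `encode_rate` to its encoder and pace outgoing packets at `send_rate`.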
The client may stream data to the server. In the case where the technical scheme of this embodiment is executed by the server, the server may send the predicted target available bandwidth to the client; or, of course, the server may send the determined target prediction model to the client, and the client predicts the target available bandwidth using the target prediction model and the network information.
Another implementation scenario of data streaming between the server and the client may be that the server streams data to the client, and in the case where the technical solution of this embodiment may be executed by the client, the client may send the target available bandwidth obtained by prediction to the server. The server may stream data to the client, etc., specifically in accordance with the target available bandwidth.
In some embodiments, determining the target prediction model from the available bandwidth prediction models respectively corresponding to the plurality of levels according to the network information of the client may include:
calculating network evaluation parameters from the network information of the client;
according to the network evaluation parameters, when the client side is determined to meet the first switching condition, taking a higher-level prediction model of the currently used prediction model as a target prediction model; the network utilization rate of the sample network environment corresponding to the prediction model with higher level is higher;
and when the client side meets the second switching condition according to the network evaluation parameters, taking a lower-level prediction model of the currently used prediction model as a target prediction model.
The mid-level prediction model among the multiple levels may be adopted as the prediction model the client initially uses; alternatively, the highest-level or lowest-level prediction model may be used, or one may be selected at random.
The higher the network utilization, the better the network condition; a prediction model aggregated from sub-models trained under such conditions is suitable for predicting available bandwidth when the network condition is good, whereas a prediction model corresponding to a sample network environment with lower network utilization is suitable for predicting available bandwidth when the network condition is poor. Network utilization may specifically refer to bandwidth utilization. In the present application, network evaluation parameters are calculated from the network information, and model selection and switching are performed based on those parameters, so that the model can be switched dynamically according to network conditions, ensuring the accuracy of available bandwidth prediction.
To keep the available bandwidth prediction smooth, the network evaluation parameters may be combined with the model switching interval duration to determine whether the client meets the first switching condition or the second switching condition.
The first switching condition characterizes that the current network condition is relatively good, so a higher-level prediction model can be used; the second switching condition characterizes that the current network condition is relatively poor, so a lower-level prediction model can be used.
Of course, other manners may be used to determine the target prediction model, for example, a determination rule may be preset, and network information or network evaluation parameters corresponding to the prediction models of different levels may be specified, so that the target prediction model corresponding to the network information may be determined by the determination rule.
In some embodiments, the network information may include the delay jitter of a plurality of recently transmitted data packets, the packet loss rate of a plurality of recently transmitted data packets, and the like. Thus, calculating the network evaluation parameters from the network information of the client may include:
determining the time delay jitter characteristics of the client according to the time delay jitter of a plurality of data packets which are transmitted recently;
under the condition that the delay jitter characteristic meets the overload condition, determining that the client is in an overload state;
under the condition that the time delay jitter characteristics obtained by continuous multiple times of calculation meet the low-load condition, determining that the client is in a low-load state;
and counting the packet loss rate characteristics of the client according to the packet loss rates of the plurality of recently transmitted data packets.
The first switching condition may include, for example, the client being in a low-load state and the packet loss rate characteristic being smaller than a first threshold, and the second switching condition may include the client being in an overload state and the packet loss rate characteristic being greater than a second threshold.
If the delay jitter feature does not meet the overload condition, and the delay jitter features obtained in multiple consecutive calculations do not meet the low-load condition, the client is in a normal state.
In addition, in order to ensure smoothness of the predicted available bandwidth, the first switching condition may include that the client is in a low-load state and the packet loss rate characteristic is smaller than a first threshold, and the currently used prediction model is not the highest level, and the model switching interval duration is smaller than the first duration.
The second switching condition may include that the client is in an overload state and the packet loss rate characteristic is greater than a second threshold, and the currently used predictive model is not at a lowest level, and the model switching interval time is longer than a second duration.
The first duration may be longer than the second duration, and in combination with the actual application situation, the first duration may be set to 1.5 seconds, for example, and the second duration may be set to 0.5 seconds, for example.
That is, when the client is in a low-load state, the packet loss rate feature is smaller than the first threshold, the currently used prediction model is not at the highest level, and the model switching interval duration is smaller than the first duration, the prediction model one level higher than the currently used one is taken as the target prediction model (the higher the level, the higher the network utilization of the corresponding sample network environment). If the currently used prediction model is already at the highest level, or the model switching interval duration is greater than the first duration, or the client is not in a low-load state, or the packet loss rate feature is greater than the first threshold, the currently used model continues to serve as the target prediction model.
When the client is in an overload state, the packet loss rate feature is greater than the second threshold, the currently used prediction model is not at the lowest level, and the model switching interval duration is greater than the second duration, the prediction model one level lower than the currently used one is taken as the target prediction model. If the currently used prediction model is already at the lowest level, or the model switching interval duration is smaller than the second duration, or the packet loss rate feature is smaller than the second threshold, or the client is not in an overload state, the currently used model continues to serve as the target prediction model.
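The switching logic above can be sketched as follows, with levels numbered 0 to n_levels-1 from lowest to highest; the interval comparisons follow the conditions as stated in the text (interval smaller than the first duration to move up, greater than the second duration to move down), and all parameter names are illustrative.

```python
def select_target_level(level, n_levels, overloaded, low_load, loss_feature,
                        t1, t2, switch_interval, dur1=1.5, dur2=0.5):
    """Pick the target prediction model's level: one level up under the
    first switching condition, one level down under the second, otherwise
    keep the current level."""
    highest = n_levels - 1
    # First switching condition: low load, low loss, not already at the
    # highest level, and switching interval below the first duration.
    if (low_load and loss_feature < t1 and level < highest
            and switch_interval < dur1):
        return level + 1
    # Second switching condition: overload, high loss, not already at the
    # lowest level, and switching interval above the second duration.
    if (overloaded and loss_feature > t2 and level > 0
            and switch_interval > dur2):
        return level - 1
    return level  # continue using the current model
```

The default durations 1.5 s and 0.5 s are the example values given above.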
Optionally, the delay jitter feature of the client may be determined by performing data fusion on the delay jitter of the plurality of recently transmitted data packets.
One way of fusing may be, for example, to take the average value of the delay jitter of the most recently transmitted packets as the delay jitter characteristic of the client.
To improve accuracy, another fusion approach calculates the feature using, for example, the following fusion formula:
where m_i denotes the delay jitter feature and α_k denotes the delay jitter of the k-th most recently transmitted data packet; k takes the values 1, 2, …, M, where M denotes the total number of data packets.
The multiplicative attenuation in the fusion formula enables the delay jitter feature of the client to be detected accurately, and it accurately characterizes the client's delay jitter over the most recent period of time.
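Since the fusion formula itself is not reproduced here, the following is a hypothetical sketch of a multiplicative-attenuation fusion consistent with the description: recent packets' delay jitter is weighted more heavily via a geometric decay. The decay factor is an assumption, not a value from the text.

```python
def jitter_feature(jitters, decay=0.5):
    """Fuse the delay jitter of recently transmitted packets into a single
    feature m_i, attenuating older packets multiplicatively (hypothetical)."""
    m, weight_sum = 0.0, 0.0
    # jitters[-1] is the most recently transmitted packet (age 0).
    for age, alpha_k in enumerate(reversed(jitters)):
        w = decay ** age          # older packets attenuated multiplicatively
        m += w * alpha_k
        weight_sum += w
    return m / weight_sum if weight_sum else 0.0
```

Compared with a plain average, a jitter spike on the newest packet dominates the feature, which matches the stated goal of capturing the most recent period accurately.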
Wherein the overload condition may mean that the delay jitter characteristic is greater than a first threshold; the low load condition may be that the delay jitter characteristic is less than a second threshold.
The second threshold value may be, for example, 0. The first threshold may be preset and may be dynamically adjusted in combination with actual network conditions. Thus, in some embodiments, the first threshold may be updated as follows:
Calculating the difference value between the current time delay jitter characteristic and the current first threshold value;
based on the difference and the update interval duration, a first threshold after the update is obtained is calculated.
Furthermore, the updated first threshold may be calculated based on the difference and the update interval duration, specifically within the constraints of a maximum threshold and a minimum threshold. That is, if the calculated value is greater than the maximum threshold, the maximum threshold is used as the updated first threshold; if the calculated value is smaller than the minimum threshold, the minimum threshold is used; otherwise, the calculated value itself is used as the first threshold.
The updated first threshold may be calculated by multiplying the difference by the update interval duration and a predetermined coefficient. Alternatively, the updated first threshold may be calculated specifically in the following threshold calculation manner:
where Threshold denotes the first threshold, n denotes the predetermined coefficient, and Δt denotes the update interval duration.
The value of the predetermined coefficient n may be preset, where the value used when the absolute value of the delay jitter feature is greater than the first threshold is smaller than the value used when it is smaller than the first threshold. For example, in a live webcast scenario with high delay requirements, the value may be 0.0087 when the absolute value of the delay jitter feature is greater than the first threshold, and 0.039 when it is smaller than the first threshold.
By dynamically adjusting the first threshold, the algorithm can be prevented from being overly sensitive in detecting network information, or from being insufficiently sensitive and thereby allowing network congestion.
Wherein the first threshold may be updated after determining that the client is in an overload state or in a low-load state, etc.
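A hedged sketch of this dynamic threshold update, assuming the form "threshold += coefficient × update interval × difference" with clamping, since the exact formula is not reproduced; the coefficient values follow the example given above.

```python
def update_first_threshold(threshold, jitter_feat, dt,
                           t_min=1.0, t_max=100.0):
    """Move the first threshold toward the current delay jitter feature,
    clamped to [t_min, t_max] (the bounds here are assumptions)."""
    # Adapt slowly when jitter already exceeds the threshold, faster when
    # it is below it (example coefficients from the text).
    n = 0.0087 if abs(jitter_feat) > threshold else 0.039
    diff = jitter_feat - threshold        # difference driving the update
    new_t = threshold + n * dt * diff     # difference x interval x coefficient
    return min(max(new_t, t_min), t_max)  # maximum/minimum threshold constraint
```

Using the smaller coefficient above the threshold keeps the detector from chasing jitter spikes, while the larger coefficient below it lets the threshold relax quickly once the network calms down.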
In some embodiments, the packet loss rate feature of the client may be obtained by performing data fusion on the packet loss rates of the plurality of recently transmitted data packets; for example, the average packet loss rate of the recently transmitted data packets may be taken as the packet loss rate feature of the client.
The plurality of data packets participating in the calculation of the packet loss rate characteristic may be the same as or different from the plurality of data packets participating in the calculation of the delay jitter characteristic.
As can be seen from the foregoing related description, the technical solution of the embodiment of the present application may be applied to any scenario in which data is streamed between two communication parties through a network, and the technical solution of the embodiment of the present application will be described mainly by taking a live network scenario as an example.
In a live webcast scenario, the technical solution of the embodiments of the present application may be applied to a system architecture as shown in fig. 3, where the system may be composed of an anchor end 301, a server end 302, and a viewer end 303. Relative to the server end 302, the anchor end 301 and the viewer end 303 are both live broadcast ends. During each live broadcast, the anchor end 301 is responsible for collecting in real time the sound and/or picture of the live scene where the anchor is located, obtaining live broadcast data, and uploading the live broadcast data to the server end 302. Live broadcast data of different anchors can be distinguished by live broadcast room; an anchor may first apply to the server end 302 for a live broadcast room through the anchor end 301, and then record and upload live broadcast data in real time. A user may request to enter a live broadcast room through the viewer end 303, and the server end 302 may send the live broadcast data of that room to the viewer end 303 so that the viewer end 303 can play the live broadcast content. As can be seen from the above description, in a live webcast scenario, live broadcast data is transmitted in a streaming manner both between the anchor end and the server end and between the server end and the viewer end. The live broadcast data may specifically refer to a collected media stream such as a video stream, an audio stream, or an audio-video stream.
The viewer end 303 may be configured in an electronic device such as a mobile phone, tablet computer, computer, or smart watch; the server end 302 may be implemented using a CDN (Content Delivery Network) system; and the anchor end 301 may be an electronic device having an acquisition function and an OBS (Open Broadcaster Software, open-source live broadcast software) push function, such as a smart device like a mobile phone or tablet with a camera. Of course, the present application is not limited to implementing network live broadcast using the above live broadcast technical scheme. In addition, as will be appreciated by those skilled in the art, based on the available bandwidth, the live data may need to be encoded, transcoded, compressed, etc. before being uploaded to the server end; correspondingly, the viewer end may need to decode, decompress, etc. the live data before playing it.
The viewer end and the anchor end may be independent application programs, or may be functional modules integrated in other application programs.
Because the requirements on transmission delay and video playing quality are high in a live webcast scenario, an appropriate available bandwidth helps improve video playing quality under different network conditions. For the system architecture diagram shown in fig. 3, fig. 4 is a flowchart of another embodiment of a model generating method provided in an embodiment of the present application. The technical solution of this embodiment is generally executed by the server end, and the method may include the following steps:
401: and training a plurality of sub-models by utilizing the sample available bandwidth and sample network information adopted for respectively carrying out live data transmission in a plurality of sample network environments.
The network environment between each first sample live broadcast end and the service end can be used as a sample network environment, or the network environment used by each first sample live broadcast end and the service end for live broadcast data transmission at each time can be used as a sample network environment.
402: and determining a plurality of levels of the sub-models which are divided according to the network conditions of the corresponding sample network environments.
The grading manner may be specifically described in step 102 in the embodiment shown in fig. 1, and will not be described herein.
403: aggregating at least one sub-model of the same level to obtain a prediction model.
The prediction models corresponding to the multiple levels are used for selecting a target prediction model according to the network information of the live broadcast end so as to predict the target available bandwidth of the live broadcast end by using the target prediction model.
The difference between this embodiment and the embodiment shown in fig. 1a is that the sample network environment is a network environment in a live network scenario, and the prediction model is used to predict the available bandwidth corresponding to the live network, and other identical or similar steps may be detailed in the embodiment shown in fig. 1a, and will not be repeated here.
In practical application, the live broadcast end may refer to any one of the anchor ends, and the corresponding sample network environment may be a network environment between the first sample anchor end and the server end or may be a network environment between the first sample anchor end and the server end in each live broadcast data transmission process.
Of course, the live broadcast end may also refer to any one of the audience ends, and the corresponding sample network environment may be a network environment between the first sample audience end and the service end or may be a network environment between the first sample audience end and the service end in each live broadcast data transmission process.
Thus, in some embodiments, training the plurality of sub-models using the sample available bandwidth and sample network characteristics employed for live data transmission in the plurality of sample network environments, respectively, comprises:
and training a plurality of sub-models by utilizing the available sample bandwidths and the sample network characteristics adopted by the plurality of first sample live terminals in at least one live data transmission process.
As an alternative, training the plurality of submodels by using the available bandwidths of samples and the network characteristics of samples adopted by the plurality of first sample live terminals in at least one live data transmission process respectively may include:
For any one first sample live broadcast end, in the process of any one live data transmission, iteratively performing the following training operation until the sub-model in use meets the first training condition, to obtain a fully trained sub-model; the training operation may include: detecting sample network information of the first sample live broadcast end; adjusting the model parameters of the sub-model in use according to the sample network information; predicting the sample available bandwidth using the sub-model according to the sample network information; and adjusting the transmission available bandwidth of the live broadcast data according to the sample available bandwidth.
When the sub-model meets the first training condition, the model can be considered to have converged, yielding a fully trained sub-model. In one implementation, the first training condition may be that the number of iterations of the training operation reaches a preset number, or that the total training duration exceeds a preset duration; in another implementation, it may be that the currently detected sample network information meets a preset requirement.
Adjusting the model parameters may specifically involve evaluating the currently used available bandwidth according to the sample network information and adjusting it based on the evaluation result. For example, the currently used available bandwidth may be evaluated according to the currently detected sample network information to determine whether the network condition has improved or worsened, which in turn determines how the model parameters should be adjusted; the model may specifically be trained and adjusted in a reinforcement learning manner. Alternatively, an adjustment rule may be set in advance, and the model parameters may be adjusted according to the adjustment methods that the rule specifies for different pieces of network information.
In some embodiments, at least one sub-model of the same level may be aggregated in a federated learning manner to obtain the prediction model of that level, thereby obtaining prediction models of multiple levels. The prediction models of the multiple levels can be fixed during use without real-time training.
As an alternative, aggregating at least one sub-model of the same level to obtain an available bandwidth prediction model includes: weighting model parameters of at least one sub-model of the same level to obtain a first aggregation parameter; an available bandwidth prediction model corresponding to the level is generated based on the first aggregation parameter.
As another alternative, aggregating at least one sub-model of the same level to obtain the available bandwidth prediction model may include:
distributing a plurality of sub-models of the same level to a plurality of second sample live broadcast ends, each of which retrains its received sub-model using the sample available bandwidth predicted by that sub-model and the correspondingly detected sample network information;
weighting the model parameters updated by the plurality of second sample live broadcast ends to obtain a second aggregation parameter;
respectively updating the model parameters of the sub-models in the plurality of second sample live broadcast ends with the second aggregation parameter;
and, when the plurality of sub-models meet the second training condition, generating the available bandwidth prediction model corresponding to the level based on the most recently obtained second aggregation parameter.
The retraining of a sub-model proceeds in the same manner as the training of the sub-model described above; specifically, each sample live broadcast end may train the sub-model it received.
The plurality of second sample live broadcast ends may specifically be first sample live broadcast ends corresponding to the plurality of submodels of the same level respectively, or may be randomly selected live broadcast ends, or the like.
The second training condition may be, for example, that the number of iterations reaches a certain number of times or that the total retraining duration is greater than a certain duration, etc.
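The second aggregation option above can be sketched as iterative federated rounds: distribute, retrain locally, weight-average, push the aggregate back, and stop once the second training condition is met. In this sketch, local retraining is mocked as a small parameter drift, and the second training condition is taken, as in one of the examples above, to be a fixed round count; both are illustrative assumptions.

```python
def local_retrain(params):
    # Stand-in for a second sample live broadcast end retraining its
    # sub-model with its own predicted bandwidths and detected network
    # information; real retraining would run the RL loop above.
    return [p + 0.1 for p in params]

def federated_rounds(client_params, max_rounds=3):
    for _ in range(max_rounds):                       # second training condition
        updated = [local_retrain(p) for p in client_params]
        n = len(updated)
        second_agg = [sum(p[i] for p in updated) / n  # equal-weight average
                      for i in range(len(updated[0]))]
        client_params = [list(second_agg) for _ in updated]  # push back
    return second_agg            # parameters of the level's prediction model

model = federated_rounds([[0.0, 1.0], [2.0, 3.0]])
```

Each round corresponds to one distribute-retrain-aggregate-update cycle of the described alternative.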
Corresponding to the system architecture shown in fig. 3, fig. 5 is a flowchart of another embodiment of a bandwidth prediction method provided in an embodiment of the present application, where the method may include the following steps:
501: network information of a network used for transmitting live data by a live terminal is determined.
The live broadcast end can be any live broadcast end which establishes network connection with the service end, and the live broadcast end comprises a sample live broadcast end which participates in training and the like.
The network information may be detected periodically at certain time intervals. Of course, the network information may also be detected in real time, or after each adjustment of the available transmission bandwidth.
The network information is obtained by detecting the network used for transmitting live broadcast data between the live broadcast end and the service end, and may include at least one of the following network parameters corresponding to a plurality of recently transmitted data packets: packet loss rate, delay jitter, and receiving rate.
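As an illustration only, the detected network information might be carried in a structure like the following; the field names and the notion of a fixed window of recent packets are assumptions, not specified by this embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class NetworkInfo:
    """Network information detected for the live broadcast connection."""
    loss_rates: list = field(default_factory=list)   # per-window packet loss rates
    jitters_ms: list = field(default_factory=list)   # delay jitter of recent packets
    recv_rate_mbps: float = 0.0                      # receiving rate

info = NetworkInfo(loss_rates=[0.02, 0.01],
                   jitters_ms=[3.0, 5.0],
                   recv_rate_mbps=6.5)
```

Any one of the three parameter kinds can be absent, matching the "at least one network parameter" wording above.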
502: and determining a target prediction model from the prediction models respectively corresponding to the multiple levels according to the network information.
Wherein the prediction model of each level is obtained by at least one sub-model aggregation process corresponding to the level; each sub-model is obtained by training according to the available bandwidth of the sample and the sample network information adopted for transmitting the live data in the sample network environment, and the specific training mode can be detailed in any embodiment of the model generating method, which is not described herein.
503: and predicting the available bandwidth of the target by using the target prediction model and the network information.
One or more network parameters in the network information, such as packet loss rate, delay jitter, bandwidth, and/or receiving rate obtained by detecting a recently transmitted data packet, may be specifically input into the target prediction model to predict and obtain the target available bandwidth.
Wherein the target available bandwidth may be used to indicate an adjustment of the coding rate and/or the transmission rate. Thus, as a further embodiment, the method may further comprise:
and adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth. Thereby, the live broadcast data can be encoded according to the coding rate, and/or sent according to the sending rate.
The available bandwidth can be used as an upper limit on the coding rate and the sending rate, and the coding rate and/or the sending rate can be adjusted under the constraint of this upper limit in combination with different practical application conditions.
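A minimal sketch of using the predicted target available bandwidth as the upper limit of the coding rate and sending rate; the 0.9 headroom factor stands in for the "different practical application conditions" and is an assumption, not part of the described method.

```python
def adjust_rates(target_bw_mbps: float, headroom: float = 0.9):
    """Return (coding_rate, send_rate) constrained by the predicted
    available bandwidth used as an upper limit."""
    cap = target_bw_mbps
    coding = min(cap, headroom * cap)   # leave margin for retransmissions
    send = min(cap, coding)             # never send faster than the cap
    return coding, send

coding, send = adjust_rates(10.0)
```

The live broadcast data would then be encoded at `coding` and transmitted at `send`, both guaranteed not to exceed the predicted available bandwidth.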
The technical scheme of the embodiment may be executed by the server side or the live broadcast side, and in the case of being executed by the live broadcast side, the live broadcast side may configure the multiple-level prediction model, and specifically, the server side may issue the multiple-level prediction model to the live broadcast side.
In the case of execution by the server, the server may issue the target prediction model to the live broadcast end after determining it, and step 503 may specifically be triggering the live broadcast end to predict its target available bandwidth using the target prediction model and the network information. Of course, where the live broadcast end is the anchor end, the server may instead predict the target available bandwidth itself and issue it to the anchor end, which then adjusts the coding rate and/or sending rate of the live broadcast data according to the target available bandwidth. Alternatively, where the live broadcast end is the anchor end, the server may predict the target available bandwidth, adjust the coding rate and/or sending rate of the live broadcast data according to it, and send the coding rate and/or sending rate to the anchor end, which encodes the live broadcast data according to the coding rate and/or sends the live broadcast data according to the sending rate.
In addition, in the case that the live broadcast end is the audience end and the technical scheme of this embodiment is executed by the service end, the method may further include: adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth; and encoding the live broadcast data according to the coding rate, and/or transmitting the live broadcast data to the audience end according to the sending rate.
In some embodiments, according to the network information of the live side, determining the target prediction model from the available bandwidth prediction models respectively corresponding to the multiple levels may include:
calculating to obtain network evaluation parameters from network information of a live broadcast terminal;
according to the network evaluation parameters, when the live broadcast end meets the first switching condition, taking a higher-level prediction model of the currently used prediction model as a target prediction model; the network utilization rate of the sample network environment corresponding to the prediction model with higher level is higher;
and when the live broadcast end meets the second switching condition according to the network evaluation parameters, taking a lower-level prediction model of the currently used prediction model as a target prediction model.
A prediction model of a middle level among the multiple levels may be adopted as the initially used prediction model for the live broadcast end; alternatively, a level may be selected at random, or the highest or lowest level may be used.
In order to keep the available bandwidth prediction smooth, it may be determined whether the live end satisfies the first handover condition or the second handover condition by combining the network evaluation parameter and the model handover interval duration.
Wherein the first switching condition may characterize that the current network condition is relatively good, so that a higher-level prediction model may be used; the second switching condition characterizes that the current network condition is relatively poor, so that a lower-level prediction model may be used.
Of course, other manners may be used to determine the target prediction model, for example, a determination rule may be preset, and network information or network evaluation parameters corresponding to the prediction models of different levels may be specified, so that the target prediction model corresponding to the network information may be determined by the determination rule.
In some embodiments, the calculating the network evaluation parameter from the network information of the live end may include:
determining delay jitter characteristics of a live broadcast end according to delay jitter of a plurality of recently transmitted data packets;
under the condition that the delay jitter characteristic meets the overload condition, determining that the live broadcast terminal is in an overload state;
under the condition that the delay jitter characteristics obtained by continuous multiple times of calculation meet the low-load condition, determining that the live broadcast end is in a low-load state;
according to the packet loss rates included in the recently transmitted data packets, counting the packet loss rate characteristic of the live broadcast end;
the first switching condition comprises that the live broadcast terminal is in a low-load state and the packet loss rate characteristic is smaller than a first threshold value; the second switching condition includes that the live broadcast terminal is in an overload state and the packet loss rate characteristic is larger than a first threshold value.
If the delay jitter feature does not meet the overload condition, and the delay jitter features obtained by multiple consecutive calculations do not meet the low-load condition, the live broadcast end is in a normal state.
In addition, in order to ensure smoothness of the predicted available bandwidth, the first switching condition may include that the live broadcast end is in a low-load state, the packet loss rate characteristic is smaller than the first threshold, the currently used prediction model is not at the highest level, and the model switching interval duration is greater than a first duration.
The second switching condition may include that the live broadcast end is in an overload state, the packet loss rate characteristic is greater than a second threshold, the currently used prediction model is not at the lowest level, and the model switching interval duration is greater than a second duration.
In some embodiments, when determining that the live broadcast end meets the first switching condition according to the network evaluation parameter, taking a higher-level prediction model of the currently used prediction model as the target prediction model includes:
taking the prediction model one level higher than the currently used prediction model as the target prediction model under the conditions that the live broadcast end is in a low-load state, the packet loss rate characteristic is smaller than the first threshold, the currently used prediction model is not at the highest level, and the model switching interval duration is greater than the first duration;
when it is determined according to the network evaluation parameters that the live broadcast end meets the second switching condition, taking the prediction model one level lower than the currently used prediction model as the target prediction model may include:
taking the prediction model one level lower than the currently used prediction model as the target prediction model when the live broadcast end is in an overload state, the packet loss rate characteristic is greater than the second threshold, the currently used prediction model is not at the lowest level, and the model switching interval duration is greater than the second duration.
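The level-switching rules of this embodiment can be sketched as follows; the three-level model list, the threshold values, and the minimum switching interval durations are illustrative assumptions.

```python
LEVELS = ["bad", "medium", "good"]  # low level -> high level

def pick_level(level, low_load, overload, loss_feat, interval_s,
               thr1=0.05, thr2=0.10, dur1=30, dur2=30):
    """Return the level of the target prediction model given the
    network evaluation parameters and the switching interval."""
    idx = LEVELS.index(level)
    # First switching condition: low load, low loss, not at the
    # highest level, and enough time since the last switch.
    if low_load and loss_feat < thr1 and idx < len(LEVELS) - 1 and interval_s > dur1:
        return LEVELS[idx + 1]
    # Second switching condition: overload, high loss, not at the
    # lowest level, and enough time since the last switch.
    if overload and loss_feat > thr2 and idx > 0 and interval_s > dur2:
        return LEVELS[idx - 1]
    return level
```

The interval guard keeps the predicted available bandwidth smooth by preventing the live broadcast end from oscillating between adjacent levels.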
Optionally, data fusion may be performed on the delay jitters of a plurality of recently transmitted data packets to obtain the delay jitter feature of the live broadcast end, for example by calculating with the fusion formulas described in the corresponding embodiments above.
Wherein the overload condition may mean that the delay jitter characteristic is greater than a first threshold; the low load condition may be that the delay jitter characteristic is less than a second threshold.
The second threshold may be, for example, 0. The first threshold may be preset, and may be dynamically adjusted in combination with actual network conditions. Thus, in some embodiments, the first threshold may be updated as follows: calculating the difference between the current delay jitter feature and the current first threshold; and calculating the updated first threshold based on the difference and the update interval duration, for example in the threshold calculation manner described in the corresponding embodiment above.
In some embodiments, according to the packet loss rates included in the plurality of recently transmitted data packets, the packet loss rate characteristic of the live broadcast end may be counted as follows: the average of the packet loss rates of the recently transmitted data packets is taken as the packet loss rate characteristic of the live broadcast end. Here, the plurality of data packets participating in the calculation of the packet loss rate characteristic and the plurality of data packets participating in the calculation of the delay jitter feature may be the same or different.
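The network evaluation parameters described above can be sketched as follows. The exponentially weighted fusion of jitters and the threshold-update coefficient follow the general shapes described here (a fusion over recent jitters; a drift based on the difference and the update interval duration), but the exact formulas are given in other embodiments of the application, so these are assumptions.

```python
def fuse_jitter(jitters_ms, alpha=0.5):
    """Exponentially weighted fusion of recent delay jitters into the
    delay jitter feature of the live broadcast end."""
    feat = jitters_ms[0]
    for j in jitters_ms[1:]:
        feat = alpha * j + (1 - alpha) * feat
    return feat

def update_first_threshold(thr, jitter_feat, interval_s, k=0.01):
    """Drift the first threshold toward the current delay jitter
    feature, scaled by the update interval duration."""
    return thr + k * interval_s * (jitter_feat - thr)

def loss_feature(loss_rates):
    """Mean packet loss rate over recently transmitted data packets."""
    return sum(loss_rates) / len(loss_rates)
```

With these, the overload condition is `fuse_jitter(...) > thr1` and the low-load condition is `fuse_jitter(...) < thr2`, matching the description above.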
In an actual application, the live broadcast end may specifically be the anchor end, and in the case where the technical scheme of the embodiment shown in fig. 5 is executed by the service end, the method may further include: transmitting the target available bandwidth to the anchor end; the target available bandwidth is used to instruct the anchor end to encode and/or transmit live broadcast data.
In addition, in the case that the live broadcast end is the anchor end and the technical scheme of the embodiment shown in fig. 5 is executed by the service end, the method may further include:
according to the target available bandwidth, adjusting the coding rate and/or the sending rate of the live broadcast data;
transmitting the coding rate and/or the sending rate to the anchor end, so as to instruct the anchor end to encode the live broadcast data according to the coding rate and/or transmit the live broadcast data according to the sending rate.
In addition, in the case that the live broadcast end is the anchor end and the technical scheme of the embodiment shown in fig. 5 is executed by the anchor end, the method may further include: adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth; and encoding the live broadcast data according to the coding rate, and/or transmitting the live broadcast data to the server according to the sending rate.
In addition, in the case that the live broadcast end is the audience end and the technical scheme of the embodiment shown in fig. 5 is executed by the service end, the method may further include: adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth; and encoding the live broadcast data according to the coding rate, and/or transmitting the live broadcast data to the audience end according to the sending rate.
In order to further facilitate understanding of the technical solution of the present application, a network live broadcast scenario in which an anchor end transmits live broadcast data to a server end is taken as an example, and the technical solution of the present application is described with reference to the interaction schematic diagram shown in fig. 6. The server 601 may first select a plurality of first sample anchor ends 602; during each live broadcast, each first sample anchor end 602 trains its sub-model in real time in a reinforcement learning manner, according to the detected sample network information and the sample available bandwidth predicted by the corresponding sub-model. Each first sample anchor end 602 may continue training its sub-model across successive live broadcasts, thereby obtaining a plurality of sub-models.
Then, according to the network conditions during the live broadcasts of the first sample anchor ends corresponding to the sub-models, the sub-models may be divided into a plurality of grades, for example into 3 grades: good, medium, and bad.
And then, a plurality of sub-models of the same level can be subjected to federal learning, so that a prediction model corresponding to the level is obtained through aggregation processing, and finally, a good-level prediction model, a medium-level prediction model and a bad-level prediction model can be obtained. Since multiple sub-models may correspond to different live broadcasts, the predictive model may adapt to different network environments. The prediction models of a plurality of grades obtained through aggregation can be used as a fixed model for any live broadcast.
When any anchor end 602 performs live broadcasting, the server 601 may initialize the prediction model corresponding to that anchor end to the middle-level prediction model, and then continuously detect the network used by the anchor end 602 to transmit live broadcast data, so as to obtain network information. Based on network parameters such as the packet loss rate and delay jitter of the data packets, network evaluation parameters such as an overload or low-load state and the packet loss rate feature can be calculated, so that when the anchor end 602 meets the first switching condition, the prediction model one level higher than the currently used prediction model is used as the target prediction model, and when the anchor end 602 meets the second switching condition, the prediction model one level lower is used as the target prediction model. The target prediction model determined by the server 601 may be issued to the anchor end, which predicts the target available bandwidth using it; alternatively, the server may first obtain the target available bandwidth using the target prediction model and then issue that bandwidth to the anchor end 602.
Of course, as another alternative, the server 601 may send the prediction models of multiple levels to the anchor 602, and the anchor 602 switches the prediction models of different levels to perform available bandwidth prediction in combination with its own network information.
And then, the anchor 602 may adjust the coding rate and/or the sending rate according to the target available bandwidth obtained by prediction, and encode the live broadcast data according to the coding rate, and/or transmit the live broadcast to the server according to the sending rate.
By adopting the technical scheme, when the network condition of the anchor end is good, a higher-level prediction model trained in a sample network environment with a better network condition can be adopted; when the network condition is poor, a lower-level prediction model trained in a sample network environment with a poorer network condition is adopted. Performing available bandwidth prediction with a prediction model matched to the network condition can thus improve prediction accuracy.
Fig. 7 is a schematic structural diagram of an embodiment of a model generating device provided in an embodiment of the present application, where the device may include:
a training module 701 for training a plurality of sub-models using sample available bandwidth and sample network information employed for data streaming in a plurality of sample network environments;
a level determining module 702, configured to determine a plurality of levels of the sub-models according to network conditions of respective corresponding sample network environments;
An aggregation processing module 703, configured to aggregate at least one sub-model of the same level to obtain a prediction model;
the prediction models corresponding to the multiple levels are used for selecting a target prediction model according to the network information of the live broadcast end so as to predict the target available bandwidth of the live broadcast end by using the target prediction model.
In an actual application, the technical scheme of the embodiment of the application can be specifically applied to a network live broadcast scene, and the training module can be specifically used for training a plurality of sub-models by utilizing sample available bandwidth and sample network information adopted by respectively carrying out live broadcast data transmission in a plurality of sample network environments;
the grade determining module can be specifically used for determining a plurality of grades obtained by dividing a plurality of sub-models according to the network conditions of the respective corresponding sample network environments;
the aggregation processing module can be specifically used for carrying out aggregation processing on at least one sub-model of the same level to obtain a prediction model;
the prediction models corresponding to the multiple levels are used for selecting a target prediction model according to the network information of the live broadcast end so as to predict the target available bandwidth of the live broadcast end by using the target prediction model.
In some embodiments, the training module may specifically train the plurality of sub-models by using the sample available bandwidths and sample network information adopted by the plurality of first sample live broadcast ends during at least one live broadcast data transmission process, respectively.
In some embodiments, the training module may specifically, for any first sample live broadcast end during any live broadcast data transmission process, perform the following training operations until the used sub-model meets the first training condition, obtaining a trained sub-model; the training operations include: detecting sample network information of the first sample live broadcast end; adjusting model parameters of the used sub-model according to the sample network information; predicting the sample available bandwidth using the sub-model according to the sample network information; and adjusting the available transmission bandwidth of the live broadcast data according to the sample available bandwidth.
In some embodiments, the aggregation processing module may specifically perform weighting processing on model parameters of at least one sub-model of the same level to obtain a first aggregation parameter; an available bandwidth prediction model corresponding to the level is generated based on the first aggregation parameter.
In some embodiments, the aggregation processing module may specifically distribute multiple sub-models of the same level to multiple second sample live broadcast ends, where the multiple second sample live broadcast ends retrain the respective obtained sub-models by using the sample available bandwidths predicted by those sub-models and the correspondingly detected sample network information; weight the model parameters updated by the plurality of second sample live broadcast ends to obtain a second aggregation parameter; update the model parameters of the sub-models in the plurality of second sample live broadcast ends respectively by using the second aggregation parameter; and generate an available bandwidth prediction model corresponding to the level based on the most recently obtained second aggregation parameter, under the condition that the plurality of sub-models meet the second training condition.
The model generating device shown in fig. 7 may perform the model generating method described in the embodiment shown in fig. 1a, and its implementation principle and technical effects are not repeated. The specific manner in which the respective modules, units, and operations of the model generating apparatus in the above embodiment are performed has been described in detail in the embodiment concerning the method, and will not be described in detail here.
Fig. 8 is a schematic structural diagram of an embodiment of a bandwidth prediction apparatus provided in an embodiment of the present application, where the apparatus may include:
an information determining module 801 for determining network information of a network used by a client to stream data;
the model determining module 802 is configured to determine, according to the network information, a target prediction model from prediction models corresponding to the multiple levels respectively; wherein the prediction model is obtained by at least one sub-model aggregation process of the same level; the sub-model is obtained according to sample available bandwidth and sample network information training adopted by streaming data in a sample network environment;
and the available bandwidth prediction module 803 is configured to predict a target available bandwidth corresponding to the client by using the target prediction model and the network information.
In an actual application, the technical scheme of the embodiment of the application can be specifically applied to a network live broadcast scene, and the information determining module can be specifically used for determining network information corresponding to a network used by a live broadcast terminal for transmitting live broadcast data;
The model determining module is specifically used for determining a target prediction model from prediction models corresponding to a plurality of grades according to network information; wherein the prediction model is obtained by at least one sub-model aggregation process of the same level; the sub-model is obtained according to sample available bandwidth and sample network information training adopted for transmitting live broadcast data in a sample network environment;
the available bandwidth prediction module is specifically used for predicting the target available bandwidth corresponding to the live broadcast end by utilizing the target prediction model and the network information.
In some embodiments, the model determining module may specifically calculate and obtain a network evaluation parameter from network information corresponding to the live broadcast end; according to the network evaluation parameters, when the live broadcast end meets the first switching condition, taking a higher-level prediction model of the currently used prediction model as a target prediction model; the network utilization rate of the sample network environment corresponding to the prediction model with higher level is higher; and when the live broadcast end meets the second switching condition according to the network evaluation parameters, taking a lower-level prediction model of the currently used prediction model as a target prediction model.
In some embodiments, the model determining module calculates the network evaluation parameters from the network information of the live broadcast end by: determining the delay jitter feature of the live broadcast end according to the delay jitters of a plurality of recently transmitted data packets; determining that the live broadcast end is in an overload state under the condition that the delay jitter feature meets the overload condition; determining that the live broadcast end is in a low-load state under the condition that the delay jitter features obtained by multiple consecutive calculations meet the low-load condition; and counting the packet loss rate characteristic of the live broadcast end according to the packet loss rates included in the recently transmitted data packets;
The first switching condition comprises that the live broadcast terminal is in a low-load state and the packet loss rate characteristic is smaller than a first threshold value; the second switching condition includes that the live broadcast terminal is in an overload state and the packet loss rate characteristic is larger than a first threshold value.
In some embodiments, the model determining module, in taking the prediction model one level higher than the currently used prediction model as the target prediction model when the live broadcast end meets the first switching condition according to the network evaluation parameters, may specifically: take the prediction model one level higher than the currently used prediction model as the target prediction model under the conditions that the live broadcast end is in a low-load state, the packet loss rate characteristic is smaller than the first threshold, the currently used prediction model is not at the highest level, and the model switching interval duration is greater than the first duration.
The model determining module, in taking the prediction model one level lower than the currently used prediction model as the target prediction model when the live broadcast end meets the second switching condition according to the network evaluation parameters, may specifically: take the prediction model one level lower than the currently used prediction model as the target prediction model when the live broadcast end is in an overload state, the packet loss rate characteristic is greater than the second threshold, the currently used prediction model is not at the lowest level, and the model switching interval duration is greater than the second duration.
In some embodiments, the live broadcast end is the anchor end, and the apparatus may further include:
an available bandwidth sending module, used for sending the target available bandwidth to the anchor end; the target available bandwidth is used to instruct the anchor end to encode and/or transmit live broadcast data.
In some embodiments, the live broadcast end is the anchor end, and the apparatus may further include:
a first adjusting module, used for adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth, and transmitting the coding rate and/or the sending rate to the anchor end so as to instruct the anchor end to encode the live broadcast data according to the coding rate and/or transmit the live broadcast data according to the sending rate.
in some embodiments, the live end is a viewer end, and the apparatus may further include:
the second adjusting module is used for adjusting the coding rate and/or the sending rate of the live broadcast data according to the target available bandwidth; and encoding the live broadcast data according to the encoding code rate, and/or transmitting the live broadcast data to the audience terminal according to the transmitting code rate.
The bandwidth prediction apparatus shown in fig. 8 may perform the bandwidth prediction method described in the embodiment shown in fig. 2; its implementation principles and technical effects are not repeated here. The specific manner in which the respective modules, units, and operations of the bandwidth prediction apparatus in the above embodiment are performed has been described in detail in the embodiments concerning the method, and will not be detailed here.
In addition, the embodiment of the application further provides a computing device, as shown in fig. 9, which may include a storage component 901 and a processing component 902;
the storage component 901 stores one or more computer instructions for the processing component 902 to invoke and execute to implement the model generation method as shown in fig. 1a or the model generation method as shown in fig. 4 or the bandwidth prediction method as shown in fig. 2 or the bandwidth prediction method as shown in fig. 5.
Of course, the computing device may necessarily include other components, such as input/output interfaces, communication components, and the like. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The computing device may be a physical device or an elastic computing host provided by a cloud computing platform; in the latter case, the computing device may be a cloud server, and the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
In addition, an embodiment of the present application further provides an electronic device, as shown in fig. 10, where the electronic device may include a storage component 1001 and a processing component 1002;
the storage component 1001 stores one or more computer instructions, which are invoked and executed by the processing component 1002 to implement the bandwidth prediction method shown in fig. 2 or fig. 5.
Of course, the electronic device may also include other components, such as input/output interfaces, communication components, display components, and the like. The input/output interface provides an interface between the processing component and peripheral interface modules, which may be output devices, input devices, etc. The communication component is configured to facilitate network communication between the electronic device and other devices, such as streaming data.
In practical applications, the electronic device may be any device with a network communication function, such as a mobile phone, a tablet computer, a personal computer, or a smart watch.
The processing component in each of the above embodiments may include one or more processors that execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may instead be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above. The storage component is configured to store various types of data to support operation on the device, and may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, can implement the model generation method of the embodiment shown in fig. 1a or fig. 4. The computer-readable medium may be contained in the corresponding apparatus described in the above embodiments, or may exist alone without being incorporated into that apparatus.
An embodiment of the present application also provides a computer program product comprising a computer program carried on a computer-readable storage medium; when executed by a computer, the computer program can implement the model generation method of the embodiment shown in fig. 1a or fig. 4. In such embodiments, the computer program may be downloaded and installed from a network and/or installed from a removable medium. When executed by a processor, the computer program performs the various functions defined in the system of the present application.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, can implement the bandwidth prediction method of the embodiment shown in fig. 2 or fig. 5. The computer-readable medium may be contained in the corresponding device described in the above embodiments, or may exist alone without being incorporated into the electronic device.
An embodiment of the present application also provides a computer program product comprising a computer program carried on a computer-readable storage medium; when executed by a computer, the computer program can implement the bandwidth prediction method of the embodiment shown in fig. 2 or fig. 5. In such embodiments, the computer program may be downloaded and installed from a network and/or installed from a removable medium. When executed by a processor, the computer program performs the various functions defined in the system of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the systems, apparatuses, and units described above may refer to the corresponding procedures in the foregoing method embodiments, and are not repeated here.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the respective embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (12)

1. A method of bandwidth prediction, comprising:
determining network information of a network used by a live broadcast end to transmit live broadcast data;
determining a target prediction model from prediction models respectively corresponding to a plurality of levels according to the network information; wherein each prediction model is obtained by aggregating at least one sub-model of the same level, and each sub-model is obtained by training according to sample available bandwidth and sample network information;
and predicting a target available bandwidth by using the target prediction model and the network information.
2. The method according to claim 1, wherein the sample available bandwidth is the available bandwidth adopted when live broadcast data is transmitted in a sample network environment, and the sample network information is the network information of the sample network environment;
the determining a target prediction model from the prediction models respectively corresponding to a plurality of levels according to the network information comprises:
calculating network evaluation parameters from the network information of the live broadcast end;
when it is determined according to the network evaluation parameters that the live broadcast end satisfies a first switching condition, using a prediction model one level higher than the currently used prediction model as the target prediction model, wherein the higher the level, the higher the available bandwidth utilization rate of the sample network environment corresponding to the prediction model;
and when it is determined according to the network evaluation parameters that the live broadcast end satisfies a second switching condition, using a prediction model one level lower than the currently used prediction model as the target prediction model.
3. The method according to claim 2, wherein the calculating network evaluation parameters from the network information of the live broadcast end comprises:
determining a delay jitter feature of the live broadcast end according to the delay jitter of a plurality of recently transmitted data packets;
determining that the live broadcast end is in an overload state when the delay jitter feature satisfies an overload condition;
determining that the live broadcast end is in a low-load state when the delay jitter features obtained in a plurality of consecutive calculations satisfy a low-load condition;
and counting a packet loss rate feature of the live broadcast end according to the packet loss rates of the plurality of recently transmitted data packets;
wherein the first switching condition comprises that the live broadcast end is in the low-load state and the packet loss rate feature is smaller than a first threshold, and the second switching condition comprises that the live broadcast end is in the overload state and the packet loss rate feature is larger than the first threshold.
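The feature computation described in claim 3 might be sketched in Python as follows; this is an illustration only, not part of the claims. The `(delay_ms, lost)` packet record, the window size, and the use of the mean absolute delay difference as the jitter feature are assumptions introduced here.

```python
def jitter_and_loss_features(recent_packets, window=50):
    """Compute a delay-jitter feature and a packet-loss-rate feature over
    the most recent packets. Each packet is a hypothetical (delay_ms, lost)
    record; window size and jitter measure are illustrative."""
    pkts = list(recent_packets)[-window:]
    # Delay jitter: mean absolute difference between consecutive delays
    # of the packets that actually arrived.
    delays = [delay for delay, lost in pkts if not lost]
    jitters = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter_feature = sum(jitters) / len(jitters) if jitters else 0.0
    # Packet loss rate over the same window.
    loss_feature = (sum(1 for _, lost in pkts if lost) / len(pkts)) if pkts else 0.0
    return jitter_feature, loss_feature
```

These two features could then be compared against the overload/low-load conditions and the loss-rate threshold of the switching conditions.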
4. The method according to claim 3, wherein the using, when it is determined according to the network evaluation parameters that the live broadcast end satisfies the first switching condition, a prediction model one level higher than the currently used prediction model as the target prediction model comprises:
when the live broadcast end is in the low-load state, the packet loss rate feature is smaller than the first threshold, the currently used prediction model is not of the highest level, and the model switching interval duration is smaller than a first duration, using a prediction model one level higher than the currently used prediction model as the target prediction model;
and the using, when it is determined according to the network evaluation parameters that the live broadcast end satisfies the second switching condition, a prediction model one level lower than the currently used prediction model as the target prediction model comprises:
when the live broadcast end is in the overload state, the packet loss rate feature is larger than a second threshold, the currently used prediction model is not of the lowest level, and the model switching interval duration is longer than a second duration, using a prediction model one level lower than the currently used prediction model as the target prediction model.
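The level-switching conditions of claims 2 through 4 can be sketched as below; this is an illustration, not part of the claims. The threshold values are invented for the example, and the switching-interval check is simplified to a single cooldown, whereas claim 4 compares the interval against separate first and second durations.

```python
def select_level(current_level, num_levels, load_state, loss_rate,
                 since_switch_s, loss_up_thresh=0.02, loss_down_thresh=0.1,
                 min_interval_s=10):
    """Pick the prediction-model level. Levels run from 0 (lowest) to
    num_levels - 1 (highest); all thresholds are illustrative."""
    # First switching condition: low load and low loss -> go one level up,
    # unless already at the highest level or switching too frequently.
    if (load_state == "low" and loss_rate < loss_up_thresh
            and current_level < num_levels - 1 and since_switch_s > min_interval_s):
        return current_level + 1
    # Second switching condition: overload and high loss -> fall back one level.
    if (load_state == "overload" and loss_rate > loss_down_thresh
            and current_level > 0 and since_switch_s > min_interval_s):
        return current_level - 1
    return current_level
```

For instance, with three levels, a low-load state with 1% loss after a 20 s quiet period would move level 1 up to level 2, while an overloaded state with 20% loss would drop it to level 0.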
5. The method according to claim 1, wherein the live broadcast end is a server end, and the method further comprises:
transmitting the target available bandwidth to the anchor end, the target available bandwidth being used for instructing the anchor end to encode and/or send the live broadcast data;
or, the live broadcast end is a server end, and the method further comprises:
adjusting an encoding code rate and/or a sending code rate of the live broadcast data according to the target available bandwidth;
and transmitting the encoding code rate and/or the sending code rate to the anchor end so as to instruct the anchor end to encode the live broadcast data according to the encoding code rate and/or send the live broadcast data according to the sending code rate;
or, the live broadcast end is an anchor end, and the method further comprises:
adjusting an encoding code rate and/or a sending code rate of the live broadcast data according to the target available bandwidth;
and encoding the live broadcast data according to the encoding code rate, and/or sending the live broadcast data to a viewer end according to the sending code rate.
6. A model generation method, comprising:
training a plurality of sub-models by using the sample available bandwidth and sample network information adopted when live broadcast data transmission is performed in a plurality of sample network environments respectively;
dividing the plurality of sub-models into a plurality of levels according to the network conditions of their respective corresponding sample network environments;
aggregating at least one sub-model of the same level to obtain a prediction model;
wherein the prediction models corresponding to the plurality of levels are used for selecting a target prediction model therefrom according to network information of a live broadcast end, so as to predict a target available bandwidth of the live broadcast end by using the target prediction model.
7. The method according to claim 6, wherein the training a plurality of sub-models by using the sample available bandwidth and sample network information adopted when live broadcast data transmission is performed in a plurality of sample network environments respectively comprises:
training the plurality of sub-models by using the sample available bandwidths and sample network information adopted by a plurality of first sample live broadcast terminals in at least one live broadcast data transmission process.
8. The method according to claim 7, wherein the training the plurality of sub-models by using the sample available bandwidths and sample network information adopted by the plurality of first sample live broadcast terminals in at least one live broadcast data transmission process comprises:
for any first sample live broadcast terminal, in the process of any live broadcast data transmission, iteratively performing the following training operation until the sub-model in use satisfies a first training condition, thereby obtaining a trained sub-model; the training operation comprises:
detecting sample network information of the first sample live broadcast terminal;
adjusting model parameters of the sub-model in use according to the sample network information;
predicting a sample available bandwidth by using the sub-model according to the sample network information;
and adjusting the transmission available bandwidth of the live broadcast data according to the sample available bandwidth.
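The training operation of claim 8 (observe network information, update the sub-model, predict a sample available bandwidth, apply it to the transmission) can be illustrated with a toy Python sketch. The exponentially weighted moving average below is a hypothetical stand-in for the learned sub-models and is not the training algorithm of this application.

```python
class ThroughputSubmodel:
    """Toy stand-in for a sub-model: an exponentially weighted moving
    average of observed throughput. Purely illustrative."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = None

    def update(self, observed_kbps):
        # "Adjusting model parameters according to the sample network information"
        if self.estimate is None:
            self.estimate = float(observed_kbps)
        else:
            self.estimate += self.alpha * (observed_kbps - self.estimate)

    def predict(self):
        # "Predicting a sample available bandwidth by using the sub-model"
        return self.estimate


def train_submodel(model, throughput_samples):
    """One pass of the claim-8 loop over recorded throughput samples (kbps);
    returns the bandwidths that would be applied to the transmission."""
    applied = []
    for observed in throughput_samples:   # detect sample network information
        model.update(observed)            # adjust the sub-model in use
        applied.append(model.predict())   # use the prediction as the send bandwidth
    return applied
```

A real implementation would run this loop online during live transmission and stop once the training condition (e.g. convergence) is met.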
9. A bandwidth prediction method, comprising:
determining network information of a network used by a client to stream data;
determining a target prediction model from prediction models respectively corresponding to a plurality of levels according to the network information; wherein each prediction model is obtained by aggregating at least one sub-model of the same level, and each sub-model is obtained by training according to sample available bandwidth and sample network information adopted when streaming data in a sample network environment;
and predicting a target available bandwidth corresponding to the client by using the target prediction model and the network information.
10. A model generation method, comprising:
training a plurality of sub-models by using the sample available bandwidth and sample network information adopted when data streaming is performed in a plurality of sample network environments;
dividing the plurality of sub-models into a plurality of levels according to the network conditions of their respective corresponding sample network environments;
aggregating at least one sub-model of the same level to obtain a prediction model;
wherein the prediction models corresponding to the plurality of levels are used for selecting a target prediction model therefrom according to network information of a client, so as to predict a target available bandwidth corresponding to the client by using the target prediction model.
11. An electronic device comprising a storage component and a processing component, the storage component storing one or more computer instructions; the one or more computer instructions are to be invoked by the processing component to implement the bandwidth prediction method of any one of claims 1-5 or the bandwidth prediction method of claim 9.
12. A computing device comprising a storage component and a processing component, the storage component storing one or more computer instructions; the one or more computer instructions are to be invoked by the processing component to implement the bandwidth prediction method of any one of claims 1 to 5 or the bandwidth prediction method of claim 9 or the model generation method of any one of claims 6 to 8 or the model generation method of claim 10.
CN202210238602.5A 2022-03-11 2022-03-11 Bandwidth prediction method, model generation method and device Active CN114629797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210238602.5A CN114629797B (en) 2022-03-11 2022-03-11 Bandwidth prediction method, model generation method and device

Publications (2)

Publication Number Publication Date
CN114629797A CN114629797A (en) 2022-06-14
CN114629797B true CN114629797B (en) 2024-03-08

Family

ID=81902617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210238602.5A Active CN114629797B (en) 2022-03-11 2022-03-11 Bandwidth prediction method, model generation method and device

Country Status (1)

Country Link
CN (1) CN114629797B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115801639B (en) * 2022-08-01 2024-06-07 天翼云科技有限公司 Bandwidth detection method and device, electronic equipment and storage medium

Citations (17)

Publication number Priority date Publication date Assignee Title
WO2015058656A1 (en) * 2013-10-21 2015-04-30 腾讯科技(深圳)有限公司 Live broadcast control method and main broadcast device
EP2887578A1 (en) * 2013-12-19 2015-06-24 BAE Systems PLC Network fault detection and location
CN108268934A (en) * 2018-01-10 2018-07-10 北京市商汤科技开发有限公司 Recommendation method and apparatus, electronic equipment, medium, program based on deep learning
CN110012302A (en) * 2018-01-05 2019-07-12 阿里巴巴集团控股有限公司 A kind of network direct broadcasting monitoring method and device, data processing method
WO2019165673A1 (en) * 2018-02-27 2019-09-06 平安科技(深圳)有限公司 Reimbursement form risk prediction method, apparatus, terminal device, and storage medium
CN110445653A (en) * 2019-08-12 2019-11-12 灵长智能科技(杭州)有限公司 Network state prediction technique, device, equipment and medium
WO2020098016A1 (en) * 2018-11-14 2020-05-22 网宿科技股份有限公司 Network traffic prediction method, device, and electronic device
CN111291770A (en) * 2018-12-06 2020-06-16 华为技术有限公司 Parameter configuration method and device
CN111405327A (en) * 2020-04-03 2020-07-10 广州市百果园信息技术有限公司 Network bandwidth prediction model training method, video data playing method and device
CN112274925A (en) * 2020-10-28 2021-01-29 超参数科技(深圳)有限公司 AI model training method, calling method, server and storage medium
CN112561320A (en) * 2020-12-14 2021-03-26 中国建设银行股份有限公司 Training method of mechanism risk prediction model, mechanism risk prediction method and device
CN112769619A (en) * 2021-01-08 2021-05-07 南京信息工程大学 Multi-classification network fault prediction method based on decision tree
CN113269319A (en) * 2020-02-14 2021-08-17 阿里巴巴集团控股有限公司 Deep learning model tuning method, deep learning model compiling method and computing device
CN113315604A (en) * 2021-05-25 2021-08-27 电子科技大学 Adaptive gradient quantization method for federated learning
CN113379530A (en) * 2021-06-09 2021-09-10 中国工商银行股份有限公司 User risk determination method and device and server
CN113422751A (en) * 2020-08-27 2021-09-21 阿里巴巴集团控股有限公司 Streaming media processing method and device based on online reinforcement learning and electronic equipment
CN113554093A (en) * 2021-07-23 2021-10-26 中国银联股份有限公司 Prediction model training and account value prediction method, device, equipment and medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN103561283A (en) * 2013-10-30 2014-02-05 乐视致新电子科技(天津)有限公司 Method and device for code rate switch of smart television
US10868726B2 (en) * 2018-12-07 2020-12-15 At&T Intellectual Property I, L.P. Apparatus and method for selecting a bandwidth prediction source


Also Published As

Publication number Publication date
CN114629797A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US11228794B2 (en) Method, computer device, and storage medium for media code rate self-adaptation
Duanmu et al. A quality-of-experience database for adaptive video streaming
Bampis et al. Towards perceptually optimized end-to-end adaptive video streaming
Wamser et al. Modeling the YouTube stack: From packets to quality of experience
CN109618565B (en) Method and system for automatically measuring experience quality of streaming video user
García et al. Quality-control algorithm for adaptive streaming services over wireless channels
US20220124141A1 (en) System and method for managing video streaming quality of experience
Aguayo et al. DASH adaptation algorithm based on adaptive forgetting factor estimation
CN114629797B (en) Bandwidth prediction method, model generation method and device
Mu et al. AMIS: Edge computing based adaptive mobile video streaming
Sun et al. Optimal strategies for live video streaming in the low-latency regime
Rahman et al. SABA: Segment and buffer aware rate adaptation algorithm for streaming over HTTP
EP2928145A1 (en) Method for estimating a bandwidth associated with a connection between a client terminal and at least one server, corresponding client terminal
CN114040257B (en) Self-adaptive video stream transmission playing method, device, equipment and storage medium
Zhang et al. A QOE-driven approach to rate adaptation for dynamic adaptive streaming over http
Li et al. JUST360: Optimizing 360-Degree Video Streaming Systems With Joint Utility
US9537707B2 (en) System for self organization of an internet protocol television grid
Araújo et al. A hybrid energy-aware video bitrate adaptation algorithm for mobile networks
WO2022037798A1 (en) Estimating video resolution delivered by an encrypted video stream
CN118175356A (en) Video transmission method, device, equipment and storage medium
CN116074256A (en) Content distribution network node bandwidth optimization method and device, electronic equipment and medium
Bouzian Modeling and optimization of the quality of customer experience (QoE) of data services on the mobile network. Application to video streaming
KR20220128396A (en) Flash Crowd Management for Live Streaming
Poliakov et al. Case for caching and Model Predictive Control quality decision algorithm for HTTP Adaptive Streaming: is cache-awareness actually needed?
CN112333456A (en) Live video transmission method based on cloud edge protocol

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant