CN118488269A - Video processing method and electronic equipment - Google Patents
- Publication number
- CN118488269A (application CN202311578891.4A)
- Authority
- CN
- China
- Prior art keywords
- video
- quality
- service
- abnormal
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
Abstract
The application provides a video processing method and an electronic device, relating to the field of electronic technology, for monitoring in real time the quality of online media services provided by the electronic device. The method is applied to the electronic device and includes the following steps: sequentially acquiring video frame information of each video frame in the video to be transmitted by the electronic device, where the video frame information of a frame includes the time stamp of the frame and the data amount of the frame; determining, according to the video frame information corresponding to a first time, whether the video quality of the video at the first time is abnormal, where a video quality abnormality includes the video stuttering and/or blurring, and the first time is any time in the video; after detecting that the video quality at the first time is abnormal, acquiring the parameter values of the video-quality-influencing parameters; and analyzing, according to those parameter values, the cause of the video quality abnormality.
Description
Technical Field
Embodiments of this application relate to the field of electronic technology, and in particular to a video processing method and an electronic device.
Background
Because signal coverage is not comprehensive, poor network signal can occur in some scenarios. For example, users may encounter weak signals when using an electronic device on a high-speed train, in a subway where the underground signal is weak, or in a remote mountainous area with poor signal coverage. Under poor signal conditions, video playback on the electronic device may stutter and the resolution may drop, degrading the user's experience of online media.
Disclosure of Invention
The embodiments of this application provide a video processing method and an electronic device for monitoring in real time the quality of online media services provided by the electronic device.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a video processing method is provided, where the method is applied to an electronic device, and includes:
The electronic device sequentially acquires video frame information of each video frame in the video to be transmitted, where the video frame information of a frame may include the time stamp of the frame and the data amount of the frame. The electronic device may determine, according to the video frame information corresponding to a first time, whether the video quality of the video at the first time is abnormal. Then, when an abnormality in the video quality at the first time is detected, the electronic device acquires the parameter values of the video-quality-influencing parameters and analyzes, according to those parameter values, the cause of the abnormality. A video quality abnormality may include the video stuttering and/or blurring. The first time may be any time in the video.
In this solution, while the electronic device provides a video service, it can determine in real time whether the video quality is abnormal according to the acquired video frame information. When an abnormality is detected, it can obtain the parameter value of each video-quality-influencing parameter and analyze the cause of the abnormality, which facilitates targeted optimization to improve the video quality.
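The per-frame information described above can be sketched as a minimal data structure; the field names and the tuple-based input format are illustrative assumptions, not the patent's format:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class VideoFrameInfo:
    timestamp_ms: int   # time stamp of the video frame
    data_bytes: int     # data amount of the video frame


def acquire_frame_info(frames: Iterable[tuple[int, int]]) -> Iterator[VideoFrameInfo]:
    """Sequentially yield per-frame information for a video being transmitted."""
    for ts, size in frames:
        yield VideoFrameInfo(ts, size)
```

Downstream checks (stutter from timestamps, blur from data amounts) would consume this stream one frame at a time.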
In a possible implementation manner of the first aspect, the video to be transmitted by the electronic device is an online video.
In a possible implementation manner of the first aspect, the video to be transmitted by the electronic device may be online video content provided by a video application, the video content of a video call, or live video content provided by a live-streaming application.
In a possible implementation manner of the first aspect, the method may further include: executing, after the first time, a first optimization strategy corresponding to the cause of the abnormality. Executing the first optimization strategy optimizes the electronic device and thereby improves the video quality. In this solution, because the electronic device executes the first optimization strategy that corresponds to the specific cause of the abnormality, targeted optimization can be achieved and the video quality can be improved more effectively.
In a possible implementation manner of the first aspect, the video-quality-influencing parameters include a network quality parameter, and the first optimization strategy includes starting a network acceleration service when the network quality of the electronic device indicated by the network quality parameter does not meet a preset network quality condition. The network quality parameter characterizes the network quality of the electronic device; the better the network quality, the higher the video quality. Therefore, after detecting a video quality abnormality, if analysis of the network quality parameter determines that the network quality does not meet the preset condition, the electronic device can optimize by means of network acceleration to increase the network speed. After the network speed is increased, the video quality can be improved.
In a possible implementation manner of the first aspect, the network quality parameter includes a network transmission speed, and when the network transmission speed is lower than a preset speed, it may be determined that the network quality parameter indicates that the network quality of the electronic device does not meet a preset network quality condition.
In a possible implementation manner of the first aspect, the video-quality-influencing parameters include the temperature of the electronic device, and the first optimization strategy includes limiting the frequency of the processor of the electronic device when the temperature of the electronic device is greater than a preset temperature. Generally, when the processor frequency is high, the electronic device generates heat, and the video quality provided by the device is also affected. Therefore, in this solution, after a video quality abnormality is detected, if the temperature of the electronic device is greater than the preset temperature, the electronic device may limit the processor frequency so as to reduce it. After the processor frequency is reduced, the video quality can be improved.
In a possible implementation manner of the first aspect, the video-quality-influencing parameters include processor load information of the electronic device, and the first optimization strategy includes performing system-level scheduling on the processor of the electronic device when the processor load indicated by the processor load information is greater than a preset load threshold. A high processor load may also affect the quality of the video service the electronic device is currently providing. Therefore, in this solution, after a video quality abnormality is detected, if the processor load is greater than the preset load threshold, the electronic device can perform system-level scheduling on the processor to reduce its load. After the processor load is reduced, the video quality can be improved.
In a possible implementation manner of the first aspect, the electronic device performing system-level scheduling on its processor may specifically include: the electronic device obtains the resource occupancy of the processor; then, according to the resource occupancy, the electronic device determines which application processes occupy more resources. The electronic device schedules the application processes that occupy more resources onto a low-load core, or controls those application processes to enter a sleep state. In this way, the load of the electronic device can be reduced. The resource occupancy here may specifically refer to the resource occupancy of the media codec.
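The scheduling step above can be sketched as follows; the process records, the `codec_share` occupancy field, and the core identifiers are all hypothetical illustrations of the described policy, not the patent's actual scheduling interface:

```python
def schedule(processes: list[dict], little_cores: list[int]) -> tuple[str, str]:
    """Pick the process with the highest codec resource occupancy and either
    pin it to a low-load core or put it to sleep."""
    heavy = max(processes, key=lambda p: p["codec_share"])
    if little_cores:
        # Schedule the heavy process onto the given low-load cores.
        heavy["affinity"] = set(little_cores)
        return heavy["name"], "moved"
    # No low-load core available: control the process to enter a sleep state.
    heavy["state"] = "sleeping"
    return heavy["name"], "slept"
```

On a real system the affinity assignment would map to an OS facility such as `sched_setaffinity`; here it is only recorded in the process dict.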
In a possible implementation manner of the first aspect, the video-quality-influencing parameters of the electronic device may include at least one of: the network transmission speed, the temperature of the electronic device, and the processor load of the electronic device. Likewise, there may be one or more causes of the video quality abnormality. When there are multiple causes, the electronic device may execute the first optimization strategy corresponding to each cause respectively.
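The mapping from parameter values to abnormality causes and first optimization strategies can be sketched as below; every threshold value and strategy name is a hypothetical placeholder chosen for illustration:

```python
PRESET_SPEED_KBPS = 2000   # hypothetical preset network transmission speed
PRESET_TEMP_C = 45.0       # hypothetical preset temperature
PRESET_LOAD = 0.85         # hypothetical preset load threshold


def diagnose(params: dict) -> list[str]:
    """Map video-quality-influencing parameter values to abnormality causes."""
    causes = []
    if params.get("net_speed_kbps", float("inf")) < PRESET_SPEED_KBPS:
        causes.append("poor_network")
    if params.get("temp_c", 0.0) > PRESET_TEMP_C:
        causes.append("overheating")
    if params.get("cpu_load", 0.0) > PRESET_LOAD:
        causes.append("high_load")
    return causes


# One first optimization strategy per abnormality cause; each detected
# cause would have its corresponding strategy executed respectively.
FIRST_OPTIMIZATION = {
    "poor_network": "start_network_acceleration",
    "overheating": "limit_processor_frequency",
    "high_load": "system_level_scheduling",
}
```

When `diagnose` returns several causes, the device would iterate over them and apply each entry of `FIRST_OPTIMIZATION` in turn.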
In a possible implementation manner of the first aspect, the electronic device executing the first optimization strategy corresponding to the cause of the abnormality after the first time may specifically mean executing that strategy for a period of time (e.g., a third preset time) after the first time. In this way, unnecessary power consumption of the electronic device can be reduced.
In a possible implementation manner of the first aspect, the video quality abnormality includes the video stuttering. In this solution, determining, according to the video frame information corresponding to the first time, whether the video quality at the first time is abnormal may specifically include: acquiring a first time stamp of the first video frame corresponding to the first time and a second time stamp of a second video frame, where the second video frame is the video frame immediately preceding the first video frame; and then obtaining the time interval between the first time stamp and the second time stamp. It will be appreciated that when the video stutters, the time interval between two adjacent video frames becomes longer. Therefore, when the time interval is greater than a preset time, it is determined that the video stutters at the first time.
In this solution, whether the video stutters at the first time is judged according to the time interval between the video frame at the first time and the previous video frame. This makes it convenient to detect whether the video stutters at the first time.
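The timestamp-interval check above can be sketched in a few lines; the 100 ms default is a hypothetical preset time, not a value from the patent:

```python
def is_stutter(first_ts_ms: float, second_ts_ms: float,
               preset_interval_ms: float = 100.0) -> bool:
    """Flag a stutter when the gap between the first video frame and the
    previous (second) video frame exceeds the preset time."""
    return first_ts_ms - second_ts_ms > preset_interval_ms
```

For a nominal 30 fps stream (about 33 ms per frame), a gap of several frame times would trip this check.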
In a possible implementation manner of the first aspect, the preset time includes a first preset time, and the first preset time is a fixed value. For example, in the process of playing the same video by the electronic device, the same value (i.e. the first preset time) is used for judging whether the video is stuck or not.
In a possible implementation manner of the first aspect, the first preset time may be a preset fixed value, such as a time commonly used by a movie frame.
In a possible implementation manner of the first aspect, the first preset time may also be set according to a preset frame time. The preset frame time may be determined according to a preset frame rate of the video to be transmitted by the electronic device. For example, the first preset time may be set to the preset frame time, or to a weighted value of the preset frame time, e.g., a × the preset frame time, where a is a positive number greater than 1.
In a possible implementation manner of the first aspect, the preset time includes a second preset time, where the second preset time is set according to the time intervals between multiple groups of two adjacent video frames before the first video frame. Since the preset frame rate of a video is usually fixed, a first preset time derived from it remains a fixed value while the electronic device displays the video, whereas the second preset time tracks the actual frame intervals of the video.
In a possible implementation manner of the first aspect, the time intervals between the groups of two adjacent video frames before the first video frame may be the time intervals between any groups of two adjacent video frames in the video before the first video frame, or the time intervals between the groups of two adjacent video frames immediately preceding the first video frame.
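Deriving the second preset time from the intervals of preceding frame pairs can be sketched as an average scaled by a weighting factor; the factor `a = 2.0` is a hypothetical choice, not a value given in the patent:

```python
def second_preset_time(prev_intervals_ms: list[float], a: float = 2.0) -> float:
    """Derive the stutter threshold from the time intervals of several groups
    of adjacent frames preceding the first video frame; 'a' is a weighting
    factor greater than 1."""
    avg = sum(prev_intervals_ms) / len(prev_intervals_ms)
    return a * avg
```

A gap exceeding this threshold would then be treated as a stutter at the first time.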
In a possible implementation manner of the first aspect, the video quality abnormality includes the video blurring. In this solution, determining, according to the video frame information corresponding to the first time, whether the video quality at the first time is abnormal may specifically include: determining the actual resolution of the video at the first time according to the data amount of the first video frame corresponding to the first time. It will be appreciated that when the video blurs, the actual resolution of the video is lower. Therefore, when the actual resolution is smaller than a preset resolution, it can be determined that the video is blurred at the first time.
In this solution, whether the video is blurred at the first time is determined from the actual resolution of the video frame at the first time. This makes it easy to detect whether the video is blurred at the first time.
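The patent does not specify how resolution is inferred from the data amount; one plausible sketch is a lookup of minimum expected bytes per resolution tier. All byte thresholds below are invented for illustration and would in reality depend on the codec and bitrate:

```python
# Hypothetical minimum encoded data amount (bytes) per resolution tier.
MIN_BYTES_PER_TIER = {2160: 120_000, 1080: 60_000, 720: 30_000, 540: 15_000}


def estimate_resolution(frame_bytes: int) -> int:
    """Infer the actual resolution tier from a frame's data amount."""
    for tier, min_bytes in sorted(MIN_BYTES_PER_TIER.items(), reverse=True):
        if frame_bytes >= min_bytes:
            return tier
    return 0


def is_blurred(frame_bytes: int, preset_resolution: int = 540) -> bool:
    """Blur is flagged when the estimated actual resolution falls below the
    preset resolution (540P used here as in the example threshold)."""
    return estimate_resolution(frame_bytes) < preset_resolution
```

With a target-resolution check, `preset_resolution` would instead be set to the video's first preset resolution.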
In a possible implementation manner of the first aspect, the preset resolution includes a first preset resolution, where the first preset resolution is the target resolution of the video. For example, a target resolution is preset when the electronic device plays the video, and absent the influence of other factors the electronic device plays the video at that resolution. In this solution, if the actual resolution of the video frame corresponding to the first time does not reach the target resolution of the video (i.e., the first preset resolution), it is determined that the video is blurred at the first time.
In a possible implementation manner of the first aspect, the preset resolution includes a second preset resolution, which may be a threshold set according to the actual situation, for example 540P. In this solution, if the actual resolution of the video corresponding to the first time does not reach the second preset resolution, it is determined that the video is blurred at the first time.
In a possible implementation manner of the first aspect, the method further includes: the electronic device acquires the rendering mode of the video. If the rendering mode is rendering by the system of the electronic device, the electronic device executes a second optimization strategy after the first time upon detecting that the video quality at the first time is abnormal. The second optimization strategy is used by the electronic device to perform image quality enhancement processing on the video.
In this solution, if the video rendering mode is rendering by the system of the electronic device, the electronic device can improve the video quality by enhancing the video images during rendering. This helps improve video quality, for example by raising the video frame rate and resolution, thereby reducing stuttering and blurring.
In a possible implementation manner of the first aspect, the video rendering mode includes rendering by the system of the electronic device and rendering by the application itself.
In a possible implementation manner of the first aspect, when the video quality abnormality manifests as the video stuttering, the second optimization strategy includes performing frame insertion (frame interpolation) processing on the video. In this way, video stuttering can be reduced.
In a possible implementation manner of the first aspect, when the video quality abnormality manifests as the video blurring, the second optimization strategy includes performing super-resolution processing on the video. In this way, video blur can be reduced.
In a possible implementation manner of the first aspect, when the video quality abnormality manifests as the video both stuttering and blurring, the second optimization strategy includes performing both frame insertion processing and super-resolution processing on the video.
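The selection of second-optimization steps for system-rendered video can be sketched as a simple dispatch; the step names are illustrative labels for the frame-insertion and super-resolution processing described above:

```python
def second_optimization(stutter: bool, blur: bool) -> list[str]:
    """Select image-quality enhancement steps for system-rendered video."""
    steps = []
    if stutter:
        steps.append("frame_interpolation")  # insert frames to smooth playback
    if blur:
        steps.append("super_resolution")     # upscale frames to reduce blur
    return steps
```

When both abnormalities are present, both steps are returned and would be applied during rendering.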
In a possible implementation manner of the first aspect, the electronic device executes the second optimization strategy after the first time, and in particular may execute the second optimization strategy within a period of time after the first time.
In a possible implementation manner of the first aspect, the electronic device executes the second optimization strategy after the first time, or may execute the second optimization strategy after the first time and before the video ends.
In a possible implementation manner of the first aspect, the method further includes: after detecting that the video quality at the first time is abnormal, acquiring preset information of the electronic device, where the preset information includes at least one of the following: application identification information, codec resource occupancy information of the electronic device, media information of the video, or a network quality parameter indicating the network quality of the electronic device. The electronic device then reports a video quality abnormality event, which carries the preset information, to a server.
In this solution, after detecting a video quality abnormality, the electronic device may report a video quality abnormality event to the server, together with the preset information captured at the time of the abnormality. The server can then analyze the causes of video quality abnormalities across multiple reported events.
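The preset information carried by such an event could be assembled as below; every field name and the JSON encoding are assumptions for illustration, since the patent does not define a wire format:

```python
import json
import time


def build_quality_event(app_id: str, codec_occupancy: float,
                        media_info: dict, net_quality: dict) -> str:
    """Assemble the preset information carried by a video-quality-abnormal
    event as a JSON payload (illustrative schema)."""
    event = {
        "type": "video_quality_abnormal",
        "timestamp": int(time.time()),
        "app_id": app_id,                  # application identification information
        "codec_occupancy": codec_occupancy,  # codec resource occupancy
        "media_info": media_info,          # media information of the video
        "network_quality": net_quality,    # network quality parameter
    }
    return json.dumps(event)
```

The resulting payload would be uploaded to the server, which can aggregate events across devices to analyze abnormality causes.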
In a possible implementation manner of the first aspect, the electronic device includes an application, a media codec service, and a video quality of experience (QoE) service; the application is used to provide video services, such as displaying video. In this solution, the sequential acquisition of the video frame information of each video frame in the video to be transmitted may specifically be performed by the media codec service, which obtains the information from the application. The method further includes: the media codec service sequentially transmits the video frame information of each video frame to the video QoE service. Determining, according to the video frame information corresponding to the first time, whether the video quality at the first time is abnormal may specifically be performed by the video QoE service. Acquiring the parameter values of the video-quality-influencing parameters after detecting the abnormality, and analyzing the cause of the abnormality according to those parameter values, may likewise be performed by the video QoE service.
In a possible implementation manner of the first aspect, executing, after the first time, the first optimization strategy corresponding to the cause of the abnormality may specifically include: the video QoE service executes the first optimization strategy corresponding to the cause after the first time.
In a possible implementation manner of the first aspect, acquiring the preset information of the electronic device after detecting that the video quality at the first time is abnormal, and reporting the video quality abnormality event to the server, may be performed by the video QoE service.
In a second aspect, the application further provides an electronic device. The electronic device may include a processor and a memory. The memory is configured to store computer-executable instructions that, when executed by the processor, cause the electronic device to perform the video processing method of any one of the first aspects above.
In a third aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the video processing method of any one of the first aspects above.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on an electronic device, enable the electronic device to perform the video processing method of any one of the first aspects above.
In a fifth aspect, an apparatus is provided (for example, the apparatus may be a system-on-a-chip), comprising a processor configured to support an electronic device in implementing the functions referred to in the first aspect above. In one possible design, the apparatus further includes a memory for storing the program instructions and data necessary for the electronic device. When the apparatus is a chip system, it may consist of a chip, or it may include the chip and other discrete devices.
The technical effects of any one of the designs of the second aspect to the fifth aspect may refer to the technical effects of the corresponding designs of the first aspect, and are not repeated here.
Drawings
Fig. 1 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of video QoE service initiation according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of a video processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of determining, from multiple video frames, whether a video stutters according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of a video QoE service determining, from the time stamps of video frames, whether a video stutters according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the information a video QoE service obtains from a media codec service in order to analyze the cause of a video quality abnormality according to an embodiment of the present application;
Fig. 7 is a schematic flow chart of a video processing method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the information a video QoE service obtains from a media codec service in order to report a video quality abnormality event to a server according to an embodiment of the present application;
Fig. 9 is a software architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a software architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 11 is an interface diagram of an electronic device according to an embodiment of the present application;
Fig. 12 is a block diagram of a chip system according to an embodiment of the present application.
Detailed Description
Taking a mobile phone as an example of the electronic device: when a user rides a high-speed train, the phone is moving at high speed and continuously switches between base stations; the frequent network switching can interrupt the phone's network connection. When a user rides a subway, the underground location weakens the phone's network signal. When a user is in a mountainous area, poor signal coverage likewise leads to a poor or interrupted network. A poor or interrupted network can cause video stuttering, resolution degradation (i.e., blurring), and similar problems when the phone provides online media services, which affect the user's quality of experience (QoE).
Based on this, the embodiments of this application provide a video processing method for monitoring in real time the quality of online media services provided by an electronic device. When an abnormality in the online media service quality is detected, the cause of the abnormality can be analyzed so that timely optimization can be performed. The method can be applied to an electronic device.
The electronic device may be a mobile phone, a tablet computer, a personal computer (PC), a desktop, a laptop, a handheld computer, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a smart screen, a smart watch or other wearable device, an artificial intelligence (AI) speaker, or a vehicle-mounted device; or it may be a teaching aid (e.g., a learning machine or early-education machine), a smart toy, a portable robot, a personal digital assistant (PDA), an augmented reality (AR) or virtual reality (VR) device, a media player, or another such device; or it may be an audio-visual device with a mobile office function, a device with a smart home function, a device with an entertainment function, a device supporting intelligent travel, and the like. The embodiments of the application do not limit the specific form of the device.
Fig. 1 illustrates a schematic structure of an electronic device 100 provided in some embodiments of the application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, keys 190, a motor 191, a camera 192, a display 193, and a subscriber identity module (subscriber identification module, SIM) card interface 194, among others. Among other things, the sensor module 180 may include a pressure sensor 180A, a touch sensor 180B, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. For example, the processor 110 is configured to perform the video processing method in the embodiment of the present application.
The controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store application programs (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system.
In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 193, the camera 192, the wireless communication module 160, and the like.
In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area network (WLAN) (e.g., Wi-Fi), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 may implement audio functions through the audio module 170, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio signals to analog audio signal outputs and also to convert analog audio inputs to digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display 193. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may comprise at least two parallel plates made of a conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 193, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A.
The touch sensor 180B is also referred to as a "touch panel". The touch sensor 180B may be disposed on the display 193; together, the touch sensor 180B and the display 193 form what is also referred to as a "touch screen". The touch sensor 180B is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 193. In other embodiments, the touch sensor 180B may also be disposed on the surface of the electronic device 100 at a location different from that of the display 193.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback.
The camera 192 is used to capture still images or video. In some embodiments, the electronic device 100 may include 1 or N cameras 192, N being a positive integer greater than 1.
The electronic device 100 implements display functions through a GPU, a display screen 193, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 193 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 193 is used to display images, videos, and the like. In some embodiments, electronic device 100 may include 1 or N display screens 193, N being a positive integer greater than 1.
The SIM card interface 194 is used to connect to a SIM card. The SIM card may be inserted into the SIM card interface 194, or removed from the SIM card interface 194 to enable contact and separation with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
The video processing methods related to the following embodiments may be performed in the electronic device 100 having the above-described hardware configuration.
The following is a brief description of technical terms that may be involved in embodiments of the present application.
QoE: subjective feelings of the user on the device, network and system, application or service.
Frame rate (frames per second, FPS): the number of video frames played per second. The frame rate influences the smoothness of the picture: the higher the frame rate, the smoother the picture; the lower the frame rate, the choppier the picture appears.
Resolution: the number of pixels contained in a picture or video frame. Resolution includes the length and width of a picture or video frame and defines its size. Typically, resolution represents the size of a picture or video frame: the higher the resolution, the larger the picture or video frame; the lower the resolution, the smaller the picture or video frame.
The media codec service (media codec service) is primarily responsible for providing the framework layer with an interface for calling the audio/video codecs of the hardware abstraction layer (HAL). The media codec service may provide an encoding service and a decoding service, and may in particular be used to encode and decode multimedia data such as video data. In other embodiments, the media codec service may also provide a rendering service for rendering the decoded video data. For example, when the video is rendered by the mobile phone system, the media codec service may also render the decoded video data.
Frame insertion is used to insert additional frames into a video to improve the frame rate, smoothness, and viewing experience of the video.
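As an illustrative sketch (not part of the patent's implementation), frame insertion can be pictured as blending each pair of consecutive frames into an intermediate frame. The `interpolate_frames` helper and the list-of-pixel-values frame representation below are assumptions made purely for illustration; real interpolation would use motion estimation rather than a per-pixel average:

```python
def interpolate_frames(frames):
    """Insert one linearly blended frame between each pair of consecutive
    frames, roughly doubling the frame rate of the sequence."""
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        # Inserted frame: per-pixel average of its two neighbors.
        out.append([(p + n) / 2 for p, n in zip(prev, nxt)])
    out.append(frames[-1])
    return out
```

A 24-frame sequence fed through this helper yields 47 frames, which is the sense in which insertion "improves the frame rate".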
Video super-resolution technology (super-resolution for short): a technology that improves the resolution of a low-resolution video by processing it, so that the video becomes clearer.
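Real video super-resolution relies on learned models; as a toy stand-in that only shows the resolution being multiplied, a hypothetical nearest-neighbor upscaler can be sketched as follows (`upscale_nearest` and the 2-D-list frame representation are assumptions, not the patent's method):

```python
def upscale_nearest(frame, factor):
    """Nearest-neighbor upscale of a frame given as a 2-D list of pixel
    values: each pixel is repeated `factor` times horizontally and each
    row `factor` times vertically."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out
```

A 960x540 frame upscaled with factor 2 would come out at 1920x1080; a learned model would additionally sharpen detail instead of merely duplicating pixels.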
The video processing method provided by the application can be applied to a scenario in which an electronic device provides an online media service (such as a video service) for a user. In the following, taking the electronic device being a mobile phone that provides a video service as an example, the video processing method provided in the embodiment of the present application is described. The processing manner in which the mobile phone provides an online media service other than the video service can refer to the processing manner of providing the video service, and is not described in detail.
In the process of providing the video service, the mobile phone can acquire video frame information and monitor the video quality in real time according to the video frame information. Specifically, the mobile phone can monitor in real time whether the video quality is abnormal. Illustratively, an abnormality in video quality may specifically include the video stuttering and/or becoming blurred. When detecting that the video quality is abnormal, the mobile phone can analyze the cause of the abnormality. In some embodiments, the mobile phone may obtain parameter values of video-quality-influencing parameters and, in combination with the parameter values, analyze the causes that may lead to the abnormal video quality. In this way, the video quality of the video service provided by the mobile phone can be monitored in real time, and when a video quality abnormality is detected, the cause of the abnormality is analyzed so that targeted optimization can be made.
In some embodiments, the video services provided by the mobile phone may include playing video, video calls, playing live pictures, and so on.
Furthermore, after analyzing and determining the causes that may lead to the video stuttering and/or becoming blurred, the mobile phone may perform optimization specifically for those causes. For example, the handset may implement a first optimization strategy to improve video quality.
In addition, after detecting that the video quality is abnormal, the mobile phone can report a video quality abnormal event to the server and report related information of the video quality abnormality to the server. Therefore, the server can be helped to count a plurality of video quality abnormal events and related information thereof, and more reasons for the abnormal video quality can be conveniently analyzed.
In some embodiments, the steps of the video processing method, namely monitoring the video quality in real time, analyzing the cause when an abnormality occurs in the video quality, executing a corresponding optimization strategy according to the cause, and reporting a video quality abnormality event to a server, can be implemented by a video QoE service of the mobile phone.
Further, the video QoE service of the handset may obtain video frame information from the media codec service. In combination with the above description, the media codec service may be used to provide an encoding service and a decoding service for video; when the mobile phone provides the video service, it needs to invoke the media codec service to decode the video data. Specifically, the mobile phone application sequentially transmits the video frames of the video to the media codec service for decoding. The media codec service may transmit the video frame information to the video QoE service, so that the video QoE service can monitor the video quality based on the video frame information.
Next, a detailed description will be given of an implementation procedure of the video processing method according to the embodiment of the present application with reference to the accompanying drawings.
Fig. 2 illustrates a startup procedure of the video QoE service of a handset in some embodiments. The handset may be, for example, the electronic device 100 shown in fig. 1. In some embodiments, after the handset is powered on, the system kernel starts services and loads the service manager (ServiceManager). After the service manager is started successfully, a media metrics service (media.metrics) is started. In some embodiments, the video QoE service is mounted into media.metrics, and the video QoE service is started along with it. After the video QoE service is started successfully, it can register the service in the service manager for the framework layer to call. It should be noted that fig. 2 only shows the functional modules related to the mobile phone starting the video QoE service.
In some embodiments, the video QoE service registers for services with a service manager, which may specifically include: the video QoE service sends a service registration request to the service manager. The service manager may write the video QoE service to a service management list in response to the service registration request. The services in the service management list can be called by the framework layer.
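The register-then-look-up pattern described above can be sketched as follows. The class and method names are hypothetical stand-ins for the service manager maintaining a service management list that the framework layer later queries; the real mechanism would go through Binder IPC rather than a dictionary:

```python
class ServiceManager:
    """Minimal sketch of the service-registration flow: a started service
    registers itself under a name, after which the framework layer can
    look it up and call it."""

    def __init__(self):
        self._services = {}  # the "service management list"

    def register(self, name, service):
        # Respond to a service registration request by writing the
        # service into the management list.
        self._services[name] = service

    def get(self, name):
        # Framework-layer lookup; returns None if never registered.
        return self._services.get(name)

manager = ServiceManager()
manager.register("video_qoe", object())  # video QoE service registering itself
```

Only services present in the list are callable, which mirrors the condition checked in S304 below before the video QoE service is initialized.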
Fig. 3 illustrates a timing diagram of a video processing method in some embodiments. In the embodiment of the application, the video QoE service in fig. 2 interacts with other modules of the electronic equipment to realize the effect of monitoring the video quality in real time. In this embodiment, a scenario in which the electronic device is a mobile phone and the user plays video using a video-type application of the mobile phone will be described as an example.
S301, starting the video application.
In some embodiments, the video class application may be launched in response to a user operation.
In some embodiments, the video class application may specifically include: long video applications and short video applications, etc. The video may be divided into long video and short video according to video duration, and long video applications may be used to provide long video services, and short video applications may provide short video services.
S302, the video class application starts playing the video in response to the operation of the user.
In the embodiment of the application, when the video class application needs to play a video, it needs to call the media codec service. Specifically, when the video class application needs to play a video, a corresponding instance can be created in the media codec service. For a specific implementation of creating an instance in the media codec service, reference may be made to the description in the related art. The instance created by the video class application in the media codec service may be used to provide an application programming interface (application programming interface, API) for the video class application to invoke the media codec.
S303, the video application sends preset media information of the video to the media coding and decoding service.
Correspondingly, the media encoding and decoding service receives preset media information of the video.
The video class application sends preset media information of the video to the media coding and decoding service, and the preset media information can be specifically realized by calling an API interface provided by an instance of the video class application created in the media coding and decoding service.
In some embodiments of the present application, the video sent by the video class application may be referred to as video to be played. Wherein the video to be played is an online video.
In some embodiments, the preset media information may include, but is not limited to: basic information such as video stream path, first preset resolution, preset frame rate, and decoding format. The video stream path is used for the mobile phone to acquire and play the video. The handset may decode the video in a decoding format. Under the condition that no other factors influence, the video to be played is played according to the first preset resolution and the preset frame rate in the playing process.
In other embodiments, the preset media information may further include application identification information of the video class application, the application identification information being used to uniquely identify the application. For example, the application identification information may be a package name of the application.
In other embodiments, the preset media information may also include a rendering mode used by the video class application. When a mobile phone provides video service for a user through video application, two rendering modes exist: the first is self-rendering by the application, and the second is rendering by the system of the handset.
In some embodiments, the media codec service may save the preset media information after receiving the preset media information. That is, after the step S303, the method further includes: the media codec service stores preset media information.
S304, initializing a video QoE service by the media coding and decoding service.
In the embodiment of the application, after the video application successfully invokes the media codec service, the video QoE service can be initialized through the media codec service. Specifically, the video class application may create a corresponding instance in the video QoE service through the media codec service. The specific implementation procedure of creating an instance in a video QoE service through a media codec service may be referred to the description in the related art. In this way, in the process of playing the video by the video application, the video quality of the video can be monitored in real time through the video QoE service.
As can be seen from the description of fig. 2, the video QoE service can register the service in the service manager after the mobile phone is powered on. Thus, in some embodiments, the video class application initializes the video QoE service through the media codec service, which may be first looked up in the service management list of the service manager. Under the condition that the video QoE service is found in the service management list of the service manager, the video QoE service can be called and initialized through the service manager.
In some embodiments, when the media codec service initializes the video QoE service, the acquired partial preset media information (which may be denoted as first preset media information) may be sent to the video QoE service. The first preset media information may be used for the video QoE service to monitor video quality in real time based on the first preset media information. For example, the media codec service may send information such as the first preset resolution and the preset frame rate to the video QoE service. In this way, the video QoE service monitors the video quality of the video in real time according to the information such as the first preset resolution and the preset frame rate.
In other embodiments, the media codec service may further send the application identification information to the video QoE service in the first preset media information. In this way, the video QoE service is facilitated to know which application is being monitored for video quality.
In other embodiments, the media codec service may also send the rendering mode to the video QoE service in the first preset media information. Therefore, when the video QoE service monitors that the video quality is abnormal, the video QoE service can execute the optimization strategy in a targeted manner by combining the rendering mode.
Further, in some embodiments, after the video QoE service receives the first preset media information sent from the media codec service, the first preset media information may be saved.
In some embodiments, the media codec service may send a notification message to the video class application to notify the video class application to begin sending video frames.
S305, the video class application starts to send video frames to the media codec service.
Accordingly, the media codec service receives video frames sent by the video class application.
In some embodiments, after the video class application starts playing the video, it sends video frames to the media codec service in sequence, so that the video data is decoded by the media codec service. Further, in some embodiments in which rendering is performed by the mobile phone system, after the video class application sends video frames to the media codec service and the media codec service decodes the video data, the decoded video data may further be rendered by the media codec service.
The video frames may carry video frame information. A video segment typically comprises a number of frames of video images. One video frame is a frame of video image. In order to be able to play the video continuously in the normal order, a time stamp may be set for each frame data in the video in chronological order. In some embodiments, the video frame information for each video frame includes time information, such as a timestamp; the time stamp is used to indicate the position (time order) of the video frame in the video. Therefore, when the video is played, the sequence of the video images of each frame is ensured to be accurate, and the mobile phone can accurately play the video.
Each frame of video image in the video has a data amount. In other embodiments, the video frame information of each video frame includes the data amount of the video frame. The data amount of a video frame may be used to identify the size of the video frame and to determine the resolution of the video frame. The data amount of the video frame may be determined according to the size of the buffer used to store the video frame. In some embodiments, the data amount of the video frame is the size of the buffer (buffer size) used to store the video frame. The data amount of the video frame can be determined by the media codec service according to the size of the buffer storing the video frame, and the media codec service sends the data amount of the video frame to the video QoE service.
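As a worked example of why buffer size can indicate resolution (assuming, hypothetically, an NV12 / YUV 4:2:0 buffer layout, which the patent does not specify), the data amount of a frame follows directly from its width and height:

```python
def nv12_buffer_size(width, height):
    """Assumed NV12 (YUV 4:2:0) layout: one luma byte per pixel plus half
    as many bytes for the interleaved chroma planes, so the buffer size
    is width * height * 3 / 2 bytes."""
    return width * height * 3 // 2

# A 1920x1080 frame occupies about 3.1 MB in this layout; a drop to
# 1280x720 would be visible as a drop in the per-frame data amount.
size_1080p = nv12_buffer_size(1920, 1080)
size_720p = nv12_buffer_size(1280, 720)
```

Reading the relation backwards is what lets a monitor infer a resolution change (e.g., blurring after a downswitch) from the buffer sizes it observes.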
In other embodiments, the video frame information for each video frame may include both a time stamp of the video frame and a data amount of the video frame.
In other embodiments, the video frame information may also include other information such as the size of the video frame.
S306, the video application starts a decoding thread and a video frame rendering thread.
In the embodiment of the application, the decoding thread can be used for decoding the video data. The video frame rendering thread may be used to render decoded video frames.
The specific implementation process of the video application starting decoding thread and video frame rendering thread can refer to the description in the related technology, and will not be repeated in the embodiment of the present application.
After the decoding thread and the video frame rendering thread are started, the video application can decode the video data through the decoding thread in the media encoding and decoding service, and render the decoded video data through the video frame rendering thread, so that the video is displayed.
In some embodiments, the rendering mode adopted by the video class application is application self-rendering, and then in the process that the video class application provides video service, the data flow direction between the video class application and the media coding and decoding service comprises: the video class application sends video frames to the media codec service. The media coding and decoding service decodes the video frames through the decoding thread to obtain decoded video frames. The media codec service then returns the decoded video frames to the video class application. The video class application renders the decoded video frames after receiving the decoded video frames.
In other embodiments, the video class application adopts a rendering mode that the system of the mobile phone performs rendering, and then in the process that the video class application provides video services, the data flow direction between the video class application and the media coding and decoding services comprises: the video class application sends video frames to the media codec service. The media coding and decoding service decodes the video frames through the decoding thread to obtain decoded video frames. And then, the media coding and decoding service renders the decoded video frames through a video frame rendering thread to obtain rendered video frames. And finally, the media coding and decoding service returns the rendered video frames to the video application.
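The two data flows above can be contrasted in a minimal sketch; `decode`, `render`, and `play` are hypothetical stand-ins for the decoding thread, the video frame rendering thread, and the codec-service entry point, chosen only to show where the rendering step happens in each mode:

```python
def decode(frame):
    # Stand-in for the media codec service's decoding thread.
    return ("decoded", frame)

def render(frame):
    # Stand-in for a video frame rendering thread.
    return ("rendered", frame)

def play(frame, system_rendering):
    """App self-rendering: the codec service only decodes and returns the
    frame, and the application renders it itself. System rendering: the
    codec service both decodes and renders before returning."""
    decoded = decode(frame)
    return render(decoded) if system_rendering else decoded
```

Which branch runs is fixed by the rendering mode carried in the preset media information (S303), which is also why the video QoE service is told the rendering mode: the optimization it can apply differs between the two flows.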
S307, the media coding and decoding service sends video frame information of the video frames to the video QoE service.
Accordingly, the video QoE service receives video frame information of video frames sent by the media codec service.
In the embodiment of the application, communication between the media codec service and the video QoE service can be realized through an inter-process communication (binder) mechanism.
S308, the video QoE service judges whether the video quality is abnormal or not according to the video frame information.
In some embodiments, a video quality abnormality of the video may include video stutter. Video stutter may refer to: the interval between video frames is too long. Therefore, whether the video stutters can be judged according to the time interval between video frames.
In some embodiments, the video frame information may include a timestamp of the video frame, and in S308 the video QoE service judging, according to the video frame information, whether video stutter occurs may specifically include: the video QoE service determines the time interval between two adjacent frames of video images from their timestamps. Then, the video QoE service judges, according to the time interval, whether the video stutters. For example, the video QoE service may obtain a first timestamp of a first video frame and a second timestamp of a second video frame, and then calculate the time interval between the first video frame and the second video frame from the first timestamp and the second timestamp. The first video frame may be any video frame of the video, such as the video frame corresponding to the first time. The second video frame is the previous video frame of the first video frame. It should be noted that if the video is judged to stutter according to the time interval between the adjacent first video frame and second video frame, it is specifically the case that the video class application stutters when playing the first video frame.
Optionally, the video QoE service determines whether the video is stuck according to the time interval, which may specifically include: and judging whether the time interval is larger than a preset time. If the time interval is greater than the preset time, the video is indicated to be blocked. If the time interval is less than or equal to the preset time, the video is indicated not to be blocked.
In some embodiments, the preset time may be a fixed value set in advance, which may be noted as a first preset time. By way of example, the first preset time may be set to 41.6 milliseconds (ms) (≈1/24 second), the duration of one frame at the 24 frames-per-second rate commonly used for movies.
In still other embodiments, the first preset time may be set according to a preset frame time. For example, the first preset time may be the preset frame time itself, or a time value obtained by weighting the preset frame time; that is, the first preset time may also be set to a × preset frame time, where a is a weighting value. Optionally, a is a positive number greater than 1, e.g., a is set to 2.
The preset frame time represents the preset time required for playing one frame of video image, and the preset frame rate represents the number of video frames played per second; the preset frame time can therefore be determined from the preset frame rate. For example, assuming the preset frame rate of the video is 60 frames per second, the time required for playing one frame of video image is about 16.7 ms (i.e., the preset frame time). The first preset time may then be 16.7 ms, or 16.7 ms × 2 = 33.4 ms.
Based on this embodiment, whether the video stutters can also be judged by means of the preset frame time.
It will be appreciated that in other embodiments, the first preset time may be set to other values.
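The fixed-threshold check above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name, millisecond units, and default values (16.7 ms preset frame time, a = 2) are assumptions:

```python
def is_stutter_fixed(prev_ts_ms: float, curr_ts_ms: float,
                     preset_frame_time_ms: float = 16.7,
                     a: float = 2.0) -> bool:
    """Judge stutter by comparing the interval between two adjacent
    video frames against the first preset time (a x preset frame time)."""
    first_preset_time_ms = a * preset_frame_time_ms  # e.g. 33.4 ms at 60 fps, a = 2
    return (curr_ts_ms - prev_ts_ms) > first_preset_time_ms
```

With the defaults above, an interval of 50 ms would be judged as stutter, while an interval of 16.7 ms would not.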
The above embodiments describe the case where the preset time is set to the first preset time; taking the example where the video service plays one video, the first preset time may be fixed throughout the same video played by the mobile phone. In other embodiments, the preset time may be set dynamically. For example, the preset time may be set according to an average of the time intervals between multiple groups of adjacent video frames (which may be noted as a second preset time).
The time intervals between adjacent video frames may include: the time intervals between any plurality of groups of adjacent video frames in the video to be played. Alternatively, they may include: the time intervals between multiple groups of adjacent video frames before the current time, where the current time is the time at which stutter is to be determined. Taking the current time t as an example, the time intervals between the multiple groups of adjacent video frames may include: the time interval between two adjacent video frames at time (t-1), the time interval between two adjacent video frames at time (t-2), …, and the time interval between two adjacent video frames at time (t-n).
The time interval between two adjacent video frames at a moment when the video stutters is generally greater than the time interval between two adjacent video frames at a moment when the video does not stutter.
For example, in determining whether the current frame of the video stutters, the second preset time may be set using the time intervals of the 3 groups of adjacent video frames preceding the current frame. Fig. 4 shows consecutive video frames of a video, with Tn denoting the time interval between two adjacent frames. Taking 3 groups of adjacent frames as an example, the second preset time may be set to b × (T1+T2+T3)/3, where b is a positive number greater than 1. That is, in judging whether frame 5 stutters, it can be judged whether the following condition is satisfied: T4 > b × (T1+T2+T3)/3. If T4 > b × (T1+T2+T3)/3, frame 5 stutters.
Further, the video QoE service may analyze the cause of the stutter after determining that the video stutters. Fig. 5 shows a specific flow of the video QoE service determining whether the video stutters according to the timestamps of the video frames. In this embodiment, taking b = 2 as an example, the video QoE service receives the timestamp of each video frame sent by the media codec service. Then, the video QoE service judges whether T4 > 2 × (T1+T2+T3)/3, thereby determining whether the video stutters. If T4 > 2 × (T1+T2+T3)/3, the video stutters, and the video QoE service may analyze the cause of the stutter. If T4 ≤ 2 × (T1+T2+T3)/3, the video does not stutter; at this time, the video QoE service may clear the accumulated video frame timestamps and re-acquire the timestamps of new video frames to continue the stutter determination.
In embodiments where the second preset time is determined from the average interval of the 3 frames preceding the current frame, the video QoE service only begins judging whether the video stutters upon receiving the video frame information of frame 5. Before the video frame information of frame 5 is received, the mobile phone may not determine whether the already-received video frames (frame 1, frame 2, frame 3, and frame 4) stutter. In other embodiments, the video QoE service judges whether each video frame stutters according to a second preset time determined from the 3 video frames preceding that frame.
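The sliding-window judgment of Fig. 5 can be sketched as follows. This is a hypothetical Python sketch, not the patent's implementation; the names, the 3-interval window, and the default b = 2 mirror the example above:

```python
def is_stutter_dynamic(prev_intervals_ms, current_interval_ms, b=2.0):
    """Judge whether the current interval (e.g. T4) exceeds the second
    preset time b * (T1 + T2 + T3) / 3 built from preceding intervals."""
    if len(prev_intervals_ms) < 3:
        # Not enough history yet (frames 1-4 in the description above).
        return False
    window = prev_intervals_ms[-3:]  # the 3 most recent intervals
    second_preset_time_ms = b * sum(window) / len(window)
    return current_interval_ms > second_preset_time_ms
```

For intervals of roughly 16–17 ms, the threshold is about 33 ms, so a 100 ms gap would be judged as stutter.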
In other embodiments, when judging whether the video stutters according to the time interval between two adjacent video frames, two or more conditions may be combined. Illustratively, when determining whether the video stutters, the video QoE service may judge whether the time interval between two adjacent video frames is greater than the first preset time, or whether it is greater than the second preset time. In this embodiment, if the time interval is greater than the first preset time, or greater than the second preset time, it may be determined that the video stutters. Conversely, if the time interval is less than or equal to both the first preset time and the second preset time, it may be determined that no stutter occurs. In this way, the possibility of misjudgment can be reduced and the accuracy of stutter detection improved.
In addition, when the video stutters, the frame rate of the video is generally low. In some embodiments, the video QoE service may also determine whether the video stutters by acquiring the actual frame rate of the current video. This may be implemented as follows: the video QoE service counts the number of frames received within a preset duration, and calculates the actual frame rate of the current video from that frame count and the preset duration. The video QoE service may determine the number of frames received from the video frame information received from the media codec service; one piece of video frame information corresponds to one video frame. Thus, the actual frame rate of the current video = the number of frames received within the preset duration / the preset duration. For example, the preset duration may be set to 1 second; that is, the video QoE service may count the number of frames received in 1 second and calculate the actual frame rate of the video from that count.
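The actual-frame-rate formula above can be sketched as follows (illustrative Python; the 1-second counting window and millisecond units are assumptions):

```python
def actual_frame_rate(frame_timestamps_ms, window_ms=1000.0):
    """Actual frame rate = frames received within the preset duration
    / the preset duration (expressed in seconds)."""
    if not frame_timestamps_ms:
        return 0.0
    end = frame_timestamps_ms[-1]
    # Count the frames whose timestamps fall inside the most recent window.
    received = [t for t in frame_timestamps_ms if end - t < window_ms]
    return len(received) / (window_ms / 1000.0)
```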
In other embodiments, the video quality abnormality may specifically include the video appearing blurred, or the image resolution failing to meet a preset resolution requirement (e.g., the resolution being low). The preset resolutions may include a first preset resolution and a second preset resolution.
The resolution of an image determines the degree of refinement of its details. In general, the higher the resolution of an image, the more pixels it includes and the sharper it appears. Video resolution refers to the precision of a video image within a unit size. Common video resolutions include 540P (progressive), 720P, 1080P, etc. The data size of one frame of video image differs across resolutions; accordingly, the video resolution can also be inferred from the data size of one frame of video image. As described in the above embodiments, the video frame information may also include the data amount of the video frame, so whether a video frame has a blurring problem can be determined from the data amount of the video frame.
In some embodiments, the video QoE service may obtain the first preset resolution of the video to be played from the media codec service. Further, in S308, the video QoE service may determine the actual resolution corresponding to a video frame from the data size of the video frame in the video frame information. Then, the actual resolution of the video frame is compared with the first preset resolution to determine whether the video frame reaches the first preset resolution. If the video frame does not reach the first preset resolution, it indicates that the video frame has a blurring problem. If the video frame reaches the first preset resolution, it indicates that the video frame has no blurring problem.
In other embodiments, the video QoE service may also determine the actual resolution corresponding to the video frame according to the data amount of the video frame in the video frame information, and then compare the actual resolution with a second preset resolution to determine whether the video frame is blurred. The second preset resolution may be set to a fixed value according to the actual situation, for example 540P.
In some embodiments, the video QoE service may determine that the video is blurred when a blur problem is detected for a certain frame of video image. In other embodiments, the video QoE service may also determine that the video is blurred when a blur problem is detected for multiple frames of video images. For example, the video QoE service determines that a video is blurred when it detects that a blur problem occurs in successive multi-frame video images.
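A minimal sketch of inferring the actual resolution from a frame's data amount and comparing it with a preset resolution. The byte thresholds below are invented placeholders (real values depend on the codec and bitrate); the sketch only illustrates the comparison logic described above:

```python
# Hypothetical mapping from encoded-frame size to a coarse resolution class.
RESOLUTION_TIERS = [(1080, 120_000), (720, 60_000), (540, 30_000)]

def actual_resolution(frame_bytes: int) -> int:
    """Map the data amount of one video frame to a resolution class."""
    for resolution_p, min_bytes in RESOLUTION_TIERS:
        if frame_bytes >= min_bytes:
            return resolution_p
    return 0  # below the lowest tier

def is_blurred(frame_bytes: int, preset_resolution_p: int = 540) -> bool:
    """A frame is considered blurred if its inferred resolution does not
    reach the preset resolution (e.g. the second preset resolution, 540P)."""
    return actual_resolution(frame_bytes) < preset_resolution_p
```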
The above embodiments respectively describe the determination modes in which the video quality abnormality manifests as video stutter and as video blur. In a practical application scenario, a video may exhibit both stutter and blur. Thus, in other embodiments, video quality anomalies may include both video stutter and video blur. In this embodiment, in S308, when the video QoE service determines from the video frame information that the video stutters or is blurred, it may determine that the video quality is abnormal. For the specific implementation of the video QoE service determining whether the video quality is abnormal from the video frame information, reference may be made to the above descriptions of determining video stutter and video blur, which are not repeated here.
In addition, when an abnormality occurs in video quality, it may last for a certain period of time. In some embodiments, the video QoE service may detect every video quality anomaly event that occurs in the video. Alternatively, in other embodiments, the video QoE service may refrain from detecting whether the video quality is abnormal for a short time after an abnormality is detected. In this way, the computation on the mobile phone can be reduced, lowering its power consumption.
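The short-time suppression of repeated detection described above could look like the following hypothetical sketch; the 5-second cooldown and the class shape are assumptions, not values from the patent:

```python
class AnomalyDetectionGate:
    """Skip anomaly detection for a cooldown window after an anomaly,
    reducing computation and power consumption on the device."""

    def __init__(self, cooldown_ms: float = 5000.0):
        self.cooldown_ms = cooldown_ms
        self.last_anomaly_ms = None

    def should_check(self, now_ms: float) -> bool:
        """Detection is allowed once the cooldown has elapsed."""
        if self.last_anomaly_ms is None:
            return True
        return now_ms - self.last_anomaly_ms >= self.cooldown_ms

    def record_anomaly(self, now_ms: float) -> None:
        self.last_anomaly_ms = now_ms
```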
After the video quality is abnormal, the video quality can be automatically recovered to be normal after a period of time. Therefore, the video QoE service detects that the video quality is abnormal, which may specifically indicate that the video quality of the video is abnormal at that time. In some embodiments, the step S308 may specifically include: and the video QoE service judges whether the video quality of the video at the first moment is abnormal or not according to the video frame information corresponding to the first moment. The first moment may be any moment of the video.
If the judgment result of S308 is no, no abnormality occurs in the video quality, and the video QoE service may perform no operation. This branch is not shown in the flow of fig. 3.
If the judgment result of S308 is yes, it indicates that an abnormality occurs in the video quality. At this time, the video QoE service may perform S309.
It should be noted that, regardless of whether the video quality is abnormal, the media codec service continues to decode the video through the decoding thread and render the video through the video frame rendering thread.
S309, the video QoE service acquires parameter values of all video quality influence parameters.
In some embodiments, the video quality impact parameters may include network quality parameters, temperature of the handset, and load information of the handset processor, among others.
Wherein the network quality parameter is used to indicate the network quality. In some embodiments, the network quality parameters may include network type, network transmission speed, and the like.
In some embodiments, the video QoE service may obtain parameter values for each video quality impact parameter at the moment an anomaly in video quality is detected.
In some embodiments, the video QoE services include a network detection sub-service, a temperature detection sub-service, and a load information detection sub-service. In S309 above, the video QoE service may acquire the network quality parameter through the network detection sub-service; acquiring the temperature of the mobile phone through a temperature detection sub-service; and acquiring the load information of the current processor of the mobile phone through the load information detection sub-service.
S310, the video QoE service analyzes the abnormal reasons of the abnormal video quality based on the parameter values of all video quality influence parameters.
As can be seen from the description of the above embodiments, in some embodiments, the video quality influencing parameter may include at least one of a network transmission speed, a temperature of the mobile phone, and load information of a processor of the mobile phone.
In some embodiments, if the network quality parameter among the video quality influencing parameters indicates that the network transmission speed of the current mobile phone is low, the video quality abnormality of the current mobile phone may be caused by the low network transmission speed. That is, the causes of the video quality abnormality include the low network transmission speed of the mobile phone. In some embodiments, the network transmission speed may be determined to be low when it is lower than a preset speed; the specific value of the preset speed can be set according to the actual situation.
In other embodiments, if the temperature among the video quality influencing parameters indicates that the current mobile phone temperature is high, this suggests that the frequency of the current mobile phone processor is high, and the video quality abnormality of the mobile phone may be caused by the high processor frequency. For example, when the temperature is higher than a preset temperature, it may be determined that the current temperature of the mobile phone is high; the preset temperature can be set according to actual conditions.
In other embodiments, if it is determined from the load information of the processor among the video quality influencing parameters that the load of the current mobile phone processor is high, the video quality abnormality of the current mobile phone may be caused by the high processor load. When the load of the mobile phone processor is higher than a preset load threshold, it may be determined that the load of the current mobile phone processor is high; the preset load threshold may be set according to actual conditions.
It will be appreciated that a video quality anomaly at a given moment may be caused by multiple factors. Thus, in some embodiments, when the video QoE service detects that the parameter values of two or more video quality influencing parameters fail to satisfy their corresponding conditions, it may determine that the causes of the video quality anomaly include two or more factors. For example, when the video QoE service detects that the network transmission speed of the mobile phone is low and the load of the mobile phone processor is high, it may determine that the causes of the abnormality include both the low network transmission speed of the mobile phone and the high load of the mobile phone processor.
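The multi-factor analysis of S310 can be sketched as follows. The thresholds (5 Mbps, 45 °C, 80 % load) and the cause labels are illustrative assumptions, not values from the patent:

```python
def analyze_causes(net_speed_mbps, temperature_c, cpu_load_pct,
                   preset_speed=5.0, preset_temp=45.0, preset_load=80.0):
    """Collect every cause whose influencing parameter fails its condition;
    two or more causes may apply at the same moment."""
    causes = []
    if net_speed_mbps < preset_speed:
        causes.append("low_network_speed")
    if temperature_c > preset_temp:
        causes.append("high_temperature")
    if cpu_load_pct > preset_load:
        causes.append("high_cpu_load")
    return causes
```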
In other embodiments, the video quality impact parameters may also include other parameters.
In the technical scheme provided by the embodiment of the application, when the mobile phone starts to provide video service (such as playing video) for the user, the video QoE service is initialized. The video QoE service can monitor whether the video quality is abnormal or not in real time in the process of providing the video service by the mobile phone. And after the video QoE service detects that the video quality is abnormal, the parameter values of the video quality influence parameters can be obtained, and the reasons of the abnormal video quality are analyzed by combining the parameter values of the video quality influence parameters. The method is convenient for the targeted optimization processing to improve the video quality.
Further, after the video QoE service analysis determines the reason for the video quality abnormality in S310, a corresponding optimization strategy may be executed to improve the video quality in combination with the reason. With continued reference to fig. 3, after S310, the method may further include:
s311, the video QoE service executes an optimization strategy corresponding to the abnormal reason.
In some embodiments, the video QoE service performs an optimization policy corresponding to the cause of the anomaly, which may be denoted as a first optimization policy. Different reasons for the anomaly may correspond to different first optimization strategies.
In some embodiments, the video QoE service may determine that the video anomaly is caused by low network quality by analyzing that the network quality parameter indicates the network quality does not meet a preset network quality condition. Specifically, the network quality parameter may include the network transmission speed; when the network transmission speed is lower than the preset speed, it can be determined that the network quality parameter indicates that the network quality does not meet the preset network quality condition. In some embodiments, if the video QoE service determines through analysis that the causes of the anomaly include a low network transmission speed of the current mobile phone, the network acceleration service may be started to improve the network transmission speed, thereby avoiding video quality anomalies caused by a low network transmission speed. By way of example, the network acceleration service may specifically include switching the network type, network coordination and aggregation, and the like.
In other embodiments, if the video QoE service analysis determines that the cause of the anomaly includes a higher temperature for the current handset, the frequency of the handset processor may be limited. And further, the problem of abnormal video quality caused by overhigh frequency of a mobile phone processor is avoided. In some embodiments, when the temperature of the mobile phone is greater than the preset temperature, it may be determined that the temperature of the mobile phone is higher.
In other embodiments, if the video QoE service analysis determines that the cause of the anomaly includes a high load on the current handset processor, the processor may be scheduled at the system level. The video QoE service first obtains the resource occupation situation of the processor, and then schedules the applications occupying more processor resources to the low-load core according to the resource occupation situation. And further, the problem of abnormal video quality caused by too high load of a mobile phone processor can be avoided. In other embodiments, the video QoE service may also control applications that occupy more processor resources to go to sleep.
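The mapping from abnormality causes to first optimization strategies described in the preceding paragraphs could be dispatched as follows; this is a hypothetical sketch, and the strategy names are illustrative labels rather than real service identifiers:

```python
# Each diagnosed cause maps to a corresponding first optimization strategy.
FIRST_OPTIMIZATIONS = {
    "low_network_speed": "start_network_acceleration_service",
    "high_temperature": "limit_processor_frequency",
    "high_cpu_load": "schedule_heavy_apps_to_low_load_cores",
}

def select_optimizations(causes):
    """Return the optimization strategies for every diagnosed cause."""
    return [FIRST_OPTIMIZATIONS[c] for c in causes if c in FIRST_OPTIMIZATIONS]
```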
As can be seen from the above description of the embodiments, in some embodiments, the video QoE service further includes a rendering mode when receiving the preset media information sent by the media codec service. In some embodiments, the mobile phone provides the video service for the user through the video application, and the system of the mobile phone renders the video service, so that after detecting that the video quality is abnormal, the video QoE service can further perform video image quality enhancement processing on the video in the rendering process, thereby improving the video quality. In some embodiments, the mobile phone performs video image quality enhancement processing on the video in the rendering process, which may also be called the mobile phone executing the second optimization strategy.
When the video application adopts the application self-rendering mode, the video application does not need to transmit interface information (surface) to the media codec service. When the video application adopts the mobile phone system rendering mode, the video application needs to transmit the surface to the media codec service. In some embodiments, the video QoE service may obtain from the media codec service information on whether the video class application passed a surface. In this embodiment, the video QoE service may thereby determine the rendering mode adopted by the video class application. It will be appreciated that if the video class application delivers a surface to the media codec service, the video is rendered by the mobile phone system.
In some embodiments, the video QoE service performing video image quality enhancement processing in the rendering process may specifically include: in the video rendering process, performing frame insertion, super-resolution, or sharpening processing on the video images, and the like. In some embodiments, the video quality enhancement processing may optimize the color and contrast of the video, making the picture more vivid.
Further, the video QoE service may determine how to perform the video quality enhancement processing in combination with the specific manifestation of the video quality anomaly. In some embodiments, if the video quality abnormality specifically manifests as video stutter, the video QoE service may perform video quality enhancement processing by means of frame insertion, so as to mitigate the stutter and improve the video quality.
In other embodiments, if the video quality anomaly manifests as video blur, the video QoE service may perform super-resolution processing on the video to improve the blur problem and improve the video quality.
The video QoE service, upon detecting an anomaly in video quality, may begin executing an optimization strategy to improve the video quality thereafter. In some embodiments, in a case where the video QoE service determines that the video is abnormal in video quality at the first time according to the video frame information corresponding to the first time, in S311, the video QoE service may specifically start executing the optimization policy after the first time.
As can be seen from the description of the above embodiments, some of the optimization strategies performed by the mobile phone require continuous operation. For example, when performing enhancement processing on the video images (i.e., the second optimization strategy described above), the mobile phone is required to continuously perform operations such as frame insertion and/or super-resolution.
In some embodiments, the mobile phone executes the optimization strategy continuously during the video playing process after the first time. Therefore, the problem that the video is abnormal again due to the same factors can be avoided, and the video quality provided by the mobile phone for the user is ensured.
In some scenarios, a video quality anomaly may be caused by the mobile phone being in a location with a poor network signal, such as an elevator. Typically, after the mobile phone leaves the elevator, the network signal returns to normal and the video quality may recover on its own. At that point, the mobile phone no longer needs to execute the optimization strategy to ensure video quality. Thus, in some embodiments, the video QoE service may execute the optimization strategy only for a period of time after detecting that the video quality at the first moment is abnormal. For example, S311 may specifically include: the video QoE service executes the optimization strategy corresponding to the abnormality cause within a third preset time. For example, the video QoE service performs the enhancement processing operation on the video images within the third preset time; after the third preset time ends, the mobile phone stops the video image quality enhancement processing performed through the video QoE service. In this way, unnecessary power consumption can be reduced.
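The time-limited execution described above can be sketched as follows (illustrative; the 30-second third preset time is an assumption):

```python
def optimization_active(anomaly_time_ms: float, now_ms: float,
                        third_preset_time_ms: float = 30_000.0) -> bool:
    """The optimization strategy runs only within the third preset time
    after the anomaly is detected, then stops to save power."""
    return 0.0 <= (now_ms - anomaly_time_ms) < third_preset_time_ms
```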
In some embodiments, in S308, when the video QoE service judges whether the video quality is abnormal according to the video frame information, only whether the video stutters is judged. In this embodiment, before S311 described above, the video QoE service may detect whether the video has a blur problem based on the video frame information. In addition, when the video QoE service determines that the video also has a blur problem and the video class application adopts the mobile phone system rendering mode, the video QoE service may perform super-resolution processing on the video. Thus, when the video QoE service detects that the video stutters, the stutter and blur problems can be optimized simultaneously. It will be appreciated that in this embodiment, for a specific implementation of the video QoE service detecting whether the video has a blur problem according to the video frame information, reference may be made to the description in the embodiment of S308.
In some embodiments, the video QoE service comprises an optimization sub-service. The above S311 may be specifically performed by an optimization sub-service in the video QoE service.
In addition, when the video QoE service executes the optimization strategy, it may obtain application information of the video class application whose video quality is abnormal. In some embodiments, the application information of the video class application may include identification information of the application (such as the package name of the application) and the rendering mode of the video class application. Fig. 6 illustrates the information acquired by the video QoE service from the media codec service in order to analyze the cause of the video quality anomaly. In the embodiment of the application, the video QoE service obtains the application package name, the rendering mode, the preset frame rate, the first preset resolution, and the video frame information from the media codec service. The video QoE service determines the actual frame rate and the actual resolution of the video according to the video frame information, so that it can determine whether the actual frame rate has changed compared with the preset frame rate, whether the actual resolution has changed compared with the first preset resolution, and whether the video is blurred. Then, the video QoE service can select the corresponding optimization strategy by combining the frame rate change, the resolution change, and whether the video is blurred.
In the technical scheme provided by the embodiment of the application, after the video QoE service determines that the video quality is abnormal, the parameter values of all video quality influence parameters are acquired, and the possible reasons for causing the video quality to be abnormal are analyzed. And, after analyzing a cause that may cause an abnormality in video quality, a corresponding optimization strategy may be performed for the cause, thereby improving video quality.
In order to better collect data and help analyze the reason of the abnormal video quality, the mobile phone can report the abnormal video quality event to the server after detecting the abnormal video quality. Meanwhile, the video QoE service can also acquire various relevant data of the mobile phone at the moment when the video quality is abnormal, and report the relevant data to the server. The server can collect a large amount of relevant data when the video quality of the mobile phone is abnormal, and analyze the reason for causing the video quality abnormality according to the relevant data. Therefore, optimization is convenient to be carried out from the directions of mobile phones, mobile phone applications and the like, and the possibility of abnormal video quality is further reduced. In some embodiments, as shown in fig. 7, the video QoE service may further include the following steps after detecting that the video quality is abnormal:
S401, the video QoE service acquires preset information of the mobile phone.
In some embodiments, the step S401 may specifically include: the video QoE service acquires preset information of the mobile phone from the media coding and decoding service.
S402, the video QoE service reports a video quality abnormal event to the server, the video quality abnormal event carrying the preset information. Correspondingly, the server receives the video quality abnormal event reported by the video QoE service.
In some embodiments, the preset information may include: application identification information, coding and decoding resource occupation information of the mobile phone, and the like.
The application identification information corresponds to an application whose video quality is abnormal. In the embodiments shown in fig. 3 and 7, the application identification information specifically refers to identification information of the video class application. For example, the application identification information may be a package name of the application. As can be seen from the description of the above embodiments, in some embodiments, the preset media information acquired by the video QoE service from the media codec service may include identification information of the application.
Applications running at different times by the mobile phone and the occupation condition of each application on the coding and decoding resources may be different. Therefore, in order to better analyze the reason for the abnormal video quality, in some embodiments, the video QoE service in S401 may obtain the codec resource occupation information of the mobile phone when the abnormal video quality occurs.
In some embodiments, the codec resource occupancy information of the mobile phone may include: each application requesting and occupying codec resources, application priority, and whether each application requires real-time codec.
The codec resources of the mobile phone are usually limited, and when multiple applications simultaneously request the codec resources, application priorities can be set for the different applications. The application priority indicates the order of precedence with which each application may use the mobile phone's codec resources: the higher an application's priority, the earlier it can use them. An application occupying the codec resources is one currently using them. In some embodiments, the applications requesting and occupying codec resources may be characterized by the identification information of each application.
When applications request to use the codec resources, the services they implement with those resources may be divided into two categories: those that require real-time codec and those that do not. In some embodiments, applications requiring real-time codec generally have higher timeliness requirements for the services implemented using the codec resources.
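The codec resource occupancy information described above (applications requesting and occupying resources, priority, real-time requirement) can be pictured with a small sketch. This is purely illustrative — the class and field names are assumptions, not part of the disclosed implementation:

```python
from dataclasses import dataclass


@dataclass
class CodecResourceInfo:
    """One application's entry in the codec resource occupancy snapshot."""
    app_id: str           # identification information, e.g. the package name
    priority: int         # higher value -> earlier access to codec resources
    needs_realtime: bool  # whether the app requires real-time encode/decode
    occupying: bool       # True if the app currently holds codec resources


def snapshot_summary(apps):
    """Split a snapshot into applications occupying vs. merely requesting."""
    occupying = [a.app_id for a in apps if a.occupying]
    requesting = [a.app_id for a in apps if not a.occupying]
    return {"occupying": occupying, "requesting": requesting}
```

Such a snapshot, taken at the moment the quality anomaly occurs, is what would accompany the reported event.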
In other embodiments, the preset information may further include media information of the video. By way of example, the media information of the video may include the following: the first preset resolution, the preset frame rate, the actual resolution and the actual frame rate of the video when the video quality is abnormal, and whether the video is blurred. The first preset resolution and the preset frame rate may be obtained by the video QoE service from the preset media information received from the media codec service. The actual resolution, the actual frame rate, and whether the video is blurred when the video quality is abnormal can be determined by the video QoE service according to the video frame information acquired during video playing.
In other embodiments, the preset information may also include network quality parameters. Thus, the server can analyze the reasons of video quality abnormality according to the network quality parameters.
As can be seen from the description of the above embodiments, video-type applications include two rendering modes. In some embodiments where the video-like application is rendered by the system of the cell phone, the media information of the video may further include: rendering rate. The rendering rate is used to indicate the rate at which the video frame rendering thread renders video frames.
Video quality anomalies typically persist for some time, and the causes of the anomalies over this period may be the same. For the video quality anomalies within such a period, the causes can therefore be analyzed by uploading a video quality anomaly event only once. In some embodiments, the video QoE service may report at most one video quality anomaly event to the server per preset time (which may be referred to as the fourth preset time). The fourth preset time may be set according to the actual situation, for example to 5s or 10s. In this way, the workload of the mobile phone can be reduced, and so can its power consumption.
Further, in some embodiments, the video QoE service may detect each video quality anomaly occurring in the video; if multiple video quality anomalies are detected within the fourth preset time, the video QoE service reports a single video quality anomaly event to the server. In other embodiments, after detecting a video quality anomaly, the video QoE service may stop detecting whether the video quality is abnormal for the fourth preset time, and report each detected video quality anomaly to the server.
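The once-per-window reporting behavior described above can be sketched as a small rate limiter. The class name, window default, and clock injection are assumptions made for illustration only:

```python
import time


class AnomalyReporter:
    """Report at most one video quality anomaly event per window
    (the 'fourth preset time', e.g. 5 s), reducing workload and power draw."""

    def __init__(self, window_s=5.0, clock=time.monotonic):
        self.window_s = window_s
        self.clock = clock          # injectable for testing
        self._last_report = None    # monotonic time of the last report

    def on_anomaly(self):
        """Return True if this anomaly should be reported to the server."""
        now = self.clock()
        if self._last_report is None or now - self._last_report >= self.window_s:
            self._last_report = now
            return True
        return False
```

Anomalies arriving inside the window are absorbed; the first anomaly after the window elapses triggers a fresh report.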
Fig. 8 illustrates the information that the video QoE service acquires from the media codec service in order to report a video quality anomaly event to the server. In the embodiment of the application, the video QoE service obtains, from the media codec service, the applications requesting and occupying codec resources, the application priorities, whether each application requires real-time codec, the preset frame rate, the first preset resolution, and the video frame information. The video QoE service determines the actual frame rate and the actual resolution of the video according to the video frame information, so that the change of the actual frame rate compared with the preset frame rate, the change of the actual resolution compared with the first preset resolution, and whether the video is blurred can be determined. Then, the video QoE service may upload the resolution change, the frame rate change, and whether the video is blurred, together with the applications requesting and occupying codec resources, the application priorities, whether each application requires real-time codec, the preset frame rate, and the first preset resolution, to the server.
In some embodiments, the video QoE service comprises a big data dotting sub-service. The above S401 and S402 may be specifically performed by a big data dotting sub-service of the video QoE service.
In the technical scheme provided by the embodiment of the application, when the mobile phone detects that the video quality is abnormal, the preset information when the quality abnormal event occurs can be obtained. And then reporting the video quality abnormal event to a server, and carrying the preset information in the video quality abnormal event. Therefore, the server can analyze the abnormal reasons of abnormal video quality according to the preset information of a plurality of abnormal video quality events, and can execute corresponding optimization strategies. In some embodiments, the server may analyze the cause of the video quality abnormality of the electronic device according to a plurality of video quality abnormality events reported by the same electronic device. Or the server can analyze the reasons of abnormal video quality according to a plurality of video quality abnormal events reported by a plurality of electronic devices.
Next, a flow of the video processing method according to the embodiment of the present application is described with reference to a software architecture of a mobile phone. Fig. 9 illustrates a portion of a software architecture of a mobile phone in some embodiments of the application. The handset may be, for example, the electronic device 100 shown in fig. 1. Fig. 9 only shows functional modules related to a video processing method in a mobile phone software architecture, where a video QoE service is the video QoE service in the example shown in fig. 2. In this embodiment, the software architecture of the handset includes an application layer and a framework layer.
The application layer includes a plurality of applications, such as application a and application B, each having a video function. In some embodiments, video functions include, but are not limited to: long video function, short video function, live broadcast picture playing, video call function, etc.
The framework layer includes a media service process, a media codec service, and a video QoE service. The media codec service (media codec) is used to provide video-related services for applications. In some embodiments, when the mobile phone provides a video service to the user through an application, the media codec service may be invoked indirectly through the media player (MediaPlayer) interface and the media service process (MediaServer), as is the case for application A. In other embodiments, when the mobile phone provides a video service for the user through an application, the media codec service can also be called directly through the media codec interface, as is the case for application B.
Before the mobile phone provides a video service for the user through an application, the media codec service can be called, and preset media information of the video is sent to the media codec service. In some embodiments, the video service provided by the mobile phone for the user is specifically playing a video, and the preset media information includes, but is not limited to, basic information such as the video stream path of the video to be played, the first preset resolution, the preset frame rate, and the decoding format. In other embodiments, the preset media information may further include information such as the application identification information and the rendering mode adopted by the application.
In other embodiments, the video service provided by the mobile phone for the user is specifically playing a live broadcast picture, and the preset media information includes: the data stream path of the live picture, the first preset resolution, the preset frame rate, the decoding format, and the like.
In other embodiments, the video service provided by the mobile phone for the user is specifically a video call, and the preset media information may include: the first preset resolution, the preset frame rate, the decoding format and other information used for video call.
The media codec service sends part of the acquired preset media information to the video QoE service. For example, the media codec service may send information such as the first preset resolution and the preset frame rate to the video QoE service, so that the video QoE service monitors the video quality in real time according to that information. The media codec service may also send the application identification information to the video QoE service.
The video QoE service may be used to monitor video quality in real time. As in the architecture shown in fig. 9, the video QoE service may include a video stutter detection sub-service and a video blur detection sub-service.
In the process that the mobile phone provides video service for users through the application, the application can also send video frames to the media coding and decoding service. Wherein the video frames carry video frame information. Further, the media codec service may send video frame information to the video QoE service.
After the video QoE service obtains the video frame information corresponding to the first time, the actual video quality of the video provided by the application to the user at the first time may be obtained according to the video frame information corresponding to the first time. Then, the video QoE service, in combination with preset media information of the video and video frame information corresponding to the first time, can determine whether the video quality provided by the application to the user at the first time is abnormal.
In some embodiments, the video frame information includes information such as the timestamp of the video frame and the data amount of the video frame. In some embodiments, the video quality anomaly of the video provided by the application to the user includes: the video is stuck and/or the video is blurred.
For example, the video stutter detection sub-service may sequentially obtain the timestamps of the video frames from the media codec service. The video stutter detection sub-service may then determine, in combination with those timestamps, whether the video frames of the video stutter.
The video blur detection sub-service may sequentially acquire the data amount of each video frame of the video from the media codec service. The video blur detection sub-service may then obtain the actual resolution of the video frame based on its data amount, and determine whether the video frame is blurred according to the actual resolution and the second preset resolution. Alternatively, the video blur detection sub-service may compare the actual resolution of the video frame with the first preset resolution in the preset media information to determine whether the resolution of the video frame has changed. It will be appreciated that a reduction in the actual resolution compared to the first preset resolution indicates that the video may be blurred.
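The two detection sub-services can be sketched as simple predicates over the video frame information. The gap factor and the resolution comparison below are illustrative assumptions; the patent does not specify the exact thresholds:

```python
def detect_stutter(timestamps_ms, frame_interval_ms, factor=2.0):
    """Flag a stutter when the gap between consecutive frame timestamps
    exceeds `factor` times the nominal frame interval (an assumed rule)."""
    return any(later - earlier > factor * frame_interval_ms
               for earlier, later in zip(timestamps_ms, timestamps_ms[1:]))


def detect_blur(actual_resolution, second_preset_resolution):
    """Flag possible blur when the actual resolution (estimated from the
    frame's data amount) falls below the second preset resolution."""
    return actual_resolution < second_preset_resolution
```

For a nominal 30 fps stream (about 33 ms per frame), a 134 ms gap between two frames would be flagged as a stutter under these assumptions.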
In other embodiments, the video QoE service may also be configured to execute an optimization strategy after detecting an anomaly in video quality. As shown in fig. 9, the video QoE service may further include a network detection sub-service, a temperature detection sub-service, a load information detection sub-service, an optimization sub-service, and the like. The video QoE service may be configured to obtain the video quality impact parameter after detecting that the video quality is abnormal. The video QoE service may determine the cause of the video quality anomaly by analyzing the video quality impact parameters. Then, the video QoE service may execute the corresponding optimization strategy from different directions in combination with the video quality impact parameter, so as to improve the video quality.
Factors affecting video quality may include network factors, and the frequency and load of the mobile phone processor. When the frequency of the mobile phone processor is high, the mobile phone may heat up, raising its temperature.
In some embodiments, the video quality impact parameters may include network quality parameters of the handset, temperature, and load information of the processor. For example, after the video QoE service detects that the video quality is abnormal, the network detection sub-service may obtain a network quality parameter of the mobile phone; the temperature detection sub-service can acquire the temperature of the current mobile phone; the load information detection sub-service may obtain load information of the current mobile phone processor. Further, the video QoE service may execute an optimization policy through an optimization sub-service to improve video quality.
In some embodiments, if it is determined that the network transmission speed of the current mobile phone is low according to the network quality parameter in the video quality influence parameters, the optimizing sub-service may start the network acceleration service, so as to improve the network transmission speed.
In other embodiments, the optimization sub-service may limit the frequency of the processor of the mobile phone if the temperature in the video quality impact parameters indicates that the current mobile phone is too hot. By way of example, the processor may include a central processing unit (CPU), a graphics processing unit (GPU), double data rate synchronous dynamic random access memory (DDR SDRAM), and the like.
In other embodiments, if the processor load information in the video quality impact parameters indicates that the current load of the mobile phone processor is high, the optimization sub-service may acquire the resource occupation situation of the processor and execute the corresponding first optimization strategy in combination with it. In some embodiments, the optimization sub-service may determine, according to the resource occupation situation, which application processes occupy more of the processor's resources. System-level scheduling can then be performed for those applications. For example, the optimization sub-service may schedule applications that occupy more processor resources to a low-load core, or control them to enter a sleep state.
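The dispatch from video quality impact parameters to first optimization strategies described above can be sketched as a mapping function. The thresholds and action names here are assumptions for illustration, not values from the disclosure:

```python
def choose_optimizations(net_speed_mbps, temperature_c, cpu_load,
                         min_speed_mbps=2.0, max_temp_c=45.0, max_load=0.85):
    """Map video quality impact parameters to first optimization strategies:
    slow network -> network acceleration; overheating -> frequency limiting;
    high processor load -> system-level scheduling."""
    actions = []
    if net_speed_mbps < min_speed_mbps:
        actions.append("enable_network_acceleration")
    if temperature_c > max_temp_c:
        actions.append("limit_processor_frequency")
    if cpu_load > max_load:
        # e.g. move heavy application processes to a low-load core or sleep them
        actions.append("system_level_scheduling")
    return actions
```

Several conditions may hold at once, in which case multiple strategies are executed together.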
When the mobile phone provides a video service for the user through a video application, two rendering modes are possible: the first is self-rendering by the application, and the second is rendering by the system of the mobile phone. In some embodiments, the video QoE service may obtain the rendering mode of the application from the media codec service. In some embodiments where the mobile phone provides the video service through a video application and the system of the mobile phone performs the rendering, when the video QoE service detects that the video quality is abnormal, video image quality enhancement processing (i.e. executing the second optimization processing strategy) can be performed during rendering, thereby improving the video quality.
In some embodiments, the video image quality enhancement processing is performed in the rendering process, which may specifically include: and performing frame insertion processing, super-division processing, sharpening processing and the like on the video in the rendering process. Specifically, the optimizing sub-service can combine the concrete expression form of the abnormal video quality to determine how to perform the video image quality enhancement processing.
For example, if the video quality anomaly manifests as video stutter, the optimization sub-service may perform frame insertion processing on the video.
In other embodiments, if the video quality anomaly manifests as video blur, the optimization sub-service may perform super-division processing on the video.
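The selection of image quality enhancement steps from the concrete form of the anomaly can be sketched as follows; the function and step names are illustrative assumptions:

```python
def choose_enhancement(stutter: bool, blur: bool):
    """Select VPP post-processing from the anomaly's concrete form:
    frame insertion for stutter, super-division for blur (sharpening
    could be added analogously)."""
    steps = []
    if stutter:
        steps.append("frame_insertion")
    if blur:
        steps.append("super_division")
    return steps
```

When both forms occur, both enhancement steps would be applied during rendering.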
In addition, the video QoE service may also be used to report video quality anomalies to a server. With continued reference to fig. 9, the video QoE service may also include a big data dotting sub-service. The big data dotting sub-service can be used for reporting a video quality abnormal event to a server, wherein the video quality abnormal event carries preset information when the electronic equipment generates video quality abnormality.
Fig. 10 shows a further software architecture diagram of the handset. Fig. 10 includes functional modules in the example shown in fig. 9, and further includes other functional modules that interact with a media codec service and a video QoE service in the flow of the video processing method according to the embodiment of the present application.
The media applications may include, among other things, video-type applications of internet vendors. By way of example, media applications may include long video applications, short video applications, live applications, and video telephony applications, among others. In some embodiments, the media application may perform video decoding through the media player interface and the media codec interface.
The graphics framework is mainly used to provide video frame rendering services and interfaces for application vendors. When an application obtains a decoded video frame, it needs to render the video frame data to a display. In some embodiments, rendering may be performed through an open graphics library (OpenGL) service provided by the graphics framework.
Media framework: mainly used to provide media services, including a media interface, a video player, a media codec framework, and a video QoE framework. The media codec framework may include the media codec service and a media codec list. The video QoE framework may include the video QoE service. The media framework may be used to provide services such as video encoding, video decoding, video rendering, video parsing, and video generation.
The media infrastructure includes media asset management services, hardware decoding services, vector packet processing (Vector Packet Processing, VPP) services, and video QoE services.
The media resource management service mainly manages the coding and decoding resources and monitors the occupation condition of the coding and decoding resources in real time.
The hardware decoding service is mainly a decoding service provided by hardware capabilities, including decoding of mainstream video streams.
The VPP service is a post-processing service, mainly comprises the frame inserting, super division and sharpening capabilities of the video, and can improve the image quality and fluency of the video.
The video QoE service mainly acts on scenarios providing video services. The video QoE service may monitor the frame rate, resolution, network quality parameters (e.g., network transmission speed), and the like. The video QoE service builds a certain inference capability using the stutter model, blur model, system model, and the like in the video quality monitoring model, so that optimization strategies can be selected in a targeted manner. In some embodiments, the optimization strategies mainly include: network optimization, super-division and frame insertion optimization, and resource optimization. Network optimization can use the network acceleration service of the communication base station to enhance the network by switching network types, network cooperative aggregation, and other means. Super-division and frame insertion optimization can be realized using the VPP service. Resource optimization may include limiting the frequency of the mobile phone processor, and system-level scheduling of the mobile phone processor's load.
In some embodiments, the network acceleration service may provide the application with enhanced network awareness capabilities and enhanced multi-network concurrency capabilities. Specifically, the network acceleration service can mainly guarantee the requirements of key services and improve the network quality through a plurality of network resource scheduling means such as self-healing recovery, high-priority methods, network switching/concurrency and the like according to the current network resource use condition. In addition, the network acceleration service also provides concurrency capability of fusing a plurality of schedulable networks such as WLAN, mobile cellular and the like, so that the Internet surfing experience of the application is improved. The network acceleration service is started, so that the application and the system can fully cooperatively schedule network resources, and smooth application surfing experience is provided for the user. In the embodiment of the application, if the abnormality of the video quality is detected in the process that the mobile phone provides the video service through the video application, the network acceleration service can be invoked, and the network transmission speed is improved by utilizing the capability of the network acceleration service, so that the video quality is improved.
Fig. 11 shows a schematic diagram of a mobile phone interface in some embodiments. Fig. 11 shows a display and brightness setting interface 401, which includes a video image quality enhancement option 402. The mobile phone may display a video image quality enhancement setting interface 403 in response to the user's trigger operation on the video image quality enhancement option 402. The video image quality enhancement setting interface 403 may include a video image quality enhancement setting switch 404 and the applications that support the video image quality enhancement function. The video image quality enhancement setting switch 404 may be used by the user to actively turn the video image quality enhancement function on or off. In the embodiment of the application, super-division and frame insertion optimization can invoke, through the VPP service, the capability corresponding to the video image quality enhancement function, thereby realizing functions such as super-division and frame insertion. In some embodiments, the video QoE service may invoke the capability corresponding to the video image quality enhancement function to enhance the video quality when the function is turned on. In other embodiments, the video QoE service may invoke that capability to enhance the video quality even when the function is turned off.
The following describes the flow of the video processing method according to the embodiment of the present application with reference to fig. 10:
In response to a user operation on the mobile phone, the video application starts playing a video. The media codec service of the media framework initializes the video QoE service in the video QoE framework. The media codec service starts a decoding thread and a video frame rendering thread, and begins video data decoding and video frame rendering.
The video QoE service obtains the preset media information (the preset frame rate and the first preset resolution), and monitors the video quality through the video quality monitoring model in combination with the media information and the network status.
When video stutter and/or blur is detected through the video quality monitoring model, the resource occupation situation is acquired from the media resource management service, and the network quality parameters (such as the network transmission speed) are acquired from the network acceleration service. The causes of the video stutter and/or blur are then analyzed in combination with the resource occupation situation and the network status. Finally, the corresponding optimization strategy is executed according to the identified causes.
In addition, when video stutter and/or blur is detected, the video QoE service also reports a video quality anomaly event to the server through the big data dotting sub-service. The reported video quality anomaly event can carry the preset information at that moment; for example, the preset information may include: the actual frame rate, the actual resolution, the network quality parameters, the identification information of the video application, the preset frame rate, the first preset resolution, whether the video is blurred, and the like, at the time the video quality anomaly occurs.
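The preset information carried by a reported event can be sketched as a flat payload; the field names, the subtraction-based "change" values, and resolutions expressed as single numbers are assumptions made to keep the illustration simple:

```python
def build_anomaly_event(app_id, preset_fps, preset_res, actual_fps,
                        actual_res, blurred, net_quality):
    """Assemble the preset information carried by a video quality anomaly
    event, including the frame-rate and resolution changes the server
    uses for root-cause analysis."""
    return {
        "app_id": app_id,
        "preset_frame_rate": preset_fps,
        "first_preset_resolution": preset_res,
        "actual_frame_rate": actual_fps,
        "actual_resolution": actual_res,
        "frame_rate_change": preset_fps - actual_fps,
        "resolution_change": preset_res - actual_res,
        "blurred": blurred,
        "network_quality": net_quality,
    }
```

The server can aggregate many such payloads, from one device or from many, when analyzing the causes of video quality anomalies.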
Other embodiments of the present application provide an electronic device (e.g., a mobile phone). The electronic device may include: a memory and one or more processors. The memory is coupled to the processor. The memory is also used to store computer program code, which includes computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 1.
The present application also provides a chip system, as shown in fig. 12, the chip system 1100 includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and interface circuit 1102 may be interconnected by wires. For example, interface circuit 1102 may be used to receive signals from other devices, such as a memory of a computer. For another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). The interface circuit 1102 may, for example, read instructions stored in a memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause a computer to perform the various steps of the embodiments described above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device (e.g., a mobile phone) described above, cause the electronic device to perform the functions or steps performed by the mobile phone in the method embodiments described above.
The embodiment of the application also provides a computer program product, which when run on a computer, causes the computer to execute the functions or steps executed by the mobile phone in the method embodiment. The computer may be an electronic device, such as a cell phone.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (13)
1. A video processing method, wherein the method is applied to an electronic device, the method comprising:
sequentially acquiring video frame information of each video frame in a video to be transmitted by the electronic device, the video frame information of a video frame comprising a timestamp of the video frame and a data amount of the video frame;
determining, according to the video frame information corresponding to a first moment, whether the video quality of the video at the first moment is abnormal, wherein a video quality abnormality of the video comprises: the video stalling and/or blurring, and the first moment is any moment of the video;
after detecting that the video quality of the video at the first moment is abnormal, acquiring parameter values of video quality influence parameters; and
analyzing, according to the parameter values of the video quality influence parameters, a cause of the video quality abnormality.
2. The method according to claim 1, further comprising:
executing, after the first moment, a first optimization strategy corresponding to the cause of the abnormality.
3. The method of claim 2, wherein:
in a case where the video quality influence parameters include a network quality parameter and the network quality of the electronic device indicated by the network quality parameter does not meet a preset network quality condition, the first optimization strategy comprises opening a network acceleration service;
and/or,
in a case where the video quality influence parameters include a temperature of the electronic device and the temperature is greater than a preset temperature, the first optimization strategy comprises limiting a frequency of a processor of the electronic device;
and/or,
in a case where the video quality influence parameters include processor load information of the electronic device and the load of the processor indicated by the processor load information is greater than a preset load threshold, the first optimization strategy comprises performing system-level scheduling on the processor of the electronic device.
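The three "and/or" branches of claim 3 can be read as a combinable cause-to-action mapping. The sketch below illustrates that reading; all threshold values and action names are hypothetical assumptions for illustration, not values taken from the patent.

```python
# Hypothetical sketch of the cause-to-strategy mapping in claim 3.
# Thresholds and action names are illustrative assumptions.

PRESET_TEMPERATURE_C = 45.0   # assumed preset temperature
PRESET_LOAD_THRESHOLD = 0.85  # assumed preset processor-load threshold

def select_optimization_strategies(network_ok: bool,
                                   temperature_c: float,
                                   cpu_load: float) -> list[str]:
    """Return first-optimization-strategy actions for the detected causes.

    The branches are combinable ("and/or" in the claim), so more than one
    action may be returned for a single anomaly.
    """
    actions = []
    if not network_ok:                          # network quality condition not met
        actions.append("open_network_acceleration_service")
    if temperature_c > PRESET_TEMPERATURE_C:    # device too hot
        actions.append("limit_processor_frequency")
    if cpu_load > PRESET_LOAD_THRESHOLD:        # processor overloaded
        actions.append("system_level_processor_scheduling")
    return actions
```

Because the conditions are checked independently, a single anomaly with several root causes yields several actions at once.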
4. The method according to any one of claims 1-3, wherein the video quality abnormality of the video comprises the video stalling, and determining, according to the video frame information corresponding to the first moment, whether the video quality of the video at the first moment is abnormal comprises:
acquiring a time interval between a first timestamp of a first video frame corresponding to the first moment and a second timestamp of a second video frame, the second video frame being the video frame immediately preceding the first video frame; and
determining, if the time interval is greater than a preset time, that the video stalls at the first moment.
5. The method of claim 4, wherein the preset time comprises a first preset time that is a fixed value; or the preset time comprises a second preset time that is set according to the time intervals between multiple pairs of adjacent video frames preceding the first video frame.
6. The method according to any one of claims 1-5, wherein the video quality abnormality comprises the video blurring, and determining, according to the video frame information corresponding to the first moment, whether the video quality of the video at the first moment is abnormal comprises:
determining an actual resolution of the video at the first moment according to the data amount of the first video frame corresponding to the first moment; and
determining, if the actual resolution is smaller than a preset resolution, that the video blurs at the first moment.
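Claim 6 infers the actual resolution from the frame's encoded data amount. A minimal sketch of that idea follows; the bits-per-pixel calibration constant used to invert a byte count into a pixel count is an assumed value, since the patent does not specify the mapping.

```python
# Minimal sketch of the blur check in claim 6: estimate the resolution a
# frame's data amount can support, then compare it against a preset
# resolution. The bits-per-pixel ratio is an assumed calibration constant.

ASSUMED_BITS_PER_PIXEL = 0.1  # illustrative compression ratio for the codec

def estimate_pixels(frame_bytes: int) -> float:
    """Estimate the pixel count a frame of this encoded size could carry."""
    return frame_bytes * 8 / ASSUMED_BITS_PER_PIXEL

def is_blurred(frame_bytes: int, preset_width: int, preset_height: int) -> bool:
    """Flag blur when the estimated resolution falls below the preset one."""
    return estimate_pixels(frame_bytes) < preset_width * preset_height
```

A tiny 2 KB frame cannot carry a sharp 1280x720 image at this ratio, so it is flagged as blurred, whereas a 50 KB frame passes.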
7. The method according to any one of claims 1-6, further comprising:
acquiring a rendering mode of the video; and
if the rendering mode is rendering by the system of the electronic device, executing, by the electronic device after the first moment, a second optimization strategy after detecting that the video quality of the video at the first moment is abnormal, the second optimization strategy being used by the electronic device to perform video image quality enhancement processing on the video.
8. The method of claim 7, wherein:
in a case where the video quality abnormality manifests as the video stalling, the second optimization strategy comprises performing frame insertion processing on the video; and/or,
in a case where the video quality abnormality manifests as the video blurring, the second optimization strategy comprises performing super-resolution processing on the video.
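The frame-insertion branch of claim 8 can be illustrated with a toy interpolator. Real frame insertion uses motion estimation and compensation; plain pixel averaging below is a deliberate simplification, and the flat-pixel-list representation is an assumption for brevity.

```python
# Toy illustration of the frame-insertion branch of claim 8: synthesize an
# intermediate frame between two neighbours by averaging. Production frame
# interpolation uses motion estimation; averaging is an assumed stand-in.

def insert_middle_frame(frame_a: list[int], frame_b: list[int]) -> list[int]:
    """Blend two frames (as flat pixel-value lists) into one middle frame."""
    return [(a + b) // 2 for a, b in zip(frame_a, frame_b)]
```

Inserting such a frame between each pair of decoded frames doubles the effective frame rate, which is why frame insertion is paired with stalling rather than blurring in the claim.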
9. The method according to any one of claims 1-8, further comprising:
after detecting that the video quality of the video at the first moment is abnormal, acquiring preset information of the electronic device, the preset information comprising at least one of: application identification information, codec resource occupation information of the electronic device, media information of the video, or a network quality parameter indicating the network quality of the electronic device; and
reporting a video quality abnormality event to a server, the video quality abnormality event carrying the preset information.
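The event reported in claim 9 bundles the preset information. The sketch below shows one plausible shape for such an event; the field names and dictionary layout are assumptions, since the patent does not define a wire format.

```python
# Sketch of the anomaly report in claim 9: the device bundles the preset
# information into a video-quality-abnormal event for the server.
# Field names and layout are illustrative assumptions.

def build_anomaly_event(app_id: str,
                        codec_usage: float,
                        media_info: dict,
                        network_quality: dict) -> dict:
    """Assemble a video-quality-abnormal event carrying the preset info."""
    return {
        "event": "video_quality_abnormal",
        "preset_info": {
            "application_id": app_id,            # application identification
            "codec_resource_usage": codec_usage, # codec occupation information
            "media_info": media_info,            # media information of the video
            "network_quality": network_quality,  # network quality parameter
        },
    }
```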
10. The method of claim 2, wherein the electronic device comprises an application, a media codec service, and a video quality of experience (QoE) service, and the application is used for providing a video service;
sequentially acquiring the video frame information of each video frame in the video to be transmitted by the electronic device comprises:
the media codec service sequentially acquiring the video frame information of each video frame in the video from the application;
the method further comprises:
the media codec service sequentially sending the video frame information of each video frame to the video QoE service;
determining, according to the video frame information corresponding to the first moment, whether the video quality of the video at the first moment is abnormal comprises:
the video QoE service determining, according to the video frame information corresponding to the first moment, whether the video quality of the video at the first moment is abnormal;
acquiring the parameter values of the video quality influence parameters after detecting that the video quality of the video at the first moment is abnormal comprises:
the video QoE service acquiring the parameter values of the video quality influence parameters after detecting that the video quality of the video at the first moment is abnormal; and
analyzing the cause of the video quality abnormality according to the parameter values of the video quality influence parameters comprises:
the video QoE service analyzing the cause of the video quality abnormality according to the parameter values of the video quality influence parameters.
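The per-frame flow of claim 10 (application → media codec service → video QoE service) can be sketched as two cooperating objects. The class and method names are hypothetical, and the QoE check shown is only the fixed-threshold stall test; the real services would run the full analysis of the earlier claims.

```python
# Hedged sketch of claim 10's pipeline: the media codec service forwards each
# frame's (timestamp, data amount) to the video QoE service, which runs the
# anomaly check. Class/method names and the threshold are assumptions.

class VideoQoEService:
    def __init__(self, preset_gap_ms: float):
        self.preset_gap_ms = preset_gap_ms
        self.last_ts = None
        self.events = []  # detected (kind, timestamp) anomaly events

    def on_frame_info(self, ts_ms: float, data_bytes: int) -> None:
        # determine whether video quality at this moment is abnormal
        if self.last_ts is not None and ts_ms - self.last_ts > self.preset_gap_ms:
            self.events.append(("stall", ts_ms))
        self.last_ts = ts_ms

class MediaCodecService:
    """Forwards video frame info from the application to the QoE service."""
    def __init__(self, qoe: VideoQoEService):
        self.qoe = qoe

    def submit_frame(self, ts_ms: float, data_bytes: int) -> None:
        self.qoe.on_frame_info(ts_ms, data_bytes)
```

Keeping detection in a separate QoE service, as the claim does, lets the codec path stay on its hot loop while anomaly analysis and strategy selection happen out of band.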
11. The method of claim 10, wherein executing the first optimization strategy corresponding to the cause of the abnormality after the first moment comprises:
the video QoE service executing, after the first moment, the first optimization strategy corresponding to the cause of the abnormality.
12. An electronic device, the electronic device comprising: a processor and a memory; the memory is coupled with the processor; the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311578891.4A CN118488269A (en) | 2023-11-22 | 2023-11-22 | Video processing method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118488269A true CN118488269A (en) | 2024-08-13 |
Family
ID=92194159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311578891.4A Pending CN118488269A (en) | 2023-11-22 | 2023-11-22 | Video processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118488269A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791910A (en) * | 2016-12-02 | 2017-05-31 | 浙江宇视科技有限公司 | Frame of video processing method and processing device |
CN108347598A (en) * | 2018-01-25 | 2018-07-31 | 晶晨半导体(上海)股份有限公司 | A kind of audio and video interim card information detects reporting system and method automatically |
CN111131808A (en) * | 2018-10-30 | 2020-05-08 | 中国电信股份有限公司 | Video stuck fault analysis method and device and set top box |
CN111683273A (en) * | 2020-06-02 | 2020-09-18 | 中国联合网络通信集团有限公司 | Method and device for determining video blockage information |
CN116916093A (en) * | 2023-09-12 | 2023-10-20 | 荣耀终端有限公司 | Method for identifying clamping, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||