Disclosure of Invention
To ensure the fluency of online video communication and alleviate video stuttering, the present application provides a video background processing method, system and storage medium based on network signal strength.
In a first aspect, the present application provides a video background processing method based on network signal strength, which adopts the following technical scheme:
a video background processing method based on network signal strength comprises the following steps:
Based on instant video communication, network signal data and communication picture images are obtained in real time;
Extracting a signal strength value from the network signal data;
calculating a signal fluctuation value of the signal strength value within the latest first set time period;
if the signal fluctuation value is larger than a preset reference fluctuation value, extracting a static background image and a dynamic picture image from the communication picture image;
calculating a network fluency value according to the signal strength value and the signal fluctuation value;
if the network fluency value is higher than a preset reference fluency value, adjusting the blurring degree of the static background image according to a first difference value between the network fluency value and the reference fluency value; blur degree = base blur value + (first difference value / reference fluency value maximum) × maximum blur increment; the base blur value, the reference fluency value maximum and the maximum blur increment are preset values, and the reference fluency value maximum is the maximum value of the network fluency value;
reducing the moving distance of the dynamic picture image according to the first difference value; moving distance reduction ratio = 1 − (first difference value / reference fluency value maximum) × scaling factor; wherein the scaling factor is a coefficient for adjusting the reduction scale.
By adopting the technical scheme, network signals change in real time during online video communication, and when the network becomes congested the picture and the background of the online video communication tend to stutter. The picture and the background are therefore given edge transitions or background identification based on the network signal strength, display characteristics are adaptively added to the picture and the background, and the drawback of a stuttering picture and background is overcome. This video background processing scheme can effectively cope with network signal fluctuation and improve the user experience of online video communication.
Optionally, the method further comprises the steps of:
If the static background image and the dynamic picture image are extracted from the communication picture image, calculating, within the latest second set time period, the displacement of the static background image in the window as a background displacement speed value, and calculating the relative displacement between the dynamic picture image and the static background image as a picture displacement speed value;
calculating a dynamic speed value according to the background displacement speed value and the picture displacement speed value;
calculating an edge smoothing value according to the dynamic speed value and the signal strength value;
and adjusting the blurring degree of the boundary area between the static background image and the dynamic picture image according to the edge smoothing value.
By adopting the technical scheme, the handling of video picture dynamics when the network signal changes is further refined, and the concept of edge smoothing is introduced to optimize the transition effect between the static background and the dynamic picture.
Optionally, the method further comprises the steps of:
analyzing pixel changes between adjacent frames of the dynamic picture image, and predicting and generating intermediate frames to be inserted between the original frames;
adjusting the number of intermediate frames inserted into the dynamic picture image according to the edge smoothing value;
the higher the edge smoothing value, the greater the number of inserted frames; the lower the edge smoothing value, the fewer the inserted frames;
and when the network fluency value is lower than a set value, reducing the number of inserted frames by a set amount.
By adopting the technical scheme, the number of frames inserted into the dynamic picture image is dynamically adjusted according to the edge smoothing value, which further improves the smoothness and look and feel of online video communication while taking the limitations of network conditions and device performance into account, thereby achieving a better user experience.
Optionally, the method further comprises the steps of:
adjusting the deformation degree of the static background image according to the edge smoothing value;
the farther the static background image is from the dynamic picture image, the smaller its deformation degree; the closer the static background image is to the dynamic picture image, the larger its deformation degree;
and matching the deformation direction of the static background image with the motion direction of the dynamic picture image, wherein the included angle between the deformation direction vector and the motion direction vector is an acute angle.
By adopting the technical scheme, the deformation degree of the static background image is fine-tuned according to factors such as the edge smoothing value, the distance and the motion direction, so that the visual effect and user experience of online video communication can be further improved.
Optionally, the method further comprises the steps of:
adjusting the gain value of the deformation degree according to the network strength value, wherein the lower the network strength value is, the smaller the gain value of the deformation degree is; the higher the network strength value is, the larger the gain value of the deformation degree is;
and adjusting the magnitude of the vector included angle in positive correlation with the magnitude of the network strength value.
By adopting the technical scheme, the network strength value reflects the stability and speed of the current network connection. Under poor network conditions, computationally intensive image processing operations, such as deforming the static background image, may degrade video quality due to bandwidth limitations or delays.
Optionally, the method further comprises the steps of:
Extracting a graphic trunk outline in the static background image;
according to the graphic trunk outline, matching a corresponding correction graphic from a preset graphic library;
and placing the correction graph between the static background image and the dynamic picture image, and smoothing the edges of the static background image and the dynamic picture image.
By adopting the technical scheme, the quality and the efficiency of image synthesis can be obviously improved, and a finer and natural image processing effect is realized.
Optionally, the calculation formula of the signal fluctuation value is as follows:
signal fluctuation value = sqrt{ (1/N) × Σ(xi − μ)² }; where xi is the signal strength value at each time point in the first set time period, μ is the average of these signal strength values, and N is the number of data points;
Acquiring signal fluctuation data of a plurality of devices based on a plurality of networked devices in the same local area network and the same instant video communication channel;
Calculating the similarity among a plurality of signal fluctuation data, and adjusting the correction value of the correction graph according to the similarity;
the higher the similarity is, the larger the correction value is; the lower the similarity, the smaller the correction value.
By adopting the technical scheme, the signal fluctuation data of a plurality of devices in the same local area network can be systematically collected and analyzed, the similarity between the devices is evaluated, and the video communication quality is optimized or the network setting is adjusted according to the similarity.
Optionally, the calculation formula of the similarity is as follows:
respectively calculating signal fluctuation values according to the plurality of signal fluctuation data;
similarity = [1 − (standard deviation of the signal fluctuation values / average of the signal fluctuation values)] × 100%.
By adopting the technical scheme, the ratio of the standard deviation of the signal fluctuation values to their average value represents the degree of dispersion of the data; the higher this ratio, the lower the similarity.
In a second aspect, the present application provides a video background processing system based on network signal strength, which adopts the following technical scheme:
A video background processing system based on network signal strength, comprising a processor, wherein the processor runs a program of the video background processing method based on network signal strength.
In a third aspect, the present application provides a storage medium, which adopts the following technical scheme:
a storage medium storing a program of the video background processing method based on network signal strength as described in any one of the above.
In summary, the present application includes at least one of the following beneficial technical effects:
Improving video communication quality: by monitoring the network signal strength of each device in real time during video communication and calculating its fluctuation value, the system can promptly detect and respond to changes in network conditions. When the signal fluctuation is large, corresponding adjustment measures can be taken to reduce video stuttering, delay or packet loss caused by network instability, thereby improving the overall quality of video communication.
Optimizing network resource configuration: based on the signal fluctuation data of a plurality of devices, the system can evaluate the distribution condition of network resources in the whole local area network. When some devices are found to have large signal fluctuation due to network congestion, the network resource allocation strategy can be dynamically adjusted, so that more bandwidth or priority is provided for the devices, and the smoothness of video communication is ensured. Meanwhile, unnecessary resource waste can be avoided, and the overall utilization efficiency of network resources is improved.
Enhancing user experience: by reducing stuttering, delay and similar problems in video communication, users obtain a smoother and more natural video communication experience. In addition, the system can automatically adjust the display effect of the video interface according to changes in network signal strength so as to adapt to different network environments, further improving the viewing experience.
Intelligent fault prediction and prevention are realized: by long-term monitoring and analysis of signal fluctuation data, the system can identify early signs of network failure or performance bottlenecks. Once a potential problem is found, the system may take precautions in advance to avoid network failure from severely affecting video communications. This intelligent fault prediction and prevention capability helps to improve the stability and reliability of the system.
Support multi-device co-operation: within the same local area network, different devices may have different network signal strengths and performance characteristics. The processing method based on the signal fluctuation data can support cooperative work among multiple devices, and ensures that each device can keep a relatively consistent performance level in the video communication process by optimizing resource allocation and adjusting communication strategies, so that the cooperative efficiency and stability of the whole system are improved.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings.
In the description of the present specification, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiment of the application discloses a video background processing method based on network signal strength which, with reference to fig. 1, comprises the following steps:
Based on instant video communication, network signal data and communication picture images are obtained in real time;
Extracting a signal strength value from the network signal data;
calculating a signal fluctuation value of the signal strength value within the latest first set time period; the signal fluctuation value reflects the current network state.
If the signal fluctuation value is larger than the preset reference fluctuation value, a static background image and a dynamic picture image are extracted from the communication picture image. Whether to perform this separation is decided according to the signal fluctuation value, so as to avoid unnecessary computational overhead. For the segmentation itself, image processing techniques such as background differencing, inter-frame differencing or a deep learning model can be used to effectively distinguish the static background image from the dynamic picture image.
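As an illustrative sketch only (not a limitation of this application), the following Python code separates a frame into a static background estimate and a dynamic foreground using OpenCV's MOG2 background subtractor, which is just one of the segmentation techniques mentioned above; the history length and filter size are assumed values.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=120, detectShadows=False)

def split_frame(frame):
    """Return (static_background, dynamic_foreground) estimates for one BGR frame."""
    fg_mask = subtractor.apply(frame)                    # 255 where pixels are moving
    fg_mask = cv2.medianBlur(fg_mask, 5)                 # suppress salt-and-pepper noise
    background = subtractor.getBackgroundImage()         # running background model
    foreground = cv2.bitwise_and(frame, frame, mask=fg_mask)
    return background, foreground
```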
A network fluency value is then calculated from the signal strength value and the signal fluctuation value, so as to reflect the network quality more comprehensively.
If the network fluency value is higher than the preset reference fluency value, the blurring degree of the static background image is adjusted according to the first difference value between the network fluency value and the reference fluency value: blur degree = base blur value + (first difference value / reference fluency value maximum) × maximum blur increment. That is, the larger the first difference value, the more blurred the static background image; the smaller the first difference value, the clearer the static background image. The base blur value, the reference fluency value maximum and the maximum blur increment are preset values, and the reference fluency value maximum is the maximum value of the network fluency value. By dynamically adjusting the degree of blurring of the static background, visual discomfort to the user can be reduced when the network is poor.
The moving distance of the dynamic picture image is reduced according to the first difference value: moving distance reduction ratio = 1 − (first difference value / reference fluency value maximum) × scaling factor, where the scaling factor is a coefficient for adjusting the reduction scale. That is, the larger the first difference value, the more the moving distance of the dynamic picture image is reduced; the smaller the first difference value, the less it is reduced. Reducing the moving distance of the dynamic picture image lessens the jumpiness of the picture when the network stutters, making the video appear smoother.
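A minimal sketch of these two adjustments follows. The blur-degree and movement-reduction formulas are taken directly from the text above; the way the network fluency value is combined from the signal strength value and the signal fluctuation value is an assumption, since this application does not fix that formula.

```python
def network_fluency(signal_strength, signal_fluctuation, w_strength=0.7, w_fluct=0.3):
    # Assumed combination: stronger signal raises fluency, larger fluctuation lowers it.
    return w_strength * signal_strength - w_fluct * signal_fluctuation

def blur_degree(fluency, reference_fluency, base_blur, fluency_maximum, max_blur_increment):
    first_difference = fluency - reference_fluency
    if first_difference <= 0:
        return base_blur                      # adjust only when fluency exceeds the reference
    return base_blur + (first_difference / fluency_maximum) * max_blur_increment

def movement_reduction_ratio(fluency, reference_fluency, fluency_maximum, scaling_factor):
    first_difference = max(fluency - reference_fluency, 0.0)
    return 1.0 - (first_difference / fluency_maximum) * scaling_factor
```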
During online video communication network signals change in real time, and when the network becomes congested the picture and the background of the online video communication tend to stutter. The picture and the background are therefore given edge transitions or background identification based on the network signal strength, display characteristics are adaptively added to them, and the drawback of a stuttering picture and background is overcome. This video background processing scheme can effectively cope with network signal fluctuation and improve the user experience of online video communication.
Referring to fig. 2, the method further comprises the steps of:
If the static background image and the dynamic picture image are extracted from the communication picture image, the displacement of the static background image within the window during the latest second set time period is calculated as the background displacement speed value, which reflects the overall moving speed of the background image. The system first separates static background images and dynamic picture images from consecutive video frames using algorithms such as background differencing or a Gaussian mixture model; the static background image is, for example, a fixed view of a park entrance, while the dynamic picture image covers pedestrians walking or vehicles passing by. Assume the latest second set time period is the past 5 seconds. During these 5 seconds the system observes that the static background image shifts slightly in the window, for instance because the camera is nudged or wind moves the trees. Comparing the background images at the start and end of the 5 seconds yields the average displacement of the background image, i.e. the background displacement speed value.
The relative displacement between the dynamic picture image and the static background image is calculated as the picture displacement speed value, which reflects the moving speed of the dynamic object in the video picture. For a dynamic picture image such as a pedestrian, the system calculates the relative displacement between the dynamic picture image and the static background image over the same time period. For example, when a pedestrian walks from one side of the picture to the other, the relative displacement between the pedestrian and the background image during that interval is the picture displacement speed value.
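A hedged sketch of the two displacement measurements follows, using phase correlation between grayscale frames; the 5-second period and the use of phase correlation are illustrative assumptions, not requirements of this application.

```python
import cv2
import numpy as np

def displacement(prev_gray, curr_gray):
    """Global shift (in pixels) between two same-sized grayscale frames via phase correlation."""
    (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    return float(np.hypot(dx, dy))

def speed_values(bg_prev, bg_curr, fg_prev, fg_curr, period_seconds=5.0):
    background_shift = displacement(bg_prev, bg_curr)
    foreground_shift = displacement(fg_prev, fg_curr)
    background_speed = background_shift / period_seconds
    # The picture displacement is measured relative to the background shift.
    picture_speed = abs(foreground_shift - background_shift) / period_seconds
    return background_speed, picture_speed
```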
Calculating a dynamic speed value according to the background displacement speed and the picture displacement speed; the dynamic speed value can be obtained by a weighted average calculation method, and reflects the overall dynamic property of the video picture. For example, if the background displacement is small but the object in the picture moves rapidly, the dynamic speed value may be biased toward the picture displacement speed value.
An edge smoothing value is calculated from the dynamic speed value and the signal strength value. When the dynamic speed value is high, meaning there are more motion elements in the picture, more edge smoothing is needed to reduce the visual impact of the motion; likewise, when the signal strength value is low, edge smoothing tends to be increased to compensate for the choppiness caused by network stuttering.
The blurring degree of the boundary area between the static background image and the dynamic picture image is adjusted according to the edge smoothing value: the larger the edge smoothing value, the higher the blurring degree of the boundary area, giving a smoother transition; otherwise the blurring degree is lower and the sharpness of the picture is maintained.
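The dynamic speed value, edge smoothing value and boundary-area blurring can be sketched as follows. The weights and the mapping from dynamic speed and signal strength to an edge smoothing value are assumptions; the application only states the qualitative relationships (faster motion or weaker signal means more smoothing).

```python
import cv2
import numpy as np

def dynamic_speed(background_speed, picture_speed, w_bg=0.3, w_fg=0.7):
    return w_bg * background_speed + w_fg * picture_speed      # weighted average

def edge_smoothing_value(dyn_speed, signal_strength, k_speed=1.0, k_signal=1.0):
    # Faster motion and weaker signal both call for more smoothing.
    return k_speed * dyn_speed + k_signal / max(signal_strength, 1e-6)

def blur_boundary(frame, fg_mask, smoothing_value):
    """Blur only the band around the foreground/background boundary."""
    kernel = np.ones((15, 15), np.uint8)
    boundary = cv2.morphologyEx(fg_mask, cv2.MORPH_GRADIENT, kernel)   # ring around the mask edge
    ksize = max(3, 2 * int(smoothing_value) + 1)                       # odd Gaussian kernel size
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    out = frame.copy()
    out[boundary > 0] = blurred[boundary > 0]
    return out
```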
The processing of the video picture dynamics when the network signal changes is further refined, and the concept of edge smoothing is introduced to optimize the transitional effect between the static background and the dynamic picture.
Referring to fig. 3, the method further comprises the steps of:
Pixel changes between adjacent frames of the dynamic picture image are analyzed, and intermediate frames to be inserted between the original frames are predicted and generated. By analyzing the pixel differences (i.e. motion vectors) between successive frames with frame rate upscaling (FRUS) or motion interpolation techniques, an algorithm can predict the intermediate pictures that should exist between these frames and generate them (also referred to as interpolated frames). These interpolated frames are then inserted between the original frames, increasing the frame rate without changing the video content and making the video appear smoother.
The number of intermediate frames inserted into the dynamic picture image is adjusted according to the edge smoothing value. The edge smoothing value is an indicator of edge sharpness in the image; in video processing, the sharpness of edges such as object boundaries is critical to overall visual quality. If the edges are already sharp, too many inserted frames are not needed and may introduce unnecessary blurring; conversely, if the edges are blurred, more interpolated frames help to fill in motion details. Adjusting the number of inserted frames according to the edge smoothing value is therefore an optimization strategy that improves smoothness while maintaining video clarity.
The higher the edge smoothing value, the more the number of interpolation frames; the lower the edge smoothing value, the fewer the number of interpolated frames. High edge smoothing values generally mean that the image is blurred, and increasing the number of interpolated frames helps to fill in the details, making the motion look smoother. While low edge smoothing values indicate sharp edges of the image, too many insertions may introduce unnecessary blurring or distortion, and therefore the number of insertions should be reduced.
When the network fluency value is lower than a set value, the number of inserted frames is reduced by a set amount. This step takes the realities of network transmission into account: in streaming playback, smoothness is directly affected by network conditions. If the network fluency value is lower than the set value, indicating a poor network environment, continuing to play at a high frame rate may result in buffering or stuttering. Reducing the number of inserted frames therefore lowers the bandwidth requirement and the transmission pressure, improving playback stability to a certain extent.
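An illustrative sketch of the interpolation-count adjustment follows. Real motion-compensated interpolation would use estimated motion vectors; simple linear blending is used here only to keep the example short, and the thresholds and frame caps are assumed values.

```python
import cv2

def interpolation_count(edge_smoothing_value, network_fluency, fluency_floor=0.4,
                        base_frames=1, max_frames=4):
    # Higher edge smoothing value -> more inserted frames, capped at max_frames.
    n = min(max_frames, base_frames + int(edge_smoothing_value))
    if network_fluency < fluency_floor:          # poor network: back off by a set amount
        n = max(0, n - 1)
    return n

def interpolate(frame_a, frame_b, n_frames):
    """Generate n_frames intermediate frames between two original frames by linear blending."""
    frames = []
    for i in range(1, n_frames + 1):
        alpha = i / (n_frames + 1)
        frames.append(cv2.addWeighted(frame_a, 1.0 - alpha, frame_b, alpha, 0))
    return frames
```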
The number of frames inserted into the dynamic picture image is dynamically adjusted according to the edge smoothing value, which further improves the smoothness and appearance of online video communication while taking the limitations of network conditions and device performance into account, thereby achieving a better user experience.
In order to further enhance the visual effect and user experience of the online video communication, the method further comprises the following steps:
The deformation degree of the static background image is adjusted according to the edge smoothing value; here, deformation means that different parts of the image are deformed to different degrees. The edge smoothing value thus serves both to adjust the number of frames inserted into the dynamic picture image and to control the deformation degree of the static background image, acting as a comprehensive index of the sharpness and clarity of the entire image. Lower edge smoothing values, i.e. clearer images, call for a smaller degree of deformation to preserve the authenticity of the background, while higher edge smoothing values, i.e. more blurred images, allow greater deformation to simulate more natural dynamic effects.
The farther the static background image is from the dynamic picture image, the smaller its degree of deformation; the closer it is to the dynamic picture image, the greater its degree of deformation. This follows the principle of visual perception that distant objects show relatively little visual change while nearby objects change more noticeably. Therefore, when the static background image is far from the dynamic picture image, such as a moving object or person, its deformation degree should be correspondingly reduced to keep the background stable; when the background image is closer to the dynamic picture, the deformation degree should be increased to simulate the background change caused by the motion of the dynamic object.
The deformation direction of the static background image is matched with the motion direction of the dynamic picture image, such that the included angle between the deformation direction vector and the motion direction vector is acute. This ensures harmony and consistency between the background deformation and the dynamic picture: an acute angle means that the trend of the background deformation is consistent with, or supportive of, the motion direction of the dynamic picture. This helps enhance the immersion and realism of the video, making it easier for viewers to be drawn into the scene it presents.
In this way, the deformation degree of the static background image is fine-tuned according to factors such as the edge smoothing value, the distance and the motion direction.
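The distance- and direction-dependent deformation can be sketched as follows; the exponential decay with distance and the decay constant are assumptions, and reusing the motion direction itself is simply the most direct way to keep the included angle acute.

```python
import numpy as np

def background_deformation(edge_smoothing_value, distance_px, motion_vector, decay=200.0):
    """Return (degree, deformation_vector) for one region of the static background."""
    # Deformation degree grows with the edge smoothing value and decays with distance.
    degree = edge_smoothing_value * np.exp(-distance_px / decay)
    motion = np.asarray(motion_vector, dtype=float)
    norm = np.linalg.norm(motion)
    if norm == 0.0:
        return degree, np.zeros(2)
    direction = motion / norm        # reuse the motion direction -> angle is 0°, hence acute
    return degree, degree * direction
```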
To help maintain overall consistency and viewability of the video pictures, the method further comprises the steps of:
According to the network strength value, the gain value of the deformation degree is adjusted: the lower the network strength value, the smaller the gain value of the deformation degree; the higher the network strength value, the larger the gain value. A low network strength value indicates poor transmission conditions such as delay or packet loss. In this case, reducing the gain value of the deformation degree lowers the consumption of video processing resources while reducing the risk of video quality degradation caused by network fluctuation, helping to maintain the basic fluency of the video. Conversely, when the network strength value is high, transmission conditions are good and video processing resources are relatively abundant; increasing the gain value of the deformation degree then further improves the visual effect of the video and makes the interaction between the dynamic picture and the static background more natural and lifelike.
The magnitude of the vector included angle is adjusted in positive correlation with the magnitude of the network strength value. The network strength value thus affects not only the gain value of the deformation degree but also the included angle between the deformation direction vector and the motion direction vector. When the network strength value is higher, a larger angle between the deformation direction and the motion direction is allowed, meaning the deformation can follow or assist the motion of the dynamic picture more freely, creating a more vivid and richer visual effect. When the network strength value is low, the angle between the deformation direction and the motion direction vectors should be reduced to ensure that the deformation does not appear abrupt or unstable because of network fluctuation.
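A possible mapping from the network strength value to the deformation gain and the maximum allowed vector angle is sketched below; the linear mappings and the numeric bounds are assumptions, chosen only so that both quantities grow with the network strength value while the angle stays acute.

```python
import numpy as np

def deformation_gain(network_strength, min_gain=0.2, max_gain=1.0):
    s = float(np.clip(network_strength, 0.0, 1.0))       # strength assumed normalised to [0, 1]
    return min_gain + (max_gain - min_gain) * s

def max_vector_angle_deg(network_strength, min_angle=10.0, max_angle=80.0):
    s = float(np.clip(network_strength, 0.0, 1.0))
    return min_angle + (max_angle - min_angle) * s       # stays below 90°, i.e. acute
```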
In order to improve the quality and efficiency of image synthesis and to achieve finer and natural image processing, the method further comprises the steps of:
A graphic trunk outline is extracted from the static background image: the contours of the main graphics are identified and extracted by image processing techniques such as edge detection and contour extraction algorithms. These contours represent the most important visual elements in the image and provide the basis for subsequent pattern matching and correction.
According to the outline of the graphic backbone, matching a corresponding correction graphic from a preset graphic library; after the outline of the graphic trunk is extracted, the system uses the outline as a query condition to search and match in a preset graphic library. The graphic library should contain a variety of predefined graphic templates, which may be standard geometries, common object contours, or specific graphics in a specific application scenario. By comparing the shape, size, scale, etc. characteristics of the contours, the system can find the corrected graph that most closely matches the extracted contours.
The correction graphic is placed between the static background image and the dynamic picture image, and the edges of the static background image and the dynamic picture image are smoothed. After the matching correction graphic is found, the next step is to blend it into the video frame, i.e. to place it in the appropriate position between the static background image and the dynamic picture image so as to create visual consistency and harmony. Edge smoothing is critical in this step: introducing the correction graphic may create distinct boundaries or seams between images, which can distract the viewer and reduce the viewing experience. The edges between the correction graphic and the static background and dynamic picture therefore need to be smoothed to reduce or eliminate the visibility of the seams. This may be achieved by image fusion techniques such as feathering or fading, which allow the correction graphic to be joined seamlessly with the surrounding image.
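The contour extraction, graphic-library matching and feathered blending can be sketched as follows using OpenCV contour extraction and Hu-moment shape matching; the structure of the graphic library (a dictionary of binary masks) and the feather width are illustrative assumptions.

```python
import cv2
import numpy as np

def main_contour(gray_background):
    """Extract the largest outer contour of the static background image."""
    edges = cv2.Canny(gray_background, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def match_correction_graphic(contour, graphic_library):
    """graphic_library: dict of name -> binary mask; return the best-matching name."""
    best_name, best_score = None, float("inf")
    for name, mask in graphic_library.items():
        cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not cnts:
            continue
        score = cv2.matchShapes(contour, max(cnts, key=cv2.contourArea),
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

def feather_blend(background, graphic, alpha_mask, feather_px=15):
    """Blend the correction graphic into the background using a feathered (blurred) mask."""
    soft = cv2.GaussianBlur(alpha_mask.astype(np.float32) / 255.0,
                            (feather_px | 1, feather_px | 1), 0)[..., None]
    return (background.astype(np.float32) * (1.0 - soft)
            + graphic.astype(np.float32) * soft).astype(np.uint8)
```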
The calculation formula of the signal fluctuation value is as follows:
Signal fluctuation value = sqrt{ (1/N) × Σ(xi − μ)² }, where xi is the signal strength value at each time point in the first set time period, μ is the average of these signal strength values, and N is the number of data points. The signal fluctuation value is in essence a standard deviation and measures the degree of dispersion or fluctuation of the signal strength.
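For reference, the same quantity can be computed directly as a population standard deviation:

```python
import numpy as np

def signal_fluctuation(samples):
    x = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))   # identical to np.std(x)
```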
And acquiring signal fluctuation data of a plurality of devices based on the plurality of networked devices in the same local area network and the same instant video communication channel.
The similarity between the plurality of signal fluctuation data is calculated, and the correction value of the correction graphic is adjusted according to the similarity. The similarity calculation may be based on various statistical measures, such as a correlation coefficient or a distance measure such as the Euclidean or Manhattan distance. High similarity between devices means that their signal fluctuation patterns are alike and that they are affected by similar network conditions.
The higher the similarity, the larger the correction value; the lower the similarity, the smaller the correction value. The higher the similarity, the more devices are operated under the same network conditions, and the correction value can be added appropriately to enhance the visual effect and stability of the video. Conversely, a lower similarity may indicate a greater difference in network conditions faced by different devices, and the correction value should be reduced to avoid video distortion or instability due to overcorrection.
Signal fluctuation data of a plurality of devices in the same local area network can be systematically collected and analyzed, the similarity between the devices is evaluated, and the video communication quality is optimized or the network setting is adjusted according to the similarity.
The calculation formula of the similarity is as follows:
Signal fluctuation values are calculated separately from the plurality of signal fluctuation data; the formula is the same as the signal fluctuation value formula given above.
Similarity = [1 − (standard deviation of the signal fluctuation values / average of the signal fluctuation values)] × 100%. When the standard deviation is small relative to the average value, the similarity approaches 100%, indicating that the signal fluctuations of the different devices are consistent; when the standard deviation is large relative to the average value, the similarity decreases, indicating that the signal fluctuations differ greatly.
The ratio of the standard deviation of the signal fluctuation values to their average value represents the degree of dispersion of the data: the higher this ratio, the lower the similarity.
For example, assume the signal fluctuation values of a plurality of devices are 0.5, 0.6, 0.55, 0.58 and 0.62;
the population standard deviation is calculated to be approximately 0.042 and the average value 0.57;
calculated similarity = 1 − 7.36% ≈ 92.6%.
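The worked example can be reproduced as follows (population standard deviation, matching the 1/N fluctuation formula above):

```python
import numpy as np

values = np.array([0.5, 0.6, 0.55, 0.58, 0.62])
std = np.std(values)                        # population standard deviation, ~0.042
mean = np.mean(values)                      # 0.57
similarity = (1 - std / mean) * 100         # ~92.6 %
print(f"std={std:.4f}, mean={mean:.2f}, similarity={similarity:.2f}%")
```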
The embodiment of the application also discloses a video background processing system based on the network signal intensity, which comprises a processor, wherein the processor runs a program of the video background processing method based on the network signal intensity.
The embodiment of the application also discloses a storage medium which stores the program of the video background processing method based on the network signal strength.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.