
CN118646930B - Video background processing method, system and storage medium based on network signal strength - Google Patents


Info

Publication number
CN118646930B
CN118646930B
Authority
CN
China
Prior art keywords
value
network
image
signal
fluency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411125439.7A
Other languages
Chinese (zh)
Other versions
CN118646930A (en)
Inventor
丁永建
王辉
王举
马鹏山
滕越
詹登峰
魏平花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Happy Network Technology Co ltd
Original Assignee
Zhejiang Happy Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Happy Network Technology Co ltd filed Critical Zhejiang Happy Network Technology Co ltd
Priority to CN202411125439.7A priority Critical patent/CN118646930B/en
Publication of CN118646930A publication Critical patent/CN118646930A/en
Application granted granted Critical
Publication of CN118646930B publication Critical patent/CN118646930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44227Monitoring of local network, e.g. connection or bandwidth variations; Detecting new devices in the local network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H04N21/8153Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics comprising still images, e.g. texture, background image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract


The present application relates to the technical field of video background processing, and discloses a video background processing method, system and storage medium based on network signal strength. The method comprises: acquiring network signal data and communication picture images in real time during instant video communication; extracting a signal strength value from the network signal data; calculating the fluctuation value of the signal strength value over the most recent time period; if the fluctuation value is greater than a preset reference fluctuation value, extracting a static background image and a dynamic picture image from the communication picture image; calculating a network fluency value from the signal strength value and the fluctuation value; and if the network fluency value is higher than a preset reference fluency value, adjusting the blurring degree of the static background image in positive correlation with the first difference between the network fluency value and the reference fluency value, and adjusting the reduction ratio of the moving distance of the dynamic picture image in inverse correlation with that first difference. The present application can cope with network signal fluctuations more effectively and improve the user experience of online video communication.

Description

Video background processing method, system and storage medium based on network signal intensity
Technical Field
The present application relates to the field of video background processing technologies, and in particular, to a method, a system, and a storage medium for processing a video background based on network signal strength.
Background
One common problem in current online video communication technology is that the video communication process is often susceptible to interference from network signal strength. Because the network environment is complex and changeable, various factors, including bandwidth limitation, network congestion and signal attenuation, can change the network signal in real time. This variation directly affects the stability and smoothness of online video communication.
In particular, when the network signal strength drops or the network is congested, the transmission of video data is affected, resulting in stuttering, delay and even frame loss in the pictures and background of online video communication. This not only reduces the real-time performance and clarity of video communication, but also severely affects the user's communication experience and satisfaction.
Disclosure of Invention
In order to ensure the fluency of online video communication and alleviate the problem of video communication blocking, the application provides a video background processing method, a system and a storage medium based on network signal strength.
In a first aspect, the present application provides a video background processing method based on network signal strength, which adopts the following technical scheme:
a video background processing method based on network signal intensity comprises the following steps:
Based on instant video communication, network signal data and communication picture images are obtained in real time;
Extracting a signal intensity value from the network signal data;
calculating a signal fluctuation value of the signal intensity value in the latest first set time period;
if the signal fluctuation value is larger than a preset reference fluctuation value, extracting a static background image and a dynamic picture image from the communication picture image;
calculating a network fluency value according to the signal intensity value and the signal fluctuation value;
If the network fluency value is higher than a preset reference fluency value, adjusting the blurring degree of the static background image according to a first difference value between the network fluency value and the reference fluency value; blur degree = base blur value + (first difference / reference fluency maximum × maximum blur increment); the base blur value, the reference fluency maximum and the maximum blur increment are preset values, and the reference fluency maximum is the maximum possible value of the network fluency;
Reducing the moving distance of the dynamic picture image according to the first difference value; moving distance reduction ratio = 1 - (first difference / reference fluency maximum × scaling factor); wherein the scaling factor is a coefficient for adjusting the reduction scale.
By adopting the above technical scheme, network signals change in real time during online video communication; when the network is congested, the picture and background of the communication also stutter. The picture and background are therefore given edge transitions or background recognition based on the network signal strength, adaptively adding display characteristics to compensate for the stuttering of the picture and background. This video background processing scheme can effectively cope with network signal fluctuations and improve the user experience of online video communication.
Optionally, the method further comprises the steps of:
If the static background image and the dynamic picture image are extracted from the communication picture image, then within the most recent second set time period, calculating the displacement of the static background image within the window as a background displacement speed value, and calculating the relative displacement between the dynamic picture image and the static background image as a picture displacement speed value;
calculating a dynamic speed value according to the background displacement speed and the picture displacement speed;
calculating an edge smooth value according to the dynamic speed value and the signal intensity value;
and adjusting the blurring degree of the interface area between the static background image and the dynamic picture image according to the edge smoothing value.
By adopting the technical scheme, the processing of the video picture dynamic property when the network signal changes is further refined, and the concept of edge smoothing is introduced to optimize the transition effect between the static background and the dynamic picture.
Optionally, the method further comprises the steps of:
analyzing pixel changes between adjacent frames of the dynamic picture image, and predicting to generate an intermediate frame to be inserted between original frames;
Adjusting the number of the inserted frames of the intermediate frames of the dynamic picture image according to the edge smoothing value;
the higher the edge smoothing value, the more interpolated frames are inserted; the lower the edge smoothing value, the fewer interpolated frames are inserted;
and when the network fluency value is lower than a set value, reducing the number of interpolated frames by a set amount.
By adopting the technical scheme, the number of the inserted frames of the dynamic picture images is dynamically adjusted according to the edge smooth value, so that the smoothness and the look and feel of online video communication can be further improved, and meanwhile, the limitation of network conditions and equipment performance is considered, so that better user experience is realized.
Optionally, the method further comprises the steps of:
according to the edge smooth value, the deformation degree of the static background image is adjusted;
The farther the static background image is from the dynamic picture image, the smaller its deformation degree; the closer it is to the dynamic picture image, the larger its deformation degree;
And the deformation direction of the static background image is matched with the movement direction of the dynamic picture image, wherein the included angle between the vector of the deformation direction and the vector of the movement direction is an acute angle.
By adopting the technical scheme, the deformation degree of the static background image is finely adjusted according to the factors such as the edge smooth value, the distance, the motion direction and the like, so that the visual effect and the user experience of online video communication can be further improved.
Optionally, the method further comprises the steps of:
adjusting the gain value of the deformation degree according to the network strength value, wherein the lower the network strength value is, the smaller the gain value of the deformation degree is; the higher the network strength value is, the larger the gain value of the deformation degree is;
and the magnitude of the vector included angle is adjusted in positive correlation with the magnitude of the network intensity value.
By adopting the technical scheme, the network strength value reflects the stability and the speed of the current network connection. In the case of poor network conditions, high computational complexity image processing operations, such as distortion of static background images, may lead to degradation of video quality due to bandwidth limitations or delays.
Optionally, the method further comprises the steps of:
Extracting a graphic trunk outline in the static background image;
according to the graphic trunk outline, matching a corresponding correction graphic from a preset graphic library;
and placing the correction graph between the static background image and the dynamic picture image, and smoothing the edges of the static background image and the dynamic picture image.
By adopting the technical scheme, the quality and the efficiency of image synthesis can be obviously improved, and a finer and natural image processing effect is realized.
Optionally, the calculation formula of the signal fluctuation value is as follows:
signal fluctuation value = sqrt((1/N) × Σ(xi − μ)²); where xi is the signal strength value at each time point within the first set time period, μ is the average of these signal strength values, and N is the number of data points;
Acquiring signal fluctuation data of a plurality of devices based on a plurality of networked devices in the same local area network and the same instant video communication channel;
Calculating the similarity among a plurality of signal fluctuation data, and adjusting the correction value of the correction graph according to the similarity;
the higher the similarity is, the larger the correction value is; the lower the similarity, the smaller the correction value.
By adopting the technical scheme, the signal fluctuation data of a plurality of devices in the same local area network can be systematically collected and analyzed, the similarity between the devices is evaluated, and the video communication quality is optimized or the network setting is adjusted according to the similarity.
Optionally, the calculation formula of the similarity is as follows:
respectively calculating signal fluctuation values according to the plurality of signal fluctuation data;
similarity = (1 − standard deviation of the signal fluctuation values / average of the signal fluctuation values) × 100%.
By adopting the above technical scheme, the ratio of the standard deviation to the average of the signal fluctuation values (the coefficient of variation) represents the degree of dispersion of the data: the higher the dispersion, the lower the similarity.
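Read as formulas, the fluctuation value is the population standard deviation of the strength readings, and the similarity is one minus the coefficient of variation, expressed as a percentage. A minimal Python sketch (function names and sample readings are illustrative, not from the patent):

```python
import math

def signal_fluctuation(strengths):
    # Population standard deviation of the signal strength readings
    # sampled over the first set time period.
    n = len(strengths)
    mu = sum(strengths) / n
    return math.sqrt(sum((x - mu) ** 2 for x in strengths) / n)

def similarity(fluctuation_values):
    # Similarity across devices on the same LAN: one minus the
    # coefficient of variation of their fluctuation values, as a percent.
    mean = sum(fluctuation_values) / len(fluctuation_values)
    std = signal_fluctuation(fluctuation_values)  # same formula reused
    return (1 - std / mean) * 100
```

Identical fluctuation values across devices give a similarity of 100%, and a larger spread drives the value down, matching the stated rule that higher dispersion means lower similarity.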
In a second aspect, the present application provides a video background processing system based on network signal strength, which adopts the following technical scheme:
A video background processing system based on network signal strength, comprising a processor, wherein the processor runs a program of the video background processing method based on network signal strength.
In a third aspect, the present application provides a storage medium, which adopts the following technical scheme:
a storage medium storing a program of the video background processing method based on network signal strength as claimed in any one of the above.
In summary, the present application includes at least one of the following beneficial technical effects:
Improving video communication quality: by monitoring the network signal intensity of each device in the video communication process in real time and calculating the fluctuation value of the network signal intensity, the system can timely discover and respond to the change of the network condition. When the signal fluctuation is large, corresponding adjustment measures can be adopted to reduce video jamming, delay or packet loss caused by network instability, so that the overall quality of video communication is improved.
Optimizing network resource configuration: based on the signal fluctuation data of a plurality of devices, the system can evaluate the distribution condition of network resources in the whole local area network. When some devices are found to have large signal fluctuation due to network congestion, the network resource allocation strategy can be dynamically adjusted, so that more bandwidth or priority is provided for the devices, and the smoothness of video communication is ensured. Meanwhile, unnecessary resource waste can be avoided, and the overall utilization efficiency of network resources is improved.
Enhancing user experience: by reducing the problems of jamming, delay and the like in video communication, a user can obtain smoother and natural video communication experience. In addition, the system can automatically adjust the display effect of the video interface according to the change of the network signal intensity so as to adapt to different network environments and further improve the watching experience of users.
Intelligent fault prediction and prevention are realized: by long-term monitoring and analysis of signal fluctuation data, the system can identify early signs of network failure or performance bottlenecks. Once a potential problem is found, the system may take precautions in advance to avoid network failure from severely affecting video communications. This intelligent fault prediction and prevention capability helps to improve the stability and reliability of the system.
Support multi-device co-operation: within the same local area network, different devices may have different network signal strengths and performance characteristics. The processing method based on the signal fluctuation data can support cooperative work among multiple devices, and ensures that each device can keep a relatively consistent performance level in the video communication process by optimizing resource allocation and adjusting communication strategies, so that the cooperative efficiency and stability of the whole system are improved.
Drawings
Fig. 1 is a step diagram of a video background processing method based on network signal strength.
Fig. 2 is a step diagram of introducing the concept of edge smoothing to optimize the transition between static background and dynamic picture.
Fig. 3 is a step diagram of dynamically adjusting the number of interpolation frames of a moving picture image according to an edge smoothing value.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings.
In the description of the present specification, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiment of the application discloses a video background processing method based on network signal strength, which refers to fig. 1 and comprises the following steps:
Based on instant video communication, network signal data and communication picture images are obtained in real time;
Extracting a signal intensity value from network signal data;
calculating a signal fluctuation value of the signal intensity value in the latest first set time period; the signal fluctuation value may reflect the current network state.
And if the signal fluctuation value is larger than the preset reference fluctuation value, extracting a static background image and a dynamic picture image from the communication picture image. And determining whether to separate according to the signal fluctuation value so as to reduce unnecessary calculation overhead. Among them, image segmentation can use image processing techniques such as background difference, inter-frame difference, deep learning model, etc. to effectively distinguish static background images from dynamic picture images.
And calculating a network fluency value according to the signal strength value and the signal fluctuation value so as to more comprehensively reflect the network quality.
If the network fluency value is higher than the preset reference fluency value, the blurring degree of the static background image is adjusted according to the first difference between the network fluency value and the reference fluency value: blur degree = base blur value + (first difference / reference fluency maximum × maximum blur increment). That is, the larger the first difference, the more blurred the static background image; the smaller the first difference, the clearer the static background image. The base blur value, the reference fluency maximum and the maximum blur increment are preset values, and the reference fluency maximum is the maximum possible value of the network fluency. By dynamically adjusting the degree of blurring of the static background, visual discomfort to the user can be reduced when the network is poor.
The moving distance of the dynamic picture image is reduced according to the first difference: moving distance reduction ratio = 1 - (first difference / reference fluency maximum × scaling factor), where the scaling factor is a coefficient for adjusting the reduction scale. That is, the larger the first difference, the greater the reduction of the moving distance of the dynamic picture image; the smaller the first difference, the smaller the reduction. Reducing the moving distance of the dynamic picture image lessens the jumpy feel of the picture when the network is congested, making the video appear smoother.
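The two adjustments above can be written directly from the stated formulas. A small Python sketch (the parameter names and the preset values in the usage example are illustrative, not values from the patent):

```python
def blur_degree(fluency, ref_fluency, base_blur, max_fluency, max_blur_delta):
    # Blur degree = base blur value
    #   + (first difference / reference fluency maximum) * maximum blur increment
    first_diff = fluency - ref_fluency
    return base_blur + first_diff / max_fluency * max_blur_delta

def move_reduction_ratio(fluency, ref_fluency, max_fluency, scaling_factor):
    # Moving distance reduction ratio
    #   = 1 - (first difference / reference fluency maximum) * scaling factor
    first_diff = fluency - ref_fluency
    return 1 - first_diff / max_fluency * scaling_factor
```

A larger first difference raises the blur degree and lowers the ratio applied to the movement, i.e. the dynamic picture image moves a shorter distance per frame.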
In the process of online video communication, network signals can change in real time, when a network is blocked, pictures and backgrounds of online video communication can also be blocked, so that the pictures and the backgrounds are subjected to edge transition or background identification based on the strength of the network signals, display characteristics are adaptively increased for the pictures and the backgrounds, and the defects of blocking of the pictures and the backgrounds are overcome. The video background processing scheme can be used for effectively coping with network signal fluctuation and improving the user experience of online video communication.
Referring to fig. 2, the method further comprises the steps of:
If the static background image and the dynamic picture image are extracted from the communication picture image, the displacement of the static background image within the window during the most recent second set time period is calculated as the background displacement speed value. The system first separates static background images and dynamic picture images from consecutive video frames through algorithms such as background differencing and Gaussian mixture models; a static background image is, for example, a fixed view of a park entrance, while dynamic picture images include pedestrians walking and vehicles passing by. Assume the second set time period is the most recent 5 seconds. During these 5 seconds, the system observes that the static background image shifts slightly within the window because of natural factors such as the camera being nudged or wind moving the trees. By comparing the background images at the start and end of the 5 seconds, the average displacement of the background image is calculated; this is the background displacement speed value, which reflects the overall moving speed of the background image.
And calculating the relative displacement between the dynamic picture image and the static background image as a picture displacement speed value so as to reflect the moving speed of the dynamic object in the video picture. For a dynamic picture image, such as a pedestrian, the system calculates the relative displacement amount between the dynamic picture image and the static background image in the same time period. For example, a pedestrian walks from one side of the picture to the other side, and the relative displacement amount between the pedestrian and the background image is calculated in the process, namely the picture displacement speed value, which reflects the moving speed of the dynamic object in the video picture.
Calculating a dynamic speed value according to the background displacement speed and the picture displacement speed; the dynamic speed value can be obtained by a weighted average calculation method, and reflects the overall dynamic property of the video picture. For example, if the background displacement is small but the object in the picture moves rapidly, the dynamic speed value may be biased toward the picture displacement speed value.
Calculating an edge smooth value according to the dynamic speed value and the signal intensity value; the edge smoothing value is calculated based on the dynamic speed value and the signal strength value. When the dynamic speed value is higher, which means that more motion elements exist in the picture, edge smoothing is required to be added to reduce visual impact caused by motion; meanwhile, when the signal strength value is low, edge smoothing is also prone to be increased so as to make up for the unsmooth feel caused by network jamming.
And adjusting the blurring degree of the boundary area between the static background image and the dynamic picture image according to the edge smoothing value. If the edge smoothing value is larger, the blurring degree of the boundary area is higher, so that a smoother transition effect is realized; otherwise, the blurring degree is lower, and the definition of the picture is maintained.
The processing of the video picture dynamics when the network signal changes is further refined, and the concept of edge smoothing is introduced to optimize the transitional effect between the static background and the dynamic picture.
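The patent does not fix an exact formula for the dynamic speed value or the edge smoothing value, only their directions of influence. One plausible Python sketch (the weighted average and the linear signal-strength term are assumptions):

```python
def dynamic_speed(bg_speed, pic_speed, pic_weight=0.7):
    # Weighted average of background and picture displacement speeds;
    # weighting the picture speed more heavily biases the result toward
    # fast-moving foreground objects, as the text describes.
    return pic_weight * pic_speed + (1 - pic_weight) * bg_speed

def edge_smoothing(dyn_speed, signal_strength, max_strength=100.0):
    # Grows with the dynamic speed value and shrinks as the signal
    # strength improves (illustrative linear form).
    return dyn_speed * (1 - signal_strength / max_strength)
```

The resulting edge smoothing value then drives the blurring of the boundary area: more motion or a weaker signal yields a larger value and a smoother transition.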
Referring to fig. 3, the method further comprises the steps of:
Analyzing pixel changes between adjacent frames of the dynamic picture image, and predicting and generating intermediate frames to be inserted between the original frames. By analyzing pixel differences (i.e., motion vectors) between successive frames with frame rate boosting (Frame Rate Upscaling, FRUS) or motion interpolation techniques, an algorithm can predict the intermediate pictures that should exist between these frames and generate them (also referred to as interpolated frames). These interpolated frames are then inserted between the original frames, increasing the frame rate without changing the video content and making the video appear smoother.
Adjusting the number of interpolated frames of the dynamic picture image according to the edge smoothing value: the edge smoothing value is an indicator of edge sharpness in the image. In video processing, the sharpness of edges such as object boundaries is critical to overall visual quality, so adjusting the number of interpolated frames according to the edge smoothing value is an optimization strategy that improves smoothness while maintaining video clarity.
The higher the edge smoothing value, the more interpolated frames; the lower the edge smoothing value, the fewer interpolated frames. A high edge smoothing value generally means the image is blurred, and adding interpolated frames helps fill in detail and makes motion look smoother; a low edge smoothing value indicates sharp edges, where too many interpolated frames could introduce unnecessary blurring or distortion, so their number should be reduced.
When the network fluency value is lower than a set value, the number of interpolated frames is reduced by a set amount. This step takes the actual network transmission conditions into account. In streaming playback, the smoothness of video playback is directly affected by network quality. A network fluency value below the set value indicates a poor network environment, and continuing to play at a high frame rate may cause buffering or stuttering. Reducing the number of interpolated frames lowers the bandwidth requirement and transmission pressure, improving playback stability to a certain extent.
Dynamically adjusting the number of inserted frames of the dynamic picture image according to the edge smoothing value can further improve the smoothness and appearance of online video communication, while also respecting the limitations of network conditions and device performance, thereby achieving a better user experience.
In order to further enhance the visual effect and user experience of the online video communication, the method further comprises the following steps:
Adjusting the degree of deformation of the static background image according to the edge smoothing value; deformation here means that different parts of the image are deformed to different degrees. The edge smoothing value is used both to adjust the number of interpolated frames of the dynamic picture image and to control the degree of deformation of the static background image. Here, the edge smoothing value serves as a comprehensive index reflecting the sharpness and clarity of the entire image. A lower edge smoothing value, i.e. a clearer image, calls for a smaller degree of deformation to preserve the authenticity of the background; a higher edge smoothing value, i.e. a blurred image, allows greater deformation to simulate a more natural dynamic effect.
The farther the static background image is from the dynamic picture image, the smaller its degree of deformation; the closer the static background image is to the dynamic picture image, the greater its degree of deformation. This follows the principle of visual perception: distant objects appear to change relatively little, while nearby objects change more visibly. Therefore, when the static background image is far from the dynamic picture image, such as a moving object or person, its deformation should be correspondingly reduced to maintain the stability of the background; when the background image is closer to the dynamic picture, the degree of deformation should be increased to simulate the background changes caused by the motion of the dynamic object.
The deformation direction of the static background image is matched with the motion direction of the dynamic picture image, where the angle between the deformation-direction vector and the motion-direction vector is acute. This ensures harmony and consistency between the background deformation and the dynamic picture. An acute angle between the two vectors means that the trend of the background deformation is consistent with, or assists, the motion of the dynamic picture. This helps enhance the immersion and realism of the video, so that viewers can more easily accept and immerse themselves in the scene it presents.
The degree of deformation of the static background image is fine-tuned according to factors such as the edge smoothing value, the distance, and the motion direction.
To help maintain overall consistency and viewability of the video pictures, the method further comprises the steps of:
Adjusting the gain value of the degree of deformation according to the network strength value: the lower the network strength value, the smaller the gain value of the degree of deformation; the higher the network strength value, the greater the gain value. A low network strength value means poor transmission conditions, such as delay and packet loss. In this case, reducing the gain value of the degree of deformation reduces the consumption of video processing resources and lowers the risk of video quality degradation due to network fluctuations, helping to maintain the basic fluency of the video. Conversely, when the network strength value is high, transmission conditions are good and video processing resources are relatively abundant; increasing the gain value of the degree of deformation can further improve the visual effect of the video and make the interaction between the dynamic picture and the static background more natural and lifelike.
The magnitude of the network strength value is positively correlated with the magnitude of the adjusted vector angle. The network strength value not only affects the gain value of the degree of deformation, but also directly affects the angle between the deformation-direction and motion-direction vectors. When the network strength value is higher, a larger angle between the deformation direction and the motion direction is allowed, meaning the deformation can more freely follow or assist the motion of the dynamic picture, creating a more vivid and rich visual effect. When the network strength value is low, the angle between the two vectors should be reduced to ensure the deformation does not become abrupt or unstable due to network fluctuations.
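One way to realize both positive correlations above is a linear mapping from a normalized network strength value to a gain and a maximum allowed angle (kept below 90° so the angle stays acute). This is a hypothetical sketch; the bounds and the linear form are illustrative assumptions, not values from the patent.

```python
def deformation_gain(network_strength: float,
                     min_gain: float = 0.2, max_gain: float = 1.0,
                     max_angle_deg: float = 80.0) -> tuple:
    """Map network strength in [0, 1] to (deformation gain, max angle in degrees).

    Both outputs scale positively with strength: a weak network yields a
    small gain and a tight angle; a strong network yields a large gain and
    a wider (but still acute) allowed angle.
    """
    s = min(max(network_strength, 0.0), 1.0)  # clamp to the expected range
    gain = min_gain + s * (max_gain - min_gain)
    max_angle = s * max_angle_deg
    return gain, max_angle
```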
In order to improve the quality and efficiency of image synthesis and to achieve finer and natural image processing, the method further comprises the steps of:
Extracting a graphic trunk outline from the static background image; the contours of the main graphics are identified and extracted from the static background image using image processing algorithms such as edge detection and contour extraction. These contours represent the most important visual elements in the image and provide the basis for subsequent pattern matching and correction.
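As a minimal sketch of the edge-detection idea behind contour extraction, the fragment below thresholds forward-difference gradients to produce a boolean edge map. A production pipeline would use Canny edge detection plus contour tracing (e.g. OpenCV's findContours); the function name and threshold here are illustrative assumptions.

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Crude edge extraction: forward-difference gradients, thresholded.

    Returns a boolean map one row/column smaller than the input, True where
    the local intensity gradient exceeds the threshold.
    """
    gy = np.abs(np.diff(gray.astype(float), axis=0))[:, :-1]  # vertical gradient
    gx = np.abs(np.diff(gray.astype(float), axis=1))[:-1, :]  # horizontal gradient
    return (gx + gy) > threshold
```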
According to the graphic trunk outline, a corresponding correction graphic is matched from a preset graphic library. After the graphic trunk outline is extracted, the system uses the outline as a query condition to search and match within a preset graphic library. The graphic library should contain a variety of predefined graphic templates, which may be standard geometric shapes, common object contours, or specific graphics for a particular application scenario. By comparing characteristics of the contours such as shape, size, and scale, the system finds the correction graphic that most closely matches the extracted outline.
The correction graphic is placed between the static background image and the dynamic picture image, and the edges of the static background image and the dynamic picture image are smoothed. After the matching correction graphic is found, the next step is to blend it into the video frame, i.e. to place the correction graphic in an appropriate position between the static background image and the dynamic picture image to create visual consistency and harmony. Edge smoothing is critical in this step: introducing the correction graphic may create distinct boundaries or seams between images, which can distract the viewer and reduce the viewing experience. It is therefore necessary to smooth the edges between the correction graphic and the static background and dynamic picture to reduce or eliminate the visibility of the seams. This can be achieved with image fusion techniques such as feathering and fading, which allow the correction graphic to join seamlessly with the surrounding image.
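The feathering mentioned above can be sketched as alpha blending with a softened mask. This is a simplified illustration: a repeated 3x3 box blur stands in for the Gaussian feathering a real pipeline would use, and all names are hypothetical.

```python
import numpy as np

def feather_blend(base: np.ndarray, overlay: np.ndarray, mask: np.ndarray,
                  feather: int = 3) -> np.ndarray:
    """Blend an overlay (e.g. a matched correction graphic) onto a base image.

    The binary mask is softened by repeated 3x3 box blurs so the seam fades
    out gradually instead of cutting hard, then used as a per-pixel alpha.
    """
    soft = mask.astype(float)
    for _ in range(feather):  # each pass spreads and softens the mask edge
        padded = np.pad(soft, 1, mode="edge")
        soft = sum(padded[i:i + soft.shape[0], j:j + soft.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return (1.0 - soft) * base + soft * overlay
```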
The calculation formula of the signal fluctuation value is as follows:
Signal fluctuation value = sqrt{ (1/N) × Σ(xi − μ)² }; where xi is the signal intensity value at each time point in the first set period, μ is the average of these signal intensity values, and N is the number of data points. The signal fluctuation value is in essence the standard deviation, measuring the degree of dispersion or fluctuation of the signal intensity.
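The formula above is the population standard deviation and can be computed directly; the function name below is an illustrative choice.

```python
import math

def signal_fluctuation(samples) -> float:
    """Population standard deviation of the signal-strength samples taken
    over the first set period: sqrt((1/N) * sum((xi - mu)^2))."""
    n = len(samples)
    mu = sum(samples) / n
    return math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
```

A perfectly steady signal yields 0, and larger swings in signal strength yield proportionally larger fluctuation values.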
And acquiring signal fluctuation data of a plurality of devices based on the plurality of networked devices in the same local area network and the same instant video communication channel.
Calculating the similarity between the plurality of signal fluctuation data, and adjusting the correction value of the correction graphic according to the similarity. The similarity calculation may be based on various statistical measures, such as the correlation coefficient, or on distance measures such as Euclidean distance or Manhattan distance. Devices with high similarity have similar signal fluctuation patterns and are affected by similar network conditions.
The higher the similarity, the larger the correction value; the lower the similarity, the smaller the correction value. Higher similarity means the devices operate under the same network conditions, so the correction value can be increased appropriately to enhance the visual effect and stability of the video. Conversely, lower similarity may indicate larger differences in the network conditions faced by different devices, so the correction value should be reduced to avoid video distortion or instability caused by overcorrection.
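One simple way to realize this monotone relationship is a linear mapping from similarity to a correction scale. This is a hypothetical sketch; the base value and span are illustrative assumptions, not values from the patent.

```python
def correction_gain(similarity_value: float,
                    base: float = 1.0, span: float = 0.5) -> float:
    """Scale the correction graphic's correction value with inter-device
    similarity: high similarity -> larger correction, low -> smaller.

    Maps similarity in [0, 1] linearly onto [base - span, base + span].
    """
    s = min(max(similarity_value, 0.0), 1.0)  # clamp to the expected range
    return base + (s - 0.5) * 2 * span
```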
In this way, signal fluctuation data of a plurality of devices in the same local area network can be systematically collected and analyzed, the similarity between devices evaluated, and the video communication quality optimized or the network settings adjusted accordingly.
The calculation formula of the similarity is as follows:
The signal fluctuation values are calculated from the plurality of signal fluctuation data respectively, using the signal fluctuation value formula given above.
Similarity = 1 − (standard deviation of the signal fluctuation values / average of the signal fluctuation values × 100%). When the standard deviation is small relative to the average, the similarity is close to 1, indicating that the signal fluctuations of different devices are consistent; when the standard deviation is large relative to the average, the similarity decreases, indicating that the signal fluctuations differ considerably.
The ratio of the standard deviation of the signal fluctuation values to their average represents the degree of dispersion of the data: the higher this ratio, the lower the similarity.
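The similarity formula can be computed directly from the per-device fluctuation values; the function name below is an illustrative choice (the result is returned as a fraction rather than a percentage).

```python
def similarity(fluctuation_values) -> float:
    """Similarity = 1 - std(fluctuations) / mean(fluctuations).

    Uses the population standard deviation, matching the signal fluctuation
    value formula. Values near 1 mean the devices see near-identical
    signal fluctuation.
    """
    n = len(fluctuation_values)
    mean = sum(fluctuation_values) / n
    std = (sum((v - mean) ** 2 for v in fluctuation_values) / n) ** 0.5
    return 1.0 - std / mean
```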
For example, assume signal fluctuation values of a plurality of devices: 0.5, 0.6, 0.55, 0.58, 0.62;
The standard deviation is calculated to be approximately 0.0420 and the average to be 0.57;
calculated similarity = 1 − 7.36% ≈ 92.64%.
The embodiment of the application also discloses a video background processing system based on the network signal intensity, which comprises a processor, wherein the processor runs a program of the video background processing method based on the network signal intensity.
The embodiment of the application also discloses a storage medium which stores the program of the video background processing method based on the network signal strength.
While embodiments of the present application have been shown and described above, it should be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (9)

1. The video background processing method based on the network signal strength is characterized by comprising the following steps:
Based on instant video communication, network signal data and communication picture images are obtained in real time;
Extracting a signal intensity value from the network signal data;
calculating a signal fluctuation value of the signal intensity value in the latest first set time period;
if the signal fluctuation value is larger than a preset reference fluctuation value, extracting a static background image and a dynamic picture image from the communication picture image;
calculating a network fluency value according to the signal intensity value and the signal fluctuation value;
If the network fluency value is higher than a preset reference fluency value, adjusting the blurring degree of the static background image according to a first difference value between the network fluency value and the reference fluency value; degree of blur = base blur value + (first difference/reference fluent value maximum x maximum blur delta); the basic fuzzy value, the maximum value of the reference fluency value and the maximum fuzzy increment are preset values, and the maximum value of the reference fluency value is the maximum value of the network fluency;
Reducing the moving distance of the dynamic picture image according to the first difference value; moving distance reduction ratio=1- (first difference value/reference fluency value maximum value x scaling factor); wherein the scaling factor is a coefficient for adjusting the reduction scale;
If the static background image and the dynamic image are extracted from the communication image, calculating the displacement of the static background image in the window as a background displacement speed value and calculating the relative displacement of the dynamic image and the static background image as an image displacement speed value in a second latest set time period;
calculating a dynamic speed value according to the background displacement speed and the picture displacement speed;
calculating an edge smooth value according to the dynamic speed value and the signal intensity value;
and adjusting the blurring degree of the interface area between the static background image and the dynamic picture image according to the edge smoothing value.
2. The method for processing video background based on network signal strength according to claim 1, further comprising the steps of:
analyzing pixel changes between adjacent frames of the dynamic picture image, and predicting to generate an intermediate frame to be inserted between original frames;
Adjusting the number of the inserted frames of the intermediate frames of the dynamic picture image according to the edge smoothing value;
the higher the edge smoothing value is, the more the number of the inserted frames is; the lower the edge smoothing value is, the smaller the number of the inserted frames is;
And when the network fluency value is lower than the set value, reducing the number of the inserted frames by a set amount.
3. The method for processing video background based on network signal strength according to claim 2, further comprising the steps of:
according to the edge smooth value, the deformation degree of the static background image is adjusted;
The farther the static background image is from the dynamic picture image, the smaller the deformation degree of the static background image is, the closer the static background image is from the dynamic picture image, and the larger the deformation degree of the static background image is;
And the deformation direction of the static background image is matched with the movement direction of the dynamic picture image, wherein the included angle between the vector of the deformation direction and the vector of the movement direction is an acute angle.
4. A method of video background processing based on network signal strength according to claim 3, the method further comprising the steps of:
adjusting the gain value of the deformation degree according to the network strength value, wherein the lower the network strength value is, the smaller the gain value of the deformation degree is; the higher the network strength value is, the larger the gain value of the deformation degree is;
and the magnitude of the vector included angle is adjusted in positive correlation with the magnitude of the network intensity value.
5. The method for processing video background based on network signal strength according to claim 1, further comprising the steps of:
Extracting a graphic trunk outline in the static background image;
according to the graphic trunk outline, matching a corresponding correction graphic from a preset graphic library;
and placing the correction graph between the static background image and the dynamic picture image, and smoothing the edges of the static background image and the dynamic picture image.
6. The method for processing video background based on network signal strength according to claim 5, wherein the signal fluctuation value is calculated as follows:
Signal fluctuation value = sqrt{ (1/N) × Σ(xi − μ)² }; where xi is the signal intensity value at each time point in the first set period, μ is the average of these signal intensity values, and N is the number of data points; sqrt{} is the square root function; Σ() is a summation function;
Acquiring signal fluctuation data of a plurality of devices based on a plurality of networked devices in the same local area network and the same instant video communication channel;
Calculating the similarity among a plurality of signal fluctuation data, and adjusting the correction value of the correction graph according to the similarity;
the higher the similarity is, the larger the correction value is; the lower the similarity, the smaller the correction value.
7. The method for processing video background based on network signal strength according to claim 6, wherein the similarity is calculated as follows:
respectively calculating signal fluctuation values according to the plurality of signal fluctuation data;
similarity = 1-standard deviation of signal fluctuation values/average value of signal fluctuation values x 100%.
8. A network signal strength based video background processing system comprising a processor in which the steps of the network signal strength based video background processing method of any one of claims 1-7 are performed.
9. A storage medium having stored therein a program which when executed by a processor performs the steps of the network signal strength based video background processing method of any one of claims 1-7.
CN202411125439.7A 2024-08-16 2024-08-16 Video background processing method, system and storage medium based on network signal strength Active CN118646930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411125439.7A CN118646930B (en) 2024-08-16 2024-08-16 Video background processing method, system and storage medium based on network signal strength

Publications (2)

Publication Number Publication Date
CN118646930A CN118646930A (en) 2024-09-13
CN118646930B true CN118646930B (en) 2024-11-12

Family

ID=92668235


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519396A (en) * 2014-12-15 2015-04-15 四川长虹电器股份有限公司 Network connection method and television
CN112805990A (en) * 2018-11-15 2021-05-14 深圳市欢太科技有限公司 Video processing method and device, electronic equipment and computer readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100847143B1 (en) * 2006-12-07 2008-07-18 한국전자통신연구원 Silhouette based object behavior analysis system and method of real-time video
US8085855B2 (en) * 2008-09-24 2011-12-27 Broadcom Corporation Video quality adaptation based upon scenery
CN101887739B (en) * 2010-06-25 2012-07-04 华为技术有限公司 Method and device for synchronizing media play
US9232189B2 (en) * 2015-03-18 2016-01-05 Avatar Merger Sub Ii, Llc. Background modification in video conferencing
CN107454395A (en) * 2017-08-23 2017-12-08 上海安威士科技股份有限公司 A kind of high-definition network camera and intelligent code stream control method
US11915429B2 (en) * 2021-08-31 2024-02-27 Gracenote, Inc. Methods and systems for automatically generating backdrop imagery for a graphical user interface
CN114640886B (en) * 2022-02-28 2023-09-15 深圳市宏电技术股份有限公司 Self-adaptive bandwidth audio/video transmission method, device, computer equipment and medium
CN115225961B (en) * 2022-04-22 2024-01-16 上海赛连信息科技有限公司 No-reference network video quality evaluation method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant