CN106713942B - Video processing method and device - Google Patents
- Publication number
- CN106713942B (application CN201611228075.0A)
- Authority
- CN
- China
- Prior art keywords
- video data
- path
- video
- background
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/762—Media network packet handling at the source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/242—Synchronization processes, e.g. processing of PCR [Program Clock References]
Abstract
The application discloses a video processing method and a video processing device. The method comprises the following steps: acquiring each path of video data and a virtual background corresponding to each path of video data; concurrently processing each path of video data so as to replace the background data contained in each path with its corresponding virtual background; and combining the concurrently processed paths of video data into a video source signal. By implementing the method and the device, the backgrounds of multiple videos can be replaced with virtual backgrounds concurrently, and the background-replaced videos can then be combined into one video source, so that the video content plays smoothly in real time, diversified video content is presented, and the video playing effect is enriched.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method and apparatus.
Background
With the rapid development of network technology, a network live-video system can broadcast the live video stream of an anchor user to multiple audience users on demand. During a live broadcast, the anchor user's client device collects the anchor's live video stream and sends it to a server; the server forwards the stream to the corresponding audience clients through broadcast or multicast technology, and each audience client receives and displays it. To enrich the live picture, the anchor user can shoot the live scene from multiple directions with several cameras, obtaining video content at different angles; these videos are then combined and output to the user to enrich the live content.
To further increase the appeal of the live content, the background in the live video can be replaced to present diversified content. However, if an existing background-replacement scheme replaces the background of the video content shot by each of several cameras separately, and only afterwards synthesizes the live video, the resulting streams can fall out of sync; for example, a stall on any single frame during background replacement can stall the entire live video.
Disclosure of Invention
The application provides a video processing method and a video processing device, which solve the related problems in the prior art.
According to a first aspect of embodiments of the present application, there is provided a video processing method, including the steps of:
acquiring each path of video data and a virtual background corresponding to each path of video data;
by carrying out concurrent processing on each path of video data, replacing background data contained in each path of video data with respective corresponding virtual background;
and combining the video data of each channel after concurrent processing into a video source signal.
In an embodiment, the replacing the background data included in each path of video data with the corresponding virtual background by concurrently processing each path of video data includes:
calling each local thread to carry out concurrent processing on each path of video data, wherein one thread processes one path of video data;
when concurrent processing is performed, the background data included in the video data processed by each thread is replaced with the virtual background corresponding to the video data.
In an embodiment, the obtaining the video data and the virtual background corresponding to the video data includes:
receiving video streams acquired by all the camera devices as video data of all paths, wherein one camera device corresponds to one video data path;
and calling the virtual background corresponding to each camera device to form the virtual background corresponding to each path of video data.
In an embodiment, the replacing the background data included in each path of video data with the corresponding virtual background by concurrently processing each path of video data includes:
by carrying out concurrent processing on each path of video data, compressing each path of video data and the corresponding virtual background into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal;
and replacing the background data contained in each path of compressed signal with the corresponding compressed background to generate each path of video data after concurrent processing.
In one embodiment, after the combining the concurrently processed channels of video data into a video source signal, the method further includes:
and performing size expansion on the video source signals to generate video source signals with the same size as any one path of acquired video data.
In one embodiment, after the combining the concurrently processed channels of video data into a video source signal, the method further includes:
identifying one path of main video data from each path of video data after concurrent processing;
determining other video data subjected to concurrent processing except the main video data as auxiliary video data;
and respectively superposing each path of auxiliary video data to each local area of the main video data to generate a video playing signal.
In one embodiment, the video data is collected by camera devices arranged at different positions of the live scene.
According to a second aspect of embodiments of the present application, there is provided a video processing apparatus including:
the information acquisition module is used for acquiring each path of video data and a virtual background corresponding to each path of video data;
the concurrent processing module is used for replacing background data contained in each path of video data with respective corresponding virtual background by performing concurrent processing on each path of video data;
and the video composition module is used for combining the concurrently processed paths of video data into a video source signal.
In one embodiment, the concurrent processing module comprises:
the thread calling module is used for calling each local thread to carry out concurrent processing on each path of video data, and one thread processes one path of video data;
and the first replacing module is used for replacing the background data contained in the video data processed by each thread with the virtual background corresponding to the video data when concurrent processing is carried out.
In one embodiment, the information acquisition module includes:
the video receiving module is used for receiving video streams acquired by all the camera devices into all paths of video data, and one camera device corresponds to one path of video data;
and the background calling module is used for calling the virtual background corresponding to each camera equipment to form the virtual background corresponding to each video data.
In one embodiment, the concurrent processing module comprises:
the video compression module is used for compressing each path of video data and the corresponding virtual background thereof into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal;
and the second replacing module is used for replacing the background data contained in each path of compressed signal with the corresponding compressed background to generate each path of video data after concurrent processing.
In one embodiment, the apparatus further comprises:
and the size expansion module is used for performing size expansion on the video source signals and generating the video source signals with the same size as any one path of acquired video data.
In one embodiment, the apparatus further comprises:
the main video identification module is used for identifying one path of main video data from each path of video data after concurrent processing;
the auxiliary video determining module is used for determining other video data subjected to concurrent processing except the main video data as auxiliary video data;
and the video overlapping module is used for respectively overlapping each path of auxiliary video data to each local area of the main video data to generate a video playing signal.
In one embodiment, the video data is collected by camera devices arranged at different positions of the live scene.
According to the embodiment of the application, the background data contained in each path of video data is replaced with its corresponding virtual background by concurrently processing each path of video data, and the concurrently processed paths of video data are then combined into a video source signal. The backgrounds of multiple videos can thus be replaced with virtual backgrounds concurrently, and the background-replaced videos combined into one video source, so that the video content plays smoothly in real time, diversified video content is presented, and the video playing effect is enriched.
Drawings
FIG. 1a is a flow chart of one embodiment of a video processing method of the present application;
FIG. 1b is a schematic diagram of parallel processing illustrated herein in accordance with an exemplary embodiment;
fig. 2a is a schematic view of an application scenario of a video processing method according to an embodiment of the present application;
FIG. 2b is a flow chart of another embodiment of a video processing method of the present application;
FIG. 2c is a first schematic diagram of a live preview interface shown in the present application in accordance with an exemplary embodiment;
FIG. 2d is a second schematic diagram of a live preview interface shown in the present application in accordance with an exemplary embodiment;
FIG. 3 is a block diagram of one embodiment of a video processing device of the present application;
fig. 4 is a hardware configuration diagram of the video processing apparatus according to the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The video processing method and apparatus proposed in the present application are described below with reference to specific embodiments and specific application scenarios.
Fig. 1a is a flowchart of an embodiment of a video processing method of the present application, which may include the following steps 101-103:
step 101: and acquiring each path of video data and a virtual background corresponding to each path of video data.
In the embodiment of the application, each path of video data can be acquired by camera devices arranged in different directions within the same shooting area, with one path of video data corresponding to the camera device in one direction. The correspondence between the paths of video data and the virtual backgrounds can be set by the user as needed: multiple paths of video data may share one virtual background, or each path may have its own, and the virtual background can be pre-selected by the user according to the background picture the user wants to present. For example, one path of video data may correspond to a virtual background showing a sunny coast, and another path to a virtual background showing a princess-room scene.
In one embodiment, the obtaining of the video data and the virtual background corresponding to the video data may be performed by:
and receiving video streams acquired by the camera equipment into video data of each path, wherein one camera equipment corresponds to one path of video data.
And calling the virtual background corresponding to each camera device to form the virtual background corresponding to each path of video data.
In this example, the correspondence between each image capturing apparatus and the virtual background may be preset, so that when the virtual background library stores the virtual background, the virtual background may be stored in correspondence with the identifier of each image capturing apparatus, and when the virtual background corresponding to each image capturing apparatus is called, the virtual background stored in correspondence with the identifier of each image capturing apparatus may be searched.
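The lookup described above can be sketched as a small in-memory background library keyed by camera identifier. The camera IDs and file names below are hypothetical placeholders for illustration, not part of the embodiment:

```python
# Hypothetical virtual-background library: each virtual background is stored
# against the identifier of its camera device (names are placeholders).
virtual_background_library = {
    "camera_main": "sunny_coast.png",
    "camera_aux_1": "flower_sea.png",
}

def lookup_background(camera_id, library):
    # Search the virtual background stored in correspondence with the
    # identifier of the given camera device.
    return library[camera_id]
```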
Step 102: and replacing background data contained in each path of video data with the corresponding virtual background by performing concurrent processing on each path of video data.
In the embodiment of the application, when replacing the background, each path of video data is separated from the background data contained in the video data, and then each path of separated video data is respectively merged with the corresponding virtual background.
Separating video data from its background and synthesizing it with a virtual background are among the most processor-intensive operations in video processing. Without parallel processing, the background of each path of video data would be replaced one after another; playing a video source signal composed from those paths would then load the processor heavily, and a failure in the processing of any one path would halt the whole video processing flow, easily leaving the paths out of sync. To replace the backgrounds of all paths in parallel, multiple threads can be used as follows:
and calling each local thread to carry out concurrent processing on each path of video data, wherein one thread processes one path of video data.
When concurrent processing is performed, the background data included in the video data processed by each thread is replaced with the virtual background corresponding to the video data.
As shown in fig. 1b, when local threads are called to concurrently process each path of video data, a processing resource a1 of the video processing process of the present application may be allocated to a plurality of threads a2, each thread has enough processing resources to complete one processing task A3, and one processing task A3 corresponds to concurrent processing of one path of video data.
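As a minimal sketch of this thread-per-path scheme, the Python fragment below assigns one worker thread to each path of video data, so all paths are processed concurrently. The frame and background representations are simplified placeholders, not the actual video structures of the embodiment:

```python
from concurrent.futures import ThreadPoolExecutor

def replace_background(stream_id, frame, virtual_background):
    # Placeholder background replacement: each "frame" is a dict whose
    # "background" field is swapped for the stream's virtual background.
    return {**frame, "background": virtual_background}

def process_streams(frames_by_stream, backgrounds_by_stream):
    # One worker thread per path of video data, as in the method: thread i
    # handles only path i, and all paths are processed concurrently.
    with ThreadPoolExecutor(max_workers=len(frames_by_stream)) as pool:
        futures = {
            sid: pool.submit(replace_background, sid, frame,
                             backgrounds_by_stream[sid])
            for sid, frame in frames_by_stream.items()
        }
        return {sid: f.result() for sid, f in futures.items()}
```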
When the background data contained in the video data processed by each thread is replaced by the virtual background corresponding to the video data, the background data contained in each path of video data can be separated from the video data; and synthesizing the video data with the background data separated and the corresponding virtual background. Separating video data can be achieved by matting techniques in the field of image processing.
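The separation-and-synthesis step can be illustrated with a toy chroma-key composite, where pixels matching a key colour stand in for the separated background data. This is an illustrative simplification of the matting techniques mentioned above, not the embodiment's actual algorithm:

```python
def composite(frame, virtual_background, key_color=(0, 255, 0)):
    # Pixels equal to the key colour are treated as background data and
    # replaced by the virtual-background pixel at the same position;
    # all other pixels are kept as foreground.
    return [
        [bg_px if px == key_color else px
         for px, bg_px in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, virtual_background)
    ]
```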
In addition, when each thread processes video data, the resolution of the video data is fixed, that is, the number of pixels it contains is fixed. The larger the picture size to which the video data is enlarged, the more pixels are needed to fill it; since the available pixels are limited, the blank area around them must be filled by interpolation to reach the required picture size. An interpolated pixel, however, carries no picture information from the actual video data; its value must be computed from the picture information of neighbouring actual pixels, so the larger the blank area to be filled by interpolation, the more processing resources the pixel editing consumes.
In order to further save the consumption of the processor by background replacement and reduce the pause phenomenon in video playing, the reduction processing can be performed on each path of video data in the process of replacing the background data contained in each path of video data with the corresponding virtual background by performing concurrent processing on each path of video data, and the implementation process is as follows:
by carrying out concurrent processing on each path of video data, compressing each path of video data and the corresponding virtual background into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal.
And replacing the background data contained in each path of compressed signal with the corresponding compressed background to generate each path of video data after concurrent processing.
In one example, the size of each video data path is 800 pixels by 600 pixels, and the predetermined size is 400 pixels by 300 pixels.
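Assuming nearest-neighbour sampling as the reduction method (the embodiment does not fix a particular one), shrinking a frame to the preset size can be sketched as follows, where a factor of 2 takes an 800×600 frame to 400×300:

```python
def downscale(frame, factor=2):
    # Nearest-neighbour reduction: keep every `factor`-th pixel in both
    # dimensions, e.g. 800x600 -> 400x300 when factor == 2.
    return [row[::factor] for row in frame[::factor]]
```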
When the paths of video data are processed concurrently, compressing each path reduces the consumption of processing resources, but the picture size of the subsequently composed video source signal then remains in its compressed state, which would degrade the playing effect. To avoid this, after the concurrently processed paths are combined into a video source signal, the video source signal can be expanded to the same size as any one path of the originally acquired video data.
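The subsequent size expansion can likewise be sketched with nearest-neighbour pixel replication; the interpolation actually used by an implementation may differ:

```python
def upscale(frame, factor=2):
    # Nearest-neighbour expansion: each pixel is replicated factor x factor
    # times, restoring the composed signal to the original capture size.
    expanded_rows = [[px for px in row for _ in range(factor)]
                     for row in frame]
    return [row for row in expanded_rows for _ in range(factor)]
```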
Step 103: and combining the video data of each channel after concurrent processing into a video source signal.
In the embodiment of the application, after background replacement is completed, the paths of video data are still separate and independent: each is only part of the video playing source and is not yet suitable for presentation to audiences. The processed paths therefore need to be combined into a video source signal. If the video source signal consists only of video data whose background was replaced, it can be composed from the concurrently processed paths directly after replacement. If it also includes other video data whose background does not need replacing, the video source signal can be composed of that other video data together with the processed video data.
After the video source signals are formed, the video source signals can be directly played at a playing end, and part of video data can be selected from the video source signals to be played.
In some application scenarios, the video playing end for playing the video source signal and the background replacing end for replacing the background of each path of video data are integrated, so that the background replacing end can directly combine each path of video data into the video source signal after replacing the virtual background with each path of video data. If the video playing end and the background replacing end are not integrated, the background replacing end can respectively use each path of video data as a part of a video source signal after replacing the virtual background with each path of video data, and packages and sends the video data to the video synthesizing end or the video playing end, so that the video synthesizing end or the video playing end combines each path of video data into a video source signal.
In an optional implementation manner, after the video source signal is composed of the video data of each channel, the video play signal can be composed of the video data of each channel after concurrent processing by the following operations:
and identifying one path of main video data from the concurrent processed paths of video data.
And determining other video data after concurrent processing except the main video data as auxiliary video data.
And respectively superposing each path of auxiliary video data to each local area of the main video data to generate a video playing signal.
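The superposition of auxiliary paths onto local areas of the main video data can be sketched as a simple picture-in-picture paste. The layout chosen here (auxiliary frames placed left to right along the top edge) is an assumption for illustration; the embodiment does not prescribe one:

```python
def overlay(main_frame, aux_frames):
    # Paste each (already scaled-down) auxiliary frame into its own local
    # area along the top edge of the main frame, producing the play signal.
    out = [row[:] for row in main_frame]  # copy the main video data
    x = 0
    for aux in aux_frames:
        for r, aux_row in enumerate(aux):
            out[r][x:x + len(aux_row)] = aux_row
        x += len(aux[0])
    return out
```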
After the video playing signal is generated, the video playing signal can be directly pushed to a video playing end to be played, and diversified video playing contents are presented.
In addition, to further enrich the video playing content, before each path of auxiliary video data is superimposed onto its local area of the main video data, a playing special effect can be applied to each path, where the playing special effects include a stretching effect and/or a moving effect.
With the wide application of video technology in the digital field, the requirements on video formats have also diversified. Live video, which has gradually emerged, is a hotspot of current video technology: the anchor shoots the live scene and his or her own image through a camera device, and after some editing the video is transmitted to user terminals, so that the live scene and the anchor's image are displayed on the user terminals in real time. As one way of enriching video content, the anchor can use several cameras to shoot the live scene from different directions, capture images of the anchor from multiple angles, and merge the videos into one live video for output, displaying more comprehensive live content.
As another way of enriching video content, after the anchor takes a video of a live scene and its own image through a camera, the background of the video can be replaced by a video processing technology, and more various live contents are displayed. It is easy to understand that the replacement of the background can be realized in a plurality of videos, and the video content can be further enriched. The video processing method can replace the backgrounds of the videos by the virtual backgrounds, and then combine the videos to output one video, so that the replacement of the backgrounds of the videos is realized, and the video effect is enriched.
Referring to fig. 2a, taking live video as an example, an application scenario diagram of the video processing method according to the embodiment of the present application is shown, where the application scenario includes: the live broadcast system comprises a live broadcast server, and a live broadcast terminal, a mobile watching terminal and a fixed watching terminal which are respectively connected with the live broadcast server through a network.
The live broadcast terminal is equipped with an anchor client and may be a device with data acquisition, encoding, and communication functions, for example: desktop computers, smart phones, tablet computers, and other smart devices. After the anchor user starts live broadcasting, the anchor client can receive, through the live broadcast terminal, the live video streams collected by the camera devices arranged in different directions of the live scene.
The live broadcast server provides the backend service for network live broadcast: it receives the live video stream sent by a live broadcast terminal and stores it in association with the corresponding anchor client. It also stores the correspondence between anchor clients and channels, and sends the live video stream to the audience clients belonging to the same channel.
The mobile viewing terminal and the fixed viewing terminal may be devices with data communication, rendering, and output functions, such as: desktop computers, smart phones, tablet computers, and other smart devices. Each is provided with an audience client, so that a user can watch the live video stream uploaded by the anchor client through the mobile viewing terminal or the fixed viewing terminal.
The following takes live video as an example, described with reference to the application scenario shown in fig. 2a; the specific implementation process of this example can be seen in fig. 2b and includes the following steps 201-208:
step 201: and the live broadcast terminal acquires each path of video data and a virtual background corresponding to each path of video data.
In the embodiment of the application, the live broadcast terminal can receive the video streams collected by several preset camera devices to form the paths of video data, with each camera device yielding one path. The camera devices are arranged at different positions of the live scene; one of them can be set as the main camera device (main camera) and the rest as auxiliary camera devices (secondary cameras). The video data collected by the main camera device is the main video data, and the video data collected by the auxiliary camera devices is the auxiliary video data; to distinguish the main video data from the auxiliary video data, different video IDs (identifiers) can be assigned to the paths of video data.
Referring to fig. 2c, after the live broadcast terminal receives the video data collected by each camera device, the anchor user may set, through the virtual background menu item in the live preview interface shown in fig. 2c, a virtual background corresponding to the video data collected by the main camera and a virtual background corresponding to each path of video data collected by each secondary camera. For example, the video data collected by the main camera may correspond to an oil-painting landscape background, and the video data collected by a secondary camera to a flower-sea background. After the virtual backgrounds are set, the live picture with the backgrounds replaced can be previewed in the live preview interface shown in fig. 2d.
Step 202: and the live broadcast terminal calls each local thread to carry out concurrent processing on each path of video data, and one thread processes one path of video data.
In this embodiment, the CPU of the live broadcast terminal serves as the operation unit that carries the video-processing workload. Using the CPU's multithreading mechanism, the overall operation task is created as a parent thread, within which a number of worker threads are allocated to complete the task. Because all worker threads share the processing resources allocated to the parent thread, data can easily be shared and exchanged among them; the multiple paths of video data at the live terminal can therefore be output at a common frame rate to the viewing terminal, ensuring that the multiple video pictures seen by the viewing user stay synchronized.
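As a rough illustration of the one-thread-per-path scheme described above (not the patent's implementation: frames are stand-in strings, the "processing" is a toy transform, and all function names are hypothetical):

```python
import threading

def process_path(video_id, frames, results):
    # Stand-in for the per-path work (e.g. background replacement per frame).
    results[video_id] = [f.upper() for f in frames]

def process_concurrently(paths):
    """Spawn one worker thread per video path.

    All workers write into the shared `results` dict (distinct keys per
    thread), mirroring how the patent's worker threads share the parent
    thread's resources.
    """
    results = {}
    threads = [
        threading.Thread(target=process_path, args=(vid, frames, results))
        for vid, frames in paths.items()
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # every path finishes before output, keeping paths in step
    return results

out = process_concurrently({"main": ["a", "b"], "aux1": ["c"]})
```

Joining all threads before emitting output is one simple way to keep the paths synchronized; a real pipeline would synchronize per frame rather than per stream.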
Step 203: and when the live broadcast terminal performs concurrent processing, replacing background data contained in the video data processed by each thread with a virtual background corresponding to the video data.
In this embodiment, the alternative background may use an Open Broadcast Software (OBS) technology. The OBS is open-source online live broadcast software, has the characteristics of light weight and free charge, and can be used for matting characters in a video.
In practice, a solid-color background whose color contrasts strongly with the anchor user's clothing can be placed behind the anchor user, forming a region of strong hue contrast that contains the anchor user's image. The sharp edge this creates between the anchor user's image and the solid background makes it easy to matte the anchor user's image out of the region; the virtual background is then combined with the matted image, completing the background replacement.
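A minimal chroma-key sketch of this matting-and-compositing step, assuming frames are small 2-D lists of RGB tuples rather than real video buffers (the function name, key color, and threshold are invented for illustration):

```python
def chroma_key(frame, background, key_color=(0, 255, 0), threshold=100):
    """Replace pixels close to the key colour with the virtual background.

    `frame` and `background` are equal-sized 2-D lists of (r, g, b) tuples;
    a pixel whose Euclidean distance to `key_color` is under `threshold`
    is treated as backdrop and swapped for the background pixel.
    """
    def is_backdrop(px):
        return sum((a - b) ** 2 for a, b in zip(px, key_color)) ** 0.5 < threshold

    return [
        [bg_px if is_backdrop(px) else px
         for px, bg_px in zip(row, bg_row)]
        for row, bg_row in zip(frame, background)
    ]

green = (0, 255, 0)            # the solid backdrop behind the anchor user
anchor = (200, 50, 50)         # a pixel of the anchor user, far from the key colour
frame = [[green, anchor], [anchor, green]]
virtual = [[(1, 1, 1), (1, 1, 1)], [(1, 1, 1), (1, 1, 1)]]  # virtual background
composited = chroma_key(frame, virtual)
```

This is why the patent insists on strong contrast between the backdrop and the anchor's clothing: the threshold test only works when foreground pixels sit far from the key color.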
Step 204: and the live broadcast terminal takes each path of video data after replacing the background as a live broadcast video stream to be played and sends the live broadcast video stream to the live broadcast server.
Step 205: and the live broadcast server identifies one path of main video data from the concurrent video data.
In the embodiment of the application, according to the ID of each path of video data, the video data collected by the main camera equipment is identified as the main video data, and the rest video data is the auxiliary video data.
Step 206: and the live broadcast server determines other video data subjected to concurrent processing except the main video data as auxiliary video data.
Step 207: and the live broadcast server respectively superposes each path of auxiliary video data to each local area of the main video data to generate a live broadcast video signal to be played.
In this embodiment, after receiving the pictures from the plurality of cameras, the live broadcast server can merge them: the picture shot by the main camera is placed at the bottom layer, and the pictures shot by the auxiliary cameras are displayed in turn in local areas above it.
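The layering described here — main picture at the bottom, each auxiliary picture pasted into its own local area above it — can be sketched as follows (frames modeled as 2-D lists of pixels; all names are hypothetical):

```python
def overlay(main_frame, aux_frame, top, left):
    """Paste aux_frame over main_frame at (top, left); main stays the bottom layer."""
    out = [row[:] for row in main_frame]   # copy so the source frame is untouched
    for r, aux_row in enumerate(aux_frame):
        for c, px in enumerate(aux_row):
            out[top + r][left + c] = px
    return out

def compose(main_frame, aux_frames):
    """Stack each auxiliary picture in turn over its own local area."""
    out = main_frame
    for i, aux in enumerate(aux_frames):
        out = overlay(out, aux, top=0, left=i * len(aux[0]))
    return out

main = [["M"] * 6 for _ in range(2)]   # main camera picture (bottom layer)
aux1 = [["A"]]                          # two 1-pixel auxiliary pictures
aux2 = [["B"]]
framed = compose(main, [aux1, aux2])
```

Placing the auxiliary areas side by side along the top edge is just one layout choice; the patent only requires that each auxiliary path occupy its own local area of the main picture.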
Step 208: and the live broadcast server sends the live broadcast video signal to the audience client terminals belonging to the same channel.
In addition, to further enrich the live content, before each path of video data is sent to the server as a live video stream, the live terminal user may assign a play special effect to each path of auxiliary video data through the special effect menu item in the live preview interface shown in fig. 2c. The play special effects include at least one of a double-sided mirror, horizontal rotation, vertical flip, a stretching effect, and a moving effect.
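The listed play effects are mostly simple geometric transforms. A toy sketch of three of them (frames as 2-D lists; function names invented for illustration):

```python
def horizontal_mirror(frame):
    """Mirror the picture left-to-right."""
    return [row[::-1] for row in frame]

def vertical_flip(frame):
    """Flip the picture top-to-bottom."""
    return frame[::-1]

def stretch_rows(frame, factor=2):
    """Crude vertical stretch: repeat each row `factor` times
    (a stand-in for real interpolated scaling)."""
    return [row[:] for row in frame for _ in range(factor)]

f = [[1, 2],
     [3, 4]]
```

Each effect is per-frame and independent of the other paths, which is why it can be assigned to an individual auxiliary stream before upload.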
In other embodiments, the live broadcast terminal itself may replace the background data contained in the video data processed by each thread with the virtual background corresponding to that path, directly identify one path of main video data from the concurrently processed paths, determine the remaining concurrently processed video data as auxiliary video data, superimpose each path of auxiliary video data onto its own local area of the main video data to generate the live video signal to be played, and then send that signal to the live broadcast server, which forwards it to the viewer clients belonging to the same channel.
In practical applications, when the auxiliary video data is superimposed on a local area of the main video data, only a part of the auxiliary video data may be superimposed.
As the above embodiment shows, the paths of video data are processed concurrently so that the background data contained in each path is replaced with its corresponding virtual background, and the concurrently processed paths are then combined into a video source signal. The virtual backgrounds can therefore replace the backgrounds of multiple videos simultaneously, and the videos with replaced backgrounds then form one video source, so the video content can be played smoothly in real time, diversified video content can be presented, and the playing effect is enriched.
When the method is applied to the live broadcast field, after the virtual background of each live video stream is replaced, the live content can be played smoothly in real time, diversified live content can be presented, and the live broadcast effect is enriched.
Corresponding to the embodiment of the video processing method, the application also provides an embodiment of the video processing device.
Referring to fig. 3, fig. 3 is a block diagram of an embodiment of a video processing apparatus according to the present application, which may include: an information acquisition module 310, a concurrency processing module 320, and a video composition module 330.
The information obtaining module 310 is configured to obtain each path of video data and a virtual background corresponding to each path of video data.
The concurrent processing module 320 is configured to process the paths of video data concurrently and replace the background data contained in each path with its corresponding virtual background.
The video composing module 330 is configured to combine the concurrently processed paths of video data into a video source signal.
In an optional implementation manner, the information obtaining module 310 may further include (not shown in fig. 3):
The video receiving module is configured to receive the video streams collected by the camera devices as the paths of video data, one camera device corresponding to one path of video data.
The background calling module is configured to call the virtual background corresponding to each camera device, forming the virtual background corresponding to each path of video data.
In another alternative implementation, the concurrent processing module 320 may further include (not shown in fig. 3):
The thread calling module is configured to call local threads to process the paths of video data concurrently, one thread per path of video data.
The first replacing module is configured to replace, during the concurrent processing, the background data contained in the video data processed by each thread with the virtual background corresponding to that path.
In another alternative implementation, the concurrent processing module 320 may further include (not shown in fig. 3):
the video compression module is used for compressing each path of video data and the corresponding virtual background thereof into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal.
And the second replacing module is used for replacing the background data contained in each path of compressed signal with the corresponding compressed background to generate each path of video data after concurrent processing.
In another optional implementation manner, the video processing apparatus of the embodiment of the present application may further include (not shown in fig. 3):
The size expansion module is configured to expand the size of the video source signal, generating a video source signal of the same size as any one acquired path of video data.
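The compress-then-replace-then-expand flow handled by these modules can be illustrated with nearest-neighbour shrinking and pixel repetition (a crude stand-in for real video scaling; all names are hypothetical):

```python
def downscale(frame, factor):
    """Keep every `factor`-th pixel in both directions (nearest-neighbour shrink)."""
    return [row[::factor] for row in frame[::factor]]

def upscale(frame, factor):
    """Repeat pixels so the output matches the originally captured size."""
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

captured = [[1, 2, 3, 4],
            [5, 6, 7, 8],
            [9, 1, 2, 3],
            [4, 5, 6, 7]]
small = downscale(captured, 2)
# ...background replacement would run on `small` at the reduced cost...
restored = upscale(small, 2)   # back to the captured 4x4 size
```

Working on the preset smaller size keeps the per-frame replacement cheap; the size expansion afterwards restores a signal matching the acquired resolution, as the module descriptions above require.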
In another optional implementation manner, the video processing apparatus of the embodiment of the present application may further include (not shown in fig. 3):
The main video identification module is configured to identify one path of main video data from the concurrently processed paths of video data.
The auxiliary video determining module is configured to determine the concurrently processed video data other than the main video data as auxiliary video data.
The video overlapping module is configured to superimpose each path of auxiliary video data onto its own local area of the main video data, generating a video playing signal.
In another alternative implementation, the paths of video data are collected by camera devices arranged at different positions in the live scene.
The implementation process of the functions and actions of each unit (or module) in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units or modules described as separate parts may or may not be physically separate, and the parts displayed as the units or modules may or may not be physical units or modules, may be located in one place, or may be distributed on a plurality of network units or modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the video processing device can be applied to electronic equipment. In particular, it may be implemented by a computer chip or entity, or by an article of manufacture having some functionality. In a typical implementation, the electronic device is a computer, which may be embodied in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, internet television, smart car, smart home device, or a combination of any of these devices.
The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device is formed as a logical device when the processor of the electronic device on which it resides reads the corresponding computer program instructions from a readable medium, such as a nonvolatile memory, into memory and runs them. In terms of hardware, fig. 4 shows the hardware structure of an electronic device in which the video processing apparatus is located; besides the processor, memory, network interface, and nonvolatile memory shown in fig. 4, the electronic device may include other hardware according to its actual function, which is not described again here. The memory of the electronic device may store executable instructions; the processor may be coupled to the memory to read the stored program instructions and, in response, perform the following: acquire each path of video data and the virtual background corresponding to each path; process the paths of video data concurrently, replacing the background data contained in each path with its corresponding virtual background; and combine the concurrently processed paths of video data into a video source signal.
In other embodiments, the operations performed by the processor may refer to the description related to the above method embodiments, which is not repeated herein.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (6)
1. A video processing method, comprising the steps of:
acquiring each path of video data and a virtual background corresponding to each path of video data; each path of video data is acquired by each camera device arranged in different directions of a live broadcast scene aiming at the anchor broadcast;
calling each local thread to carry out concurrent processing on each path of video data, wherein one thread processes one path of video data, and background data contained in each path of video data is replaced by a corresponding virtual background; specifically, by performing concurrent processing on each path of video data, each path of video data and the corresponding virtual background thereof are compressed into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal; replacing background data contained in each path of compressed signal with a corresponding compressed background to generate video data of each path after concurrent processing;
combining the video data of each channel after concurrent processing into a video source signal;
identifying one path of main video data from each path of video data after concurrent processing;
determining other video data subjected to concurrent processing except the main video data as auxiliary video data;
matching playing special effects for each path of auxiliary video data, wherein the playing special effects at least comprise stretching effects or moving effects;
and respectively superposing each path of auxiliary video data to each local area of the main video data to generate a video playing signal.
2. The method of claim 1, wherein the obtaining the channels of video data and the virtual backgrounds corresponding to the channels of video data comprises:
receiving video streams acquired by all the camera devices as video data of all paths, wherein one camera device corresponds to one video data path;
and calling the virtual background corresponding to each camera device to form the virtual background corresponding to each path of video data.
3. The method of claim 1, wherein after the combining the concurrently processed channels of video data into a video source signal, the method further comprises:
and performing size expansion on the video source signals to generate video source signals with the same size as any one path of acquired video data.
4. A video processing apparatus, comprising:
the information acquisition module is used for acquiring each path of video data and a virtual background corresponding to each path of video data; each path of video data is acquired by each camera device arranged in different directions of a live broadcast scene aiming at the anchor broadcast;
the concurrent processing module is used for calling each local thread to perform concurrent processing on each path of video data, one thread processes one path of video data, and background data contained in each path of video data is replaced by a corresponding virtual background; the concurrent processing module comprises: the video compression module is used for compressing each path of video data and the corresponding virtual background thereof into: each path of compressed signal with a preset size and a compressed background with a preset size corresponding to each path of compressed signal; the second replacement module is used for replacing background data contained in each path of compressed signal with a corresponding compressed background to generate each path of video data after concurrent processing;
the video composition module is used for composing the video source signals from the video data after concurrent processing;
the main video identification module is used for identifying one path of main video data from each path of video data after concurrent processing;
the auxiliary video determining module is used for determining other video data subjected to concurrent processing except the main video data as auxiliary video data; matching playing special effects for each path of auxiliary video data, wherein the playing special effects at least comprise stretching effects or moving effects;
and the video overlapping module is used for respectively overlapping each path of auxiliary video data to each local area of the main video data to generate a video playing signal.
5. The apparatus of claim 4, wherein the information acquisition module comprises:
the video receiving module is used for receiving video streams acquired by all the camera devices into all paths of video data, and one camera device corresponds to one path of video data;
and the background calling module is used for calling the virtual background corresponding to each camera device to form the virtual background corresponding to each path of video data.
6. The apparatus of claim 4, wherein the apparatus further comprises:
and the size expansion module is used for performing size expansion on the video source signals and generating the video source signals with the same size as any one path of acquired video data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611228075.0A CN106713942B (en) | 2016-12-27 | 2016-12-27 | Video processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106713942A CN106713942A (en) | 2017-05-24 |
CN106713942B true CN106713942B (en) | 2020-06-09 |
Family
ID=58896531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611228075.0A Active CN106713942B (en) | 2016-12-27 | 2016-12-27 | Video processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106713942B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108933958B (en) * | 2017-05-27 | 2020-10-16 | 武汉斗鱼网络科技有限公司 | Method, storage medium, equipment and system for realizing microphone connection preview at user side |
CN107948741B (en) * | 2017-10-31 | 2020-06-19 | 深圳宜弘电子科技有限公司 | Dynamic cartoon playing method and system based on intelligent terminal |
CN110933448B (en) * | 2019-11-29 | 2022-07-12 | 广州市百果园信息技术有限公司 | Live list service system and method |
CN113014801B (en) * | 2021-02-01 | 2022-11-29 | 维沃移动通信有限公司 | Video recording method, video recording device, electronic equipment and medium |
CN114915798A (en) * | 2021-02-08 | 2022-08-16 | 阿里巴巴集团控股有限公司 | Real-time video generation method, multi-camera live broadcast method and device |
CN113873272B (en) * | 2021-09-09 | 2023-12-15 | 北京都是科技有限公司 | Method, device and storage medium for controlling background image of live video |
CN113965665B (en) * | 2021-11-22 | 2024-09-13 | 上海掌门科技有限公司 | Method and equipment for determining virtual live image |
CN115225929B (en) * | 2022-07-12 | 2023-12-15 | 北京字跳网络技术有限公司 | Live broadcast page configuration method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998042126A1 (en) * | 1997-03-18 | 1998-09-24 | The Metaphor Group | Real-time method of digitally altering a video data stream to remove portions of the original image and substitute elements to create a new image |
CN105392053A (en) * | 2015-12-11 | 2016-03-09 | 上海纬而视科技股份有限公司 | Method for receiving and processing network video streams in real time |
CN105472271A (en) * | 2014-09-10 | 2016-04-06 | 易珉 | Video interaction method, device and system |
WO2016058302A1 (en) * | 2014-10-14 | 2016-04-21 | 青岛海信电器股份有限公司 | Multi-video data display method and apparatus |
CN105827976A (en) * | 2016-04-26 | 2016-08-03 | 北京博瑞空间科技发展有限公司 | GPU (graphics processing unit)-based video acquisition and processing device and system |
CN106101579A (en) * | 2016-07-29 | 2016-11-09 | 维沃移动通信有限公司 | A kind of method of video-splicing and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105828091B (en) * | 2016-03-28 | 2018-11-09 | 广州华多网络科技有限公司 | The playback method and system of video frequency program in network direct broadcasting |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106713942B (en) | Video processing method and device | |
US11153615B2 (en) | Method and apparatus for streaming panoramic video | |
CN112585978B (en) | Generating a composite video stream for display in VR | |
US9774896B2 (en) | Network synchronized camera settings | |
CN111083515B (en) | Method, device and system for processing live broadcast content | |
CN107820039B (en) | Method and apparatus for virtual surround-view conferencing experience | |
CN107105315A (en) | Live broadcasting method, the live broadcasting method of main broadcaster's client, main broadcaster's client and equipment | |
CN108989830A (en) | A kind of live broadcasting method, device, electronic equipment and storage medium | |
CN105791895B (en) | Audio & video processing method and its system based on time stamp | |
CN107040808B (en) | Method and device for processing popup picture in video playing | |
CN109547724B (en) | Video stream data processing method, electronic equipment and storage device | |
US20200213631A1 (en) | Transmission system for multi-channel image, control method therefor, and multi-channel image playback method and apparatus | |
CN111405339B (en) | Split screen display method, electronic equipment and storage medium | |
CN108243318B (en) | Method and device for realizing live broadcast of multiple image acquisition devices through single interface | |
WO2017176349A1 (en) | Automatic cinemagraph | |
WO2023035882A1 (en) | Video processing method, and device, storage medium and program product | |
CN112019907A (en) | Live broadcast picture distribution method, computer equipment and readable storage medium | |
CN111432284A (en) | Bullet screen interaction method of multimedia terminal and multimedia terminal | |
US20190379917A1 (en) | Image distribution method and image display method | |
CN113365130B (en) | Live broadcast display method, live broadcast video acquisition method and related devices | |
KR101843025B1 (en) | System and Method for Video Editing Based on Camera Movement | |
CN110913118B (en) | Video processing method, device and storage medium | |
CN112019906A (en) | Live broadcast method, computer equipment and readable storage medium | |
CN115225915A (en) | Live broadcast recording device, live broadcast recording system and live broadcast recording method | |
US10764655B2 (en) | Main and immersive video coordination system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20210115 Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd. Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd. |