
CN109302619A - Information processing method and device - Google Patents

Information processing method and device

Info

Publication number
CN109302619A
CN109302619A (application CN201811087974.2A / CN201811087974A)
Authority
CN
China
Prior art keywords
masking
file
target
video frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811087974.2A
Other languages
Chinese (zh)
Inventor
冯巍
阳群益
宇哲伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201811087974.2A priority Critical patent/CN109302619A/en
Publication of CN109302619A publication Critical patent/CN109302619A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

An embodiment of the invention provides an information processing method and device. In the method, a server receives a video file request sent by a terminal, where the request is used to request a video file to be played; the server obtains the video file and a mask file corresponding to the video file, where the mask file identifies a target area of each video frame in the video file and the target area characterizes a target object in that frame; the server then sends the video file, the mask file, and the barrage (bullet-screen comment) information corresponding to the video file to the terminal. While playing the video file, the terminal loads the barrage information according to the mask file and displays it in the region of each video frame outside the target area. With this technical solution, the barrage information does not occlude the target object in the video frame, so the user can clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.

Description

Information processing method and device
Technical field
The present invention relates to the field of multimedia technology, and in particular to an information processing method and device.
Background technique
With the continuous development of society, more and more users watch videos on terminals. To enhance the interactivity between different users while watching a video, many video platforms provide a barrage (bullet-screen comment) function: during playback a user can input comments such as evaluations of, or subjective feelings about, the video program, and can also see the barrage information sent by other users.
However, in the course of implementing the present invention, the inventors found that the prior art has at least the following problem:
When barrage information is displayed on the playing video picture, it often covers part of the picture, so that the user may not be able to watch the video well while interacting with other users through the barrage.
Summary of the invention
The purpose of the embodiments of the present invention is to provide an information processing method and device, so as to improve the user's experience of watching videos. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides an information processing method, the method comprising:
a server receives a video file request sent by a terminal, where the video file request is used to request a video file to be played;
the server obtains the video file and a mask file corresponding to the video file, where the mask file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
the server sends the video file, the mask file, and the barrage information corresponding to the video file to the terminal.
Optionally, the step in which the server obtains the mask file corresponding to the video file comprises:
the server obtains a target video frame in the video file, where the shot type of the target video frame is a preset shot type;
the server segments the target object in the target video frame to obtain the regional position of the target object within the target video frame;
the server generates mask information and timestamp information of the target video frame, where the mask information is used to identify the regional position of the target object within the target video frame;
the server generates the mask file corresponding to the video file, where the mask file contains the mask information and timestamp information of the target video frames in the video file.
Optionally, the step in which the server obtains the target video frame in the video file comprises:
the server detects the shot type of each video frame in the video file, where the shot type includes a close-up shot, a close shot, a medium shot, a full shot, or a long shot;
the server takes the video frames belonging to a preset shot type as target video frames, where the preset shot type is the close-up shot, the close shot, or the medium shot.
Optionally, the process in which the server determines the shot type of any video frame in the video file comprises:
determining the target area ratio corresponding to each target object contained in the video frame, where the target area ratio is the area ratio of the target object to the video frame;
judging whether any of the determined target area ratios is greater than a preset area ratio, and if so, determining the shot type of the video frame to be the preset shot type.
Optionally, the step in which the server segments the target object in the target video frame comprises:
segmenting the target object in the target video frame by using the DeepLab V3+ deep-learning segmentation algorithm.
Optionally, the step in which the server segments the target object in the target video frame comprises:
the server obtains the foreground image in the target video frame, where the foreground image contains multiple foreground objects;
the server takes the foreground objects whose area is greater than a preset area as the target objects and segments out the target objects.
Optionally, before the server sends the mask file to the terminal, the method further comprises:
the server detects repeated mask information in the mask file, where the repeated mask information is mask information whose content is identical for consecutive target video frames;
the server retains the first piece of the repeated mask information and deletes the rest, where the first piece of mask information represents the mask information corresponding to each of the target video frames covered by the repeated mask information.
In a second aspect, an embodiment of the present invention provides an information processing method, the method comprising:
a terminal sends a video file request to a server, where the video file request is used to request a video file to be played;
the terminal receives the video file, the mask file corresponding to the video file, and the barrage information corresponding to the video file sent by the server, where the mask file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
the terminal plays the video file, loads the barrage information according to the mask file during playback, and displays the barrage information in the region of the video frame outside the target area.
Optionally, the mask file contains the mask information and timestamp information of target video frames, and the shot type of the target video frames is a preset shot type.
Optionally, the step in which the terminal loads the barrage information according to the mask file during playback comprises:
the terminal detects, during playback, whether the mask information corresponding to a target video frame in the mask file is missing;
if the mask information corresponding to the target video frame is missing, the terminal uses the mask information corresponding to the previous video frame as the mask information corresponding to the target video frame, where the previous video frame is the video frame immediately preceding the target video frame according to the timestamp information.
In a third aspect, an embodiment of the present invention provides an information processing device applied to a server, the device comprising:
a request receiving module, configured to receive a video file request sent by a terminal, where the video file request is used to request a video file to be played;
an information obtaining module, configured to obtain the video file and a mask file corresponding to the video file, where the mask file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
an information sending module, configured to send the video file, the mask file, and the barrage information corresponding to the video file to the terminal.
Optionally, the information obtaining module comprises:
a target video frame obtaining submodule, configured to obtain a target video frame in the video file, where the shot type of the target video frame is a preset shot type;
a target object segmentation submodule, configured to segment the target object in the target video frame to obtain the regional position of the target object within the target video frame;
an information generating submodule, configured to generate mask information and timestamp information of the target video frame, where the mask information is used to identify the regional position of the target object within the target video frame;
a mask file generating submodule, configured to generate the mask file corresponding to the video file, where the mask file contains the mask information and timestamp information of the target video frames in the video file.
Optionally, the target video frame obtaining submodule is specifically configured to:
detect the shot type of each video frame in the video file, where the shot type includes a close-up shot, a close shot, a medium shot, a full shot, or a long shot;
take the video frames belonging to a preset shot type as target video frames, where the preset shot type is the close-up shot, the close shot, or the medium shot.
Optionally, the target video frame obtaining submodule is specifically configured to:
determine the target area ratio corresponding to each target object contained in a video frame, where the target area ratio is the area ratio of the target object to the video frame;
judge whether any of the determined target area ratios is greater than a preset area ratio, and if so, determine the shot type of the video frame to be the preset shot type.
Optionally, the target object segmentation submodule is specifically configured to:
segment the target object in the target video frame by using the DeepLab V3+ deep-learning segmentation algorithm.
Optionally, the target object segmentation submodule is specifically configured to:
obtain the foreground image in the target video frame, where the foreground image contains multiple foreground objects;
take the foreground objects whose area is greater than a preset area as target objects and segment out the target objects.
Optionally, the device further comprises:
a mask information detecting module, configured to detect repeated mask information in the mask file before the information sending module sends the mask file to the terminal, where the repeated mask information is mask information whose content is identical for consecutive target video frames;
a mask information processing module, configured to retain the first piece of the repeated mask information and delete the rest, where the first piece of mask information represents the mask information corresponding to each of the target video frames covered by the repeated mask information.
In a fourth aspect, an embodiment of the present invention provides an information processing device applied to a terminal, the device comprising:
a request sending module, configured to send a video file request to a server, where the video file request is used to request a video file to be played;
an information receiving module, configured to receive the video file, the mask file corresponding to the video file, and the barrage information corresponding to the video file sent by the server, where the mask file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
a video file playing module, configured to play the video file, where the terminal loads the barrage information according to the mask file during playback, and the barrage information is displayed in the region of the video frame outside the target area.
Optionally, the mask file contains the mask information and timestamp information of target video frames, and the shot type of the target video frames is a preset shot type.
Optionally, the video file playing module is specifically configured to:
detect, during playback, whether the mask information corresponding to a target video frame in the mask file is missing;
if the mask information corresponding to the target video frame is missing, use the mask information corresponding to the previous video frame as the mask information corresponding to the target video frame, where the previous video frame is the video frame immediately preceding the target video frame according to the timestamp information.
In a fifth aspect, an embodiment of the present invention further provides a server, comprising a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the information processing method described in the first aspect when executing the program stored in the memory.
In a sixth aspect, an embodiment of the present invention further provides a terminal, comprising a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the information processing method described in the second aspect when executing the program stored in the memory.
In a seventh aspect, an embodiment of the present invention further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the information processing method described in the first aspect.
In an eighth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the information processing method described in the second aspect.
In a ninth aspect, an embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the information processing method described in the first aspect.
In a tenth aspect, an embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the information processing method described in the second aspect.
In the technical solution provided by the embodiments of the present invention, when the terminal requests a video file, it sends a video file request to the server. After receiving the request, the server obtains the video file and the mask file corresponding to the video file, where the mask file identifies the target area of a video frame in the video file and the target area characterizes the target object in that frame, and sends the video file, the mask file, and the barrage information corresponding to the video file to the terminal. After receiving them, the terminal plays the video file, loads the barrage information according to the mask file during playback, and displays the barrage information in the region of each video frame outside the target area. Therefore, with the technical solution provided by the embodiments of the present invention, the barrage information does not block the target object in the video frame; the user can clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a schematic diagram of the interaction between a server and a terminal according to an embodiment of the present invention;
Fig. 2 is a flowchart of an information processing method applied to a server according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a video frame displaying barrage information according to an embodiment of the present invention;
Fig. 4 is a flowchart of a server obtaining the mask file corresponding to a video file according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a video frame whose shot type is a medium shot according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a video frame whose shot type is a long shot according to an embodiment of the present invention;
Fig. 7 is a flowchart of an information processing method applied to a terminal according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of an information processing device applied to a server according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of an information processing device applied to a terminal according to an embodiment of the present invention;
Fig. 10 is a structural schematic diagram of a server according to an embodiment of the present invention;
Fig. 11 is a structural schematic diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below with reference to the drawings in the embodiments of the present invention.
To solve the technical problem described in the background section, the embodiments of the present invention provide an information processing method and device, so as to improve the user's experience of watching videos.
The information processing method provided by the embodiments of the present invention is first introduced from the perspective of the first aspect.
It should be noted that the information processing method provided by the embodiments of the present invention may be executed by an information processing device, and the information processing device may run in a server used for information processing in an information processing system.
In practical applications, as shown in Fig. 1, the information processing system may include a server and a terminal, and the interaction between the server and the terminal is as shown in Fig. 1. Specifically:
S110: the terminal sends a video file request to the server.
The video file request is used to request a video file to be played.
S120: after receiving the video file request sent by the terminal, the server obtains the video file to be played and the mask file corresponding to the video file, where the mask file is used to identify the target area of a video frame in the video file, and the target area is used to characterize the target object in the video frame.
S130: the server sends the video file to be played, the corresponding mask file, and the corresponding barrage information to the terminal.
S140: the terminal plays the video file sent by the server, loads the barrage information according to the mask file during playback, and displays the barrage information in the region of the video frame outside the target area.
An information processing method applied to the server side according to an embodiment of the present invention is described in detail below.
As shown in Fig. 2, the information processing method applied to a server according to an embodiment of the present invention may include the following steps:
S210: the server receives a video file request sent by a terminal, where the video file request is used to request a video file to be played.
When the terminal requests a video file from the server, it may send a video file request to the server, and the request may carry identification information of the video file. In this way, after receiving the video file request sent by the terminal, the server can determine, according to the identification information carried in the request, which video file the terminal is requesting.
S220: the server obtains the video file and the mask file corresponding to the video file, where the mask file is used to identify the target area of a video frame in the video file, and the target area is used to characterize the target object in the video frame.
After the server has determined which video file the terminal is requesting, it can obtain the video file and the corresponding mask file. The mask file identifies the target area of each video frame in the video file, and the target area characterizes the target object in that frame; in other words, the mask file identifies the regional position of the target object within the video file.
It should be noted that the target object is usually an object the user is interested in, so during playback the user does not want the barrage information to block the target object and wants to watch it clearly. For example, when watching a video frame the user is usually most interested in a character's facial expression, so the target object may be the character's face area. Of course, in practical applications the target object may also be an entire character image, an animal image, a plant image, and so on; the embodiments of the present invention do not specifically limit the target object.
For example, if the video file is a video about people, the target object in a video frame may be a character image, and the target area of the video frame may be the region occupied by the character image in that frame. Since the pictures shown in different video frames may be the same or different, the character images in different frames may also be the same or different. For the same video frame, the target object may be both the face area and the body area of a character, only the face area, or only the body area.
If the video file is a video about animals, the target object in a video frame may be an animal image, and the target area of the video frame is the region occupied by the animal image in that frame. Likewise, for the same video frame the target object may be both the head area and the body area of an animal, only the head area, or only the body area.
For completeness and clarity, the process in which the server obtains the mask file corresponding to the video file is described in detail in the embodiments below.
S230: the server sends the video file, the mask file, and the barrage information corresponding to the video file to the terminal.
After obtaining the video file, the corresponding mask file, and the corresponding barrage information, the server can send the video file, the mask file, and the barrage information to the terminal.
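The patent does not specify any wire format for this exchange; as a minimal illustrative sketch (the field names, the JSON framing, and the in-memory lookup tables are all assumptions, not part of the patent), the server's reply to a video file request could bundle the three items as follows:

```python
# Hypothetical sketch of the S210-S230 exchange; field names and storage layout are
# assumptions for illustration only.
import json

# In-memory stand-ins for the server's video, mask and barrage storage.
VIDEOS = {"v1": "https://example.com/v1.mp4"}
MASKS = {"v1": "https://example.com/v1.mask"}
BARRAGES = {"v1": [{"t": 12.0, "text": "nice shot!"}]}

def handle_video_file_request(request: dict) -> str:
    """Look up the requested video by the identification info carried in the request and
    return the video file, its mask file and its barrage information together."""
    video_id = request["video_id"]
    response = {
        "video_file_url": VIDEOS[video_id],   # the video file to be played
        "mask_file_url": MASKS[video_id],     # identifies the target area per frame
        "barrage_info": BARRAGES[video_id],   # bullet-screen comments with timestamps
    }
    return json.dumps(response)

print(handle_video_file_request({"video_id": "v1"}))
```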
After receiving the video file, the mask file, and the barrage information sent by the server, the terminal can play the video file, load the barrage information according to the mask file during playback, and display the barrage information in the region of each video frame outside the target area. Specifically, when playing each video frame of the video file, the terminal can determine the target area of that frame according to the mask file and display the barrage information corresponding to that frame in the region outside the target area; it can be understood that if the terminal cannot determine the target area of a frame from the mask file, the barrage information can be displayed in any region of that frame.
Since the target area is the region occupied by the target object in the video frame, and the target object is usually what the user is interested in, displaying the barrage information in the region outside the target area prevents the barrage information from blocking the target object. The user can therefore clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.
For example, as shown in Fig. 3, the target object in the video frame is a character image and the target area is the region occupied by the character image in the frame. The terminal displays the barrage information in the region of the frame outside the target area, so that the user can clearly see the character image while reading the barrage information.
As described above, with the technical solution provided by the embodiments of the present invention the barrage information is displayed outside the target area identified by the mask file, so it does not block the target object in the video frame; the user can clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.
For completeness and clarity, the process in which the server obtains the mask file corresponding to the video file is described in detail below.
As shown in Fig. 4, the step in which the server obtains the mask file corresponding to the video file may include:
S410: the server obtains a target video frame in the video file, where the shot type of the target video frame is a preset shot type.
A video file generally includes multiple video frames, and the shot types of different video frames may be the same or different. In practical applications, the shot type of a video frame may be a close-up shot, a close shot, a medium shot, a full shot, or a long shot.
For example, assume the target object is a character image.
When the shot type of a video frame is a close-up shot, the area ratio of the main character's face area to the video frame is greater than 1/3, where the main character is the leading character in the frame. It should be noted that the face area of the main character can be located based on the deep-neural-network MTCNN algorithm, which can locate the key positions of a face in a picture relatively accurately, after which the area ratio of the face region to the video frame can be calculated. The MTCNN algorithm is well known to those skilled in the art and is not described in detail here.
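As a rough sketch of this close-up test, assuming the `mtcnn` pip package's `detect_faces` interface and treating the largest detected face as the main character's face (the 1/3 threshold comes from the description above; everything else is an illustrative assumption):

```python
# Sketch of close-up detection via the face-to-frame area ratio; library choice and
# "largest face = main character" heuristic are assumptions, not from the patent.
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def is_close_up(frame_bgr, threshold=1.0 / 3.0) -> bool:
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    detections = detector.detect_faces(frame_rgb)          # each has a 'box' = (x, y, w, h)
    if not detections:
        return False
    frame_area = frame_rgb.shape[0] * frame_rgb.shape[1]
    # Take the largest detected face as the main character's face.
    _, _, w, h = max(detections, key=lambda d: d["box"][2] * d["box"][3])["box"]
    return (w * h) / frame_area > threshold
```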
When the shot type of a video frame is a close shot, the main character appears in the frame from the waist up.
When the shot type of a video frame is a medium shot, the main character appears in the frame from above the knees, as shown in Fig. 5.
When the shot type of a video frame is a full shot, the main character appears entirely in the picture, and the character's height is greater than half of the frame height.
When the shot type of a video frame is a long shot, the main character's height is less than half of the frame height, as shown in Fig. 6.
Of course, the above merely describes each shot type by way of example; the embodiments of the present invention do not specifically limit the shot type of each video frame in the video file.
In one embodiment, the step in which the server obtains the target video frame in the video file may include:
the server detects the shot type of each video frame in the video file, where the shot type includes a close-up shot, a close shot, a medium shot, a full shot, or a long shot;
the server takes the video frames belonging to a preset shot type as target video frames, where the preset shot type is the close-up shot, the close shot, or the medium shot.
In this embodiment, the server detects the shot type of each video frame in the video file and takes the frames belonging to the preset shot type as target video frames, which facilitates the segmentation of the target object in the target video frames in the subsequent steps.
In one embodiment, the process in which the server determines the shot type of any video frame in the video file may include:
determining the target area ratio corresponding to each target object contained in the video frame, where the target area ratio is the area ratio of the target object to the video frame;
judging whether any of the determined target area ratios is greater than a preset area ratio, and if so, determining the shot type of the video frame to be the preset shot type.
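A minimal sketch of this area-ratio check, assuming each target object is represented by a bounding box and picking an arbitrary preset area ratio (the patent leaves both representation and value open):

```python
# Sketch of the preset-shot-type check; PRESET_AREA_RATIO and the bounding-box
# representation of target objects are illustrative assumptions.
PRESET_AREA_RATIO = 0.15

def is_preset_shot_type(frame_shape, target_boxes) -> bool:
    """frame_shape: (height, width); target_boxes: list of (x, y, w, h), one per target object."""
    frame_area = frame_shape[0] * frame_shape[1]
    for x, y, w, h in target_boxes:
        if (w * h) / frame_area > PRESET_AREA_RATIO:
            # At least one target object is large enough: treat as close-up / close / medium shot.
            return True
    return False
```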
It should be noted that there are many ways to determine the area of a target object, and the embodiments of the present invention do not specifically limit them. The preset area ratio can be set according to the actual situation, and the embodiments of the present invention do not specifically limit the preset area ratio either.
S420: the server segments the target object in the target video frame to obtain the regional position of the target object within the target video frame.
Since the shot type of the target video frame is a close-up shot, a close shot, or a medium shot, the target video frame contains target objects that occupy a relatively large proportion of the frame area, and these target objects are usually the objects the user is more interested in. The server therefore segments the target objects in the target video frame to obtain their regional positions in the frame.
In one embodiment, the step in which the server segments the target object in the target video frame may include:
segmenting the target object in the target video frame by using the DeepLab V3+ deep-learning segmentation algorithm.
DeepLab V3+ is a foreground-background segmentation algorithm that can segment the target object in the target video frame relatively accurately. The target area where the target object is located and the other regions outside the target area can be represented by a binary image: the region formed by pixels with value 1 can be the target area where the target object is located, and the region formed by pixels with value 0 can be the other regions of the video frame outside the target area.
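The following sketch shows how such a 0/1 mask could be produced. Torchvision ships DeepLabV3 rather than DeepLab V3+, so the model here is only a stand-in for the algorithm named by the patent; the "person" class index and the preprocessing are likewise assumptions:

```python
# Illustrative stand-in for the patent's DeepLab V3+ step: a torchvision DeepLabV3 model
# producing a per-frame 0/1 person mask. Model choice, class index 15 (= person in the
# VOC-style label set) and the normalization constants are assumptions.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def person_mask(frame_rgb):
    """Return an HxW tensor of 0/1 values: 1 inside the target (person) area, 0 elsewhere."""
    with torch.no_grad():
        out = model(preprocess(frame_rgb).unsqueeze(0))["out"][0]   # (classes, H, W)
    return (out.argmax(0) == 15).to(torch.uint8)
```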
In order to determine the target object in the video file more accurately, in one embodiment, the step in which the server segments the target object in the target video frame may include:
the server obtains the foreground image in the target video frame, where the foreground image contains multiple foreground objects;
the server takes the foreground objects whose area is greater than a preset area as target objects and segments out the target objects.
Since the foreground image of a target video frame may contain multiple foreground objects, some of them have a relatively large area and are usually the objects the user is more interested in, while others have a relatively small area and are usually not of interest to the user. Therefore, after obtaining the multiple foreground objects, the server can take the foreground objects whose area is greater than the preset area as the target objects.
It should be noted that after obtaining the target object, morphological image-processing algorithms such as erosion or dilation can be used to refine the cut of the target object, so as to determine the target area where the target object is located more accurately.
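A possible realisation of this filtering-plus-morphology step with OpenCV, where the preset area and the kernel size are illustrative choices rather than values from the patent:

```python
# Sketch: keep only large foreground objects, then refine the mask with erosion/dilation.
import cv2
import numpy as np

def refine_target_mask(fg_mask: np.ndarray, preset_area: int = 5000) -> np.ndarray:
    """fg_mask: uint8 array with 1 for foreground pixels. Returns the refined 0/1 target mask."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask.astype(np.uint8), connectivity=8)
    target = np.zeros_like(fg_mask, dtype=np.uint8)
    for i in range(1, n):                               # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > preset_area:    # keep only large foreground objects
            target[labels == i] = 1
    kernel = np.ones((5, 5), np.uint8)
    target = cv2.morphologyEx(target, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    target = cv2.morphologyEx(target, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
    return target
```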
S430: the server generates the mask information and timestamp information of the target video frame, where the mask information is used to identify the regional position of the target object within the target video frame.
After obtaining the regional position of the target object in the target video frame, the server can generate the mask information and timestamp information of the target video frame. The mask information identifies the regional position of the target object in the target video frame. The mask information may take the form of a matrix whose element values are 0 or 1. For example, if the resolution of the target video frame is 640*480, the mask information can be a matrix of 640 rows and 480 columns, with each element of the matrix corresponding to one pixel of the video frame. The region formed by elements with value 1 can correspond to the target area where the target object is located, and the region formed by elements with value 0 can correspond to the region of the video frame outside the target area.
Of course, the form of the mask information is not limited to a matrix, and its content is not limited to 0 or 1; the present invention does not specifically limit the mask information.
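For illustration, one mask-file entry pairing the 0/1 matrix with its timestamp (plus the auxiliary width/height fields mentioned in the next step) might look like this; the dictionary layout is an assumption, not a format defined by the patent:

```python
# Sketch of one mask-file entry: a 0/1 matrix plus the frame's timestamp information.
import numpy as np

def make_mask_entry(target_mask: np.ndarray, timestamp_ms: int) -> dict:
    """target_mask: 0/1 matrix with one element per pixel of the target video frame."""
    return {
        "timestamp_ms": timestamp_ms,                    # timestamp information of the frame
        "height": int(target_mask.shape[0]),             # auxiliary info: mask/video dimensions
        "width": int(target_mask.shape[1]),
        "mask": target_mask.astype(np.uint8).tolist(),   # 1 = target area, 0 = elsewhere
    }
```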
S440: the server generates the mask file corresponding to the video file, where the mask file contains the mask information and timestamp information of the target video frames in the video file.
After generating the mask information and timestamp information of the target video frames, the server can generate the mask file corresponding to the video file, which contains the mask information and timestamp information of the target video frames in the video file. Of course, besides these, the mask file may also contain other auxiliary information, such as the width and height of the target area and the width and height of the video; the embodiments of the present invention do not specifically limit the information contained in the mask file.
It should be emphasized that each target video frame corresponds to one piece of timestamp information and one piece of mask information, and in the mask file the timestamp information and the mask information of a target video frame are associated with each other.
It can be understood that consecutive target video frames in a video file may be highly similar; in that case the content of the corresponding pieces of mask information is highly similar, i.e. the content of the mask information for these frames is identical. In order to reduce the network traffic required to transmit the mask file, in one embodiment, before the server sends the mask file to the terminal, the information processing method may further include:
the server detects repeated mask information in the mask file, where the repeated mask information is mask information whose content is identical for consecutive target video frames;
the server retains the first piece of the repeated mask information and deletes the rest, where the first piece of mask information represents the mask information corresponding to each of the target video frames covered by the repeated mask information.
In another embodiment, the server may retain any piece of the repeated mask information, delete the other pieces, and associate the timestamp information corresponding to each piece of the repeated mask information with the retained piece, so that the retained mask information represents the mask information corresponding to each of the target video frames covered by the repeated mask information.
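A compact sketch of this deduplication, reusing the entry layout assumed above: only the first entry of each run of identical masks is kept.

```python
# Sketch of collapsing repeated mask information for consecutive target video frames.
def deduplicate_mask_file(entries: list[dict]) -> list[dict]:
    """entries: mask-file entries sorted by timestamp. Returns the deduplicated list."""
    deduped = []
    for entry in entries:
        if deduped and deduped[-1]["mask"] == entry["mask"]:
            continue                     # identical to the previous frame's mask: drop it
        deduped.append(entry)            # the first mask of a run represents the whole run
    return deduped
```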
In a second aspect, an embodiment of the present invention provides an information processing method applied to the terminal side.
As shown in Fig. 7, the information processing method applied to a terminal according to an embodiment of the present invention may include the following steps:
S710: the terminal sends a video file request to the server, where the video file request is used to request a video file to be played.
When the terminal requests a video file from the server, it may send a video file request to the server, and the request may carry identification information of the video file, so that after receiving the request the server can determine, according to the identification information carried in the request, which video file the terminal is requesting.
S720: the terminal receives the video file, the mask file corresponding to the video file, and the barrage information corresponding to the video file sent by the server, where the mask file is used to identify the target area of a video frame in the video file, and the target area is used to characterize the target object in the video frame.
After the server has determined which video file the terminal is requesting, it can obtain the video file and the corresponding mask file. The mask file identifies the target area of each video frame in the video file, and the target area characterizes the target object in that frame; in other words, the mask file identifies the regional position of the target object within the video file.
It should be noted that the target object is usually an object the user is interested in, so during playback the user does not want the barrage information to block the target object and wants to watch it clearly. For example, when watching a video frame the user is usually most interested in a character's facial expression, so the target object may be the character's face area. Of course, in practical applications the target object may also be an entire character image, an animal image, a plant image, and so on; the embodiments of the present invention do not specifically limit the target object.
S730: the terminal plays the video file, loads the barrage information according to the mask file during playback, and displays the barrage information in the region of the video frame outside the target area.
After receiving the video file, the mask file, and the barrage information sent by the server, the terminal can play the video file, load the barrage information according to the mask file during playback, and display the barrage information in the region of each video frame outside the target area. Specifically, when playing each video frame of the video file, the terminal can determine the target area of that frame according to the mask file and display the barrage information corresponding to that frame in the region outside the target area; it can be understood that if the terminal cannot determine the target area of a frame from the mask file, the barrage information can be displayed in any region of that frame.
Since the target area is the region occupied by the target object in the video frame, and the target object is usually what the user is interested in, displaying the barrage information in the region outside the target area prevents the barrage information from blocking the target object, so the user can clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.
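How "the region outside the target area" is chosen is left open by the patent; one simple illustrative strategy is to place each barrage line in a horizontal band of the frame that contains no target-area pixels (the band layout and fixed text height below are assumptions):

```python
# Sketch of terminal-side placement: find the first horizontal band of the frame whose
# pixels all lie outside the target area and draw the barrage line there.
import numpy as np

def place_barrage_row(target_mask: np.ndarray, text_height: int = 24):
    """Return the top y coordinate of a free horizontal band, or None if every band overlaps."""
    height = target_mask.shape[0]
    for y in range(0, height - text_height, text_height):
        band = target_mask[y:y + text_height, :]
        if not band.any():              # no target-area pixel inside this band
            return y
    return None                         # frame fully covered: caller may skip or reposition the comment
```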
In one embodiment, the mask file contains the mask information and timestamp information of target video frames, and the shot type of the target video frames is a preset shot type.
The mask file and the shot types have been described in detail in the embodiments of the first aspect and are not repeated here.
As an implementation of the embodiment of the present invention, the step in which the terminal loads the barrage information according to the mask file during playback may include:
the terminal detects, during playback, whether the mask information corresponding to a target video frame in the mask file is missing;
if the mask information corresponding to the target video frame is missing, the terminal uses the mask information corresponding to the previous video frame as the mask information corresponding to the target video frame, where the previous video frame is the video frame immediately preceding the target video frame according to the timestamp information.
In this implementation, in order to reduce the network traffic required to transmit the mask file, the server detects the repeated mask information in the mask file (mask information whose content is identical for consecutive target video frames), retains the first piece of the repeated mask information, and deletes the rest, so that the first piece represents the mask information of each of those target video frames. Therefore, during playback the terminal detects whether the mask information corresponding to a target video frame in the mask file is missing; if it is missing, the terminal uses the mask information of the previous video frame, i.e. the frame immediately preceding it according to the timestamp information, as the mask information of that target video frame.
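A sketch of this fallback lookup, assuming the deduplicated mask file is held on the terminal as a timestamp-keyed dictionary (the data structure is an illustrative assumption):

```python
# Sketch of the terminal's fallback: when a frame's mask information is missing from the
# deduplicated mask file, reuse the mask of the nearest earlier timestamp.
import bisect

def mask_for_timestamp(mask_entries: dict[int, list], timestamp_ms: int):
    """mask_entries: {timestamp_ms: mask matrix}, containing only the retained entries."""
    if timestamp_ms in mask_entries:
        return mask_entries[timestamp_ms]
    timestamps = sorted(mask_entries)
    i = bisect.bisect_right(timestamps, timestamp_ms) - 1
    if i < 0:
        return None                          # no earlier mask: display barrage anywhere
    return mask_entries[timestamps[i]]       # previous frame's mask stands in for this frame
```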
As in the server-side embodiments, because the terminal displays the barrage information in the region of each video frame outside the target area identified by the mask file, the barrage information does not block the target object in the video frame; the user can clearly watch the target object while reading the barrage information, which improves the user's experience of watching the video.
In a third aspect, an embodiment of the present invention provides an information processing device applied to a server. As shown in Fig. 8, the device includes:
a request receiving module 810, configured to receive a video file request sent by a terminal, where the video file request is used to request a video file to be played;
an information obtaining module 820, configured to obtain the video file and the mask file corresponding to the video file, where the mask file is used to identify the target area of a video frame in the video file, and the target area is used to characterize the target object in the video frame;
an information sending module 830, configured to send the video file, the mask file, and the barrage information corresponding to the video file to the terminal.
As described above, because the barrage information is displayed outside the target area identified by the mask file, it does not block the target object in the video frame, and the user's experience of watching the video is improved.
Optionally, the information obtaining module comprises:
a target video frame obtaining submodule, configured to obtain a target video frame in the video file, where the shot type of the target video frame is a preset shot type;
a target object segmentation submodule, configured to segment the target object in the target video frame to obtain the regional position of the target object within the target video frame;
an information generating submodule, configured to generate mask information and timestamp information of the target video frame, where the mask information is used to identify the regional position of the target object within the target video frame;
a mask file generating submodule, configured to generate the mask file corresponding to the video file, where the mask file contains the mask information and timestamp information of the target video frames in the video file.
Optionally, the target video frame obtaining submodule is specifically configured to:
detect the shot type of each video frame in the video file, where the shot type includes a close-up shot, a close shot, a medium shot, a full shot, or a long shot;
take the video frames belonging to a preset shot type as target video frames, where the preset shot type is the close-up shot, the close shot, or the medium shot.
Optionally, the target video frame acquisition submodule, is specifically used for:
Determine the corresponding target area ratio of each target object for including in video frame, wherein the target area ratio is The area ratio of target object and the target video frame;
Each target area determined by judging whether there is the area ratio greater than preset area ratio than in, if it is, will The lens type of the video frame is determined as default lens type.
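The following is a minimal sketch of that judgment, assuming a binary mask per detected object is already available for the frame; the 0.15 preset area ratio is an arbitrary illustrative value, not one specified in the text.

```python
import numpy as np

def is_preset_lens_type(object_masks, frame_shape, preset_area_ratio=0.15):
    """Return True if any object's area ratio exceeds the preset area ratio,
    i.e. the frame counts as a close-up, close, or medium shot as described above."""
    frame_area = frame_shape[0] * frame_shape[1]
    for mask in object_masks:                        # one binary H x W mask per object
        target_area_ratio = mask.sum() / frame_area  # object area / frame area
        if target_area_ratio > preset_area_ratio:
            return True
    return False
```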
Optionally, the target object segmentation submodule is specifically configured to:
segment the target object in the target video frame using the DeepLab V3+ deep learning segmentation algorithm.
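The text names the DeepLab V3+ algorithm; as a hedged stand-in, the sketch below uses torchvision's pretrained DeepLabV3 model (torchvision >= 0.13 assumed), which follows the same encoder-decoder segmentation idea. The preprocessing constants and the person class index are assumptions tied to that pretrained model, not to the patent.

```python
# Illustrative segmentation of person-like target objects with a pretrained
# DeepLabV3 model (a stand-in for the DeepLab V3+ network named in the text).
# Class index 15 is "person" in the VOC-style label set of this pretrained model.
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()   # torchvision >= 0.13 assumed
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segment_person(frame_rgb):
    """frame_rgb: H x W x 3 uint8 array; returns a binary H x W mask."""
    batch = preprocess(frame_rgb).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)["out"][0]    # (num_classes, H, W)
    labels = logits.argmax(0)
    return (labels == 15).cpu().numpy()    # person pixels -> target area
```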
Optionally, the target object segmentation submodule is specifically configured to:
obtain the foreground image in the target video frame, where the foreground image includes multiple foreground objects;
take an object whose area is greater than a preset area among the multiple foreground objects as the target object, and segment out the target object.
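A sketch of this optional path, assuming foreground extraction with OpenCV background subtraction and connected-component analysis applied frame by frame; the preset area of 5000 pixels is an illustrative value.

```python
import cv2
import numpy as np

back_sub = cv2.createBackgroundSubtractorMOG2()    # assumed foreground extractor

def extract_large_foreground_objects(frame_bgr, preset_area=5000):
    """Return a binary mask covering foreground objects whose area exceeds
    the preset area (these are treated as the target objects)."""
    fg_mask = back_sub.apply(frame_bgr)             # foreground image
    fg_mask = (fg_mask > 127).astype(np.uint8)      # drop shadow / noise labels
    num, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask)
    target_mask = np.zeros_like(fg_mask)
    for i in range(1, num):                         # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > preset_area:
            target_mask[labels == i] = 1            # keep only large objects
    return target_mask
```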
Optionally, the apparatus further includes:
a masking-out information detection module, configured to detect repeated masking-out information in the masking-out file before the masking-out file sending module sends the masking-out file to the terminal, where the repeated masking-out information is masking-out information whose content is identical across consecutive target video frames;
a masking-out information processing module, configured to retain the first piece of masking-out information in the repeated masking-out information and delete the remaining pieces, where the first piece of masking-out information represents the masking-out information corresponding to each target video frame in the repeated masking-out information.
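A minimal sketch of this de-duplication step, reusing the illustrative JSON entries from the earlier mask-file sketch: within a run of consecutive frames whose masking-out information is identical, only the first entry is kept.

```python
def deduplicate_mask_entries(entries):
    """Keep only the first entry of each run of consecutive target video frames
    whose masking-out information is identical; the retained entry stands for
    every frame in that run."""
    deduped = []
    previous_mask = None
    for entry in entries:                      # entries ordered by timestamp_ms
        if entry["mask_rle"] != previous_mask:
            deduped.append(entry)              # first occurrence of this mask
            previous_mask = entry["mask_rle"]
        # otherwise: repeated masking-out information, dropped
    return deduped
```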
In a fourth aspect, an embodiment of the present invention provides an information processing apparatus applied to a terminal. As shown in Figure 9, the apparatus includes:
a request sending module 910, configured to send a video file request to a server, where the video file request is used to request a video file to be played;
an information receiving module 920, configured to receive the video file, the masking-out file corresponding to the video file, and the barrage information corresponding to the video file sent by the server, where the masking-out file is used to identify the target area of a video frame in the video file, and the target area is used to characterize the target object in the video frame;
a video file playing module 930, configured to play the video file, where the terminal loads the barrage information according to the masking-out file during playback, and the barrage information is displayed in the region of the video frame other than the target area.
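To illustrate how a terminal could use the mask while drawing barrage text, the sketch below composites a pre-rendered barrage layer onto the frame only where the mask is zero. The compositing pipeline and array layout are assumptions for this example.

```python
import numpy as np

def composite_barrage(frame_rgb, barrage_layer_rgb, barrage_alpha, target_mask):
    """Overlay the barrage layer on the frame everywhere except the target area.

    frame_rgb, barrage_layer_rgb: H x W x 3 uint8 arrays
    barrage_alpha: H x W float array in [0, 1] (0 where no text was drawn)
    target_mask:   H x W binary array, 1 inside the target area
    """
    visible_alpha = barrage_alpha * (1 - target_mask)   # hide text over the target object
    visible_alpha = visible_alpha[..., None]            # broadcast over RGB channels
    out = frame_rgb * (1 - visible_alpha) + barrage_layer_rgb * visible_alpha
    return out.astype(np.uint8)
```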
Optionally, the masking-out file includes the masking-out information and timestamp information of a target video frame, where the lens type of the target video frame is a preset lens type.
Optionally, the video file playing module is specifically configured to:
detect, during playback of the video file, whether the masking-out information corresponding to the target video frame is missing from the masking-out file;
if the masking-out information corresponding to the target video frame is missing, use the masking-out information of the previous video frame as the masking-out information of the target video frame, where the previous video frame is the video frame immediately preceding the target video frame in the timestamp information.
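A sketch of this fallback during playback, assuming the mask entries are keyed by timestamp as in the earlier examples: when no masking-out information exists for the current frame, the most recent earlier entry is reused.

```python
import bisect

class MaskLookup:
    """Resolve the masking-out information for a playback timestamp, falling back
    to the previous video frame's mask when the current frame's entry is missing
    (for example, because duplicates were pruned on the server side)."""

    def __init__(self, entries):
        # entries sorted by timestamp_ms, as produced by the earlier sketch
        self.timestamps = [e["timestamp_ms"] for e in entries]
        self.entries = entries

    def mask_for(self, timestamp_ms):
        idx = bisect.bisect_right(self.timestamps, timestamp_ms) - 1
        if idx < 0:
            return None                # no mask yet: display barrage normally
        return self.entries[idx]       # exact match, or the previous frame's mask
```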
In a fifth aspect, an embodiment of the present invention further provides a server. As shown in Figure 10, the server includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with each other through the communication bus 1004;
the memory 1003 is configured to store a computer program;
the processor 1001 is configured to implement the information processing method described in the first aspect when executing the program stored in the memory 1003.
The communication bus mentioned for the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above server and other devices.
The memory may include a random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In a sixth aspect, an embodiment of the present invention further provides a terminal. As shown in Figure 11, the terminal includes a processor 1101, a communication interface 1102, a memory 1103, and a communication bus 1104, where the processor 1101, the communication interface 1102, and the memory 1103 communicate with each other through the communication bus 1104;
the memory 1103 is configured to store a computer program;
the processor 1101 is configured to implement the information processing method described in the second aspect when executing the program stored in the memory 1103.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above terminal and other devices.
The memory may include a random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In a seventh aspect, another embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the information processing method of the first aspect in the above embodiments.
Eighth aspect additionally provides a kind of computer readable storage medium in another embodiment provided by the invention, should Instruction is stored in computer readable storage medium, when run on a computer, so that computer executes above-described embodiment Information processing method shown in middle second aspect.
In a ninth aspect, another embodiment of the present invention further provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the information processing method of the first aspect in the above embodiments.
In a tenth aspect, another embodiment of the present invention further provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the information processing method of the second aspect in the above embodiments.
The above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Each embodiment in this specification is described in a related manner, and for the same or similar parts between the embodiments, reference may be made to each other; each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus, server, terminal, storage medium, and computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to the partial descriptions of the method embodiments for the relevant parts.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (22)

1. An information processing method, characterized in that the method includes:
receiving, by a server, a video file request sent by a terminal, where the video file request is used to request a video file to be played;
obtaining, by the server, the video file and a masking-out file corresponding to the video file, where the masking-out file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
sending, by the server, the video file, the masking-out file, and barrage information corresponding to the video file to the terminal.
2. The method according to claim 1, characterized in that the step of obtaining, by the server, the masking-out file corresponding to the video file includes:
obtaining, by the server, a target video frame in the video file, where the lens type of the target video frame is a preset lens type;
segmenting, by the server, the target object in the target video frame to obtain a regional location of the target object in the target video frame;
generating, by the server, masking-out information and timestamp information of the target video frame, where the masking-out information is used to identify the regional location of the target object in the target video frame;
generating, by the server, the masking-out file corresponding to the video file, where the masking-out file contains the masking-out information and timestamp information of the target video frames in the video file.
3. The method according to claim 2, characterized in that the step of obtaining, by the server, the target video frame in the video file includes:
detecting, by the server, the lens type of each video frame in the video file, where the lens type includes: a close-up shot type, a close shot type, a medium shot type, a full shot type, or a long shot type;
taking, by the server, a video frame belonging to the preset lens type as a target video frame, where the preset lens type is the close-up shot type, the close shot type, or the medium shot type.
4. The method according to claim 3, characterized in that the process of determining, by the server, the lens type of any video frame in the video file includes:
determining a target area ratio corresponding to each target object included in the video frame, where the target area ratio is the ratio of the area of the target object to the area of the video frame;
judging whether any of the determined target area ratios is greater than a preset area ratio, and if so, determining the lens type of the video frame as the preset lens type.
5. The method according to claim 3, characterized in that the step of segmenting, by the server, the target object in the target video frame includes:
segmenting the target object in the target video frame using a DeepLab V3+ deep learning segmentation algorithm.
6. The method according to any one of claims 1 to 5, characterized in that the step of segmenting, by the server, the target object in the target video frame includes:
obtaining, by the server, a foreground image in the target video frame, where the foreground image includes multiple foreground objects;
taking, by the server, an object whose area is greater than a preset area among the multiple foreground objects as the target object, and segmenting out the target object.
7. The method according to any one of claims 1 to 5, characterized in that, before the server sends the masking-out file to the terminal, the method further includes:
detecting, by the server, repeated masking-out information in the masking-out file, where the repeated masking-out information is masking-out information whose content is identical across consecutive target video frames;
retaining, by the server, the first piece of masking-out information in the repeated masking-out information and deleting the remaining pieces of the repeated masking-out information, where the first piece of masking-out information represents the masking-out information corresponding to each target video frame in the repeated masking-out information.
8. An information processing method, characterized in that the method includes:
sending, by a terminal, a video file request to a server, where the video file request is used to request a video file to be played;
receiving, by the terminal, the video file, a masking-out file corresponding to the video file, and barrage information corresponding to the video file sent by the server, where the masking-out file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
playing, by the terminal, the video file, where the terminal loads the barrage information according to the masking-out file during playback, and the barrage information is displayed in a region of the video frame other than the target area.
9. The method according to claim 8, characterized in that the masking-out file includes masking-out information and timestamp information of a target video frame, and the lens type of the target video frame is a preset lens type.
10. The method according to claim 9, characterized in that the loading, by the terminal, of the barrage information according to the masking-out file during playback of the video file includes:
detecting, by the terminal during playback of the video file, whether the masking-out information corresponding to the target video frame is missing from the masking-out file;
if the masking-out information corresponding to the target video frame is missing, using, by the terminal, the masking-out information corresponding to the previous video frame as the masking-out information corresponding to the target video frame, where the previous video frame is the video frame immediately preceding the target video frame in the timestamp information.
11. An information processing apparatus, characterized in that the information processing apparatus is applied to a server, and the apparatus includes:
a request receiving module, configured to receive a video file request sent by a terminal, where the video file request is used to request a video file to be played;
a data obtaining module, configured to obtain the video file and a masking-out file corresponding to the video file, where the masking-out file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
an information sending module, configured to send the video file, the masking-out file, and barrage information corresponding to the video file to the terminal.
12. The apparatus according to claim 11, characterized in that the data obtaining module includes:
a target video frame acquisition submodule, configured to obtain a target video frame in the video file, where the lens type of the target video frame is a preset lens type;
a target object segmentation submodule, configured to segment the target object in the target video frame to obtain a regional location of the target object in the target video frame;
an information generation submodule, configured to generate masking-out information and timestamp information of the target video frame, where the masking-out information is used to identify the regional location of the target object in the target video frame;
a masking-out file generation submodule, configured to generate the masking-out file corresponding to the video file, where the masking-out file contains the masking-out information and timestamp information of the target video frames in the video file.
13. The apparatus according to claim 12, characterized in that the target video frame acquisition submodule is specifically configured to:
detect the lens type of each video frame in the video file, where the lens type includes: a close-up shot type, a close shot type, a medium shot type, a full shot type, or a long shot type;
take a video frame belonging to the preset lens type as a target video frame, where the preset lens type is the close-up shot type, the close shot type, or the medium shot type.
14. The apparatus according to claim 13, characterized in that the target video frame acquisition submodule is specifically configured to:
determine a target area ratio corresponding to each target object included in the video frame, where the target area ratio is the ratio of the area of the target object to the area of the video frame;
judge whether any of the determined target area ratios is greater than a preset area ratio, and if so, determine the lens type of the video frame as the preset lens type.
15. The apparatus according to claim 13, characterized in that the target object segmentation submodule is specifically configured to:
segment the target object in the target video frame using a DeepLab V3+ deep learning segmentation algorithm.
16. The apparatus according to any one of claims 11 to 15, characterized in that the target object segmentation submodule is specifically configured to:
obtain a foreground image in the target video frame, where the foreground image includes multiple foreground objects;
take an object whose area is greater than a preset area among the multiple foreground objects as the target object, and segment out the target object.
17. The apparatus according to any one of claims 11 to 15, characterized in that the apparatus further includes:
a masking-out information detection module, configured to detect repeated masking-out information in the masking-out file before the masking-out file sending module sends the masking-out file to the terminal, where the repeated masking-out information is masking-out information whose content is identical across consecutive target video frames;
a masking-out information processing module, configured to retain the first piece of masking-out information in the repeated masking-out information and delete the remaining pieces of the repeated masking-out information, where the first piece of masking-out information represents the masking-out information corresponding to each target video frame in the repeated masking-out information.
18. An information processing apparatus, characterized in that the information processing apparatus is applied to a terminal, and the apparatus includes:
a request sending module, configured to send a video file request to a server, where the video file request is used to request a video file to be played;
an information receiving module, configured to receive the video file, the masking-out file corresponding to the video file, and the barrage information corresponding to the video file sent by the server, where the masking-out file is used to identify a target area of a video frame in the video file, and the target area is used to characterize a target object in the video frame;
a video file playing module, configured to play the video file, where the terminal loads the barrage information according to the masking-out file during playback of the video file, and the barrage information is displayed in a region of the video frame other than the target area.
19. The apparatus according to claim 18, characterized in that the masking-out file includes masking-out information and timestamp information of a target video frame, and the lens type of the target video frame is a preset lens type.
20. The apparatus according to claim 19, characterized in that the video file playing module is specifically configured to:
detect, during playback of the video file, whether the masking-out information corresponding to the target video frame is missing from the masking-out file;
if the masking-out information corresponding to the target video frame is missing, use the masking-out information corresponding to the previous video frame as the masking-out information corresponding to the target video frame, where the previous video frame is the video frame immediately preceding the target video frame in the timestamp information.
21. A server, characterized by including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1 to 7 when executing the program stored in the memory.
22. A terminal, characterized by including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 8 to 10 when executing the program stored in the memory.
CN201811087974.2A 2018-09-18 2018-09-18 A kind of information processing method and device Pending CN109302619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811087974.2A CN109302619A (en) 2018-09-18 2018-09-18 A kind of information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811087974.2A CN109302619A (en) 2018-09-18 2018-09-18 A kind of information processing method and device

Publications (1)

Publication Number Publication Date
CN109302619A true CN109302619A (en) 2019-02-01

Family

ID=65163559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811087974.2A Pending CN109302619A (en) 2018-09-18 2018-09-18 A kind of information processing method and device

Country Status (1)

Country Link
CN (1) CN109302619A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105307051A (en) * 2015-05-04 2016-02-03 维沃移动通信有限公司 Video processing method and device
CN105430512A (en) * 2015-11-06 2016-03-23 腾讯科技(北京)有限公司 Method and device for displaying information on video image
CN106303731A (en) * 2016-08-01 2017-01-04 北京奇虎科技有限公司 The display packing of barrage and device
CN108124185A (en) * 2016-11-28 2018-06-05 广州华多网络科技有限公司 A kind of barrage display methods, device and terminal
US20180191987A1 (en) * 2017-01-04 2018-07-05 International Business Machines Corporation Barrage message processing
CN107181976A (en) * 2017-04-28 2017-09-19 华为技术有限公司 A kind of barrage display methods and electronic equipment

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862414A (en) * 2019-03-22 2019-06-07 武汉斗鱼鱼乐网络科技有限公司 A kind of masking-out barrage display methods, device and server
CN109862414B (en) * 2019-03-22 2021-10-15 武汉斗鱼鱼乐网络科技有限公司 Mask bullet screen display method and device and server
CN110225365A (en) * 2019-04-23 2019-09-10 北京奇艺世纪科技有限公司 A kind of method, server and the client of the interaction of masking-out barrage
CN111787240A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Video generation method, device and computer readable storage medium
CN111954060B (en) * 2019-05-17 2022-05-10 上海哔哩哔哩科技有限公司 Barrage mask rendering method, computer device and readable storage medium
CN111954082A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Mask file structure, mask file reading method, computer device and readable storage medium
CN111954052A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Method for displaying bullet screen information, computer equipment and readable storage medium
US11871086B2 (en) 2019-05-17 2024-01-09 Shanghai Bilibili Technology Co., Ltd. Method of displaying comment information, computing device, and readable storage medium
CN111954053B (en) * 2019-05-17 2023-09-05 上海哔哩哔哩科技有限公司 Method for acquiring mask frame data, computer equipment and readable storage medium
CN111954053A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Method for acquiring mask frame data, computer device and readable storage medium
CN111954081B (en) * 2019-05-17 2022-12-09 上海哔哩哔哩科技有限公司 Method for acquiring mask data, computer device and readable storage medium
CN111954060A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Barrage mask rendering method, computer device and readable storage medium
CN111954052B (en) * 2019-05-17 2022-04-05 上海哔哩哔哩科技有限公司 Method for displaying bullet screen information, computer equipment and readable storage medium
CN111954081A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Method for acquiring mask data, computer device and readable storage medium
US11513937B2 (en) 2019-06-19 2022-11-29 Shanghai Bilibili Technology Co., Ltd. Method and device of displaying video comments, computing device, and readable storage medium
CN112118484A (en) * 2019-06-19 2020-12-22 上海哔哩哔哩科技有限公司 Video bullet screen display method and device, computer equipment and readable storage medium
US12073621B2 (en) 2019-06-28 2024-08-27 Tencent Technology (Shenzhen) Company Limited Method and apparatus for detecting information insertion region, electronic device, and storage medium
WO2020259510A1 (en) * 2019-06-28 2020-12-30 腾讯科技(深圳)有限公司 Method and apparatus for detecting information embedding region, electronic device, and storage medium
CN110300118B (en) * 2019-07-09 2020-09-25 联想(北京)有限公司 Streaming media processing method, device and storage medium
CN110300118A (en) * 2019-07-09 2019-10-01 联想(北京)有限公司 Streaming Media processing method, device and storage medium
US11928863B2 (en) * 2019-07-19 2024-03-12 Tencent Technology (Shenzhen) Company Limited Method, apparatus, device, and storage medium for determining implantation location of recommendation information
CN110248209A (en) * 2019-07-19 2019-09-17 湖南快乐阳光互动娱乐传媒有限公司 Transmission method and system for bullet screen anti-shielding mask information
WO2021012837A1 (en) * 2019-07-19 2021-01-28 腾讯科技(深圳)有限公司 Method and apparatus for determining recommendation information implantation position, device and storage medium
US20210350136A1 (en) * 2019-07-19 2021-11-11 Tencent Technology (Shenzhen) Company Limited Method, apparatus, device, and storage medium for determining implantation location of recommendation information
CN110535839A (en) * 2019-08-15 2019-12-03 咪咕文化科技有限公司 Information processing method, device, system and computer readable storage medium
CN110519245A (en) * 2019-08-15 2019-11-29 咪咕文化科技有限公司 Multimedia file processing method, communication equipment and communication system
CN112492347A (en) * 2019-09-12 2021-03-12 上海哔哩哔哩科技有限公司 Method for processing information flow and displaying bullet screen information and information flow processing system
CN112492324A (en) * 2019-09-12 2021-03-12 上海哔哩哔哩科技有限公司 Data processing method and system
US11641493B2 (en) 2019-10-21 2023-05-02 Beijing Dajia Internet Information Technology Co., Ltd. Method and electronic device for displaying bullet screens
CN110798726A (en) * 2019-10-21 2020-02-14 北京达佳互联信息技术有限公司 Bullet screen display method and device, electronic equipment and storage medium
CN110913241B (en) * 2019-11-01 2022-09-30 北京奇艺世纪科技有限公司 Video retrieval method and device, electronic equipment and storage medium
CN110913241A (en) * 2019-11-01 2020-03-24 北京奇艺世纪科技有限公司 Video retrieval method and device, electronic equipment and storage medium
CN113132786A (en) * 2019-12-30 2021-07-16 深圳Tcl数字技术有限公司 User interface display method and device and readable storage medium
CN113315924A (en) * 2020-02-27 2021-08-27 北京字节跳动网络技术有限公司 Image special effect processing method and device
CN111641870A (en) * 2020-06-05 2020-09-08 北京爱奇艺科技有限公司 Video playing method and device, electronic equipment and computer storage medium
CN111641870B (en) * 2020-06-05 2022-04-22 北京爱奇艺科技有限公司 Video playing method and device, electronic equipment and computer storage medium
US11863801B2 (en) 2020-08-04 2024-01-02 Shanghai Bilibili Technology Co., Ltd. Method and device for generating live streaming video data and method and device for playing live streaming video
CN112423110A (en) * 2020-08-04 2021-02-26 上海哔哩哔哩科技有限公司 Live video data generation method and device and live video playing method and device
CN111935508A (en) * 2020-08-13 2020-11-13 百度时代网络技术(北京)有限公司 Information processing and acquiring method and device, electronic equipment and storage medium
CN112637670A (en) * 2020-12-15 2021-04-09 上海哔哩哔哩科技有限公司 Video generation method and device
CN115734006A (en) * 2021-08-26 2023-03-03 武汉斗鱼网络科技有限公司 Mask file processing method and device and processing equipment
CN114500726A (en) * 2021-12-27 2022-05-13 努比亚技术有限公司 Charging video display method, mobile terminal and storage medium

Similar Documents

Publication Publication Date Title
CN109302619A (en) A kind of information processing method and device
Wang et al. Utility-driven adaptive preprocessing for screen content video compression
US10368123B2 (en) Information pushing method, terminal and server
CN104519401B (en) Video segmentation point preparation method and equipment
WO2020056903A1 (en) Information generating method and device
US20160050465A1 (en) Dynamically targeted ad augmentation in video
CN110691259B (en) Video playing method, system, device, electronic equipment and storage medium
CN107679249A (en) Friend recommendation method and apparatus
CN111654700B (en) Privacy mask processing method and device, electronic equipment and monitoring system
CN108961157B (en) Picture processing method, picture processing device and terminal equipment
CN104321762B (en) Measure the method and system of page phase time
Shi et al. A fast and robust key frame extraction method for video copyright protection
CN104427284B (en) Process the method and apparatus of sport video
CN105828146A (en) Video image interception method, terminal and server
WO2017084306A1 (en) Method and apparatus for playing key information of video in browser of mobile device
CN111046230B (en) Content recommendation method and device, electronic equipment and storable medium
CN109982106A (en) A kind of video recommendation method, server, client and electronic equipment
CN110740290A (en) Monitoring video previewing method and device
US10445585B1 (en) Episodic image selection
CN115134677A (en) Video cover selection method and device, electronic equipment and computer storage medium
CN111277904B (en) Video playing control method and device and computing equipment
CN114117120A (en) Video file intelligent index generation system and method based on content analysis
CN117459662B (en) Video playing method, video identifying method, video playing device, video playing equipment and storage medium
CN112055258B (en) Time delay testing method and device for loading live broadcast picture, electronic equipment and storage medium
CN114143590B (en) Video playing method, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190201