CN110929098B - Video data processing method and device, electronic equipment and storage medium
- Publication number: CN110929098B (application number CN201911111883.2A)
- Authority: CN (China)
- Prior art keywords: text, target video, sentence, similarity, content
- Legal status: Active
Classifications
- G06F16/7844: Information retrieval of video data characterised by metadata automatically derived from the content, using original textual content or text extracted from visual content or a transcript of audio data
- G06F16/7867: Information retrieval of video data characterised by metadata, using information manually generated, e.g. tags, keywords, comments, title and artist information
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
Abstract
The invention provides a video data processing method and device, an electronic device and a storage medium. The method comprises the following steps: acquiring a title text and a content text of a target video; performing sentence smoothness detection on the content text to obtain a sentence smoothness corresponding to the content text; when it is determined, based on the sentence smoothness, that a descriptive segment for describing a video picture exists in the target video, acquiring a plurality of clause texts corresponding to the content text, the descriptive segment comprising a sub-segment whose content subject is independent of the content subject of the target video; performing similarity matching between each clause text and the title text to obtain a plurality of corresponding similarity values; and determining, based on the similarity values, the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video. With the method and device, whether the duration of the sub-segment in the descriptive segment of the target video is too long can be effectively identified.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for processing video data, an electronic device, and a storage medium.
Background
With the popularization of mobile terminals and the development of mobile social media, short videos, as a main product line of current information streams, have become one of the important ways for users to obtain information and entertainment. To help a user better understand the content of a short video, the short video usually includes a descriptive segment (i.e., a commentary) introducing the video content. However, the commentary may contain a descriptive sub-segment (i.e., padding) that is irrelevant to the video content, and the related art cannot determine the relative relationship between the padding duration and the video duration, and thus cannot effectively identify whether the padding of a short video is too long, which degrades the user experience.
Disclosure of Invention
The embodiment of the invention provides a video data processing method and device, electronic equipment and a storage medium, which can effectively identify whether the padding of a short video is too long.
The embodiment of the invention provides a video data processing method, which comprises the following steps:
acquiring a title text and a content text of a target video;
detecting the sentence smoothness of the content text to obtain the sentence smoothness corresponding to the content text;
based on the sentence smoothness, when a descriptive segment for describing a video picture exists in the target video, acquiring a plurality of clause texts corresponding to the content text; the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video;
respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the similarity value.
An embodiment of the present invention provides a video data processing apparatus, including:
the first acquisition module is used for acquiring a title text and a content text of a target video;
the detection module is used for detecting the sentence smoothness of the content text to obtain the sentence smoothness corresponding to the content text;
a second obtaining module, configured to obtain, when it is determined based on the sentence smoothness that a descriptive segment for describing a video picture exists in the target video, a plurality of clause texts corresponding to the content text; the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video;
the matching module is used for respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values;
and the determining module is used for determining the relative relation between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the similarity value.
In the above scheme, the detection module is further configured to perform clause processing on the content text to obtain a plurality of corresponding clause texts;
inputting each clause text into a sentence smoothness detection model respectively to obtain a first sentence smoothness score corresponding to the clause text;
and weighting the first sentence smoothness scores corresponding to the clause texts to obtain a second sentence smoothness score corresponding to the content text, wherein the second sentence smoothness score is used for representing the sentence smoothness of the content text.
In the above scheme, the second obtaining module is further configured to obtain a statement smoothness reference score;
acquiring the ratio of the second sentence smoothness score to the sentence smoothness reference score;
when the ratio is larger than a ratio threshold, determining that a descriptive segment for describing a video picture exists in the target video.
In the above scheme, the matching module is further configured to perform vector conversion on the title text to obtain a corresponding title vector;
respectively carrying out vector conversion on each clause text to obtain corresponding text vectors;
and respectively carrying out similarity matching on each text vector and the title vector to obtain corresponding similarity values.
In the above solution, the determining module is further configured to rank the similarity values based on an order of each of the clause texts in the content text to obtain a first sequence including a first number of similarity values and a second sequence including a second number of similarity values;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the first sequence and the second sequence.
In the foregoing solution, the determining module is further configured to extract a maximum similarity value from the first sequence as a first similarity value, and extract a maximum similarity value from the second sequence as a second similarity value;
comparing the first similarity value with the second similarity value to obtain a comparison result;
and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the comparison result.
In the foregoing scheme, the determining module is further configured to perform weighted averaging on the similarity values of the first number to obtain a corresponding third similarity value, and perform weighted averaging on the similarity values of the second number to obtain a corresponding fourth similarity value;
comparing the third similarity value with the fourth similarity value to obtain a comparison result;
and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the comparison result.
In the above scheme, the determining module is further configured to sort the similarity values based on the sequence of each clause text in the content text to obtain a corresponding similarity value sequence;
sequentially comparing the similarity values in the similarity value sequence with a similarity threshold, and determining the sequence number, in the similarity value sequence, of the first similarity value exceeding the similarity threshold;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the sequence number and the similarity value sequence.
In the above scheme, the apparatus further includes a recommending module, configured to obtain a ratio of a duration of the sub-segment in the descriptive segment to a duration of the target video;
and when the ratio does not exceed a proportion threshold, adding the target video into a video library to be recommended.
An embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the video data processing method provided by the embodiment of the invention when executing the executable instructions stored in the memory.
An embodiment of the present invention provides a storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the video data processing method provided in the embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
sentence smoothness detection is performed on the content text of a target video to determine whether a descriptive segment for describing a video picture exists in the target video; when the descriptive segment exists, clause processing is performed on the content text to obtain a plurality of clause texts corresponding to the content text, similarity matching is performed between each clause text and the title text of the target video, and the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video is determined, so that whether the padding of the target video is too long can be effectively identified.
Drawings
Fig. 1 is a schematic diagram of an alternative architecture of a video data processing system according to an embodiment of the present invention;
fig. 2 is an alternative structural schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an alternative video data processing method according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of obtaining a semantic representation of a text according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target video recommendation system according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of an alternative video data processing method according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of video watching according to an embodiment of the present invention;
fig. 8 is a schematic flow chart of an alternative video data processing method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third/fourth" merely distinguish similar objects and do not denote a particular ordering of the objects. It is understood that "first/second/third/fourth" may be interchanged in a particular order or sequence where permissible, so that the embodiments of the present invention described herein can be implemented in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the implementation method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Computer Vision (CV) technology is a science that studies how to make a machine "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as identification, tracking and measurement on a target, and further performs image processing so that the processed image is more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Key technologies of Speech Technology are Automatic Speech Recognition (ASR), speech synthesis (Text To Speech, TTS) and voiceprint recognition. Enabling computers to listen, see, speak and feel is the development direction of future human-computer interaction, and speech is expected to become one of the most promising modes of human-computer interaction.
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies various theories and methods that enable efficient communication between a person and a computer using natural language. Natural language processing is a science integrating linguistics, computer science and mathematics. Therefore, the research in this field will involve natural language, i.e. the language that people use everyday, so it is closely related to the research of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, robotic question and answer, knowledge mapping, and the like.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It specially studies how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and formal education learning.
Automatic driving technology generally comprises high-precision maps, environment perception, behavior decision-making, path planning, motion control and other technologies, and autonomous driving technology has broad application prospects.
With the research and development of artificial intelligence technology, it has been studied and applied in many fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to technologies such as artificial intelligence natural language processing and the like, and is specifically explained by the following embodiment:
the inventor of the invention finds that the technologies of realizing matching between texts in the related technology mainly include similarity calculation, cross matching, interactive matching and the like in the process of implementing the embodiment of the invention. The similarity calculation is mainly a method of vectorizing texts to be matched and then calculating the similarity between vectors corresponding to the texts, but the method is more suitable for the cases that all the texts are short sentences, because the vectors of the short sentences can sufficiently represent semantic information. The cross matching needs to realize local information matching between matched texts, and has a remarkable effect on a local information sensitive Natural Language Processing (NLP) task. Interactive matching generally uses a twin network to perform information reading on texts needing matching, and information sharing is realized among structural layers, so that the interactive matching is suitable for matching between long texts.
Because the title of a short video is a short text (generally within 40 words) while the "audio-to-text" transcript (i.e., the content text) of a short video is a long text (generally over 300 words), the above text matching methods cannot be directly applied to matching between a long text and a short text. Such long-short text matching is the core difficulty of current matching algorithms; in the scenario of identifying short-video padding, how to construct an appropriate matching method based on the title text and the content text of the short video is the key to the whole problem, and the industry currently has no mature method for solving the problem of overly long short-video padding.
In view of this, an embodiment of the present invention provides a video data processing method: sentence smoothness detection is performed on the content text of a target video to determine whether a descriptive segment for describing a video picture exists in the target video; when the descriptive segment exists, clause processing is performed on the content text to obtain a plurality of clause texts corresponding to the content text, and similarity matching is performed between each clause text and the title text of the target video to determine the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video. Proper matching between a long text and a short text is thereby realized, and whether the padding of the target video is too long can be effectively identified.
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a video data processing system 100 according to an embodiment of the present invention, in order to support an exemplary application, a user terminal 400 (illustratively, a terminal 400-1, a terminal 400-2, and a terminal 400-N) is connected to an information streaming platform 200 through a network 300, where the terminal 400-1 is located at a short video distribution side, the terminal 400-2 and the terminal 400-N are located at a short video receiving side, and the network 300 may be a wide area network or a local area network, or a combination of both, and uses a wireless link to implement data transmission.
As shown in fig. 1, a user opens an application client of a user terminal 400-1, issues a recorded target video, and sends video data of the target video to the information streaming platform 200, where the video data includes title text and content text. The information flow platform 200 is configured to obtain a title text and a content text of a target video, perform sentence smoothness detection on the content text to obtain a sentence smoothness of the corresponding content text, and obtain a plurality of clause texts corresponding to the content text when it is determined that a descriptive section for describing a video picture exists in the target video based on the sentence smoothness, where the descriptive section includes a sub-section whose content theme is independent of the content theme of the target video; respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values; and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the similarity value.
In practical applications, whether the padding duration of the target video is too long can be determined based on the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video. For example, the ratio of the duration of the sub-segment in the descriptive segment to the duration of the target video can be obtained; when the obtained ratio does not exceed a proportion threshold, the target video is determined to be a video whose padding is not too long, and the target video is added to a video library to be recommended to the terminals 400-2 to 400-N corresponding to other users.
Referring to fig. 2, fig. 2 is an optional schematic structural diagram of an electronic device 200 according to an embodiment of the present invention. Taking the electronic device as the information flow platform 200 as an example, the electronic device 200 shown in fig. 2 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in the electronic device 200 are coupled together by a bus system 240. It will be appreciated that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 2.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 250 described in embodiments of the invention is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating with other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: Bluetooth, wireless compatibility authentication (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 253 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the video data processing apparatus provided by the embodiments of the present invention may be implemented in software, and fig. 2 shows a video data processing apparatus 255 stored in a memory 250, which may be software in the form of programs and plug-ins, and includes the following software modules: the first obtaining module 2551, the detecting module 2552, the second obtaining module 2553, the matching module 2554 and the determining module 2555 are logical and thus can be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be explained below.
In other embodiments, the video data processing apparatus provided in the embodiments of the present invention may be implemented in hardware. For example, the video data processing apparatus may be a processor in the form of a hardware decoding processor, programmed to execute the video data processing method provided in the embodiments of the present invention; for instance, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The following describes the video data processing method according to an embodiment of the present invention in connection with an exemplary application in which the method is implemented by an information flow platform.
Referring to fig. 3, fig. 3 is an alternative flowchart of a video data processing method according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 3.
Step 301: and the information flow platform acquires the title text and the content text of the target video.
An information stream is a stream of content that can be scrolled through; it appears in tiles that are similar in appearance and displayed next to each other. For example, an information stream can be an editorial stream (e.g., articles or news listings) or a product-detail stream (e.g., product listings, service listings, etc.). In practical applications, every user of a news client is exposed to some extent to the information-stream product form; information-stream products carry massive amounts of information, continuously refresh new, real-time content, and can provide appropriate content for the user in an appropriate scenario.
In practical implementation, the information-stream products received by users, such as viewpoint videos, are recommended through manual operation or recommendation algorithms. In the big-data era, because the content updated by media is massive, manual operation is often limited to hot content; therefore, an information flow platform must rely on the target video data in the information stream to construct an algorithm model for recommending information-stream content. The information flow platform uses specific field information in the target video data, such as the title text and the content text, to construct an algorithm model to judge whether the target video is suitable for being recommended to a user, where the content text corresponds to the "audio-to-text" transcript of the target video and is obtained by performing text conversion on the audio data of the target video.
Step 302: and carrying out sentence smoothness detection on the content text to obtain the sentence smoothness of the corresponding content text.
In practical applications, the target video comprises a video picture and possibly a descriptive segment for describing the video picture. The content text of a target video containing a descriptive segment is smooth and forms complete sentences, whereas the content text of a target video not containing a descriptive segment corresponds to the recognition result of background sound, so its sentences are not smooth and cannot form complete sentences.
In actual implementation, before performing sentence smoothness detection on a content text, a sentence smoothness detection model needs to be trained. When training the model, a large number of sample texts are used as the training set, where the sample texts are all standard texts containing descriptive segments, and the sentence smoothness detection model is obtained by training on the sample texts with language-model training tools such as the Stanford Research Institute Language Modeling Toolkit (SRILM) and KenLM.
In some embodiments, the information flow platform may perform sentence smoothness detection on the content text in the following manner to obtain a sentence smoothness corresponding to the content text:
performing clause processing on the content text to obtain a plurality of corresponding clause texts; respectively inputting each clause text into a sentence smoothness detection model to obtain a first sentence smoothness score corresponding to the clause text; and weighting the first sentence smoothness scores corresponding to the clause texts to obtain a second sentence smoothness score corresponding to the content text, where the second sentence smoothness score is used for representing the sentence smoothness of the content text.
The method includes the steps of dividing a relatively long content text into a plurality of short clause texts based on punctuation marks in the content text, inputting the obtained clause texts into a trained sentence smoothness detection model to obtain a plurality of corresponding sentence smoothness scores, and taking an average value of the sentence smoothness scores corresponding to the clause texts as the smoothness score of the content text.
For example, suppose that the content text of a target video is split into 8 clause texts, and the 8 clause texts are respectively input into the trained sentence smoothness detection model to obtain the sentence smoothness scores of the corresponding clause texts: [S_1, S_2, S_3, S_4, S_5, S_6, S_7, S_8]. Then the sentence smoothness score of the content text of the target video is S = (S_1 + ... + S_8) / 8, and the sentence smoothness score S is used for representing the sentence smoothness of the content text of the target video.
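As an illustrative sketch only (not part of the patent disclosure), the clause splitting and averaging described above might look as follows in Python, where score_clause is a hypothetical stand-in for the trained sentence smoothness model (e.g., an SRILM or KenLM language-model scorer):

```python
import re

def split_clauses(content_text: str) -> list[str]:
    # Split the long content text into short clause texts on punctuation marks.
    return [c.strip() for c in re.split(r"[，。！？；,.!?;]", content_text) if c.strip()]

def content_smoothness(content_text: str, score_clause) -> float:
    # score_clause: hypothetical stand-in for the trained sentence smoothness
    # detection model; it maps one clause text to a first smoothness score.
    clauses = split_clauses(content_text)
    scores = [score_clause(c) for c in clauses]  # [S_1, ..., S_n]
    return sum(scores) / len(scores)             # S = (S_1 + ... + S_n) / n
```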
Step 303: based on the sentence smoothness, when a descriptive segment for describing a video picture exists in the target video, acquiring a plurality of clause texts corresponding to the content text; the descriptive segment includes a sub-segment whose content subject is independent of the content subject of the target video.
In some embodiments, the information flow platform may determine that a descriptive segment is present in the target video for describing a picture of the video by:
obtaining a sentence smoothness reference score; acquiring the ratio of the second sentence smoothness score to the sentence smoothness reference score; and when the ratio is larger than the ratio threshold value, determining that the descriptive section for describing the video picture exists in the target video.
Here, the sentence smoothness reference score is the average of the sentence smoothness scores obtained by inputting a preset number of sample texts into the trained smoothness detection model. Assuming the sentence smoothness reference score is S_0, whether the target video contains a descriptive segment is determined by comparing the sentence smoothness score S of the content text with the reference score S_0.
For example, based on empirical knowledge, when the difference between S and S_0 is greater than 20%, i.e., S/S_0 < 0.8, it is considered that no descriptive segment for describing the video picture exists in the target video; when the difference between S and S_0 is less than or equal to 20%, i.e., S/S_0 >= 0.8, it is considered that a descriptive segment for describing the video picture exists in the target video.
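A minimal sketch of this decision rule, with the 0.8 ratio threshold taken from the empirical example above (the function name is illustrative, not from the patent):

```python
def has_descriptive_segment(s: float, s0: float, ratio_threshold: float = 0.8) -> bool:
    # s: sentence smoothness score S of the content text.
    # s0: sentence smoothness reference score S_0 derived from sample texts.
    # S/S_0 >= 0.8 (difference <= 20%): a descriptive segment is taken to exist.
    return s / s0 >= ratio_threshold
```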
In practical applications, when a descriptive segment for describing the video picture exists in the target video, the descriptive segment may further include a sub-segment whose content subject is independent of the content subject of the target video. The sub-segment refers to a segment with relatively little correlation with the content subject of the target video. For example, if the target video contains a descriptive segment introducing Beijing culture, and before introducing Beijing culture there is a passage about Beijing traffic or the Beijing environment that is unrelated to Beijing culture, then that passage may be considered the sub-segment.
Step 304: and respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values.
In some embodiments, the information flow platform may obtain the corresponding plurality of similarity values by:
performing vector conversion on the title text to obtain a corresponding title vector; respectively performing vector conversion on each clause text to obtain corresponding text vectors; and respectively performing similarity matching between each text vector and the title vector to obtain corresponding similarity values.
In practical applications, to understand complex texts, the texts need to be encoded into a language that a computer can read and understand. During encoding, it is desirable that the similarity between words be preserved in the representation; vector representations of words are the basis of machine learning and deep learning. Therefore, in order to obtain a semantic representation containing rich semantic information, the title text and each clause text are respectively input into a general semantic representation model such as a BERT (Bidirectional Encoder Representations from Transformers) model.
Referring to fig. 4, fig. 4 is a schematic flow chart of obtaining the semantic representation of a text according to an embodiment of the present invention. As shown in fig. 4, a one-dimensional vector of each word in the text is used as the input to the BERT model, and after processing by the BERT model, a vector representation fused with full-text semantic information is obtained for each input word. Accordingly, the title text is input into the BERT model to obtain a corresponding title vector; each clause text is respectively input into the BERT model to obtain corresponding text vectors; and then similarity matching is performed between the title vector and the text vector corresponding to each clause text to obtain corresponding similarity values.
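A sketch of this matching step, assuming a hypothetical encode callable that maps a text to a fixed-size sentence vector (for example, a pooled BERT representation); cosine similarity is used here because that is the measure used in the example below:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def title_clause_similarities(title_text: str, clause_texts: list[str], encode) -> list[float]:
    # encode: hypothetical stand-in for a BERT-style sentence encoder that
    # returns a 1-D numpy vector fused with full-text semantic information.
    title_vec = encode(title_text)
    return [cosine_similarity(encode(c), title_vec) for c in clause_texts]
```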
Step 305: and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the similarity value.
In some embodiments, the information flow platform may determine the relative relationship of the duration of the sub-segment in the descriptive segment to the duration of the target video by:
sequencing the similarity values based on the sequence of each clause text in the content text to obtain a first sequence containing a first number of similarity values and a second sequence containing a second number of similarity values; and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the first sequence and the second sequence.
In practical implementation, first, the similarity values between the title vector of the title text and the text vectors of the clause texts are arranged according to the order of the clause texts in the content text to obtain a corresponding similarity value sequence. Next, a first sequence containing a first number of similarity values and a second sequence containing a second number of similarity values may be obtained from it sequentially; the similarity value sequence may also be segmented according to an empirical value to obtain the first sequence containing the first number of similarity values and the second sequence containing the second number of similarity values. Finally, the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video is determined based on the first sequence and the second sequence.
For example, suppose the content text of the target video has 10 clause texts. The cosine similarity between the text vector of each clause text and the title vector of the title text is calculated, yielding the similarity value sequence formed by the similarity values between each clause text and the title text: [score_1, score_2, ..., score_10], where score_1 is the cosine similarity between the first clause text and the title text, score_2 is the cosine similarity between the second clause text and the title text, and so on. Empirically, the first three tenths of the similarity value sequence can be taken as the first sequence, [score_1, score_2, score_3], and the last seven tenths as the second sequence, [score_4, score_5, ..., score_10]. It should be noted that, in addition to the above, the similarity value sequence may be divided into two sequences in other feasible manners.
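A sketch of the empirical split, with the three-tenths head fraction taken from the example above (other feasible splits are possible, as noted):

```python
def split_similarity_sequence(scores: list[float], head_fraction: float = 0.3):
    # The first ~3/10 of the similarity values form the first sequence,
    # the remaining ~7/10 form the second sequence.
    k = max(1, round(len(scores) * head_fraction))
    return scores[:k], scores[k:]
```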
In some embodiments, the information flow platform may determine the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the first sequence and the second sequence by:
extracting a maximum similarity value from the first sequence as a first similarity value, and extracting a maximum similarity value from the second sequence as a second similarity value; comparing the first similarity value with the second similarity value to obtain a comparison result; and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the comparison result.
Here, the above first sequence [score_1, score_2, score_3] and second sequence [score_4, score_5, ..., score_10] are again taken as an example. The largest similarity value is extracted from the first sequence as the first similarity value, top = max([score_1, score_2, score_3]), and the largest similarity value is extracted from the second sequence as the second similarity value, end = max([score_4, score_5, ..., score_10]). The ratio a = top/end of the first similarity value top to the second similarity value end is then obtained, where a characterizes the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video: the larger a is, the shorter the duration of the sub-segment in the descriptive segment is relative to the duration of the target video. Since the descriptive segment is a segment for describing the video picture of the target video and the sub-segment is a segment having no correlation with the content subject of the target video, a larger a also means a shorter descriptive sub-segment having no correlation with the content subject of the target video.
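A sketch of the max-based comparison (illustrative names; the interpretation of a follows the paragraph above):

```python
def relation_by_max(first_sequence: list[float], second_sequence: list[float]) -> float:
    top = max(first_sequence)   # first similarity value
    end = max(second_sequence)  # second similarity value
    # a = top / end: the larger a is, the shorter the sub-segment (padding)
    # is relative to the duration of the target video.
    return top / end
```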
In some embodiments, the information flow platform may further determine a relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the first sequence and the second sequence by:
performing weighted averaging on the first number of similarity values to obtain a corresponding third similarity value, and performing weighted averaging on the second number of similarity values to obtain a corresponding fourth similarity value; comparing the third similarity value with the fourth similarity value to obtain a comparison result; and determining the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the comparison result.
Here, the above first sequence [score_1, score_2, score_3] and second sequence [score_4, score_5, ..., score_10] are again taken as an example. The first number of similarity values in the first sequence are weighted and averaged to obtain the corresponding third similarity value, sim_1 = (score_1 + score_2 + score_3) / 3; the second number of similarity values in the second sequence are weighted and averaged to obtain the corresponding fourth similarity value, sim_2 = (score_4 + ... + score_10) / 7. The ratio b = sim_1 / sim_2 of the third similarity value to the fourth similarity value is then obtained, where b characterizes the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video: the larger b is, the shorter the duration of the sub-segment in the descriptive segment is relative to the duration of the target video.
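A sketch of the average-based comparison; equal weights are assumed here, since the text above only specifies weighted averaging without giving the weights:

```python
def relation_by_mean(first_sequence: list[float], second_sequence: list[float]) -> float:
    sim1 = sum(first_sequence) / len(first_sequence)    # third similarity value
    sim2 = sum(second_sequence) / len(second_sequence)  # fourth similarity value
    # b = sim1 / sim2: the larger b is, the shorter the sub-segment.
    return sim1 / sim2
```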
In some embodiments, the information flow platform may further determine a relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the first sequence and the second sequence by:
sorting the similarity values based on the order of the clause texts in the content text to obtain a corresponding similarity value sequence; sequentially comparing the similarity values in the similarity value sequence with a similarity threshold, and determining the sequence number, in the similarity value sequence, of the first similarity value exceeding the similarity threshold; and determining the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video based on the sequence number and the similarity value sequence.
Here, the similarity value sequence [score_1, score_2, ..., score_10], formed by the similarity values between each of the 10 clause texts of the content text of the above target video and the title text, is again taken as an example. Each similarity value score_i in the sequence is compared in turn with a similarity threshold. If a similarity value is greater than the similarity threshold, the corresponding clause text is considered to be text describing the video picture of the target video, i.e., text related to the subject content; if a similarity value is smaller than the similarity threshold, the corresponding clause text is considered not to be text describing the video picture of the target video, i.e., text irrelevant to the subject content.
More specifically, assume that the similarity value sequence is [0.12, 0.2, 0.3, 0.4, 0.4, 0.8, 0.9, 0.8, 0.4, 0.4] and the similarity threshold is 0.7. Comparing the similarity values in the sequence with the similarity threshold in turn, the sequence number of the first similarity value exceeding the similarity threshold is 6; that is, the similarity value between the 6th clause text and the title text is the first to exceed the threshold. The 6th clause text can therefore be regarded as text related to the subject content of the target video, while the first five clause texts are text irrelevant to the subject content, i.e., they constitute the descriptive sub-segment. The relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video can thus be determined as 5/10, i.e., the duration of the sub-segment accounts for half of the duration of the target video. In general, the larger the sequence number of the first similarity value exceeding the similarity threshold is relative to the length of the similarity value sequence, the longer the duration of the sub-segment is relative to the duration of the target video.
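A sketch of the threshold-based variant; it returns the fraction of the clause sequence treated as padding, reproducing the 5/10 result of the example above:

```python
def padding_fraction(scores: list[float], similarity_threshold: float = 0.7) -> float:
    # Find the first similarity value exceeding the threshold; all clauses
    # before it are treated as the descriptive sub-segment (padding).
    for i, s in enumerate(scores):
        if s > similarity_threshold:
            return i / len(scores)
    return 1.0  # no clause is related to the subject content

# padding_fraction([0.12, 0.2, 0.3, 0.4, 0.4, 0.8, 0.9, 0.8, 0.4, 0.4]) == 0.5
```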
In some embodiments, the information flow platform may further obtain a ratio of a duration of the sub-segment in the descriptive segment to a duration of the target video; and when the ratio does not exceed the proportion threshold, adding the target video into a video library to be recommended.
In practical applications, whether the duration of the sub-segment in the descriptive segment of the target video is too long, i.e., whether the target video contains an overly long description irrelevant to its subject content, can be determined according to the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video. When it is determined that the target video does not contain an overly long description irrelevant to its subject content, the target video is stored in the video library to be recommended, so as to be recommended to users; when it is determined that the target video contains an overly long description irrelevant to its subject content, the target video is set as not recommended.
For example, if a = top/end is less than or equal to 0.8, it is determined that the target video contains an overly long description irrelevant to its subject content, and the target video is set as not recommended; if a = top/end is greater than 0.8, it is determined that the target video does not contain an overly long description irrelevant to its subject content, and the target video is stored in the video library to be recommended, to be recommended to the user.
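Putting the gate together as a sketch (reusing the hypothetical relation_by_max from the sketch above; the 0.8 cut-off is the empirical value just given):

```python
def should_recommend(first_sequence: list[float], second_sequence: list[float],
                     ratio_threshold: float = 0.8) -> bool:
    # a > 0.8: padding is judged not too long, so the target video may be
    # added to the video library to be recommended; otherwise it is withheld.
    a = relation_by_max(first_sequence, second_sequence)
    return a > ratio_threshold
```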
Referring to fig. 5, fig. 5 is a schematic diagram of a recommendation system for a target video provided by an embodiment of the present invention. As shown in fig. 5, after the video data of the target video is processed by the information flow platform according to the video data processing method provided by the embodiment of the present invention, the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video is obtained; based on this relative relationship, it is determined whether the target video contains an overly long description irrelevant to its subject content. When it is determined that the target video does not contain such an overly long description, the target video is considered to meet the recommendation condition and is stored in the video library to be recommended, to be pushed to information-stream products such as a browser or a news flash application.
By the above method, sentence smoothness detection is performed on the content text of the target video to determine whether a descriptive segment for describing a video picture exists in the target video; when the descriptive segment exists, clause processing is performed on the content text to obtain a plurality of clause texts corresponding to the content text, similarity matching is performed between each clause text and the title text of the target video, and the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video is determined. Proper matching between a long text and a short text is thereby realized, and whether the padding of the target video is too long is effectively identified. When it is identified that the padding of the target video is too long, i.e., the descriptive segment of the target video contains a long description irrelevant to the subject content of the target video, the target video is set as not recommended; when it is identified that the padding of the target video is not too long, the target video is recommended to the user. In this way, the user can find the points of interest when watching the received target video, and the user experience is improved.
Next, the description of the video data processing method according to an embodiment of the present invention continues. The method may be implemented by a terminal or by an information flow platform, or implemented cooperatively by the information flow platform and the terminal, where an application client is disposed on the terminal; implementation by the information flow platform is taken as an example. Fig. 6 is an optional schematic flow diagram of the video data processing method according to the embodiment of the present invention. Referring to fig. 6, the video data processing method according to the embodiment of the present invention includes:
step 601: the first client responds to the uploading operation of the user for the target video and acquires the target video.
Here, the first client is located on the target video distribution side, and in practical application, the user opens the first client on the user terminal, records and distributes the target video, or distributes the recorded target video.
Step 602: the first client sends the target video data to the information flow platform.
Step 603: and the information flow platform acquires the title text and the content text of the target video.
Here, the information flow platform relies on the target video data in the information stream to construct an algorithm model for recommending information-stream content. The information flow platform uses specific field information in the target video data, such as the title text and the content text, to construct an algorithm model to judge whether the target video is suitable for being recommended to a user, where the content text corresponds to the "audio-to-text" transcript of the target video and is obtained by performing text conversion on the audio data of the target video.
Step 604: the information flow platform performs sentence smoothness detection on the content text to obtain the sentence smoothness score of the content text.
The method includes the steps of dividing a relatively long content text into a plurality of short clause texts based on punctuation marks in the content text, inputting the obtained clause texts into a trained sentence smoothness detection model to obtain a plurality of corresponding sentence smoothness scores, and taking an average value of the sentence smoothness scores corresponding to the clause texts as the smoothness score of the content text.
Step 605: and the information flow platform acquires the statement smoothness reference score.
Here, the sentence smoothness reference score is an average value of sentence smoothness scores obtained by inputting a preset number of sample texts into a trained smoothness detection model, where the sample texts are all standard texts containing descriptive segments.
Step 606: and the information flow platform acquires the ratio of the second sentence smoothness score to the sentence smoothness reference score.
Step 607: when the ratio is larger than the ratio threshold value, the information flow platform determines that the descriptive segment for describing the video picture exists in the target video.
For example, assume that the sentence smoothness reference score is S0 and the sentence smoothness score of the corresponding content text is S; the relation between S and S0 is used to determine whether the target video contains a descriptive segment. Based on empirical knowledge, when the difference between S and S0 is greater than 20%, i.e., S/S0 < 0.8, it is considered that no descriptive segment for describing the video pictures exists in the target video; when the difference between S and S0 is less than or equal to 20%, i.e., S/S0 >= 0.8, it is considered that a descriptive segment for describing the video pictures exists in the target video.
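In actual implementation, steps 604 to 607 may, for example, be sketched as follows, assuming the per-clause smoothness scores have already been produced by the trained detection model (the function and variable names are illustrative, not taken from the patent):
def has_descriptive_segment(clause_scores, reference_score, ratio_threshold=0.8):
    # Smoothness score of the whole content text: average of the clause scores (step 604).
    content_score = sum(clause_scores) / len(clause_scores)
    # A ratio at or above the empirical 0.8 threshold means the text is about
    # as smooth as standard commentary-bearing texts, so a descriptive
    # segment is considered to exist (steps 606-607).
    return content_score / reference_score >= ratio_threshold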
Step 608: and the information flow platform acquires a plurality of clause texts corresponding to the content texts.
Here, the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video; the sub-segment refers to the portion of the descriptive segment that is weakly related or unrelated to the content subject of the target video.
Step 609: and the information flow platform respectively matches the similarity of each clause text with the title text to obtain corresponding similarity values.
Here, the title text and each clause text are respectively input into a general semantic representation model, such as a BERT model, to obtain a corresponding title vector and a text vector for each clause text; similarity matching is then performed between the title vector and the text vector of each clause text to obtain the corresponding similarity values.
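As a minimal sketch of this matching, assuming the title vector and the clause-text vectors have already been produced by the semantic representation model (names are illustrative):
import numpy as np

def similarity_values(title_vec, text_vecs):
    # Cosine similarity between the title vector and each clause-text vector.
    return [float(np.dot(title_vec, v)
                  / (np.linalg.norm(title_vec) * np.linalg.norm(v)))
            for v in text_vecs]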
Step 610: And the information flow platform orders the similarity values according to the order of the clause texts in the content text, dividing them into a first sequence containing a first number of similarity values and a second sequence containing a second number of similarity values.
Here, the sum of the first number and the second number is the total number of similarity values, and the proportional relationship between the first number and the second number may be set according to an empirical value.
Step 611: the information flow platform extracts a maximum similarity value from the first sequence as a first similarity value and extracts the maximum similarity value from the second sequence as a second similarity value.
Step 612: and the information flow platform compares the first similarity value with the second similarity value to obtain a comparison result.
Here, the first similarity value may be divided by the second similarity value to obtain a ratio of the first similarity value to the second similarity value.
Step 613: and the information flow platform determines the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the comparison result.
Here, the larger the ratio, the shorter the duration of the sub-segment in the descriptive segment relative to the duration of the target video, i.e., the shorter the descriptive sub-segment in the target video that has no correlation with the content subject of the target video; conversely, the smaller the ratio, the longer the duration of the sub-segment relative to the duration of the target video, i.e., the longer that descriptive sub-segment.
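A minimal sketch of steps 610 to 613, assuming the content text yields at least two clause texts (the three-tenths split is an empirical value; names are illustrative):
def duration_ratio(similarities, head_fraction=0.3):
    # The similarity values are ordered by the position of the corresponding
    # clause texts in the content text (step 610).
    split = max(1, int(len(similarities) * head_fraction))
    first_seq, second_seq = similarities[:split], similarities[split:]
    top = max(first_seq)   # first similarity value (step 611)
    end = max(second_seq)  # second similarity value (step 611)
    # The larger the ratio, the shorter the padding relative to the
    # target video (steps 612-613).
    return top / end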
Step 614: And when it is determined that the ratio of the duration of the sub-segment in the descriptive segment to the duration of the target video does not exceed the proportional threshold, the information flow platform stores the target video into the to-be-recommended video library, so as to recommend the target video to a second client.
Here, the second client is located on the receiving side of the target video. When the ratio of the duration of the sub-segment in the descriptive segment to the duration of the target video is larger, the descriptive sub-segment having no correlation with the content subject of the target video is longer, i.e., the padding is longer; when the ratio is smaller, that sub-segment is shorter, i.e., the padding is shorter. When the ratio does not exceed the proportional threshold, the recommendation condition is considered to be satisfied, and the target video satisfying the condition is stored into the to-be-recommended video library to be pushed to information flow products such as a browser or a flash newspaper for users to watch.
Step 615: the second client plays the target video.
In the following, an exemplary application of the embodiments of the present invention in a practical application scenario will be described.
Short video is one of the main product lines of the current information flow and has become an important way for users to obtain information and entertainment. Whether high-quality short videos can be provided to users has therefore become one of the core pain points of information flow products such as 'QQ view point', 'view point video' and 'QQ browser'.
Referring to fig. 7, fig. 7 is a schematic view of a video watching process provided by an embodiment of the present invention, where the video watching process of an information flow user includes:
step 701: the information flow user obtains the title of the target video.
Step 702: and determining whether the target video has the interest point according to the title of the target video.
Here, when the user determines, according to the title of the target video, that a point of interest of their own exists, step 703 is performed; when the user determines according to the title that no such point of interest exists, step 705 is performed.
Step 703: and entering a target video, and searching interest points from the target video.
Step 704: And judging whether the padding in the target video is too long.
Here, when the padding in the target video is too long, step 705 is performed; when the padding is not too long, step 706 is performed.
Step 705: The user is bored and does not watch the target video.
Step 706: The user accepts the target video and continues to watch it.
As can be seen from fig. 7, if the padding in the target video is too long, it is difficult for the user to quickly find a point of interest in the short video, which annoys the user. Owing to the non-strong correlation of the information flow product, its dependence on the algorithm is greater than that of other products; moreover, users differ in character, environment and so on, and thus tolerate different amounts of padding in a short video. Whether the recommendation side can formulate a better recommendation strategy and provide high-quality short videos to users has therefore become one of the core pain points of current information flow products. The related art cannot determine the relative relationship between the padding duration and the video duration, and thus cannot effectively identify whether the padding of a short video is too long, which results in a poor user experience.
Based on this, an embodiment of the present invention provides a video data processing method. Sentence smoothness detection is performed on the content text (i.e., the audio-to-text) of the target video to determine whether a descriptive segment (i.e., commentary) for describing the video pictures exists in the target video, where the descriptive segment includes a sub-segment (i.e., padding) whose content subject is independent of the content subject of the target video. When the descriptive segment exists, clause processing is performed on the content text to obtain a plurality of clause texts corresponding to the content text, and similarity matching between each clause text and the title text of the target video determines the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video, thereby realizing proper matching between the long text and the short text and effectively identifying whether the padding of the target video is too long.
Referring to fig. 8, fig. 8 is an optional flowchart of a video data processing method according to an embodiment of the present invention, and as shown in fig. 8, the video data processing method according to the embodiment of the present invention includes:
step 801: the information flow platform obtains target video data in the information flow product.
Step 802: The information flow platform obtains the usable title text and content text of the target video.
Step 803: and constructing a content matching model based on the acquired title and content text of the target video.
Step 804: And obtaining, based on the constructed content matching model, a recognition result of whether the padding of the target video is too long.
As shown in fig. 8, the overall flow of the video data processing method according to the embodiment of the present invention includes: identifying whether the target video contains a descriptive segment (i.e., commentary) for describing the video pictures; segmenting the content text (i.e., the audio-to-text) of the target video into clause texts; matching the clause texts against the title text of the target video; and a decision mechanism for over-long padding in the target video. These are introduced one by one below:
1. Identifying whether a target video contains commentary using a language model
In practical applications, the target video includes video pictures and a descriptive segment (i.e., commentary) describing the video pictures. The content text of a target video containing the descriptive segment reads relatively smoothly and forms complete sentences, whereas the content text of a target video without the descriptive segment corresponds to the recognition result of background sound, reads unsmoothly, and does not form complete sentences. In actual implementation, the content text of the target video is identified as follows:
1) Constructing a training set for the sentence smoothness detection model (i.e., the language model) using historical article data exported from the content center;
2) Training a language model using kenlm;
3) Randomly selecting 2000 pieces of short video data, and from these selecting 500 pieces of short-video audio-to-text as base data, where the base data must be standard data containing commentary;
4) Computing the language model average score S0 of the base data using the trained kenlm language model;
5) Computing the language model score Si of the target video's audio-to-text; if the difference between Si and S0 is greater than 20%, i.e., Si < 0.8 * S0, the target video is determined not to contain commentary.
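A minimal sketch of this scoring step, assuming the kenlm model trained in step 2) has been saved to an .arpa file (the file path and the averaging scheme are illustrative assumptions):
import kenlm

model = kenlm.Model("smoothness.arpa")  # hypothetical path to the trained model

def average_lm_score(sentences):
    # Average per-sentence language model score; kenlm returns log10
    # probabilities, so scores are negative and higher means smoother.
    return sum(model.score(s, bos=True, eos=True) for s in sentences) / len(sentences)

# S0 = average_lm_score(base_data_sentences)
# Si = average_lm_score(target_video_sentences)
# Per step 5), the target video is judged to contain no commentary
# when Si and S0 differ by more than 20%.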
In practical applications, when the target video contains commentary, the commentary may contain a descriptive sub-segment (i.e., padding) whose content subject is independent of the content subject of the short video; therefore, the next step is to detect whether the target video contains padding.
2. Clause segmentation of the content text of the target video
Here, the longer content text may be divided into a plurality of short clause texts based on punctuation marks in the content text (i.e., the audio-to-text), for example, by the re module in Python:
import re
# Split the content text at sentence-ending punctuation marks.
sent_segs = re.findall(r".*?[。!?]", content)
where content is the content text and sent_segs is the resulting list of clause texts.
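As a quick illustrative check of the snippet above (the sample text is hypothetical):
content = "今天去了公园。天气很好!接下来进入正题?"
sent_segs = re.findall(r".*?[。!?]", content)
# sent_segs == ["今天去了公园。", "天气很好!", "接下来进入正题?"]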
3. Vector model for each clause text of the content text and the title of the target video
1) Taking each clause text of the content text, and the title, as input to the BERT model, in the form:
[CLS] + audio-to-text clause + [SEP]
2) Using the BERT model to compute sentence vectors: the content texts and the title text are input, and vectors are output;
3) Taking the vector corresponding to [CLS] as the final output text vector.
Here, because the BERT model is trained with a word-prediction task, it must take the other words into account when predicting a word; and because [CLS] carries no obvious semantic information of its own, it fuses the semantic information of each word in the text relatively fairly. In practical application, the one-dimensional vector of each character/word in the text is used as the input of the BERT model, and after processing by the BERT model, a vector representation fused with full-text semantic information is obtained for each input character. Therefore, the title text is input into the BERT model to obtain the corresponding title vector, and each clause text is input into the BERT model to obtain the corresponding text vectors.
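A minimal sketch of this vectorization, using the Hugging Face transformers library as one possible implementation (the patent does not name a library or checkpoint; "bert-base-chinese" is an illustrative assumption):
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def cls_vector(text):
    # The tokenizer adds [CLS] and [SEP] automatically.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = bert(**inputs)
    # Position 0 of the last hidden layer corresponds to [CLS].
    return outputs.last_hidden_state[0, 0]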
4. Decision mechanism for over-long padding in the target video
1) Respectively calculating the cosine similarity between the title vector and the text vector of each clause text to obtain the corresponding cosine similarity values;
2) Ordering the similarity values according to the order of the clause texts in the content text to obtain a similarity value sequence;
3) Comparing the highest similarity value among the first three tenths of the sequence with the highest similarity value among the remaining seven tenths to obtain a ratio, and identifying the target video as having over-long padding when the ratio is less than or equal to 0.8.
Here, in practical applications, the similarity value sequence is empirically divided into a first sequence containing a first number of similarity values and a second sequence containing a second number of similarity values; a maximum similarity value is extracted from the first sequence as a first similarity value and from the second sequence as a second similarity value, and the two are compared to obtain a ratio; based on the ratio, the relative relationship between the duration of the sub-segment (i.e., the padding) in the descriptive segment and the duration of the target video is determined.
For example, suppose the content text of the target video has 10 clause texts in total. Computing the cosine similarity between the text vector of each clause text and the title vector yields the similarity value sequence [score1, score2, ..., score10], where score1 is the cosine similarity between the first clause text and the title text, score2 is the cosine similarity between the second clause text and the title text, and so on. Empirically, the first three tenths of the sequence can be taken as the first sequence [score1, score2, score3], and the remaining seven tenths as the second sequence [score4, score5, ..., score10].
The largest similarity value is extracted from the first sequence as the first similarity value, top = max([score1, score2, score3]), and the largest similarity value is extracted from the second sequence as the second similarity value, end = max([score4, score5, ..., score10]). The ratio a = top/end represents the relative relationship between the duration of the sub-segment in the descriptive segment and the duration of the target video. The larger a is, the shorter the duration of the sub-segment relative to the duration of the target video: since the descriptive segment describes the video pictures of the target video while the sub-segment has no correlation with the content subject of the target video, a larger a means a shorter irrelevant descriptive sub-segment, i.e., shorter padding; conversely, the smaller a is, the longer the padding in the target video.
If a is less than or equal to 0.8, it is determined that the target video contains an over-long description irrelevant to its subject content, i.e., the padding of the target video is identified as too long, and the target video is set as not recommended; if a is greater than 0.8, it is determined that no such over-long description exists, i.e., the padding is identified as not too long, and the target video is stored into the to-be-recommended video library to be pushed to information flow products such as a browser or a flash newspaper.
By the video data processing method provided by the embodiment of the present invention, videos with over-long padding are identified among short videos and set as not recommended in information flow products (viewpoint video, browser, flash newspaper), which can effectively improve user experience.
Continuing with the exemplary structure of the video data processing device 255 provided by the embodiment of the present invention implemented as software modules, in some embodiments, as shown in fig. 2 and 9, the software modules stored in the video data processing device 255 of the memory 250 may include: a first obtaining module 2551, a detecting module 2552, a second obtaining module 2553, a matching module 2554 and a determining module 2555.
A first obtaining module 2551, configured to obtain a title text and a content text of a target video;
a detecting module 2552, configured to perform sentence smoothness detection on the content text, to obtain a sentence smoothness corresponding to the content text;
a second obtaining module 2553, configured to, when it is determined based on the sentence smoothness that a descriptive segment for describing a video picture exists in the target video, obtain a plurality of clause texts corresponding to the content text; the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video;
a matching module 2554, configured to perform similarity matching between each clause text and the title text, respectively, to obtain a plurality of corresponding similarity values;
a determining module 2555, configured to determine, based on the similarity value, a relative relationship between a duration of a sub-segment in the descriptive segment and a duration of the target video.
In some embodiments, the detection module is further configured to perform clause processing on the content text to obtain a plurality of corresponding clause texts;
inputting each sentence text into a sentence smoothness detection model respectively to obtain a first sentence smoothness score corresponding to the sentence text;
and weighting the first sentence smoothness scores corresponding to the sentence dividing texts to obtain second sentence smoothness scores corresponding to the content texts, wherein the second sentence smoothness scores are used for representing the sentence smoothness of the content texts.
In some embodiments, the second obtaining module is further configured to obtain a sentence smoothness reference score;
acquiring the ratio of the second sentence smoothness score to the sentence smoothness reference score;
when the ratio is larger than a ratio threshold value, determining that a descriptive section for describing a video picture exists in the target video.
In some embodiments, the matching module is further configured to perform vector conversion on the title text to obtain a corresponding title vector;
respectively carrying out vector conversion on each sentence text to obtain corresponding text vectors;
and respectively carrying out similarity matching on each text vector and the title vector to obtain corresponding similarity values.
In some embodiments, the determining module is further configured to rank the similarity values based on an order of each of the clause texts in the content text, so as to obtain a first sequence including a first number of similarity values and a second sequence including a second number of similarity values;
and determining the relative relation between the duration of the sub-segments in the descriptive segments and the duration of the target video based on the first sequence and the second sequence.
In some embodiments, the determining module is further configured to extract a maximum similarity value from the first sequence as a first similarity value and extract a maximum similarity value from the second sequence as a second similarity value;
comparing the first similarity value with the second similarity value to obtain a comparison result;
and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the comparison result.
In some embodiments, the determining module is further configured to perform weighted averaging on the first number of similarity values to obtain a corresponding third similarity value, and perform weighted averaging on the second number of similarity values to obtain a corresponding fourth similarity value;
comparing the third similarity value with the fourth similarity value to obtain a comparison result;
and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the comparison result.
In some embodiments, the determining module is further configured to sort the similarity values based on an order of each of the clause texts in the content text, so as to obtain a corresponding similarity value sequence;
sequentially comparing the similarity values in the similarity sequence with a similarity threshold value, and determining the sequence number of the first similarity value exceeding the similarity threshold value in the similarity value sequence;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the sequence number and the similarity value sequence.
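A minimal sketch of this sequence-number variant, assuming the relative relationship is measured by the position of the first clause text whose similarity exceeds the threshold (an illustrative assumption; the patent does not spell out the formula, and all names are hypothetical):
def padding_by_position(similarities, sim_threshold):
    # Similarity values are ordered by clause position in the content text.
    for index, value in enumerate(similarities):
        if value > sim_threshold:
            # Assumed reading: the later the first title-relevant clause
            # appears, the longer the padding relative to the target video.
            return index / len(similarities)
    return 1.0  # no clause is strongly related to the title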
In some embodiments, the apparatus further includes a recommendation module, configured to obtain a ratio of a duration of a sub-segment in the descriptive segment to a duration of the target video;
and when the ratio does not exceed a proportion threshold, adding the target video into a video library to be recommended.
An embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the video data processing method provided by the embodiment of the invention when executing the executable instructions stored in the memory.
An embodiment of the present invention provides a storage medium, which stores executable instructions for causing a processor to execute the storage medium to implement the video data processing method provided in the embodiment of the present invention.
In some embodiments, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.
Claims (12)
1. A method for processing video data, the method comprising:
acquiring a title text and a content text of a target video;
detecting the sentence smoothness of the content text to obtain the sentence smoothness corresponding to the content text;
based on the sentence smoothness, when a descriptive segment for describing a video picture exists in the target video, acquiring a plurality of clause texts corresponding to the content text; the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video;
respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values;
and determining the relative relation between the duration of the sub-segments in the descriptive segments and the duration of the target video based on the similarity value.
2. The method of claim 1, wherein the detecting the sentence smoothness of the content text to obtain the sentence smoothness corresponding to the content text comprises:
sentence division processing is carried out on the content texts to obtain a plurality of corresponding sentence division texts;
inputting each sentence text into a sentence smoothness detection model respectively to obtain a first sentence smoothness score corresponding to the sentence text;
and weighting the first sentence smoothness scores corresponding to the sentence dividing texts to obtain second sentence smoothness scores corresponding to the content texts, wherein the second sentence smoothness scores are used for representing the sentence smoothness of the content texts.
3. The method of claim 2, wherein the determining, based on the sentence smoothness, that a descriptive segment for describing a video picture exists in the target video comprises:
obtaining a sentence smoothness reference score;
acquiring the ratio of the second sentence smoothness score to the sentence smoothness reference score;
when the ratio is larger than a ratio threshold value, determining that a descriptive section for describing a video picture exists in the target video.
4. The method of claim 1, wherein said similarity matching each of said clause texts with said title text, respectively, to obtain a corresponding plurality of similarity values, comprises:
performing vector conversion on the title text to obtain a corresponding title vector;
respectively carrying out vector conversion on each sentence text to obtain corresponding text vectors;
and respectively carrying out similarity matching on each text vector and the title vector to obtain corresponding similarity values.
5. The method of claim 1, wherein determining the relative relationship of the duration of the sub-segment in the descriptive segment to the duration of the target video based on the similarity value comprises:
sequencing the similarity values based on the sequence of each sentence text in the content text to obtain a first sequence containing a first number of similarity values and a second sequence containing a second number of similarity values;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the first sequence and the second sequence.
6. The method of claim 5, wherein determining the relative relationship of the duration of the sub-segment in the descriptive segment to the duration of the target video based on the first sequence and the second sequence comprises:
extracting a maximum similarity value from the first sequence as a first similarity value and extracting a maximum similarity value from the second sequence as a second similarity value;
comparing the first similarity value with the second similarity value to obtain a comparison result;
and determining the relative relation between the time length of the sub-segments in the descriptive segments and the time length of the target video based on the comparison result.
7. The method of claim 5, wherein determining the relative relationship of the duration of the sub-segment in the descriptive segment to the duration of the target video based on the first sequence and the second sequence comprises:
carrying out weighted averaging on the similarity values of the first quantity to obtain a corresponding third similarity value, and carrying out weighted averaging on the similarity values of the second quantity to obtain a corresponding fourth similarity value;
comparing the third similarity value with the fourth similarity value to obtain a comparison result;
and determining the relative relation between the duration of the sub-segments in the descriptive segments and the duration of the target video based on the comparison result.
8. The method of claim 1, wherein said determining, based on said similarity values, a relative relationship of a duration of a sub-segment in said descriptive segment to a duration of said target video comprises:
sequencing the similarity values based on the sequence of each sentence text in the content text to obtain a corresponding similarity value sequence;
sequentially comparing the similarity values in the similarity value sequence with a similarity threshold value, and determining the sequence number of the first similarity value exceeding the similarity threshold value in the similarity value sequence;
and determining the relative relation between the time length of the sub-segment in the descriptive segment and the time length of the target video based on the sequence number and the similarity value sequence.
9. The method of claim 1, wherein the method further comprises:
acquiring the ratio of the time length of the sub-segment in the descriptive segment to the time length of the target video;
and when the ratio does not exceed a proportion threshold, adding the target video into a video library to be recommended.
10. An apparatus for processing video data, the apparatus comprising:
the first acquisition module is used for acquiring a title text and a content text of a target video;
the detection module is used for detecting the sentence smoothness of the content text to obtain the sentence smoothness corresponding to the content text;
a second obtaining module, configured to obtain, when it is determined based on the sentence smoothness that a descriptive segment for describing a video picture exists in the target video, a plurality of clause texts corresponding to the content text; the descriptive segment comprises a sub-segment whose content subject is independent of the content subject of the target video;
the matching module is used for respectively carrying out similarity matching on each clause text and the title text to obtain a plurality of corresponding similarity values;
and the determining module is used for determining the relative relation between the duration of the sub-segments in the descriptive segments and the duration of the target video based on the similarity value.
11. An electronic device for video processing, comprising a processor and a memory, the memory for storing executable instructions, the processor for retrieving the executable instructions in the memory and performing the method as claimed in any one of claims 1-9.
12. A storage medium comprising stored executable instructions, wherein the executable instructions when executed perform the method of any one of claims 1 to 9.