
CN111432206B - Video clarity processing method, device and electronic equipment based on artificial intelligence


Info

Publication number
CN111432206B
CN111432206B
Authority
CN
China
Prior art keywords
video
foreground
definition
clarity
image frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010334489.1A
Other languages
Chinese (zh)
Other versions
CN111432206A
Inventor
杨天舒
黄嘉文
沈招益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Beijing Co Ltd
Original Assignee
Tencent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Beijing Co Ltd
Priority to CN202010334489.1A
Publication of CN111432206A
Application granted
Publication of CN111432206B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract


The present invention provides a method, device and electronic device for processing video clarity based on artificial intelligence; the method comprises: extracting multiple image frames to be identified from a video; performing clarity identification on the foreground in the multiple image frames to obtain the foreground clarity of each of the image frames; determining the first video clarity of the video based on the foreground clarity of each of the image frames, and using it as the clarity identification result of the video; when the first video clarity of the video does not meet the clarity condition, performing clarity identification on the background of the multiple image frames to obtain the background clarity of each of the image frames; determining the second video clarity of the video based on the background clarity of each of the image frames, and using it as the updated clarity identification result of the video. Through the present invention, the clarity of a video can be efficiently and accurately identified.

Description

Video definition processing method and device based on artificial intelligence and electronic equipment
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a video definition processing method and apparatus based on artificial intelligence, and an electronic device.
Background
Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
As an important part of artificial intelligence software technology, computer vision has developed rapidly in recent years, and image recognition is an important branch of computer vision. Image recognition can be used to recognize image frames and give a definition evaluation result for a video, but such methods mainly target static scenes; in dynamic scenes, the recognition result of video definition is not ideal.
Disclosure of Invention
The embodiment of the invention provides a video definition processing method and device based on artificial intelligence and electronic equipment, which can efficiently and accurately identify the definition of a video.
The technical scheme of the embodiment of the invention is realized as follows:
The embodiment of the invention provides a video definition processing method based on artificial intelligence, which comprises the following steps:
extracting a plurality of image frames to be identified from the video;
performing definition identification on the foreground in the plurality of image frames to obtain the foreground definition of each image frame;
determining a first video definition of the video based on the foreground definition of each image frame, and taking the first video definition as a definition identification result of the video;
When the first video definition of the video does not meet the definition condition, performing definition identification on the backgrounds of the plurality of image frames to obtain the background definition of each image frame;
And determining second video definition of the video based on the background definition of each image frame, and using the second video definition as an updated definition identification result of the video.
The embodiment of the invention provides a video definition processing device based on artificial intelligence, which comprises:
The extraction module is used for extracting a plurality of image frames to be identified from the video;
The first identification module is used for carrying out definition identification on the foreground in the plurality of image frames to obtain the foreground definition of each image frame;
the first determining module is used for determining first video definition of the video based on the foreground definition of each image frame and taking the first video definition as a definition identification result of the video;
The second identification module is used for carrying out definition identification on the background of the plurality of image frames when the definition of the first video of the video does not meet the definition condition, so as to obtain the background definition of each image frame;
and the second determining module is used for determining second video definition of the video based on the background definition of each image frame and serving as an updated definition identification result of the video.
In the above scheme, the extraction module is configured to:
The video is subjected to equidistant frame extraction to obtain a first image frame set;
Clustering the image frames in the first image frame set to obtain a plurality of similar image frame subsets, randomly extracting one image frame from each similar image frame subset, and combining them with the image frames in the first image frame set that are not clustered into any similar image frame subset to form a second image frame set;
And filtering out the image frames meeting the blurring condition from the second image frame set, and taking the rest multi-frame image frames in the second image frame set as the image frames to be identified.
In the above scheme, the first identification module is configured to:
And mapping the image characteristics of the image frame into confidence degrees corresponding to different foreground definition categories, and taking the foreground definition category corresponding to the maximum confidence degree as the foreground definition of the image frame.
In the above solution, the first determining module is configured to:
the foreground definition category includes: foreground clear, foreground general and foreground blurred;
Determining the number of the image frames to be identified, which are included in each foreground definition category, based on the foreground definition category to which each image frame belongs;
And determining the first video definition of the video according to the proportion of the number of the image frames included in each foreground definition category in the total number, wherein the total number is the number of the plurality of image frames to be identified.
In the above solution, the first determining module is configured to:
The foreground clear category corresponds to a first proportion threshold, the foreground general category corresponds to a second proportion threshold, and the foreground blurred category corresponds to a third proportion threshold, wherein the second proportion threshold, the first proportion threshold and the third proportion threshold are arranged in descending order;
when the proportion of the number of image frames included in the foreground clear category in the total number is greater than the first proportion threshold, determining that the first video definition of the video is clear;
when the proportion of the number of image frames included in the foreground general category in the total number is greater than the second proportion threshold and the proportion of the number of image frames included in the foreground blurred category in the total number is less than the third proportion threshold, determining that the first video definition of the video is general;
when the proportion of the number of image frames included in the foreground clear category in the total number is less than the first proportion threshold and the proportion of the number of image frames included in the foreground blurred category in the total number is zero, determining that the first video definition of the video is general;
and when the proportion of the number of image frames included in the foreground blurred category in the total number is greater than the third proportion threshold, determining that the first video definition of the video is blurred.
In the above scheme, the second identifying module is configured to:
the following processing is performed for each of the image frames:
mapping image features of the image frames to confidence levels of different background definition categories;
wherein the background definition category comprises: clear background and blurred background.
In the above solution, the second determining module is configured to:
Accumulating the confidence that each image frame belongs to the background blurred category and taking the average value to obtain a mean confidence;
When the mean confidence is greater than a confidence threshold, determining that the second video definition of the video is blurred, and when the mean confidence is less than or equal to the confidence threshold, determining that the second video definition of the video is general.
In the above solution, the second determining module is further configured to:
Acquiring category information of the video;
Searching for the confidence threshold corresponding to the category information of the video in the correspondence between a plurality of video categories and confidence thresholds.
In the above scheme, the video definition processing device based on artificial intelligence further comprises: a recommendation module for: and sending the definition identification result of the video to a recommendation system so that the recommendation system executes corresponding recommendation operation according to the definition of the video.
The embodiment of the invention also provides electronic equipment, which comprises:
A memory for storing executable instructions;
and the processor is used for realizing the video definition processing method based on artificial intelligence when executing the executable instructions stored in the memory.
The embodiment of the invention provides a computer readable storage medium, wherein executable instructions are stored in the computer readable storage medium and are used for realizing the video definition processing method based on artificial intelligence.
The embodiment of the invention has the following beneficial effects:
A plurality of image frames are extracted from a video; definition recognition is performed on the foreground of the image frames to obtain a recognition result of the video definition; and, depending on a judgment of that recognition result, definition recognition is performed on the background of the corresponding image frames to obtain an updated video definition recognition result. The method is applicable to dynamic videos and achieves efficient and accurate recognition, thereby improving the efficiency and precision of video definition recognition.
Drawings
FIG. 1 is a schematic diagram of an architecture of an artificial intelligence based video sharpness processing system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an architecture of a terminal device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an architecture of an artificial intelligence based video sharpness processing apparatus according to an embodiment of the present invention;
FIG. 4A is a flow chart of an artificial intelligence based video sharpness processing method according to an embodiment of the present invention;
FIG. 4B is a schematic flow chart of an artificial intelligence-based video sharpness processing method according to an embodiment of the present invention;
FIG. 4C is a flowchart of an artificial intelligence based video sharpness processing method according to an embodiment of the present invention;
FIG. 5 is a schematic view of a foreground definition model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a recommendation system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of two image frames extracted in a short video according to an embodiment of the present invention;
Fig. 8 is a schematic flow chart of an artificial intelligence-based video definition processing method according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by those skilled in the art without making any inventive effort fall within the scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a specific ordering of the objects. It should be understood that "first", "second" and "third" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
In the embodiments of this application, the collection and processing of relevant data shall strictly comply with the requirements of relevant laws and regulations: the informed consent or separate consent of the personal information subject shall be obtained, and subsequent use and processing of the data shall be carried out within the scope authorized by laws, regulations and the personal information subject.
Before describing the embodiments of the present invention in further detail, the terms involved in the embodiments of the present invention are described; the following explanations apply to these terms as used herein.
1) Video definition: an important indicator for measuring video quality. Definition refers to how clearly each detail and its boundary appear on the image, so image quality can be compared by inspecting how clear the played-back image is. In this application, the definition of a video identified by artificial intelligence is called a definition recognition result.
2) Convolutional Neural Network (CNN): a class of Feedforward Neural Networks (FNN) that include convolution calculations and have a deep structure, and one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability that enables shift-invariant classification of input images through their hierarchical structure.
3) Foreground: the person or object in the video located in front of, or near the front of, the subject. The definition of the foreground is called the foreground definition.
4) Background: the scenery in the video located behind the subject and far away from the camera, which enriches the spatial content of the picture and reflects characteristics such as place, environment, season and time. The definition of the background is called the background definition.
In the related art, methods for determining the definition of a video include determining the definition according to the definition of target frames, determining the definition based on a 3D convolutional neural network (a deep learning method), and determining the definition based on a 2D convolutional neural network (a deep learning method) plus a time-series model such as Long Short-Term Memory (LSTM). These methods are described below.
(1) Judging the definition of the video according to the definition of target frames: frames are extracted from the video at fixed time points, or some transition frames are filtered out by traditional operators, and the target frames of the video are selected; after the target frames are obtained, gradient features of the target frames are extracted with a traditional operator (Canny operator, Sobel operator, Laplacian operator, etc.), a weighted value of the features is calculated, and the weighted value is compared with a preset threshold to obtain the definition of the video.
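As a reference point, the following is a minimal sketch of this related-art approach, assuming OpenCV is available; the 0.7/0.3 operator weights and the threshold value are illustrative assumptions, not values from the patent.

```python
# A sketch of related-art method (1): gradient-based sharpness scoring of a
# target frame, compared against a preset threshold.
import cv2
import numpy as np

def frame_sharpness_score(frame_bgr: np.ndarray) -> float:
    """Weighted combination of Laplacian and Sobel gradient features."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    laplacian_var = cv2.Laplacian(gray, cv2.CV_64F).var()
    sobel_mag = np.hypot(cv2.Sobel(gray, cv2.CV_64F, 1, 0),
                         cv2.Sobel(gray, cv2.CV_64F, 0, 1)).mean()
    # Weights are arbitrary here, chosen for illustration only.
    return 0.7 * laplacian_var + 0.3 * sobel_mag

def target_frame_is_clear(frame_bgr: np.ndarray,
                          threshold: float = 100.0) -> bool:
    # Compare the weighted feature value with a preset threshold.
    return frame_sharpness_score(frame_bgr) > threshold
```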
(2) Judging the definition of the video based on a 3D convolutional neural network (a deep learning method): build a common 3D convolutional neural network model such as 3D-ResNet, train the model on labeled video data, and finally predict the video definition with the trained model.
(3) Judging the definition of the video based on a 2D convolutional neural network (a deep learning method) plus a time-series model such as LSTM: build a common convolutional neural network model such as 2D-ResNet to obtain the features of each frame, fuse the features between frames, and predict the video definition from the fused features.
During practical application of the related-art methods, the following technical problems were found:
(1) Because videos have rich scenes and rapidly changing content, motion frames are easily extracted during frame extraction, especially for frequently occurring life scenes such as square dancing and street skateboarding. If the obtained target frames are all motion frames, the recognition result of the target frames cannot accurately represent the definition of the whole video. The recognition result of method (1) therefore depends entirely on the acquired target frames, and the definition of videos of various categories cannot be recognized accurately.
(2) Methods (2) and (3) above take the continuity between frames into consideration, but backend processing capacity is limited in actual service scenarios, and recognition based on time-series models is slow, which results in low real-time processing efficiency of the backend.
Aiming at the problems, the embodiment of the invention provides a video definition processing method and device based on artificial intelligence and electronic equipment, which can efficiently and accurately identify the definition of a video.
An exemplary application of the electronic device provided by the embodiment of the present invention is described below. The electronic device may be a server, for example a server deployed in the cloud, which, based on a video remotely uploaded by a terminal device, extracts a plurality of image frames from the video, performs definition recognition on the foreground of the image frames to obtain a recognition result of the video definition, and, depending on a judgment of that recognition result, performs definition recognition on the background of the corresponding image frames to obtain an updated video definition recognition result. The electronic device may also be a terminal device, such as a handheld terminal device, which performs the series of video definition recognition processes on a video input on the terminal device to obtain a video definition recognition result. By running the artificial-intelligence-based video definition processing scheme provided by the embodiment of the present invention, the electronic device can improve the accuracy of video definition recognition, enhance the applicability of video definition processing in actual service scenarios, and improve the processing efficiency of definition recognition; the scheme is suitable for many application scenarios, for example, a recommendation system can preferentially recommend videos with high definition.
Referring to fig. 1, fig. 1 is a schematic diagram of the architecture of an artificial-intelligence-based video definition processing system 100 according to an embodiment of the present invention. The artificial-intelligence-based video definition processing system 100 includes: a server 200, a network 300, and terminal devices 400 (terminal device 400-1 and terminal device 400-2 are shown as examples). The terminal devices 400 connect to the server 200 via the network 300, which may be a wide area network or a local area network, or a combination of the two.
The terminal device 400 is used to obtain video samples, for example when a user (e.g., user A and user B in fig. 1) inputs a video through a video input interface (e.g., by selecting a local video file or capturing a video).
In some embodiments, the terminal device 400 locally executes the artificial-intelligence-based video definition processing method provided in the embodiments of the present invention to obtain a video definition recognition result for a video input by the user. For example, the user opens a video input interface on the terminal device 400 and inputs a video; the terminal device 400 performs a series of definition recognition processes on the video to obtain a video definition recognition result, and the result is displayed on the video input interface 410 of the terminal device 400 (video input interface 410-1 and video input interface 410-2 are shown as examples).
In some embodiments, the terminal device 400 may also send a video input by the user on the terminal device 400 to the server 200 through the network 300 and invoke the video definition processing function provided by the server 200. The server 200 performs a series of recognition processes on the input video based on the video definition processing method provided by the embodiment of the present invention to obtain a video definition recognition result. For example, the user opens a video input interface on the terminal device 400 and inputs a video; the terminal device sends the video to the server 200 through the network 300; after receiving the video, the server 200 recognizes the definition of the video and returns the obtained video definition recognition result to the terminal device, which displays the result on the display interface 410 of the terminal device 400; alternatively, the server 200 directly gives the video definition result of the video.
The embodiment of the invention can be widely applied to video definition processing scenarios. For example, when a video APP backend checks the basic information of videos (whether the video content is clear), a strategy is formulated by combining the characteristics of the video: a plurality of image frames are extracted from the video, definition recognition is performed on the foreground of the image frames to obtain a video definition recognition result, and, depending on a judgment of that result, definition recognition is performed on the background of the corresponding image frames to obtain an updated video definition recognition result, so that the definition of the video is recognized efficiently and accurately, finally achieving the purpose of simulating the definition a human would perceive and accelerating real-time processing. The artificial-intelligence-based video definition processing system 100 can also be applied to a recommendation system: the obtained video definition result is input into the recommendation system so that it recommends videos with higher definition to users, increasing the video click rate and viewing time; the video definition result can also be stored on a server for later offline use by the recommendation system. In addition, other scenarios related to video definition processing are potential application scenarios of the present invention.
In the following, an electronic device is taken as an example of a terminal device. Referring to fig. 2, fig. 2 is a schematic architecture diagram of a terminal device 400 (for example, may be the terminal device 400-1 and the terminal device 400-2 shown in fig. 1) provided in an embodiment of the present invention, and the terminal device 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (e.g., a microprocessor or any conventional processor), a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be Read-Only Memory (ROM), and the volatile memory may be Random Access Memory (RAM). The memory 450 described in embodiments of the present invention is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs for handling various basic system services and performing hardware-related tasks, e.g., a framework layer, a core library layer, a driver layer, etc.;
Network communication module 452 for reaching other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the video sharpness processing apparatus based on artificial intelligence provided in the embodiments of the present invention may be implemented in software, and fig. 2 shows the video sharpness processing apparatus 455 based on artificial intelligence stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: the extraction module 4551, the first recognition module 4552, the first determination module 4553, the second recognition module 4554, the second determination module 4555, and the recommendation module 4556; the extracting module 4551, the first identifying module 4552, the first determining module 4553, the second identifying module 4554 and the second determining module 4555 are used for implementing the video definition processing method based on artificial intelligence provided by the embodiment of the invention, and the recommending module 4556 is used for implementing the recommendation of the video definition identifying result according to the embodiment of the invention, and these modules are logical, so that any combination or further splitting can be performed based on the implemented functions. The functions of the respective modules will be described hereinafter.
The video sharpness processing method based on artificial intelligence provided by the embodiment of the invention can be executed by the server, or can be executed by terminal devices (for example, can be the terminal device 400-1 and the terminal device 400-2 shown in fig. 1), or can be executed by the server and the terminal device together.
The video definition processing method based on artificial intelligence provided by the embodiment of the invention will be described below in connection with exemplary application and implementation of the terminal device provided by the embodiment of the invention.
Referring to fig. 3 and fig. 4A, fig. 3 is a schematic architecture diagram of an artificial intelligence-based video sharpness processing apparatus 455 according to an embodiment of the present invention, which shows a flow of implementing video sharpness processing by a series of modules, and fig. 4A is a schematic flow diagram of an artificial intelligence-based video sharpness processing method according to an embodiment of the present invention, and the steps shown in fig. 4A will be described with reference to fig. 3.
In step S101, the server extracts a plurality of image frames to be identified from the video.
The user can input the video on the input interface of the terminal, after the input is completed, the terminal can forward the video to the server, and the server can extract a plurality of image frames to be identified from the video after receiving the video so as to obtain the definition identification result of the video according to the image frames.
In some embodiments, referring to fig. 3, the server extracting a plurality of image frames to be identified from the video comprises: performing equidistant frame extraction on the video to obtain a first image frame set; clustering the image frames in the first image frame set to obtain a plurality of similar image frame subsets, randomly extracting one image frame from each similar image frame subset, and combining them with the image frames in the first image frame set that are not clustered into any similar image frame subset to form a second image frame set; and filtering out the image frames that satisfy the blurring condition from the second image frame set, and taking the remaining image frames in the second image frame set as the image frames to be identified.
As an example, the equidistant frame extraction performed by the server to obtain the first image frame set may be implemented with the multimedia video processing tool FFmpeg (Fast Forward MPEG). That is, after the server receives the video, it reads the stream information in the video file, calls the corresponding decoder in the FFmpeg decoding library to open the stream, and decodes a number of video frames from the video according to the set number of frames extracted per second, thereby obtaining the first image frame set.
As an example, the filtering of the image frames by the server is implemented by clustering. The clustering process is as follows: project the first image frame set into a feature space to obtain an image feature vector corresponding to each first image frame; calculate the distance (Euclidean distance or cosine distance) between each feature vector and the other feature vectors; classify feature vectors whose calculated distances fall within a numerical threshold range into the same similar image frame subset, so as to obtain a plurality of similar image frame subsets, and treat each feature vector that is not clustered into any similar image frame subset as a new similar image frame category; randomly extract one image frame from each similar image frame subset as the representative of that category; and combine the image frames of all similar image frame categories to form the second image frame set.
This method of extracting image frames is suitable for various types of videos and can filter out blurred frames introduced by the extraction process, so that the subsequent video definition processing can be performed accurately.
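The following is a minimal sketch of this frame-selection step, assuming OpenCV; the color-histogram feature, the distance threshold and the blur threshold are illustrative assumptions rather than the patent's concrete choices.

```python
# A sketch of step S101: equidistant sampling, histogram-based clustering to
# drop near-duplicate frames, and filtering of overly blurred frames.
import cv2
import numpy as np

def select_frames(video_path: str, k: int = 30, dist_thresh: float = 0.25,
                  blur_thresh: float = 20.0) -> list:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, total - 1, k, dtype=int):  # equidistant sampling
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()

    # Cluster by histogram distance: a frame close to an existing cluster
    # representative is treated as a duplicate and skipped.
    reps, rep_feats = [], []
    for frame in frames:
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if all(np.linalg.norm(hist - f) > dist_thresh for f in rep_feats):
            reps.append(frame)
            rep_feats.append(hist)

    # Filter out frames that satisfy the blurring condition.
    def sharpness(f):
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return [f for f in reps if sharpness(f) >= blur_thresh]
```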
In step S102, the server performs sharpness recognition on the foreground in the plurality of image frames, to obtain foreground sharpness of each image frame.
In some embodiments, referring to fig. 3, the server performing definition recognition on the foreground in the plurality of image frames to obtain the foreground definition of each image frame includes: performing the following processing for each image frame based on the foreground definition model: mapping the image features of the image frame to confidences corresponding to different foreground definition categories through the forward propagation between the layers of the foreground definition model, and taking the foreground definition category corresponding to the maximum confidence as the foreground definition of the image frame.
As an example, referring to fig. 3, the foreground definition model includes an input layer, a hidden layer, and an output layer. Through the forward propagation process among the input layer, the hidden layer and the output layer of the foreground definition model, the server outputs the confidence of the foreground definition category to which each image frame belongs, and takes the foreground definition category corresponding to the maximum confidence as the foreground definition of the image frame, wherein the foreground definition categories include: foreground clear, foreground general and foreground blurred.
For example, the foreground definition of an image frame falls into the three categories foreground blurred, foreground general and foreground clear, and the output for one image frame is: foreground blurred 2%, foreground general 7%, foreground clear 91%; the foreground definition of the image frame is therefore clear. It should be noted that the closer a confidence is to 1, the better the prediction effect.
Here, the forward propagation process of the foreground definition model is described: the process in which sample data propagates from low layers to high layers is the forward propagation process of the foreground definition model. In the forward propagation process, an image is input into the input layer, image features are extracted by the hidden layer and input into the output layer for classification, a foreground definition category result is obtained, and the result is output when it meets expectations.
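A minimal sketch of this classification step is shown below; the three-way classifier is a stand-in for the foreground definition model (any CNN with a three-class output head would do), and the logits in the example are chosen to reproduce the 2%/7%/91% case above.

```python
# A sketch of step S102: map a frame's model output to confidences over the
# three foreground definition categories and take the argmax.
import numpy as np

CATEGORIES = ["foreground clear", "foreground general", "foreground blurred"]

def softmax(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())
    return e / e.sum()

def foreground_definition(logits: np.ndarray) -> tuple:
    """logits: raw 3-way output of the (stand-in) foreground model."""
    conf = softmax(logits)
    best = int(conf.argmax())
    return CATEGORIES[best], float(conf[best])

# Example reproducing the worked case in the text:
# confidences (0.91, 0.07, 0.02) -> ("foreground clear", 0.91).
label, confidence = foreground_definition(np.log([0.91, 0.07, 0.02]))
```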
In step S103, the server determines a first video definition of the video based on the foreground definition of each image frame, and takes it as the definition recognition result of the video.
Referring to fig. 4B, fig. 4B is a schematic flow chart of an artificial intelligence-based video sharpness processing method according to an embodiment of the present invention, and in some embodiments, fig. 4B illustrates that step S103 in fig. 4A may be implemented through steps S1031-S1032 shown in fig. 4B.
In step S1031, the server determines the number of image frames to be identified included in each foreground definition category based on the foreground definition category to which each image frame belongs; in step S1032, the server determines the first video definition of the video according to the proportion of the number of image frames included in each foreground definition category to the total number, wherein the total number is the number of the plurality of image frames to be identified.
In some embodiments, the foreground definition category of foreground definition corresponds to a first scale threshold, the foreground definition category of foreground general corresponds to a second scale threshold, the foreground definition category of foreground blur corresponds to a third scale threshold, and the second scale threshold, the first scale threshold, and the third scale threshold are arranged in descending order. The first video sharpness of the video is determined according to the proportion of the number of image frames included in each foreground sharpness category to the total number, and can be identified by corresponding conditions, as will be exemplified below.
Condition 1) when the proportion of the number of image frames included in the foreground clear category in the total number is greater than the first proportion threshold, determining that the first video definition of the video is clear.
Condition 2) when the proportion of the number of image frames included in the foreground general category in the total number is greater than the second proportion threshold and the proportion of the number of image frames included in the foreground blurred category in the total number is less than the third proportion threshold, determining that the first video definition of the video is general; likewise, when the proportion of the number of image frames included in the foreground clear category in the total number is less than the first proportion threshold and the proportion of the number of image frames included in the foreground blurred category in the total number is zero, determining that the first video definition of the video is general.
Condition 3) when the proportion of the number of image frames included in the foreground blurred category in the total number is greater than the third proportion threshold, determining that the first video definition of the video is blurred.
For example, assume that the total number of the image frames to be identified is m frames, where m is a natural number greater than zero. The server invokes the foreground definition model to perform definition recognition on the foreground of the image frames, obtaining the number a of image frames included in the foreground clear category, the number b of image frames included in the foreground general category, and the number c of image frames included in the foreground blurred category, where a, b and c are non-negative integers. The following cases can be distinguished:
Case 1) if a/m is greater than the first proportion threshold, the first video definition of the video is judged to be clear;
Case 2) if b/m is greater than the second proportion threshold and c/m is less than the third proportion threshold, the first video definition of the video is judged to be general; likewise, if a/m is less than the first proportion threshold and c/m is zero, the first video definition of the video is judged to be general;
Case 3) if c/m is greater than the third proportion threshold, the first video definition of the video is judged to be blurred.
As an example of step S1032, the server determining the first video definition of the video according to the proportion of the number of image frames included in each foreground definition category in the total number may further include: when the proportion of the number of image frames included in the foreground clear category in the total number is greater than the first proportion threshold, determining that the first video definition of the video is clear; when the proportion of the number of image frames included in the foreground clear category in the total number is less than the first proportion threshold and the proportion of the number of image frames included in the foreground general category in the total number is greater than the second proportion threshold, determining that the first video definition of the video is general; and when the proportion of the number of image frames included in the foreground blurred category in the total number is greater than the third proportion threshold, determining that the first video definition of the video is blurred.
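A minimal sketch of this decision rule follows, implementing cases 1-3 above. The evaluation order and the fallback label are assumptions; the threshold values satisfy the stated descending order (second > first > third) but the numbers themselves are illustrative.

```python
# A sketch of steps S1031-S1032: decide the first video definition from the
# per-category frame counts and proportion thresholds.
def first_video_definition(a: int, b: int, c: int, m: int,
                           t1: float = 0.6, t2: float = 0.7,
                           t3: float = 0.2) -> str:
    """a, b, c: counts of clear/general/blurred frames; m: total frames."""
    if a / m > t1:                        # case 1
        return "clear"
    if b / m > t2 and c / m < t3:         # case 2, first branch
        return "general"
    if a / m < t1 and c == 0:             # case 2, second branch
        return "general"
    if c / m > t3:                        # case 3
        return "blurred"
    return "general"                      # assumed fallback for other ratios
```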
In step S104, when the first video definition of the video does not meet the definition condition, the server performs definition recognition on the backgrounds of the plurality of image frames, and obtains the background definition of each image frame.
In some embodiments, the server performing definition recognition on the backgrounds of the plurality of image frames to obtain the background definition of each image frame includes: performing the following processing for each image frame: mapping the image features of the image frame to confidences of different background definition categories; wherein the background definition categories include: background clear and background blurred.
It should be noted that the background definition model is a two-class model: for each image frame it outputs background clear with its confidence and background blurred with its confidence, and only the background-blur confidence is used in this application. As an example, the background definition model may employ a ResNet-50 network.
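A minimal sketch of such a two-class background definition model is given below, assuming PyTorch/torchvision; building it as a ResNet-50 backbone with a two-way softmax head follows the text, while the exact head and training details are assumptions.

```python
# A sketch of a two-class background definition model on a ResNet-50 backbone.
import torch
import torch.nn as nn
from torchvision import models

class BackgroundDefinitionModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Replace the 1000-way ImageNet head with a 2-way head:
        # index 0 = background blurred, index 1 = background clear.
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns per-frame (cof_1, cof_2) confidences summing to 1.
        return torch.softmax(self.backbone(x), dim=1)
```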
It should be noted that the first video definition recognition result of the video may be a qualitative definition category, such as clear, general or blurred; it may also be a quantized definition score, for example any score from 0 to 10.
As an example of step S104, when the first video definition of the video is blurred, definition recognition is performed on the backgrounds of the plurality of image frames; or, when the first video definition of the video is a score lower than a definition score threshold, definition recognition is performed on the backgrounds of the plurality of image frames. For example, when the score of the first video definition of the video is 0-2, it is determined that the first video definition of the video does not satisfy the definition condition.
In step S105, a second video definition of the video is determined based on the background definition of each image frame, and is used as the updated definition recognition result of the video.
Referring to fig. 4C, fig. 4C is a schematic flow chart of an artificial-intelligence-based video definition processing method according to an embodiment of the present invention; in some embodiments, fig. 4C illustrates that step S105 in fig. 4A may be implemented by steps S1051-S1052 shown in fig. 4C. In step S1051, the confidences that each image frame belongs to the background blurred category are accumulated and averaged to obtain a mean confidence; in step S1052, when the mean confidence is greater than the confidence threshold, the second video definition of the video is determined to be blurred, and when the mean confidence is less than or equal to the confidence threshold, the second video definition of the video is determined to be general.
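A minimal sketch of steps S1051-S1052 follows, assuming the per-frame background-blur confidences (cof_1 in the detailed example later) have already been computed.

```python
# A sketch of steps S1051-S1052: average the per-frame background-blur
# confidences and compare against the category-specific threshold.
from typing import Sequence

def second_video_definition(blur_confidences: Sequence[float],
                            confidence_threshold: float) -> str:
    mean_confidence = sum(blur_confidences) / len(blur_confidences)
    return "blurred" if mean_confidence > confidence_threshold else "general"
```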
In some embodiments, on the basis of fig. 3, the following steps may also be performed before step S105:
In step S201, category information of a video is acquired;
In step S202, the confidence threshold corresponding to the video category information is searched for in the correspondence between the plurality of video categories and the confidence threshold.
It should be noted that the confidence threshold may differ for different categories of video. For example, for dance videos, a person may show ghosting while the background is clear, and slight ghosting of the person does not affect the human impression; for close-up videos, slight blurring weighs more heavily on the human impression of the video. The confidence threshold for dance videos may therefore be set higher than that for close-up videos; that is, the confidence threshold setting may differ for different categories of video. As an example, the correspondence between video category information and confidence thresholds may be stored in a database of the server or terminal device for the server or terminal device to invoke.
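A minimal sketch of steps S201-S202 follows; the category names and threshold values are purely illustrative assumptions.

```python
# A sketch of steps S201-S202: look up the confidence threshold by video
# category, falling back to a default when the category is unknown.
CATEGORY_CONFIDENCE_THRESHOLDS = {
    "dance": 0.85,     # ghosting tolerated more when the background is clear
    "close-up": 0.75,  # blur is more noticeable in close-up videos
}
DEFAULT_CONFIDENCE_THRESHOLD = 0.80

def confidence_threshold_for(video_category: str) -> float:
    return CATEGORY_CONFIDENCE_THRESHOLDS.get(video_category,
                                              DEFAULT_CONFIDENCE_THRESHOLD)
```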
The artificial-intelligence-based video definition processing method described above can automatically select different recognition modes for videos of different categories, achieving efficient and accurate recognition of various videos. For clear and general videos, only the foreground definition model needs to be invoked to obtain the definition recognition result efficiently and accurately, which improves the recall and precision of clear videos as well as the processing efficiency. For blurred videos, the background definition model is further invoked to perform definition recognition and update the video definition recognition result. In this way, videos judged blurred merely because the foreground is blurred can be recalled, improving the precision on blurred videos. Finally, the purpose of simulating how human senses perceive video definition can be achieved.
In some embodiments, referring to fig. 4B, based on fig. 4A, following step S105, the following steps may also be performed:
In step S106, the video definition recognition result is sent to the recommendation system, so that the recommendation system executes a corresponding recommendation operation according to the video definition.
In some embodiments, referring to fig. 6, fig. 6 is a schematic architecture diagram of a recommendation system according to an embodiment of the present invention. In fig. 6, the recommendation system includes a definition module, a personalization module, a recall module, a sorting module, a diversity module, and a recommendation module based on diversity plus definition. The personalization module computes an object portrait from object behaviors so as to obtain interest preferences in different dimensions from object attributes, historical behaviors, content of interest, and the like. The definition module implements the video definition processing procedure to obtain candidate videos with higher definition; the definition recognition results it produces can be stored locally and used directly. The recall module includes recall models of multiple channels such as collaborative filtering, topic models, content recall and Social Network Software (SNS), ensuring the diversity of candidate videos at recall time. The sorting module uniformly scores and sorts the recalled results, selecting from the candidate videos those that the user is most interested in and that have high definition, i.e., the optimal videos among the candidates, to obtain videos satisfying both the diversity and definition conditions. The recommendation system balances multiple dimensions of the recommendation results, such as diversity, definition and personalization, and can meet users' demand for diversity.
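As an illustration only, the following sketch shows one way such a pipeline might combine a personalized interest score with the definition recognition result when sorting candidates; the weighting scheme is an assumption, not the recommendation system's actual scoring.

```python
# An illustrative sketch of sorting candidates by interest score weighted by
# the definition label produced by the definition module.
DEFINITION_WEIGHT = {"clear": 1.0, "general": 0.5, "blurred": 0.0}

def rank_candidates(candidates):
    """candidates: iterable of (video_id, interest_score, definition_label)."""
    scored = [(vid, interest * DEFINITION_WEIGHT.get(label, 0.0))
              for vid, interest, label in candidates]
    return [vid for vid, _ in sorted(scored, key=lambda x: -x[1])]
```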
The artificial-intelligence-based video definition model enables the backend to process the definition of all kinds of videos, saving substantial labor cost. Meanwhile, the obtained video definition results can be applied in a recommendation system to recommend videos with higher definition to users, increasing the video click rate and viewing time; the video definition results can also be stored on a server for subsequent offline use by the recommendation system.
In the following, an exemplary application of the embodiment of the present invention in a practical application scenario will be described.
During the implementation of the embodiments of the present invention, the following problems were found in the related art. Take the application scenario of judging the definition of short videos as an example: with the continuous development of the mobile internet, mobile platforms such as smartphones have risen rapidly, and short videos carried on smartphones/tablets have become a new form of content distribution in recent years. With the explosive growth of short video data, it has become important for a backend system to judge the definition of short videos quickly and accurately. However, short videos have numerous categories, and most of their video frames are motion frames, which increases the difficulty of judging their definition. If a video frame is a motion frame, its recognition result cannot accurately represent the definition of the whole short video; yet recognizing fused inter-frame features with a time-series model leads to slow recognition. Therefore, how to recognize video definition efficiently and accurately is the problem addressed by the present invention.
As an example, referring to fig. 7, fig. 7 is a schematic diagram of two image frames extracted from a short video according to an embodiment of the present invention, showing a foreground 301, a foreground 302, a background 303, and a background 304. Because the foregrounds of the two image frames contain ghosting, a related-art model would recognize them as blurred; but the backgrounds of the two frames are relatively clear, and human senses would judge the definition of the short video containing them to be general. Consider category videos such as dancing and sports, where the people in a video frame may show ghosting while the background is clear, or distant-view videos, where the subject in a single frame is too small to be distinguished clearly: videos in these categories have a good overall appearance, and human senses would judge the definition of such short videos to be general. That is, the definition recognition methods in the related art are not suited to the variety of business scenarios.
To address these problems, an embodiment of the present invention provides the artificial intelligence-based video definition processing method, which can be combined with the business scenario to provide video definition processing better suited to that scenario, has high processing efficiency, and effectively improves both the efficiency and the accuracy of video definition processing.
The artificial intelligence-based video definition processing method provided by the embodiments of the present invention extracts a plurality of image frames from a video, performs definition recognition on the foreground of those image frames to obtain a video definition recognition result, and, depending on that result, performs definition recognition on the background of the corresponding image frames to obtain an updated video definition recognition result.
Referring to fig. 8, fig. 8 is a schematic flow chart of an artificial intelligence-based video definition processing method according to an embodiment of the present invention, and an implementation scheme of the embodiment of the present invention is specifically as follows:
First, k frames are extracted from the video at equal intervals using the multimedia video processing tool FFmpeg. The extracted frames are then clustered according to features such as the color histogram and Canny-operator responses to filter out repeated frames; at the same time, a preliminary screening filters out frames that are far too blurry. Finally, m frames are selected from the k frames, where k and m are fixed constants.
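A minimal sketch of this sampling-and-filtering stage is shown below. The patent names FFmpeg; for a self-contained example this sketch reads frames with OpenCV, replaces the full clustering over histogram and Canny features with a greedy histogram de-duplication, and uses the variance of the Laplacian as the preliminary blur screen. All thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

def extract_candidate_frames(video_path: str, k: int = 30, m: int = 8,
                             sim_thresh: float = 0.95,
                             blur_thresh: float = 40.0) -> list:
    """Sample k frames at equal intervals, drop near-duplicates and overly
    blurry frames, and keep at most m survivors."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, max(total - 1, 0), num=k, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()

    kept, hists = [], []
    for f in frames:
        # Color histogram stands in for the full histogram + Canny clustering.
        hist = cv2.calcHist([f], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if any(cv2.compareHist(h, hist, cv2.HISTCMP_CORREL) > sim_thresh
               for h in hists):
            continue  # near-duplicate of an already kept frame
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
            continue  # preliminary screen: frame is far too blurry
        hists.append(hist)
        kept.append(f)
    return kept[:m]
```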
Frames are extracted from the short video uploaded by the user; each of the m frames is normalized and fed into the foreground definition model, which judges the definition of the foreground content of each frame. The foreground definition model supports images of arbitrary size as input; its output is one of three categories (foreground clear, foreground general, foreground blurred) together with the corresponding confidence.
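As a sketch of the per-frame normalization step (the patent does not specify the constants; the commonly used ImageNet statistics are assumed here):

```python
import numpy as np

# Assumed normalization constants; the patent does not specify them.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize_frame(frame_rgb: np.ndarray) -> np.ndarray:
    """Scale an HxWx3 uint8 RGB frame to [0, 1] and standardize per channel."""
    return (frame_rgb.astype(np.float32) / 255.0 - MEAN) / STD
```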
In some examples, the foreground definition category of the video is identified by the following conditions. Let p_clear, p_general and p_blur denote the proportions of the m frames judged foreground clear, foreground general and foreground blurred, and let T1, T2 and T3 be the proportion thresholds for the clear, general and blurred categories respectively, arranged so that T2 > T1 > T3. A sketch of this decision rule follows the list.

Condition 1) if p_clear > T1, the short video is clear;

Condition 2) if p_clear < T1 and p_blur = 0 (some frames are clear and the rest are general), the short video is general;

Condition 3) if p_general > T2 and p_blur < T3, the short video is general;

Condition 4) if p_blur > T3, the video is judged foreground blurred and its frames are input to the background definition model.
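A minimal sketch of this first-pass rule, assuming the per-frame labels have already been produced by the foreground definition model (the threshold values are illustrative and satisfy T2 > T1 > T3):

```python
def first_pass_definition(labels, t1=0.6, t2=0.7, t3=0.2):
    """labels: per-frame foreground labels, each 'clear', 'general' or 'blur'.
    Returns the first-pass video definition, or a marker that the frames
    should be passed to the background definition model (condition 4)."""
    n = len(labels)
    p_clear = labels.count("clear") / n
    p_general = labels.count("general") / n
    p_blur = labels.count("blur") / n
    if p_clear > t1:                       # condition 1
        return "clear"
    if p_clear < t1 and p_blur == 0:       # condition 2
        return "general"
    if p_general > t2 and p_blur < t3:     # condition 3
        return "general"
    if p_blur > t3:                        # condition 4
        return "needs_background_check"
    return "general"  # fallback for combinations not covered above (assumption)
```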
If the foreground definition model judges the video to be blurred, the category information of the video is acquired, and the definition of the short video is given based on the confidences of the m per-frame results output by the background definition model together with the category information of the video.
As an example, assume the background definition model outputs (cof_1, cof_2) for each frame, where cof_1 is the confidence that the background is blurred, cof_2 is the confidence that the background is not blurred, and cof_1 + cof_2 = 1. The cof_1 values of the m frames are added and averaged. Because the acceptable range of blur may differ across video categories, the threshold on cof_1 is set per category: if cof_1_avg of the m frames is greater than thre, the short video is given a blurred label; otherwise it is given a general label, where thre is the confidence threshold corresponding to the video's category.
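A sketch of this aggregation, assuming per-frame outputs (cof_1, cof_2) from the background definition model and an illustrative per-category threshold table:

```python
# Illustrative thresholds; real values would be tuned per video category.
CATEGORY_THRESHOLDS = {"dance": 0.55, "sports": 0.60, "default": 0.50}

def second_pass_definition(frame_confs, video_category):
    """frame_confs: list of (cof_1, cof_2) pairs with cof_1 + cof_2 == 1,
    where cof_1 is the confidence that the frame's background is blurred."""
    cof_1_avg = sum(c1 for c1, _ in frame_confs) / len(frame_confs)
    thre = CATEGORY_THRESHOLDS.get(video_category, CATEGORY_THRESHOLDS["default"])
    return "blur" if cof_1_avg > thre else "general"
```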
In some examples, the background definition model is mainly used to judge the background definition of transition frames (e.g., motion frames in which the subject is blurred but the background is sharp). The model is a two-class model that serves as an auxiliary judgment to the foreground definition model, determining whether the background of the image is sharp. The background definition model mainly adopts a ResNet-50 network.
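A minimal sketch of such a two-class model on a standard torchvision ResNet-50 backbone. The patent names ResNet-50 but does not give this construction; the two-way head and the softmax producing (cof_1, cof_2) are assumptions consistent with the text:

```python
import torch
import torch.nn as nn
from torchvision import models

class BackgroundDefinitionModel(nn.Module):
    """Two-class model: outputs (cof_1, cof_2) = P(background blurred), P(not)."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=None)
        # Replace the 1000-way ImageNet head with a 2-way classification head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.backbone(x), dim=1)  # each row sums to 1
```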
In some examples, referring to fig. 5, fig. 5 is a schematic structural diagram of a foreground definition model provided by an embodiment of the present invention. The backbone network of the foreground definition model mainly comprises a convolution layer, a pooling layer, residual modules, downsampling layers, an adaptive downsampling layer, a random inactivation (dropout) layer, and fully connected layers. The residual module mainly uses convolution layers with 5×5, 3×3 and 1×1 kernels, followed by a shortcut (identity) connection; the downsampling layers downsample the image mainly by using a convolution layer or pooling layer with a stride of 2; the adaptive downsampling layer converts feature maps of any scale with the same channel count into feature vectors of the same dimension, so that the convolutional neural network model can take images of any scale as input.
In some examples, the framework of the foreground definition model is described with reference to fig. 5. As an example, the input layer of the foreground definition model normalizes the input image; the categories of the hidden layers of the foreground definition model may include: a convolution layer, a pooling layer, a residual module, a downsampling layer, an adaptive pooling layer, a random inactivation layer, and a fully connected layer.
Convolution layer: performs convolutional linear mapping on the input image to extract image features. It should be noted that certain features of the image can be extracted from the input through mathematical operations with a convolution kernel, and different kernels extract different features; training the foreground definition model therefore allows it to extract the best-performing image features, reducing model complexity and saving a large amount of computing resources and time.
Pooling layer: uses mean pooling or max pooling to retain the main image features. Mean pooling averages the values in each pooling region, while max pooling divides the feature map into rectangular regions and takes the maximum of each region. After the pooling operation, unimportant features in the convolution layer's feature map are removed and the number of parameters is reduced, which mitigates overfitting.
Downsampling layer: a nonlinear downsampling method. Through 4 serial downsampling passes, the features extracted by each pass can be output in parallel, yielding 4 groups of feature maps of different sizes; a residual module is added before each downsampling pass. It should be noted that the shortcut connection in the residual module provides direct signal propagation through a simple identity mapping, preserves the spatial structure of the gradient, and alleviates the problem of gradient degradation in the model.
Adaptive pooling layer: converts the 4 groups of differently sized feature maps output by the downsampling layers into 4 groups of feature maps of the same size, and integrates them into 1 group through a connection operation. The connection operation adds the 4 groups of feature maps pairwise through addition and outputs all the summed feature maps. The adaptive downsampling layer automatically computes the convolution kernel size and stride from the configured input and output sizes so as to produce the configured output size; that is, it can convert feature maps of any size with the same channel count into feature vectors of the same dimension, which is why the foreground definition model supports images of arbitrary size.
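The effect of the adaptive layer can be seen in a few lines of PyTorch (an illustration, not the patent's code): feature maps of any spatial size with the same channel count collapse to vectors of the same dimension.

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)  # target spatial size: 1x1 per channel
for h, w in [(7, 7), (12, 20), (30, 17)]:   # arbitrary spatial sizes
    feat = torch.randn(1, 256, h, w)        # same channel count each time
    print(pool(feat).flatten(1).shape)      # always torch.Size([1, 256])
```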
Fully connected layer: integrates the convolutional features extracted earlier into an N-dimensional feature vector. Between the two fully connected layers, a random inactivation (dropout) layer discards some neuron nodes with a certain probability, weakening the joint adaptability among neuron nodes. For example, the dropout rate may be 50%, discarding half of the neuron nodes.
Output layer: classifies the N-dimensional feature vector with a softmax function to output the definition category of each image frame and the confidence corresponding to each definition category, where N is a natural number greater than zero.
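Putting the pieces together, the following condensed sketch follows only the structural pattern of the description (residual blocks before strided downsampling, adaptive pooling over the four feature-map groups, dropout between fully connected layers, softmax output). Layer counts, channel widths, and the use of concatenation rather than pairwise addition to fuse the four groups are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """5x5 / 3x3 / 1x1 convolutions with an identity shortcut connection."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 1),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class ForegroundDefinitionModel(nn.Module):
    """Three-class foreground model: clear / general / blurred confidences."""
    def __init__(self, ch: int = 64, n_classes: int = 3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1),
                                  nn.MaxPool2d(2))
        # A residual block precedes each of the 4 stride-2 downsampling layers.
        self.stages = nn.ModuleList([
            nn.Sequential(ResidualBlock(ch),
                          nn.Conv2d(ch, ch, 3, stride=2, padding=1))
            for _ in range(4)
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)  # any input size -> fixed vector
        self.head = nn.Sequential(
            nn.Linear(4 * ch, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5),                 # discard half the neuron nodes
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(self.pool(x).flatten(1))  # same dim for all 4 groups
        return torch.softmax(self.head(torch.cat(feats, dim=1)), dim=1)
```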
In some examples, before the model is used, the definition classification for short videos is based on the nature of the videos themselves as well as the requirements of the business side: quantitative standards are defined for each of the three categories (clear, general, and blurred), and the training samples are labeled accordingly.
This artificial intelligence-based video definition model lets the background system process short video definition directly, saving a great deal of labor cost. Meanwhile, the results can be applied in a recommendation system, where short videos of higher definition are recommended to users to increase the click-through rate and users' watch time.
The following continues the description of an exemplary structure in which the artificial intelligence-based video definition processing apparatus 455 provided by the embodiments of the present invention is implemented as software modules. In some embodiments, as shown in FIG. 2, the software modules stored in the memory 440 of the artificial intelligence-based video definition processing apparatus 455 may include:
an extraction module 4551, configured to extract a plurality of image frames to be identified from a video; a first recognition module 4552, configured to perform definition recognition on the foreground in the plurality of image frames to obtain the foreground definition of each image frame; a first determination module 4553, configured to determine a first video definition of the video based on the foreground definition of each image frame and use the first video definition as the definition recognition result of the video; a second recognition module 4554, configured to perform definition recognition on the backgrounds of the plurality of image frames when the first video definition of the video does not meet the definition condition, to obtain the background definition of each image frame; and a second determination module 4555, configured to determine a second video definition of the video based on the background definition of each image frame, as the updated definition recognition result of the video.
In the above solution, the extraction module 4551 is configured to: extract frames from the video at equal intervals to obtain a first image frame set; cluster the image frames in the first image frame set to obtain a plurality of similar image frame subsets, randomly extract one image frame from each similar image frame subset, and combine them with the images not clustered into any similar image frame subset in the first image frame set to form a second image frame set; and filter out the image frames meeting the blur condition from the second image frame set, taking the remaining image frames in the second image frame set as the image frames to be identified.
A first identifying module 4552 for:
the image features of the image frames are mapped to confidence degrees corresponding to different foreground definition categories, and the foreground definition category corresponding to the maximum confidence degree is used as the foreground definition of the image frames.
A first determining module 4553, configured to:
the foreground definition categories include: the foreground is clear, the foreground is general and the foreground is fuzzy;
Determining the number of image frames to be identified, which are included in each foreground definition category, based on the foreground definition category to which each image frame belongs;
And determining the first video definition of the video according to the proportion of the number of the image frames included in each foreground definition category in the total number, wherein the total number is a count corresponding to a plurality of image frames to be identified.
A first determining module 4553, configured to:
the foreground definition category of the foreground definition corresponds to a first proportion threshold, the foreground definition category of the foreground general corresponds to a second proportion threshold, the foreground definition category of the foreground blurring corresponds to a third proportion threshold, and the second proportion threshold, the first proportion threshold and the third proportion threshold are arranged in descending order;
Determining that a first video definition of the video is clear when a proportion of a number of image frames included in a foreground definition category of the foreground definition in total number is greater than a first proportion threshold;
when the proportion of the number of image frames included in the foreground general definition category in the total number is greater than the second proportion threshold, and the proportion of the number of image frames included in the foreground blurred definition category in the total number is less than the third proportion threshold, determining that the first video definition of the video is general;

when the proportion of the number of image frames included in the foreground clear definition category in the total number is less than the first proportion threshold, and the proportion of the number of image frames included in the foreground blurred definition category in the total number is zero, determining that the first video definition of the video is general;
and when the proportion of the number of image frames included in the foreground blurred definition category in the total number is greater than the third proportion threshold, determining that the first video definition of the video is blurred.
A second identifying module 4554 for:
the following processing is performed for each image frame:
Mapping image features of the image frames to confidence levels of different background definition categories;
wherein the background definition category comprises: clear background and blurred background.
A second determining module 4555 configured to:
accumulating the confidences with which each image frame belongs to the background blurred definition category and taking the mean, to obtain a mean confidence;

and when the mean confidence is greater than the confidence threshold, determining that the second video definition of the video is blurred, and when the mean confidence is less than or equal to the confidence threshold, determining that the second video definition of the video is general.
The second determining module 4555 is further configured to:
Acquiring category information of a video;
searching, among the correspondences between multiple video categories and confidence thresholds, for the confidence threshold corresponding to the category information of the video.
Recommendation module 4556 for: and sending the definition identification result of the video to a recommendation system so that the recommendation system executes corresponding recommendation operation according to the definition of the video.
Embodiments of the present invention provide a storage medium having stored therein executable instructions which, when executed by a processor, cause the processor to perform a method provided by embodiments of the present invention, for example, an artificial intelligence based video sharpness processing method as illustrated in fig. 4A, 4B or 4C.
In some embodiments, the storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or may be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, embodiments of the present invention extract a plurality of image frames from a video, perform definition recognition on the foreground of those frames to obtain a video definition recognition result, and, depending on that result, perform definition recognition on the background of the corresponding frames to obtain an updated result. In this way, different recognition paths are selected automatically for different types of videos, the definition of a short video is recognized efficiently and accurately, and the recognition approximates human sensory judgment of video definition. For clear and general videos, only the foreground definition model needs to be called, so a definition recognition result is obtained efficiently and accurately, improving the recall rate and precision for clear videos as well as processing efficiency. For blurred videos, the background definition model is additionally called to update the recognition result; in this way, videos judged blurred merely because their foreground is blurred can be recalled, improving precision for blurred videos. Feeding the resulting video definition labels into a recommendation system lets the system recommend videos of higher definition to users, increasing the video click-through rate and users' watch time.
The foregoing is merely exemplary embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (11)

1. An artificial intelligence-based video definition processing method, characterized in that the method comprises:

extracting a plurality of image frames to be identified from a video;

performing definition recognition on the foreground in the plurality of image frames to obtain a foreground definition of each of the image frames, wherein the foreground definition categories comprise: foreground clear, foreground general and foreground blurred;

determining a first video definition of the video according to proportion thresholds and the proportion, in a total number, of the number of image frames included in each of the foreground definition categories, wherein different foreground definition categories correspond to different proportion thresholds, and the total number is a count corresponding to the plurality of image frames to be identified;

when the foreground definition category of the video is foreground clear or foreground general, taking the first video definition as the definition recognition result of the video;

when the foreground definition category of the video is foreground blurred, performing definition recognition on the backgrounds of the plurality of image frames to obtain a background definition of each of the image frames;

determining a second video definition of the video based on the background definition of each of the image frames, as the updated definition recognition result of the video.

2. The method according to claim 1, characterized in that extracting a plurality of image frames to be identified from the video comprises:

extracting frames from the video at equal intervals to obtain a first image frame set;

clustering the image frames in the first image frame set to obtain a plurality of similar image frame subsets, extracting one image frame from each of the similar image frame subsets, and combining them with the images not clustered into any of the similar image frame subsets to form a second image frame set;

filtering out the image frames meeting the blur condition from the second image frame set, and taking the remaining image frames in the second image frame set as the image frames to be identified.

3. The method according to claim 1, characterized in that performing definition recognition on the foreground in the plurality of image frames to obtain the foreground definition of each of the image frames comprises:

performing the following processing on each of the image frames: mapping the image features of the image frame to confidences corresponding to different foreground definition categories, and taking the foreground definition category corresponding to the maximum confidence as the foreground definition of the image frame.

4. The method according to claim 1, characterized in that:

the foreground clear category corresponds to a first proportion threshold, the foreground general category corresponds to a second proportion threshold, the foreground blurred category corresponds to a third proportion threshold, and the second proportion threshold, the first proportion threshold and the third proportion threshold are arranged in descending order;

determining the first video definition of the video according to the proportion thresholds and the proportion of the number of image frames included in each foreground definition category in the total number comprises:

when the proportion of the number of image frames included in the foreground clear category in the total number is greater than the first proportion threshold, determining that the first video definition of the video is clear;

when the proportion of the number of image frames included in the foreground general category in the total number is greater than the second proportion threshold, and the proportion of the number of image frames included in the foreground blurred category in the total number is less than the third proportion threshold, determining that the first video definition of the video is general;

when the proportion of the number of image frames included in the foreground clear category in the total number is less than the first proportion threshold, and the proportion of the number of image frames included in the foreground blurred category in the total number is zero, determining that the first video definition of the video is general;

when the proportion of the number of image frames included in the foreground blurred category in the total number is greater than the third proportion threshold, determining that the first video definition of the video is blurred.

5. The method according to claim 1, characterized in that performing definition recognition on the backgrounds of the plurality of image frames to obtain the background definition of each of the image frames comprises:

performing the following processing on each of the image frames: mapping the image features of the image frame to confidences of different background definition categories, wherein the background definition categories comprise: background clear and background blurred.

6. The method according to claim 5, characterized in that determining the second video definition of the video based on the background definition of each of the image frames comprises:

accumulating the confidences with which each of the image frames belongs to the background blurred category and taking the mean, to obtain a mean confidence;

when the mean confidence is greater than a confidence threshold, determining that the second video definition of the video is blurred; when the mean confidence is less than or equal to the confidence threshold, determining that the second video definition of the video is general.

7. The method according to claim 6, characterized in that the method further comprises:

acquiring category information of the video;

searching, among the correspondences between multiple video categories and confidence thresholds, for the confidence threshold corresponding to the category information of the video.

8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:

sending the definition recognition result of the video to a recommendation system, so that the recommendation system performs a corresponding recommendation operation according to the definition of the video.

9. An artificial intelligence-based video definition processing apparatus, characterized by comprising:

an extraction module, configured to extract a plurality of image frames to be identified from a video;

a first recognition module, configured to perform definition recognition on the foreground in the plurality of image frames to obtain the foreground definition of each of the image frames, wherein the foreground definition categories comprise: foreground clear, foreground general and foreground blurred;

a first determination module, configured to determine a first video definition of the video according to proportion thresholds and the proportion of the number of image frames included in each foreground definition category in the total number, wherein different foreground definition categories correspond to different proportion thresholds, and the total number is a count corresponding to the plurality of image frames to be identified; and to take the first video definition as the definition recognition result of the video when the foreground definition category of the video is foreground clear or foreground general;

a second recognition module, configured to perform definition recognition on the backgrounds of the plurality of image frames when the foreground definition category of the video is foreground blurred, to obtain the background definition of each of the image frames;

a second determination module, configured to determine a second video definition of the video based on the background definition of each of the image frames, as the updated definition recognition result of the video.

10. An electronic device, comprising:

a memory, configured to store computer-executable instructions;

a processor, configured to implement the artificial intelligence-based video definition processing method according to any one of claims 1 to 8 when executing the computer-executable instructions stored in the memory.

11. A computer-readable storage medium storing computer-executable instructions or a computer program, wherein the computer-executable instructions or the computer program, when executed by a processor, implement the artificial intelligence-based video definition processing method according to any one of claims 1 to 8.
CN202010334489.1A 2020-04-24 2020-04-24 Video clarity processing method, device and electronic equipment based on artificial intelligence Active CN111432206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334489.1A CN111432206B (en) 2020-04-24 2020-04-24 Video clarity processing method, device and electronic equipment based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010334489.1A CN111432206B (en) 2020-04-24 2020-04-24 Video clarity processing method, device and electronic equipment based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN111432206A CN111432206A (en) 2020-07-17
CN111432206B true CN111432206B (en) 2024-11-26

Family

ID=71558364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334489.1A Active CN111432206B (en) 2020-04-24 2020-04-24 Video clarity processing method, device and electronic equipment based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN111432206B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351196B (en) * 2020-09-22 2022-03-11 北京迈格威科技有限公司 Image definition determining method, image focusing method and device
CN113627496B (en) * 2021-07-27 2024-09-24 交控科技股份有限公司 Switch machine fault prediction method, device, electronic equipment and readable storage medium
CN117831038A (en) * 2022-01-10 2024-04-05 于胜田 Method and system for recognizing characters of big data digital archives
CN114491146B (en) * 2022-04-01 2022-07-12 广州智慧城市发展研究院 Video image processing method suitable for video monitoring equipment
CN115171048B (en) * 2022-07-21 2023-03-17 北京天防安全科技有限公司 Asset classification method, system, terminal and storage medium based on image recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621672A (en) * 2009-03-10 2010-01-06 北京中星微电子有限公司 Method and device for analyzing video definition
CN109831680A (en) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 A kind of evaluation method and device of video definition
CN110533097A (en) * 2019-08-27 2019-12-03 腾讯科技(深圳)有限公司 A kind of image definition recognition methods, device, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6040655B2 (en) * 2012-09-13 2016-12-07 オムロン株式会社 Image processing apparatus, image processing method, control program, and recording medium
US20140368669A1 (en) * 2012-10-04 2014-12-18 Google Inc. Gpu-accelerated background replacement
US9232189B2 (en) * 2015-03-18 2016-01-05 Avatar Merger Sub Ii, Llc. Background modification in video conferencing
CN105825511B (en) * 2016-03-18 2018-11-02 南京邮电大学 A kind of picture background clarity detection method based on deep learning
CN105915892A (en) * 2016-05-06 2016-08-31 乐视控股(北京)有限公司 Panoramic video quality determination method and system
CN109478329B (en) * 2016-10-14 2021-04-20 富士通株式会社 Image processing method and device
US10095932B2 (en) * 2016-12-22 2018-10-09 Sap Se Video abstract using signed foreground extraction and fusion
CN110278485B (en) * 2019-07-29 2021-04-23 北京华雨天成文化传播有限公司 Method and device for evaluating video quality
CN110969602B (en) * 2019-11-26 2023-09-05 北京奇艺世纪科技有限公司 Image definition detection method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621672A (en) * 2009-03-10 2010-01-06 北京中星微电子有限公司 Method and device for analyzing video definition
CN109831680A (en) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 A kind of evaluation method and device of video definition
CN110533097A (en) * 2019-08-27 2019-12-03 腾讯科技(深圳)有限公司 A kind of image definition recognition methods, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111432206A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN110533097B (en) Image definition recognition method and device, electronic equipment and storage medium
CN111432206B (en) Video clarity processing method, device and electronic equipment based on artificial intelligence
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
CN112734775B (en) Image labeling, image semantic segmentation and model training methods and devices
CN112381104B (en) Image recognition method, device, computer equipment and storage medium
CN112183672B (en) Image classification method, feature extraction network training method and device
CN112215171B (en) Target detection method, device, equipment and computer readable storage medium
CN111026914A (en) Training method of video abstract model, video abstract generation method and device
CN111783712A (en) Video processing method, device, equipment and medium
CN107992937B (en) Unstructured data judgment method and device based on deep learning
CN111182367A (en) Video generation method and device and computer system
CN112149642A (en) Text image recognition method and device
CN113705293B (en) Image scene recognition method, device, equipment and readable storage medium
CN112712051A (en) Object tracking method and device, computer equipment and storage medium
CN117036834B (en) Data classification method and device based on artificial intelligence and electronic equipment
CN113515669A (en) AI-based data processing method and related equipment
CN114064974A (en) Information processing method, information processing apparatus, electronic device, storage medium, and program product
CN112712068B (en) Key point detection method and device, electronic equipment and storage medium
CN116932788A (en) Cover image extraction method, device, equipment and computer storage medium
CN113569613B (en) Image processing method, device, image processing equipment and storage medium
CN114529761A (en) Video classification method, device, equipment, medium and product based on classification model
CN111986259A (en) Training method of character and face detection model, auditing method of video data and related device
CN114357301B (en) Data processing method, device and readable storage medium
CN113903083B (en) Behavior recognition method and apparatus, electronic device, and storage medium
CN111768729A (en) VR scene automatic explanation method, system and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant