
CN106874827A - Video recognition method and device - Google Patents

Video recognition method and device

Info

Publication number
CN106874827A
Authority
CN
China
Prior art keywords
face
frame image
video
target
identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510925602.2A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd, Qizhi Software Beijing Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201510925602.2A priority Critical patent/CN106874827A/en
Publication of CN106874827A publication Critical patent/CN106874827A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a video recognition method and device. A target video is obtained; the target video is divided according to a first preset frame interval to obtain multiple video segments; a first frame image is extracted from each video segment according to a second preset frame interval; the first frame images that contain face information are extracted to obtain second face frame images; based on a preset recognition model, the face identities in the second face frame images are recognized to determine the face identifiers contained in the second face frame images; and, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which they belong, a three-way correspondence table of face identifiers, second face frame images, and video segments is formed. Based on the determined correspondence between video segments and face identifiers, only the video segments in which the actor the user wishes to watch appears can be pushed to the user.

Description

Video recognition method and device
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a video recognition method and device.
Background technology
With the rapid development of society and the continuous progress of science and technology, the amount of information people encounter is growing geometrically, and people increasingly need to mine useful information from this mass of data by means of information search technology.
Current information search technology works well for text: it can quickly locate articles that contain a user's preset keywords. For a video, however, if a user wishes to watch only the segments in which an actor he or she likes appears, the user can only search by dragging the video progress bar or pressing the fast-forward key, which is time-consuming, laborious, and imprecise.
Summary of the invention
To solve the prior-art problem that an actor's appearance segments in a video cannot be located, the present disclosure provides a video recognition method and device. The video is divided into segments, face recognition is performed within the divided segments, and the correspondence between video segments and face identities is determined, so that only the video segments in which the actor the user wishes to watch appears can be pushed to the user. The method performs face recognition and face-based video segment positioning effectively and quickly, improving the user experience of watching videos.
The present disclosure provides a video recognition method and device; the technical scheme is as follows.
According to a first aspect of the embodiments of the present disclosure, a video recognition method is provided, including:
obtaining a target video;
dividing the target video according to a first preset frame interval to obtain multiple video segments;
extracting a first frame image from each of the video segments according to a second preset frame interval;
detecting whether the first frame images contain face information, and extracting the first frame images that contain face information to obtain second face frame images;
recognizing, based on a preset recognition model, the face identities in the second face frame images to determine the face identifiers contained in the second face frame images;
forming, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments.
According to a second aspect of the embodiments of the present disclosure, a video recognition device is provided, including:
a first acquisition module, configured to obtain a target video;
a first division module, configured to divide the target video according to a first preset frame interval to obtain multiple video segments;
a second division module, configured to extract a first frame image from each of the video segments according to a second preset frame interval;
a detection module, configured to detect whether the first frame images contain face information, and to extract the first frame images that contain face information to obtain second face frame images;
a recognition module, configured to recognize, based on a preset recognition model, the face identities in the second face frame images and determine the face identifiers contained in the second face frame images;
a matching module, configured to form, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments.
The method and device provided by the embodiments of the present disclosure can have the following beneficial effects. A target video is obtained; the target video is divided according to a first preset frame interval to obtain multiple video segments; a first frame image is extracted from each video segment according to a second preset frame interval; whether the first frame images contain face information is detected, and the first frame images that contain face information are extracted to obtain second face frame images; based on a preset recognition model, the face identities in the second face frame images are recognized to determine the face identifiers contained in them; and, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which they belong, a three-way correspondence table of face identifiers, second face frame images, and video segments is formed. Based on the determined correspondence between video segments and face identifiers, only the video segments in which the actor the user wishes to watch appears can be pushed to the user. The method performs face recognition and face-based video segment positioning effectively and quickly, improving the user experience of watching videos.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of a video recognition method according to an exemplary embodiment;
Fig. 2 is a flow chart of a video recognition method according to another exemplary embodiment;
Fig. 3 is a schematic diagram of a video division scheme of the embodiment shown in Fig. 2;
Fig. 4 is a block diagram of a video recognition device according to an exemplary embodiment;
Fig. 5 is a block diagram of a video recognition device according to another exemplary embodiment.
The above drawings show specific embodiments of the present disclosure, which are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the disclosed concept in any way, but rather to illustrate the concept of the disclosure to those skilled in the art by reference to specific embodiments.
Specific embodiment
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as recited in the appended claims.
Fig. 1 is a flow chart of a video recognition method according to an exemplary embodiment. As shown in Fig. 1, the video recognition method of this embodiment can be applied to a video server of a video provider or to a terminal (client device) that receives the video; the following description takes application in a video server as an example. The method of this embodiment includes the following steps.
In step 101, a target video is obtained.
Specifically, a video is a series of still images played in sequence. As a rule, when the images change continuously at more than 24 frames per second, the human eye, owing to the persistence of vision, can no longer distinguish the individual still pictures and perceives a smooth, continuous visual effect; such a sequence of continuous pictures is called a video. By recognizing face images in the successive frame images that make up the target video, the actors appearing in the target video can be identified.
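Purely as an illustration of treating a video as a sequence of frame images (this sketch is not part of the original disclosure; it assumes the OpenCV library and an ordinary video file), reading the frames of a target video might look as follows:

```python
import cv2  # OpenCV, assumed available for this sketch


def load_frames(video_path):
    """Yield the frame images of a target video one by one."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()  # returns False once the video is exhausted
        if not ok:
            break
        yield frame
    cap.release()
```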
In step 102, the target video is divided according to a first preset frame interval to obtain multiple video segments.
Specifically, as stated above, a video stream can be composed of individual still pictures mainly because the human eye has limited ability to distinguish rapidly changing single still pictures, so a video made up of still pictures appears to the eye as a smooth, continuous visual effect. The target video can therefore be divided into individual video segments at a certain interval of still pictures. From the perspective of the viewing experience, the first preset frame interval may be set in units of minutes, for example 0.5 minutes or 1 minute, so that when the video segments in which the user's favourite actor appears are extracted from the target video, each segment plays back fluidly, without frame-by-frame jumpiness or abrupt transitions.
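A minimal sketch of this division step (not part of the original disclosure; function and parameter names are illustrative) could compute segment boundaries from the frame rate and a first preset frame interval expressed in minutes:

```python
def split_into_segments(total_frames, fps, interval_minutes=1.0):
    """Divide a video into (start_frame, end_frame) segments.

    `interval_minutes` plays the role of the first preset frame interval;
    the last segment may be shorter than the others.
    """
    frames_per_segment = int(fps * 60 * interval_minutes)
    segments = []
    start = 0
    while start < total_frames:
        end = min(start + frames_per_segment, total_frames)
        segments.append((start, end))
        start = end
    return segments
```

For a 25 fps video and a one-minute interval, each segment covers 1500 frames.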
In step 103, a first frame image is extracted from each video segment according to a second preset frame interval.
Specifically, even after the complete video has been cut into segments, the number of frame images contained in each video segment is still very large; as noted above, one second of video can contain dozens of still images. If face recognition were performed on every frame image in every video segment, the amount of computation would be enormous and the recognition speed low. Therefore, only certain frame images are extracted from each video segment, and these frames are scanned to obtain the facial feature information they contain. How densely these frames are extracted can be chosen according to the processing capability of the processor: the higher the processing capability, the smaller the second preset frame interval can be. Since a first frame image may or may not contain face information, a small second preset frame interval increases the probability of extracting frame images that contain face information from a video segment. Preferably, the first preset frame interval is greater than the second preset frame interval.
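As a sketch (again illustrative only, building on the segment boundaries above), sampling the first frame images from one segment at a second preset frame interval is a simple stride over the segment's frame indices:

```python
def sample_candidate_frames(segment, second_preset_interval=5):
    """Return the frame indices to inspect inside a (start, end) segment.

    The stride stands in for the second preset frame interval and should be
    much smaller than the segment length (the first preset interval).
    """
    start, end = segment
    return list(range(start, end, second_preset_interval))
```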
In step 104, whether the first frame images contain face information is detected, and the first frame images that contain face information are extracted to obtain second face frame images.
Specifically, detecting whether a first frame image contains face information means searching the image with a certain strategy to determine whether it contains face information; the face information may be a single face or multiple faces. The positions where face information appears are marked in the frame image so as to establish the coordinates of each face within it. The first frame images are then screened: those containing face information are extracted to obtain the second face frame images.
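One way to realise this detection step, sketched here with OpenCV's stock Haar cascade detector (an AdaBoost-based detector; the cascade file is the one shipped with OpenCV and is only an assumption of this sketch, not the detector mandated by the disclosure):

```python
import cv2

# Stock frontal-face cascade bundled with the opencv-python package.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(frame):
    """Return (x, y, w, h) bounding boxes for faces found in one frame image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```

A first frame image for which `detect_faces` returns at least one box would be kept as a second face frame image, together with the returned coordinates.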
In step 105, based on a preset recognition model, the face identities in the second face frame images are recognized to determine the face identifiers contained in the second face frame images.
Specifically, various algorithms exist in the prior art for recognizing the face identities in an image, and different algorithms yield different recognition models. For example, a large number of face pictures can be collected as sample data and an artificial neural network trained on them to obtain a neural network model with learning ability; the trained artificial neural network model is then used to recognize the face images to be identified and produce recognition results. This trained artificial neural network model is the preset recognition model. After all second face frame images are preprocessed, they are fed into the preset recognition model as input data, and the recognition result for the face image appearing in each second face frame image is obtained, that is, the face identifier contained in that second face frame image. The face identifier may be the name of an actor in the video.
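A sketch of the recognition step, assuming a trained classifier with a scikit-learn-style `predict` interface and an `embed` function that turns a face crop into a feature vector (both are assumptions of this sketch, not a specific model prescribed by the disclosure):

```python
def identify_faces(second_face_frames, model, embed):
    """Map each second face frame image to the face identifiers found in it.

    `second_face_frames` maps a frame id to the list of face crops detected
    in that frame; `model.predict` returns a face identifier per crop.
    """
    frame_to_identifiers = {}
    for frame_id, crops in second_face_frames.items():
        names = {model.predict([embed(crop)])[0] for crop in crops}
        frame_to_identifiers[frame_id] = names
    return frame_to_identifiers
```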
In step 106, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of face identifiers, second face frame images, and video segments is formed.
Specifically, with the three-way correspondence table, a given face identifier can be located quickly to retrieve the video segments that contain it, so that those segments can be extracted and played back continuously, achieving the goal of letting the user watch only the video segments in which his or her favourite actor appears.
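A minimal in-memory form of the three-way correspondence table (illustrative only; the disclosure does not prescribe a data structure) is a mapping from face identifier to the face frame images and video segments in which it appears:

```python
def build_correspondence_table(frame_to_identifiers, frame_to_segment):
    """face identifier -> list of (second face frame image id, video segment)."""
    table = {}
    for frame_id, identifiers in frame_to_identifiers.items():
        segment = frame_to_segment[frame_id]
        for identifier in identifiers:
            table.setdefault(identifier, []).append((frame_id, segment))
    return table
```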
In this embodiment, a target video is obtained; the target video is divided according to a first preset frame interval to obtain multiple video segments; a first frame image is extracted from each video segment according to a second preset frame interval; whether the first frame images contain face information is detected, and the first frame images that contain face information are extracted to obtain second face frame images; based on a preset recognition model, the face identities in the second face frame images are recognized to determine the face identifiers contained in them; and, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which they belong, a three-way correspondence table of face identifiers, second face frame images, and video segments is formed. Based on the determined correspondence between video segments and face identifiers, only the video segments in which the actor the user wishes to watch appears can be pushed to the user. The method performs face recognition and face-based video segment positioning effectively and quickly, improving the user experience of watching videos.
Fig. 2 is a flow chart of a video recognition method according to another exemplary embodiment. As shown in Fig. 2, the video recognition method of this embodiment can be applied to a video server of a video provider or to a terminal (client device) that receives the video; the following description takes application in a video server as an example. The method of this embodiment includes the following steps.
In step 201, a target video is obtained.
In step 202, the target video is divided according to a first preset frame interval to obtain multiple video segments.
In step 203, a first frame image is extracted from each video segment according to a second preset frame interval.
Here, the first preset frame interval is greater than the second preset frame interval. Preferably, the second preset frame interval is 5 still frames.
In step 204, whether the first frame images contain face information is detected, and the first frame images that contain face information are extracted to obtain second face frame images.
In step 205, a target face identifier corresponding to the target video is obtained from the description information of the target video.
Specifically, the description information refers to the program description of the target video, which usually contains a cast list of the featured actors. The actors' names in the cast list can serve as face identifiers, and the target video is recognized against those face identifiers to determine which video segments contain them. A single frame image may contain multiple face identifiers, and calibrating every face identifier in an image is far less efficient than calibrating only a specified face. Obtaining the target face identifiers in advance therefore speeds up locating the target faces in the target video.
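For illustration only (the disclosure does not specify the format of the description information), extracting candidate target face identifiers from a cast line might look like this, assuming the description carries a line such as "Cast: name1, name2, ...":

```python
def target_face_ids_from_description(description):
    """Pull candidate face identifiers (actor names) out of a cast line."""
    for line in description.splitlines():
        if line.lower().startswith("cast:"):
            return [name.strip() for name in line.split(":", 1)[1].split(",")]
    return []
```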
In step 206, a first preset recognition model corresponding to the target face identifier is retrieved from a recognition model database according to the target face identifier.
In step 207, based on the first preset recognition model, the face identities in the second face frame images are recognized, and third face frame images are determined among the second face frame images; a third face frame image is a second face frame image that contains the target face identifier.
Specifically, the first preset recognition model is a targeted recognition model that can recognize a given target face more specifically. For example, a recognition model trained on 100,000 photos of 100 stars can quickly recognize other photos of those 100 stars; a corresponding recognition model can also be trained on photos of 10 stars, of a single star, or on other quantities of photos. As a rule, under the same training conditions, the narrower the scope a recognition model covers, the higher its face recognition accuracy. Therefore, by retrieving from the recognition model database the first preset recognition model corresponding to the target face identifier and recognizing the face identities in the second face frame images with this specific model, the third face frame images containing the target face identifier can be determined among the second face frame images, improving the recognition accuracy for the target face.
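Sketched below, with the recognition model database stood in for by a plain dictionary and the same hypothetical `predict`/`embed` interface as above (all assumptions of this sketch): retrieving the first preset recognition model for a target face identifier and keeping only the frames it attributes to that face.

```python
def select_model(model_registry, target_face_id):
    """Fetch the first preset recognition model trained for one identifier."""
    return model_registry[target_face_id]


def third_face_frames(second_face_frames, model, embed, target_face_id):
    """Keep the frame ids the targeted model attributes to the target face."""
    return [
        frame_id
        for frame_id, crops in second_face_frames.items()
        if any(model.predict([embed(c)])[0] == target_face_id for c in crops)
    ]
```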
In step 208, according to the correspondence between the target face identifier and the third face frame images and the correspondence between the third face frame images and the video segments to which the third face frame images belong, a three-way correspondence table of the target face identifier, the third face frame images, and the video segments is formed.
Optionally, before the first preset recognition model corresponding to the target face identifier is retrieved from the recognition model database according to the target face identifier in step 206, the method may further include:
retrieving, according to the target face identifier, a target face picture data package corresponding to the target face identifier from a picture database;
training, with the target face picture data package as training samples, the first preset recognition model corresponding to the target face identifier.
Specifically, the target face picture data package contains a predetermined number of face images corresponding to the target face identifier, i.e., the training samples. As a rule, the more training samples there are, the higher the recognition accuracy of the trained recognition model, but the exact number of samples needed also depends on the algorithm used. The training algorithm may be a deep convolutional neural network.
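As a rough sketch of training such a per-identifier model (illustrative only; the disclosure names a deep convolutional neural network but not an architecture or framework, and TensorFlow/Keras is assumed here), a small binary classifier that decides whether a crop shows the target face could be trained like this:

```python
import tensorflow as tf  # assumed available; any CNN framework would do


def train_target_face_model(face_images, labels, input_shape=(64, 64, 3)):
    """Train a small CNN on the target face picture data package.

    `face_images` is an array of face crops (target face plus negatives) and
    `labels` is 1 for the target face, 0 otherwise. The layer sizes are only
    placeholders for the deep convolutional network mentioned above.
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(face_images, labels, epochs=5, batch_size=32)
    return model
```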
Optionally, after the three-way correspondence table of face identifiers, second face frame images, and video segments is formed, the method may further include:
receiving a video push request sent by a terminal, the video push request containing a face identifier to be pushed;
searching the three-way correspondence table according to the face identifier to be pushed, and pushing the video segments corresponding to the face identifier to be pushed to the terminal.
Specifically, the user can install an application suited to this video recognition method on a terminal (a mobile phone, a tablet, etc.) and enter the name of the actor he or she wishes to watch. From the three-way correspondence table obtained by analysing the target video in advance in the cloud, the video segments corresponding to that actor's name are determined and pushed to the terminal, so that the user watches only the segments of the target video in which the favourite actor appears, improving the viewing experience.
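Serving such a push request against the three-way correspondence table reduces to a lookup; a minimal sketch (data structure and names are assumptions carried over from the sketches above):

```python
def segments_for_face(table, face_id_to_push):
    """Return the video segments to push for one requested face identifier."""
    entries = table.get(face_id_to_push, [])
    return sorted({segment for _frame_id, segment in entries})

# For example, segments_for_face(table, "Yang Ying") would return the
# segments in which that identifier was recognised (A2 and A3 in the
# worked example below).
```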
The following illustrates how the video recognition method locates a specific actor in a video. Referring to Fig. 3, the target video (for example, the variety show "Running Man") is first divided into video segments, shown as A1 to A6 in Fig. 3, i.e., six video segments. The starring-actor information contained in the target video (for example "Yang Ying", "Deng Chao", "Zheng Kai") is obtained from the description information of the target video (such as the cast introduction or film synopsis). Face picture data packages corresponding to the actor identifiers are retrieved from the database, for example a large number of pictures of "Yang Ying"; these pictures are used as training samples to train a recognition model capable of judging whether a target face is "Yang Ying".
Because each video segment is composed of individual frame images, and a single second of video already contains dozens of frames (more for high-definition video), performing face detection on every frame of the target video would waste resources and be inefficient. At the same time, for a user who wants to watch a favourite actor's scenes, segment lengths on the order of minutes are more reasonable than second-level jumps, which would hurt the viewing experience; segment lengths of half a minute or one minute are therefore preferable. Likewise, the first frame images extracted from the video segments for face detection need not be examined frame by frame; they can be extracted and detected at the second preset frame interval with a preset step. As shown at B in Fig. 3, a certain number of frame images are extracted from each video segment as the first frame images B to be detected.
Face detection is then performed on the extracted first frame images B; the detection algorithm may be the AdaBoost iterative algorithm, which effectively improves both the efficiency and the accuracy of face image detection. As shown at C in Fig. 3, the second face frame images C detected to contain face images are extracted from the first frame images B for face recognition. The second face frame images C are fed separately into the previously trained recognition models for "Yang Ying", "Deng Chao", and "Zheng Kai", yielding the third face frame images D1, D2, D3, D4 shown in Fig. 3, where D1 contains "Yang Ying", D2 contains "Deng Chao", D3 contains "Yang Ying" and "Deng Chao", and D4 contains "Deng Chao" and "Zheng Kai". As shown in Table 1, the correspondence between the third face frame images and the video segments is determined, and a three-way correspondence table of target face identifiers, third face frame images, and video segments is formed.
Table 1. Three-way correspondence table of target face identifiers, third face frame images, and video segments
If a push request is received in which the user chooses to watch the video segments in which "Yang Ying" appears, the video segments A2 and A3 can be played continuously for the user, so that the user quickly reaches the video of the desired actor.
In summary, this embodiment divides the video into segments, builds a specific-face recognition model for a specific face, and recognizes that specific face in each divided video segment based on the specific-face recognition model, which effectively improves recognition efficiency and allows only the video segments in which the actor the user wishes to watch appears to be pushed quickly. The method performs face recognition and face-based video segment positioning effectively and quickly, improving the user experience of watching videos.
The following are device embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments, refer to the method embodiments of the present disclosure.
Fig. 4 is a block diagram of a video recognition device according to an exemplary embodiment. As shown in Fig. 4, the video recognition device can be implemented as part or all of an electronic device by software, hardware, or a combination of both. The video recognition device may include:
a first acquisition module 41, configured to obtain a target video; a first division module 42, configured to divide the target video according to a first preset frame interval to obtain multiple video segments; a second division module 43, configured to extract a first frame image from each video segment according to a second preset frame interval; a detection module 44, configured to detect whether the first frame images contain face information and to extract the first frame images that contain face information to obtain second face frame images; a recognition module 45, configured to recognize, based on a preset recognition model, the face identities in the second face frame images and determine the face identifiers contained in them; and a matching module 46, configured to form, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which they belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments.
In this embodiment, a target video is obtained; the target video is divided according to a first preset frame interval to obtain multiple video segments; a first frame image is extracted from each video segment according to a second preset frame interval; whether the first frame images contain face information is detected, and the first frame images that contain face information are extracted to obtain second face frame images; based on a preset recognition model, the face identities in the second face frame images are recognized to determine the face identifiers contained in them; and, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which they belong, a three-way correspondence table of face identifiers, second face frame images, and video segments is formed. Based on the determined correspondence between video segments and face identifiers, only the video segments in which the actor the user wishes to watch appears can be pushed to the user, realising face recognition and face-based video segment positioning effectively and quickly and improving the user experience of watching videos.
Fig. 5 is a block diagram of a video recognition device according to another exemplary embodiment; the video recognition device can be implemented as part or all of an electronic device by software, hardware, or a combination of both. Building on the above device embodiment, the first preset frame interval is greater than the second preset frame interval.
Optionally, the video recognition device further includes:
a second acquisition module 47, configured to obtain a target face identifier corresponding to the target video from the description information of the target video.
Correspondingly, the recognition module 45 includes:
a retrieval submodule 451, configured to retrieve, from a recognition model database according to the target face identifier, a first preset recognition model corresponding to the target face identifier;
a recognition submodule 452, configured to recognize, based on the first preset recognition model, the face identities in the second face frame images;
a determination submodule 453, configured to determine third face frame images among the second face frame images, a third face frame image being a second face frame image that contains the target face identifier.
Correspondingly, the matching module 46 is specifically configured to form, according to the correspondence between the target face identifier and the third face frame images and the correspondence between the third face frame images and the video segments to which the third face frame images belong, a three-way correspondence table of the target face identifier, the third face frame images, and the video segments.
Optionally, the video recognition device further includes:
a picture acquisition module 48, configured to retrieve, according to the target face identifier, a target face picture data package corresponding to the target face identifier from a picture database;
a training module 49, configured to train, with the target face picture data package as training samples, the first preset recognition model corresponding to the target face identifier.
Optionally, the video recognition device further includes:
a receiving module 50, configured to receive a video push request sent by a terminal, the video push request containing a face identifier to be pushed;
a searching module 51, configured to search the three-way correspondence table according to the face identifier to be pushed and push the video segments corresponding to the face identifier to be pushed to the terminal.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the related method embodiments and will not be elaborated here.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practising the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise constructions described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video recognition method, characterised in that the method comprises:
obtaining a target video;
dividing the target video according to a first preset frame interval to obtain multiple video segments;
extracting a first frame image from each of the video segments according to a second preset frame interval;
detecting whether the first frame images contain face information, and extracting the first frame images that contain face information to obtain second face frame images;
recognizing, based on a preset recognition model, the face identities in the second face frame images to determine the face identifiers contained in the second face frame images;
forming, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments.
2. The method according to claim 1, characterised in that the first preset frame interval is greater than the second preset frame interval.
3. The method according to claim 1, characterised in that, before the recognizing, based on a preset recognition model, the face identities in the second face frame images to determine the face identifiers contained in the second face frame images, the method further comprises:
obtaining a target face identifier corresponding to the target video from description information of the target video;
correspondingly, the recognizing, based on a preset recognition model, the face identities in the second face frame images to determine the face identifiers contained in the second face frame images comprises:
retrieving, from a recognition model database according to the target face identifier, a first preset recognition model corresponding to the target face identifier, recognizing, based on the first preset recognition model, the face identities in the second face frame images, and determining third face frame images among the second face frame images, a third face frame image being a second face frame image that contains the target face identifier;
correspondingly, the forming, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments comprises:
forming, according to the correspondence between the target face identifier and the third face frame images and the correspondence between the third face frame images and the video segments to which the third face frame images belong, a three-way correspondence table of the target face identifier, the third face frame images, and the video segments.
4. The method according to claim 3, characterised in that, before the retrieving, from a recognition model database according to the target face identifier, a first preset recognition model corresponding to the target face identifier, the method further comprises:
retrieving, according to the target face identifier, a target face picture data package corresponding to the target face identifier from a picture database;
training, with the target face picture data package as training samples, the first preset recognition model corresponding to the target face identifier.
5. The method according to any one of claims 1 to 4, characterised in that, after the forming of the three-way correspondence table of the face identifiers, the second face frame images, and the video segments, the method further comprises:
receiving a video push request sent by a terminal, the video push request containing a face identifier to be pushed;
searching the three-way correspondence table according to the face identifier to be pushed, and pushing the video segments corresponding to the face identifier to be pushed to the terminal.
6. A video recognition device, characterised in that the device comprises:
a first acquisition module, configured to obtain a target video;
a first division module, configured to divide the target video according to a first preset frame interval to obtain multiple video segments;
a second division module, configured to extract a first frame image from each of the video segments according to a second preset frame interval;
a detection module, configured to detect whether the first frame images contain face information and to extract the first frame images that contain face information to obtain second face frame images;
a recognition module, configured to recognize, based on a preset recognition model, the face identities in the second face frame images and determine the face identifiers contained in the second face frame images;
a matching module, configured to form, according to the correspondence between the face identifiers and the second face frame images and the correspondence between the second face frame images and the video segments to which the second face frame images belong, a three-way correspondence table of the face identifiers, the second face frame images, and the video segments.
7. The device according to claim 6, characterised in that the first preset frame interval is greater than the second preset frame interval.
8. The device according to claim 6, characterised in that the device further comprises:
a second acquisition module, configured to obtain a target face identifier corresponding to the target video from description information of the target video;
correspondingly, the recognition module comprises:
a retrieval submodule, configured to retrieve, from a recognition model database according to the target face identifier, a first preset recognition model corresponding to the target face identifier;
a recognition submodule, configured to recognize, based on the first preset recognition model, the face identities in the second face frame images;
a determination submodule, configured to determine third face frame images among the second face frame images, a third face frame image being a second face frame image that contains the target face identifier;
correspondingly, the matching module is specifically configured to form, according to the correspondence between the target face identifier and the third face frame images and the correspondence between the third face frame images and the video segments to which the third face frame images belong, a three-way correspondence table of the target face identifier, the third face frame images, and the video segments.
9. The device according to claim 8, characterised in that the device further comprises:
a picture acquisition module, configured to retrieve, according to the target face identifier, a target face picture data package corresponding to the target face identifier from a picture database;
a training module, configured to train, with the target face picture data package as training samples, the first preset recognition model corresponding to the target face identifier.
10. The device according to any one of claims 6 to 9, characterised in that the device further comprises:
a receiving module, configured to receive a video push request sent by a terminal, the video push request containing a face identifier to be pushed;
a searching module, configured to search the three-way correspondence table according to the face identifier to be pushed and push the video segments corresponding to the face identifier to be pushed to the terminal.
CN201510925602.2A 2015-12-14 2015-12-14 Video recognition method and device Pending CN106874827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510925602.2A CN106874827A (en) 2015-12-14 2015-12-14 Video recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510925602.2A CN106874827A (en) 2015-12-14 2015-12-14 Video recognition method and device

Publications (1)

Publication Number Publication Date
CN106874827A true CN106874827A (en) 2017-06-20

Family

ID=59178785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510925602.2A Pending CN106874827A (en) Video recognition method and device

Country Status (1)

Country Link
CN (1) CN106874827A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316462A (en) * 2017-08-30 2017-11-03 济南浪潮高新科技投资发展有限公司 A kind of flow statistical method and device
CN108111603A (en) * 2017-12-21 2018-06-01 广东欧珀移动通信有限公司 Information recommendation method, device, terminal device and storage medium
CN108446390A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN109034100A (en) * 2018-08-13 2018-12-18 成都盯盯科技有限公司 Face pattern detection method, device, equipment and storage medium
CN109241345A (en) * 2018-10-10 2019-01-18 百度在线网络技术(北京)有限公司 Video locating method and device based on recognition of face
CN109635158A (en) * 2018-12-17 2019-04-16 杭州柚子街信息科技有限公司 For the method and device of video automatic labeling, medium and electronic equipment
CN109815805A (en) * 2018-12-18 2019-05-28 深圳壹账通智能科技有限公司 Automatic identification drowned method, apparatus, storage medium and electronic equipment
CN110942027A (en) * 2019-11-26 2020-03-31 浙江大华技术股份有限公司 Method and device for determining occlusion strategy, storage medium and electronic device
CN111414517A (en) * 2020-03-26 2020-07-14 成都市喜爱科技有限公司 Video face analysis method and device and server
CN112507824A (en) * 2020-11-27 2021-03-16 长威信息科技发展股份有限公司 Method and system for identifying video image features

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169065A1 (en) * 2007-12-28 2009-07-02 Tao Wang Detecting and indexing characters of videos by NCuts and page ranking
CN102087704A (en) * 2009-12-08 2011-06-08 索尼公司 Information processing apparatus, information processing method, and program
CN102110399A (en) * 2011-02-28 2011-06-29 北京中星微电子有限公司 Method, device and system for assisting explication
CN102799637A (en) * 2012-06-27 2012-11-28 北京邮电大学 Method for automatically generating main character abstract in television program
CN103049459A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Feature recognition based quick video retrieval method
CN103488764A (en) * 2013-09-26 2014-01-01 天脉聚源(北京)传媒科技有限公司 Personalized video content recommendation method and system
CN103702117A (en) * 2012-09-27 2014-04-02 索尼公司 Image processing apparatus, image processing method, and program
KR101382948B1 (en) * 2012-11-22 2014-04-09 한국과학기술원 An accuracy improving method for automatic recognition of characters in a video by utilizing casting information
CN104298748A (en) * 2014-10-13 2015-01-21 中南民族大学 Device and method for face search in videos
CN104636413A (en) * 2013-11-07 2015-05-20 三星泰科威株式会社 Video search system and method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169065A1 (en) * 2007-12-28 2009-07-02 Tao Wang Detecting and indexing characters of videos by NCuts and page ranking
CN102087704A (en) * 2009-12-08 2011-06-08 索尼公司 Information processing apparatus, information processing method, and program
CN102110399A (en) * 2011-02-28 2011-06-29 北京中星微电子有限公司 Method, device and system for assisting explication
CN103049459A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Feature recognition based quick video retrieval method
CN102799637A (en) * 2012-06-27 2012-11-28 北京邮电大学 Method for automatically generating main character abstract in television program
CN103702117A (en) * 2012-09-27 2014-04-02 索尼公司 Image processing apparatus, image processing method, and program
KR101382948B1 (en) * 2012-11-22 2014-04-09 한국과학기술원 An accuracy improving method for automatic recognition of characters in a video by utilizing casting information
CN103488764A (en) * 2013-09-26 2014-01-01 天脉聚源(北京)传媒科技有限公司 Personalized video content recommendation method and system
CN104636413A (en) * 2013-11-07 2015-05-20 三星泰科威株式会社 Video search system and method
CN104298748A (en) * 2014-10-13 2015-01-21 中南民族大学 Device and method for face search in videos

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liang Bin, Duan Fu: "Video face retrieval method based on singular value decomposition and improved PCA", Computer Engineering and Applications *
Gao Guangyu: "Research on structure parsing and automatic cataloguing technology for film and television video", China Doctoral Dissertations Full-text Database (Electronic Journal), Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316462A (en) * 2017-08-30 2017-11-03 济南浪潮高新科技投资发展有限公司 A kind of flow statistical method and device
CN108111603A (en) * 2017-12-21 2018-06-01 广东欧珀移动通信有限公司 Information recommendation method, device, terminal device and storage medium
CN108446390A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN108446390B (en) * 2018-03-22 2022-01-04 百度在线网络技术(北京)有限公司 Method and device for pushing information
CN109034100A (en) * 2018-08-13 2018-12-18 成都盯盯科技有限公司 Face pattern detection method, device, equipment and storage medium
CN109241345A (en) * 2018-10-10 2019-01-18 百度在线网络技术(北京)有限公司 Video locating method and device based on recognition of face
CN109635158A (en) * 2018-12-17 2019-04-16 杭州柚子街信息科技有限公司 For the method and device of video automatic labeling, medium and electronic equipment
CN109815805A (en) * 2018-12-18 2019-05-28 深圳壹账通智能科技有限公司 Automatic identification drowned method, apparatus, storage medium and electronic equipment
CN110942027A (en) * 2019-11-26 2020-03-31 浙江大华技术股份有限公司 Method and device for determining occlusion strategy, storage medium and electronic device
CN111414517A (en) * 2020-03-26 2020-07-14 成都市喜爱科技有限公司 Video face analysis method and device and server
CN111414517B (en) * 2020-03-26 2023-05-19 成都市喜爱科技有限公司 Video face analysis method, device and server
CN112507824A (en) * 2020-11-27 2021-03-16 长威信息科技发展股份有限公司 Method and system for identifying video image features

Similar Documents

Publication Publication Date Title
CN106874827A (en) Video recognition method and device
CN108322788B (en) Advertisement display method and device in live video
US9323785B2 (en) Method and system for mobile visual search using metadata and segmentation
US9881084B1 (en) Image match based video search
US11961271B2 (en) Multi-angle object recognition
CN102831176B (en) The method of commending friends and server
CN111491187B (en) Video recommendation method, device, equipment and storage medium
CN104991906B (en) Information acquisition method, server, terminal, database construction method and device
CN103686344A (en) Enhanced video system and method
CN103428537B (en) A kind of method for processing video frequency and device
Paul et al. Spatial and motion saliency prediction method using eye tracker data for video summarization
CN113766299B (en) Video data playing method, device, equipment and medium
CN105069005A (en) Data searching method and data searching device
CN107801061A (en) Ad data matching process, apparatus and system
CN112380929A (en) Highlight segment obtaining method and device, electronic equipment and storage medium
CN111954087B (en) Method and device for intercepting images in video, storage medium and electronic equipment
US9036921B2 (en) Face and expression aligned movies
CN110516153B (en) Intelligent video pushing method and device, storage medium and electronic device
CN112287790A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114296627B (en) Content display method, device, equipment and storage medium
CN110866168A (en) Information recommendation method and device, terminal and server
CN110942056A (en) Clothing key point positioning method and device, electronic equipment and medium
Xu et al. Touch saliency
Li et al. An empirical evaluation of labelling method in augmented reality
JP6934001B2 (en) Image processing equipment, image processing methods, programs and recording media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170620)