CN108984648A - Main feature extraction method for digital animation, and animated video retrieval and tamper detection method - Google Patents
- Publication number: CN108984648A
- Application number: CN201810673083.9A
- Authority: CN (China)
- Prior art keywords: video, point, frame, feature, formula
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
In the main feature extraction method for digital animation of the present invention, a color histogram is extracted from the color model of an animated video frame: the three-dimensional histogram of hue H, saturation S, and value (brightness) V is combined into a single one-dimensional feature vector G, and the histogram of G is computed from the color histogram with 72 bins in total, where hue H receives the largest weight of 9, saturation S the next largest of 3, and value V the smallest, set to 1. The invention combines the HSV model with the visual characteristics of the human eye, quantizes the three HSV components separately, builds a color histogram, converts the three component histograms into a single one-dimensional histogram of 72 bins, and judges the similarity of each component by computing a Euclidean distance, which gives an intuitive and clear result.
Description
Technical field
The present invention relates to the copyright protection of digital animation, and in particular to a main feature extraction method for digital animation and a retrieval and tamper detection method for animated video.
Background art
Compared with text and image information, video is a combination of pictures along the time axis and therefore has both spatial and temporal attributes, which makes describing and partitioning video the most complex task. The large amount of content contained in a video can be divided into two layers: low-level visual information and high-level semantic information. The former mainly refers to image features such as color features, texture features, and motion features; the latter refers to semantic features, including the category of the video and the emotion it expresses.
The volume of video data is large and its content structure is complex, which makes video management troublesome. A large number of readily available video processing tools make it convenient for criminals to tamper with video and to spread illegal and harmful content such as violence and pornography, so that network video is hard to manage and video copyright is threatened, causing various social problems that urgently need to be solved. Traditional keyword search, however, cannot retrieve and classify video correctly, and illegal video is currently handled mainly by manual inspection and reports from the public, which is time-consuming and inefficient.
Retrieving video based on text information requires analyzing the content of the video and then attaching keyword labels according to the result, such as title, size, compression type, video type, author, and time; videos are then searched in the video database according to the keywords entered, or located from the corresponding list. However, as video data grows and the amount of stored data keeps increasing, manual annotation of massive amounts of information is too inefficient to keep up with newly added video.
Summary of the invention
In order to solve the above technical problems, the present invention provides a main feature extraction method for digital animation and a retrieval and tamper detection method for animated video.
The technical scheme adopted by the invention is as follows. In a main feature extraction method for digital animation, a color histogram is extracted from the color model of an animated video frame, and the three-dimensional histogram of hue H, saturation S, and value (brightness) V is combined into a single one-dimensional feature vector G, as shown in formula (1):
G = H·Qs·Qv + S·Qv + V (1)
In the formula, Qs is the number of quantization levels of S and Qv is the number of quantization levels of V. Letting Qs = 3 and Qv = 3, the above formula can be written as formula (2):
G = 9H + 3S + V (2)
The histogram of G is then computed from the color histogram, with 72 bins in total, where hue H receives the largest weight of 9, saturation S the next largest of 3, and value V the smallest, set to 1.
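The quantization above can be pictured with a short sketch. The following Python code is a minimal, non-authoritative reading of formulas (1) and (2): it assumes hue is quantized to 8 levels (implied by the 72-bin total, since 8 × 3 × 3 = 72) and saturation and value to 3 levels each; the function name, the use of OpenCV's HSV conversion, and the bin boundaries are illustrative assumptions, not details specified in the patent.

```python
import cv2
import numpy as np

def hsv_72bin_histogram(frame_bgr):
    """Sketch of the 72-bin color feature G = 9*H + 3*S + V (formulas (1)-(2)).

    Assumes H is quantized to 8 levels and S, V to 3 levels each, so that
    G ranges over 0..71. Bin edges are illustrative, not taken from the patent.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # OpenCV stores H in [0, 179] and S, V in [0, 255].
    h_q = np.minimum(h // 23, 7)        # 8 hue levels
    s_q = np.minimum(s // 86, 2)        # 3 saturation levels
    v_q = np.minimum(v // 86, 2)        # 3 value levels

    g = 9 * h_q.astype(np.int32) + 3 * s_q + v_q   # formula (2)
    hist = np.bincount(g.ravel(), minlength=72).astype(np.float64)
    return hist / hist.sum()            # normalized 72-bin histogram
```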
Preferably, the similarity calculation for two pictures includes the following steps. The hue H component similarity of the two pictures is computed as shown in formula (3). In that formula, h_k(i) denotes the H component histogram of the k-th frame of the video and h_{k+1}(i) that of the (k+1)-th frame; for each bin the smaller of the two values is accumulated and the result is divided by the corresponding accumulation of the larger values, yielding the hue H similarity Sh(k, k+1) between the two frames. The saturation similarity Ss(k, k+1) and value similarity Sv(k, k+1) can be computed in the same way. Combining them, the similarity between the k-th frame and the (k+1)-th frame is given by formula (4).
According to the characteristics of the human eye, the hue H, saturation S, and value V components are weighted with m_h : m_s : m_v = Qs·Qv : Qv : 1 = 9 : 3 : 1, and the three weight coefficients are set to 0.9, 0.3, and 0.1. The smaller the result of the calculation, the higher the similarity, i.e. the smaller the inter-frame distance; the value computed by formula (4) is the inter-frame distance.
Preferably, formula (4) is applied to the judgement of key frames, which includes the following steps:
a. The video is first segmented into individual shots, with the shot as the basic unit; this is called shot detection. Shot transitions generally fall into two kinds: cuts and gradual transitions.
b. A threshold T1 is set. When comparing frame k with frame k+1, if the inter-frame distance is greater than T1, a shot cut is declared; otherwise frame k+1 is compared with the previous key frame, i.e. with the representative frame of the previous shot, and if the distance between the two is greater than T1, the two frames belong to different shots and frame k+1 is judged to belong to the next shot.
Preferably, feature point judgement and feature point description of the animated video frames are based on the Oriented FAST and Rotated BRIEF (ORB) algorithm and include the following steps:
c. Feature point judgement: to judge whether a point is a feature point, first take the point as the center of a circle and choose 4 or more points uniformly distributed on the circumference, then compare the gray level of the point with those of the points on the circumference; when the absolute gray difference is greater than a set threshold, the point is judged to be a feature point.
d. Feature point description: around the feature point, draw a circle of a certain radius and extract point pairs inside the circle according to a certain pattern; for each pair, if the gray values of the two points differ, the pair is set to 1, otherwise 0. After all pairs have been processed, the descriptor is generated.
Preferably, in step c the number of points chosen on the circumference is 4, located in the 3 o'clock, 6 o'clock, 9 o'clock, and 12 o'clock directions relative to the center of the circle.
Preferably, feature point extraction in step c includes the following steps:
e. In feature point extraction, a scale pyramid is built: the scale factor and the number of pyramid levels are set, the original image is used to construct the pyramid according to the scale factor and the number of levels, feature points are extracted at every level, and the union of the extracted feature points serves as the feature candidate points of the image, thereby achieving scale invariance.
f. After the feature candidate points have been selected, non-maximum suppression is applied to remove the problem of multiple nearby feature candidate points: a response size is computed for each feature candidate point by comparing it with the feature candidate points within a certain surrounding range and taking the absolute difference; among these feature candidate points, the one with the largest response is kept and the others are deleted.
Preferably, feature point description in step f includes the following steps:
g. Determine the principal direction: for the feature point O, take a surrounding region and compute its centroid C, as shown in formula (5), where m_pq is the geometric moment, as shown in formula (6). The coordinates of the centroid C are computed, the vector OC is taken as the principal direction, a coordinate system is established with this principal direction, and 256 description point pairs are selected in this coordinate system; their comparison results are combined into a 256-dimensional binary vector.
h. Feature point matching: for the resulting 256-bit binary features, compute the Hamming distance, which is the number of positions at which the corresponding characters of two equal-length strings differ.
In a retrieval and tamper detection method for animated video, for video retrieval, the key frames extracted by the digital animation main feature extraction method of the above scheme have their color histograms saved and stored in a database; at retrieval time, key frames are extracted from an input video segment and the database is traversed, counting matching key frames, and the videos whose key frames are similar are output as related videos. For tamper detection, key frames are extracted from both videos, the global features of each picture are described with the color histogram, and the rotation- and scale-invariant ORB features are then extracted; the ORB features of the video key frames are compared to detect whether tampering has occurred and to locate the tampered position.
Preferably, the video retrieval process is as follows:
(1) A video database is built in advance: key frames are extracted from all videos and the color histograms of these frames are saved as features, with each frame represented by 14 floating-point numbers;
(2) Key frames are extracted from the video to be detected, and feature vectors are extracted;
(3) The video fingerprints in the database are accessed and the feature vectors of the video to be detected are traversed and compared with the features of a video in the database; a threshold is set, and when the color histogram gap between two frames is smaller than the threshold, the two pictures are judged to be similar;
(4) The remaining feature vectors are compared; if the number of similar frames of the two videos is k and the numbers of key frames of the two videos are m and n respectively, the similarity of the two videos is k/min(m, n);
(5) The next record in the database is selected, until the whole database has been traversed;
(6) A threshold is set, and the videos whose similarity exceeds the threshold are output in order of similarity.
Compared with the prior art, the invention has the following beneficial effects:
1. The present invention combines the HSV model with the visual characteristics of the human eye, quantizes the three HSV components separately, builds a color histogram, converts the three component histograms into a single one-dimensional histogram of 72 bins, and judges the similarity of each component by computing a Euclidean distance; the effect is intuitive and clear.
2. According to the characteristics of cuts and gradual transitions in animation, and to remedy the shortcomings of dual-threshold key frame extraction, the present invention improves the method by comparing with the previous frame and then with the previous key frame, and uses this to judge whether the shot has changed.
3. The present invention applies the video feature extraction method to the retrieval and tamper detection of animated video. In video retrieval, the color histograms of the extracted key frames are saved and stored in a database; at retrieval time, key frames are extracted from an input video segment, the database is traversed to count matching key frames, and the videos whose key frames are similar are output as related videos. In tamper detection, key frames are extracted from both videos and the rotation- and scale-invariant ORB features are extracted; the ORB features of the video key frames are compared to detect whether tampering has occurred and to locate the tampered position. This makes retrieval more concise and efficient.
Brief description of the drawings
Fig. 1 is a schematic diagram of dual-threshold detection;
Fig. 2 is a schematic diagram of FAST corner detection;
Fig. 3 is a schematic diagram of the descriptor of the present invention;
Fig. 4 is a flow chart of the animated video retrieval method of the present invention;
Fig. 5 is a flow chart of the animated video tamper detection method of the present invention.
Specific embodiment
To make it easier for those of ordinary skill in the art to understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are only intended to illustrate and explain the present invention and are not intended to limit it.
In a main feature extraction method for digital animation, a color histogram is extracted from the color model of an animated video frame, and the three-dimensional histogram of hue H, saturation S, and value (brightness) V is combined into a single one-dimensional feature vector G, as shown in formula (1):
G = H·Qs·Qv + S·Qv + V (1)
In the formula, Qs is the number of quantization levels of S and Qv is the number of quantization levels of V. Letting Qs = 3 and Qv = 3, the above formula can be written as formula (2):
G = 9H + 3S + V (2)
The histogram of G is computed from the color histogram, with 72 bins in total, where hue H receives the largest weight of 9, saturation S the next largest of 3, and value V the smallest, set to 1. Compared with other features, the color feature is better suited to feature extraction for animation, because the HSV color space model agrees better with the visual characteristics of the human eye. Under this color model, the color histogram is extracted. According to the characteristics of the human eye in the HSV model, people can perceive black, white, and color; when the value is greater than 0.8 or the saturation is less than 0.2, color is barely recognizable to the human eye and the picture is essentially seen as a gray-level image; when the value is below 0.15, the picture looks essentially black; only when the value is high enough and the saturation is moderate can color be perceived. The human eye is most sensitive to hue H, so hue H is the most important component, followed by saturation S and then value V.
The similarity calculation for two pictures includes the following steps. The hue H component similarity of the two pictures is computed as shown in formula (3). In that formula, h_k(i) denotes the H component histogram of the k-th frame of the video and h_{k+1}(i) that of the (k+1)-th frame; for each bin the smaller of the two values is accumulated and the result is divided by the corresponding accumulation of the larger values, yielding the hue H similarity Sh(k, k+1) between the two frames. The saturation similarity Ss(k, k+1) and value similarity Sv(k, k+1) can be computed in the same way. Combining them, the similarity between the k-th frame and the (k+1)-th frame is given by formula (4).
According to the characteristics of the human eye, the hue H, saturation S, and value V components are weighted with m_h : m_s : m_v = Qs·Qv : Qv : 1 = 9 : 3 : 1, and the three weight coefficients are set to 0.9, 0.3, and 0.1. The smaller the result of the calculation, the higher the similarity, i.e. the smaller the inter-frame distance; the value computed by formula (4) is the inter-frame distance.
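Since formulas (3) and (4) are not reproduced in the text above, the sketch below encodes only one plausible reading of the description: each per-component similarity accumulates the smaller bin value and divides by the accumulated larger bin value, and the inter-frame distance weights the complements of the three similarities with 0.9, 0.3, and 0.1. The exact form of the patented formulas may differ; the function names and the dictionary layout of the per-component histograms are illustrative.

```python
import numpy as np

def component_similarity(hist_k, hist_k1):
    """One reading of formula (3): accumulate the smaller bin value and
    divide by the accumulated larger bin value."""
    smaller = np.minimum(hist_k, hist_k1).sum()
    larger = np.maximum(hist_k, hist_k1).sum()
    return smaller / larger if larger > 0 else 1.0

def inter_frame_distance(hsv_k, hsv_k1, weights=(0.9, 0.3, 0.1)):
    """One reading of formula (4): weighted dissimilarity of the H, S, V
    component histograms; smaller values mean more similar frames."""
    m_h, m_s, m_v = weights
    s_h = component_similarity(hsv_k["H"], hsv_k1["H"])
    s_s = component_similarity(hsv_k["S"], hsv_k1["S"])
    s_v = component_similarity(hsv_k["V"], hsv_k1["V"])
    return m_h * (1 - s_h) + m_s * (1 - s_s) + m_v * (1 - s_v)
```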
Formula (4) is applied to the judgement of key frames, which includes the following steps:
a. The video is first segmented into individual shots, with the shot as the basic unit; this is called shot detection. Shot transitions generally fall into two kinds: cuts and gradual transitions.
b. A threshold T1 is set. When comparing frame k with frame k+1, if the inter-frame distance is greater than T1, a shot cut is declared; otherwise frame k+1 is compared with the previous key frame, i.e. with the representative frame of the previous shot, and if the distance between the two is greater than T1, the two frames belong to different shots and frame k+1 is judged to belong to the next shot.
As shown in Fig. 1, the dual-threshold method for judging shot cuts works as follows: two thresholds T1 and T2 (T1 > T2) are set; the larger threshold is used to judge whether a shot cut has occurred, and the smaller threshold is used to detect the potential start frame of a gradual transition. If the inter-frame distance is greater than T1, a cut is declared. If the inter-frame distance is greater than T2 but less than T1, a gradual transition is judged to start there, and the subsequent inter-frame distances are accumulated; when the accumulated gap exceeds T1, the gradual transition is considered complete. If the inter-frame distance is less than T2, only a minor change has occurred and it is ignored. This approach often has problems: it cannot reliably detect the start of a gradual transition, and if the transition is very slow the inter-frame distances remain small throughout, so the transition is not detected at all. In addition, the motion of a target within a shot, for example a character whose skirt has different colors on its front and back and who spins while dancing, can keep the inter-frame change above T2 for a period of time, so the accumulated change easily exceeds T1 even though no real shot change has occurred. The shot-change detection method of the present invention remedies this deficiency.
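As a concrete illustration of the improved judgement just described (comparing frame k+1 both with frame k and with the current shot's key frame), the following sketch assumes a distance function like the inter_frame_distance sketched earlier and a caller-supplied threshold T1. It is a reading of the method under those assumptions, not the patented implementation.

```python
def select_key_frames(frame_features, t1, distance):
    """Sketch of the key-frame / shot-change judgement: frame k+1 starts a
    new shot when its distance either to frame k or to the current shot's
    key frame exceeds T1."""
    if not frame_features:
        return []
    key_frames = [0]                       # first frame represents the first shot
    for k in range(len(frame_features) - 1):
        d_adjacent = distance(frame_features[k], frame_features[k + 1])
        d_key = distance(frame_features[key_frames[-1]], frame_features[k + 1])
        if d_adjacent > t1 or d_key > t1:  # cut, or accumulated gradual change
            key_frames.append(k + 1)       # frame k+1 represents the next shot
    return key_frames
```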
On the basis of the FAST corner detector and the BRIEF feature descriptor, the ORB (Oriented FAST and Rotated BRIEF) algorithm is obtained through improvements that allow fast selection of feature points and overcome the shortcomings of the original descriptor with respect to scale and rotation.
Specifically, to judge whether a point is a feature point, 16 points uniformly distributed on a circle centered on the point are chosen first, as shown in Fig. 2, and the gray level of the point is compared with those of the points on the circle; when the absolute gray difference exceeds a threshold, the two points are considered different. If, among the 16 points on the circle, at least n consecutive points are all different from the center, the point is considered a feature point; n is usually set to 12. There is an efficient shortcut: by checking the gray levels at the four positions 1, 5, 9, and 13, located in the 12 o'clock, 3 o'clock, 6 o'clock, and 9 o'clock directions from the center, points that cannot satisfy the test are quickly rejected. Positions 1 and 9 are checked first to see whether they give the same result, and if so positions 5 and 13 are checked next; if the point is a feature point, at least three of these four positions must satisfy the condition, and otherwise the point is rejected directly.
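A short sketch of this corner test follows. The circle offsets listed are the standard 16-point FAST sampling circle of radius 3, and the quick rejection requires at least 3 of positions 1, 5, 9, 13 to differ from the center; these details, and the absolute-difference formulation, are illustrative choices consistent with the description above rather than text taken from the patent.

```python
import numpy as np

# Offsets of the 16 circle points (radius 3), listed clockwise starting from
# position 1 directly above the center; positions 1, 5, 9, 13 sit at
# 12, 3, 6, and 9 o'clock.
CIRCLE_OFFSETS = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
                  (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, threshold, n=12):
    """Sketch of the FAST test described above: a pixel is a corner if at least
    n consecutive circle points differ from it by more than the threshold.
    Assumes (x, y) lies at least 3 pixels inside the image border."""
    center = int(gray[y, x])
    diffs = [abs(int(gray[y + dy, x + dx]) - center) > threshold
             for dx, dy in CIRCLE_OFFSETS]

    # Quick rejection using positions 1, 5, 9, 13: at least 3 of the 4 must differ.
    if sum(diffs[i] for i in (0, 4, 8, 12)) < 3:
        return False

    # Full test: look for n consecutive "different" points around the circle.
    wrapped = diffs + diffs                # handle wrap-around
    run = 0
    for d in wrapped:
        run = run + 1 if d else 0
        if run >= n:
            return True
    return False
```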
After the feature points have been obtained, their characteristics need to be described in some way; these descriptions are the descriptors. To this end, N point pairs are chosen around the key point in some way, and the comparison results of these point pairs form the descriptor.
First, a circle of a certain radius is drawn around the key point, and point pairs are then extracted according to a certain pattern, as shown in Fig. 3. Suppose, for example, that four pairs of points are extracted (in practice 512 pairs may be extracted), denoted P1(A1, B1), P2(A2, B2), P3(A3, B3), and P4(A4, B4). If the gray value of A1 is greater than that of B1, P1(A1, B1) is set to 1, and otherwise to 0; each pair is processed in the same way. If the results of the four pairs above are P1 = 1, P2 = 1, P3 = 0, and P4 = 1, the descriptor of this feature point is finally 1101.
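The pairwise comparisons can be sketched as follows; the function name, the pair layout as coordinate offsets, and the choice of pair count are illustrative assumptions rather than patent text.

```python
import numpy as np

def brief_descriptor(gray, keypoint, pairs):
    """Sketch of the binary descriptor described above: each pre-chosen point
    pair (A, B) around the keypoint contributes bit 1 if gray(A) > gray(B),
    else bit 0. `pairs` holds (dx_a, dy_a, dx_b, dy_b) offsets; in practice
    256 (or 512) pairs drawn from a fixed sampling pattern would be used."""
    x, y = keypoint
    bits = []
    for dx_a, dy_a, dx_b, dy_b in pairs:
        a = int(gray[y + dy_a, x + dx_a])
        b = int(gray[y + dy_b, x + dx_b])
        bits.append(1 if a > b else 0)
    return np.array(bits, dtype=np.uint8)
```

With the four example pairs of Fig. 3 and comparison results 1, 1, 0, 1, this sketch would return the bit string 1101, matching the example above.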
ORB consists of two parts: feature point extraction and feature point description. In feature point extraction, a scale pyramid is built: the scale factor and the number of pyramid levels are set, the original image is used to construct the pyramid according to the scale factor and the number of levels, feature points are extracted at every level, and the union of the extracted feature points serves as the feature candidate points of the image, thereby achieving scale invariance. After the candidate points have been selected, non-maximum suppression is applied to remove the problem of multiple nearby feature candidate points: a response size is computed for each feature candidate point by comparing it with the feature candidate points within a certain surrounding range and taking the absolute difference; among these feature candidate points, the one with the largest response is kept and the others are deleted.
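In practice, both stages just described (pyramid-based extraction and binary description) are available in existing ORB implementations; the following OpenCV call is one way to obtain them. The parameter names and the file name "keyframe.png" belong to OpenCV and to this example, not to the patent.

```python
import cv2

# One way to obtain ORB keypoints and 256-bit binary descriptors with OpenCV;
# scaleFactor and nlevels control the scale pyramid described above.
orb = cv2.ORB_create(nfeatures=500, scaleFactor=1.2, nlevels=8)
gray = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(gray, None)  # descriptors: N x 32 bytes
```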
Feature point description includes the following steps.
Determining the principal direction: for the feature point O, take a surrounding region and compute its centroid C, as shown in formula (5), where m_pq is the geometric moment, as shown in formula (6). The coordinates of the centroid C are computed, the vector OC is taken as the principal direction, a coordinate system is established with this principal direction, and 256 description point pairs are selected in this coordinate system; their comparison results are combined into a 256-dimensional binary vector.
Feature point matching: for the resulting 256-bit binary features, the Hamming distance is computed. The Hamming distance is the number of positions at which the corresponding characters of two equal-length strings differ; for example, the Hamming distance between 10001101 and 10011001 is 2.
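A short sketch of these two steps follows. Formula (5) is not reproduced in the text, so the centroid is taken here as the usual intensity centroid C = (m10/m00, m01/m00) built from the moments of formula (6); that choice, the patch-based layout, and the function names are assumptions of this sketch, not statements of the patented formula.

```python
import numpy as np

def intensity_centroid_angle(patch):
    """Sketch of formulas (5)-(6): geometric moments m_pq = sum x^p y^q I(x, y)
    over a patch centered on the keypoint O; the principal direction is the
    angle of the vector from O to the assumed centroid C = (m10/m00, m01/m00)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs -= (w - 1) / 2.0                    # coordinates relative to the keypoint O
    ys -= (h - 1) / 2.0
    m00 = float(patch.sum())               # assumes a non-empty, non-zero patch
    m10 = float((xs * patch).sum())
    m01 = float((ys * patch).sum())
    return np.arctan2(m01 / m00, m10 / m00)

def hamming_distance(desc_a, desc_b):
    """Number of differing bits between two equal-length binary descriptors,
    e.g. arrays of 256 bits."""
    return int(np.count_nonzero(desc_a != desc_b))
```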
The present invention applies the method for extracting the main features of animated video to the retrieval and tamper detection of this type of video. In the video retrieval of Fig. 4, the color histograms of the extracted key frames are saved and stored in a database; at retrieval time, key frames are extracted from an input video segment, the database is traversed to count matching key frames, and the videos whose key frames are similar are output as related videos. As shown in Fig. 5, in tamper detection, key frames are extracted from both videos. Describing only the global features of a picture with the color histogram can detect some tampering, but it ignores the local details of the key frame, for example when the face of a character in the video has been replaced; therefore the extracted video features should be more specific, and the rotation- and scale-invariant ORB features are extracted in addition. The ORB features of the video key frames are compared to detect whether tampering has occurred and to locate the tampered position.
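The tamper-detection step can be pictured with the sketch below, which matches the ORB descriptors of two corresponding key frames with a brute-force Hamming matcher and reports the keypoint locations that find no good match. The matcher choice, the ratio test, and the use of unmatched keypoints as a localization hint are illustrative assumptions; the patent does not specify these details.

```python
import cv2
import numpy as np

def find_tampered_keypoints(frame_a, frame_b, ratio=0.75):
    """Compare the ORB features of two corresponding key frames and return the
    keypoint locations in frame_a that find no good match in frame_b, as a rough
    indication of where tampering may have occurred."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY) if frame_a.ndim == 3 else frame_a
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY) if frame_b.ndim == 3 else frame_b

    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return [kp.pt for kp in (kp_a or [])]   # nothing to match against

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)

    suspicious = []
    for i, pair in enumerate(matches):
        # Lowe-style ratio test; keypoints that fail it are treated as suspicious.
        if len(pair) < 2 or pair[0].distance >= ratio * pair[1].distance:
            suspicious.append(kp_a[i].pt)
    return suspicious
```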
The video retrieval process is as follows:
(1) A video database is built in advance: key frames are extracted from all videos and the color histograms of these frames are saved as features, with each frame represented by 14 floating-point numbers.
(2) Key frames are extracted from the video to be detected, and feature vectors are extracted.
(3) The video fingerprints in the database are accessed and the feature vectors of the video to be detected are traversed and compared with the features of a video in the database; a threshold is set, and when the color histogram gap between two frames is smaller than the threshold, the two pictures are judged to be similar.
(4) The remaining feature vectors are compared. If the number of similar frames of the two videos is k and the numbers of key frames of the two videos are m and n respectively, the similarity of the two videos is k/min(m, n).
(5) The next record in the database is selected, until the whole database has been traversed.
(6) A threshold is set, and the videos whose similarity exceeds the threshold are output in order of similarity.
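The traversal and scoring of steps (3)-(6) can be sketched as follows. The "color histogram gap" is taken here as an L1 distance and the database as a mapping from video name to a list of key-frame histograms; both are illustrative assumptions, as are the function names.

```python
import numpy as np

def video_similarity(query_hists, candidate_hists, frame_threshold):
    """Sketch of steps (3)-(4): count query key frames whose histogram is within
    `frame_threshold` of some candidate key frame, normalized by the smaller
    key-frame count, giving k / min(m, n)."""
    similar = 0
    for q in query_hists:
        for c in candidate_hists:
            if np.abs(q - c).sum() < frame_threshold:   # assumed L1 "histogram gap"
                similar += 1
                break
    return similar / min(len(query_hists), len(candidate_hists))

def retrieve(query_hists, database, frame_threshold, video_threshold):
    """Steps (5)-(6): traverse the database and return the videos whose
    similarity exceeds the threshold, ordered by decreasing similarity."""
    scored = [(name, video_similarity(query_hists, hists, frame_threshold))
              for name, hists in database.items()]
    hits = [(name, s) for name, s in scored if s > video_threshold]
    return sorted(hits, key=lambda item: item[1], reverse=True)
```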
It should be understood that the above description of the preferred embodiments is relatively detailed and should therefore not be regarded as limiting the scope of patent protection of the present invention. Those skilled in the art, inspired by the present invention, may make substitutions or modifications without departing from the scope protected by the claims of the present invention, and such substitutions and modifications all fall within the protection scope of the present invention; the claimed scope of the present invention is determined by the appended claims.
Claims (9)
1. A main feature extraction method for digital animation, characterized in that a color histogram is extracted from the color model of an animated video frame, and the three-dimensional histogram of hue H, saturation S, and value (brightness) V is combined into a single one-dimensional feature vector G, as shown in formula (1):
G = H·Qs·Qv + S·Qv + V (1)
in the formula, Qs is the number of quantization levels of S and Qv is the number of quantization levels of V; letting Qs = 3 and Qv = 3, the above formula can be written as formula (2):
G = 9H + 3S + V (2)
the histogram of G is computed from the color histogram, with 72 bins in total, where hue H receives the largest weight of 9, saturation S the next largest of 3, and value V the smallest, set to 1.
2. The main feature extraction method for digital animation according to claim 1, characterized in that the similarity calculation for two pictures includes the following steps: the hue H component similarity of the two pictures is computed as shown in formula (3), in which h_k(i) denotes the H component histogram of the k-th frame of the video and h_{k+1}(i) that of the (k+1)-th frame; for each bin the smaller of the two values is accumulated and the result is divided by the corresponding accumulation of the larger values, yielding the hue H similarity Sh(k, k+1) between the two frames; the saturation similarity Ss(k, k+1) and value similarity Sv(k, k+1) can be computed in the same way; combining them, the similarity between the k-th frame and the (k+1)-th frame is given by formula (4); according to the characteristics of the human eye, the hue H, saturation S, and value V components are weighted with m_h : m_s : m_v = Qs·Qv : Qv : 1 = 9 : 3 : 1, and the three weight coefficients are set to 0.9, 0.3, and 0.1; the smaller the result of the calculation, the higher the similarity, i.e. the smaller the inter-frame distance; the value computed by formula (4) is the inter-frame distance.
3. The main feature extraction method for digital animation according to claim 2, characterized in that formula (4) is applied to the judgement of key frames, which includes the following steps:
a. the video is first segmented into individual shots, with the shot as the basic unit, which is called shot detection; shot transitions generally fall into two kinds: cuts and gradual transitions;
b. a threshold T1 is set; when comparing frame k with frame k+1, if the inter-frame distance is greater than T1, a shot cut is declared; otherwise frame k+1 is compared with the previous key frame, i.e. with the representative frame of the previous shot, and if the distance between the two is greater than T1, the two frames belong to different shots and frame k+1 is judged to belong to the next shot.
4. The main feature extraction method for digital animation according to claim 1, characterized in that feature point judgement and feature point description of the animated video frames are based on the Oriented FAST and Rotated BRIEF algorithm and include the following steps:
c. feature point judgement: to judge whether a point is a feature point, first take the point as the center of a circle and choose 4 or more points uniformly distributed on the circumference, then compare the gray level of the point with those of the points on the circumference; when the absolute gray difference is greater than a set threshold, the point is judged to be a feature point;
d. feature point description: around the feature point, draw a circle of a certain radius and extract point pairs inside the circle according to a certain pattern; for each pair, if the gray values of the two points differ, the pair is set to 1, otherwise 0; after all pairs have been processed, the descriptor is generated.
5. The main feature extraction method for digital animation according to claim 4, characterized in that in step c the number of points chosen on the circumference is 4, located in the 3 o'clock, 6 o'clock, 9 o'clock, and 12 o'clock directions relative to the center of the circle.
6. The main feature extraction method for digital animation according to claim 5, characterized in that feature point extraction in step c includes the following steps:
e. in feature point extraction, a scale pyramid is built: the scale factor and the number of pyramid levels are set, the original image is used to construct the pyramid according to the scale factor and the number of levels, feature points are extracted at every level, and the union of the extracted feature points serves as the feature candidate points of the image, thereby achieving scale invariance;
f. after the feature candidate points have been selected, non-maximum suppression is applied to remove the problem of multiple nearby feature candidate points: a response size is computed for each feature candidate point by comparing it with the feature candidate points within a certain surrounding range and taking the absolute difference; among these feature candidate points, the one with the largest response is kept and the others are deleted.
7. The main feature extraction method for digital animation according to claim 6, characterized in that feature point description in step f includes the following steps:
g. determine the principal direction: for the feature point O, take a surrounding region and compute its centroid C, as shown in formula (5), where m_pq is the geometric moment, as shown in formula (6):
m_pq = Σ_{x,y} x^p · y^q · I(x, y) (6)
the coordinates of the centroid C are computed, the vector OC is taken as the principal direction, a coordinate system is established with this principal direction, and 256 description point pairs are selected in this coordinate system; their comparison results are combined into a 256-dimensional binary vector;
h. feature point matching: for the resulting 256-bit binary features, compute the Hamming distance, which is the number of positions at which the corresponding characters of two equal-length strings differ.
8. A retrieval and tamper detection method for animated video, characterized in that, for video retrieval, the key frames extracted by the digital animation main feature extraction method according to claim 7 have their color histograms saved and stored in a database; at retrieval time, key frames are extracted from an input video segment and the database is traversed, counting matching key frames, and the videos whose key frames are similar are output as related videos; for tamper detection, key frames are extracted from both videos, the global features of each picture are described with the color histogram, and the rotation- and scale-invariant ORB features are then extracted; the ORB features of the video key frames are compared to detect whether tampering has occurred and to locate the tampered position.
9. The retrieval and tamper detection method for animated video according to claim 8, characterized in that the video retrieval process is as follows:
(1) a video database is built in advance: key frames are extracted from all videos and the color histograms of these frames are saved as features, with each frame represented by 14 floating-point numbers;
(2) key frames are extracted from the video to be detected, and feature vectors are extracted;
(3) the video fingerprints in the database are accessed and the feature vectors of the video to be detected are traversed and compared with the features of a video in the database; a threshold is set, and when the color histogram gap between two frames is smaller than the threshold, the two pictures are judged to be similar;
(4) the remaining feature vectors are compared; if the number of similar frames of the two videos is k and the numbers of key frames of the two videos are m and n respectively, the similarity of the two videos is k/min(m, n);
(5) the next record in the database is selected, until the whole database has been traversed;
(6) a threshold is set, and the videos whose similarity exceeds the threshold are output in order of similarity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810673083.9A | 2018-06-27 | 2018-06-27 | Main feature extraction method for digital animation, and animated video retrieval and tamper detection method
Publications (1)
Publication Number | Publication Date |
---|---|
CN108984648A true CN108984648A (en) | 2018-12-11 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20181211