
CN115294504B - Marketing video auditing system based on AI

Marketing video auditing system based on AI

Info

Publication number
CN115294504B
CN115294504B (application CN202211186780.4A)
Authority
CN
China
Prior art keywords
module
frame
identification
visual
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211186780.4A
Other languages
Chinese (zh)
Other versions
CN115294504A (en)
Inventor
张豪 (Zhang Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Dangxia Technology Co.,Ltd.
Original Assignee
Wuhan Dangxia Time Culture Creative Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Dangxia Time Culture Creative Co., Ltd.
Priority to CN202211186780.4A
Publication of CN115294504A
Application granted
Publication of CN115294504B
Status: Active
Anticipated expiration: legal status pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an AI (artificial intelligence)-based marketing video auditing system comprising a preprocessing module, an auditing model establishing module, and a visual optimization module. The preprocessing module preprocesses the video to be audited, the auditing model establishing module establishes a model for video auditing, and the visual optimization module optimizes the audited video. The preprocessing module comprises a visual frame extraction module, a feature recognition module, a single-frame data amount recognition module, and a mathematical curve fitting module. The visual frame extraction module extracts the number of video frames that the human eye can recognize as continuous to serve as one unit visual frame; the feature recognition module performs feature recognition on the extracted visual frames; the single-frame data amount recognition module performs data amount recognition on each unit visual frame; and the mathematical curve fitting module fits the data amount of each unit visual frame to a curve.

Description

Marketing video auditing system based on AI
Technical Field
The invention relates to the technical field of image processing, in particular to a marketing video auditing system based on AI.
Background
With the rapid growth of the short-video industry, more and more people choose to market through short videos. However, the auditing mechanisms of the various platforms differ, so some marketing words or patterns may be deleted, weakening the marketing effect of a video or even causing it to be taken down. The manual auditing currently adopted is a conservative approach, but its strictness varies from reviewer to reviewer and content is easily missed. Marketing companies therefore need to audit a marketing video before uploading it, to prevent the platform's audit from reducing its weighting and harming the marketing effect. The patent with publication number CN107798304B discloses a method for fast video auditing in which shot boundaries are detected a second time using HOG feature differences to eliminate wrong shot boundaries, and shot boundaries spaced less than 15 frames apart are removed by the HOG difference to obtain M shot boundaries for fast auditing. Although this can improve auditing speed to a certain extent, the generation of HOG features is very sensitive to noise, and additional noise arises during the difference-based auditing. It is therefore necessary to design an AI-based marketing video auditing system with multi-directional filtering and feature extraction.
Disclosure of Invention
The invention aims to provide an AI-based marketing video auditing system to solve the problems raised in the background art.
To solve the above technical problems, the invention provides the following technical scheme: an AI-based marketing video auditing system comprises a preprocessing module, an auditing model establishing module, and a visual optimization module. The preprocessing module preprocesses the video to be audited, the auditing model establishing module establishes a model for video auditing, and the visual optimization module further optimizes the audited video. The preprocessing module comprises a visual frame extraction module, a feature recognition module, a single-frame data amount recognition module, and a mathematical curve fitting module. The visual frame extraction module extracts the number of video frames that the human eye can recognize as continuous to serve as one unit visual frame, the feature recognition module performs feature recognition on the extracted visual frames, the single-frame data amount recognition module performs data amount recognition on each unit visual frame, and the mathematical curve fitting module fits the data amount of each unit visual frame to a curve.
According to the above technical scheme, the auditing model establishing module comprises a feature migration module, a convolution layering module, an original domain accumulation module, and a recognition domain selection module. The feature migration module performs model migration on the features identified during preprocessing, the convolution layering module performs convolution layering according to feature type, the original domain accumulation module provides the multiple features of the original data model, and the recognition domain selection module selects a recognition domain matching the original data model according to the preprocessing result.
According to the above technical scheme, the visual optimization module comprises a multi-directional filtering module, a single-frame confidence calculation module, and a feedback module. The multi-directional filtering module performs multi-directional filtering on the video under review, the single-frame confidence calculation module calculates a confidence for each split single-frame image, and the feedback module provides time-point-based feedback on the video under review according to the confidence.
According to the above technical scheme, the operation method of the preprocessing module in the system comprises the following steps:
determining the visual frame and extracting multiple unit visual frames from the video;
performing feature recognition on the extracted unit visual frames, where the recognized features comprise direct features, semantic features, and repeated features;
performing data amount recognition statistics on each unit visual frame and fitting a data amount change curve against time;
screening out the visual frames that need further review by combining the recognized features with the data amount change curve.
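The preprocessing steps above can be sketched as follows. This is an illustrative Python sketch with synthetic frame data; the unit size, helper names, and polynomial degree are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

VISUAL_FRAME_UNIT = 24  # frames the eye perceives as continuous (~1 s at 24 fps); assumed value

def split_into_visual_frames(frame_sizes, unit=VISUAL_FRAME_UNIT):
    """Group per-frame byte counts into unit visual frames."""
    n_units = len(frame_sizes) // unit
    return [frame_sizes[i * unit:(i + 1) * unit] for i in range(n_units)]

def unit_data_amounts(units):
    """Data amount (bytes) contained in each unit visual frame."""
    return [sum(u) for u in units]

def fit_data_curve(amounts, degree=2):
    """Fit the per-unit data amounts against time as a polynomial curve."""
    t = np.arange(len(amounts))
    return np.polyfit(t, amounts, degree)

# Synthetic per-frame byte counts for a 10-second, 24 fps clip
rng = np.random.default_rng(0)
frame_sizes = rng.integers(20_000, 40_000, size=240).tolist()
units = split_into_visual_frames(frame_sizes)
amounts = unit_data_amounts(units)
coeffs = fit_data_curve(amounts)
print(len(units), len(coeffs))  # 10 units, 3 polynomial coefficients
```

In a real system the per-frame byte counts would come from the demuxed video stream rather than a random generator; the grouping-and-fitting logic stays the same.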
According to the above technical scheme, the method of screening visual frames that need further review comprises the following steps:
after feature recognition is performed on a unit visual frame, a feature matrix A of that unit visual frame is generated;
the elements of the feature matrix are weighted and averaged to produce a weight value q of the unit visual frame;
an audit weight threshold q0 is set; when q > q0, the time point of the unit visual frame is marked and the frame is taken as a candidate audit visual frame;
the data amount b_i contained in each unit visual frame is recognized and output, and a graph of data amount against time point t is fitted, where B is the set of per-frame data amounts in bytes and the data amount contained in a unit visual frame changes as a curve over the time points t;
the change trend of the visual frame data between adjacent time points is calculated, the data change degree S is computed, the AI judges whether the data change is abnormal, and the abnormal points are marked and uploaded.
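As a minimal illustration of the screening step, the sketch below computes the frame weight as a weighted average of the feature-matrix elements (uniform weights by default) and compares it with a threshold; the matrices, threshold value, and function names are hypothetical.

```python
import numpy as np

def frame_weight(feature_matrix, weights=None):
    """Weighted average of the feature-matrix elements -> frame weight q."""
    m = np.asarray(feature_matrix, dtype=float)
    if weights is None:
        weights = np.ones_like(m)  # uniform weighting as a simplifying assumption
    return float((m * weights).sum() / weights.sum())

def candidate_frames(feature_matrices, q0):
    """Mark the time points of unit visual frames whose weight q exceeds q0."""
    return [t for t, fm in enumerate(feature_matrices) if frame_weight(fm) > q0]

mats = [[[0.1, 0.2], [0.1, 0.2]],   # q = 0.15
        [[0.9, 0.8], [0.7, 0.6]],   # q = 0.75
        [[0.4, 0.4], [0.4, 0.4]]]   # q = 0.40
print(candidate_frames(mats, q0=0.5))  # [1]
```

The patent describes q0 as dynamic, derived from historical audit records; here it is simply passed in as a constant.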
According to the above technical scheme, the data change degree S is calculated as:

S = (1/n) * Σ_{i=1..n} (b_i − b̄)²

where b̄ is the average data amount of a unit identification frame after the whole video has been decimated into n unit identification frames, and S is the variance over all unit identification frames, directly reflecting the degree of data change of each identification frame; b_i is the data amount, in bytes, of the i-th unit identification frame, and n is the total number of extracted identification frames. The larger S is, the more pronounced the change in that segment's identification-frame data and the greater the fluctuation.
After recognition is complete, the AI judges whether the data change is normal; abnormal identification frames are marked with their time points and uploaded.
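The data change degree described here is, in effect, the (population) variance of the per-unit byte counts. A minimal Python sketch, assuming equal weighting of all unit identification frames:

```python
def data_change_degree(byte_amounts):
    """Variance of per-unit identification-frame data amounts (the S above)."""
    n = len(byte_amounts)
    mean = sum(byte_amounts) / n          # b-bar: average data amount per unit frame
    return sum((b - mean) ** 2 for b in byte_amounts) / n

flat = [100, 100, 100, 100]   # steady segment: no fluctuation
spiky = [50, 150, 50, 150]    # alternating segment: large fluctuation
print(data_change_degree(flat))   # 0.0
print(data_change_degree(spiky))  # 2500.0
```

A larger S signals a more volatile segment, which the system would mark for further review.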
According to the above technical scheme, after an abnormal identification frame is uploaded, the auditing model performs further review. The auditing model is established as follows:
step S1: establish a feature original domain, specifically one containing the full feature range of the identification frames;
step S2: perform convolution layering of the feature matrix A with the feature original domain, and establish a feature-layered auditing model;
step S3: after each audit, accumulate the recognized features of the audited content into the original domain, storing them in feature matrix form;
step S4: after the audit is finished, select the recognition domain of the corresponding layer count for the recognition operation; if recognition passes, cancel the mark on the identification frame content, otherwise continue to output it.
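Steps S1-S4 can be illustrated with a toy sketch in which the feature original domain is a plain set of feature labels that accumulates the unmatched features of each audit; the labels and the set-based representation are simplifying assumptions, not the patent's internal data structures.

```python
def audit_frame(feature_set, original_domain):
    """Match a frame's features against the original domain; accumulate misses.

    Returns the unmatched features and updates the domain with them, so the
    domain grows (accumulates) with every audit, as in step S3.
    """
    unmatched = feature_set - original_domain
    original_domain |= unmatched  # original-domain accumulation
    return unmatched

domain = {"logo", "text_overlay", "face"}          # hypothetical initial domain (S1)
first = audit_frame({"face", "qr_code"}, domain)   # "qr_code" is new
second = audit_frame({"qr_code"}, domain)          # now matched: domain learned it
print(sorted(first), sorted(second))               # ['qr_code'] []
```

The second call finds nothing unmatched, mirroring the claim that the more the original domain accumulates, the more accurate recognition becomes.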
According to the above technical scheme, the operation method of the visual optimization module further comprises the following steps:
performing multi-directional filtering based on the feature layering, where the degree of filtering is related to the number of feature layers of the visual frame;
calculating the single-frame confidence P of a unit visual frame from the characteristic parameters, where the characteristic parameters comprise the total number of feature convolution layers L, the number of filtered feature layers Lf, and the conversion coefficient k.
According to the above technical scheme, the single-frame confidence P is calculated as:

P = k * (L − Lf) / L

where L − Lf is the feature recognition depth after filtering and (L − Lf)/L is the proportion of the recognition depth to the total number of feature convolution layers. The larger this proportion, the less feature redundancy there is and the higher the safety factor of the single identification frame; the conversion coefficient k converts the proportion into the single-frame confidence.
After the single-frame confidences are calculated, the visual frames are marked in the auditing model. Visual frames whose confidence is below the average confidence receive danger feedback and are sent to the manual auditing end for further review; visual frames whose confidence is above the average undergo repeated model auditing, achieving multiple rounds of screening.
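A minimal sketch of the confidence calculation and the routing that follows it, assuming k = 1 and hypothetical per-frame layer counts:

```python
def single_frame_confidence(total_layers, filtered_layers, k=1.0):
    """P = k * (L - Lf) / L: share of recognition depth left after filtering."""
    return k * (total_layers - filtered_layers) / total_layers

# frame name -> (total feature convolution layers L, filtered layers Lf); assumed values
frames = {"f0": (10, 2), "f1": (10, 7), "f2": (10, 4)}
conf = {f: single_frame_confidence(L, Lf) for f, (L, Lf) in frames.items()}
avg = sum(conf.values()) / len(conf)

manual = sorted(f for f, p in conf.items() if p < avg)    # danger feedback -> manual audit
recheck = sorted(f for f, p in conf.items() if p >= avg)  # repeated model auditing
print(manual, recheck)  # ['f1'] ['f0', 'f2']
```

Here f1 keeps only 3 of 10 layers after filtering, falls below the average confidence, and is routed to manual review.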
Compared with the prior art, the invention has the following beneficial effects: by providing the visual frame extraction module, the number of video frames that the human eye can recognize smoothly is extracted as the visual frame, which simulates manual review while reducing AI computational complexity and facilitates the subsequent curve fitting; by providing the single-frame data amount recognition module, the data amount in each unit identification frame is recognized and a change curve is fitted against the time points for auxiliary recognition; the convolution layering module performs convolution layering of the feature matrix with the feature original domain, improving auditing precision and updating the original domain; and the single-frame confidence calculation module calculates the confidence of each single identification frame, further screening the identification frame content and submitting the parts with abnormal values for manual review, which improves auditing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic block diagram of the system of the present invention;
FIG. 2 is a diagram illustrating a single frame data variation curve according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-2, the present invention provides the following technical scheme: an AI-based marketing video auditing system comprises a preprocessing module, an auditing model establishing module, and a visual optimization module. The preprocessing module preprocesses the video to be audited, the auditing model establishing module establishes a model for video auditing, and the visual optimization module further optimizes the audited video. The preprocessing module comprises a visual frame extraction module, a feature recognition module, a single-frame data amount recognition module, and a mathematical curve fitting module. The visual frame extraction module extracts the number of video frames that the human eye can recognize as continuous to serve as one unit visual frame, the feature recognition module performs feature recognition on the extracted visual frames, the single-frame data amount recognition module performs data amount recognition on each unit visual frame, and the mathematical curve fitting module fits the data amount of each unit visual frame to a curve.
The auditing model establishing module comprises a feature migration module, a convolution layering module, an original domain accumulation module, and a recognition domain selection module. The feature migration module performs model migration on the features identified during preprocessing, the convolution layering module performs convolution layering according to feature type, the original domain accumulation module provides the multiple features of the original data model, and the recognition domain selection module selects a recognition domain matching the original data model according to the preprocessing result.
The visual optimization module comprises a multi-directional filtering module, a single-frame confidence calculation module, and a feedback module. The multi-directional filtering module performs multi-directional filtering on the video under review, the single-frame confidence calculation module calculates a confidence for each split single-frame image, and the feedback module provides time-point-based feedback on the video under review according to the confidence.
In the system, the operation method of the preprocessing module comprises the following steps:
determining a visual frame, and extracting multi-unit visual frames from the video; the visual frames are extracted by taking the number of video frames which can be smoothly identified by human eyes as a unit, so that the consistency of video audit is ensured;
performing feature recognition on the extracted multi-unit visual frame, wherein the feature recognition content comprises direct features, semantic features and repeated features;
carrying out data quantity identification statistics on the visual frame of each unit, and fitting a data quantity change curve by taking time as a reference;
and screening out the visual frames needing further examination by combining the identification characteristics and the data volume change curve.
The method of screening visual frames that need further review comprises the following steps:
after feature recognition is performed on a unit visual frame, a feature matrix A of that unit visual frame is generated; specifically, after the visual frame undergoes feature recognition, a feature matrix of i rows and j columns is generated, containing all the features of a single visual frame;
the elements of the feature matrix are weighted and averaged to produce a weight value q of the unit visual frame; after the weighted average, the elements that match the recognition content least and those that over-match it are screened out, the weight value of the visual frame is generated, and the audit weight of the visual frame is fed back;
an audit weight threshold q0 is set; when q > q0, the time point of the unit visual frame is marked and the frame is taken as a candidate audit visual frame; the audit weight threshold q0 is obtained by big-data filtering of the weight values of recognition content in historical audit records and is a dynamic value: the content of each audit differs, and the weight threshold changes with it;
the data amount b_i contained in each unit visual frame is recognized and output, and a graph of data amount against time point t is fitted, where B is the set of per-frame data amounts in bytes and the data amount contained in a unit visual frame changes as a curve over the time points t; the data amount contained in each identification frame is counted, a data change curve is fitted against the time points, the data flow is recorded in set form, and the time points with abnormal data amounts are judged by combining the curve data;
the change trend of the visual frame data between adjacent time points is calculated, the data change degree S is computed, the AI judges whether the data change is abnormal, and the abnormal points are marked and uploaded.
The data change degree S is calculated as:

S = (1/n) * Σ_{i=1..n} (b_i − b̄)²

where b̄ is the average data amount of a unit identification frame after the whole video has been decimated into n unit identification frames, and S is the variance over all unit identification frames, directly reflecting the degree of data change of each identification frame; b_i is the data amount, in bytes, of the i-th unit identification frame, and n is the total number of extracted identification frames. The larger S is, the more pronounced the change in that segment's identification-frame data and the greater the fluctuation.
After recognition is complete, the AI judges whether the data change is normal; abnormal identification frames are marked with their time points and uploaded.
After an abnormal identification frame is uploaded, the auditing model performs further review. The auditing model is established as follows:
step S1: establish a feature original domain, specifically one containing the full feature range of the identification frames; the original features contain considerable redundancy, so an original feature domain is established after classification by a classifier, and the AI screens the effective features into the domain, reducing the complexity of the feature original domain;
step S2: perform convolution layering of the feature matrix A with the feature original domain, and establish a feature-layered auditing model; after the feature matrix is convolved with the elements of the feature original domain, layered vectors covering multiple directions are generated and fed into the auditing model, which matches most of the features in a visual frame; the unmatched parts are uploaded and the original domain is updated, completing the accumulation of the original domain, and the more the original domain accumulates, the more accurate the feature recognition becomes;
step S3: after each audit, accumulate the recognized features of the audited content into the original domain, storing them in feature matrix form;
step S4: after the audit is finished, select the recognition domain of the corresponding layer count for the recognition operation; if recognition passes, cancel the mark on the identification frame content, otherwise continue to output it.
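As a rough illustration of the convolution-layering idea in step S2, the sketch below convolves a flattened feature matrix with a few one-dimensional "original domain" kernels, producing one layered vector per feature type. The kernels and the one-dimensional treatment are simplifying assumptions, not the patent's actual construction.

```python
import numpy as np

def convolution_layering(feature_matrix, domain_kernels):
    """Convolve the flattened feature matrix with each original-domain kernel,
    yielding one layered vector per feature type (the audit model's layers)."""
    flat = np.asarray(feature_matrix, dtype=float).ravel()
    return [np.convolve(flat, k, mode="same") for k in domain_kernels]

A = [[0.2, 0.5], [0.1, 0.9]]                       # hypothetical 2x2 feature matrix
kernels = [np.array([1.0]),                        # identity feature
           np.array([0.5, 0.5]),                   # smoothing feature
           np.array([1.0, 0.0, -1.0])]             # edge-like feature
layers = convolution_layering(A, kernels)
print(len(layers), layers[0].shape)  # 3 layers, each of length 4
```

Each kernel plays the role of one feature type from the original domain; the resulting vectors would then be matched against the visual frame's features.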
The operation method of the visual optimization module further comprises the following steps:
performing multi-directional filtering based on the feature layering, where the degree of filtering is related to the number of feature layers of the visual frame; the more feature layers there are, the richer the video content, the more hidden features are buried in it, and the more common features need to be filtered out;
calculating the single-frame confidence P of a unit visual frame from the characteristic parameters, where the characteristic parameters comprise the total number of feature convolution layers L, the number of filtered feature layers Lf, and the conversion coefficient k.
The single-frame confidence P is calculated as:

P = k * (L − Lf) / L

where L − Lf is the feature recognition depth after filtering and (L − Lf)/L is the proportion of the recognition depth to the total number of feature convolution layers. The larger this proportion, the less feature redundancy there is and the higher the safety factor of the single identification frame; the conversion coefficient k converts the proportion into the single-frame confidence.
After the single-frame confidences are calculated, the visual frames are marked in the auditing model. Visual frames whose confidence is below the average confidence receive danger feedback and are sent to the manual auditing end for further review; visual frames whose confidence is above the average undergo repeated model auditing, achieving multiple rounds of screening.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An AI-based marketing video auditing system comprising a preprocessing module, an auditing model establishing module, and a visual optimization module, characterized in that: the preprocessing module preprocesses the video to be audited, the auditing model establishing module establishes a model for video auditing, and the visual optimization module further optimizes the audited video; the preprocessing module comprises a visual frame extraction module, a feature recognition module, a single-frame data amount recognition module, and a mathematical curve fitting module, wherein the visual frame extraction module extracts the number of video frames that the human eye can recognize as continuous to serve as one unit visual frame, the feature recognition module performs feature recognition on the extracted visual frames, the single-frame data amount recognition module performs data amount recognition on each unit visual frame, and the mathematical curve fitting module fits the data amount of each unit visual frame to a curve;
the auditing model establishing module comprises a feature migration module, a convolution layering module, an original domain accumulation module and an identification domain selection module, wherein the feature migration module is used for carrying out model migration on features identified in the preprocessing process, the convolution layering module is used for carrying out convolution layering according to feature types identified in the preprocessing process, the original domain accumulation module is used for providing multiple features of an original data model, and the identification domain selection module is used for selecting an identification domain matched with the original data model according to a preprocessing result; the visual optimization module comprises a multidirectional filtering module, a single-frame confidence coefficient calculation module and a feedback module, wherein the multidirectional filtering module is used for carrying out multidirectional filtering on the video to be reviewed, the single-frame confidence coefficient calculation module is used for carrying out confidence coefficient calculation on the split single-frame image, and the feedback module is used for carrying out feedback based on a time point on the video to be reviewed according to the confidence coefficient.
2. The AI-based marketing video auditing system according to claim 1, wherein the operation method of the preprocessing module comprises the following steps:
determining the unit visual frame, and extracting multiple unit visual frames from the video;
performing feature identification on the extracted unit visual frames, the identified features comprising direct features, semantic features and repeated features;
performing data amount identification and statistics on each unit visual frame, and fitting a data amount change curve with time as the reference;
and screening out the visual frames requiring further auditing by combining the identified features with the data amount change curve.
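The preprocessing steps recited in claim 2 can be sketched as follows. This is only an illustrative outline, not the patented implementation: the unit size of 24 frames, the use of per-frame byte counts as the data amount, and the polynomial curve fit are all assumptions standing in for the claim's unspecified details.

```python
import numpy as np

def extract_unit_frames(frame_sizes, unit=24):
    """Group raw per-frame byte counts into 'unit visual frames'
    (assumed unit: the number of frames the eye perceives as continuous)."""
    n_units = len(frame_sizes) // unit
    return [frame_sizes[i * unit:(i + 1) * unit] for i in range(n_units)]

def unit_data_amounts(units):
    """Data amount (in bytes) contained in each unit visual frame."""
    return [sum(u) for u in units]

def fit_data_curve(amounts, degree=2):
    """Fit a polynomial curve of data amount against time point."""
    t = np.arange(len(amounts))
    return np.polyfit(t, amounts, degree)

# Synthetic per-frame byte counts for 96 frames (4 units of 24 frames).
frame_sizes = [1000 + (i % 24) * 10 for i in range(96)]
units = extract_unit_frames(frame_sizes)
amounts = unit_data_amounts(units)
coeffs = fit_data_curve(amounts)
```

The fitted curve then serves as the baseline against which claim 3 screens frames whose data amount changes abnormally.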
3. The AI-based marketing video auditing system according to claim 2, wherein the method for screening the visual frames requiring further auditing comprises:
after feature identification is performed on one unit visual frame, generating a feature matrix A of the unit visual frame; performing a weighted average on the elements of the feature matrix to generate a weight value W of the unit visual frame; setting an audit weight threshold W0; when W ≥ W0, marking the time point of the unit visual frame and taking it as a candidate audit visual frame;
identifying and outputting the data amount d_i contained in each unit visual frame, and fitting a graph of data amount against time point t, wherein D = {d_1, d_2, …, d_n} is the array of data amounts, d_i is the data amount, in bytes, of the i-th unit identification frame, and n is the total number of extracted identification frames; the data amount contained in the unit visual frames varies with the time point t according to the fitted curve;
calculating the change trend of the visual frame data between adjacent time points, calculating the data change degree S, judging by the AI whether the data change is abnormal, and marking the abnormal points and uploading them.
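The candidate-frame screening in claim 3 can be sketched as below. The matrix contents, the per-element weights and the threshold value are made-up examples; the claim specifies only that a weighted average of the feature matrix is compared against a threshold W0.

```python
import numpy as np

def frame_weight(feature_matrix, element_weights):
    """Weighted average of the elements of a unit visual frame's
    feature matrix, yielding the frame's weight value W."""
    m = np.asarray(feature_matrix, dtype=float)
    w = np.asarray(element_weights, dtype=float)
    return float(np.sum(m * w) / np.sum(w))

def candidate_frames(feature_matrices, element_weights, threshold):
    """Return (time index, weight) pairs for frames whose weight W
    meets the audit weight threshold W0 -- the candidate audit frames."""
    selected = []
    for t, fm in enumerate(feature_matrices):
        w = frame_weight(fm, element_weights)
        if w >= threshold:
            selected.append((t, w))
    return selected

# Two unit visual frames with 2x2 feature matrices, uniform weights.
feature_matrices = [[[0.1, 0.2], [0.3, 0.4]],
                    [[0.9, 0.8], [0.7, 0.6]]]
selected = candidate_frames(feature_matrices, [[1, 1], [1, 1]], threshold=0.5)
```

Only the second frame (weighted average 0.75) clears the assumed threshold of 0.5 and is marked with its time point.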
4. The AI-based marketing video auditing system according to claim 3, wherein the method for calculating the data change degree S comprises:
S = (1/n) · Σ_{i=1…n} (d_i − d̄)²,
where d̄ represents the average data amount of each unit identification frame after the n unit identification frames are extracted from the whole video, and S represents the variance of all unit identification frames, directly reflecting the degree of data change of each identification frame; d_i is the data amount, in bytes, of the i-th unit identification frame, and n is the total number of extracted identification frames; the larger S is, the more obvious the change of the identification frame data of the segment is and the larger the fluctuation is;
after the identification is completed, whether the data change is normal is judged by the AI, and the abnormal identification frames are marked with time points and uploaded.
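The data change degree of claim 4 is the population variance described above. The sketch below computes it directly; the abnormality rule (squared deviation exceeding a multiple of S) is an assumed stand-in for the claim's unspecified "AI judgment".

```python
def change_degree(data_amounts):
    """Data change degree S = (1/n) * sum((d_i - mean)^2), the variance
    of the per-unit identification-frame data amounts.  A larger S means
    the frame data fluctuates more strongly across the segment."""
    n = len(data_amounts)
    mean = sum(data_amounts) / n
    return sum((d - mean) ** 2 for d in data_amounts) / n

def abnormal_points(data_amounts, factor=2.0):
    """Flag time points whose squared deviation exceeds factor * S
    (an assumed threshold rule, not the patented judgment)."""
    n = len(data_amounts)
    mean = sum(data_amounts) / n
    s = change_degree(data_amounts)
    return [i for i, d in enumerate(data_amounts)
            if (d - mean) ** 2 > factor * s]
```

For example, in the sequence 100, 100, 100, 100, 200 (bytes per unit frame) the variance is 1600 and only the last time point is flagged and would be uploaded for further auditing.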
5. The AI-based marketing video auditing system according to claim 4, wherein after an abnormal identification frame is uploaded, it is further audited by the audit model, and the method for establishing the audit model comprises:
step S1: establishing a feature original domain, specifically a feature original domain containing all feature ranges of the identification frames;
step S2: performing convolution layering on the feature original domain with the feature matrix A, and establishing an audit model with feature layering;
step S3: after each audit, performing original domain accumulation on the features identified in the audited content, and storing them in the form of a feature matrix;
step S4: after the audit is finished, selecting the identification domain with the corresponding number of layers for the identification operation; if the identification passes, cancelling the mark on the frame content, otherwise continuing to output it.
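Steps S1–S4 can be sketched as a layered feature store. The data structures (sets of feature labels per layer) and the pass rule (all features already known in the selected layer) are illustrative assumptions; the claim does not specify how the original domain or the identification operation are represented.

```python
class AuditModel:
    """Minimal sketch of claim 5's steps S1-S4 (assumed representation)."""

    def __init__(self, num_layers):
        # S1 + S2: the feature original domain, split into feature layers.
        self.layers = [set() for _ in range(num_layers)]

    def accumulate(self, layer, features):
        # S3: after each audit, accumulate the identified features
        # into the original domain of the given layer.
        self.layers[layer].update(features)

    def recognize(self, layer, features):
        # S4: select the identification domain of the matching layer;
        # the frame passes (its mark is cancelled) if all of its
        # features are already known in that layer.
        return set(features) <= self.layers[layer]
```

For example, after accumulating the features of previously audited frames into layer 0, a new frame whose features are all covered by that layer passes and its mark is cancelled; a frame with an unseen feature continues to be output.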
6. The AI-based marketing video auditing system according to claim 5, wherein the operation method of the visual optimization module comprises the following steps:
performing multidirectional filtering based on the feature layering;
calculating the single-frame confidence P of each unit visual frame according to characteristic parameters, wherein the characteristic parameters comprise the total number of feature convolution layers N, the number of filtered feature layers m, and the conversion coefficient k.
7. The AI-based marketing video auditing system according to claim 6, wherein the method for calculating the single-frame confidence P comprises:
P = k · (N − m) / N,
where N − m represents the feature recognition depth after filtering, and (N − m)/N represents the proportion of the recognition depth to the total number of feature convolution layers; the larger this proportion is, the less the feature redundancy is and the higher the safety factor of the single identification frame is; specifically, the conversion coefficient k converts this proportion into the single-frame confidence; after the single-frame confidence is calculated, the visual frames are marked in the audit model, the visual frames whose confidence is lower than the average confidence are fed back as risks and sent to a manual audit end for further auditing, and the visual frames whose confidence is higher than the average confidence undergo repeated model auditing, realizing multiple screening.
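The confidence calculation and the routing rule of claims 6–7 can be sketched as below. The formula follows the proportion described in the claim; the concrete layer counts and coefficient are example values, and the routing function is an assumed reading of the manual-versus-model split.

```python
def single_frame_confidence(total_layers, filtered_layers, k):
    """Single-frame confidence P = k * (N - m) / N, where N is the total
    number of feature convolution layers, m the number of filtered feature
    layers, and k the conversion coefficient.  A larger surviving
    proportion means less feature redundancy and a higher safety factor."""
    depth = total_layers - filtered_layers  # recognition depth after filtering
    return k * depth / total_layers

def route_frames(confidences):
    """Frames below the average confidence are fed back as risks for
    manual auditing; the rest are re-screened by the model (assumed
    routing sketch of the claim's multiple screening)."""
    avg = sum(confidences) / len(confidences)
    manual = [i for i, p in enumerate(confidences) if p < avg]
    auto = [i for i, p in enumerate(confidences) if p >= avg]
    return manual, auto
```

With N = 10, m = 2 and k = 1.0, a frame's confidence is 0.8; among frames with confidences 0.8, 0.4 and 0.9, only the middle one falls below the average and is routed to manual review.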
CN202211186780.4A 2022-09-28 2022-09-28 Marketing video auditing system based on AI Active CN115294504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211186780.4A CN115294504B (en) 2022-09-28 2022-09-28 Marketing video auditing system based on AI


Publications (2)

Publication Number Publication Date
CN115294504A CN115294504A (en) 2022-11-04
CN115294504B true CN115294504B (en) 2023-01-03

Family

ID=83834631


Country Status (1)

Country Link
CN (1) CN115294504B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108124191A (en) * 2017-12-22 2018-06-05 北京百度网讯科技有限公司 A kind of video reviewing method, device and server
CN113850162A (en) * 2021-09-10 2021-12-28 北京百度网讯科技有限公司 Video auditing method and device and electronic equipment
CN113887432A (en) * 2021-09-30 2022-01-04 瑞森网安(福建)信息科技有限公司 Video auditing method and system
CN114385837A (en) * 2021-12-20 2022-04-22 山东智驱力人工智能科技有限公司 Automatic media content detection and verification method and system
CN114998782A (en) * 2022-05-18 2022-09-02 平安科技(深圳)有限公司 Scene classification method and device of face-check video, electronic equipment and storage medium
CN115049953A (en) * 2022-05-09 2022-09-13 中移(杭州)信息技术有限公司 Video processing method, device, equipment and computer readable storage medium
CN115100561A (en) * 2022-06-08 2022-09-23 新疆大学 Intelligent auditing method and system for bad content of network short video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3216076A1 (en) * 2015-07-16 2017-01-19 Inscape Data, Inc. Detection of common media segments




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 11, room 9, floor 2, building 18, office R & D building, Huagong science and Technology Park, 33 Tangxun Hubei Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Applicant after: Wuhan Dangxia time culture creative Co.,Ltd.

Address before: 430223 Floor 2, R&D Building B, No. 1 Modern Service Industry Base, Huazhong University of Science and Technology Science Park, Donghu Hi tech Zone, Wuhan, Hubei Province

Applicant before: Wuhan Dangxia time culture creative Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 11, room 9, floor 2, building 18, office R & D building, Huagong science and Technology Park, 33 Tangxun Hubei Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Patentee after: Wuhan Dangxia Technology Co.,Ltd.

Address before: Room 11, room 9, floor 2, building 18, office R & D building, Huagong science and Technology Park, 33 Tangxun Hubei Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Patentee before: Wuhan Dangxia time culture creative Co.,Ltd.