CN106062715B - Method and apparatus for intelligent video pruning - Google Patents
- Publication number: CN106062715B
- Application number: CN201380082021.6A
- Authority
- CN
- China
- Prior art keywords
- video
- motion event
- video frame
- moving region
- storage period
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/254—Analysis of motion involving subtraction of images
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/172—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a picture, frame or field
- G06T2207/10016—Video; Image sequence
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20036—Morphological image processing
- G06T2207/20076—Probabilistic image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
According to at least one example embodiment, a method, and corresponding apparatus, for pruning video data includes detecting motion regions in video frames of the video data based on short-term and long-term changes associated with the content of the video data. Then, based on the detected motion regions, corresponding filtered motion regions, and a change pattern associated with the video data, motion events associated with the content of the video data are identified. Based on the identified motion events, a storage mode for storing the video frames of the video data is determined. The video frames are stored according to the determined storage mode.
Description
Background
Uncompressed video data is a series of video frames, or images. As such, storing video data is typically associated with large memory consumption. Many video applications involve storing large amounts of video data. For example, video surveillance systems are commonly designed to capture video data continuously and to store the captured video for potential future access whenever needed.
Summary of the invention
In video applications that involve storing large amounts of video content, the available memory capacity typically imposes a limit on how much video data can be stored. In such cases, it is useful to identify video content of relatively high importance and store it preferentially.
According to at least one example embodiment, a method, and corresponding apparatus, for pruning video data includes detecting motion regions in video frames of the video data based on short-term and long-term changes associated with the content of the video data. Based on the detected motion regions, corresponding filtered motion regions, and a change pattern associated with the video data, motion events associated with the content of the video data are identified. Then, based on the identified motion events, a storage period for the video frames of the video data is determined. The video frames are stored according to the determined storage period.

Filtered motion regions are also identified. For example, the filtered-out regions are associated with repetitive motion. That is, sub-regions that fall within the identified motion regions and are associated with repetitive motion are identified. The sub-regions associated with repetitive motion are then excluded from the identified motion regions, yielding the filtered motion regions. If the camera capturing the video data is not fixed, e.g., a rotating camera, the changes in the content of the video data caused by the camera motion can be estimated and filtered out.
According to at least one example embodiment, the detection of motion regions and filtered motion regions is carried out over separate detection time periods. Within a detection time period, one or more motion descriptors are generated for each corresponding video frame. For example, for each video frame, a first descriptor is generated based on the corresponding identified motion regions, and a second descriptor is generated based on the corresponding filtered motion regions. A motion descriptor includes an indication of the distribution of motion regions within the video frame, an indication of the relative amount of motion regions within the video frame, and/or the maximum number of moving pixels within a single video block.
Once the motion descriptors are generated, the video frames within separate analysis time periods are analyzed, and one or more indicators of the change pattern within each analysis time period are determined. The determined indicators are stored for use in identifying the motion events. Motion events are identified based on one or more thresholds. For example, the thresholds may include a minimum duration of motion, a minimum motion level, and a maximum gap period between two consecutive motion events. According to one example aspect, the thresholds are set adaptively for each analysis time period. For example, motion events are initially identified within an analysis time period based on default thresholds. If the number of identified motion events is deemed too high, the default thresholds are raised and the process of identifying motion events is repeated. If, however, the number of identified motion events is deemed too low, the default thresholds are lowered, and the process of identifying motion events is repeated.

According to at least one example embodiment, the motion events are ranked. Based on the ranking of a motion event, the storage period of the corresponding video frames is determined. For example, the higher the rank associated with a motion event, the longer the corresponding video frames are stored.
Brief Description of the Drawings
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings, in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
Fig. 1 is a diagram illustrating a video surveillance system, according to at least one example embodiment;

Fig. 2 is a flowchart illustrating a method of pruning video data, according to at least one example embodiment;

Fig. 3 is a flowchart illustrating a method of identifying and filtering moving blocks, according to at least one example embodiment;

Fig. 4 is a flowchart illustrating a method of ranking motion events, according to at least one example embodiment; and

Fig. 5 is a table illustrating modes of storing video data and the corresponding memory consumption, according to at least one example embodiment.
Detailed Description
A description of example embodiments of the invention follows.
Fig. 1 is a diagram illustrating a video surveillance system 100, according to at least one example embodiment. The video surveillance system 100 includes one or more cameras, e.g., 101a-101d, a processing device 110, and a storage device 120. The processing device 110 may be a computer device, such as a personal computer, laptop computer, server, tablet, handheld device, or the like, or any computing device having a processor and a memory storing computer code instructions. For example, the processing device 110 may be a processor embedded in one of the camera(s), e.g., 101a-101d, or a video server configured to store and analyze video data captured by the camera(s), e.g., 101a-101d. The storage device 120 is a memory device configured to store video data recorded by the camera(s), e.g., 101a-101d, such as an external storage device, a server, or the like. According to one example implementation, the processing device 110 and the storage device 120 are components of the same electronic device. Alternatively, the processing device 110 and the storage device 120 are implemented in one, or each, of the cameras 101a-101d. According to yet another implementation, the camera(s), e.g., 101a-101d, the processing device 110, and the storage device 120 are coupled to each other through a communications network, such as a local area network, a wide area network, a combination thereof, or the like. The camera(s), e.g., 101a-101d, may be coupled to the processing device 110 or the storage device 120 through wired or wireless links 105.
In the video surveillance system 100, the camera(s), e.g., 101a-101d, typically capture video data continuously. The recorded video data is usually stored for some period of time for potential access when needed. The longer the video data is stored, the higher the chance of providing access to previously recorded events. Specifically, the video data recorded in a single day amounts to millions of video frames to be stored, or equivalently gigabytes of video data. Given the memory storage capacity of the surveillance system 100, or equivalently of the storage device 120, the design goal is to store as much video content of interest as possible, and therefore to preserve records of as many events of interest as possible.

Storing all the recorded video frames results in storing video content of interest as well as video content with no events of interest. As such, video content with no events of interest would consume memory space that could otherwise be used to store video content indicating events of interest. Users of the video surveillance system 100 usually want to keep records of events of interest for as long as possible. Furthermore, simply storing the intra-coded frames (I-frames) and discarding the inter-predicted frames (P-frames) does not preserve enough stored information for later access to events of interest.
According to at least one example embodiment, the captured video data is analyzed, and motion events are identified based on motion, or temporal-change, information obtained from the captured video data. Herein, a motion event is defined as a sequence of video frames, or a time period, over which significant motion is detected almost continuously across adjacent video frames. A motion event may include one or more relatively short periods during which no significant motion is detected. The obtained motion information includes raw and filtered motion information. The raw motion information includes motion information determined based on the short-term and long-term changes detected from the captured video data. The filtered motion information is generated by excluding at least repetitive motion from the raw motion.

The identified motion events are used to determine storage modes, and the corresponding video frames are stored according to those storage modes. For example, the more relevant a motion event is, the longer the corresponding video frames are stored in the storage device 120. According to one example implementation, the identified motion events are categorized, or ranked, according to relevance or importance through unsupervised learning based on the corresponding motion information. The categorization, or ranking, information for each motion event is then used to determine the storage mode for the video frames associated with that motion event. Given a storage capacity, the determined storage modes allow relevant, or information-rich, video data to be stored much longer than typical video pruning techniques do.
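A rank-to-storage-period mapping of this kind can be sketched as follows. This is a minimal illustration only: the rank values and day counts are assumptions for the sake of the example, not figures taken from the patent (Fig. 5's actual table is not reproduced in this excerpt).

```python
def retention_days(rank, schedule=None):
    """Map a motion-event rank to a storage period in days (sketch).

    Hypothetical schedule: higher-ranked (more relevant) events keep
    their video frames longer; ranks beyond the schedule receive the
    longest period. All day counts here are illustrative.
    """
    if schedule is None:
        schedule = {0: 1, 1: 7, 2: 30}  # rank -> days retained
    return schedule.get(rank, max(schedule.values()))
```

Under such a schedule, frames belonging to unranked background footage expire quickly, while frames tied to highly ranked events survive long enough for later review.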
Fig. 2 is a flowchart illustrating a method 200 of pruning video data, according to at least one example embodiment. Video data captured by the camera(s), e.g., 101a-101d, may be stored directly in the storage device 120, or forwarded to the processing device 110 for processing before being stored in the storage device 120. At block 210, raw motion regions, e.g., pixels, are detected based on short-term and long-term temporal changes associated with the current video frame. The long-term changes may be detected using a background subtraction technique, such as a Gaussian mixture model (GMM) or a running average. The short-term changes may be detected, for example, by subtracting the previous video frame from the current video frame.

The pixels of the current video frame are then labeled as "moving" or "static" pixels using thresholds. Pixels of the current frame that differ from the background by more than a threshold are labeled "moving" pixels. Also, pixels of the current frame that differ from the corresponding pixels of the previous frame by more than a threshold are labeled "moving" pixels. The long-term and short-term changes may use the same threshold or different thresholds. The threshold(s) may be dynamic, based on the content of the current video frame. To suppress the effect of illumination changes, normalized cross-correlation may be used as a measure in detecting moving pixels. A person skilled in the art should appreciate that using both the long-term and short-term changes yields a less noisy estimation of the moving pixels.
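The pixel labeling described above can be sketched as follows, assuming grayscale frames held as NumPy arrays. A running-average background stands in for a full GMM here, and the threshold and update-rate values are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def label_motion_pixels(frame, prev_frame, background,
                        short_thresh=25, long_thresh=25):
    """Label each pixel 'moving' (True) or 'static' (False).

    A pixel is 'moving' if it differs from the previous frame by more
    than short_thresh (short-term change) OR from the background model
    by more than long_thresh (long-term change).
    """
    short_term = np.abs(frame.astype(int) - prev_frame.astype(int)) > short_thresh
    long_term = np.abs(frame.astype(int) - background.astype(int)) > long_thresh
    return short_term | long_term

def update_background(background, frame, alpha=0.05):
    """Running-average background update (a simple stand-in for GMM)."""
    return (1 - alpha) * background + alpha * frame
```

Combining the two masks with OR, as here, is one plausible reading of using "both long-term and short-term changes"; an AND combination would suppress more noise at the cost of sensitivity.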
Once the moving pixels are detected, the current video frame is divided into N-by-N video blocks, where N is an integer. Each block is then labeled as a "moving" or "static" block using a second threshold. If the count of "moving" pixels in a block is greater than the second threshold, the block is labeled a "moving" block; otherwise, it is labeled a "static" block. Morphological erosion, or opening, may be applied to remove noisy "moving" pixels before labeling the video blocks. For example, when morphological erosion or opening is applied to the current video frame, relatively small or thin "moving" regions are usually eliminated.
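A minimal sketch of the erosion-then-block-labeling step, under the assumption of a 3x3 erosion kernel and a NumPy boolean pixel mask; the block size and count threshold are illustrative.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: suppresses isolated noisy 'moving' pixels
    and thin 'moving' regions before block labeling."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def label_motion_blocks(pixel_mask, block=8, count_thresh=4):
    """Split the mask into block x block tiles; a tile is a 'moving'
    block when its moving-pixel count exceeds count_thresh."""
    h, w = pixel_mask.shape
    tiles = pixel_mask[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    counts = tiles.sum(axis=(1, 3))
    return counts > count_thresh
```

In practice a library routine such as SciPy's `binary_erosion` or OpenCV's morphology operations would replace the hand-rolled loop; it is written out here only to keep the sketch self-contained.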
At block 220, the detected "moving" blocks are filtered, and the corresponding filtered "moving" blocks are determined. In filtering the moving blocks, the labels of the corresponding video blocks in at least one previous video frame are taken into account. In other words, the filtered motion is determined based on motion information in the current video frame as well as in previous video frames.
Fig. 3 is a flowchart illustrating a method of identifying and filtering moving blocks, according to at least one example embodiment. A motion change counter (MCC) is defined for each video block to indicate the number of times the video block's label flips from "static" to "moving" within the corresponding detection time period. As such, the MCC is used to detect repetitive, or fluctuating, motion. For example, each detection time period has a predefined duration, e.g., five or ten minutes, and the MCC is reset at the beginning of each detection time period. A motion change history image (MCHI) is introduced to keep the motion history of each video block. For a video block in the current video frame, the label of the video block is checked at block 310. If the video block is labeled "static", the corresponding MCHI is decremented at block 315. If, however, the video block is labeled "moving", the label of the same video block in the previous video frame is checked at block 320. If the label of the video block in the previous frame is "static", the MCC corresponding to the video block is incremented at block 325. If the current video block is labeled "moving", the MCHI of the same video block is set to a maximum value at block 330. The MCHI is also reset for each new detection time period.

At block 340, the MCC for the video block is compared with a label-change threshold. If the MCC is greater than the label-change threshold, the corresponding noise mask entry is set to 1 at block 350, indicating that the corresponding detected motion is noise, or irrelevant motion. For example, users of the video surveillance system 100 are usually not interested in tracking a waving flag or moving tree leaves and branches. If the MCC corresponding to the video block is less than the label-change threshold, the corresponding noise mask entry is set to 0 at block 345. If the noise mask entry for the video block is set to 0 (block 360), the history mask entry, which indicates the history of the noise mask entries of the video block in previous video frames, is checked at block 365. If the history mask entry is greater than 0, the noise mask of the video block is set to 1 at block 380. If, at block 360, the noise mask entry is found to be equal to 1, the corresponding history mask entry is set to a positive value at block 370. The history mask keeps track of past noise mask values and is used to suppress noise even if there is no motion in the corresponding video scene for a period of time. The process described with respect to Fig. 3 is repeated for all the video blocks in the current video frame. The noise mask is then used to filter the raw motion detected in the current video frame.
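The MCC/MCHI bookkeeping above can be sketched as a small per-block state machine. This is one possible reading of the Fig. 3 flow, simplified: the MCHI decrement is folded in but not otherwise used, and the flip threshold, MCHI maximum, and history duration are illustrative assumptions.

```python
class BlockNoiseFilter:
    """Repeated-motion filter sketch over a flat list of video blocks.

    mcc     - motion change counter: static -> moving flips this period
    mchi    - motion change history image: recency of motion
    history - keeps a block suppressed for a while even if it goes quiet
    A block whose MCC exceeds flip_thresh (e.g. a waving flag) is
    masked as noise and excluded from the filtered motion.
    """
    def __init__(self, n_blocks, flip_thresh=5, mchi_max=30, history_val=100):
        self.flip_thresh = flip_thresh
        self.mchi_max = mchi_max
        self.history_val = history_val
        self.mcc = [0] * n_blocks
        self.mchi = [0] * n_blocks
        self.history = [0] * n_blocks
        self.prev_moving = [False] * n_blocks

    def update(self, moving):
        """moving: per-block 'moving' labels for this frame.
        Returns the noise mask (True = suppress this block)."""
        noise = [False] * len(moving)
        for i, m in enumerate(moving):
            if m:
                if not self.prev_moving[i]:
                    self.mcc[i] += 1          # static -> moving flip
                self.mchi[i] = self.mchi_max
            else:
                self.mchi[i] = max(0, self.mchi[i] - 1)
            if self.mcc[i] > self.flip_thresh:
                noise[i] = True
                self.history[i] = self.history_val
            elif self.history[i] > 0:
                noise[i] = True               # still suppressed by history
            self.history[i] = max(0, self.history[i] - 1)
            self.prev_moving[i] = m
        return noise

    def filtered(self, moving):
        """Raw labels minus the blocks currently masked as noise."""
        noise = self.update(moving)
        return [m and not n for m, n in zip(moving, noise)]
```

A block that flickers every frame accumulates flips quickly and is masked, while a block with steady motion flips only once and passes through to the filtered motion.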
At block 230 of Fig. 2, at least one motion descriptor for the current video frame is generated. The motion descriptor includes a motion activity value, a motion distribution indicator, and the maximum number of "moving" pixels within a "moving" video block. The motion activity value indicates the percentage of moving blocks detected in the corresponding video frame. The motion distribution indicator indicates the distribution of the moving video blocks within the corresponding video frame. For example, the motion distribution indicator may be a bit sequence, with each bit indicating whether the corresponding video block is labeled "moving" or "static". For example, the whole video frame may be evenly divided into 8 x 8, or finer, video blocks. Each bin of the motion distribution indicator corresponds to a video block in the video frame. For a video block containing more than a threshold number of moving pixels, the corresponding bit is set to 1. For a video block containing fewer than the threshold number of moving pixels, the corresponding bit is set to 0. As such, a 32-bit integer represents 32 video blocks, which is an efficient way to describe the motion distribution within the video frame. The maximum number of "moving" pixels within a "moving" video block is a statistical parameter for assessing the amount of motion in the video frame. A person skilled in the art should appreciate that other statistical parameters may be used, such as the average number of "moving" pixels per "moving" video block, the minimum number of "moving" pixels in a "moving" video block, a combination thereof, or the like.

The parameters associated with the motion descriptors are used to distinguish motion from scattered noise. According to at least one example embodiment, two motion descriptors are generated for each processed video frame. One motion descriptor corresponds to the raw motion detected in the video frame, and the other motion descriptor is obtained based on the corresponding filtered motion.
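Descriptor generation for one frame, over 32 blocks packed into a 32-bit integer as described above, can be sketched as follows; the function name and tuple layout are assumptions for illustration.

```python
def motion_descriptor(block_labels, pixel_counts):
    """Per-frame motion descriptor over 32 video blocks (sketch).

    block_labels : 32 booleans, True for 'moving' blocks
    pixel_counts : moving-pixel count inside each block
    Returns (activity, bits, max_pixels):
      activity   - percentage of blocks labeled 'moving'
      bits       - 32-bit integer, bit i set iff block i is 'moving'
      max_pixels - largest moving-pixel count over all blocks
    """
    activity = 100.0 * sum(block_labels) / len(block_labels)
    bits = 0
    for i, moving in enumerate(block_labels):
        if moving:
            bits |= 1 << i
    return activity, bits, max(pixel_counts)
```

Computing one such tuple from the raw labels and another from the filtered labels yields the two descriptors per frame described above.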
According to at least one example embodiment, the video frames associated with a detection time period undergo the processes described in blocks 210-230. In other words, the video data recorded by the camera(s), e.g., 101a-101d, is processed as separate video sequences, each sequence of video frames corresponding to one detection time period. The video frames within each video sequence associated with a detection time period are processed together. At block 240, the motion activity patterns within the current analysis time period are analyzed, and the corresponding measures are computed and stored. Specifically, the probability distribution P_i^t of motion in each video block i within the current detection time period is estimated. The superscript t refers to the analysis time period associated with the processed video frames. For example, analysis time periods may be defined on an hourly basis, with each hour of the day representing one time period. Alternatively, analysis time periods may be defined differently, e.g., not all the time periods have the same duration. For example, an analysis time period may be one hour or two hours long. A person skilled in the art should appreciate that other durations may be defined for the analysis time periods. Alternatively, an analysis time period may be defined in terms of a number of consecutive video frames. A person skilled in the art should also appreciate that other statistical, or non-statistical, parameters may be computed as part of the analysis of the motion patterns in each analysis time period. Furthermore, when characterizing the motion pattern associated with a given detection time period, the parameters associated with the same analysis time period of one or more previous days may be merged.
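A simple empirical estimate of P_i^t, under the assumption that it is the fraction of frames in the analysis time period in which block i was labeled "moving" (the patent does not spell out the estimator), can be sketched as:

```python
def block_motion_probability(per_frame_labels):
    """Estimate P_i^t for one analysis time period as the fraction of
    frames in which video block i was labeled 'moving'.

    per_frame_labels : list of per-frame block-label lists (booleans)
    """
    n_frames = len(per_frame_labels)
    n_blocks = len(per_frame_labels[0])
    return [sum(frame[i] for frame in per_frame_labels) / n_frames
            for i in range(n_blocks)]
```

Blocks with a low estimated P_i^t are ones where motion is rare, which matters later when unexpected motion is used to raise an event's rank.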
At block 250, motion events within the current analysis time period are detected based on the motion descriptors associated with the video frames in the current analysis time period and the parameters obtained as part of the analysis of the motion patterns associated with the current analysis time period. A motion event is defined herein as a sequence of video frames, or a corresponding time interval, within the current analysis time period over which significant motion is detected almost continuously across adjacent video frames. A motion event may include one or more relatively short periods during which no significant motion is detected. According to at least one example embodiment, motion events are identified based on the amount of motion activity in the corresponding video frames, the length of the time interval, or the number of consecutive video frames, carrying almost continuous motion, and the length of the interruption period(s) associated with "static" video frames within the time interval carrying almost continuous motion.

Specifically, a motion activity threshold is used to label each video frame within the current analysis time period as a "moving" frame or a "static" frame. If the corresponding motion activity level is greater than the motion activity threshold, the video frame is labeled a "moving" frame. For example, motion activity may be defined over the range [0, 100], where 0 indicates no motion in the scene and 100 indicates full motion in the scene. The same range is used for both the raw and filtered motion. A person skilled in the art should appreciate that motion activity may be defined differently.
According to one example embodiment, a first group of motion events is detected based on the detected raw motion, and a second group of motion events is detected based on the corresponding filtered motion. Once the video frames of the current time period are labeled, motion events are detected using two time thresholds. A first time threshold indicates the minimum duration of almost continuous motion for detecting a motion event. A second time threshold indicates the minimum gap period between any two consecutive motion events. Specifically, if a detected period of almost continuous motion is longer than the first time threshold, a corresponding motion event is identified based on the detected period of almost continuous motion; otherwise, the detected period is ignored. Furthermore, if the gap between two detected adjacent motion events is smaller than the second time threshold, the two detected motion events are merged into a single, longer motion event.
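The two-threshold event detection can be sketched as follows, with frames reduced to boolean "moving" labels. The ordering chosen here (merge small gaps first, then drop short events) is one plausible reading; the patent does not fix the order, and the threshold values are illustrative.

```python
def detect_events(frame_labels, min_len=3, max_gap=2):
    """Turn per-frame 'moving'/'static' labels into motion events.

    min_len : first time threshold - minimum event duration (frames)
    max_gap : second time threshold - events separated by fewer than
              this many static frames are merged
    Returns a list of (start, end) frame-index pairs, end exclusive.
    """
    # 1. collect raw runs of consecutive 'moving' frames
    runs, start = [], None
    for i, moving in enumerate(frame_labels):
        if moving and start is None:
            start = i
        elif not moving and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(frame_labels)))
    # 2. merge runs separated by gaps smaller than max_gap
    merged = []
    for run in runs:
        if merged and run[0] - merged[-1][1] < max_gap:
            merged[-1] = (merged[-1][0], run[1])
        else:
            merged.append(run)
    # 3. drop events shorter than the minimum duration
    return [r for r in merged if r[1] - r[0] >= min_len]
```

Running this once on the raw-motion labels and once on the filtered-motion labels yields the two groups of events described above.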
According to at least one example embodiment, at least one of the motion activity threshold, the first time threshold, and the second time threshold is defined based, at least in part, on the motion pattern analysis for the corresponding analysis time period. For example, the threshold(s) may be computed based on the motion activity levels and the frequency of motion events during the corresponding detection time period. Specifically, the thresholding parameters may be set iteratively, e.g., based on the number of detected motion events. For example, default thresholding parameter values may be used in a first iteration, and the default thresholding parameter values may then be updated in each following iteration based on the number of motion events detected in the preceding iteration. A person skilled in the art should appreciate that parameters associated with the same analysis time period of at least the previous day may also be used in setting the threshold(s). Using adaptive thresholds, e.g., based on the corresponding motion pattern analysis, allows more reliable motion event detection at different times of the day. For example, using a fixed threshold for all time periods makes it difficult to detect motion during both the day and the night, because the motion levels during the day and during the night are entirely different.

According to one example implementation, the adaptive thresholds are determined iteratively. Given a default motion activity threshold, initial motion events are identified. If too few motion events are detected, the adaptive threshold is lowered and motion events are detected again. This process is repeated until the number of detected motion events is greater than a corresponding minimum value. Conversely, if too many events, or events of excessively long duration, are detected, the motion activity threshold is raised and motion events are detected again. This process is repeated until the number of detected motion events falls within a predefined range.
Once the motion events are detected, the motion events detected within the current detection time period based on the raw and filtered motion are ranked at block 260. The ranking is based on the detected raw and filtered motion. The ranking of motion events may be viewed as a way of categorizing motion events based on their importance, or relevance. For example, a motion event associated with both raw and filtered motion is deemed more relevant than a motion event associated with raw motion only.
Fig. 4 is a flow chart illustrating a method of rating motion events according to at least one example embodiment. First, at blocks 405 and 415, one or more hierarchies are defined for motion events detected based on raw motion and for motion events detected based on filtered motion. A hierarchy includes several layers. For example, layer 0 is the top layer, with the coarsest time interval, e.g., a five-minute duration. That is, the analysis time period is divided into, e.g., twelve five-minute periods. Subsequent layers include increasingly finer time intervals, e.g., one-minute, 20-second, 5-second, and/or even 1-second intervals.
At blocks 440 to 455, the ratings of motion events at the lowest level of the rating hierarchy are computed. Based on the rating information computed at the lowest level of the hierarchy, the ratings of motion events at higher levels of the hierarchy, with increasingly coarse time granularity, are constructed, as shown in blocks 470 to 485. For example, the time interval at the lowest level is 1 second, and the time intervals at higher levels may be 10 seconds, 1 minute, and/or 5 minutes. According to an example embodiment, a sequence of video frames in which no raw motion is detected is assigned the lowest rating. A higher rating is assigned to motion events in which only raw motion is detected. A still higher rating is assigned to motion events in which both raw and filtered motion are detected.
Within a given hierarchy, the rating of motion events proceeds starting from the bottom layer, containing the finest time intervals, and ending with the coarsest, or largest, time interval. The ratings of motion events are propagated from the finest intervals to the correspondingly larger intervals. The hierarchy is constructed layer by layer, from the bottom layer, e.g., layer N-1, upward. From layer j+1 to layer j, the maximum rating over the finer time intervals is assigned to the corresponding coarser time interval, until the top, or coarsest, layer, e.g., layer 0, is reached.
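The bottom-up construction — per-interval ratings at the finest layer, with each coarser interval taking the maximum rating of the finer intervals it covers — can be sketched as follows. The 0/1/2 rating scale (no motion / raw motion only / raw and filtered motion) follows the text; the interval sizes and coarsening factors are illustrative assumptions.

```python
def rate_finest(raw, filtered):
    """Per-interval rating at the bottom layer: 0 = no motion,
    1 = raw motion only, 2 = both raw and filtered motion."""
    return [2 if f else (1 if r else 0) for r, f in zip(raw, filtered)]

def coarsen(ratings, factor):
    """Build the next layer up: each coarser interval is assigned the
    maximum rating of the `factor` finer intervals it covers."""
    return [max(ratings[i:i + factor])
            for i in range(0, len(ratings), factor)]

def build_hierarchy(raw, filtered, factors=(10, 6, 5)):
    """From layer N-1 (finest, e.g. 1 s intervals) up to layer 0
    (coarsest, e.g. 5 min): 1 s -> 10 s -> 1 min -> 5 min with the
    assumed factors. Returns layers ordered coarsest-first."""
    layers = [rate_finest(raw, filtered)]
    for f in factors:
        layers.append(coarsen(layers[-1], f))
    return layers[::-1]  # layer 0 at index 0, finest layer last
```

Because each coarser rating is the maximum of its children, a single second of filtered motion is enough to mark the enclosing five-minute interval as highly rated.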
At block 270, the computed ratings of motion events are adjusted based on, e.g., the motion pattern statistics or parameters computed at block 240 and on user interaction with video frames associated with the same motion events. For example, the rating of a motion event with motion detected at a video block having a relatively low motion probability, e.g., P_{i,t}, is boosted as an indication of one or more unexpected events among the motion events. In addition, user queries are tracked and stored in the processing device 110 or the database 120. For example, the start and end times of user queries, the motion patterns associated with video frames that are the subject of user queries, and the times at which users access video segments are stored by the monitoring system 100. Using the stored information related to user queries, the ratings of motion events associated with video frames that were queried or accessed are boosted. Similarly, the ratings of motion events that resemble motion events associated with user queries and that are detected at the same time slot of the day may also be boosted as part of the rating adjustment.
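A minimal sketch of the two adjustments — boosting events that occur at low-probability blocks and events overlapping logged user queries — might look like the following. The event record layout, probability cutoff, and boost amount are illustrative assumptions, not values from the patent.

```python
def adjust_ratings(events, motion_prob, query_log,
                   low_prob=0.05, boost=1):
    """events: list of dicts with 'start', 'end', 'block', 'rating'.
    motion_prob: per-block motion probability, i.e. P_{i,t}.
    query_log: list of (start, end) intervals that users queried or
    accessed. Thresholds and boost size are assumptions."""
    for ev in events:
        # Unexpected-event boost: motion where motion is rare.
        if motion_prob.get(ev["block"], 1.0) < low_prob:
            ev["rating"] += boost
        # User-interest boost: the event overlaps a logged query span.
        if any(ev["start"] < q_end and q_start < ev["end"]
               for q_start, q_end in query_log):
            ev["rating"] += boost
    return events
```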
At block 280, the manner in which the video data is stored is determined based on the detected motion events and the corresponding ratings, and the video data is stored accordingly in the database 120. According to at least one example embodiment, video segments associated with the lowest-rated motion events are deleted first. However, video segments associated with higher-rated motion events are stored for a longer time, based on the corresponding rating and the free storage space available at the database 120.
Fig. 5 shows a table of a manner of storing video data based on the detected raw events and the corresponding rating(s), according to at least one example embodiment. All video segments, regardless of their rating, are stored for a first predetermined time period, e.g., three days. Once the first predetermined time period has elapsed, only video segments associated with detected motion events, e.g., segments with a rating equal to or greater than 4, are stored for a second predetermined time period, e.g., days 4 to 6, and the other video segments are deleted. After the second predetermined time period, video segments associated with motion events having filtered motion, e.g., segments with a rating greater than 7, are stored for a third predetermined time period, e.g., days 7 to 10. In addition, the I-frames associated with motion events having only raw motion, e.g., with ratings between 4 and 7, remain stored during the third predetermined time period.
After the third predetermined period, only the I-frames corresponding to motion events having filtered motion are kept, for a fourth predetermined period, e.g., days 11 to 14. During a fifth predetermined period, e.g., days 15 to 18, only the key frames associated with motion events having only raw motion are kept and stored in the database 120. During a sixth predetermined period, e.g., days 19 to 28, only the key frames associated with motion events having filtered motion are kept and stored in the database 120. After the sixth predetermined time period, the key frames corresponding to motion events having filtered motion are deleted. A key frame is defined as the I-frame containing the most significant motion of the time period for the corresponding motion event. The last row of the table in Fig. 5 shows the storage consumption corresponding to each predetermined time period.
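The tiered schedule of Fig. 5 can be sketched as a policy that, given a segment's age and rating, decides what to retain. The rating cutoffs (4 and 7) and day boundaries follow the description; the returned labels and the exact boundary conditions are illustrative assumptions, since the table itself is not reproduced here.

```python
def retention(age_days, rating, has_filtered_motion):
    """What to keep for a video segment of a given age.
    Assumed labels: 'all' = full segment, 'iframes' = I-frames only,
    'keyframe' = key frame only, None = delete."""
    if age_days <= 3:                       # first period: keep everything
        return "all"
    if age_days <= 6:                       # second period: rated events only
        return "all" if rating >= 4 else None
    if age_days <= 10:                      # third period
        if rating > 7:
            return "all"
        return "iframes" if 4 <= rating <= 7 else None
    if age_days <= 14:                      # fourth period: filtered I-frames
        return "iframes" if has_filtered_motion else None
    if age_days <= 18:                      # fifth period: raw-only key frames
        return "keyframe" if rating >= 4 and not has_filtered_motion else None
    if age_days <= 28:                      # sixth period: filtered key frames
        return "keyframe" if has_filtered_motion else None
    return None                             # after day 28: nothing survives
```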
Assuming a video bitrate of 2 megabits per second for a single camera, storing the corresponding video data captured in one day consumes about 21 gigabytes of storage. If a storage capacity of 105 gigabytes is allocated per camera, the storage capacity allows only 5 days of video data to be stored if no pruning is applied. However, by applying the storage scheme described in Fig. 5, some video content remains stored for up to 28 days.
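The capacity figures can be checked directly; only the 2 Mbit/s bitrate and the 105 GB per-camera allocation come from the text (using 1 GB = 10^9 bytes, so the text's round figures of 21 GB/day and 5 days reflect mild rounding).

```python
bitrate_bps = 2_000_000                       # 2 Mbit/s per camera (from text)
gb_per_day = bitrate_bps / 8 * 86_400 / 1e9   # bytes/s * s/day -> GB/day
capacity_gb = 105                             # per-camera allocation (from text)
days = capacity_gb / gb_per_day               # how long storage lasts unpruned
print(round(gb_per_day, 1), round(days, 1))   # 21.6 4.9
```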
According to at least one example embodiment, video received from the capture camera(s), e.g., 101a-101d, is processed and analyzed on the fly by the processing device 110. For example, the received video data is stored into separate video files, each video file corresponding to a detection time period. For the original video data, the capture time and position of each I-frame and of the first P-frame in each group of pictures (GOP) in the video file are recorded in the database. The start and end times of the rated motion events and the identification of the video events are also stored. According to an example implementation, the I-frames and P-frames of the video data are stored separately. In this way, all P-frames can be deleted together. Furthermore, keeping track of the position of every GOP makes the video pruning process easy to perform. In addition, the layered rating is particularly useful if the video is to be pruned hierarchically.
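Recording, per GOP, the capture time and byte offset of the I-frame and of the first P-frame — with I- and P-frames kept in separate files — could look like the following. The record layout, field names, and sentinel values are illustrative assumptions.

```python
from dataclasses import dataclass, replace

@dataclass
class GopRecord:
    """Index entry kept in the database for one group of pictures (GOP)."""
    gop_id: int
    i_frame_time: float    # capture time of the I-frame
    i_frame_offset: int    # byte position within the I-frame file
    p_frame_time: float    # capture time of the first P-frame
    p_frame_offset: int    # byte position within the P-frame file

def prune_p_frames(index):
    """Because the P-frames live in their own file, dropping them all only
    requires clearing the P-frame side of the index (and deleting that
    file); the I-frames and their index entries survive untouched."""
    return [replace(r, p_frame_time=-1.0, p_frame_offset=-1) for r in index]
```

Keeping the two frame types in separate files is what makes the "delete all P-frames" tier of the schedule a cheap file deletion rather than a rewrite of interleaved video data.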
Those skilled in the art will understand that the video pruning process described herein is an example embodiment and should not be construed in a limiting sense. For example, instead of rating the identified motion events, motion events may be rated based on the corresponding detected moving regions. Furthermore, if the capture camera is mobile, changes caused by the camera motion may be filtered out. Those skilled in the art will also appreciate that the video pruning process described herein may be applied to video applications other than video surveillance. Moreover, instead of defining time periods for storing video frames, a variable frame rate may be defined based on the detected motion events. Such a variable frame rate may then be used in video data compression or video data transmission. Alternatively, a variable video resolution or variable video quality may be defined based on the detected motion events.
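Mapping event ratings to a variable frame rate instead of a retention period could be sketched as follows; the specific rating-to-fps mapping is an illustrative assumption, not taken from the patent.

```python
def frame_rate_for(rating, base_fps=30):
    """Higher-rated motion events keep more temporal detail; the
    cutoffs and rates chosen here are illustrative assumptions."""
    if rating >= 8:
        return base_fps          # full rate for the most relevant events
    if rating >= 4:
        return base_fps // 2     # half rate for ordinary motion events
    return 1                     # ~1 fps (I-frame-like) when nothing happens
```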
It should be understood that the example embodiments described above may be implemented in many different ways. In some instances, the various methods and machines described herein may each be implemented by a physical, virtual, or hybrid general-purpose or special-purpose computer having a central processing unit, memory, disk or other mass storage, communication interface(s), input/output (I/O) device(s), and other peripherals. The general-purpose or special-purpose computer is transformed into a machine that performs the methods described above, e.g., by loading software instructions into a data processor and then causing the instructions to be executed to carry out the functions described herein.
As is known in the art, such a computer may contain a system bus, where a bus is a set of hardware lines used for data transfer among the components of the computer or processing system. The bus or buses are essentially shared conduits that connect the different elements of the computer system, e.g., processor, disk storage, memory, input/output ports, network ports, and so on, and that enable the transfer of information between the elements. One or more central processing units are attached to the system bus and provide for the execution of computer instructions. Also attached to the system bus are typically I/O device interfaces for connecting various input and output devices, e.g., keyboard, mouse, display, printer, speakers, to the computer. Network interface(s) allow the computer to connect to various other devices attached to a network. Memory provides volatile storage for computer software instructions and data used to implement an embodiment. Disk or other mass storage provides non-volatile storage for the computer software instructions and data used to implement, for example, the various procedures described herein.
Embodiments may therefore typically be implemented in hardware, firmware, software, or any combination thereof.
In certain embodiments, the procedures, devices, and processes described herein constitute a computer program product, including a computer-readable medium, e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, or tapes, that provides at least a portion of the software instructions for the system. Such a computer program product may be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication, and/or wireless connection.
Embodiments may also be implemented as instructions stored on a non-transitory machine-readable medium, which may be read and executed by one or more processors. A non-transitory machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine, e.g., a computing device. For example, a non-transitory machine-readable medium may include read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others.
Further, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, and so on.
It should also be understood that the flow diagrams, block diagrams, and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate that the block and network diagrams, and the number of block and network diagrams illustrating the execution of the embodiments, be implemented in a particular way.
Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and thus the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
While the present invention has been particularly shown and described with reference to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention encompassed by the appended claims.
Claims (27)
1. A method of pruning video data, comprising:
detecting, by a computer device, moving regions within video frames of the video data based on short-term and long-term changes associated with content of the video data;
identifying subregions associated with repetitive motion within the moving regions identified;
identifying filtered moving regions as the identified moving regions excluding the subregions associated with repetitive motion;
identifying motion events associated with the content of the video data based on the detected moving regions, the filtered moving regions, and change patterns associated with the video data;
determining storage time periods for video frames of the video data based on the motion events identified; and
storing, on a video playback medium, video frames associated with a first motion event for a first storage time period and storing video frames associated with a second motion event for a second storage time period, the second storage time period being longer than the first storage time period, the video frames stored for the second storage time period being a subset of the video frames stored for the first storage time period, other video frames stored for the first storage time period being deleted from the playback medium upon expiration of the first storage time period.
2. The method of claim 1, further comprising generating one or more motion descriptors for each frame of the video data.
3. The method of claim 2, wherein generating the one or more motion descriptors for each frame of the video data includes:
generating a first descriptor for each video frame based on the corresponding identified moving regions; and
generating a second descriptor for each video frame based on the corresponding filtered moving regions.
4. The method of claim 2, wherein each motion descriptor includes:
an indication of a distribution of the moving regions within each video frame; and
an indication of a relative amount of the moving regions within each video frame.
5. The method of claim 1, further comprising:
determining one or more indicators of change patterns associated with the video data; and
storing the determined one or more indicators of change patterns for use in identifying the motion events.
6. The method of claim 2, wherein identifying the motion events includes identifying the motion events based on the one or more motion descriptors generated and one or more threshold values.
7. The method of claim 6, wherein the one or more threshold values include at least one of:
a minimum time period of motion;
a minimum motion level; and
a maximum gap period between two consecutive motion events.
8. The method of claim 6, wherein at least one of the one or more threshold values is an adaptive threshold value.
9. The method of claim 8, wherein the at least one adaptive threshold value and the motion events are identified iteratively.
10. The method of claim 1, further comprising rating the motion events identified.
11. The method of claim 10, wherein motion events identified based on filtered moving regions are rated higher than motion events detected based on unfiltered moving regions.
12. The method of claim 10, wherein determining the storage time periods for video frames includes determining a storage time period of video frames associated with a motion event based on the rating of the motion event.
13. The method of claim 12, wherein video frames associated with highly rated motion events are stored for a longer time period than video frames associated with low-rated motion events.
14. An apparatus for pruning video data, comprising:
a processor; and
a memory with computer code instructions stored thereon,
the processor and the memory, with the computer code instructions stored thereon, being configured to cause the apparatus to:
detect moving regions within video frames of the video data based on short-term and long-term changes associated with content of the video data;
identify subregions associated with repetitive motion within the moving regions identified;
identify filtered moving regions as the identified moving regions excluding the subregions associated with repetitive motion;
identify motion events associated with the content of the video data based on the detected moving regions, the filtered moving regions, and change patterns associated with the video data;
determine storage time periods for video frames of the video data based on the motion events identified; and
store, on a video playback medium, video frames associated with a first motion event for a first storage time period and store video frames associated with a second motion event for a second storage time period, the second storage time period being longer than the first storage time period, the video frames stored for the second storage time period being a subset of the video frames stored for the first storage time period, other video frames stored for the first storage time period being deleted from the playback medium upon expiration of the first storage time period.
15. The apparatus of claim 14, wherein the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to generate one or more motion descriptors for each frame of the video data.
16. The apparatus of claim 15, wherein, in generating the one or more motion descriptors for each frame of the video data, the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to:
generate a first descriptor for each video frame based on the corresponding identified moving regions; and
generate a second descriptor for each video frame based on the corresponding filtered moving regions.
17. The apparatus of claim 15, wherein each motion descriptor includes:
an indication of a distribution of the moving regions within each video frame; and
an indication of a relative amount of the moving regions within each video frame.
18. The apparatus of claim 14, wherein the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to:
determine one or more indicators of change patterns associated with the video data; and
store the determined one or more indicators of change patterns for use in identifying the motion events.
19. The apparatus of claim 14, wherein, in identifying the motion events, the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to identify the motion events based on one or more threshold values.
20. The apparatus of claim 19, wherein the one or more threshold values include at least one of:
a minimum time period of motion;
a minimum motion level; and
a maximum gap period between two consecutive motion events.
21. The apparatus of claim 19, wherein at least one of the one or more threshold values is an adaptive threshold value.
22. The apparatus of claim 21, wherein the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to iteratively identify the at least one adaptive threshold value and the motion events.
23. The apparatus of claim 21, wherein the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to rate the motion events identified.
24. The apparatus of claim 23, wherein motion events identified based on filtered moving regions are rated higher than motion events detected based on unfiltered moving regions.
25. The apparatus of claim 22, wherein, in determining the storage time periods for video frames, the processor and the memory, with the computer code instructions stored thereon, are further configured to cause the apparatus to determine a storage time period of video frames associated with a motion event based on the rating of the motion event.
26. The apparatus of claim 25, wherein video frames associated with highly rated motion events are stored for a longer time period than video frames associated with low-rated motion events.
27. A non-transitory computer-readable medium with computer instructions stored thereon, the computer instructions, when executed by a processor, being configured to cause an apparatus to:
detect moving regions within video frames of video data based on short-term and long-term changes associated with content of the video data;
identify subregions associated with repetitive motion within the moving regions identified;
identify filtered moving regions as the identified moving regions excluding the subregions associated with repetitive motion;
identify motion events associated with the content of the video data based on the detected moving regions, the filtered moving regions, and change patterns associated with the video data;
determine storage time periods for video frames of the video data based on the motion events identified; and
store, on a video playback medium, video frames associated with a first motion event for a first storage time period and store video frames associated with a second motion event for a second storage time period, the second storage time period being longer than the first storage time period, the video frames stored for the second storage time period being a subset of the video frames stored for the first storage time period, other video frames stored for the first storage time period being deleted from the playback medium upon expiration of the first storage time period.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2013/077673 WO2015099704A1 (en) | 2013-12-24 | 2013-12-24 | Method and apparatus for intelligent video pruning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106062715A CN106062715A (en) | 2016-10-26 |
CN106062715B true CN106062715B (en) | 2019-06-04 |
Family
ID=53479370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380082021.6A Active CN106062715B (en) | 2013-12-24 | 2013-12-24 | The method and apparatus deleted for intelligent video |
Country Status (4)
Country | Link |
---|---|
US (1) | US10134145B2 (en) |
EP (1) | EP3087482B1 (en) |
CN (1) | CN106062715B (en) |
WO (1) | WO2015099704A1 (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10127783B2 (en) | 2014-07-07 | 2018-11-13 | Google Llc | Method and device for processing motion events |
US9501915B1 (en) | 2014-07-07 | 2016-11-22 | Google Inc. | Systems and methods for analyzing a video stream |
US10140827B2 (en) | 2014-07-07 | 2018-11-27 | Google Llc | Method and system for processing motion event notifications |
US9779307B2 (en) | 2014-07-07 | 2017-10-03 | Google Inc. | Method and system for non-causal zone search in video monitoring |
US9685194B2 (en) * | 2014-07-23 | 2017-06-20 | Gopro, Inc. | Voice-based video tagging |
USD782495S1 (en) | 2014-10-07 | 2017-03-28 | Google Inc. | Display screen or portion thereof with graphical user interface |
US9361011B1 (en) | 2015-06-14 | 2016-06-07 | Google Inc. | Methods and systems for presenting multiple live video feeds in a user interface |
DE102016206367A1 (en) * | 2016-04-15 | 2017-10-19 | Robert Bosch Gmbh | Camera device for the exterior of a building |
US10506237B1 (en) | 2016-05-27 | 2019-12-10 | Google Llc | Methods and devices for dynamic adaptation of encoding bitrate for video streaming |
US10380429B2 (en) | 2016-07-11 | 2019-08-13 | Google Llc | Methods and systems for person detection in a video feed |
US10192415B2 (en) | 2016-07-11 | 2019-01-29 | Google Llc | Methods and systems for providing intelligent alerts for events |
US10957171B2 (en) | 2016-07-11 | 2021-03-23 | Google Llc | Methods and systems for providing event alerts |
US11783010B2 (en) | 2017-05-30 | 2023-10-10 | Google Llc | Systems and methods of person recognition in video streams |
US10410086B2 (en) | 2017-05-30 | 2019-09-10 | Google Llc | Systems and methods of person recognition in video streams |
CN107613237B (en) * | 2017-09-14 | 2020-03-06 | 国网重庆市电力公司电力科学研究院 | Extraction method of video dynamic and static mixed key frames |
US10664688B2 (en) | 2017-09-20 | 2020-05-26 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US11134227B2 (en) | 2017-09-20 | 2021-09-28 | Google Llc | Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment |
US11399207B2 (en) * | 2018-02-02 | 2022-07-26 | Comcast Cable Communications, Llc | Image selection using motion data |
US11568624B1 (en) * | 2019-05-09 | 2023-01-31 | Objectvideo Labs, Llc | Managing virtual surveillance windows for video surveillance |
US11893795B2 (en) | 2019-12-09 | 2024-02-06 | Google Llc | Interacting with visitors of a connected home environment |
EP3968636A1 (en) | 2020-09-11 | 2022-03-16 | Axis AB | A method for providing prunable video |
EP3968635A1 (en) * | 2020-09-11 | 2022-03-16 | Axis AB | A method for providing prunable video |
CN113055705B (en) * | 2021-03-25 | 2022-08-19 | 郑州师范学院 | Cloud computing platform data storage method based on big data analysis |
KR20230015146A (en) * | 2021-07-22 | 2023-01-31 | 엘지전자 주식회사 | Air conditioner and method thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325691A (en) * | 2007-06-14 | 2008-12-17 | 清华大学 | Method and apparatus for tracing a plurality of observation model with fusion of differ durations |
CN102810159A (en) * | 2012-06-14 | 2012-12-05 | 西安电子科技大学 | Human body detecting method based on SURF (Speed Up Robust Feature) efficient matching kernel |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060165386A1 (en) | 2002-01-08 | 2006-07-27 | Cernium, Inc. | Object selective video recording |
US7480393B2 (en) * | 2003-11-19 | 2009-01-20 | Digimarc Corporation | Optimized digital watermarking functions for streaming data |
GB0502371D0 (en) * | 2005-02-04 | 2005-03-16 | British Telecomm | Identifying spurious regions in a video frame |
US9813707B2 (en) * | 2010-01-22 | 2017-11-07 | Thomson Licensing Dtv | Data pruning for video compression using example-based super-resolution |
US9171075B2 (en) * | 2010-12-30 | 2015-10-27 | Pelco, Inc. | Searching recorded video |
US8335350B2 (en) * | 2011-02-24 | 2012-12-18 | Eastman Kodak Company | Extracting motion information from digital video sequences |
US20130030875A1 (en) | 2011-07-29 | 2013-01-31 | Panasonic Corporation | System and method for site abnormality recording and notification |
2013
- 2013-12-24 CN CN201380082021.6A patent/CN106062715B/en active Active
- 2013-12-24 US US15/105,897 patent/US10134145B2/en active Active
- 2013-12-24 EP EP13900118.4A patent/EP3087482B1/en active Active
- 2013-12-24 WO PCT/US2013/077673 patent/WO2015099704A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325691A (en) * | 2007-06-14 | 2008-12-17 | 清华大学 | Method and apparatus for tracing a plurality of observation model with fusion of differ durations |
CN102810159A (en) * | 2012-06-14 | 2012-12-05 | 西安电子科技大学 | Human body detecting method based on SURF (Speed Up Robust Feature) efficient matching kernel |
Also Published As
Publication number | Publication date |
---|---|
CN106062715A (en) | 2016-10-26 |
EP3087482A1 (en) | 2016-11-02 |
US10134145B2 (en) | 2018-11-20 |
EP3087482A4 (en) | 2017-07-19 |
WO2015099704A1 (en) | 2015-07-02 |
US20170039729A1 (en) | 2017-02-09 |
EP3087482B1 (en) | 2019-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106062715B (en) | Method and apparatus for intelligent video pruning | |
US7428314B2 (en) | Monitoring an environment | |
US8107680B2 (en) | Monitoring an environment | |
Fuhl et al. | Automatic Generation of Saliency-based Areas of Interest for the Visualization and Analysis of Eye-tracking Data. | |
CN106992974B (en) | Live video information monitoring method, device and equipment | |
KR20080075091A (en) | Storage of video analysis data for real-time alerting and forensic analysis | |
CN108063914B (en) | Method and device for generating and playing monitoring video file and terminal equipment | |
US9456190B2 (en) | Systems and methods of determining retention of video surveillance data | |
Sanches et al. | Challenging situations for background subtraction algorithms | |
CN109289196A (en) | Game achieves processing method and processing device | |
CN112183179A (en) | Method of analyzing a plurality of EDSs and computer readable medium | |
CN110969645A (en) | Unsupervised abnormal track detection method and unsupervised abnormal track detection device for crowded scenes | |
CN104780310A (en) | Image blurring detection method and system and camera | |
CN113342622A (en) | Operation behavior auditing method and device and storage medium | |
CN116071133A (en) | Cross-border electronic commerce environment analysis method and system based on big data and computing equipment | |
CN110677309B (en) | Crowd clustering method and system, terminal and computer readable storage medium | |
Maity et al. | Block-based quantized histogram (BBQH) for efficient background modeling and foreground extraction in video | |
Yeo et al. | A framework for sub-window shot detection | |
Hjelm et al. | Vehicle Counting Using Video Metadata | |
CN111246796A (en) | System, method, computer program and computer interface for analyzing electroencephalographic information | |
CN113469142B (en) | Classification method, device and terminal for monitoring video time-space information fusion | |
Srilakshmi et al. | Shot boundary detection using structural similarity index | |
AU2004233448B2 (en) | Monitoring an environment | |
US20210337161A1 (en) | Video management apparatus, video management method and program | |
CN118656519A (en) | Method and device for retrieving video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |