US7733369B2 - View handling in video surveillance systems - Google Patents
- Publication number
- US7733369B2
- Authority
- US
- United States
- Prior art keywords
- view model
- video
- known view
- view
- next frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
          - H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
          - H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/10—Segmentation; Edge detection
          - G06T7/11—Region-based segmentation
        - G06T7/20—Analysis of motion
          - G06T7/254—Analysis of motion involving subtraction of images
Definitions
- This invention relates to surveillance systems. More specifically, the invention relates to a video-based surveillance system that is configured to run in an all-weather, 24/7 environment. Furthermore, the camera used in the surveillance system may be a pan-tilt-zoom (PTZ) camera, it may point to different scenes according to a schedule, and/or it may be in the form of a multiplexed camera system.
- An intelligent video surveillance (IVS) system should ideally detect, identify, track and classify targets in real-time. It should also send alerts in real-time if targets trigger user-defined rules.
- the performance of an IVS system is mainly measured by the detection rate and false alarm rate.
- a surveillance camera associated with an IVS system may have PTZ capability.
- the camera may point in one direction, and a user may define rules based on this particular view.
- the camera may point in some other direction, and in this situation, the user-defined rules used when the camera is pointing in the first direction may not make sense.
- the alerts generated would be false alarms.
- a camera points in different directions, corresponding to different scenes (for example, a water scene versus a non-water scene)
- different target detection algorithms may be desirable.
- an IVS system should ideally detect if the camera switches from view to view and should allow a user to configure views and to enable different video surveillance algorithms and to define different rules based on different views.
- an IVS system may be connected to multiple cameras, where video signals may be fed through a multiplexer, and the system should recognize which camera the current video signal corresponds to and which set of rules should be used.
- a camera may be moved, or the signal of a camera may be disconnected, possibly by suspicious activities, and in these situations, certain alerts should be sent to the user.
- a camera can not perform well under certain lighting conditions, for example, strong or low light, or a camera may have unusually high noise. In such situations, the IVS system should also notify the user that the video signal has a quality issue and/or that the camera should be checked.
- the present invention may be embodied as an algorithm, system modules, or computer-program product directed to an IVS system to handle multiple views, unexpected camera motion, unreasonable video quality, and/or the loss of camera signal.
- a video surveillance apparatus may comprise a content analysis engine to receive video input and to perform analysis of said video input; a view engine coupled to said content analysis engine to receive at least one output from said content analysis engine selected from the group consisting of video primitives, a background model, and content analysis engine state information; a rules engine coupled to said view engine to receive view identification information from said view engine; and an inference engine to perform video analysis based on said video primitives and a set of rules associated with a particular view.
- a video processing apparatus may comprise a content analysis engine coupled to receive video input and to generate video primitives, said content analysis engine further to perform one or more tasks selected from the group consisting of determining whether said one or more video frames include one or more bad frames and determining if a gross change has occurred.
- a method of video processing may comprise analyzing input video information to determine if a current video frame is directed to a same view as a previous video frame; determining whether a new view is present; and indicating a need to use video processing information pertaining to said new view if a new view is determined to be present.
- the invention may be embodied in the form of hardware, software, firmware, or combinations thereof.
- a “video” refers to motion pictures represented in analog and/or digital form. Examples of video include: television, movies, image sequences from a video camera or other observer, and computer-generated image sequences.
- a “frame” refers to a particular image or other discrete unit within a video.
- An “object” refers to an item of interest in a video. Examples of an object include: a person, a vehicle, an animal, and a physical subject.
- a “target” refers to the computer's model of an object.
- the target is derived from the image processing, and there is a one-to-one correspondence between targets and objects.
- “Foreground” refers to the area in a frame having meaningful change over time. For example, a walking person may be meaningful to a user, and should thus be considered as foreground. But some types of moving areas are not meaningful and should not be considered as foreground, such as water waves, tree leaves blowing, sun glittering, etc.
- “Background” refers to the area in a frame where pixels depict the same thing, on average, over time. Note that foreground objects may occlude background pixels at times, so a particular pixel may be included in either foreground or background regions of various frames.
- a “background segmentation algorithm” refers to an algorithm to separate foreground and background. It may also be referred to as a “foreground detection algorithm.”
- a “background model” refers to a representation of background.
- background may have two corresponding images. One is a mean image, where each pixel is the average value of that pixel over a certain time when that pixel is in a background region. The other one is a standard deviation image, where each pixel corresponds to the standard deviation value of that pixel over a certain time when that pixel is in a background region.
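As an illustration of this two-image background model, here is a minimal Python/numpy sketch that maintains running mean and standard-deviation images, updating only pixels currently labeled background. The exponential update rate and the variance initialization are illustrative assumptions, not values from the patent.

```python
import numpy as np

class BackgroundModel:
    """Running per-pixel mean and standard deviation, updated only where
    the current frame is labeled background (a simplified sketch)."""

    def __init__(self, first_frame: np.ndarray, alpha: float = 0.02):
        self.alpha = alpha                          # illustrative update rate
        self.mean = first_frame.astype(np.float64)  # the "mean image"
        self.var = np.ones_like(self.mean)          # per-pixel variance

    def update(self, frame: np.ndarray, background_mask: np.ndarray) -> None:
        """background_mask is True where the pixel is currently background."""
        f = frame.astype(np.float64)
        diff = f - self.mean
        a = self.alpha * background_mask            # foreground pixels stay frozen
        self.mean += a * diff
        self.var += a * (diff * diff - self.var)

    @property
    def stddev(self) -> np.ndarray:
        """The "standard deviation image" described above."""
        return np.sqrt(self.var)
```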
- a “view” refers to the model of a scene that a camera monitors, which includes the background model of the scene and a frame from the video representing an observation of the scene.
- the frame included in the view may, but need not, correspond to a latest observation of the scene.
- a “BAD frame” refers to a frame in which the content in the video frame is too different from the background (according to some criterion).
- a “gross change” occurs when there are significant changes in a video feed over a given predetermined period of time.
- a “bad signal” refers to the case where the video feed into the IVS has unacceptable noise; the video feed may, for example, be too bright/dark, or the video signal may be lost.
- An “unknown view” refers to the case in which the current view to which the camera points does not match any of the views in a view database.
- a “known view” refers to a view to which a camera points, and which matches one of the views in a view database.
- a “video primitive” refers to an analysis result based on at least one video feed, such as information about a moving target.
- a “warm-up state” refers to the period after a content analysis module starts, during which it builds a background model, which may include a background mean and a background standard deviation. During this period, the content analysis module is considered to be in a warm-up state.
- a “computer” refers to any apparatus that is capable of accepting a structured input, processing the structured input according to prescribed rules, and producing results of the processing as output.
- Examples of a computer include: a computer; a general purpose computer; a supercomputer; a mainframe; a super mini-computer; a mini-computer; a workstation; a micro-computer; a server; an interactive television; a hybrid combination of a computer and an interactive television; and application-specific hardware to emulate a computer and/or software (for example, but not limited to, a programmable gate array (PGA) or a programmed digital signal processor (DSP)).
- a computer can have a single processor or multiple processors, which can operate in parallel and/or not in parallel.
- a computer also refers to two or more computers connected together via a network for transmitting or receiving information between the computers.
- An example of such a computer includes a distributed computer system for processing information via computers linked by a network.
- a “computer-readable medium” or “machine-readable medium” refers to any storage device used for storing data accessible by a computer. Examples of a computer-readable medium include: a magnetic hard disk; a floppy disk; an optical disk, such as a CD-ROM and a DVD; a magnetic tape; and a memory chip.
- “Software” refers to prescribed rules to operate a computer. Examples of software include: software; code segments; instructions; computer programs; and programmed logic.
- a “computer system” refers to a system having a computer, where the computer comprises a computer-readable medium embodying software to operate the computer.
- a “network” refers to a number of computers and associated devices that are connected by communication facilities.
- a network involves permanent connections such as cables or temporary connections such as those made through telephone or other communication links.
- Examples of a network include: an internet, such as the Internet; an intranet; a local area network (LAN); a wide area network (WAN); and a combination of networks, such as an internet and an intranet.
- a “sensing device” refers to any apparatus for obtaining visual information. Examples include: color and monochrome cameras, video cameras, closed-circuit television (CCTV) cameras, charge-coupled device (CCD) sensors, analog and digital cameras, PC cameras, web cameras, and infra-red imaging devices. If not more specifically described, a “camera” refers to any sensing device.
- a “blob” refers generally to any object in an image (usually, in the context of video). Examples of blobs include moving objects (e.g., people and vehicles) and stationary objects (e.g., furniture and consumer goods on shelves in a store).
- FIG. 1 depicts an overall system block diagram according to an embodiment of the invention.
- FIG. 2 depicts a block diagram of a content analysis module (CA Engine), which contains a Gross Change Detector, according to an embodiment of the invention.
- FIG. 3 depicts the structure of a Gross Change Detector according to an embodiment of the invention.
- FIG. 4 depicts the data flow of a View Engine when the IVS system starts up, according to an embodiment of the invention.
- FIG. 5 depicts the data flow relating to a View Engine when a user adds a view, according to an embodiment of the invention.
- FIG. 6 depicts how a View Engine may perform view checking according to an embodiment of the invention.
- FIG. 7 depicts the data flow of a View Engine when the IVS system is in the steady state, according to an embodiment of the invention.
- FIG. 8 depicts a system which may be used to implement some embodiments of the invention.
- FIG. 9 depicts an exemplary multiplexed camera system, according to an embodiment of the invention.
- FIG. 1 depicts an overall system block diagram according to an embodiment of the invention.
- the view engine 12 loads all of the view information from a view database 17 .
- the view engine 12 enters a searching mode and awaits notification from the content analysis (CA) engine 11 that it is warmed up.
- the view engine 12 enters another process, which may be called “view checking.”
- View checking can determine whether the coming video feed is a bad signal, an unknown view or a known view. If view checking finds that the video feed switches from one known view to another known view, the view engine 12 will notify rules engine 13 that the view has changed, and the rules engine 13 will enable an appropriate rule set, depending on which view is active.
- CA engine 11 produces ordinary data (“video primitives”) based on input video, which may be received from a video buffer 16 . It passes this data to the View Engine 12 , which attaches data on which view it was in when the video primitive was produced. The View Engine 12 forwards those primitives to the Inference Engine 14 , which checks them against its current rule set. Inference Engine 14 , upon detecting that a rule has been satisfied or broken, in other words, that an event has occurred, may notify Rules Engine 13 , which may then determine an appropriate response for the event. Rules Engine 13 may then communicate with Response Engine 15 , which may generate an alert or cause some sort of action to be taken. Therefore, embodiments of the present invention may be useful in detecting and countering terrorist activities.
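The primitive hand-off described above can be pictured with a short, hypothetical sketch; the VideoPrimitive fields and the forward_primitive helper are invented for illustration and are not the patent's actual data structures or interfaces.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VideoPrimitive:
    """Hypothetical container for one content-analysis result."""
    frame_time: float
    target_id: int
    properties: dict = field(default_factory=dict)
    view_id: Optional[str] = None      # attached by the View Engine

def forward_primitive(primitive, current_view_id, inference_engine):
    # Tag the primitive with the view that was active when it was
    # produced, then hand it to the Inference Engine for rule checking.
    primitive.view_id = current_view_id
    inference_engine.check(primitive)
```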
- There are two cases in which view checking occurs. One is a scheduled periodic view checking. The other is when the CA Engine 11 notifies View Engine 12 that it has warmed up. Note that CA Engine 11 enters its warm-up state when the system first starts or when a gross change happens, which will be discussed further below.
- a video buffer 16 may be used to provide video to CA Engine 11 of the IVS system.
- the video may be fed directly from a camera or other video source.
- a multiplexed camera system as shown in FIG. 9 , may be used to feed video to the IVS system.
- there may be multiple cameras 91 each of which may be observing a different view/scene. Outputs of cameras 91 are fed to a multiplexer 92 , which then selects one of the camera outputs for feeding to the IVS system 93 .
- FIG. 2 depicts a block diagram of a CA Engine module 11 in which a Gross Change Detector (GCD) 27 is enabled.
- a video signal is initially fed into modules to apply background segmentation.
- Change Detector 22 and Blobizer 23 are used to perform background segmentation.
- If the area of the foreground, which is computed as the total number of pixels in the foreground, is lower than a predetermined threshold, GCD 27 considers the frame to be a “good” frame, and the data will go through the other modules of the CA Engine; that is, it proceeds through tracker 24 , classifier 25 , and primitive generator 26 .
- Otherwise, GCD 27 will mark the current frame as being a “BAD” frame, and it will generate a BAD frame event.
- When Blackboard Reaper 28 detects the BAD frame event, it deletes the data packet containing this BAD frame; that is, Blackboard Reaper 28 may serve as a data manager.
- GCD 27 will also classify the type of BAD frame. BAD frame types are kept in a histogram. If a predetermined number of consecutive BAD frames occurs (or if consecutive BAD frames occur over a predetermined period of time), GCD 27 will generate a gross change event and will also clear the BAD frame histogram.
- When Primitive Generator 26 detects the gross change event, it will generate a gross change primitive, Change Detector 22 and Tracker 24 will be reset, and Blackboard Reaper 28 will delete all the data packets generated from when the gross change started to happen up until the present time. CA Engine 11 will then notify all the engines that listen to it that it has re-entered a warm-up state.
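A rough Python sketch of this logic follows: it flags a frame BAD when the foreground area is too large, counts consecutive BAD frames, and raises a gross change whose type is the most frequent BAD-frame type in the histogram. The threshold values and the string-based event reporting are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from collections import Counter

FG_AREA_FRACTION = 0.5     # illustrative: "too much" foreground
GROSS_CHANGE_RUN = 15      # illustrative: consecutive BAD frames

class GrossChangeDetector:
    def __init__(self):
        self.bad_histogram = Counter()   # BAD-frame type -> count
        self.bad_run = 0                 # consecutive BAD frames so far

    def process(self, foreground_mask: np.ndarray,
                bad_type: str = "unknown") -> str:
        """Return 'good', 'bad', or 'gross_change:<type>' for this frame."""
        if foreground_mask.mean() < FG_AREA_FRACTION:   # mask of 0/1 pixels
            self.bad_run = 0
            self.bad_histogram.clear()   # a good frame clears BAD history
            return "good"
        self.bad_run += 1
        self.bad_histogram[bad_type] += 1
        if self.bad_run >= GROSS_CHANGE_RUN:
            # gross change type = most frequent BAD-frame type seen
            gc_type = self.bad_histogram.most_common(1)[0][0]
            self.bad_run = 0
            self.bad_histogram.clear()
            return "gross_change:" + gc_type
        return "bad"
```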
- FIG. 3 depicts a state structure of a GCD 27 according to an exemplary embodiment of the invention, where GCD 27 is implemented as a state machine.
- GCD 27 may be implemented in hardware, software, firmware, or as a combination thereof and need not be limited to a state machine.
- the state diagram of FIG. 3 includes states 31 - 37 and arrows indicating state transitions. The abbreviations used in connection with the arrows are explained as follows:
- There are four types of BAD frames: unknown bad frame; light-on bad frame; light-off bad frame; and camera-motion bad frame.
- a BAD frame is classified as light-on if the mean of the current frame is larger than the mean of a reference frame by a certain amount, and it is classified as light-off if the mean of the current frame is less than the mean of a reference image by a certain amount.
- the mean of a frame is defined to be the average of all the pixels in the frame; and the reference image is taken to be the mean image in the background model, where, as previously defined, each pixel of the mean image is the average value of that pixel over a certain number of frames in which the pixel is considered to be a background pixel.
- a BAD frame is classified as camera-motion if the similarity between the BAD frame and the reference image is lower than a certain threshold.
- a similarity computation algorithm will be introduced below.
- a BAD frame that does not fall into any of the other three categories is classified as being unknown.
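A minimal sketch of this BAD-frame classification, assuming 8-bit grayscale frames; the brightness delta, the similarity threshold, and the similarity callable are placeholder assumptions (the edge-correlation measure sketched near the end of this document could serve as the similarity function).

```python
import numpy as np

LIGHT_DELTA = 40.0          # illustrative brightness-shift threshold
MOTION_SIMILARITY = 0.5     # illustrative similarity threshold

def classify_bad_frame(frame, reference_mean_image, similarity) -> str:
    """Label a BAD frame as light-on, light-off, camera-motion, or unknown."""
    frame_mean = float(np.mean(frame))
    ref_mean = float(np.mean(reference_mean_image))
    if frame_mean > ref_mean + LIGHT_DELTA:
        return "light-on"
    if frame_mean < ref_mean - LIGHT_DELTA:
        return "light-off"
    if similarity(frame, reference_mean_image) < MOTION_SIMILARITY:
        return "camera-motion"
    return "unknown"
```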
- When GCD 27 detects a BAD frame, it puts the BAD frame type into a histogram. If GCD 27 detects consecutive BAD frames and if the time duration of these BAD frames is larger than a predetermined threshold, GCD 27 generates a gross change event. Note that the threshold may, equivalently, be expressed in terms of a number of consecutive BAD frames. The type of the gross change is determined by examining the BAD frame histogram, and the gross change type corresponds to the BAD frame type having the maximum number of BAD frames in the histogram. If a good frame is detected after a BAD frame, where the number of BAD frames is still less than the predetermined threshold, the BAD frame histogram is cleared.
- CA Engine 11 then enters its warm-up state.
- FIG. 4 depicts an exemplary data flow with respect to a View Engine 41 , which may correspond to View Engine 12 of FIG. 1 , when the IVS system starts up.
- View Engine 41 may request view information from a database 42 .
- the database 42 may forward the requested stored view information to View Engine 41 .
- FIG. 5 depicts an exemplary data flow with respect to a View Engine 52 (which, again, may correspond to the View Engine 12 of FIG. 1 ) when a user adds a view.
- the View Engine 52 receives an Add View command. It receives new background data from CA engine 51 and a current view snapshot from video buffer 54 .
- View Engine 52 forwards information about the new view to database 55 and sends a notification of a view change to Rules Engine 53 , which is a module that maintains all the user-defined rules. This will be further elaborated upon below.
- FIG. 6 depicts how a View Engine 62 (which may correspond to View Engine 12 of FIG. 1 ) performs view checking, according to an embodiment of the invention. View checking will be discussed in further detail below.
- FIG. 7 depicts the data flow of View Engine 72 (which may correspond to View Engine 12 of FIG. 1 ) when the IVS system is in the steady state, according to an embodiment of the invention.
- CA engine 71 provides View Engine 72 with video primitives.
- View Engine 72 takes the video primitives and provides them to Inference Engine 73 along with view identification information (“view id”), where Inference Engine 73 is a module for comparing primitives against rules to see if there is any rule being broken (or satisfied) by one or more targets, represented by the primitives.
- the View Engine, in general, stores and detects different scenes that come into a system from a video feed.
- the most common ways for the signal on the video feed to change are when multiple video sources are passed through a multiplexer and when a pan-tilt-zoom camera is used to point to different scenes from time to time.
- the View Engine stores camera views. In its most basic form, a camera view consists of a background model (background mean and standard deviation images) and an image snapshot.
- the view engine may be in several states: Searching, Unknown View, Known View, and Bad Signal.
- When the system (i.e., View Engine 52 in FIG. 5 ) is running in the “unknown view” state, an outside application can send an add view command into the system.
- the View Engine 52 gets the latest background model from the CA engine 51 and the latest image from the video buffer 54 . It uses those to build a camera view and stores the camera view in the database 55 . View Engine 52 then sets its internal state to “known view” and notifies the Rules Engine 53 that it is in the new view.
- Startup operations may be demonstrated by the embodiment shown in FIG. 4 .
- the View Engine 41 loads all of its view information from a database 42 .
- the View Engine 41 enters into a searching mode and waits for notification from the CA engine ( 11 , in FIG. 1 ) that it is warmed up. When it receives this notification, the View Engine 41 begins view checking.
- the CA engine 11 takes a certain amount of time to warm up. During that time, it is building up a model of the background in the scene it is viewing. At this time, View Engine 12 is in the “searching” state. When CA engine 11 is warmed up, it notifies the View Engine 12 .
- If a gross change happens, CA Engine 11 will reset. When CA Engine 11 resets, it moves into the not-warmed-up state and notifies the View Engine 12 that it is no longer warmed up. This moves the View Engine 12 into the “Searching” state.
- View checking is the process of determining whether the feed coming into the system is in a bad signal state, an unknown view or a known view. View checking, according to an embodiment of the invention, is shown in FIG. 6 .
- the View Engine 62 requests the latest background model from the CA engine 61 and attempts to determine if the video feed is a bad signal, which may occur, for example, if the camera is getting insufficient light or if the camera has unusually high noise. An algorithm for detecting whether or not the signal is bad will be discussed below. If that is the case, it moves into the Bad Signal state. Next, it compares the latest background model against the background models for all of the stored views. If a match is found, the View Engine 62 moves into the Known View state.
- the View Engine 62 moves into the Unknown View state. If the current state differs from the previous state, it notifies the Rules Engine 63 that the state has changed. If it has moved to a Known View, it also notifies the Rules Engine 63 which view it is now in. The Rules Engine 63 will modify the rule set that is enabled depending on which view is active.
- View Checking happens in two cases. The first is when the CA Engine 61 notifies View Engine 62 that it has warmed up. The second is a regularly scheduled view check that View Engine 62 performs when it is in a known view. When it is in a known view, the View Engine 62 checks the view periodically, according to a predetermined period, to confirm that it is still in that known view. When the view check occurs, the View Engine 62 may update the database 65 with more recent view information.
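The view-checking decision can be summarized in a few lines of Python. The CameraView field names and the helper callables here are assumptions layered on the description above, not code from the patent.

```python
def check_view(background_model, stored_views, is_bad_signal, similar):
    """Return ('bad_signal', None), ('known', view), or ('unknown', None).

    `similar(a, b)` compares two background mean images, e.g. with the
    edge-correlation measure sketched near the end of this document."""
    if is_bad_signal(background_model):
        return ("bad_signal", None)
    for view in stored_views:
        if similar(background_model.mean, view.mean_image):
            return ("known", view)
    return ("unknown", None)
```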
- an exemplary signal quality verification algorithm may go as follows:
- the exemplary algorithm uses both mean and standard deviation images of the background model. If the mean of the standard deviation image, which is the average of all the pixel values in the standard deviation image, is too small (i.e., less than a predetermined threshold), the algorithm determines that the video feed has low contrast, and the signal from the video feed is considered to be a BAD signal. The algorithm can further detect if the video feed is too bright or too dark by checking the mean of the mean image, which is the average of all the pixel values in mean image. If the mean value is too small, the video feed is too dark, and if the mean value is too large, the video feed is too bright. If the mean of the standard deviation image is too large (i.e., larger than some predetermined threshold), the algorithm determines that the video feed is too noisy, which also corresponds to a BAD signal type.
- if a background model is not available, one may alternatively collect a set of video frames to generate mean and standard deviation images and use these mean and standard deviation images to classify the quality of the incoming video signals.
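A compact sketch of this signal quality check, assuming 8-bit pixel statistics; all four thresholds are illustrative stand-ins for the patent's unspecified "predetermined thresholds".

```python
import numpy as np

LOW_CONTRAST = 5.0     # illustrative thresholds for 8-bit video
TOO_NOISY = 60.0
TOO_DARK = 30.0
TOO_BRIGHT = 225.0

def verify_signal_quality(mean_image: np.ndarray,
                          stddev_image: np.ndarray) -> str:
    contrast = float(np.mean(stddev_image))    # mean of the std-dev image
    brightness = float(np.mean(mean_image))    # mean of the mean image
    if contrast < LOW_CONTRAST:                # low contrast => BAD signal
        if brightness < TOO_DARK:
            return "bad: too dark"
        if brightness > TOO_BRIGHT:
            return "bad: too bright"
        return "bad: low contrast"
    if contrast > TOO_NOISY:                   # high deviation => too noisy
        return "bad: too noisy"
    return "good"
```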
- CA Engine 71 produces ordinary data (“video primitives”) about the video it is processing. It passes this data to the View Engine 72 . If the View Engine 72 is in the Known View state, it attaches data on which view it was in when the video primitives were produced, and View Engine 72 forwards those primitives to the Inference Engine 73 . Inference Engine 73 checks them against its current rule set. If the View Engine 72 is in the Unknown View state, the video primitives should be deleted.
- However, it may still be possible to utilize the video primitives, and there are certain rules that can be applied to these primitives, such as rules to detect gross changes and targets appearing or disappearing.
- the View Engine 72 may send these primitives to Inference Engine 73 to check against these rules.
- The computer system of FIG. 8 may include at least one processor 82 , with associated system memory 81 , which may store, for example, operating system software and the like.
- the system may further include additional memory 83 , which may, for example, include software instructions to perform various applications.
- the system may also include one or more input/output (I/O) devices 84 , for example (but not limited to), keyboard, mouse, trackball, printer, display, network connection, etc.
- the present invention may be embodied as software instructions that may be stored in system memory 81 or in additional memory 83 .
- Such software instructions may also be stored in removable or remote media (for example, but not limited to, compact disks, floppy disks, etc.), which may be read through an I/O device 84 (for example, but not limited to, a floppy disk drive). Furthermore, the software instructions may also be transmitted to the computer system via an I/O device 84 for example, a network connection; in such a case, a signal containing the software instructions may be considered to be a machine-readable medium.
Description
Events (Which Can Cause State Change) | Actions (Upon State Change)
---|---
R - Reset | set bad frame reference
W - sensor warms up | clear bad frame reference
G - good frame detected | set home frame
~G - bad frame detected | clear home frame
GC(M) - gross change (motion) | update bad frame list
GC(L) - any gross change due to lighting | clear bad frame list
GC(LH) - gross change due to lighting in the home position | set static camera reference
GC(L~H) - gross change due to lighting while not at home | clear static camera reference
GC(~MH) - gross change due to camera not moving (back at home) | generate bad frame event
GC(~M~H) - gross change due to camera not moving (camera away) | generate gross change event (if state has changed)
A camera view, in its most basic form, consists of:
- Background model (background mean and standard deviation images)
- Image snapshot.
A more complex version of a camera view may have multiple model-snapshot pairs taken at intervals over a time period.
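One plausible shape for a stored camera view, including the optional model-snapshot history; the field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class CameraView:
    mean_image: np.ndarray     # background mean
    stddev_image: np.ndarray   # background standard deviation
    snapshot: np.ndarray       # image observation of the scene
    # Optional (timestamp, mean, stddev, snapshot) pairs taken at
    # intervals, for the "more complex version" described above.
    history: List[Tuple[float, np.ndarray, np.ndarray, np.ndarray]] = \
        field(default_factory=list)
```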
The view engine may be in one of several states:
- Searching
- Unknown View
- Known View
- Bad Signal
An exemplary similarity computation algorithm, comparing two images, may go as follows:
- Apply an edge detection algorithm to the two images to obtain two edge images. There are many such edge detection algorithms known in the art that may be used for this purpose.
- Calculate the median value of the edge images, and then use a multiple of the median value as a threshold to apply to the two edge images to generate two binary edge masks separately. In the binary mask, a “0” value for a pixel may be used to denote that an edge value at that pixel is lower than the threshold, and this represents that the edge is not strong enough at that pixel; a “1” value may be used to denote that the edge value for the pixel is greater than or equal to the threshold (alternatively, the roles of “0” and “1” may be reversed; however, the ensuing discussion will assume the use of “0” and “1” as discussed above).
- Collapse each edge mask into horizontal and vertical vectors, H and V, respectively, where H[i] is the number of “1” pixels in row i, and V[i] is the number of “1” pixels in column i. Thus, each edge mask will be represented by two vectors.
- Apply a window filter to all four vectors. In some embodiments of the invention, a trapezoidal window may be used.
- Compute the correlation, Ch, between the two horizontal vectors and the correlation, Cv, between the two vertical vectors (the subscripts “1” and “2” are used to denote the two images being considered; the superscript “T” represents the transpose of the vector):
Ch = (H1 · H2^T)^2 / ((H1 · H1^T) * (H2 · H2^T))
Cv = (V1 · V2^T)^2 / ((V1 · V1^T) * (V2 · V2^T))
- If both Ch and Cv are larger than a certain predetermined threshold, the algorithm determines that the two images are similar, where similar means that there is no motion between the two images. In the case of View Checking, this will mean that the algorithm will determine that the two views are similar or not similar.
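Putting the steps above together, here is a minimal Python/numpy sketch of the similarity computation. The gradient-based edge detector, the median multiple, the trapezoidal ramp width, and the similarity threshold are illustrative choices standing in for the patent's unspecified parameters.

```python
import numpy as np

def edge_magnitude(img: np.ndarray) -> np.ndarray:
    """Simple gradient-magnitude edges (standing in for any edge detector)."""
    g = img.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    return np.hypot(gx, gy)

def projection_vectors(img: np.ndarray, median_multiple: float = 2.0):
    """Binary edge mask -> row (H) and column (V) counts of '1' pixels."""
    edges = edge_magnitude(img)
    mask = edges >= median_multiple * np.median(edges)
    h = mask.sum(axis=1).astype(np.float64)   # H[i]: '1' pixels in row i
    v = mask.sum(axis=0).astype(np.float64)   # V[i]: '1' pixels in column i
    return h, v

def trapezoid_window(n: int, ramp: float = 0.1) -> np.ndarray:
    """Flat-topped window that de-emphasizes the image borders."""
    r = max(1, int(n * ramp))
    w = np.ones(n)
    w[:r] = np.linspace(0.0, 1.0, r)
    w[-r:] = np.linspace(1.0, 0.0, r)
    return w

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    # (A . B^T)^2 / ((A . A^T) * (B . B^T)), as in Ch and Cv above
    return float(np.dot(a, b) ** 2 / (np.dot(a, a) * np.dot(b, b) + 1e-12))

def images_similar(img1, img2, threshold: float = 0.9) -> bool:
    h1, v1 = projection_vectors(img1)
    h2, v2 = projection_vectors(img2)
    h1 *= trapezoid_window(len(h1)); h2 *= trapezoid_window(len(h2))
    v1 *= trapezoid_window(len(v1)); v2 *= trapezoid_window(len(v2))
    return correlation(h1, h2) > threshold and correlation(v1, v2) > threshold
```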
Claims (14)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/950,680 US7733369B2 (en) | 2004-09-28 | 2004-09-28 | View handling in video surveillance systems |
PCT/US2005/034864 WO2006037057A2 (en) | 2004-09-28 | 2005-09-27 | View handling in video surveillance systems |
US12/781,617 US8497906B2 (en) | 2004-09-28 | 2010-05-17 | View handling in video surveillance systems |
US13/838,665 US9204107B2 (en) | 2004-09-28 | 2013-03-15 | View handling in video surveillance systems |
US14/952,200 US9936170B2 (en) | 2004-09-28 | 2015-11-25 | View handling in video surveillance systems |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/950,680 US7733369B2 (en) | 2004-09-28 | 2004-09-28 | View handling in video surveillance systems |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/781,617 Division US8497906B2 (en) | 2004-09-28 | 2010-05-17 | View handling in video surveillance systems |
Publications (2)
Publication Number | Publication Date |
---|---|
US20060066722A1 US20060066722A1 (en) | 2006-03-30 |
US7733369B2 true US7733369B2 (en) | 2010-06-08 |
Family
ID=36098573
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/950,680 Active 2028-09-19 US7733369B2 (en) | 2004-09-28 | 2004-09-28 | View handling in video surveillance systems |
US12/781,617 Active 2025-07-04 US8497906B2 (en) | 2004-09-28 | 2010-05-17 | View handling in video surveillance systems |
US13/838,665 Active 2025-12-13 US9204107B2 (en) | 2004-09-28 | 2013-03-15 | View handling in video surveillance systems |
US14/952,200 Active 2025-07-28 US9936170B2 (en) | 2004-09-28 | 2015-11-25 | View handling in video surveillance systems |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/781,617 Active 2025-07-04 US8497906B2 (en) | 2004-09-28 | 2010-05-17 | View handling in video surveillance systems |
US13/838,665 Active 2025-12-13 US9204107B2 (en) | 2004-09-28 | 2013-03-15 | View handling in video surveillance systems |
US14/952,200 Active 2025-07-28 US9936170B2 (en) | 2004-09-28 | 2015-11-25 | View handling in video surveillance systems |
Country Status (2)
Country | Link |
---|---|
US (4) | US7733369B2 (en) |
WO (1) | WO2006037057A2 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050162515A1 (en) * | 2000-10-24 | 2005-07-28 | Objectvideo, Inc. | Video surveillance system |
US6625310B2 (en) * | 2001-03-23 | 2003-09-23 | Diamondback Vision, Inc. | Video segmentation using statistical pixel modeling |
US7697026B2 (en) * | 2004-03-16 | 2010-04-13 | 3Vr Security, Inc. | Pipeline architecture for analyzing multiple video streams |
US8130285B2 (en) * | 2005-04-05 | 2012-03-06 | 3Vr Security, Inc. | Automated searching for probable matches in a video surveillance system |
US7646895B2 (en) * | 2005-04-05 | 2010-01-12 | 3Vr Security, Inc. | Grouping items in video stream images into events |
US9158975B2 (en) * | 2005-05-31 | 2015-10-13 | Avigilon Fortress Corporation | Video analytics for retail business process monitoring |
US20070252693A1 (en) * | 2006-05-01 | 2007-11-01 | Jocelyn Janson | System and method for surveilling a scene |
US20080074496A1 (en) * | 2006-09-22 | 2008-03-27 | Object Video, Inc. | Video analytics for banking business process monitoring |
US20080273754A1 (en) * | 2007-05-04 | 2008-11-06 | Leviton Manufacturing Co., Inc. | Apparatus and method for defining an area of interest for image sensing |
WO2008147913A2 (en) * | 2007-05-22 | 2008-12-04 | Vidsys, Inc. | Tracking people and objects using multiple live and recorded surveillance camera video feeds |
US7822275B2 (en) * | 2007-06-04 | 2010-10-26 | Objectvideo, Inc. | Method for detecting water regions in video |
US9019381B2 (en) | 2008-05-09 | 2015-04-28 | Intuvision Inc. | Video tracking systems and methods employing cognitive vision |
KR101632963B1 (en) * | 2009-02-02 | 2016-06-23 | 아이사이트 모빌 테크놀로지 엘티디 | System and method for object recognition and tracking in a video stream |
US10880035B2 (en) * | 2009-07-28 | 2020-12-29 | The United States Of America, As Represented By The Secretary Of The Navy | Unauthorized electro-optics (EO) device detection and response system |
US20110043689A1 (en) * | 2009-08-18 | 2011-02-24 | Wesley Kenneth Cobb | Field-of-view change detection |
US10373470B2 (en) | 2013-04-29 | 2019-08-06 | Intelliview Technologies, Inc. | Object detection |
CA2847707C (en) | 2014-03-28 | 2021-03-30 | Intelliview Technologies Inc. | Leak detection |
US10943357B2 (en) | 2014-08-19 | 2021-03-09 | Intelliview Technologies Inc. | Video based indoor leak detection |
US9767564B2 (en) | 2015-08-14 | 2017-09-19 | International Business Machines Corporation | Monitoring of object impressions and viewing patterns |
CN105120217B (en) * | 2015-08-21 | 2018-06-22 | 上海小蚁科技有限公司 | Intelligent camera mobile detection alert system and method based on big data analysis and user feedback |
CN107124583B (en) * | 2017-04-21 | 2020-06-23 | 宁波公众信息产业有限公司 | Monitoring system for rapidly acquiring video monitoring information |
CN107146573B (en) * | 2017-06-26 | 2020-05-01 | 上海天马有机发光显示技术有限公司 | Display panel, display method thereof and display device |
WO2019076076A1 (en) * | 2017-10-20 | 2019-04-25 | 杭州海康威视数字技术股份有限公司 | Analog camera, server, monitoring system and data transmission and processing methods |
CN109698895A (en) * | 2017-10-20 | 2019-04-30 | 杭州海康威视数字技术股份有限公司 | A kind of analog video camera, monitoring system and data transmission method for uplink |
US11508172B2 (en) * | 2017-12-28 | 2022-11-22 | Dst Technologies, Inc. | Identifying location of shreds on an imaged form |
US11200435B1 (en) * | 2019-03-11 | 2021-12-14 | Objectvideo Labs, Llc | Property video surveillance from a vehicle |
US11756295B2 (en) | 2020-12-01 | 2023-09-12 | Western Digital Technologies, Inc. | Storage system and method for event-driven data stitching in surveillance systems |
US11546612B2 (en) | 2021-06-02 | 2023-01-03 | Western Digital Technologies, Inc. | Data storage device and method for application-defined data retrieval in surveillance systems |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0616290B1 (en) * | 1993-03-01 | 2003-02-05 | Kabushiki Kaisha Toshiba | Medical information processing system for supporting diagnosis. |
US6246787B1 (en) * | 1996-05-31 | 2001-06-12 | Texas Instruments Incorporated | System and method for knowledgebase generation and management |
US7362946B1 (en) * | 1999-04-12 | 2008-04-22 | Canon Kabushiki Kaisha | Automated visual image editing system |
US7142251B2 (en) * | 2001-07-31 | 2006-11-28 | Micronas Usa, Inc. | Video input processor in multi-format video compression system |
US6950123B2 (en) * | 2002-03-22 | 2005-09-27 | Intel Corporation | Method for simultaneous visual tracking of multiple bodies in a closed structured environment |
JP4185052B2 (en) * | 2002-10-15 | 2008-11-19 | ユニバーシティ オブ サザン カリフォルニア | Enhanced virtual environment |
WO2005036456A2 (en) * | 2003-05-12 | 2005-04-21 | Princeton University | Method and apparatus for foreground segmentation of video sequences |
US7680342B2 (en) * | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
JP3778208B2 (en) * | 2003-06-30 | 2006-05-24 | 三菱電機株式会社 | Image coding apparatus and image coding method |
US7409076B2 (en) * | 2005-05-27 | 2008-08-05 | International Business Machines Corporation | Methods and apparatus for automatically tracking moving entities entering and exiting a specified region |
US7929729B2 (en) * | 2007-04-02 | 2011-04-19 | Industrial Technology Research Institute | Image processing methods |
WO2010013171A1 (en) * | 2008-07-28 | 2010-02-04 | Koninklijke Philips Electronics N.V. | Use of inpainting techniques for image correction |
US8265380B1 (en) * | 2008-08-14 | 2012-09-11 | Adobe Systems Incorporated | Reuse of image processing information |
- 2004-09-28: US US10/950,680 patent/US7733369B2/en active Active
- 2005-09-27: WO PCT/US2005/034864 patent/WO2006037057A2/en active Application Filing
- 2010-05-17: US US12/781,617 patent/US8497906B2/en active Active
- 2013-03-15: US US13/838,665 patent/US9204107B2/en active Active
- 2015-11-25: US US14/952,200 patent/US9936170B2/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6211912B1 (en) * | 1994-02-04 | 2001-04-03 | Lucent Technologies Inc. | Method for detecting camera-motion induced scene changes |
US20020067412A1 (en) * | 1994-11-28 | 2002-06-06 | Tomoaki Kawai | Camera controller |
US6088468A (en) * | 1995-05-17 | 2000-07-11 | Hitachi Denshi Kabushiki Kaisha | Method and apparatus for sensing object located within visual field of imaging device |
US5886744A (en) * | 1995-09-08 | 1999-03-23 | Intel Corporation | Method and apparatus for filtering jitter from motion estimation video data |
US5801765A (en) * | 1995-11-01 | 1998-09-01 | Matsushita Electric Industrial Co., Ltd. | Scene-change detection method that distinguishes between gradual and sudden scene changes |
US20030184647A1 (en) * | 1995-12-19 | 2003-10-02 | Hiroki Yonezawa | Communication apparatus, image processing apparatus, communication method, and image processing method |
US6646655B1 (en) * | 1999-03-09 | 2003-11-11 | Webex Communications, Inc. | Extracting a time-sequence of slides from video |
US6297844B1 (en) * | 1999-11-24 | 2001-10-02 | Cognex Corporation | Video safety curtain |
US20020145660A1 (en) * | 2001-02-12 | 2002-10-10 | Takeo Kanade | System and method for manipulating the point of interest in a sequence of images |
US6765569B2 (en) * | 2001-03-07 | 2004-07-20 | University Of Southern California | Augmented-reality tool employing scene-feature autocalibration during camera motion |
Non-Patent Citations (1)
Title |
---|
International Search Report PCT/US05/34864, Yin. |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10026285B2 (en) | 2000-10-24 | 2018-07-17 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US8564661B2 (en) | 2000-10-24 | 2013-10-22 | Objectvideo, Inc. | Video analytic rule detection system and method |
US8711217B2 (en) | 2000-10-24 | 2014-04-29 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US10645350B2 (en) | 2000-10-24 | 2020-05-05 | Avigilon Fortress Corporation | Video analytic rule detection system and method |
US9378632B2 (en) | 2000-10-24 | 2016-06-28 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US10347101B2 (en) | 2000-10-24 | 2019-07-09 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US8457401B2 (en) | 2001-03-23 | 2013-06-04 | Objectvideo, Inc. | Video segmentation using statistical pixel modeling |
US9020261B2 (en) | 2001-03-23 | 2015-04-28 | Avigilon Fortress Corporation | Video segmentation using statistical pixel modeling |
US9892606B2 (en) | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US8965047B1 (en) | 2008-06-10 | 2015-02-24 | Mindmancer AB | Selective viewing of a scene |
US9172919B2 (en) | 2008-06-10 | 2015-10-27 | Mindmancer AB | Selective viewing of a scene |
US8311275B1 (en) | 2008-06-10 | 2012-11-13 | Mindmancer AB | Selective viewing of a scene |
US9197864B1 (en) | 2012-01-06 | 2015-11-24 | Google Inc. | Zoom and image capture based on features of interest |
US8941561B1 (en) | 2012-01-06 | 2015-01-27 | Google Inc. | Image capture |
US9934447B2 (en) | 2015-03-20 | 2018-04-03 | Netra, Inc. | Object detection and classification |
US9922271B2 (en) | 2015-03-20 | 2018-03-20 | Netra, Inc. | Object detection and classification |
US9760792B2 (en) | 2015-03-20 | 2017-09-12 | Netra, Inc. | Object detection and classification |
US10341606B2 (en) | 2017-05-24 | 2019-07-02 | SA Photonics, Inc. | Systems and method of transmitting information from monochrome sensors |
Also Published As
Publication number | Publication date |
---|---|
US20100225760A1 (en) | 2010-09-09 |
WO2006037057A2 (en) | 2006-04-06 |
US20160127699A1 (en) | 2016-05-05 |
WO2006037057A3 (en) | 2009-06-11 |
US9204107B2 (en) | 2015-12-01 |
US8497906B2 (en) | 2013-07-30 |
US9936170B2 (en) | 2018-04-03 |
US20130278764A1 (en) | 2013-10-24 |
US20060066722A1 (en) | 2006-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9936170B2 (en) | View handling in video surveillance systems | |
US10929680B2 (en) | Automatic extraction of secondary video streams | |
US9363487B2 (en) | Scanning camera-based video surveillance system | |
US8848053B2 (en) | Automatic extraction of secondary video streams | |
US7280673B2 (en) | System and method for searching for changes in surveillance video | |
JP4673849B2 (en) | Computerized method and apparatus for determining a visual field relationship between a plurality of image sensors | |
Lei et al. | Real-time outdoor video surveillance with robust foreground extraction and object tracking via multi-state transition management | |
US20050134685A1 (en) | Master-slave automated video-based surveillance system | |
US20050104958A1 (en) | Active camera video-based surveillance systems and methods | |
US20070058717A1 (en) | Enhanced processing for scanning video | |
US20070122000A1 (en) | Detection of stationary objects in video | |
CA2394926C (en) | Image data processing | |
Bashir et al. | Collaborative tracking of objects in EPTZ cameras | |
Kaur | Background subtraction in video surveillance | |
Fleck et al. | An integrated visualization of a smart camera based distributed surveillance system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: OBJECTVIDEO, INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YIN, WEIHONG;CHOSAK, ANDREW J.;FRAZIER, MATTHEW F.;AND OTHERS;SIGNING DATES FROM 20050301 TO 20050510;REEL/FRAME:016565/0506 |
|
AS | Assignment |
Owner name: RJF OV, LLC, DISTRICT OF COLUMBIA Free format text: SECURITY AGREEMENT;ASSIGNOR:OBJECTVIDEO, INC.;REEL/FRAME:020478/0711 Effective date: 20080208 |
|
AS | Assignment |
Owner name: RJF OV, LLC, DISTRICT OF COLUMBIA Free format text: GRANT OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:OBJECTVIDEO, INC.;REEL/FRAME:021744/0464 Effective date: 20081016 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: OBJECTVIDEO, INC., VIRGINIA Free format text: RELEASE OF SECURITY AGREEMENT/INTEREST;ASSIGNOR:RJF OV, LLC;REEL/FRAME:027810/0117 Effective date: 20101230 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: AVIGILON FORTRESS CORPORATION, CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OBJECTVIDEO, INC.;REEL/FRAME:034552/0251 Effective date: 20141217 |
|
AS | Assignment |
Owner name: HSBC BANK CANADA, CANADA Free format text: SECURITY INTEREST;ASSIGNOR:AVIGILON FORTRESS CORPORATION;REEL/FRAME:035387/0569 Effective date: 20150407 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
AS | Assignment |
Owner name: AVIGILON FORTRESS CORPORATION, CANADA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HSBC BANK CANADA;REEL/FRAME:047032/0063 Effective date: 20180813 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: MOTOROLA SOLUTIONS, INC., ILLINOIS Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:AVIGILON FORTRESS CORPORATION;REEL/FRAME:061746/0897 Effective date: 20220411 |