EP1472870A4 - Method and apparatus for video frame sequence-based object tracking - Google Patents
Method and apparatus for video frame sequence-based object tracking
- Publication number
- EP1472870A4 (application EP03704983A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- reference image
- objects
- term reference
- image
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 90
- 230000003068 static effect Effects 0.000 claims abstract description 47
- 238000004458 analytical method Methods 0.000 claims abstract description 13
- 230000007774 longterm Effects 0.000 claims description 69
- 230000006399 behavior Effects 0.000 claims description 25
- 238000007781 pre-processing Methods 0.000 claims description 23
- 230000000007 visual effect Effects 0.000 claims description 16
- 238000012512 characterization method Methods 0.000 claims description 13
- 230000008569 process Effects 0.000 claims description 13
- 238000012545 processing Methods 0.000 claims description 13
- 230000007246 mechanism Effects 0.000 claims description 9
- 239000013598 vector Substances 0.000 claims description 9
- 238000005259 measurement Methods 0.000 claims description 7
- 238000001914 filtration Methods 0.000 claims description 4
- 230000001960 triggered effect Effects 0.000 claims 1
- 238000010586 diagram Methods 0.000 description 15
- 238000001514 detection method Methods 0.000 description 11
- 238000004364 calculation method Methods 0.000 description 9
- 238000004891 communication Methods 0.000 description 6
- 238000013519 translation Methods 0.000 description 5
- 238000003491 array Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 4
- 238000004590 computer program Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 238000007726 management method Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000007664 blowing Methods 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 239000000470 constituent Substances 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 238000012804 iterative process Methods 0.000 description 2
- 239000003550 marker Substances 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000002265 prevention Effects 0.000 description 2
- 230000005855 radiation Effects 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 230000007935 neutral effect Effects 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012956 testing procedure Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S3/78—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
- G01S3/782—Systems for determining direction or deviation from predetermined direction
- G01S3/785—Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
- G01S3/786—Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
- G01S3/7864—T.V. type tracking systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19604—Image analysis to detect motion of the intruder, e.g. by frame subtraction involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19606—Discriminating between target movement or movement in an area of interest and other non-signicative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19613—Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
Definitions
- the present invention relates to video surveillance systems in general, and more particularly to video frame sequence-based objects tracking in video surveillance environments.
- Existing video surveillance systems are based on diverse automatic object tracking methods.
- Object tracking methods are designed to process a captured sequence of temporally consecutive images in order to detect and track objects that do not belong to the "natural" scene being monitored.
- Current object tracking methods are typically performed by the separation of the objects from the background (by delineating or segmenting the objects) and by the determination of the motion vectors of the objects across the sequence of frames in accordance with the spatial transformations of the tracked objects.
- the drawbacks of the current methods concern the inability to track static objects for a lengthy period of time. Thus, following a short interval during which a previously dynamic object ceases moving, the tracking of that object is effectively lost.
- An additional drawback of the current methods concerns the inability of the methods to handle "occlusion" situations, such as where the tracked objects are occluded (partially or entirely) by other objects temporarily passing through or permanently located between the image acquiring devices and the tracked object.
- the apparatus comprises at least one image sequence source for transmitting a sequence of images to an object tracking program, and an object tracking program.
- the object tracking program comprises a pre-processing application layer for constructing a difference image between a currently captured video frame and a previously constructed reference image, an objects clustering application layer for generating at least one new or updated object from the difference image and at least one existing object, and a background updating application layer for updating at least one reference image prior to processing of a new frame.
- a second aspect of the present invention regards a method for the analysis of a sequence of captured images showing a scene for detecting and tracking of at least one moving or static object and for matching the patterns of the at least one object behavior in the captured images to object behavior in predetermined scenarios.
- the method comprises capturing at least one image of the scene, preprocessing the captured at least one image and generating a short term difference image and a long term difference image, clustering the at least one moving or static object in the short term difference and long term difference images, and generating at least one new object and at least one existing object.
- Fig. 2 is a high-level block diagram showing the application layers of the object tracking apparatus, in accordance with the preferred embodiment of the present invention
- Fig. 5A is a block diagram illustrating the components of the scene characterization layer, in accordance with the preferred embodiment of the present invention.
- Fig. 5B is a block diagram illustrating the components of the background update layer, in accordance with the preferred embodiment of the present invention.
- Fig. 6 is a block diagram showing the data structures associated with the object tracking apparatus, in accordance with a preferred embodiment of the present invention
- Fig. 7 illustrates the operation of the object tracking method, in accordance with the preferred embodiment of the present invention
- Fig. 8 describes the operation of the reference image learning routine, in accordance with a preferred embodiment of the present invention
- Fig. 9 shows the input and output data structures associated with the pre-processing layer, in accordance with a preferred embodiment of the present invention.
- Figs. 10A, 10B and 10C describe the operational steps associated with the clustering layer, in accordance with the preferred embodiment of the present invention.
- Fig. 11 illustrates the scene characterization, in accordance with the preferred embodiment of the present invention.
- Fig. 12 illustrates the background updating, in accordance with the preferred embodiment of the present invention.
- An object tracking apparatus and method for the detection and tracking of dynamic and static objects is disclosed.
- the apparatus and method may be utilized in a monitoring and surveillance system.
- the surveillance system is operative in the detection of potential alarm situations via a recorded surveillance content analysis and in the management of detected unattended object situations via an alarm distribution mechanism.
- the object tracking apparatus supports the object tracking method that incorporates a unique method for detecting, tracking and counting objects across a sequence of captured surveillance content images. Through the operation of the object tracking method the captured content is analyzed and the results of the analysis provide the option of activating in real time a set of alarm messages to a set of diverse devices via a triggering mechanism.
- the method of the present invention may be implemented in various contexts such as the detection of unattended objects (luggage, vehicles or persons), identification of vehicles parking or driving in restricted zones, access control of persons into restricted zones, prevention of loss of objects (luggage or persons) and counting of persons, as well as in police and fire alarm situations.
- the object tracking apparatus and method described herein may be useful in a myriad of other situations and as a video object analysis tool.
- the monitored content is a stream of video images recorded by video cameras, captured, and sampled by a video capture device and transferred to a video processing unit.
- Each part of this system may be located in a single device or in separate devices located in various locations and inter-connected by hardwire or via wireless connection over local or wide or other networks.
- the video processing unit performs a content analysis of the video frames where the content analysis is based on the object tracking method. The results of the analysis could indicate an alarm situation.
- diverse other content formats are also analyzed, such as thermal based sensor cameras, audio, wireless linked cameras, data produced from motion detectors, and the like.
- An exemplary application that could utilize the apparatus and method of the present invention concerns the detection of unattended objects, such as luggage in a dynamic object-rich environment, such as an airport or city center.
- Other exemplary applications concern the detection of a vehicle parked in a forbidden zone, or the extended-period presence of a non-moving vehicle in a restricted-period parking zone.
- Forbidden or restricted parking zones are typically associated with sensitive traffic-intensive locations, such as a city center.
- Still other applications that could use the apparatus and method include the tracking of objects, such as persons, involved in various scenario models, such as a person leaving a vehicle away from the terminal, which may constitute a suspicious (unpredicted) behavioral pattern.
- the method and apparatus of the present invention is operative in the analysis of a sequence of video images received from a video camera covering a predefined area, referred herein below to as the video scene.
- the object monitored is a combined object comprising an individual and a suitcase where the individual carries the suitcase.
- the combined object may be separated into a first separate object and a second separate object. It is assumed that the individual (second object) leaves the suitcase (first object) on the floor, a bench, or the like.
- the first object remains in the video scene without movement for a pre-defined period of time. It is assumed that the suitcase (first object) was left unattended.
- the second object exits the video scene.
- the system of the present invention generates, displays, and/or distributes an alarm indication.
- a first object, such as a suitcase or a person, is already present and monitored within the video scene. Such an object can be lost luggage located within the airport or a monitored person. The object may merge into a second object.
- the second object can be a person picking up the luggage, another person to whom the first person joins or a vehicle to which the first person enters.
- the first object (now merged with the second object) may move from its original position and exit the scene, or move in a predetermined prohibited direction.
- the application will provide an indication to a human operator.
- the indication may be oral, visual or written.
- the indication may be provided visually on a screen, delivered via communication networks to officers located at the scene or off-premises, or sent via dry contact to an external device such as a siren, a bell, a flashing or revolving light and the like.
- An additional exemplary application that could utilize the apparatus and method of the present invention regards the detection of vehicles parked in restricted areas or moving in restricted lanes.
- the second exemplary application is designed to detect vehicles parking in restricted areas for more than a predefined number of time units and generates an alarm when identifying an illegal parking event of a specific vehicle.
- the system and method of the present invention can detect whether persons disembark or embark a vehicle in predefined restricted zones.
- Other exemplary applications can include the monitoring of persons and objects in city centers, warehouses, restricted areas, borders or checkpoints and the like. It would be easily perceived that for the successful operation of the above-described applications an object tracking apparatus and an object tracking method are required.
- the object tracking method should be capable of detecting moving objects, tracking moving objects and tracking static objects, such as objects that are identified as moving and subsequently identified as non-moving during a lengthy period of time.
- the object tracking method should recognize linked or physically connected objects, be able to recognize the separation of the linked objects, and track the separated objects while retaining the historical connectivity states of the objects.
- the object tracking apparatus and method should further be able to handle occlusions where the tracked objects are occluded by one or more separate objects temporarily, semi-permanently or permanently.
- the image sequence sources 12 are one or more video cameras operating in a security-wise sensitive environment and cover a specific pre-defined visual area that is required to be monitored.
- the area monitored can be any area preferably in a transportation area including an airport, a city center, a building, and restricted or non-restricted areas within buildings or outdoors.
- the image sequence sources 12 could include analog devices and/or digital devices.
- the images provided by the image sequence sources could include normal light, infrared, temperature, or any other form of radiation.
- the image sequence sources 12 continuously acquire and transmit sequences of video images and provide the images simultaneously to an image sequence display device 20 and to a computing and storage device 15.
- the display device 20 could be a video terminal, which is operated by a human operator or any other display device including a display device located on a mobile or hand held device.
- Alarm triggers are generated by the object tracking program 14 installed in the computing and storage device 15 in order to indicate an alarm situation to the operator of the display device 20.
- the alarm may be generated in the form of an audio or any other indication.
- the image sequence sources 12 transmit sequences of video images to an object tracking program 14 via suitably wired connections.
- the images could be provided through an analog interface, a digital interface or through a Local Area Network (LAN) interface or Wide Area Network (WAN), IP, Wireless, Satellite connectivity.
- the object tracking program 14 and the associated control data structures 16 could be installed in distinct platforms and/or devices distributed randomly across a Local Area Network (LAN) that could communicate over the LAN infrastructure or across Wide Area Networks (WAN).
- One example is a Radio Frequency Camera that transmits composite video remotely to a receiving station; the receiving station can be connected to other components of the system via a network or directly.
- the program 14 and the associated control data structures 16 could be installed in distinct platforms and/or devices distributed randomly across very wide area networks such as the Internet.
- Various forms of communication between the constituent parts of the system can be used.
- Such can be a data communication network, which can be connected via landlines or wireless or like communication devices and that can be implemented via TCP/IP protocols and like protocols.
- Other protocols and methods of communications such as cellular, satellite, low band, and high band communications networks and devices will readily be useful in the implementation of the present invention.
- the program 14 and the associated control data structures 16 could be further co-located on the same computing platform or distributed across several platforms for load balancing, redundancy considerations, back-up in the case of equipment failure, and the like.
- the object tracking program 14 includes several application layers. Each application layer is a group of logically and functionally linked computer program components responsible for different aspects of the application within the apparatus of the present invention.
- the object tracking program 14 includes a configuration layer 38, a pre-processing layer 42, an objects clustering layer 44, a scene characterization layer 46, and a background updating layer 48.
- Each layer is a computer program executing within the computerized environment shown in detail in association with the description of Fig. 1.
- the configuration layer 38 is responsible for the initialization of the apparatus of the present invention in accordance with specific user-defined parameters.
- the pre-processing layer 42 is operative in constructing difference images between a currently captured video frame and previously constructed reference images.
- the objective of the objects clustering layer 44 is to generate new and or updated objects from the difference images and the existing objects.
- the scene characterization layer 46 uses the objects generated by the objects clustering layer 44 to describe the monitored scene.
- the layer 46 also includes a triggering mechanism that compares the behavior pattern and other characteristics of the objects to pre-defined behavior patterns and characteristics in order to create alarm triggers.
- the background updating layer 48 updates the reference images for the processing of the next frame.
- the configuration layer 38 comprises a reference image constructor component 50, a timing parameters definer component 52, and a visual parameters definer component 54.
- the reference image constructor component 50 is responsible for the acquisition of the background model.
- the reference image is generated in accordance with a predefined option.
- the component 50 includes a current frame capture module 56, a reference image loading module 60, and a reference image learning module 62.
- the reference image may be created alternatively from: a) a currently captured frame, b) an existing reference image, or c) a reference image learning module.
- the current frame capture module 56 provides a currently captured frame to be used as the reference image.
- the currently captured frame can be a frame from any camera covering the scene.
- the module 64 derives the camera tilt in accordance with the measurements taken by a user of an arbitrary object located at different locations in the monitored scene.
- the module 65 defines the maximum, the minimum and the typical size of the objects to be tracked.
- the region location definition module 66 provides the definition of the location of one or more regions-of-interest in the scene.
- the region type definition module 67 enables the user to define a region of interest as "objects track region" or "no objects track region".
- the alarm type definition module 68 defines a region of interest as "trigger alarm in region" or "no alarm trigger in region", in accordance with the definitions of the user. Referring now to Fig. 4A, there is shown a block diagram illustrating the components of the pre-processing layer, in accordance with the preferred embodiment of the present invention.
- the pre-processing layer 42 comprises a current frame handler 212, a short-term reference image handler 214, a long-term reference image handler 216, a pre-processor module, a short-term difference image updater 220, and a long-term difference image updater 222.
- Each module is a computer program operative to perform one or more tasks in association with the computerized system of Fig. 1.
- the current frame handler 212 obtains a currently captured frame and passes the frame to the preprocessor module 218.
- the short-term reference handler 214 loads an existing short-term reference image and passes the frame to the pre-processor module 218.
- the handler 214 could further provide calculations concerning the moments of the short term reference image.
- the long-term reference handler 216 loads an existing long-term reference image and passes the frame to the pre-processor module 218.
- the handler 216 could further provide calculations concerning the moments of the long term reference image.
- the pre-processor module 218 uses the current frame and the obtained reference images as input for processing.
- the process generates a new short-term difference image and a new long-term difference image and subsequently passes the new difference images to the short-term reference image updater (handler) 220 and the long-term difference image updater (handler) 222 respectively.
- the updater 220 and the updater 222 update the existing short-term reference image and the existing long-term reference image respectively.
- the clustering layer 44 comprises an object merger module 231, an objects group builder module 232, an objects group adjuster module 234, a new objects creator module 236, an object searcher module 240, a Kalman filter module 242, and an object status updater 254.
- Each module is a computer program operative to perform one or more tasks in association with the computerized system of Fig. 1.
- the object merger module 231 corrects clustering errors by the successive merging of partially overlapping objects having the same motion vector for a pre-defined period.
- the objects group builder 232 is responsible for creating groups of close objects by using neighborhood relations among the objects.
- the object group adjuster 234 initiates a group adjustment process in order to find the optimal spatial parameters of each object in a group.
- the new objects constructor module 236 constructs new objects from the difference images, controls the operation of a specific object location and size finder function and adjusts new objects.
- the new objects may be constructed from the difference images whether or not there are existing objects to compare against. For example, when the system begins operation a new object may be identified even if there are no previously acquired and existing objects.
- the object searcher 240 scans a discarded objects archive in order to attempt to locate recently discarded objects with parameters (such as spatial parameters) similar to a newly created object.
- a Kalman filter module 242 is utilized to track the motion of the objects.
- the object status updater 254 is responsible for modifying the status of the object from “static” to “dynamic” or from “dynamic” to “static”. A detailed description of the clustering layer 44 will be set forth herein under in association with the following drawings.
- the scene characterization layer 46 comprises an object movement measurement module 242, an object merger module 244, and a triggering mechanism 246.
- the object movement measurement module 242 analyzes the changes in the spatial parameters of an object and determines whether the object is moving or stationary.
- the object merger module 244 is responsible for correcting errors to objects as a result of the clustering stage.
- the functionality of the triggering mechanism 246 is to check each object against the spatio-temporal behavior patterns and properties defined as "suspicious" or as alarm triggering. When a suitable match is found the mechanism 246 generates an alarm trigger.
- the object tracking control structures 16 of Fig. 1 comprise a long-term reference image 70, a short-term reference image 72, an objects table 74, a sophisticated absolute distance (SAD) short-term map 76, a sophisticated absolute distance (SAD) long-term map 78, a discarded objects archive 82, and a background draft 84.
- the long-term reference image 70 includes the background image of the monitored scene without the dynamic and without the static objects tracked by the apparatus and method of the present invention.
- the short-term reference image 72 includes the scene background image and the static objects tracked by the object tracking method.
- the provided information enables the method to decide which regions of the frame to work on and in which regions an alert situation should be produced.
- the configuration step optionally includes a reference image learning step (not shown) in which the background image is adaptively learned in order to construct a long-term and a short-term reference image from a temporally consecutive sequence of captured images.
- the long-term reference image is copied and maintained as a short-term reference picture.
- the long-term reference image contains no objects while the short-term reference image includes static objects, such as objects that have been static for a pre-defined period. In the preferred embodiment of the invention, the length of the pre-defined period is one minute while in other preferred embodiments other time values could be used.
- the long-term reference image and the short-term reference image are updated for background changes, such as changes in the illumination artifacts associated with the image (lights or shadows), constantly moving objects (such as trees), and the like.
- the video frame pre-processing phase 88 uses a currently captured frame and the short-term and long-term reference images for generating new short-term and long-term difference images.
- the difference images represent the difference between the currently captured frame and the reference images.
- the reference images can be obtained from one of the image sequence sources described in association with Fig. 1 or could be provided directly by a user or by another system associated with the system of the present invention.
- the difference images are suitably filtered or smoothened.
- the clustering phase 90 generates new or updated objects from the difference images and from the previously generated or updated objects.
- an image acquiring device such as a video camera
- the object tracking method requires a pre-pre-processing phase configured such as to compensate for the potential camera movements between the capture of the reference images and the capture of each current frame.
- the pre-pre-processing phase involves an estimation of the relative overall frame movement (registration) between the current frame and the reference images. Consequent to the estimation of the registration (in terms of pixel offset) the offset is applied to the reference images in order to extract "in-place" reference images for the object tracking to proceed in a usual manner.
- extended reference images have to be used, allowing for margins (the content of which may be constantly updated) up to the maximal expected registration.
- the estimation of the registration (offset) between the current frame and the reference images involves a separate estimation of the x and y offset components, and a joint estimation of the x and y offset components.
- in the separate estimation, selected horizontal and vertical stripes of the current frame and the reference images are averaged with appropriate weighting, and cross-correlated in search of a maximum match in the x and y offsets, respectively.
- in the joint estimation, diagonal stripes are used (in both diagonal directions), from which the x and y offsets are jointly estimated. The resulting estimates are then averaged to produce the final estimate.
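- To make the stripe-based registration concrete, the following minimal sketch cross-correlates averaged column and row profiles to estimate the x and y offsets separately; the search range, profile averaging and weighting here are assumptions rather than the patented implementation.

```python
import numpy as np

def estimate_offset_1d(profile_cur, profile_ref, max_shift=16):
    """Find the integer shift of profile_cur relative to profile_ref that
    maximizes their (mean-removed) correlation."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = profile_cur[max(0, s):len(profile_cur) + min(0, s)]
        b = profile_ref[max(0, -s):len(profile_ref) + min(0, -s)]
        score = np.dot(a - a.mean(), b - b.mean())
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

def estimate_registration(current, reference, max_shift=16):
    """Estimate (dx, dy) between a current frame and a reference image by
    cross-correlating averaged column and row intensity profiles."""
    col_cur, col_ref = current.mean(axis=0), reference.mean(axis=0)  # vertical profile -> x offset
    row_cur, row_ref = current.mean(axis=1), reference.mean(axis=1)  # horizontal profile -> y offset
    dx = estimate_offset_1d(col_cur, col_ref, max_shift)
    dy = estimate_offset_1d(row_cur, row_ref, max_shift)
    return dx, dy
```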
- Fig. 8 describes the operation of the reference image learning routine, in accordance with a preferred embodiment of the present invention.
- the construction of the long-term and short term reference images could be carried out in several alternative ways.
- a currently captured frame could be stored on a memory device as the long-term reference image.
- a previously stored long-term reference image could be loaded from the memory device in order to be used as the current long-term reference image respectively.
- a specific reference image learning process could be activated (across steps 100 through 114).
- in step 100 the reference image learning process is performed across a temporally consecutive sequence of captured images, where each of the frames is divided into macro blocks (MB) having a pre-defined size, such as 16x16 pixels or 32x32 pixels, or any other like division into macro blocks.
- each MB is examined for motion vectors. The motion is detected by comparing the MB in a specific position in the currently captured frame to the MB in the same position in the previously captured frame.
- the number of frames in the sequence is about 150 frames, while in other preferred embodiments of the invention different values could be used. c) Background MB 106: an MB for which no motion vector was detected across the previously captured sequence of temporally consecutive frames.
- in step 110 the values of each of the pixels in an MB that was identified as a Background MB are obtained, and in step 112 the values are averaged in time.
- in step 114 initial short-term and long-term reference images are generated from the values averaged in time.
- the short-term reference image is created such that it contains the averages of the values of pixels in time.
- the pixels are examined in order to find which pixels had insufficient background time (MBs that were always in motion). Pixels without sufficient background time are given the value from the short-term reference image.
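- As a rough illustration of the reference image learning routine, the sketch below classifies macro-blocks without inter-frame motion as background and averages their pixels in time; the macro-block size follows the text, while the motion threshold and the fallback for pixels with insufficient background time are assumptions.

```python
import numpy as np

def learn_reference_images(frames, mb=16, motion_thresh=10.0):
    """Classify macro-blocks with no inter-frame motion as background, average
    their pixels over time, and build initial reference images (sketch only)."""
    h, w = frames[0].shape
    acc = np.zeros((h, w), dtype=np.float64)    # sum of background pixel values
    count = np.zeros((h, w), dtype=np.float64)  # number of frames each pixel was background
    for prev, cur in zip(frames[:-1], frames[1:]):
        for y in range(0, h, mb):
            for x in range(0, w, mb):
                block_prev = prev[y:y + mb, x:x + mb].astype(np.float64)
                block_cur = cur[y:y + mb, x:x + mb].astype(np.float64)
                if np.abs(block_cur - block_prev).mean() < motion_thresh:  # no motion -> background MB
                    acc[y:y + mb, x:x + mb] += block_cur
                    count[y:y + mb, x:x + mb] += 1
    # Pixels with no background time fall back to the last frame (an assumption).
    long_term = np.where(count > 0, acc / np.maximum(count, 1), frames[-1])
    short_term = long_term.copy()
    return long_term, short_term
```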
- the pre-processing step 88 of Fig. 6 employs the current frame 264 and the short-term reference image 262 to generate a short-term difference image 270.
- the step 88 further uses the current frame 264 and the long-term reference image 266 to generate a long-term difference image 272.
- the long-term 272 and short-term 270 difference images represent respectively the sophisticated absolute difference (SAD) between the current frame 264 and the long-term 266 and the short-term 262 reference images.
- the size of the difference images (referred to hereinafter as SAD maps) 270, 272 is equal to the size of the current frame 264.
- Each pixel in the SAD maps 270, 272 is provided with an arbitrary value in the range of 1 through 101. Other values may be used instead. High values indicate a substantial difference between the value of the pixel in the reference images 262, 266 and the value of the pixel in the currently captured frame 264. Thus, the score indicates the probability of the pixel belonging either to the scene background or to an object.
- the generation of the SAD maps 270, 272 is achieved by performing one of two alternative methods.
- the values of x, y concern the pixel coordinates.
- the values of Ymin and of Ymax represent the lower and the higher luminance levels at (x, y) between the current frame 264 and the reference images 262, 266.
- the values of a0, a1, and a3 are thresholds designed to minimize D(x, y) for similar pixels and maximize it for non-similar pixels. Consequent to the performance of the above equation for each of the pixels and to the generation of the SAD maps 270, 272, the SAD maps 270, 272 are filtered for smoothing with two Gaussian filters, one in the X coordinate and the second in the Y coordinate.
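- Since the exact D(x, y) formula and the threshold values a0, a1 and a3 are not reproduced above, the following sketch substitutes a simple luminance-ratio score scaled to the stated 1 through 101 range, followed by the X and then Y Gaussian smoothing; it is an assumption-laden stand-in, not the patented computation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def sad_map(current, reference):
    """Build a per-pixel difference score in 1..101 between the current frame
    and a reference image, then smooth along X and then Y with Gaussian filters."""
    cur = current.astype(np.float64)
    ref = reference.astype(np.float64)
    y_min = np.minimum(cur, ref)
    y_max = np.maximum(cur, ref)
    # Assumed score: identical pixels map to 1, strongly differing pixels toward 101.
    score = 1.0 + 100.0 * (1.0 - (y_min + 1.0) / (y_max + 1.0))
    score = gaussian_filter1d(score, sigma=1.0, axis=1)  # smooth in X
    score = gaussian_filter1d(score, sigma=1.0, axis=0)  # smooth in Y
    return score
```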
- M00 is the sum of all the pixels around the given pixel
- M10 is the sum of all the pixels around the given pixel each multiplied by a filter that detects horizontal edges
- M01 is the sum of all the pixels around a given pixel, each multiplied by a filter that detects vertical edges.
- Tmp1(x, y) = A0 * (D00(x, y) + W0) - Min(x, y)
- Tmp2(x, y) = A1 * D10(x, y) + W1
- Tmp3(x, y) = A1 * D01(x, y) + W1
- the method takes into consideration the texture of the current frame 264 and the reference images 262, 266 and compares there between.
- the second pre-processing method is favorable since it is less sensitive to light changes
- the prediction step 120 is performed before the adjustment of the objects and the update step 125 is performed after the creation of a new object.
- the Kalman state of the object is updated in accordance with the adjusted parameters of the object.
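- The prediction and update steps can be illustrated with a conventional constant-velocity Kalman filter over the object's center; the state layout and the process and measurement noise values are assumptions, not taken from the patent.

```python
import numpy as np

class CenterKalman:
    """Constant-velocity Kalman filter over an object's center (x, y, vx, vy)."""
    def __init__(self, x, y, q=1e-2, r=1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)  # process noise (assumed value)
        self.R = r * np.eye(2)  # measurement noise (assumed value)

    def predict(self):
        # Prediction step: run before the object adjustment.
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, x, y):
        # Update step: run after adjustment, with the adjusted center as measurement.
        z = np.array([x, y])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ (z - self.H @ self.state)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```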
- the status of the object is updated.
- the changing of the object status from "dynamic" status to "static" status is performed as follows: If the value of the non-moving counter associated with the object exceeds a specific threshold then the status of the object is set to "static".
- the dead-area (described in the clustering step) is calculated and saved. The pixels that are bounded within the object are copied from the background draft to the short-term reference image. Subsequently, the status of the object is set to "static". Static objects are not adjusted until their status is changed back to "dynamic".
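- A compact sketch of the dynamic-to-static status switch driven by the non-moving counter is given below; the counter threshold is a user-configurable value and the one shown is an assumption.

```python
def update_object_status(obj, static_threshold=50):
    """Switch an object from 'dynamic' to 'static' once its non-moving counter
    exceeds a threshold (sketch; the threshold value is an assumption)."""
    if obj["status"] == "dynamic" and obj["non_moving_counter"] > static_threshold:
        obj["status"] = "static"  # static objects are no longer adjusted
    return obj["status"]
```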
- Referring to Fig. 10B, at step 126 the objects groups are built.
- An object-specific bounding ellipse represents each object. The functionality, structure and operation of the ellipse will be described herein after in association with the following drawings. Every two objects are identified as neighbors if the minimum distance between their bounding ellipses is up to about 4 pixels. Using the neighborhood relations between every two objects, the object groups are built. Note should be taken that static objects are not adjusted.
- the parameters of the existing dynamic objects are adjusted in order to perform tracking of the objects detected in the previously captured video frames. The objects are divided into groups according to their locations. Objects of a group are close to each other and may occlude each other. Objects belonging to different groups are distant from each other.
- the adjustment of the object parameters is performed for every group of objects separately.
- the adjustment to groups of objects enables appropriate handling of occlusion situations.
- groups of objects are built.
- Each object is represented by a bounding marker, which is a distinct artificially generated graphical structure, such as an ellipse.
- a pair of objects is identified as two neighboring members if the minimum distance between their marker ellipses is up to a pre-defined number of pixels. In the preferred embodiment of the invention the pre-defined number of pixels is 4 while in other embodiments different values could be used.
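- Group building from these pairwise neighborhood relations can be sketched with a union-find pass; the 4-pixel threshold follows the text, while the ellipse_distance function is a hypothetical helper supplied by the caller.

```python
def build_object_groups(objects, ellipse_distance, max_gap=4):
    """Group objects whose bounding ellipses lie within max_gap pixels of each
    other, using union-find over pairwise neighborhood relations."""
    parent = list(range(len(objects)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if ellipse_distance(objects[i], objects[j]) <= max_gap:  # neighbors
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(objects)):
        groups.setdefault(find(i), []).append(objects[i])
    return list(groups.values())
```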
- the object groups are adjusted.
- the object group adjustment process determines the optimal spatial parameters of each object in the objects group.
- Each set of spatial parameter values of all the objects in a given objects group is scored.
- the purpose of the adjustment process is to find the spatial parameters of each object in a group, such that the total score of the group is maximized.
- the initial parameters are the values generated for the previously captured frame.
- the initial base score is derived from a predictive Kalman filter.
- a pre-defined number of geometric operations are performed on the objects. The operations effect changes in the parameters of every object in the group.
- Various geometric operations could be used, such as translation, scaling (zooming), rotation, and the like.
- the number of geometric operations applied to the object is 10 while in other preferred embodiments different values could be applied.
- every ellipse parameter is changed according to the movement thereof as derived by a Kalman filter used to track the object. If the score of the group is higher than the base score, the change is applied and the new score becomes the base score.
- each pixel that is associated with more than one object in the group will contribute its score only once and not for every member object that wraps it.
- the contribution of each pixel in the SAD map to the total score of the group is set in accordance with the value of the pixel.
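- The group adjustment can be read as a greedy hill-climb: candidate geometric operations are applied one object at a time and a change is kept only when the total group score improves. The sketch below assumes caller-supplied candidate_ops and score_group functions; both names are hypothetical placeholders for the patented operations and SAD-based scoring.

```python
def adjust_group(group, candidate_ops, score_group, max_rounds=10):
    """Greedy adjustment of a group's objects (plain dicts of spatial parameters):
    apply candidate geometric operations (translation, scaling, rotation, ...)
    and keep a change only when the total group score increases."""
    base_score = score_group(group)
    for _ in range(max_rounds):
        improved = False
        for obj in group:
            for op in candidate_ops:
                trial = op(obj)  # op returns a modified copy of the object dict
                new_score = score_group([trial if o is obj else o for o in group])
                if new_score > base_score:
                    obj.update(trial)      # accept the change
                    base_score = new_score  # the new score becomes the base score
                    improved = True
        if not improved:
            break
    return base_score
```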
- the spider structure is re-built around the object center pixel coordinates. Extending the about 16 extensible members of the spider structure yields two arrays, Y[16] and X[16]. If the spatial extent of the spider structure is sufficient, the parameters of the boundary ellipse are calculated. If the spatial extent of the spider overlaps the area of an existing object, the new object will not be created unless its size is above a minimum threshold.
- the algorithm checks whether there is motion inside the object ellipse. If in at least 12 of the last 16 frames there was motion in the object, it is considered as a moving object. Consequently, the value of the non-moving counter is divided by 2.
- an object merging mechanism is activated. There are cases in which an element in the monitored scene, such as a person or a car, is represented by 2 objects whose ellipses are partially overlapping due to clustering errors. The object merging mechanism is provided for the handling of the situation.
- the background draft frame is updated.
- the background draft frame is continuously updated from the current frame in all macro-blocks (16 x 16 pixels or the like) in which there was no motion for several frames.
- Each pixel in the background draft is updated by utilizing the following calculation (16):
- Background Draft(x, y) = Background Draft(x, y) + sgn(Current Frame(x, y) - Background Draft(x, y))
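- Equation (16) translates directly into a per-pixel update that moves each background-draft pixel one gray level toward the current frame; the construction of the motion-free macro-block mask is assumed here.

```python
import numpy as np

def update_background_draft(draft, current, no_motion_mask):
    """Move each background-draft pixel one gray level toward the current frame,
    but only inside macro-blocks flagged as motion-free (equation (16))."""
    step = np.sign(current.astype(np.int16) - draft.astype(np.int16))
    updated = draft.astype(np.int16) + step * no_motion_mask.astype(np.int16)
    return np.clip(updated, 0, 255).astype(draft.dtype)
```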
- the short-term reference image is updated at step 200.
- the update of each pixel in short-term reference image is performed in accordance with the values of the pixel in the SAD map and in the objects map. In the update calculations the following variables are used:
- the applications include city centers, airports, secure locations, hospitals, warehouses, borders and other restricted areas or locations, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US35420902P | 2002-02-06 | 2002-02-06 | |
US354209P | 2002-02-06 | ||
WOPCT/IL02/01042 | 2002-12-26 | ||
PCT/IL2002/001042 WO2003067360A2 (en) | 2002-02-06 | 2002-12-26 | System and method for video content analysis-based detection, surveillance and alarm management |
PCT/IL2003/000097 WO2003067884A1 (en) | 2002-02-06 | 2003-02-06 | Method and apparatus for video frame sequence-based object tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1472870A1 (en) | 2004-11-03 |
EP1472870A4 (en) | 2006-11-29 |
Family
ID=27736268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03704983A Withdrawn EP1472870A4 (en) | 2002-02-06 | 2003-02-06 | Method and apparatus for video frame sequence-based object tracking |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1472870A4 (en) |
AU (1) | AU2003207979A1 (en) |
WO (1) | WO2003067884A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782568A (en) * | 2018-07-13 | 2020-02-11 | 宁波其兰文化发展有限公司 | Access control system based on video photography |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7953219B2 (en) | 2001-07-19 | 2011-05-31 | Nice Systems, Ltd. | Method apparatus and system for capturing and analyzing interaction based content |
US7728870B2 (en) | 2001-09-06 | 2010-06-01 | Nice Systems Ltd | Advanced quality management and recording solutions for walk-in environments |
US7573421B2 (en) | 2001-09-24 | 2009-08-11 | Nice Systems, Ltd. | System and method for the automatic control of video frame rate |
US7436887B2 (en) | 2002-02-06 | 2008-10-14 | Playtex Products, Inc. | Method and apparatus for video frame sequence-based object tracking |
AU2002361483A1 (en) | 2002-02-06 | 2003-09-02 | Nice Systems Ltd. | System and method for video content analysis-based detection, surveillance and alarm management |
WO2003074326A1 (en) | 2002-03-07 | 2003-09-12 | Nice Systems Ltd. | Method and apparatus for internal and external monitoring of a transportation vehicle |
US6822969B2 (en) | 2003-04-03 | 2004-11-23 | Motorola, Inc. | Method and apparatus for scheduling asynchronous transmissions |
WO2004090770A1 (en) | 2003-04-09 | 2004-10-21 | Nice Systems Ltd. | Apparatus, system and method for dispute resolution, regulation compliance and quality management in financial institutions |
US7546173B2 (en) | 2003-08-18 | 2009-06-09 | Nice Systems, Ltd. | Apparatus and method for audio content analysis, marking and summing |
US20050073585A1 (en) * | 2003-09-19 | 2005-04-07 | Alphatech, Inc. | Tracking systems and methods |
AU2003276661A1 (en) | 2003-11-05 | 2005-05-26 | Nice Systems Ltd. | Apparatus and method for event-driven content analysis |
WO2006021943A1 (en) | 2004-08-09 | 2006-03-02 | Nice Systems Ltd. | Apparatus and method for multimedia content based |
US8724891B2 (en) | 2004-08-31 | 2014-05-13 | Ramot At Tel-Aviv University Ltd. | Apparatus and methods for the detection of abnormal motion in a video stream |
FR2875629B1 (en) * | 2004-09-23 | 2007-07-13 | Video & Network Concept Sarl | VIDEO SURVEILLANCE INDEXING SYSTEM |
CN100452871C (en) * | 2004-10-12 | 2009-01-14 | 国际商业机器公司 | Video analysis, archiving and alerting methods and apparatus for a video surveillance system |
US8078463B2 (en) | 2004-11-23 | 2011-12-13 | Nice Systems, Ltd. | Method and apparatus for speaker spotting |
EP1867167A4 (en) | 2005-04-03 | 2009-05-06 | Nice Systems Ltd | Apparatus and methods for the semi-automatic tracking and examining of an object or an event in a monitored site |
US7386105B2 (en) | 2005-05-27 | 2008-06-10 | Nice Systems Ltd | Method and apparatus for fraud detection |
US7716048B2 (en) | 2006-01-25 | 2010-05-11 | Nice Systems, Ltd. | Method and apparatus for segmentation of audio interactions |
US7476013B2 (en) | 2006-03-31 | 2009-01-13 | Federal Signal Corporation | Light bar and method for making |
US9002313B2 (en) | 2006-02-22 | 2015-04-07 | Federal Signal Corporation | Fully integrated light bar |
US9346397B2 (en) | 2006-02-22 | 2016-05-24 | Federal Signal Corporation | Self-powered light bar |
US7746794B2 (en) | 2006-02-22 | 2010-06-29 | Federal Signal Corporation | Integrated municipal management console |
US8725518B2 (en) | 2006-04-25 | 2014-05-13 | Nice Systems Ltd. | Automatic speech analysis |
WO2007135656A1 (en) | 2006-05-18 | 2007-11-29 | Nice Systems Ltd. | Method and apparatus for combining traffic analysis and monitoring center in lawful interception |
US7885429B2 (en) | 2006-06-08 | 2011-02-08 | General Electric Company | Standoff detection systems and methods |
US7822605B2 (en) | 2006-10-19 | 2010-10-26 | Nice Systems Ltd. | Method and apparatus for large population speaker identification in telephone interactions |
US7631046B2 (en) | 2006-10-26 | 2009-12-08 | Nice Systems, Ltd. | Method and apparatus for lawful interception of web based messaging communication |
US7577246B2 (en) | 2006-12-20 | 2009-08-18 | Nice Systems Ltd. | Method and system for automatic quality evaluation |
US8571853B2 (en) | 2007-02-11 | 2013-10-29 | Nice Systems Ltd. | Method and system for laughter detection |
US7925112B2 (en) | 2007-02-28 | 2011-04-12 | Honeywell International Inc. | Video data matching using clustering on covariance appearance |
US7898576B2 (en) | 2007-02-28 | 2011-03-01 | Honeywell International Inc. | Method and system for indexing and searching objects of interest across a plurality of video streams |
US7599475B2 (en) | 2007-03-12 | 2009-10-06 | Nice Systems, Ltd. | Method and apparatus for generic analytics |
JP5264582B2 (en) * | 2008-04-04 | 2013-08-14 | キヤノン株式会社 | Monitoring device, monitoring method, program, and storage medium |
FR2932278B1 (en) * | 2008-06-06 | 2010-06-11 | Thales Sa | METHOD FOR DETECTING AN OBJECT IN A SCENE COMPRISING ARTIFACTS |
FR2987537B1 (en) * | 2012-02-23 | 2015-12-25 | Cliris | METHOD AND SYSTEM FOR SUPERVISION OF A SCENE, IN PARTICULAR IN A SALE SITE |
CN103778442B (en) * | 2014-02-26 | 2017-04-05 | 哈尔滨工业大学深圳研究生院 | A kind of central air-conditioner control method analyzed based on video demographics |
US9996749B2 (en) | 2015-05-29 | 2018-06-12 | Accenture Global Solutions Limited | Detecting contextual trends in digital video content |
CN110782554B (en) * | 2018-07-13 | 2022-12-06 | 北京佳惠信达科技有限公司 | Access control method based on video photography |
CN112380971B (en) * | 2020-11-12 | 2023-08-25 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
CN113794860A (en) * | 2021-09-07 | 2021-12-14 | 杭州天宽科技有限公司 | Power supply network safety monitoring system based on cloud service |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5847755A (en) * | 1995-01-17 | 1998-12-08 | Sarnoff Corporation | Method and apparatus for detecting object movement within an image sequence |
WO2000079366A1 (en) * | 1999-06-21 | 2000-12-28 | Catherin Mitta | Method for the personal identification of mobile users |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5091780A (en) * | 1990-05-09 | 1992-02-25 | Carnegie-Mellon University | A trainable security system emthod for the same |
CA2054344C (en) * | 1990-10-29 | 1997-04-15 | Kazuhiro Itsumi | Video camera having focusing and image-processing function |
DE69124777T2 (en) * | 1990-11-30 | 1997-06-26 | Canon Kk | Device for the detection of the motion vector |
-
2003
- 2003-02-06 EP EP03704983A patent/EP1472870A4/en not_active Withdrawn
- 2003-02-06 WO PCT/IL2003/000097 patent/WO2003067884A1/en not_active Application Discontinuation
- 2003-02-06 AU AU2003207979A patent/AU2003207979A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5847755A (en) * | 1995-01-17 | 1998-12-08 | Sarnoff Corporation | Method and apparatus for detecting object movement within an image sequence |
WO2000079366A1 (en) * | 1999-06-21 | 2000-12-28 | Catherin Mitta | Method for the personal identification of mobile users |
Non-Patent Citations (1)
Title |
---|
See also references of WO03067884A1 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782568A (en) * | 2018-07-13 | 2020-02-11 | 宁波其兰文化发展有限公司 | Access control system based on video photography |
CN110782568B (en) * | 2018-07-13 | 2022-05-31 | 深圳市元睿城市智能发展有限公司 | Access control system based on video photography |
Also Published As
Publication number | Publication date |
---|---|
AU2003207979A1 (en) | 2003-09-02 |
WO2003067884A1 (en) | 2003-08-14 |
EP1472870A1 (en) | 2004-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7436887B2 (en) | Method and apparatus for video frame sequence-based object tracking | |
US9052386B2 (en) | Method and apparatus for video frame sequence-based object tracking | |
WO2003067884A1 (en) | Method and apparatus for video frame sequence-based object tracking | |
US9363487B2 (en) | Scanning camera-based video surveillance system | |
US8605155B2 (en) | Video surveillance system | |
US6999600B2 (en) | Video scene background maintenance using change detection and classification | |
CN104378582B (en) | A kind of intelligent video analysis system and method cruised based on Pan/Tilt/Zoom camera | |
US7280673B2 (en) | System and method for searching for changes in surveillance video | |
US7394916B2 (en) | Linking tracked objects that undergo temporary occlusion | |
US7995843B2 (en) | Monitoring device which monitors moving objects | |
US20080117296A1 (en) | Master-slave automated video-based surveillance system | |
US20070058717A1 (en) | Enhanced processing for scanning video | |
KR100777199B1 (en) | Apparatus and method for tracking of moving target | |
JP2007209008A (en) | Surveillance device | |
Xu et al. | Segmentation and tracking of multiple moving objects for intelligent video analysis | |
US20030052971A1 (en) | Intelligent quad display through cooperative distributed vision | |
JP3910626B2 (en) | Monitoring device | |
Lalonde et al. | A system to automatically track humans and vehicles with a PTZ camera | |
Harasse et al. | People Counting in Transport Vehicles. | |
CN118057479A (en) | Multi-camera target tracking and re-identification algorithm and system in public places | |
Baran et al. | Motion tracking in video sequences using watershed regions and SURF features | |
Branca et al. | Human motion tracking in outdoor environment | |
CN118552877A (en) | Digital twin positioning method and system based on BIM and video | |
Harasse et al. | Multiple faces tracking using local statistics | |
Kang et al. | Automatic detection and tracking of security breaches in airports |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20040709 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK RO |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: LACHOVER, BOAZ Inventor name: KOREN-BLUMSTEIN, GUY Inventor name: DVIR, IGAL Inventor name: YEREDOR, ARIE |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20061031 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 5/228 20060101ALI20061025BHEP Ipc: H04N 5/225 20060101ALI20061025BHEP Ipc: H04N 7/12 20060101ALI20061025BHEP Ipc: G06T 7/20 20060101AFI20061025BHEP |
|
17Q | First examination report despatched |
Effective date: 20070322 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20110901 |