WO2010027348A1 - Digital video filter and image processing - Google Patents
Digital video filter and image processing
- Publication number
- WO2010027348A1 (PCT/US2008/010484)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- color
- pixel
- fifo
- processor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
- H04N19/433—Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/28—Indexing scheme for image data processing or generation, in general involving image processing hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
Definitions
- the present invention relates to digital video color filtering and to image processing consisting of hardware and software, and more particularly to the art of image recognition, image identification, and image tracking.
- the present invention relates to the efficient filtering of colored video images, thus eliminating the need for complex Fourier Transforms.
- Fourier Transforms by their nature slow down digital image processing.
- it utilizes a unique computer architecture that resembles a typical car assembly line to identify emergence, disappearance, and directional and rotational changes of multicolored objects in six degrees of freedom.
- Gindele; Edward B. U.S. 20050089240 discloses a method of processing a digital image to improve tone scale, including the steps of: generating a multiresolution image representation of the digital image including a plurality of base digital images and a plurality of residual digital images; applying a texture reducing spatial filter to the base digital images to produce texture reduced base digital images; combining the texture reduced base digital images and the residual digital images to generate a texture reduced digital image; subtracting the texture reduced digital image from the digital image to produce a texture digital image; applying a compressive tone scale function to the texture reduced digital image to produce a tone scale adjusted digital image having a compressed tone scale in at least a portion of the image; and combining the texture digital image with the tone scale adjusted digital image to produce a processed digital image, whereby the contrast of the digital image is improved without compressing the contrast of the texture in the digital image.
- Srinivasan; Sridhar U.S. 20030194009 discloses various techniques and tools for approximate bicubic filtering. For example, during motion estimation and compensation, a video encoder uses approximate bicubic filtering when computing pixel values at quarter-pixel positions in reference video frames. Or, during motion compensation, a video decoder uses approximate bicubic filtering when computing pixel values at quarter-pixel positions.
- a graphics system comprises a graphics processor, a sample buffer, and a sample-to-pixel calculation unit.
- the graphics processor generates samples in response to a received stream of graphics data.
- the sample buffer may be configured to store the samples.
- the sample-to-pixel calculation unit is programmable to generate a plurality of output pixels by filtering the rendered samples using a filter. A filter having negative lobes may be used.
- the graphics system computes a negativity value for a first frame.
- the negativity value measures an amount of pixel negativity in the first frame.
- the graphics system adjusts the filter function and/or filter support in order to reduce the negativity value for subsequent frames.
- Debes; Eric U.S. 7,085,795 discloses an apparatus and method for efficient filtering and convolution of content data are described.
- the method includes organizing, in response to executing a data shuffle instruction, a selected portion of data within a destination data storage device.
- the portion of data is organized according to an arrangement of coefficients within a coefficient data storage device.
- a plurality of summed-product pairs are generated in response to executing a multiply-accumulate instruction.
- the plurality of product pairs is formed by multiplying data within the destination data storage device by coefficients within the coefficient data storage device.
- adjacent summed-product pairs are added in response to executing an adjacent-add instruction.
- the adjacent summed-product pairs are added within the destination data storage device to form one or more data processing operation results. Once the one or more data processing operation results are formed, the results are stored within a memory device.
- Lachine; Vladimir U.S. 20060050083 discloses a method and system for circularly symmetric anisotropic filtering over an extended elliptical or rectangular footprint in single-pass digital image warping.
- the filtering is performed by first finding and adjusting an ellipse that approximates a non-uniform image scaling function in a mapped position of an output pixel in the input image space.
- a linear transformation from this ellipse to a unit circle in the output image space is determined to calculate the input pixel radii inside the footprint and the corresponding filter coefficients as a function of the radius.
- the shape of the footprint is determined as a trade-off between image quality and processing speed.
- profiles of smoothing and warping components are combined to produce a sharper or detail-enhanced output image.
- the method and system of the invention produce a natural output image without jagging artifacts, while maintaining or enhancing the sharpness of the input image.
- MacInnis; Alexander G. U.S. 20040181564 discloses a system and method of data unit management in a decoding system employing a decoding pipeline. Each incoming data unit is assigned a memory element and is stored in the assigned memory element.
- Each decoding module gets the data to be operated on, as well as the control data, for a given data unit from the assigned memory element.
- Each decoding module after performing its decoding operations on the data unit, deposits the newly processed data back into the same memory element.
- the assigned memory locations comprise a header portion for holding the control data corresponding to the data unit and a data portion for holding the substantive data of the data unit.
- the header information is written to the header portion of the assigned memory element once and accessed by the various decoding modules throughout the decoding pipeline as needed.
- the data portion of memory is used/shared by multiple decoding modules.
- Yu; Dahai; U.S. 7,120,286 discloses a method and apparatus for tracing an edge contour of an object in three-dimensional space.
- the method and apparatus are utilized in a computer vision system that is designed to obtain precise dimensional measurements of a scanned object.
- multiple images may be collected and saved for a number of Z heights for a particular position of the XY stage. These saved images can later be used to calculate a focal position for each edge point trial location in the selected XY area rather than requiring a physical Z stage movement.
- a Z height extrapolation based on the Z heights of previous edge points can significantly speed up the searching process, particularly for objects where the Z height change of a contour is gradual and predictable.
- a filter is also disclosed that includes an analyzer, a thresholding circuit, and a synthesizer.
- the analyzer generates a low- frequency component signal and a high-frequency component signal from an input signal.
- the thresholding circuit generates a processed high-frequency signal from the high-frequency component signal, the processed high-frequency signal having an amplitude of zero in those regions in which the high-frequency component signal has an amplitude that is less than a threshold value.
- the synthesizer generates a filtered signal from input signals that include the low-frequency component signal and the processed high-frequency signal. The filtered signal is identical to the input signal if the threshold value is zero.
- the analyzer is preferably constructed from a plurality of
- Kawano; Tsutomu; U.S. 20030095698 discloses a feature extracting method for a radiation image formed by radiation image signals each corresponding to an amount of radiation having passed through a radiographed subject, has plural different feature extracting steps, each of the plural different feature extracting steps having a respective feature extracting condition to extract a respective feature value; a feature value evaluating step of evaluating a combination of the plural different feature values; and a controlling step of selecting at least one feature extracting step from the plural different feature extracting steps based on an evaluation result by the feature value evaluating step, changing the feature extracting condition of the selected feature extracting step and conducting the selected feature extracting step so as to extract a feature value again based on the changed feature extracting condition from the radiation image.
- U.S. 20030052886 discloses a video routing system including a plurality of video routers VR(0), VR(1), . . . , VR(N_R-1) coupled in a linear series. Each video router in the linear series may successively operate on a digital video stream. Each video router provides a synchronous clock along with its output video stream so a link interface buffer in the next video router can capture values from the output video stream in response to the synchronous clock. A common clock signal is distributed to each of the video routers. Each video router buffers the common clock signal to generate an output clock. The output clock is used as a read clock to read data out of the corresponding link interface buffer. The output clock is also used to generate the synchronous clock that is transmitted downstream.
- the image processing system comprises: a device profile storage section which stores ideal-environment-measurement data; a light separating section which derives output light data indicating output light from an image projecting section and ambient light data based on a difference between first and second viewing-environment-measurement data;
- a projection-plane-reflectance estimating section which estimates a reflectance of a projection plane, based on the output light data and the ideal-environment-measurement data;
- a sensing data generating section which generates viewing-environment-estimation data based on the reflectance, the ideal-environment-measurement data and the ambient light data
- an LUT generating section which updates an LUT based on the viewing-environment-estimation data
- a correcting section which corrects image information based on the updated LUT.
- U.S. 7,205,520 discloses a ground-based launch detection system consisting of a sensor grid of electro-optical sensors for detecting the launch of a threat missile which targets commercial aircraft in proximity to a commercial airport or airfield.
- the electro-optical sensors are configured in a wireless network which broadcasts threat lines to neighboring sensors with overlapping fields of view.
- threat data is sent to a centrally located processing facility, which determines which aircraft in the vicinity are targets and sends a dispense-countermeasure signal to those aircraft.
- Nefian; Ara V. U.S. 20040071338 discloses an image processing system useful for facial recognition and security identification obtains an array of observation vectors from a facial image to be identified.
- a Viterbi algorithm is applied to the observation vectors given the parameters of a hierarchical statistical model for each object, and a face is identified by finding a highest matching score between an observation sequence and the hierarchical statistical model.
- MacInnis; Alexander G. U.S. 20030187824 discloses a system and method of data unit management in a decoding system employing a decoding pipeline.
- Each incoming data unit is assigned a memory element and is stored in the assigned memory element.
- Each decoding module gets the data to be operated on, as well as the control data, for a given data unit from the assigned memory element.
- Each decoding module after performing its decoding operations on the data unit, deposits the newly processed data back into the same memory element.
- the assigned memory locations comprise a header portion for holding the control data corresponding to the data unit and a data portion for holding the substantive data of the data unit.
- the header information is written to the header portion of the assigned memory element once and accessed by the various decoding modules throughout the decoding pipeline as needed.
- the data portion of memory is used/shared by multiple decoding modules.
- An apparatus consisting of hardware and software for converting input signals from a video camera or sensors into numerical data in real time, minimizing time latencies.
- the derived data provides identification of objects, and directional as well as rotational parameters of moving objects, in six degrees of freedom.
- the apparatus converts input signals from a video camera or sensors into numerical data in real time to detect, identify, and track dynamically moving objects in 3D space.
- the data also provides the 3D location coordinates of each target and tracks the 3D motion vectors of each individual target.
- the hardware and software architecture is intended to eliminate time latencies between detection, tracking, and reporting of multiple targets moving in six degrees of freedom.
- the apparatus utilizes efficient video filtering hardware that identifies individual prime colors of electromagnetic waves, with the resolution of the least significant bit of the analog-to-digital (A/D) converter.
- the filter also has the capability to filter out unwanted colors, including background colors, and substitute them with any desired color.
- the major difference between this invention and other digital image processing systems is its capability to filter video spectrum pixel colors electronically.
- the resolution of spectrum color filtering, or the number of individual colors that can be distinguished and filtered, is the A/D resolution raised to the power of three, i.e. (2^D)^3, where D is the number of bits in the analog-to-digital converter used. For instance, a 10-bit A/D converter distinguishes approximately (1024 x 1024 x 1024), about one billion, individual colors within the color spectrum. This is a very powerful tool in digital image processing. It eliminates the need for the time-consuming Fourier Transforms used in almost all image processors.
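- As a rough illustration of this arithmetic (a sketch for the reader, not part of the patent), the number of distinguishable colors for a D-bit converter sampling three prime colors can be computed as:

```python
# Illustrative sketch: distinguishable spectrum colors for a D-bit A/D converter.
def spectrum_colors(bits: int) -> int:
    levels = 2 ** bits   # intensity levels per prime color
    return levels ** 3   # three prime colors (e.g. R, G, B)

print(spectrum_colors(10))  # 1073741824 -- roughly one billion colors
```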
- the detection time for each one of the prime color pixels is the throughput delay, or access time, of electronic memory, which is usually on the order of tens of nanoseconds.
- Another major advantage of this invention is that it does not require computation-intensive Fourier Transforms.
- the architecture is intended to minimize the detection time of multiple moving targets in a real-time interactive scenario.
- the architecture is that of distributed processing, acting like an assembly-line processor (similar to a car manufacturing assembly line), in which processors work in conjunction with First-In-First-Out (FIFO) memories placed between consecutive processors.
- the apparatus utilizes special distributed computer hardware that resembles typical assembly-line activities.
- FIFO's are utilized to carry semi-processed data from one processor to another.
- the FIFO's are also used in a unique manner in which identification of the objects is made much easier.
- the activity of each individual processor is made simple enough that a simple state-machine hardware implementation saves time.
- the processors' individual tasks within the processing line provide a means to eliminate processing bottlenecks that are common in most computer architectures.
- Another characteristic of the invention is its capability to measure X, Y, and Z distances as well as rotational vector parameters of moving objects. Another advantage of this invention is its usage of memory for a variety of image processing tasks, avoiding elaborate software programs.
- Another advantage of this invention is the use of a unique distributed computer architecture, similar to car manufacturing assembly lines, wherein each computer performs simple image processing tasks by receiving semi-processed data from one FIFO and writing semi-processed data to the next FIFO in line.
- Figure 1 is the block diagram of the color filter and identifier. It receives analog video signals from a camera or a sensor. It shows two stages in which colors are filtered and identified: the first stage is for identification of prime colors and the second stage is for identification of colors in the color spectrum. Numbers identify prime colors and spectrum colors. As seen from the block diagram, the output is spectrum color numbers and the spectrum color group number.
- Figure 2 is the Video Synchronization and Control Logic Block Diagram.
- Figure 2A shows the hardware method by which a gap is detected to distinguish one object from another.
- Figure 3 is the block diagram of the Real Time Distributed Processor (Assembly Line Processor). It receives video data information from the Color Filter and Identification block diagram of Figure 1. It provides object identification and motion tracking data of distances and rotational motions of moving objects.
- Figure 4 is a drawing of a multicolored cube wherein its midpoints in the X and Y coordinates are shown.
- Figure 4A is a drawing of the multicolored cube of Figure 4, wherein its motion in the z axis, and the area covered by each of its colors in the X and Y coordinates, are shown.
- Figure 5 is a drawing of a multicolored half globe wherein its midpoints in the X and Y coordinates are shown.
- FIG 6 is the flowchart diagram of the Pixel Group Identification Processor, part of the distributed processor shown in Figure 3.
- The processor's function is to provide the reference x midpoint coordinate of objects, which is needed by the next-stage processor, the Midpoint x and y Coordinate Processor.
- Figure 6A is a presentation of the midpoint x reference data generated by Figure 6.
- the presentation aids understanding of the Figure 6 midpoint reference activities. It shows identified pixels of a group of objects in rows, and the definition of a Gap.
- Figure 7 is the x (row) and y (column) Processor flow chart diagram. Its main function is to sort the reference midpoint coordinates of objects based upon their X and Y coordinates.
- Figure 7A shows the input to the Midpoint x, and y (column) Coordinate Processor for two consecutive frames.
- Figure 7B shows output of the Midpoint x, and y (column) Coordinate Processor for two consecutive frames.
- Figure 7C shows the detailed operation of the Midpoint x and y (column) Coordinate Processor.
- FIG. 8 is the Object Identification Processor flow diagram, which identifies objects based upon their emergence on the screen and their disappearance.
- Figure 8A is the input to the Object Identification Processor for two consecutive frames.
- Figure 9 is the block diagram of the Motion Vector Measurement hardware that provides distances as well as rotational parameters of moving objects.
- An apparatus consisting of hardware and software for converting input signals from a video camera or sensors into numerical data representing motion characteristics of multiple moving targets, with minimal latencies.
- the data provides identification of objects, distances (X, Y, Z), as well as rotational parameters of moving objects, in six degrees of freedom.
- the apparatus consists of an efficient video filtering technique that identifies each individual prime color of electromagnetic waves, and the color spectrum with the resolution of the relevant A/D converter raised to the power of three.
- the filter has the capability to filter out unwanted colors, including background colors, and substitute any desired color for transmission.
- In order to meet the stringent latency time requirements of real-time motion detection, the apparatus consists of special distributed-processing computer hardware that resembles typical assembly-line activities. FIFO's are utilized to carry semi-processed data from one processor to another. The FIFO's are also used in a unique manner in which identification of the objects is made much easier. The activity of each individual processor is made simple enough that a state-machine controller/processor hardware implementation replaces typical CPU's. The individual processors' tasks, in conjunction with the use of FIFO's, provide a means to eliminate bottlenecks that are common in most distributed-processor computer architectures.
- the digital prime color intensities are set as an address to an appropriate prime color memory.
- the memory contains the prime color filtering and bandwidth information for each prime color, which has been pre-recorded by the CPU.
- the pre-recorded data of the memory is organized to identify prime color numbers, and prime color's group number.
- the pre-recorded data of the memory also identifies the particular group of any other prime colors.
- the groupings can be from 1 to m, where m is the total number of groups of colors of different objects.
- the memory will indicate if that prime color is to be replaced with another prime color, and provides the desired intensities to replace the detected intensity. Therefore the content of the memory can contain pre-recorded information such as filter identification, color numbers, group numbers, and substitution intensities.
- the prime color numbers of all prime colors (10), and their associated group numbers (11) are set to a color spectrum memory to identify color numbers within the color spectrum.
- the prime color numbers from all three prime color memories are set as an address to a Color Spectrum Memory, wherein the data of the memory indicates identification, the selected color number, the selected color group,
- and the center of the filter bandwidth of each color, i.e. whether it is greater than, less than, or equal to the center of the color within a group of colors in the color spectrum;
- the number of address locations in which the color is to be filtered determines the bandwidth of a color and its group identification.
- the Color Spectrum Memory Filter also contains substitution of any incoming color with another color to be transmitted.
- Identification is made by reading a "0" or a "1" from the data of the memory.
- A"0" represents the prime color is not identified and "1" represent the prime colors intensity is identified.
- the memory also contains prerecorded number associated with that particular prime color intensity.
- Identified prime colors (point 10) are numbered from 1 to n, where n is the total number of filtered prime colors.
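- To make the two-stage lookup concrete, the following is a minimal software sketch of the memory-based filter (an illustration only; the patent describes hardware memories, and the table contents and names here are hypothetical). Each digitized prime-color intensity addresses a per-channel table, and the three resulting prime color numbers jointly address a spectrum table:

```python
import numpy as np

# Stage 1 (hypothetical): each D-bit prime-color intensity addresses a
# per-channel table whose entry is 0 ("not identified") or a prime color
# number 1..n.
D = 8
prime_lut = {c: np.zeros(2 ** D, dtype=np.uint16) for c in "rgb"}
prime_lut["r"][100:120] = 1   # pass one band of red intensities as prime color 1
prime_lut["g"][30:60] = 2
prime_lut["b"][200:230] = 3

# Stage 2 (hypothetical): the three prime color numbers jointly address a
# spectrum table yielding (spectrum color number, group number), or (0, 0)
# if the color is filtered out.
spectrum_lut = {(1, 2, 3): (7, 1)}    # pre-recorded during setup

def filter_pixel(r: int, g: int, b: int):
    pr, pg, pb = prime_lut["r"][r], prime_lut["g"][g], prime_lut["b"][b]
    return spectrum_lut.get((int(pr), int(pg), int(pb)), (0, 0))

print(filter_pixel(110, 40, 210))     # -> (7, 1): spectrum color 7, group 1
```

- The design point this sketch mirrors is that detection is a pair of memory reads per pixel, so the per-pixel cost is the memory access time rather than any transform computation.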
- Figure 2 is the expansion of block 7 in Figure 1. It includes the video frame header detection (61), the frame's row and column counters (62), and the sub-pixel timing counter (63), which are input to the frame reference ROM (65) to provide pixel prime-color designation timing to the filter apparatus (20) and other logical controls.
- FIG 3 is the architectural block diagram of a distributed processor for time-critical digital image processing. Since the architecture of this distributed processor resembles that of a typical assembly line, it is called a Distributed Assembly Line Processor.
- the post-processor of each FIFO reads the semi-processed data from that FIFO and, after further processing, writes it into the next-stage FIFO.
- the Pixel Processor interfaces with the Color Spectrum Filter and with both Video Data FIFO A and Video Data FIFO B. Filtered and identified pixels are read from the Spectrum Filter memory and then loaded into one of the FIFO's. It also interfaces with the Video Synchronization and Control Logic to read the relevant frame timing and write it to the Video Data FIFO's. It is also interfaced to the Gap signal (Figure 2A), receiving a Gap signal from the Gap Detector hardware to append a gap mark and announce the end of detection of a group of colored pixels within a row.
- the Assembly Line Processor's individual processors process the pixels based upon their color and group identifications, and then start processing and identification of colors and objects based upon the x and y frame location coordinates in which they were found.
- the order of coordinates of each pixel is characterized by column first and row second.
- the definition of tasks and functions of each processor and FIFO will become clear in the following sections.
- Utilization of FIFO's provides the advantage that each processor reads and writes data at only two addresses, thus saving the time of updating pointers for data reads and data writes. Since the functions of each processor are kept to a minimum, a memory-based state machine logic can change modes of operation within one clock period, compared with memory-based CPU's that take many clocks to complete an instruction.
- an object is considered to be separated from another object if there are "n" consecutive undetected pixels of a color (or colors) in a row, and "m" consecutive columns of undetected (same-color) pixels, in between colored objects.
- this separation is called a gap.
- the gap is the absence of a specified color pixel, over rows and columns, between another specified color pixel or the same color pixel in the same row.
- a separation and identification of two objects are then declared. • Two-dimensional detections of an object moving in a three-dimensional space are assumed to be in the vicinity of the same location initially detected, for a given frame rate;
- the shift register is loaded with "n", and it is reloaded whenever an identification signal is received from the Spectrum Color Memory. As long as there are consecutive detected pixels in a row, the gap-detect signal remains low; when the count of "n" consecutive undetected pixels is reached, the signal goes high, indicating a separation of two objects.
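- A behavioral software stand-in for this gap detector follows (a sketch only; in the patent this is shift-register hardware, and the value of n is application-dependent):

```python
# Hypothetical behavioral model of the gap detector: the count reloads on
# every detected pixel, and a gap is declared after n consecutive undetected
# pixels, mirroring the shift register reloading on the identification signal.
def find_gaps(row_detected, n):
    """row_detected: booleans, one per pixel position in a row."""
    gaps, misses = [], 0
    for x, detected in enumerate(row_detected):
        if detected:
            misses = 0                  # reload on identification signal
        else:
            misses += 1
            if misses == n:             # gap-detect signal goes high
                gaps.append(x - n + 1)  # first undetected pixel of the gap
    return gaps

row = [True] * 5 + [False] * 4 + [True] * 3
print(find_gaps(row, n=3))              # -> [5]
```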
- the x and y midpoint position of an object moving in a three-dimensional space is its two-dimensional focal-plane midpoint "x" (row) and midpoint "y" (column) captured by the sensors of a camera.
- the midpoint X coordinate of a multicolored object is the midpoint between the smallest (Figure 4A, point 202) and largest (Figure 4A, point 203) pixel x coordinates of any one of its colors detected in any row.
- the midpoint Y coordinate of a multicolored object is the midpoint row, between the first and the last row in which any one of its colors is detected (Figure 4A, points 204 and 205). Referring now to the diagram of Figure 4, we find that the approximate midpoint coordinate of a multicolored cube is the point where the two lines (200) and (201) intersect each other, point 209.
- Figure 4A is another drawing of Figure 4, wherein the distances as well as angles are changed from frame to frame, compared to Figure 4.
- Point (202) is the minimum x (the smallest x coordinate pixel in which the object was detected) and (203) is the maximum x coordinate.
- Point (204) is the first row in which the cube has been detected (minimum y), and point (205) is where detection of the object ends (maximum y).
- Figure 5 is a rendition of a half globe, wherein the midpoint coordinates are identified.
- Points (210, 211, 213, and 214) are the areas in which each color is detected in the X and Y plane. The area under each color is the total pixel count of that color.
- Points (210, 211, 213, and 214) are derived by counting the same-colored pixels detected in one particular frame.
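- Both measurements reduce to simple per-frame bookkeeping, as the following sketch shows (assuming, hypothetically, a boolean detection mask for one color of a group):

```python
import numpy as np

# Illustrative sketch: midpoint and area of one detected color in a frame.
# mask[y, x] is True where a pixel of that color was identified.
def midpoint_and_area(mask: np.ndarray):
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # color not present in this frame
    mid_x = (xs.min() + xs.max()) / 2     # between min/max x (points 202, 203)
    mid_y = (ys.min() + ys.max()) / 2     # between first/last row (points 204, 205)
    area = int(xs.size)                   # total pixel count of that color
    return (mid_x, mid_y), area

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True                     # a detected 3-row by 4-column patch
print(midpoint_and_area(mask))            # -> ((4.5, 3.0), 12)
```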
- the Filter Processor, coupled to the Color Spectrum Filter, reads pixel information from the color spectrum memory whenever the "pixel detect mark" appears at the output of the spectrum color filter (at the appropriate pixel timing), denoting the detection of a pixel color during that pixel timing, and provides the following to the Video Data FIFO: a) the filtered color pixel data; b) the multicolored object's spectrum identity number.
- the Pixel Group Identifier Processor receives filtered color pixels, and the related group number, from the Video Data FIFO.
- Figure 6 represents the flow chart activities of the Pixel Group Identifier Processor. Its function is to identify colors, and groups of colors, belonging to an object within a row. It then provides the reference midpoint location of a group of colors within the row of the frame in which they were found.
- Figure 6A illustrates this reference data.
- this reference midpoint x is only for location identification of a group of color pixels in a row that have the same color and belong to a group of colors. Actual midpoint x identification takes place in the next stage of processing.
- at point 103 it adds up the number of detected pixels (in a row) belonging to the same color of the group.
- the area under each color of an object is needed to detect its rotational vectors.
- the total area under all colors of an object represents its closeness to the detector. This is explained in the Motion Vector Measurement memory section to follow.
- the algorithm also checks for and retains the smallest and the largest x coordinates (of any color in a group in a row). This measurement is later used to find the midpoint coordinate of an object across the following columns.
- it checks to make sure that the detected color belongs to the group of colors associated with an object. This is a double check, in addition to the filter group checking and identification of colors in Figure 1. If it is a color of the same group, it goes back to point (104) to get the next pixel and the group's color number.
- Point (108) is reached when a different group of colors is detected. It does the following:
- • Retains the smallest and the largest x coordinates of a group of colors in a row.
- if the newly detected pixel color is different (does not belong to the same group), it is assumed that a color in a different group of colors has been detected (this is the same as detection of a different object).
- it checks for the gap tag that was appended by the Pixel Processor of Figure 3. If there is a gap, it assumes correct spacing; if no gap is found, it provides an error signal.
- FIG. 6A illustrates the concept of groups of colors that appear on the CCD, and the concept of a Gap between two objects.
- the Pixel Group Processor reads data from the FIFO in a manner in which pixels are read from the first detection in a row to the end of that row, and then on to the next row.
- the output of the Pixel Group Identifier Processor is illustrated in Figure 7A for two consecutive frames.
- the X (row) and Y (column) Coordinate Processor reads the reference midpoint coordinates from the Group Identifier FIFO and sorts them with respect to their relative location coordinates.
- at point 131 it looks for the first midpoint entry and keeps it, to check other entries that are closely related to the first coordinate reading.
- it reads the next entry and, in order to check its position against the first entry, it extends the search range of the second reading by a few +/- n pixels.
- at point 133 it starts from the lowest extended number and checks it against the first entry it received at point 131. If the midpoints are close to each other within +/- n pixel locations, and close to each other within "m" columns (point 134), it transitions to point 135.
- Since the Pixel Identifier Processor starts from the first to the end of the row looking for the reference midpoint, and repeats this for the remaining rows, the order of the received midpoint reference coordinates gives a correlation between the data and the object within the same frame. For convenience, a number is assigned to each group of midpoint x values. The numbers are based upon the first group and the last group of the x midpoints (Figures 4, 4A, and 5).
- at point (136) it checks for the end of the list, and if it is the end, it changes the order of the next FIFO's and goes back to point (130) for the start of the next frame.
- At point (138), if at the end of the +/- n range there is no match between the two midpoint reference x coordinate readings, this indicates that the second reading belongs to a different object, and it transitions to point (139).
- At point (139) it assumes that the midpoint coordinate x identification has ended. It then calculates the real midpoint X coordinate and midpoint Y coordinate of the object and sends the result to the next-stage FIFO. It also marks the second reading as the first and transitions to point (132), to look for a match for the second object.
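- The row-to-row matching described above can be approximated in software as follows (a sketch of the Figure 7 flow; the tolerances n and m are hypothetical tuning values):

```python
# Hypothetical sketch of the X/Y Coordinate Processor: cluster per-row
# reference midpoints into objects when nearby rows agree within +/- n
# pixels, then emit one (midpoint x, midpoint y) per object.
def group_midpoints(entries, n=3, m=2):
    """entries: (row, reference_mid_x) pairs, row-major as read from the FIFO."""
    objects = []                              # each object: list of (row, mid_x)
    for row, mx in entries:
        for obj in objects:
            last_row, last_mx = obj[-1]
            if abs(mx - last_mx) <= n and row - last_row <= m:
                obj.append((row, mx))         # continues an existing object
                break
        else:
            objects.append([(row, mx)])       # no match: a new object begins
    results = []
    for obj in objects:
        rows = [r for r, _ in obj]
        xs = [x for _, x in obj]
        results.append(((min(xs) + max(xs)) / 2, (min(rows) + max(rows)) / 2))
    return results

entries = [(0, 10), (0, 40), (1, 11), (1, 41), (2, 12)]
print(group_midpoints(entries))               # -> [(11.0, 1.0), (40.5, 0.5)]
```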
- the X and Y Coordinate Processor reduces the amount of data between rows belonging to a group of colors (an object).
- Figure 7A is the illustration of the pixel grouping within a row, followed by the next row (next column), for two frames. It indicates the emergence of a new object and the disappearance of an object, which are input to the X and Y Coordinate Processor.
- Figure 7B is the presentation of the result of the processing by the X and Y Coordinate Processor, in which each object is represented by a point that is the midpoint x and midpoint y of the object within the frame's coordinate reference.
- the Object Identification Processor reads X and Y midpoint coordinate information from the X and Y Coordinate FIFO. It essentially compares the coordinates from the new FIFO (new frame) to the old FIFO (old frame) and decides whether the new coordinates in the new frame are equal to, smaller than, or larger than those of the old frame.
- the processor starts with the new Y coordinates, developed in the previous process, and after extending the search range of the new row coordinate by +/-m, it starts comparison of the rows.
- Search range +/-m is to make sure that small motion changes from one frame to another frame are accounted for.
- at point (152), if the comparison does not succeed, it will increase the range of the search by one and transition to point 154, wherein if it is not the end of the y coordinate search range, it will go back to point 152.
- Multicolored three-dimensional objects moving in a three-dimensional space will provide an instantaneous vector measurement of distances x, y, and z, as well as rotational values related to the motions of the object in six degrees of freedom, as follows: a) Multicolored objects moving in a three-dimensional space, when detected by a camera, will register a unique signature of different color areas in each frame, wherein the areas under each of the colors in a group of predefined colors represent unique instantaneous angles of rotation in three-dimensional space, and the total area of the different colors provides an instantaneous magnitude of distance in the z direction. b) The relative values of the three rotational positions of an object in motion are obtained by setting the area of each of the related colors, and the color numbers, as an address to a memory wherein the data for the three angular positions has already been registered during calibration.
- Measurements of the instantaneous angular positions and the instantaneous location in the z direction of any multicolored object are the result of comparing instantaneous measurements to the empirical measurements performed during calibration.
- the surface area of each color is measured by counting the number of pixels of that particular color, in a set of colors detected in a frame, during real-time motion detection.
- Calibration of the values of motion in the z direction and of the three instantaneous angular motion values is the result of empirical measurement of the area under each color, recording known vector motion values in a memory addressed by each color number and the detected area of each relevant color.
- the Rotational Motion Detector Memory (41) receives each group's individual total number of pixels of each color of an object (43, 45, 46), along with the associated group number, from the Object Identification Processor. It sets this information, along with the spectrum identification number, as an address to the Rotational Vector Motion Measurement Memory and receives the three-dimensional rotational values (47), as well as the motion in the Z direction.
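- A software stand-in for this calibration lookup is sketched below (illustrative only; the patent uses a memory addressed by the measured areas, and the coarse quantization step here is a hypothetical detail standing in for the address resolution):

```python
# Hypothetical sketch of the Rotational Vector Motion Measurement Memory:
# color areas recorded during calibration address a table of known poses.
calibration = {}                        # (group, quantized areas) -> pose

def quantize(areas, step=32):
    """Coarse-quantize pixel counts so nearby areas map to one address."""
    return tuple(a // step for a in areas)

def record_pose(group, areas, pose):
    """Calibration: store (rot_x, rot_y, rot_z, z_dist) for these color areas."""
    calibration[(group, quantize(areas))] = pose

def look_up_pose(group, areas):
    """Tracking: instantaneous color areas address the recorded pose."""
    return calibration.get((group, quantize(areas)))

record_pose(1, (320, 210, 95), (30.0, 0.0, 45.0, 2.5))
print(look_up_pose(1, (325, 205, 90)))  # -> (30.0, 0.0, 45.0, 2.5)
```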
- Figures 4, 4A, and 5 show the motions of a multicolored cube and a half globe moving in a three-dimensional space.
- the motion change detector processor receives the rotational x, y, and z vectors and the Z coordinate values from the Vector Measurement RAM. It also calculates the associated elapsed time from this frame to the previous frame for each object. It calculates the velocities and accelerations of each object from this measurement and the previous frame's measurement. Elapsed time is calculated as follows:
- Elapsed time between the detections in two frames is from the midpoint in time of this frame's detection to the midpoint in time of the previous frame's detection.
- Midpoint time is the time difference between the first row and the last row in a frame in which the detection took place, divided by two.
- the vector velocities are derived from the changes in the vector measurements across two frames, divided by the elapsed time. This information is passed to the Motion Track FIFO.
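- The timing rule can be checked numerically with a short sketch (all values hypothetical; times are in seconds):

```python
# Hypothetical sketch of the elapsed-time and velocity calculation.
def detection_midpoint(t_first_row, t_last_row):
    """Midpoint time: halfway between the first and last detected row."""
    return t_first_row + (t_last_row - t_first_row) / 2

def velocity(prev_pos, prev_t, cur_pos, cur_t):
    dt = cur_t - prev_t                  # elapsed time between frame midpoints
    return tuple((c - p) / dt for c, p in zip(cur_pos, prev_pos))

t0 = detection_midpoint(0.0010, 0.0030)  # frame k
t1 = detection_midpoint(0.0177, 0.0197)  # frame k+1, ~16.7 ms later (60 fps)
print(velocity((100, 50, 2.0), t0, (103, 50, 1.9), t1))
# -> approximately (180, 0, -6) units per second in x, y, z
```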
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011526019A JP2012502275A (en) | 2008-09-08 | 2008-09-08 | Digital video filters and image processing |
GB1021723.0A GB2475432B (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
EP08815721.9A EP2321819A4 (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
PCT/US2008/010484 WO2010027348A1 (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
KR1020117001102A KR20110053417A (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
CA2725377A CA2725377A1 (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
CN2008801300256A CN102077268A (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
IL211130A IL211130A0 (en) | 2008-09-08 | 2011-02-09 | Digital video filter and image processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2008/010484 WO2010027348A1 (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010027348A1 true WO2010027348A1 (en) | 2010-03-11 |
Family
ID=41797344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2008/010484 WO2010027348A1 (en) | 2008-09-08 | 2008-09-08 | Digital video filter and image processing |
Country Status (8)
Country | Link |
---|---|
EP (1) | EP2321819A4 (en) |
JP (1) | JP2012502275A (en) |
KR (1) | KR20110053417A (en) |
CN (1) | CN102077268A (en) |
CA (1) | CA2725377A1 (en) |
GB (1) | GB2475432B (en) |
IL (1) | IL211130A0 (en) |
WO (1) | WO2010027348A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016178643A1 (en) * | 2015-05-06 | 2016-11-10 | Erlab Teknoloji Anonim Sirketi | Method for analysis of nucleotide sequence data by joint use of multiple calculation units at different locations |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8872969B1 (en) | 2013-09-03 | 2014-10-28 | Nvidia Corporation | Dynamic relative adjustment of a color parameter of at least a portion of a video frame/image and/or a color parameter of at least a portion of a subtitle associated therewith prior to rendering thereof on a display unit |
CN104537974B (en) | 2015-01-04 | 2017-04-05 | 京东方科技集团股份有限公司 | Data acquisition submodule and method, data processing unit, system and display device |
WO2019032622A1 (en) * | 2017-08-07 | 2019-02-14 | The Jackson Laboratory | Long-term and continuous animal behavioral monitoring |
JP2020004247A (en) * | 2018-06-29 | 2020-01-09 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
CN111384963B (en) * | 2018-12-28 | 2022-07-12 | 上海寒武纪信息科技有限公司 | Data compression/decompression device and data decompression method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5216501A (en) * | 1989-02-13 | 1993-06-01 | Matsushita Electric Industrial Co., Ltd. | Apparatus for detecting moving and unmoving regions in a moving image using a calculator |
US6177922B1 (en) * | 1997-04-15 | 2001-01-23 | Genesis Microchip, Inc. | Multi-scan video timing generator for format conversion |
US6831653B2 (en) * | 2001-07-31 | 2004-12-14 | Sun Microsystems, Inc. | Graphics pixel packing for improved fill rate performance |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69816876T2 (en) * | 1998-09-24 | 2004-04-22 | Qinetiq Ltd. | IMPROVEMENTS REGARDING PATTERN RECOGNITION |
MXPA03001804A (en) * | 2000-08-31 | 2004-05-21 | Rytec Corp | Sensor and imaging system. |
US20030098869A1 (en) * | 2001-11-09 | 2003-05-29 | Arnold Glenn Christopher | Real time interactive video system |
US7356190B2 (en) * | 2002-07-02 | 2008-04-08 | Canon Kabushiki Kaisha | Image area extraction method, image reconstruction method using the extraction result and apparatus thereof |
US20040130546A1 (en) * | 2003-01-06 | 2004-07-08 | Porikli Fatih M. | Region growing with adaptive thresholds and distance function parameters |
US7956889B2 (en) * | 2003-06-04 | 2011-06-07 | Model Software Corporation | Video surveillance system |
-
2008
- 2008-09-08 WO PCT/US2008/010484 patent/WO2010027348A1/en active Application Filing
- 2008-09-08 GB GB1021723.0A patent/GB2475432B/en not_active Expired - Fee Related
- 2008-09-08 CN CN2008801300256A patent/CN102077268A/en active Pending
- 2008-09-08 JP JP2011526019A patent/JP2012502275A/en active Pending
- 2008-09-08 EP EP08815721.9A patent/EP2321819A4/en not_active Withdrawn
- 2008-09-08 KR KR1020117001102A patent/KR20110053417A/en not_active Application Discontinuation
- 2008-09-08 CA CA2725377A patent/CA2725377A1/en not_active Abandoned
-
2011
- 2011-02-09 IL IL211130A patent/IL211130A0/en unknown
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5216501A (en) * | 1989-02-13 | 1993-06-01 | Matsushita Electric Industrial Co., Ltd. | Apparatus for detecting moving and unmoving regions in a moving image using a calculator |
US6177922B1 (en) * | 1997-04-15 | 2001-01-23 | Genesis Microchip, Inc. | Multi-scan video timing generator for format conversion |
US6831653B2 (en) * | 2001-07-31 | 2004-12-14 | Sun Microsystems, Inc. | Graphics pixel packing for improved fill rate performance |
Non-Patent Citations (1)
Title |
---|
See also references of EP2321819A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016178643A1 (en) * | 2015-05-06 | 2016-11-10 | Erlab Teknoloji Anonim Sirketi | Method for analysis of nucleotide sequence data by joint use of multiple calculation units at different locations |
Also Published As
Publication number | Publication date |
---|---|
CN102077268A (en) | 2011-05-25 |
EP2321819A4 (en) | 2014-03-12 |
GB201021723D0 (en) | 2011-02-02 |
GB2475432B (en) | 2013-01-23 |
IL211130A0 (en) | 2011-04-28 |
KR20110053417A (en) | 2011-05-23 |
JP2012502275A (en) | 2012-01-26 |
CA2725377A1 (en) | 2010-03-11 |
GB2475432A (en) | 2011-05-18 |
EP2321819A1 (en) | 2011-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7925051B2 (en) | Method for capturing images comprising a measurement of local motions | |
US7529404B2 (en) | Digital video filter and image processing | |
JP6493163B2 (en) | Density search method and image processing apparatus | |
CN106504182B (en) | A kind of extraction of straight line system based on FPGA | |
EP2321819A1 (en) | Digital video filter and image processing | |
Ishii et al. | Development of high-speed and real-time vision platform, H 3 Vision | |
CA3206206A1 (en) | Device and method for correspondence analysis in images | |
CN111079669A (en) | Image processing method, device and storage medium | |
CN110637461A (en) | Densified optical flow processing in computer vision systems | |
CN1130077C (en) | Motion compensation device and method matched by gradient mode | |
CN101572770B (en) | Method for testing motion available for real-time monitoring and device thereof | |
Cambuim et al. | Hardware module for low-resource and real-time stereo vision engine using semi-global matching approach | |
CN116486250A (en) | Multi-path image acquisition and processing method and system based on embedded type | |
CN110651475B (en) | Hierarchical data organization for compact optical streaming | |
JP2001338280A (en) | Three-dimensional space information input device | |
CN117870659A (en) | Visual inertial integrated navigation algorithm based on dotted line characteristics | |
RU2767281C1 (en) | Method for intelligent processing of array of non-uniform images | |
Nover et al. | ESPReSSo: efficient slanted PatchMatch for real-time spacetime stereo | |
JP2001167283A (en) | Face motion analyzing device and storage medium with stored program for analyzing face motion | |
KR20040107962A (en) | System for detecting moving objects and method thereof | |
CN115190303A (en) | Cloud desktop image processing method and system and related equipment | |
Yang et al. | A general line tracking algorithm based on computer vision | |
CN115115605B (en) | Method and system for realizing circle detection based on Hough transformation of ZYNQ | |
CN118379696B (en) | Ship target detection method and device and readable storage medium | |
JP2709301B2 (en) | Striation light extraction circuit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880130025.6 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08815721 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2011526019 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2725377 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 4848/KOLNP/2010 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008815721 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 1021723 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20080908 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1021723.0 Country of ref document: GB |
|
ENP | Entry into the national phase |
Ref document number: 20117001102 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 211130 Country of ref document: IL |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010152702 Country of ref document: RU |