US20100271554A1 - Method And Apparatus For Motion Estimation In Video Image Data - Google Patents
Method And Apparatus For Motion Estimation In Video Image Data
- Publication number
- US20100271554A1 (application US12/677,507)
- Authority
- US
- United States
- Prior art keywords
- motion
- field
- vector
- line
- motion vectors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0127—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
- H04N7/0132—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter the field or frame frequency of the incoming video signal being multiplied by a positive integer, e.g. for flicker reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/014—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Definitions
- the present invention is related to a method and an apparatus for motion estimation in video image data and especially to motion estimation for field rate up-conversion of video image data.
- the present invention is further related to a TV-set, a computer program product and a data carrier comprising a computer program.
- the present invention relates to a motion estimation and motion compensation device and more particularly to a motion estimation and motion compensation device that estimates motion vectors and performs motion compensated predictions of an interlaced sequence of chrominance sub-sampled video frames.
- FIG. 1 wherein the motion trajectory of the moving objects (white squares) in the original image fields (i.e. transmitted and received image fields) is supposed to be straight-lined. If the missing fields result from interpolation by means of the above mentioned standard FRU methods (i.e. without motion estimation and compensation), the motion of the moving object in the interpolated fields (dark grey squares) is not at a position as expected by the observer (dotted squares). Such artefacts are visible and induce a blurring effect especially of fast moving objects. These blurring effects typically reduce the quality of the displayed images significantly.
- This MEMC provides the detecting of a moving part or object within the received image fields and then the interpolation of the missing fields according to the estimated motion by incorporating the missing object or part in an estimated field.
- FIG. 2 shows schematically the change of the position of a moving object between two successive image fields.
- the moving objects will have changed their position, e.g. object MO, which is in position A in the previous field T, is then in position B in the current field T+1.
- This motion of an object in successive image fields can be represented by a so-called motion vector.
- the motion vector AB represents the motion of the object MO from position A in the previous field T to position B in the current field T+1.
- This motion vector AB typically has a horizontal and a vertical vector component. Starting from point A in the previous field T and applying this motion vector AB to the object MO the object MO is then translated in position B in the current field T+1.
- the missing position I of the object MO in the missing field T+½ that has to be interpolated must be calculated by the interpolation of the previous field T and the current field T+1, taking account of the respective positions A, B of the moving object MO. If the object MO does not change its position between the previous field and the current field, e.g. if A and B are the same positions, position I in the missing field is obtained by the translation of A with a motion vector |AB|/2.
- a first approach employs a so-called block-based MEMC.
- This approach assumes that the dimension of an object in the image is always larger than that of a single pixel. Therefore, the image field is divided into several image blocks. For MEMC only a single motion vector is calculated for each one of these blocks which leads to a significant reduction of used motion vectors. This approach is for example described in EP 874 523 A1.
- a second approach employs a so-called line-based MEMC.
- the algorithm is based on a reduced set of video input data of a single line of a field or part of this line.
- for this line-based MEMC there is so far no method known in the art for an effective reduction of motion vectors.
- the present invention is, therefore, based on the object to provide a possibility to more efficiently use motion vectors within a motion estimation process.
- the present invention is further based on the object to reduce the memory requirement and/or the computational requirements in motion estimation implementations.
- a method comprising the features of claim 1 and/or an apparatus comprising the features of claim 15 and/or a TV-set comprising the features of claim 23 and/or a computer program product comprising the features of claim 24 and/or a data carrier comprising the features of claim 25 is/are provided.
- motion vectors are calculated which are suitable for being used in a subsequent motion compensation process.
- the calculation of the motion vector might be performed for every pixel of a frame or a field, or alternatively for only some of these pixels, e.g. several selected pixels within a line or part of a line. It is also possible that this motion vector is assigned to a predefined block or section of a frame or a field.
- One basic idea of the present invention is the provision of a motion vector histogram which contains information about which of the calculated motion vectors is used most often and which is only rarely used in a current frame or field of a picture.
- This information stored in the motion vector histogram enables a significant and effective motion estimation process and thus also an efficient motion compensation process since only part of the calculated motion vectors is used. This consequently reduces the overall memory requirement and computational effort significantly.
- Another advantage of the present invention is the fact that the whole motion estimation and motion compensation process becomes faster, which is, especially in modern video applications, one of the key issues for establishing a highly precise picture on the TV-panel.
- the present invention describes also a method for motion estimation and motion compensation which operates only in one direction and therefore performs the motion estimation and motion compensation operations using at least one single line buffer memory, the so-called line memory.
- This offers the possibility to reduce the chip embedded memory to one single line memory for the previous and one single line memory for the current field or frame. This advantageously enables significant silicon area reducing and cost saving implementations.
- the MEMC is limited to motion in the horizontal direction only, since most of the motion in natural scenes has this direction.
- in video signal processing, line memories are often used in other applications which already have access to the previous and current motion portrayal, e.g. so-called de-interlacer applications or temporal noise reduction applications.
- these already existing line memories of the video application are now additionally used also for MEMC operations.
- this solution offers the possibility to accomplish the MEMC operations by adding a minimal or in the optimal case no additional memory to the video processing system.
- the method is used for line-based motion estimation.
- the motion vector having the highest rank and/or the most often used motion vectors is/are selected.
- the method further comprises a motion compensation wherein the selected motion vector is used for motion compensation to interpolate a picture.
- the motion vectors having the highest rank and/or the most often used motion vector are stored in a memory.
- the step of calculating a histogram is done for the whole frame or field.
- the step of calculating a histogram is done for parts of the frame or field by splitting the frame or field into horizontal stripes and detecting the most often used vector for each stripe.
- the step of calculating a histogram is done to detect news ticker information, sub-titles or any other written information within a frame or field.
- a damping value which depends on the selected motion vector is used to adapt motion vectors with similar counter values.
- the histogram information of the rank of a motion vector is used to detect reliable and unreliable motion vectors.
- the motion vector contains only motion data for motion of an object in one direction and especially in the horizontal direction.
- image data of the previous frame is derived from a first line memory and image data of the current frame is derived from a second line memory.
- first line memory and/or the second line memory is/are further used in a de-interlacer application and/or a temporal noise reduction application.
- a histogram generator is provided to establish a motion vector histogram for motion vectors to derive most and less used motion vectors in a current frame or field.
- the apparatus further comprises a histogram generator to provide a motion vector histogram for motion vectors to derive most and less used motion vectors in a current frame or field.
- the histogram generator further comprises a counting device for counting the occurrences of identical motion vectors by incrementing or decrementing the counter by a given value; and a ranking device which is designed to compare the different counter values assigned to the different motion vectors, to rank the different motion vectors on the basis of their occurrence in a current frame or field, and to select the most often used motion vector for the motion compensation.
- a motion vector histogram memory is provided to store the most often used motion vectors.
- a first line memory for storing image data of the previous frame and a second line memory for storing image data of the current frame are provided.
- first line memory and/or the second line memory are configured to be further used in a de-interlacer device and/or a temporal noise reduction device.
- the apparatus is an integrated circuit and/or is implemented within a microcontroller or a microprocessor.
- FIG. 1 shows the result of a standard (i.e. non motion compensated) FRU method
- FIG. 2 shows the change of position of a moving object between two consecutive received image fields
- FIG. 3 shows the motion estimation principle for the line-based motion estimation by means of a current frame and the corresponding previous frame
- FIG. 4 shows a block diagram of a first embodiment of a line-based MEMC system according to the present invention
- FIG. 5 shows an example to illustrate the matching process of the motion estimation
- FIG. 6 shows the basic principle for the provision of a motion vector histogram
- FIG. 7 shows a block diagram illustrating an embodiment of the line-based motion estimation according to the present invention.
- FIG. 8 shows a block diagram illustrating an embodiment of the line-based motion compensation according to the present invention using adaptive artefact concealments
- FIG. 9 shows a block diagram of a second embodiment of a line-based MEMC system according to the present invention using the line memories assigned to the de-interlacer device also for the motion estimation device.
- the MEMC method consists mainly of two sections, the motion estimation and the motion compensation method.
- the motion estimation performs the measurement of the motion and derives the velocity of the displayed regions in pixels per picture (i.e. field or frame). Also the direction of the motion will be indicated by a positive or negative sign. This measured motion information is described in the form of a motion vector.
- the motion vector is used for the motion compensation to interpolate the picture at the temporal accurate position and to avoid so-called judder effects and/or so-called motion blurring effects.
- FIG. 3 shows the motion estimation principle for the line-based motion estimation by means of a current picture (field or frame) 10 ( n ) and the corresponding previous picture 11 ( n ⁇ 1).
- the motion vector 12 , 13 will be split by its length into two parts, where the first vector part 12 points into the previous picture 11 and the second vector part 13 points into the current picture 10 .
- pixels 15 from both temporal pictures 10, 11 are taken into account for the compensation.
- in line-based MEMC only the pixels 15 within the same line 16 are used at the same time and the MEMC is performed for a single line 16 of a field or frame only.
- the pixels 15 of the current picture 10 are compared with the corresponding pixels 15 of the previous picture 11 to estimate and compensate the corresponding pixels 15 of the missing picture 14 .
- FIG. 4 shows a block diagram of a line-based MEMC system according to the present invention.
- the MEMC system is denoted by reference number 20 .
- the MEMC system 20 comprises an input terminal 21 , a bus 22 , two line memories 23 , 24 , a motion estimation device 25 , a motion compensation device 26 and an output terminal 27 .
- the bus 22 is an external bus 22 and especially an external memory bus 22 .
- the bus 22 is an internal bus 22 .
- the bus 22 is connected to an external memory 28 device such as a SDRAM, a DDR-RAM, etc.
- Image data to be displayed in a panel 29 such as a plasma- or LCD-panel or a CRT-screen is stored in this external memory 28 .
- this image data X 1 , X 1 ′ is transferred to both line memories 23 , 24 .
- the first line memory 23 is used for buffering image data X 1 of the previous picture and the other line memory 24 is used for storing the image data X 1 ′ of the current picture.
- a line memory 23, 24 as used in the present patent application indicates an embedded memory of the size of one video line of a frame or a field, or at least less, of the incoming or currently processed video signal stream.
- a field denotes a video image or picture which comprises either odd or even lines.
- a frame denotes a video image comprising the complete video information for one picture, i.e. of a field for the odd lines and the corresponding field for the even lines.
- a line denotes a full horizontal row within a field of one video picture or at least a part of this row.
- Both of the line memories 23 , 24 are coupled—on their output sides—to the motion estimation device 25 and to the motion compensation device 26 .
- This enables the image data X 1 , X 1 ′ stored in the line memories 23 , 24 to be transferred to the motion estimation device 25 and to the motion compensation device 26 , respectively.
- the corresponding data signals to the motion estimation device 25 are denoted by X2, X2′ and the corresponding data signals to the motion compensation device 26 are denoted by X3, X3′.
- the motion estimation device 25 generates a motion vector signal X 4 out of the image data X 2 , X 2 ′ stored in the line memories 23 , 24 by employing a matching process.
- This vector signal X 4 is transferred to the motion compensation device 26 .
- the motion compensation device 26 performs a motion compensation using the image data X 3 , X 3 ′ stored in the line memories 23 , 24 and applying the vector data X 4 to this image data X 3 , X 3 ′.
- the motion compensation device 26 provides a video signal X5 which comprises information for a motion compensated picture.
- This video signal X 5 is transferred via the output terminal 27 to a display 29 , such as a LCD-panel 29 or the like.
- a matching process is employed to select a corresponding series of pixels 32 which fits best to a given amount of pixels 30 .
- a given amount of pixels 30 of a line of a current frame around the centre pixel 31 for which the motion shall be determined is taken from a line memory 24 of the current frame 32 .
- this given amount of pixels 30 is referred to as a series of pixels 30.
- a series of pixels 30 comprises 9 single pixels 33 . It is self-understood that a series can also comprise a greater or a smaller amount of pixels 33 .
- Luminance is a photometric measure of the density of luminous intensity in a given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle. Thus, luminance is the photometric measure of the brightness in a frame of a motion picture. If the luminance is high, the picture is bright and if it is low the picture is dark. Thus, luminance is the black and white part of the picture.
- This luminance profile is used to find out that series of nine pixels 34 out of the previous frame 35 which fits best with the series of nine pixels 30 of the current frame 32 .
- the luminance profile of the series of nine pixels 30 of the current frame 32 is compared with the luminance profiles of several corresponding series of nine pixels 34 of the previous frame 35.
- the series of nine pixels 30 will be shifted over the search range in the horizontal direction 36 . It is assumed that that series of nine pixels 34 of the previous frame 35 which shows the best luminance profile matching (with the series of nine pixels 30 of the current frame 32 ) is the correct series of pixels. These series of pixels 30 , 34 are then used for the computation of the motion vector.
- a typical value for the search range is, e.g., 64 pixels (+31 . . . −32). It is also possible to use fewer than 64 pixels, but then the quality of the comparison result goes down. On the other hand, it is also possible to use more than 64 pixels; then the quality of the selection result goes up, but this needs more computational effort. Therefore, typically a trade-off between best quality of the selection result and minimum computational effort is employed.
- SAD (sum of absolute differences)
- the matching process can then be performed more efficiently if a set 38 of pre-selected motion vectors 37 —the so-called motion vector samples 37 —are checked for a matching of the luminance profile (see FIG. 5 ).
- one selected motion vector 37 can be taken from the neighbouring pixel.
- a second selected motion vector can be taken from the previous line, if the already estimated motion vectors are stored in a vector memory specially designed for the different motion vector samples.
- the zero-vector, which indicates no motion of the object, is typically one of the most used motion vector samples. This zero-vector is used in order to more efficiently detect regions within a picture showing no motion. In principle, the amount of pre-selected motion vectors 37 which will be taken into account depends strongly on what kind of motion vector quality is desired.
- a variation of certain pre-selected motion vectors is required for test operation purposes. That means that for pre-selected motion vector samples a certain amount of motion will be added or subtracted. This can be done by applying a variance with different amounts of motion speed to these motion vectors.
- the tested implementation alternates between odd pixels and even pixels, checking an update of +/−1 pixel and +/−4 pixels on the previously determined motion vector.
- the selection of the variance is adjustable and variable as required or as the need arises and depends e.g. on the resolution of the incoming video signal.
- the selection of the tested motion vectors is treated differently for the first line of a frame or field.
- the selected motion vectors which normally test the motion vectors of the line above are loaded with vector values, which e.g. vary according to a triangle function from pixel to pixel.
- the triangle function oscillates between an adjustable minimum value and an adjustable maximum value.
- other regular oscillating functions e.g. a saw tooth function, a sinusoidal function, and the like may be employed for the determination of the motion vector of the first line.
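- As an illustration of the first-line initialisation described above, the following sketch generates candidate vectors that oscillate between an adjustable minimum and maximum value according to a triangle function. It is only a minimal example; the function name, the period and the default limits are assumptions, not taken from the patent.

```python
def triangle_candidates(num_pixels, v_min=-8, v_max=8):
    """Sketch: candidate vectors for the first line of a frame or field.

    Because no line above exists yet, the candidates oscillate between an
    adjustable minimum and an adjustable maximum value following a triangle
    function, varying from pixel to pixel. Period and limits are assumptions.
    """
    span = v_max - v_min
    period = 2 * span
    candidates = []
    for x in range(num_pixels):
        phase = x % period
        value = v_min + (phase if phase <= span else period - phase)
        candidates.append(value)
    return candidates

# e.g. triangle_candidates(10, -2, 2) -> [-2, -1, 0, 1, 2, 1, 0, -1, -2, -1]
```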
- the matching process assigns a failure value to each tested motion vector.
- this value may be also a quality value. It might also be possible to evaluate as well a failure value and a quality value for the matching process.
- the sum of absolute differences (SAD) is used as the failure value, or at least to derive the failure value.
- ideally, a failure value of zero is desired. However, typically the failure value is different from zero. Therefore, the motion vector corresponding with the lowest failure value is then selected as the most probable motion vector representing the motion of an object in the local scene.
- a damping value is used which depends on the vector attenuation of the different motion vectors. This makes it possible to control motion vectors with equal failure values and/or to give the motion vector selection process a certain preferred direction.
- the different motion vectors are advantageously stored in a vector memory. These motion vectors can be then—if required—fetched from the vector memory for further processing and/or for the motion estimation of the next pixels.
- the motion estimation process forms a recursive process. Therefore, the size of this vector memory mainly depends on the desired quality level of the matching process.
- the tested implementation comprises only one line of a vector memory. In this vector memory every second motion vector will be stored alternately, so that access to the motion vectors from the measured line above is possible.
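- The interplay of the pre-selected motion vector samples, the failure value and the damping value described above can be sketched as follows. This is a simplified illustration with assumed candidate sets and penalty values, not the patented implementation; the function names are also assumptions.

```python
def sad(prev_line, curr_line, centre, v, half_window=4):
    """Failure value: sum of absolute luminance differences for candidate v."""
    n = len(curr_line)
    total = 0
    for k in range(-half_window, half_window + 1):
        xc = min(max(centre + k, 0), n - 1)
        xp = min(max(centre + v + k, 0), n - 1)
        total += abs(prev_line[xp] - curr_line[xc])
    return total

def select_vector(prev_line, curr_line, centre, neighbour_v, above_v):
    """Sketch of a candidate-based vector selection.

    Candidates: the vector of the neighbouring pixel, the vector from the line
    above (taken from the vector memory), the zero vector, and test updates of
    +/-1 and +/-4 pixels on the previously determined vector. A damping penalty
    (all penalty values are assumptions) biases the choice when failure values
    are similar.
    """
    candidates = [
        (neighbour_v, 0),       # spatial candidate from the neighbouring pixel
        (above_v, 0),           # candidate from the line above (vector memory)
        (0, 0),                 # zero vector for static regions
        (neighbour_v + 1, 4),   # test updates with a small damping penalty
        (neighbour_v - 1, 4),
        (neighbour_v + 4, 8),
        (neighbour_v - 4, 8),
    ]
    best_v, best_cost = 0, float("inf")
    for v, penalty in candidates:
        cost = sad(prev_line, curr_line, centre, v) + penalty
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v
```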
- a motion vector histogram is calculated in order to create a highly reliable and homogeneous field of motion vectors. This vector histogram allows a vector majority ranking to derive most and less used motion vectors in an actual scene.
- FIG. 6 shows a preferred embodiment to illustrate the basic principle for the provision of a motion vector histogram accordingly to the present invention.
- FIG. 6 shows a vector histogram generator 40 to provide a motion vector histogram.
- the vector histogram generator 40 comprises a switching device 41 , which is controlled by an +1-incrementing device 42 .
- the switching device 41 is controlled on the one hand by a motion vector 43 information and on the other hand by the incrementing device 42, which shifts the switching device 41 to the next input terminal of a counting device 45 whenever the next identical motion vector 43 occurs.
- the counting device 45 which comprises different counter cells 44 counts the occurrence of each motion vector and increments the counter by +1 for each occurrence of the motion vector.
- a ranking device 46 which e.g. comprises a simple comparing means—is coupled to the output terminals of the different counter cells 44 of the counting device 45 .
- This ranking device 46 selects the most often used motion vector and applies this motion vector for the estimation determination.
- the most often used motion vector may be then stored in a motion vector histogram memory 47 .
- the calculation of a motion vector histogram can be done either for the whole frame or field or only for parts of the frame or field. It is very efficient to split the picture into horizontal stripes and return a most often used vector for each stripe. In a preferred embodiment, news ticker information within a picture can be detected very reliably in that way.
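- The vector histogram and majority ranking described above can be sketched as follows. The stripe handling, the function names and the data layout are assumptions made for illustration only.

```python
from collections import Counter

def vector_histogram(vector_field, num_stripes=1):
    """Sketch of the motion vector histogram and majority ranking.

    vector_field: list of lines, each line a list of per-pixel motion vectors.
    The field is split into num_stripes horizontal stripes; for each stripe the
    occurrences of identical vectors are counted and the vectors are ranked by
    their count, so the most often used vector per stripe (e.g. the speed of a
    news ticker) can be selected.
    """
    lines_per_stripe = max(1, len(vector_field) // num_stripes)
    ranking_per_stripe = []
    for s in range(0, len(vector_field), lines_per_stripe):
        counter = Counter()
        for line in vector_field[s:s + lines_per_stripe]:
            counter.update(line)                       # +1 per occurrence
        ranking_per_stripe.append(counter.most_common())  # highest rank first
    return ranking_per_stripe

# Example: a stripe dominated by vector +3 (a ticker crawling across the line).
field = [[3, 3, 3, 0, 3], [3, 3, 0, 3, 3]]
print(vector_histogram(field))  # [[(3, 8), (0, 2)]]
```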
- FIG. 7 shows a block diagram illustrating an embodiment of the line-based motion estimation according to the present invention as described above and as implemented in a motion estimation device 25 as shown in FIG. 4 .
- the motion estimation device 25 comprises a matching device 80, a cost/quality function device 81 and a vector selector device 82, which are arranged in series connection between the input side 83 of the motion estimation device 25, where the image data signals X1, X1′ stored in both line memories 23, 24 are provided, and the output side 84 of the motion estimation device 25, where the motion vector signal X4 is present.
- a matching process and a vector selection as described with regard to FIG. 5 is implemented.
- the motion estimation device 25 further comprises a vector quality device 85 which is connected on the one hand to the input side 83 and on the other hand to the output side 84 .
- the vector quality device 85 generates a quality signal X 6 comprising an information of the vector quality out of the image data signals X 1 , X 1 ′ and the motion vector signal X 4 .
- the motion estimation device 25 further comprises a vector histogram device 86 and a vector majority device 87 which are arranged in series connection in a feedback path between the output side 84 and the matching device 80 .
- a vector histogram is generated to provide a ranking of most and less used vectors in the actual scene as shown and described with regard to FIG. 6 .
- the elements 86 , 87 correspond to the vector histogram generator 40 of FIG. 6 .
- the motion estimation device 25 may further comprise a further line memory 88 to store the motion vector data X 4 and/or the data X 6 for the vector quality.
- the motion estimation device 25 further comprises a vector sample device 89 .
- This vector sample device 89 is also arranged in the feedback path and is connected at its input side with the line memory 88 , the vector majority device 87 and advantageously with a further device 90 .
- This further device 90 performs a variation of the motion vector samples by using a special signal having a certain magnitude, e.g. a sinusoidal signal, a saw tooth signal or the like. This certain signal is then used for a testing and/or matching process and/or an up-dating process of the first line of a frame or field. However, it might also be possible to randomly up-date different lines of the frame or field.
- on its output side, the vector sample device 89 is connected to the matching device 80.
- the motion estimation device 25 further comprises a vertical motion estimation device 91 .
- for vertical motions the above described one-dimensional motion estimation algorithm is not able to fully compensate motion in the vertical direction. However, the occurrence of vertical motions can be used to reduce the compensation in some regions of the picture by splitting the picture into different regions to derive vertical motion for each region. In this case the luminance values of the lines in the different regions of the same picture will be summed up and stored individually for each line of this picture. This results in an accumulated vertical profile for different regions of the same picture. Then, the whole picture can be divided into smaller regions to derive a vertical motion for each of these regions. This vertical motion estimation process is performed in the vertical motion estimation device 91, which is connected to the input side 83 and which provides at its output side a sector based vertical motion index X7.
- the vertical MEMC as sketched above can be performed independently of the horizontal MEMC and also in combination with the horizontal MEMC, wherein the combination can be performed in dependence on a certain situation or the motions present, respectively. Further, such a methodology allows an implementation of vertical MEMC which does not need large amounts of additional memory capacity to analyze data of consecutive frames, as is the case in most prior-art methodologies.
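- The sector-based vertical motion estimation described above can be sketched roughly as follows; the matching criterion, the shift range and all names are assumptions, and the real device 91 may operate differently.

```python
def vertical_profile(picture, x_start, x_end):
    """Accumulated luminance per line for one vertical region of a picture."""
    return [sum(line[x_start:x_end]) for line in picture]

def vertical_motion(prev_pic, curr_pic, x_start, x_end, max_shift=8):
    """Sketch: vertical motion index for one region of the picture.

    The per-line luminance sums of the region form a vertical profile for the
    previous and the current picture; the vertical shift with the smallest
    mean absolute profile difference is taken as the vertical motion of the
    region (criterion and shift range assumed).
    """
    p_prev = vertical_profile(prev_pic, x_start, x_end)
    p_curr = vertical_profile(curr_pic, x_start, x_end)
    n = len(p_curr)
    best_shift, best_err = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        err, count = 0, 0
        for y in range(n):
            yp = y + shift
            if 0 <= yp < n:
                err += abs(p_prev[yp] - p_curr[y])
                count += 1
        if count and err / count < best_err:
            best_shift, best_err = shift, err / count
    return best_shift
```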
- the motion estimation device 25 further comprises a vector damping device 92 .
- a damping value as described above may be used to damp vector samples of the vector sample device 89 and to provide damped vector samples to the vector selector 82 .
- FIG. 8 shows a block diagram illustrating an embodiment of the line-based motion compensation according to the present invention using adaptive artefact concealments as described above.
- the motion compensation device 26 comprises a compensation device 100 which performs the temporal motion interpolation according to the motion vectors X 4 estimated by the motion estimation device 25 .
- the compensation device 100 comprises a Median Filter which uses as input data the luminance values of the vector compensated previous line, the vector compensated current line and the uncompensated previous line. Additionally, also the chrominance values can be compensated.
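- A minimal sketch of the median-filter based compensation described above is given below; the indexing and the sign convention for splitting the motion vector are assumptions made for illustration.

```python
def median3(a, b, c):
    """Median of three luminance values."""
    return sorted((a, b, c))[1]

def compensate_pixel(prev_line, curr_line, x, v):
    """Sketch of a median-filter based compensation for one output pixel.

    Inputs to the median (indexing and sign convention assumed): the vector
    compensated previous line, the vector compensated current line and the
    uncompensated previous line.
    """
    n = len(curr_line)
    half = int(round(v / 2.0))
    prev_comp = prev_line[min(max(x - half, 0), n - 1)]
    curr_comp = curr_line[min(max(x + half, 0), n - 1)]
    prev_uncomp = prev_line[min(max(x, 0), n - 1)]
    return median3(prev_comp, curr_comp, prev_uncomp)
```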
- a replacement vector indicated as reliable vector will be searched in the local area of the vector memory from the line above. If no reliable vector can be found the adaptive blurring typically tries to cover this artefact.
- the motion compensation device 26 further comprises a vertical motion control device 101 which provides a control signal X 8 to the compensation device 100 in order to incorporate also information of a vertical motion to the compensation device 100 .
- the motion compensation device 26 further comprises a bad vector modification device 102. Based on the information X4, X6 provided by the motion estimation device 25, the bad vector modification device 102 modifies bad vectors. This information X9 about modified bad vectors is then used—together with the control signal X8—to perform the motion compensation within the compensation device 100. The compensation device 100 then generates at its output side a motion compensated image data signal X10.
- the motion compensation device 26 further comprises an adaptive blurring device 103 . Based on the motion compensated image data signal X 10 and a blurring control signal generated by the bad vector modification device 102 this adaptive blurring device 103 performs an adaptive blurring.
- the adaptive blurring device 103 generates an adaptive blurred image data signal X 5 ′ which might correspond to the image signal X 5 of FIG. 4 .
- FIG. 9 shows a block diagram of a second embodiment of a line-based MEMC system according to the present invention using the line memories assigned to the de-interlacer device also for the motion estimation device.
- a de-interlacer device 113 is arranged between the line memories 110 , 111 , 112 and the motion compensation device 26 .
- the de-interlacer device 113 is typically used to convert a field represented by a video data stream into a full frame which is then also represented by another video data stream.
- on-chip solutions for video processing which are memory-based already have existing internal line buffers 110-112—the so-called line memories 110-112—which carry video data from the previous and current field or frame.
- These line buffers 110 - 112 can be located e.g. within temporal noise reductions or de-interlacing units 113 which operate motion adaptive.
- these line buffers can be reused additionally for the motion estimation.
- a movie detector which indicates the current interpolated sequence of pull-down mode is used.
- a line buffer selector transfers the video signal data to the motion estimation device according to the previous and the current video input signal. This technique allows already existing memory resources to be used also for motion estimation, which also avoids additional bandwidth for the temporal up-conversion process. Therefore, the chip area for the motion estimation and the motion compensation can be reduced to a minimum.
- the de-interlacer device 113 uses three line memories 110 , 111 , 112 coupled on their input side to the memory bus 22 and providing at their output side line data.
- This line data provided by the line memories 110 , 111 , 112 is processed within the de-interlacer device and then provided to the motion compensation device 26 .
- these line memories 110 , 111 , 112 are additionally used also for the motion estimation device 25 .
- the system 20 additionally comprises a selector device 114 , where a movie sequence X 0 is provided to this selector device 114 .
- This movie sequence X 0 may be then stored in an external memory 28 via the memory bus 22 and can be read out from this external memory 28 through the line memories 110 , 111 , 112 .
- this data stored in the line memories 110 , 111 , 112 of the de-interlacer device 113 can be also used for MEMC.
- the data stored in the line memories 110, 111, 112 is then provided as well to the motion estimation device 25 and the motion compensation device 26.
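- A possible way of routing the de-interlacer line memories 110, 111, 112 to the motion estimation device is sketched below. The mapping shown is purely illustrative; the actual selector logic depends on the de-interlacer design and on the movie/pull-down detector of the concrete system.

```python
def select_lines(line_memories, field_phase):
    """Sketch of a line buffer selector reusing de-interlacer line memories.

    line_memories: the three existing buffers 110, 111, 112 as a list.
    field_phase: 0 or 1, e.g. derived from the movie/pull-down detector,
    indicating which buffers currently hold previous and current field data.
    The mapping below is an assumption for illustration only.
    """
    if field_phase == 0:
        previous, current = line_memories[0], line_memories[2]
    else:
        previous, current = line_memories[2], line_memories[0]
    return previous, current   # fed to the motion estimation device 25
```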
- the present invention is not necessarily based on so-called line-based MEMC systems, although in the above embodiments of the present invention reference is always made to line-based MEMC systems.
- the present invention is related generally to all implementations using motion estimation of video image data, i.e. especially so-called block-based motion estimation, line-based motion estimation and the like. It is self-understood that for those implementations which do not apply line-based motion estimation, typically other memory means than line memories may be employed.
Abstract
Description
- The present invention is related to a method and an apparatus for motion estimation in video image data and especially to motion estimation for field rate up-conversion of video image data. The present invention is further related to a TV-set, a computer program product and a data carrier comprising a computer program.
- The present invention relates to a motion estimation and motion compensation device and more particularly to a motion estimation and motion compensation device that estimates motion vectors and performs motion compensated predictions of an interlaced sequence of chrominance sub-sampled video frames.
- Hereinafter, the present invention and its underlying problem is described with regard to the processing of a video signal for line-based motion estimation and motion compensation within a video processing apparatus such as a microprocessor or microcontroller having line memory devices, whereas, it should be noted, that the present invention is not restricted to this application, but can also be used for other video processing apparatus.
- The market introduction of high-end TV-sets based on 100 Hz Cathode Ray Tubes (CRT) required the development of reliable Field Rate Up-conversion (FRU) techniques to remove artefacts within a picture such as large area flickers and line flickers. Standard FRU methods, which interpolate the missing image fields to be displayed on the CRT without performing an estimation and compensation of the motion of moving objects in successive image fields, are satisfactory in many applications, especially with regard to a better quality of the image and with regard to the reduction of the above-mentioned artefacts. However, many pictures contain moving objects, like persons, subtitles and the like, which cause so-called motion judders.
- This problem is better understood by referring to FIG. 1, wherein the motion trajectory of the moving objects (white squares) in the original image fields (i.e. transmitted and received image fields) is supposed to be straight-lined. If the missing fields result from interpolation by means of the above mentioned standard FRU methods (i.e. without motion estimation and compensation), the motion of the moving object in the interpolated fields (dark grey squares) is not at a position as expected by the observer (dotted squares). Such artefacts are visible and induce a blurring effect especially of fast moving objects. These blurring effects typically reduce the quality of the displayed images significantly.
- In order to avoid such blurring effects and to reduce artefacts, several methods for motion estimation and motion compensation—or shortly MEMC—have been proposed. This MEMC provides the detection of a moving part or object within the received image fields and then the interpolation of the missing fields according to the estimated motion by incorporating the missing object or part in an estimated field.
- FIG. 2 shows schematically the change of the position of a moving object between two successive image fields. Between two successive received image fields, the moving objects will have changed their position, e.g. object MO, which is in position A in the previous field T, is then in position B in the current field T+1. This means that a motion exists from the previous field T to the current field T+1. This motion of an object in successive image fields can be represented by a so-called motion vector. The motion vector AB represents the motion of the object MO from position A in the previous field T to position B in the current field T+1. This motion vector AB typically has a horizontal and a vertical vector component. Starting from point A in the previous field T and applying this motion vector AB to the object MO, the object MO is translated to position B in the current field T+1. The missing position I of the object MO in the missing field T+½ that has to be interpolated must be calculated by the interpolation of the previous field T and the current field T+1, taking account of the respective positions A, B of the moving object MO. If the object MO does not change its position between the previous field and the current field, e.g. if A and B are the same positions, position I in the missing field is obtained by the translation of A with a motion vector |AB|/2. In this manner the missing field T+½ is interpolated with the moving object in the right position, with the consequence that blurring effects are effectively avoided.
- MEMC implementations operating in the way sketched above are described for example by Gerard de Haan in EP 765 571 B1 and U.S. Pat. No. 6,034,734.
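- The half-vector interpolation described above can be illustrated with the following minimal sketch; the function name and the coordinate convention are assumptions made for illustration only.

```python
def interpolate_position(pos_a, motion_vector):
    """Sketch: position of an object in the missing field T+1/2.

    pos_a is the object position (x, y) in the previous field T and
    motion_vector is the full vector AB from field T to field T+1.
    The interpolated position I is obtained by translating A with AB/2.
    """
    ax, ay = pos_a
    vx, vy = motion_vector
    return (ax + vx / 2.0, ay + vy / 2.0)

# Example: an object moving 8 pixels to the right between two fields
# is expected 4 pixels to the right of A in the interpolated field.
print(interpolate_position((100, 50), (8, 0)))  # (104.0, 50.0)
```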
- Theoretically, for each pixel of a frame or field a corresponding motion vector has to be calculated. However, calculating motion vectors for the huge amount of pixels within a field or frame requires an enormous amount of calculations, with the consequence that the memory requirements increase significantly. Especially with modern Plasma- and LCD-TVs, which typically have an increased resolution, this goes along with a corresponding increase of the number of pixels and thus the number of motion vectors to be calculated. To reduce this enormous calculation and memory effort there is an increasing need to provide MEMC implementations which are able to use only a reduced set of motion vectors.
- To reduce the amount of motion vectors needed there exist several different approaches:
- A first approach employs a so-called block-based MEMC. This approach assumes that the dimension of an object in the image is always larger than that of a single pixel. Therefore, the image field is divided into several image blocks. For MEMC only a single motion vector is calculated for each one of these blocks which leads to a significant reduction of used motion vectors. This approach is for example described in EP 874 523 A1.
- A second approach employs a so-called line-based MEMC. In this approach the algorithm is based on a reduced set of video input data of a single line of a field or part of this line. However, for this line-based MEMC there is so far no method known in the art for an effective reduction of motion vectors.
- It is clear that for most video applications the different pixels within a field or a frame, or of a line of this field/frame, are not changing from one field/frame to the next one. Therefore, for these pixels typically a zero motion vector is applied, that is a motion vector having a zero magnitude and a zero angle. In many applications those zero motion vectors constitute by far the most used motion vectors within a field/frame. On the other hand, there is also a huge number of pixels having an identical motion vector, especially those pixels assigned to the same moving object in the picture.
- The present invention is, therefore, based on the object to provide a possibility to more efficiently use motion vectors within a motion estimation process. The present invention is further based on the object to reduce the memory requirement and/or the computational requirements in motion estimation implementations.
- In accordance with the present invention, a method comprising the features of claim 1 and/or an apparatus comprising the features of claim 15 and/or a TV-set comprising the features of claim 23 and/or a computer program product comprising the features of claim 24 and/or a data carrier comprising the features of claim 25 is/are provided.
-
- A method for motion estimation in video image data, especially for field rate up-conversion in consecutive frames of a motion picture, comprising the steps of: providing a video signal comprising video image data of a video line or part of the video line of the picture; performing the motion estimation by detecting and analysing the video image data and by deriving motion vectors depending on the detected motion; calculating a histogram for motion vectors to derive most and less used motion vectors in a current frame or field.
- An apparatus for motion estimation in video image data, especially field rate up-conversion in consecutive frames or fields of a motion picture, wherein the apparatus is configured to perform a method according to the present invention.
- A TV-set comprising: an analogue or digital input terminal to provide a video input signal; a device to generate a video signal out of the video input signal comprising video image data of a video line or part of the video line of the picture; an apparatus to perform a line-based motion estimation and a motion compensation according to the present invention and to provide a motion compensated image output signal; a screen to display a motion compensated picture using the motion compensated image output signal.
- A computer program product comprising a code, said code being configured to implement a method according to the present invention.
- A data carrier comprising a computer program product according to the present invention.
- During the process of motion estimation several motion vectors are calculated which are suitable for being used in a subsequent motion compensation process. The calculation of the motion vector might be performed for every pixel of a frame or a field, or alternatively for only some of these pixels, e.g. several selected pixels within a line or part of a line. It is also possible that this motion vector is assigned to a predefined block or section of a frame or a field.
- One basic idea of the present invention is the provision of a motion vector histogram which contains information about which of the calculated motion vectors is used most often and which is only rarely used in a current frame or field of a picture. This information stored in the motion vector histogram enables a significant and effective motion estimation process and thus also an efficient motion compensation process, since only part of the calculated motion vectors is used. This consequently reduces the overall memory requirement and computational effort significantly.
- Another advantage of the present invention is the fact that the whole motion estimation and motion compensation process becomes faster, which is, especially in modern video applications, one of the key issues for establishing a highly precise picture on the TV-panel.
- The present invention describes also a method for motion estimation and motion compensation which operates only in one direction and therefore performs the motion estimation and motion compensation operations using at least one single line buffer memory, the so-called line memory. This offers the possibility to reduce the chip embedded memory to one single line memory for the previous and one single line memory for the current field or frame. This advantageously enables significant silicon area reducing and cost saving implementations.
- In a preferred embodiment of the present invention, the MEMC is limited to motion in the horizontal direction only, since most of the motion in natural scenes has this direction.
- In video signal processing line memories are often used in other applications which already have access to the previous and current motion portrayal, e.g. like so-called de-interlacer applications or temporal noise reduction applications. In a preferred embodiment these already existing line memories of the video application are now additionally used also for MEMC operations. By using existing line memories of the video signal processing system, no further memory bandwidth has to be added to the memory bus and the memory bandwidth is kept uninfluenced.
- Thus, this solution offers the possibility to accomplish the MEMC operations by adding a minimal or in the optimal case no additional memory to the video processing system.
- Advantages, embodiments and further developments of the present invention can be found in the further subclaims and in the following description, referring to the drawings.
- In a preferred embodiment the method is used for line-based motion estimation.
-
- In a preferred embodiment the step of calculating comprises the following sub-steps: assigning a counter to each different motion vector; counting the occurrences of identical motion vectors by incrementing or decrementing the counter by a given value; comparing the different counter values assigned to the different motion vectors; ranking the different motion vectors in the order of their occurrences in a current frame or field.
- In a preferred embodiment the motion vector having the highest rank and/or the most often used motion vectors is/are selected.
- In a preferred embodiment the method further comprises a motion compensation wherein the selected motion vector is used for motion compensation to interpolate a picture.
- In a preferred embodiment the motion vectors having the highest rank and/or the most often used motion vector are stored in a memory.
- In a preferred embodiment the step of calculating a histogram is done for the whole frame or field.
- In a preferred embodiment the step of calculating a histogram is done for parts of the frame or field by splitting the frame or field into horizontal stripes and detecting the most often used vector for each stripe.
- In a preferred embodiment the step of calculating a histogram is done to detect news ticker information, sub-titles or any other written information within a frame or field.
- In a preferred embodiment a damping value which depends on the selected motion vector is used to adapt motion vectors with similar counter values.
- In a preferred embodiment the histogram information of the rank of a motion vector is used to detect reliable and unreliable motion vectors.
- In a preferred embodiment the motion vector contains only motion data for motion of an object in one direction and especially in the horizontal direction.
- In a preferred embodiment image data of the previous frame is derived from a first line memory and image data of the current frame is derived from a second line memory.
- In a preferred embodiment the first line memory and/or the second line memory is/are further used in a de-interlacer application and/or a temporal noise reduction application.
- In a preferred embodiment a histogram generator is provided to establish a motion vector histogram for motion vectors to derive most and less used motion vectors in a current frame or field.
- In a preferred embodiment the apparatus further comprises a histogram generator to provide a motion vector histogram for motion vectors to derive most and less used motion vectors in a current frame or field.
- In a preferred embodiment the histogram generator further comprises a counting device for counting the occurrences of identical motion vectors by incrementing or decrementing the counter by a given value; and a ranking device which is designed to compare the different counter values assigned to the different motion vectors, to rank the different motion vectors on the basis of their occurrence in a current frame or field, and to select the most often used motion vector for the motion compensation.
- In a preferred embodiment a motion vector histogram memory is provided to store the most often used motion vectors.
- In a preferred embodiment a first line memory for storing image data of the previous frame and a second line memory for storing image data of the current frame are provided.
- In a preferred embodiment the first line memory and/or the second line memory are configured to be further used in a de-interlacer device and/or a temporal noise reduction device.
- In a preferred embodiment the apparatus is an integrated circuit and/or is implemented within a microcontroller or a microprocessor.
- For a more complete understanding of the present invention and advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings. The invention is explained in more detail below using exemplary embodiments which are schematically specified in the figures of the drawings, in which:
- FIG. 1 shows the result of a standard (i.e. non motion compensated) FRU method;
- FIG. 2 shows the change of position of a moving object between two consecutive received image fields;
- FIG. 3 shows the motion estimation principle for the line-based motion estimation by means of a current frame and the corresponding previous frame;
- FIG. 4 shows a block diagram of a first embodiment of a line-based MEMC system according to the present invention;
- FIG. 5 shows an example to illustrate the matching process of the motion estimation;
- FIG. 6 shows the basic principle for the provision of a motion vector histogram;
- FIG. 7 shows a block diagram illustrating an embodiment of the line-based motion estimation according to the present invention;
- FIG. 8 shows a block diagram illustrating an embodiment of the line-based motion compensation according to the present invention using adaptive artefact concealments;
- FIG. 9 shows a block diagram of a second embodiment of a line-based MEMC system according to the present invention using the line memories assigned to the de-interlacer device also for the motion estimation device.
- In all figures of the drawings, elements, features and signals which are the same or at least have the same functionality have been provided with the same reference symbols, descriptions and abbreviations unless explicitly stated otherwise.
- In the following description of the present invention first of all a short overview of the motion estimation and motion compensation is presented.
- The MEMC method consists mainly of two sections, the motion estimation and the motion compensation method. The motion estimation performs the measurement of the motion and derives the velocity of the displayed regions in pixels per picture (i.e. field or frame). Also the direction of the motion will be indicated by a positive or negative sign. This measured motion information is described in the form of a motion vector. The motion vector is used for the motion compensation to interpolate the picture at the temporally accurate position and to avoid so-called judder effects and/or so-called motion blurring effects.
- FIG. 3 shows the motion estimation principle for the line-based motion estimation by means of a current picture (field or frame) 10 (n) and the corresponding previous picture 11 (n−1). According to the temporal positions, the motion vector 12, 13 will be split by its length into two parts, where the first vector part 12 points into the previous picture 11 and the second vector part 13 points into the current picture 10. For the interpolation of a missing picture 14 (n−½) between the current and the previous pictures 10, 11, pixels 15 from both temporal pictures 10, 11 are taken into account for the compensation. In line-based MEMC only the pixels 15 within the same line 16 are used at the same time and the MEMC is performed for a single line 16 of a field or frame only. For this kind of MEMC the pixels 15 of the current picture 10 are compared with the corresponding pixels 15 of the previous picture 11 to estimate and compensate the corresponding pixels 15 of the missing picture 14.
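- The splitting of the motion vector into two halves pointing into the previous and the current picture can be sketched as follows; this is a simplified illustration with an assumed sign convention and assumed function name, not the patented implementation.

```python
def compensate_line(prev_line, curr_line, vectors):
    """Sketch of line-based motion-compensated interpolation of one line.

    prev_line, curr_line: lists of luminance values for the same line 16 in
    the previous picture 11 and the current picture 10.
    vectors: one horizontal motion vector per pixel (in pixels per picture).
    Each vector is split into two halves: one half points into the previous
    picture, the other into the current picture (sign convention assumed).
    """
    n = len(curr_line)
    out = []
    for x, v in enumerate(vectors):
        half = int(round(v / 2.0))
        xp = min(max(x - half, 0), n - 1)   # sample in the previous picture
        xc = min(max(x + half, 0), n - 1)   # sample in the current picture
        out.append((prev_line[xp] + curr_line[xc]) // 2)
    return out
```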
FIG. 4 shows a block diagram of a line-based MEMC system according to the present invention. The MEMC system is denoted by reference number 20. The MEMC system 20 comprises an input terminal 21, a bus 22, two line memories 23, 24, a motion estimation device 25, a motion compensation device 26 and an output terminal 27. It is assumed that the bus 22 is an external bus 22 and especially an external memory bus 22. However, it may also be possible that the bus 22 is an internal bus 22. At the input side the bus 22 is connected to an external memory device 28 such as an SDRAM, a DDR-RAM, etc. Image data to be displayed on a panel 29 such as a plasma or LCD panel or a CRT screen is stored in this external memory 28. Via the input terminal 21 and the memory bus 22 this image data X1, X1′ is transferred to both line memories 23, 24. The first line memory 23 is used for buffering the image data X1 of the previous picture and the other line memory 24 is used for storing the image data X1′ of the current picture. - A
line memory 23, 24 is thus provided for the previous picture and for the current picture, respectively. - Both of the
line memories 23, 24 are connected to the motion estimation device 25 and to the motion compensation device 26. This enables the image data X1, X1′ stored in the line memories 23, 24 to be provided to the motion estimation device 25 and to the motion compensation device 26, respectively. In FIG. 4 the corresponding data signals to the motion estimation device 25 are denoted by X2, X2′ and the corresponding data signals to the motion compensation device 26 are denoted by X3, X3′. - The
motion estimation device 25 generates a motion vector signal X4 out of the image data X2, X2′ stored in the line memories 23, 24 and provides this motion vector signal X4 to the motion compensation device 26. The motion compensation device 26 performs a motion compensation using the image data X3, X3′ stored in the line memories 23, 24. At the output terminal 27, the motion compensation device 26 provides a video signal X5 which comprises information for a motion compensated picture. This video signal X5 is transferred via the output terminal 27 to a display 29, such as an LCD panel 29 or the like. - With regard to
FIG. 5, hereinafter the operation of the motion estimation device 25 is described in more detail: - For the motion estimation a matching process is employed to select a corresponding series of
pixels 32 which fits best to a given amount of pixels 30. For this selection a given amount of pixels 30 of a line of the current frame around the centre pixel 31, for which the motion shall be determined, is taken from a line memory 24 of the current frame 32. Hereinafter this given amount of pixels 30 is denoted as the series of pixels 30. In the present embodiment a series of pixels 30 comprises 9 single pixels 33. It is self-understood that a series can also comprise a greater or a smaller amount of pixels 33. - For the selection the luminance profile of the
pixels 33 is used as the matching parameter. Luminance is a photometric measure of the density of luminous intensity in a given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle. Thus, luminance is the photometric measure of the brightness in a frame of a motion picture. If the luminance is high, the picture is bright and if it is low the picture is dark. Thus, luminance is the black and white part of the picture. - This luminance profile is used to find out that series of nine
pixels 34 out of the previous frame 35 which fits best with the series of nine pixels 30 of the current frame 32. In the embodiment of FIG. 5 the luminance profile of the series of nine pixels 30 of the current frame 32 is compared with the luminance profiles of several corresponding series of nine pixels 34 of the previous frame 35. In order to derive the true motion, the series of nine pixels 30 is shifted over the search range in the horizontal direction 36. It is assumed that the series of nine pixels 34 of the previous frame 35 which shows the best luminance profile match (with the series of nine pixels 30 of the current frame 32) is the correct series of pixels. These series of pixels 30, 34 then define the estimated motion vector 37. - A typical value for the search range is, e.g., 64 pixels (+31 . . . −32). It is also possible to use fewer than 64 pixels; the quality of the comparison result then decreases. On the other hand, it is also possible to use more than 64 pixels; the quality of the selection result then increases, but this requires more computational effort. Therefore, a trade-off is typically employed that balances the quality of the selection result against the computational effort.
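- As an illustration of the matching step, the sketch below slides a series of nine pixels of the current line over the previous line within a search range of 64 pixels (+31 to -32) and returns the displacement with the smallest sum of absolute differences. The names and the plain SAD criterion are assumptions for illustration; they stand in for the cost function used here and do not reproduce the exact implementation.

```python
import numpy as np

def best_horizontal_vector(prev_line, curr_line, centre, window=9, search=32):
    """Return the displacement whose 9-pixel luminance profile in the previous
    line matches the profile around `centre` in the current line best."""
    half = window // 2
    ref = curr_line[centre - half:centre + half + 1]      # series of pixels of the current frame
    best_vec, best_sad = 0, float("inf")
    for v in range(-search, search):                      # +31 ... -32 as in the text
        start = centre - half + v
        end = start + window
        if start < 0 or end > len(prev_line):
            continue                                      # candidate falls outside the line
        cand = prev_line[start:end]
        sad = int(np.abs(ref.astype(np.int32) - cand.astype(np.int32)).sum())
        if sad < best_sad:
            best_vec, best_sad = v, sad
    return best_vec, best_sad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, 256).astype(np.uint8)
    curr = np.roll(prev, 5)                               # scene shifted 5 pixels to the right
    vec, sad = best_horizontal_vector(prev, curr, centre=128)
    print(vec, sad)                                       # prints -5 0: the match lies 5 pixels to the left in the previous line
```

Narrowing the search range shrinks the inner loop at the price of missing fast motion, which is exactly the trade-off discussed above.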
- In a preferred embodiment for each selected motion vector 37 a single matching process is performed in the way described above. This matching process is performed by assigning a quality degree and/or a failure degree for each series of
pixels 30. Then, a quality degree and/or a failure degree is assigned to each one of those series of pixels 30 which undergo the matching process. Those series of pixels 30 having the highest quality degrees and/or the lowest failure degrees are selected as the most probable series of pixels. These series of pixels 30 are then used for computing the motion vectors for the horizontal motion. Typically, but not necessarily, a SAD method (SAD = sum of absolute differences) and/or an ADRC method is used for the comparison of the luminance and/or chrominance values. - Assuming that the motion of an object in the scene is constant from frame to frame and that the object is larger than a series of pixels (e.g. the above mentioned 9 pixels), the matching process can be performed more efficiently if a
set 38 of pre-selected motion vectors 37—the so-called motion vector samples 37—is checked for a matching of the luminance profile (see FIG. 5 ). For example, one selected motion vector 37 can be taken from the neighbouring pixel. A second selected motion vector can be taken from the previous line, if the already estimated motion vectors are stored in a vector memory specially designed for the different motion vector samples. - The zero-vector which indicates no motion of the object is typically one of the most used motion vector samples. This zero-vector is used in order to more efficiently detect regions within a picture showing no motion. In principle the amount of
pre-selected motion vectors 37 which will be taken into account depends strongly on what kind of motion vector quality is desired. - In order to set up the process of motion estimation and to follow deviations from the constant motion, a variation of certain pre-selected motion vectors is required for test purposes. That means that for pre-selected motion vector samples a certain amount of motion is added or subtracted. This can be done by applying a variance with a different amount of motion speed to these motion vectors. The tested implementation alternates between odd and even pixels, applying an update of +/−1 pixel and +/−4 pixels to the previously determined motion vector. The selection of the variance is adjustable and variable as required and depends e.g. on the resolution of the incoming video signal.
- For the line-based motion estimation it is very advantageous that the motion vector converges quickly to the real motion in the scene. Therefore, the selection of the tested motion vectors is treated differently for the first line of a frame or field. For the first line of a frame or field, testing is not possible in the normal way since the line above the first line, which would be needed for testing, does not exist. In the first line of each field the selected motion vectors which normally test the motion vectors of the line above are instead loaded with vector values which, e.g., vary according to a triangle function from pixel to pixel. The triangle function oscillates between an adjustable minimum value and an adjustable maximum value. For that purpose other regular oscillating functions, e.g. a saw tooth function, a sinusoidal function, and the like, may also be employed for the determination of the motion vectors of the first line.
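- The triangle-function initialisation of the first line can be pictured with the following small sketch; the parameter names and default limits are illustrative assumptions.

```python
def first_line_seed_vectors(width, v_min=-8, v_max=8, step=1):
    """Generate per-pixel test vectors for the first line of a field as a triangle
    wave oscillating between v_min and v_max (another periodic function such as a
    saw tooth or a sine could be used instead)."""
    seeds = []
    value, direction = v_min, +1
    for _ in range(width):
        seeds.append(value)
        value += direction * step
        if value >= v_max:
            value, direction = v_max, -1
        elif value <= v_min:
            value, direction = v_min, +1
    return seeds

if __name__ == "__main__":
    print(first_line_seed_vectors(40))
```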
- In a preferred embodiment the matching process assigns a failure value to each tested motion vector. In another embodiment this value may also be a quality value. It is also possible to evaluate both a failure value and a quality value for the matching process. Preferably, the sum of absolute differences (SAD) is used as the failure value or at least to derive the failure value. Ideally, to find the optimal motion vector a failure value of zero would be obtained. However, typically the failure value is different from zero. Therefore, the motion vector with the lowest failure value is selected as the most probable motion vector representing the motion of an object in the local scene.
- In a preferred embodiment a damping value is used which depends on the vector attenuation of the different motion vectors. This makes it possible to handle motion vectors with equal failure values and/or to give the motion vector selection process a certain preferred direction.
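- Combining the elements above (a small set of vector samples such as the zero vector, the neighbouring vector, the vector from the line above and test updates of the neighbouring vector, a SAD failure value, and a damping penalty that steers the choice between vectors with equal failure values), a vector selection step could look roughly like the following sketch. All names and the penalty weighting are assumptions for illustration.

```python
import numpy as np

def sad(prev_line, curr_line, centre, vec, window=9):
    half = window // 2
    a = curr_line[centre - half:centre + half + 1].astype(np.int32)
    b = prev_line[centre - half + vec:centre + half + 1 + vec].astype(np.int32)
    return int(np.abs(a - b).sum())

def select_vector(prev_line, curr_line, centre, left_vec, above_vec, damping=2):
    """Pick the most probable vector from a small candidate set.

    Candidates: zero vector, the vector of the neighbouring pixel, the vector of
    the line above, and +/-1 and +/-4 pixel updates of the neighbouring vector.
    The failure value is the SAD; a damping penalty proportional to the deviation
    from the neighbouring vector steers the choice when failure values are equal.
    """
    candidates = {0, left_vec, above_vec,
                  left_vec + 1, left_vec - 1, left_vec + 4, left_vec - 4}
    best, best_cost = 0, float("inf")
    for v in candidates:
        lo = centre - 4 + v          # window half-size 4 matches window=9 in sad()
        hi = centre + 4 + v
        if lo < 0 or hi >= len(prev_line):
            continue
        cost = sad(prev_line, curr_line, centre, v) + damping * abs(v - left_vec)
        if cost < best_cost:
            best, best_cost = v, cost
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 256, 128).astype(np.uint8)
    curr = np.roll(prev, 3)
    print(select_vector(prev, curr, centre=64, left_vec=-3, above_vec=0))  # prints -3
```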
- The different motion vectors are advantageously stored in a vector memory. These motion vectors can then—if required—be fetched from the vector memory for further processing and/or for the motion estimation of the next pixels.
- The motion estimation is a recursive process. Therefore, the size of this vector memory mainly depends on the desired quality level of the matching process. In one embodiment, the tested implementation comprises only one line of vector memory. In this vector memory every second motion vector is stored alternately, so that the motion vectors of the measured line above remain accessible.
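- A single line of vector memory in which every second vector is stored alternately could be modelled as in the following sketch (assumed names and a simplified read rule):

```python
class AlternatingVectorMemory:
    """One line of vector memory shared by two video lines: the current line writes
    only to cells of one parity, so the cells of the other parity still hold the
    (subsampled) vectors of the line above."""

    def __init__(self, width):
        self.cells = [0] * width

    def store(self, line_no, x, vector):
        if x % 2 == line_no % 2:          # the current line uses every second cell only
            self.cells[x] = vector

    def vector_above(self, line_no, x):
        parity = (line_no + 1) % 2        # parity of the cells written by the line above
        x_read = x if x % 2 == parity else (x - 1 if x > 0 else x + 1)
        return self.cells[x_read]

if __name__ == "__main__":
    mem = AlternatingVectorMemory(12)
    for x in range(12):                   # vectors estimated for line 0
        mem.store(0, x, 7)
    # while estimating line 1, the vectors of line 0 can still be read back
    print([mem.vector_above(1, x) for x in range(12)])
```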
- In a preferred embodiment a motion vector histogram is calculated in order to create a highly reliable and homogeneous field of motion vectors. This vector histogram allows a vector majority ranking to derive the most and least used motion vectors in the actual scene.
-
FIG. 6 shows a preferred embodiment to illustrate the basic principle for the provision of a motion vector histogram according to the present invention. FIG. 6 shows a vector histogram generator 40 to provide a motion vector histogram. In the embodiment in FIG. 6 the vector histogram generator 40 comprises a switching device 41, which is controlled by a +1-incrementing device 42. The switching device 41 is controlled on the one hand by the motion vector 43 information and on the other hand by the incrementing device 42, which shifts the switching device 41 to the next input terminal of a counting device 45 whenever the next identical motion vector 43 occurs. The counting device 45, which comprises different counter cells 44, counts the occurrence of each motion vector and increments the corresponding counter by +1 for each occurrence of the motion vector. A ranking device 46—which e.g. comprises a simple comparing means—is coupled to the output terminals of the different counter cells 44 of the counting device 45. This ranking device 46 selects the most often used motion vector and applies this motion vector for the estimation. The most often used motion vector may then be stored in a motion vector histogram memory 47. - The provision of a motion vector histogram can be done either for the whole frame or field or only for parts of the frame or field. It is very efficient to split the picture into horizontal stripes and return a most often used vector for each stripe. In a very preferred embodiment, news ticker information within a picture can be detected very reliably in that way.
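- A software analogue of the vector histogram generator 40, with one counter per distinct vector, a majority ranking and an evaluation per horizontal stripe, might look like the following sketch; the data layout and names are assumptions.

```python
from collections import Counter

def stripe_majority_vectors(vector_field, stripe_height):
    """Build a motion vector histogram per horizontal stripe and return, for each
    stripe, the most often used vector (vector majority ranking).

    vector_field: list of rows, one estimated horizontal vector per pixel.
    A stripe whose majority vector is strongly non-zero while the rest of the
    picture is static is a hint for e.g. a scrolling news ticker.
    """
    majorities = []
    for top in range(0, len(vector_field), stripe_height):
        counts = Counter()
        for row in vector_field[top:top + stripe_height]:
            counts.update(row)                 # one counter cell per distinct vector
        vector, occurrences = counts.most_common(1)[0]
        majorities.append((vector, occurrences))
    return majorities

if __name__ == "__main__":
    static_row = [0] * 32
    ticker_row = [-6] * 32                     # text crawling to the left
    field = [static_row] * 12 + [ticker_row] * 4
    print(stripe_majority_vectors(field, stripe_height=4))
```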
-
FIG. 7 shows a block diagram illustrating an embodiment of the line-based motion estimation according to the present invention as described above and as implemented in a motion estimation device 25 as shown in FIG. 4 . - The
motion estimation device 25 comprises a matching device 80, a cost/quality function device 81 and a vector selector device 82, which are arranged in series connection between the input side 83 of the motion estimation device 25, where the image data signals X1, X1′ stored in both line memories 23, 24 are provided, and the output side 84 of the motion estimation device 25, where the motion vector signal X4 is present. In the device elements 80-82 a matching process and a vector selection as described with regard to FIG. 5 are implemented. - The
motion estimation device 25 further comprises a vector quality device 85 which is connected on the one hand to the input side 83 and on the other hand to the output side 84. The vector quality device 85 generates a quality signal X6, comprising information on the vector quality, out of the image data signals X1, X1′ and the motion vector signal X4. - The
motion estimation device 25 further comprises a vector histogram device 86 and a vector majority device 87 which are arranged in series connection in a feedback path between the output side 84 and the matching device 80. Here, in the device elements 86, 87 a vector histogram is generated to provide a ranking of the most and least used vectors in the actual scene as shown and described with regard to FIG. 6 . Thus, the elements 86, 87 correspond to the vector histogram generator 40 of FIG. 6 . - The
motion estimation device 25 may further comprise a further line memory 88 to store the motion vector data X4 and/or the data X6 for the vector quality. - The
motion estimation device 25 further comprises a vector sample device 89. This vector sample device 89 is also arranged in the feedback path and is connected at its input side with the line memory 88, the vector majority device 87 and advantageously with a further device 90. This further device 90 performs a variation of the motion vector samples by using a special signal having a certain magnitude, e.g. a sinusoidal signal, a saw tooth signal or the like. This signal is then used for a testing and/or matching process and/or an updating process for the first line of a frame or field. However, it might also be possible to randomly update different lines of the frame or field. At its output side, the vector sample device 89 is connected to the matching device 80. - The
motion estimation device 25 further comprises a vertical motion estimation device 91. For vertical motions the above described one-dimensional motion estimation algorithm is not able to fully compensate motion in the vertical direction. However, the occurrence of vertical motion can be used to reduce the compensation in some regions of the picture by splitting the picture into different regions to derive a vertical motion for each region. In this case the luminance values of the lines in the different regions of the same picture are summed up and stored individually for each line of this picture. This results in an accumulated vertical profile for different regions of the same picture. Then, the whole picture can be divided into smaller regions to derive a vertical motion for each of these regions. This vertical motion estimation process is performed in the vertical motion estimation device 91 which is connected to the input side 83 and which provides at its output side a sector based vertical motion index X7. - Thus, the vertical MEMC as sketched above can be performed independently of the horizontal MEMC and also in combination with the horizontal MEMC, wherein the combination can be performed in dependence on the particular situation or the motions present, respectively. Further, such a methodology allows an implementation of vertical MEMC which does not need large amounts of additional memory capacity to analyze data of consecutive frames, as is the case in most methodologies of the prior art.
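- The region-wise vertical estimation described above can be sketched as follows: the luminance of every line is accumulated into a vertical profile per region, and the vertical shift that best aligns the profiles of the previous and the current picture is taken as the vertical motion index of that region. The names and the matching criterion are illustrative assumptions.

```python
import numpy as np

def vertical_profiles(picture, num_regions):
    """Sum the luminance of each line inside each region (the columns are split
    into num_regions stripes), giving one accumulated vertical profile per region."""
    height, width = picture.shape
    bounds = np.linspace(0, width, num_regions + 1, dtype=int)
    return [picture[:, bounds[i]:bounds[i + 1]].sum(axis=1) for i in range(num_regions)]

def vertical_motion_index(prev_pic, curr_pic, num_regions=4, max_shift=8):
    """Estimate one vertical displacement per region by matching the accumulated
    line-sum profiles of the previous and the current picture."""
    indices = []
    for p_prev, p_curr in zip(vertical_profiles(prev_pic, num_regions),
                              vertical_profiles(curr_pic, num_regions)):
        best_shift, best_err = 0, float("inf")
        for s in range(-max_shift, max_shift + 1):
            a = p_curr[max(0, s):len(p_curr) + min(0, s)]
            b = p_prev[max(0, -s):len(p_prev) + min(0, -s)]
            err = np.abs(a.astype(np.int64) - b.astype(np.int64)).mean()
            if err < best_err:
                best_shift, best_err = s, err
        indices.append(best_shift)
    return indices

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    curr = np.roll(prev, 3, axis=0)           # whole picture moved 3 lines down
    print(vertical_motion_index(prev, curr))  # roughly [3, 3, 3, 3]
```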
- The
motion estimation device 25 further comprises a vector damping device 92. In this damping device 92 a damping value as described above may be used to damp the vector samples of the vector sample device 89 and to provide damped vector samples to the vector selector 82. - Hereinafter the motion compensation process which is performed in the
motion compensation device 26 of FIG. 4 is described with regard to FIG. 8 in more detail. FIG. 8 shows a block diagram illustrating an embodiment of the line-based motion compensation according to the present invention using adaptive artefact concealments as described above. - The
motion compensation device 26 comprises a compensation device 100 which performs the temporal motion interpolation according to the motion vectors X4 estimated by the motion estimation device 25. In a preferred embodiment the compensation device 100 comprises a median filter which uses as input data the luminance values of the vector compensated previous line, the vector compensated current line and the uncompensated previous line. Additionally, the chrominance values can also be compensated. - Depending on the vector quality, a replacement vector indicated as a reliable vector is searched for in the local area of the vector memory from the line above. If no reliable vector can be found, the adaptive blurring typically tries to conceal this artefact.
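- The median based compensation can be illustrated with the following simplified sketch (assumed names, not the compensation device 100 itself): for every output pixel the median of the vector compensated previous line, the vector compensated current line and the uncompensated previous line is taken, which limits the damage caused by a wrong vector.

```python
import numpy as np

def median_compensate(prev_line, curr_line, vectors):
    """Interpolate one output line at the temporal midpoint.

    For each pixel the three candidates are: the previous line fetched along the
    motion vector, the current line fetched along the motion vector, and the
    unshifted previous line as a robust fallback; the median of the three is output.
    """
    width = len(curr_line)
    out = np.empty(width, dtype=curr_line.dtype)
    for x in range(width):
        half = vectors[x] // 2
        xp = min(max(x - half, 0), width - 1)     # compensated position in the previous line
        xc = min(max(x + half, 0), width - 1)     # compensated position in the current line
        out[x] = int(np.median([prev_line[xp], curr_line[xc], prev_line[x]]))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    prev = rng.integers(0, 256, 64).astype(np.uint8)
    curr = np.roll(prev, 4)
    vectors = [4] * 64                            # estimated horizontal motion per pixel
    print(median_compensate(prev, curr, vectors))
```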
- The
motion compensation device 26 further comprises a vertical motion control device 101 which provides a control signal X8 to the compensation device 100 in order to also incorporate information on vertical motion into the compensation device 100. - The
motion compensation device 26 further comprises a bad vector modification device 102. Based on the information X4, X6 provided by the motion estimation device 25, the bad vector modification device 102 modifies bad vectors. This information X9 about modified bad vectors is then used—together with the control signal X8—to perform the motion compensation within the compensation device 100. The compensation device 100 then generates at its output side a motion compensated image data signal X10. - The
motion compensation device 26 further comprises an adaptive blurring device 103. Based on the motion compensated image data signal X10 and a blurring control signal generated by the bad vector modification device 102, this adaptive blurring device 103 performs an adaptive blurring. The adaptive blurring device 103 generates an adaptively blurred image data signal X5′ which might correspond to the image signal X5 of FIG. 4 . -
FIG. 9 shows a block diagram of a second embodiment of a line-based MEMC system according to the present invention using the line memories assigned to the de-interlacer device also for the motion estimation device. - Unlike the first embodiment in
FIG. 4 a de-interlacer device 113 is arranged between the line memories 110-112 and the motion compensation device 26. The de-interlacer device 113 is typically used to convert a field represented by a video data stream into a full frame which is then also represented by another video data stream. - On-chip solutions for video processing which are memory-based already have existing internal line buffers 110-112—the so-called line memories 110-112—which carry video data from the previous and current field or frame. These line buffers 110-112 can be located e.g. within temporal noise reduction or
de-interlacing units 113 which operate motion-adaptively. With the proposed line-based MEMC these line buffers can additionally be reused for the motion estimation. For that purpose, and in order to reduce motion judder artefacts from movie sources, a movie detector which indicates the current interpolated sequence of the pull-down mode is used. A line buffer selector transfers the video signal data to the motion estimation device according to the previous and the current video input signal. This technique allows already existing memory resources to be used for motion estimation as well, which also avoids additional bandwidth for the temporal up-conversion process. Therefore, the chip area for the motion estimation and the motion compensation can be reduced to a minimum. - The
de-interlacer device 113 uses three line memories 110, 111, 112 which receive image data from the memory bus 22 and provide at their output side line data. This line data provided by the line memories 110-112 is supplied to the motion compensation device 26. According to the present invention, these line memories 110-112 are also used for the motion estimation device 25. For this purpose, the system 20 additionally comprises a selector device 114, to which a movie sequence X0 is provided. This movie sequence X0 may then be stored in an external memory 28 via the memory bus 22 and can be read out from this external memory 28 through the line memories 110-112, so that the line memories 110-112 assigned to the de-interlacer device 113 can also be used for MEMC. For this purpose the data stored in the line memories 110-112 can be read by the motion estimation device 25 and the motion compensation device 26. - While embodiments and applications of this invention have been shown and described above, it should be apparent to those skilled in the art that many more modifications (than mentioned above) are possible without departing from the inventive concept described herein. The invention, therefore, is not restricted except in the spirit of the appended claims. It is therefore intended that the foregoing detailed description is to be regarded as illustrative rather than limiting and that it is understood that it is the following claims, including all equivalents described in these claims, that are intended to define the spirit and the scope of this invention. Nor is anything in the foregoing description intended to disavow the scope of the invention as claimed or any equivalents thereof.
- It is also noted that the above mentioned embodiments, examples and numerical data should be understood to be only exemplary. That means that additional system arrangements and functional units and operation methods and standards may be implemented within the MEMC-system.
- It is self understood that the above mentioned numerical data is merely illustrative and may be adapted to best provide an optimized blurring effect.
- At this point it should also be mentioned that the present invention is not necessarily based on so-called line-based MEMC systems, although in the above embodiments of the present invention reference is always made to line-based MEMC systems. In fact, the present invention relates generally to all implementations using motion estimation of video image data, i.e. especially so-called block-based motion estimation, line-based motion estimation and the like. It is self-understood that for those implementations which do not apply line-based motion estimation, memory means other than line memories may typically be employed.
Claims (24)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP07017665.6 | 2007-09-10 | ||
EP07017665 | 2007-09-10 | ||
PCT/IB2008/053147 WO2009034489A2 (en) | 2007-09-10 | 2008-08-05 | Method and apparatus for motion estimation in video image data |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100271554A1 true US20100271554A1 (en) | 2010-10-28 |
Family
ID=40379094
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/677,507 Abandoned US20100271554A1 (en) | 2007-09-10 | 2008-08-05 | Method And Apparatus For Motion Estimation In Video Image Data |
US12/677,523 Abandoned US20110205438A1 (en) | 2007-09-10 | 2008-08-05 | Method and apparatus for motion estimation and motion compensation in video image data |
US12/677,508 Expired - Fee Related US8526502B2 (en) | 2007-09-10 | 2008-08-05 | Method and apparatus for line based vertical motion estimation and compensation |
US12/676,364 Active 2032-01-29 US9036082B2 (en) | 2007-09-10 | 2008-08-22 | Method, apparatus, and system for line-based motion compensation in video image data |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/677,523 Abandoned US20110205438A1 (en) | 2007-09-10 | 2008-08-05 | Method and apparatus for motion estimation and motion compensation in video image data |
US12/677,508 Expired - Fee Related US8526502B2 (en) | 2007-09-10 | 2008-08-05 | Method and apparatus for line based vertical motion estimation and compensation |
US12/676,364 Active 2032-01-29 US9036082B2 (en) | 2007-09-10 | 2008-08-22 | Method, apparatus, and system for line-based motion compensation in video image data |
Country Status (4)
Country | Link |
---|---|
US (4) | US20100271554A1 (en) |
EP (5) | EP2188979A2 (en) |
CN (5) | CN101803361B (en) |
WO (6) | WO2009034487A2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050129312A1 (en) * | 2002-02-06 | 2005-06-16 | Ernst Fabian E. | Unit for and method of segmentation |
US20100238355A1 (en) * | 2007-09-10 | 2010-09-23 | Volker Blume | Method And Apparatus For Line Based Vertical Motion Estimation And Compensation |
US20120195472A1 (en) * | 2011-01-31 | 2012-08-02 | Novatek Microelectronics Corp. | Motion adaptive de-interlacing apparatus and method |
CN102788973A (en) * | 2011-05-17 | 2012-11-21 | 株式会社电装 | Radar device, calibration system and calibration method |
US20140063031A1 (en) * | 2012-09-05 | 2014-03-06 | Imagination Technologies Limited | Pixel buffering |
US10083498B2 (en) * | 2016-02-24 | 2018-09-25 | Novatek Microelectronics Corp. | Compensation method for display device and related compensation module |
US10595043B2 (en) | 2012-08-30 | 2020-03-17 | Novatek Microelectronics Corp. | Encoding method and encoding device for 3D video |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070274385A1 (en) * | 2006-05-26 | 2007-11-29 | Zhongli He | Method of increasing coding efficiency and reducing power consumption by on-line scene change detection while encoding inter-frame |
US8300958B2 (en) * | 2007-07-11 | 2012-10-30 | Samsung Electronics Co., Ltd. | System and method for detecting scrolling text in mixed mode film and video |
US20100091859A1 (en) * | 2008-10-09 | 2010-04-15 | Shao-Yi Chien | Motion compensation apparatus and a motion compensation method |
GB2466044B (en) * | 2008-12-09 | 2014-05-21 | Snell Ltd | Motion image rendering system |
EP2227012A1 (en) * | 2009-03-05 | 2010-09-08 | Sony Corporation | Method and system for providing reliable motion vectors |
EP2234402B1 (en) | 2009-03-25 | 2012-07-04 | Bang & Olufsen A/S | A method and a system for adapting film judder correction |
US20100303301A1 (en) * | 2009-06-01 | 2010-12-02 | Gregory Micheal Lamoureux | Inter-Frame Motion Detection |
KR101643613B1 (en) * | 2010-02-01 | 2016-07-29 | 삼성전자주식회사 | Digital image process apparatus, method for image processing and storage medium thereof |
US9143758B2 (en) | 2010-03-22 | 2015-09-22 | Thomson Licensing | Method and apparatus for low-bandwidth content-preserving encoding of stereoscopic 3D images |
KR101279128B1 (en) * | 2010-07-08 | 2013-06-26 | 엘지디스플레이 주식회사 | Stereoscopic image display and driving method thereof |
US10424274B2 (en) * | 2010-11-24 | 2019-09-24 | Ati Technologies Ulc | Method and apparatus for providing temporal image processing using multi-stream field information |
US11582479B2 (en) * | 2011-07-05 | 2023-02-14 | Texas Instruments Incorporated | Method and apparatus for reference area transfer with pre-analysis |
US9699456B2 (en) | 2011-07-20 | 2017-07-04 | Qualcomm Incorporated | Buffering prediction data in video coding |
KR101896026B1 (en) * | 2011-11-08 | 2018-09-07 | 삼성전자주식회사 | Apparatus and method for generating a motion blur in a portable terminal |
US8810666B2 (en) * | 2012-01-16 | 2014-08-19 | Google Inc. | Methods and systems for processing a video for stabilization using dynamic crop |
TWI601075B (en) * | 2012-07-03 | 2017-10-01 | 晨星半導體股份有限公司 | Motion compensation image processing apparatus and image processing method |
CN103686190A (en) * | 2012-09-07 | 2014-03-26 | 联咏科技股份有限公司 | Coding method and coding device for stereoscopic videos |
TWI606418B (en) * | 2012-09-28 | 2017-11-21 | 輝達公司 | Computer system and method for gpu driver-generated interpolated frames |
US20140105305A1 (en) * | 2012-10-15 | 2014-04-17 | Vixs Systems, Inc. | Memory cache for use in video processing and methods for use therewith |
US8629939B1 (en) * | 2012-11-05 | 2014-01-14 | Lsi Corporation | Television ticker overlay |
HUE054443T2 (en) * | 2013-02-08 | 2021-09-28 | Novartis Ag | Specific sites for modifying antibodies to make immunoconjugates |
US10326969B2 (en) * | 2013-08-12 | 2019-06-18 | Magna Electronics Inc. | Vehicle vision system with reduction of temporal noise in images |
GB2527315B (en) * | 2014-06-17 | 2017-03-15 | Imagination Tech Ltd | Error detection in motion estimation |
US9728166B2 (en) * | 2015-08-20 | 2017-08-08 | Qualcomm Incorporated | Refresh rate matching with predictive time-shift compensation |
US9819900B2 (en) * | 2015-12-30 | 2017-11-14 | Spreadtrum Communications (Shanghai) Co., Ltd. | Method and apparatus for de-interlacing television signal |
US10504269B2 (en) * | 2016-09-27 | 2019-12-10 | Ziva Dynamics Inc. | Inertial damping for enhanced simulation of elastic bodies |
CN109729298B (en) * | 2017-10-27 | 2020-11-06 | 联咏科技股份有限公司 | Image processing method and image processing apparatus |
CN108305228B (en) * | 2018-01-26 | 2020-11-27 | 网易(杭州)网络有限公司 | Image processing method, image processing device, storage medium and processor |
KR102707596B1 (en) * | 2018-08-07 | 2024-09-19 | 삼성전자주식회사 | Device and method to estimate ego motion |
WO2020149919A1 (en) * | 2019-01-18 | 2020-07-23 | Ziva Dynamics Inc. | Inertial damping for enhanced simulation of elastic bodies |
CN112017229B (en) * | 2020-09-06 | 2023-06-27 | 桂林电子科技大学 | Camera relative pose solving method |
CN112561951B (en) * | 2020-12-24 | 2024-03-15 | 上海富瀚微电子股份有限公司 | Motion and brightness detection method based on frame difference absolute error and SAD |
US20220210467A1 (en) * | 2020-12-30 | 2022-06-30 | Beijing Dajia Internet Information Technology Co., Ltd. | System and method for frame rate up-conversion of video data based on a quality reliability prediction |
US20220301184A1 (en) * | 2021-03-16 | 2022-09-22 | Samsung Electronics Co., Ltd. | Accurate optical flow interpolation optimizing bi-directional consistency and temporal smoothness |
EP4192002A1 (en) * | 2021-12-03 | 2023-06-07 | Melexis Technologies NV | Method of generating a de-interlacing filter and image processing apparatus |
CN116016831B (en) * | 2022-12-13 | 2023-12-05 | 湖南快乐阳光互动娱乐传媒有限公司 | Low-time-delay image de-interlacing method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4668986A (en) * | 1984-04-27 | 1987-05-26 | Nec Corporation | Motion-adaptive interpolation method for motion video signal and device for implementing the same |
US4891699A (en) * | 1989-02-23 | 1990-01-02 | Matsushita Electric Industrial Co., Ltd. | Receiving system for band-compression image signal |
US4992869A (en) * | 1989-04-27 | 1991-02-12 | Sony Corporation | Motion dependent video signal processing |
US6078618A (en) * | 1997-05-28 | 2000-06-20 | Nec Corporation | Motion vector estimation system |
US20020196362A1 (en) * | 2001-06-11 | 2002-12-26 | Samsung Electronics Co., Ltd. | Apparatus and method for adaptive motion compensated de-interlacing of video data |
US20040101047A1 (en) * | 2002-11-23 | 2004-05-27 | Samsung Electronics Co., Ltd. | Motion estimation apparatus, method, and machine-readable medium capable of detecting scrolling text and graphic data |
US20050271144A1 (en) * | 2004-04-09 | 2005-12-08 | Sony Corporation | Image processing apparatus and method, and recording medium and program used therewith |
US20070040935A1 (en) * | 2005-08-17 | 2007-02-22 | Samsung Electronics Co., Ltd. | Apparatus for converting image signal and a method thereof |
Family Cites Families (146)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE74219T1 (en) * | 1987-06-02 | 1992-04-15 | Siemens Ag | METHOD FOR DETERMINING MOTION VECTOR FIELDS FROM DIGITAL IMAGE SEQUENCES. |
FR2648979B1 (en) | 1989-06-27 | 1996-09-06 | Thomson Consumer Electronics | METHOD FOR SEGMENTATION OF THE MOTION FIELD OF AN IMAGE AND ITS APPLICATION TO THE CODING OF VIDEO IMAGES |
JPH0385884A (en) * | 1989-08-29 | 1991-04-11 | Sony Corp | Image movement detecting circuit |
KR930004824B1 (en) * | 1990-05-12 | 1993-06-08 | 삼성전자 주식회사 | High definition television signal transmitting method and circuit |
EP0476603B1 (en) | 1990-09-20 | 1997-06-18 | Nec Corporation | Method and apparatus for coding moving image signal |
GB2248361B (en) * | 1990-09-28 | 1994-06-01 | Sony Broadcast & Communication | Motion dependent video signal processing |
GB2265065B (en) * | 1992-03-02 | 1995-08-16 | Sony Broadcast & Communication | Motion compensated image processing |
US5440344A (en) * | 1992-04-28 | 1995-08-08 | Mitsubishi Denki Kabushiki Kaisha | Video encoder using adjacent pixel difference for quantizer control |
KR950009699B1 (en) * | 1992-06-09 | 1995-08-26 | 대우전자주식회사 | Motion vector detection method and apparatus |
US5828907A (en) * | 1992-06-30 | 1998-10-27 | Discovision Associates | Token-based adaptive video processing arrangement |
DE69318377T2 (en) * | 1992-11-19 | 1998-09-03 | Thomson Multimedia Sa | Method and device for increasing the frame rate |
JP3106749B2 (en) * | 1992-12-10 | 2000-11-06 | ソニー株式会社 | Adaptive dynamic range coding device |
US5703646A (en) * | 1993-04-09 | 1997-12-30 | Sony Corporation | Picture encoding method, picture encoding apparatus and picture recording medium |
EP0892561B1 (en) * | 1993-04-09 | 2002-06-26 | Sony Corporation | Picture encoding method and apparatus |
JPH07115646A (en) * | 1993-10-20 | 1995-05-02 | Sony Corp | Image processor |
KR0151210B1 (en) * | 1994-09-23 | 1998-10-15 | 구자홍 | Motion compensation control apparatus for mpeg |
US5768438A (en) * | 1994-10-19 | 1998-06-16 | Matsushita Electric Industrial Co., Ltd. | Image encoding/decoding device |
EP0765573B1 (en) | 1995-03-14 | 1999-06-09 | Koninklijke Philips Electronics N.V. | Motion-compensated interpolation |
EP0735746B1 (en) * | 1995-03-31 | 1999-09-08 | THOMSON multimedia | Method and apparatus for motion compensated frame rate upconversion |
DE69609028T2 (en) | 1995-04-11 | 2001-02-22 | Koninklijke Philips Electronics N.V., Eindhoven | MOTION COMPENSATED IMAGE FREQUENCY CONVERSION |
US5805178A (en) | 1995-04-12 | 1998-09-08 | Eastman Kodak Company | Ink jet halftoning with different ink concentrations |
AUPN234595A0 (en) | 1995-04-12 | 1995-05-04 | Eastman Kodak Company | Improvements in image halftoning |
US6119213A (en) * | 1995-06-07 | 2000-09-12 | Discovision Associates | Method for addressing data having variable data width using a fixed number of bits for address and width defining fields |
US5798948A (en) * | 1995-06-20 | 1998-08-25 | Intel Corporation | Method and apparatus for video filtering |
JP4145351B2 (en) | 1995-11-01 | 2008-09-03 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Video signal scanning conversion method, apparatus, and video signal display apparatus |
DE19541457C1 (en) * | 1995-11-07 | 1997-07-03 | Siemens Ag | Method for coding a video data stream of a video sequence consisting of picture blocks |
JP3681835B2 (en) * | 1995-12-27 | 2005-08-10 | 三菱電機株式会社 | Image encoding apparatus, image decoding apparatus, and encoding / decoding system |
US6957350B1 (en) * | 1996-01-30 | 2005-10-18 | Dolby Laboratories Licensing Corporation | Encrypted and watermarked temporal and resolution layering in advanced television |
CN1162010C (en) * | 1996-08-29 | 2004-08-11 | 松下电器产业株式会社 | Image decoder and image memory overcoming various kinds of delaying factors caused by hardware specifications specific to image memory by improving storing system and reading-out system |
GB2319684B (en) * | 1996-11-26 | 2000-09-06 | Sony Uk Ltd | Scene change detection |
JP3633159B2 (en) * | 1996-12-18 | 2005-03-30 | ソニー株式会社 | Moving picture signal encoding method and apparatus, and moving picture signal transmission method |
FR2759524B1 (en) * | 1997-02-10 | 1999-05-14 | Thomson Multimedia Sa | LUMINANCE ESTIMATION CODING METHOD AND DEVICE |
CA2283330C (en) * | 1997-03-06 | 2004-10-26 | Fujitsu General Limited | Moving picture correcting circuit of display |
US6539120B1 (en) * | 1997-03-12 | 2003-03-25 | Matsushita Electric Industrial Co., Ltd. | MPEG decoder providing multiple standard output signals |
US6788347B1 (en) * | 1997-03-12 | 2004-09-07 | Matsushita Electric Industrial Co., Ltd. | HDTV downconversion system |
US6175592B1 (en) * | 1997-03-12 | 2001-01-16 | Matsushita Electric Industrial Co., Ltd. | Frequency domain filtering for down conversion of a DCT encoded picture |
DE69727911D1 (en) | 1997-04-24 | 2004-04-08 | St Microelectronics Srl | Method for increasing the motion-estimated and motion-compensated frame rate for video applications, and device for using such a method |
US6057884A (en) * | 1997-06-05 | 2000-05-02 | General Instrument Corporation | Temporal and spatial scaleable coding for video object planes |
US6084641A (en) * | 1997-08-06 | 2000-07-04 | General Instrument Corporation | Fade detector for digital video |
EP0905987B1 (en) * | 1997-09-26 | 2005-06-15 | Matsushita Electric Industrial Co., Ltd. | Image decoding method and apparatus, and data recording medium |
US6108047A (en) * | 1997-10-28 | 2000-08-22 | Stream Machine Company | Variable-size spatial and temporal video scaler |
JP4186242B2 (en) * | 1997-12-26 | 2008-11-26 | ソニー株式会社 | Image signal processing apparatus and image signal processing method |
US6040875A (en) * | 1998-03-23 | 2000-03-21 | International Business Machines Corporation | Method to compensate for a fade in a digital video input sequence |
US6539058B1 (en) * | 1998-04-13 | 2003-03-25 | Hitachi America, Ltd. | Methods and apparatus for reducing drift due to averaging in reduced resolution video decoders |
US6192079B1 (en) * | 1998-05-07 | 2001-02-20 | Intel Corporation | Method and apparatus for increasing video frame rate |
US6274299B1 (en) * | 1998-06-25 | 2001-08-14 | Eastman Kodak Company | Method of electronically processing an image from a color negative film element |
JP4443767B2 (en) * | 1998-09-07 | 2010-03-31 | トムソン マルチメディア | Motion estimation method for reducing motion vector transmission cost |
JP4453202B2 (en) * | 1998-11-25 | 2010-04-21 | ソニー株式会社 | Image processing apparatus, image processing method, and computer-readable recording medium |
US6782049B1 (en) * | 1999-01-29 | 2004-08-24 | Hewlett-Packard Development Company, L.P. | System for selecting a keyframe to represent a video |
WO2000051355A1 (en) * | 1999-02-26 | 2000-08-31 | Stmicroelectronics Asia Pacific Pte Ltd | Method and apparatus for interlaced/non-interlaced frame determination, repeat-field identification and scene-change detection |
US6360015B1 (en) * | 1999-04-06 | 2002-03-19 | Philips Electronics North America Corp. | RAM-based search engine for orthogonal-sum block match motion estimation system |
US6400764B1 (en) * | 1999-04-06 | 2002-06-04 | Koninklijke Philips Electronics N. V. | Motion estimation method featuring orthogonal-sum concurrent multi matching |
US6438275B1 (en) * | 1999-04-21 | 2002-08-20 | Intel Corporation | Method for motion compensated frame rate upsampling based on piecewise affine warping |
US6831948B1 (en) * | 1999-07-30 | 2004-12-14 | Koninklijke Philips Electronics N.V. | System and method for motion compensation of image planes in color sequential displays |
US6836273B1 (en) * | 1999-11-11 | 2004-12-28 | Matsushita Electric Industrial Co., Ltd. | Memory management method, image coding method, image decoding method, image display method, memory management apparatus, and memory management program storage medium |
JP3753578B2 (en) * | 1999-12-07 | 2006-03-08 | Necエレクトロニクス株式会社 | Motion vector search apparatus and method |
EP1128678A1 (en) * | 2000-02-24 | 2001-08-29 | Koninklijke Philips Electronics N.V. | Motion estimation apparatus and method |
JP3677192B2 (en) * | 2000-04-19 | 2005-07-27 | シャープ株式会社 | Image processing device |
US6647061B1 (en) * | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
KR100370076B1 (en) * | 2000-07-27 | 2003-01-30 | 엘지전자 주식회사 | video decoder with down conversion function and method of decoding a video signal |
US6847406B2 (en) * | 2000-12-06 | 2005-01-25 | Koninklijke Philips Electronics N.V. | High quality, cost-effective film-to-video converter for high definition television |
US20110013081A1 (en) | 2001-01-11 | 2011-01-20 | Pixelworks, Inc. | System and method for detecting a non-video source in video signals |
PL373421A1 (en) * | 2001-04-12 | 2005-08-22 | Koninklijke Philips Electronics N.V. | Watermark embedding |
US7904814B2 (en) * | 2001-04-19 | 2011-03-08 | Sharp Laboratories Of America, Inc. | System for presenting audio-video content |
US7170932B2 (en) * | 2001-05-11 | 2007-01-30 | Mitsubishi Electric Research Laboratories, Inc. | Video transcoder with spatial resolution reduction and drift compensation |
US7088780B2 (en) * | 2001-05-11 | 2006-08-08 | Mitsubishi Electric Research Labs, Inc. | Video transcoder with drift compensation |
US6898241B2 (en) * | 2001-05-11 | 2005-05-24 | Mitsubishi Electric Research Labs, Inc. | Video transcoder with up-sampling |
US6671322B2 (en) * | 2001-05-11 | 2003-12-30 | Mitsubishi Electric Research Laboratories, Inc. | Video transcoder with spatial resolution reduction |
US6714594B2 (en) * | 2001-05-14 | 2004-03-30 | Koninklijke Philips Electronics N.V. | Video content detection method and system leveraging data-compression constructs |
US20030112863A1 (en) * | 2001-07-12 | 2003-06-19 | Demos Gary A. | Method and system for improving compressed image chroma information |
US7142251B2 (en) * | 2001-07-31 | 2006-11-28 | Micronas Usa, Inc. | Video input processor in multi-format video compression system |
WO2003024116A1 (en) * | 2001-09-12 | 2003-03-20 | Koninklijke Philips Electronics N.V. | Motion estimation and/or compensation |
KR100396558B1 (en) * | 2001-10-25 | 2003-09-02 | 삼성전자주식회사 | Apparatus and method for converting frame and/or field rate using adaptive motion compensation |
KR100453222B1 (en) * | 2001-12-17 | 2004-10-15 | 한국전자통신연구원 | Method and apparatus for estimating camera motion |
US6823015B2 (en) * | 2002-01-23 | 2004-11-23 | International Business Machines Corporation | Macroblock coding using luminance date in analyzing temporal redundancy of picture, biased by chrominance data |
KR100492127B1 (en) * | 2002-02-23 | 2005-06-01 | 삼성전자주식회사 | Apparatus and method of adaptive motion estimation |
JP3940616B2 (en) * | 2002-02-25 | 2007-07-04 | 松下電器産業株式会社 | Optical receiver circuit |
US20030161400A1 (en) * | 2002-02-27 | 2003-08-28 | Dinerstein Jonathan J. | Method and system for improved diamond motion search |
EP1481546A1 (en) * | 2002-02-28 | 2004-12-01 | Koninklijke Philips Electronics N.V. | Method and apparatus for field rate up-conversion |
JP4082664B2 (en) * | 2002-09-20 | 2008-04-30 | Kddi株式会社 | Video search device |
KR20040049214A (en) * | 2002-12-05 | 2004-06-11 | 삼성전자주식회사 | Apparatus and Method for searching motion vector with high speed |
GB0229354D0 (en) * | 2002-12-18 | 2003-01-22 | Robert Gordon The University | Video encoding |
KR100751422B1 (en) * | 2002-12-27 | 2007-08-23 | 한국전자통신연구원 | A Method of Coding and Decoding Stereoscopic Video and A Apparatus for Coding and Decoding the Same |
US20040179599A1 (en) * | 2003-03-13 | 2004-09-16 | Motorola, Inc. | Programmable video motion accelerator method and apparatus |
US20040190625A1 (en) * | 2003-03-13 | 2004-09-30 | Motorola, Inc. | Programmable video encoding accelerator method and apparatus |
JP2004336103A (en) * | 2003-04-30 | 2004-11-25 | Texas Instr Japan Ltd | Image information compression apparatus |
GB2401502B (en) * | 2003-05-07 | 2007-02-14 | British Broadcasting Corp | Data processing |
KR100561398B1 (en) * | 2003-06-10 | 2006-03-16 | 삼성전자주식회사 | Apparatus and method for detecting and compensating luminance change of each partition in moving picture |
DE10327576A1 (en) * | 2003-06-18 | 2005-01-13 | Micronas Gmbh | Method and apparatus for motion vector based pixel interpolation |
US7129987B1 (en) | 2003-07-02 | 2006-10-31 | Raymond John Westwater | Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms |
KR100573696B1 (en) * | 2003-07-31 | 2006-04-26 | 삼성전자주식회사 | Apparatus and method for correction motion vector |
US7620254B2 (en) * | 2003-08-07 | 2009-11-17 | Trident Microsystems (Far East) Ltd. | Apparatus and method for motion-vector-aided interpolation of a pixel of an intermediate image of an image sequence |
US8064520B2 (en) * | 2003-09-07 | 2011-11-22 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US7724827B2 (en) * | 2003-09-07 | 2010-05-25 | Microsoft Corporation | Multi-layer run level encoding and decoding |
US7366237B2 (en) * | 2003-09-09 | 2008-04-29 | Microsoft Corporation | Low complexity real-time video coding |
US7330509B2 (en) * | 2003-09-12 | 2008-02-12 | International Business Machines Corporation | Method for video transcoding with adaptive frame rate control |
KR20050049680A (en) * | 2003-11-22 | 2005-05-27 | 삼성전자주식회사 | Noise reduction and de-interlacing apparatus |
KR100601935B1 (en) * | 2003-12-02 | 2006-07-14 | 삼성전자주식회사 | Method and apparatus for processing digital motion image |
US7308029B2 (en) * | 2003-12-23 | 2007-12-11 | International Business Machines Corporation | Method and apparatus for implementing B-picture scene changes |
US20080144716A1 (en) * | 2004-03-11 | 2008-06-19 | Gerard De Haan | Method For Motion Vector Determination |
DE102004017145B4 (en) * | 2004-04-07 | 2006-02-16 | Micronas Gmbh | Method and device for determining motion vectors associated with image areas of an image |
FR2869753A1 (en) * | 2004-04-29 | 2005-11-04 | St Microelectronics Sa | METHOD AND DEVICE FOR GENERATING CANDIDATE VECTORS FOR IMAGE INTERPOLATION SYSTEMS BY ESTIMATION AND MOTION COMPENSATION |
US8731054B2 (en) * | 2004-05-04 | 2014-05-20 | Qualcomm Incorporated | Method and apparatus for weighted prediction in predictive frames |
JP5464803B2 (en) * | 2004-05-25 | 2014-04-09 | エントロピック・コミュニケーションズ・インコーポレイテッド | Motion estimation of interlaced video images |
KR20070040397A (en) * | 2004-07-20 | 2007-04-16 | 퀄컴 인코포레이티드 | Method and apparatus for frame rate up conversion with multiple reference frames and variable block sizes |
AR049593A1 (en) * | 2004-07-20 | 2006-08-16 | Qualcomm Inc | METHOD AND APPARATUS FOR PREDICTING THE MOTION VECTOR IN TEMPORARY VIDEO COMPRESSION. |
KR101127220B1 (en) | 2004-07-28 | 2012-04-12 | 세종대학교산학협력단 | Apparatus for motion compensation-adaptive de-interlacing and method the same |
EP1638337A1 (en) * | 2004-09-16 | 2006-03-22 | STMicroelectronics S.r.l. | Method and system for multiple description coding and computer program product therefor |
US7447337B2 (en) * | 2004-10-25 | 2008-11-04 | Hewlett-Packard Development Company, L.P. | Video content understanding through real time video motion analysis |
JP2006246431A (en) * | 2005-02-07 | 2006-09-14 | Matsushita Electric Ind Co Ltd | Image coding apparatus and method |
US20060285586A1 (en) * | 2005-05-16 | 2006-12-21 | Ensequence, Inc. | Methods and systems for achieving transition effects with MPEG-encoded picture content |
US7755667B2 (en) * | 2005-05-17 | 2010-07-13 | Eastman Kodak Company | Image sequence stabilization method and camera having dual path image sequence stabilization |
US8401070B2 (en) * | 2005-11-10 | 2013-03-19 | Lsi Corporation | Method for robust inverse telecine |
US7925120B2 (en) * | 2005-11-14 | 2011-04-12 | Mediatek Inc. | Methods of image processing with reduced memory requirements for video encoder and decoder |
US8036263B2 (en) * | 2005-12-23 | 2011-10-11 | Qualcomm Incorporated | Selecting key frames from video frames |
US8160144B1 (en) * | 2006-05-10 | 2012-04-17 | Texas Instruments Incorporated | Video motion estimation |
US8519928B2 (en) | 2006-06-22 | 2013-08-27 | Entropic Communications, Inc. | Method and system for frame insertion in a digital display system |
EP1884893A1 (en) | 2006-08-03 | 2008-02-06 | Mitsubishi Electric Information Technology Centre Europe B.V. | Sparse integral image description with application to motion analysis |
US20080056367A1 (en) * | 2006-08-30 | 2008-03-06 | Liu Wenjin | Multi-step directional-line motion estimation |
SG140508A1 (en) | 2006-08-31 | 2008-03-28 | St Microelectronics Asia | Multimode filter for de-blocking and de-ringing |
US20080055477A1 (en) * | 2006-08-31 | 2008-03-06 | Dongsheng Wu | Method and System for Motion Compensated Noise Reduction |
EP2102805A1 (en) * | 2006-12-11 | 2009-09-23 | Cinnafilm, Inc. | Real-time film effects processing for digital video |
US8428125B2 (en) * | 2006-12-22 | 2013-04-23 | Qualcomm Incorporated | Techniques for content adaptive video frame slicing and non-uniform access unit coding |
US8144778B2 (en) * | 2007-02-22 | 2012-03-27 | Sigma Designs, Inc. | Motion compensated frame rate conversion system and method |
JP5141043B2 (en) * | 2007-02-27 | 2013-02-13 | 株式会社日立製作所 | Image display device and image display method |
US8204128B2 (en) * | 2007-08-01 | 2012-06-19 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry, Through The Communications Research Centre Canada | Learning filters for enhancing the quality of block coded still and video images |
EP2188979A2 (en) * | 2007-09-10 | 2010-05-26 | Nxp B.V. | Method and apparatus for motion estimation in video image data |
US8243790B2 (en) | 2007-09-28 | 2012-08-14 | Dolby Laboratories Licensing Corporation | Treating video information |
KR20090054828A (en) | 2007-11-27 | 2009-06-01 | 삼성전자주식회사 | Video apparatus for adding gui to frame rate converted video and gui providing using the same |
TWI386058B (en) | 2008-10-03 | 2013-02-11 | Realtek Semiconductor Corp | Video processing method and device |
KR101350723B1 (en) * | 2008-06-16 | 2014-01-16 | 돌비 레버러토리즈 라이쎈싱 코오포레이션 | Rate control model adaptation based on slice dependencies for video coding |
FR2933520B1 (en) * | 2008-07-04 | 2011-02-11 | Canon Kk | METHOD AND DEVICE FOR RESTORING A VIDEO SEQUENCE |
KR101574080B1 (en) * | 2009-04-15 | 2015-12-04 | 삼성디스플레이 주식회사 | Method of processing data data processing device for performing the method and display apparatus having the data processing device |
US8675736B2 (en) * | 2009-05-14 | 2014-03-18 | Qualcomm Incorporated | Motion vector processing |
TWI398159B (en) * | 2009-06-29 | 2013-06-01 | Silicon Integrated Sys Corp | Apparatus and method of frame rate up-conversion with dynamic quality control |
US8643776B2 (en) | 2009-11-30 | 2014-02-04 | Mediatek Inc. | Video processing method capable of performing predetermined data processing operation upon output of frame rate conversion with reduced storage device bandwidth usage and related video processing apparatus thereof |
US8879632B2 (en) * | 2010-02-18 | 2014-11-04 | Qualcomm Incorporated | Fixed point implementation for geometric motion partitioning |
TWI413023B (en) | 2010-03-30 | 2013-10-21 | Novatek Microelectronics Corp | Method and apparatus for motion detection |
US20110255596A1 (en) * | 2010-04-15 | 2011-10-20 | Himax Technologies Limited | Frame rate up conversion system and method |
KR101152464B1 (en) | 2010-05-10 | 2012-06-01 | 삼성모바일디스플레이주식회사 | Organic Light Emitting Display Device and Driving Method Thereof |
US8446524B2 (en) * | 2010-06-21 | 2013-05-21 | Realtek Semiconductor Corp. | Apparatus and method for frame rate conversion |
KR101279128B1 (en) | 2010-07-08 | 2013-06-26 | 엘지디스플레이 주식회사 | Stereoscopic image display and driving method thereof |
KR101681779B1 (en) * | 2010-07-14 | 2016-12-02 | 엘지디스플레이 주식회사 | Stereoscopic image display and method of controlling backlight thereof |
US8610707B2 (en) | 2010-09-03 | 2013-12-17 | Himax Technologies Ltd. | Three-dimensional imaging system and method |
KR101707101B1 (en) | 2010-09-09 | 2017-02-28 | 삼성디스플레이 주식회사 | Method of processing image data and display device performing the method |
KR101742182B1 (en) | 2010-09-17 | 2017-06-16 | 삼성디스플레이 주식회사 | Method of processing image data, and display apparatus performing the method of displaying image |
-
2008
- 2008-08-05 EP EP08807260A patent/EP2188979A2/en not_active Withdrawn
- 2008-08-05 US US12/677,507 patent/US20100271554A1/en not_active Abandoned
- 2008-08-05 US US12/677,523 patent/US20110205438A1/en not_active Abandoned
- 2008-08-05 WO PCT/IB2008/053127 patent/WO2009034487A2/en active Application Filing
- 2008-08-05 EP EP08789551A patent/EP2206342A2/en not_active Withdrawn
- 2008-08-05 CN CN200880106214XA patent/CN101803361B/en not_active Expired - Fee Related
- 2008-08-05 WO PCT/IB2008/053146 patent/WO2009034488A2/en active Application Filing
- 2008-08-05 US US12/677,508 patent/US8526502B2/en not_active Expired - Fee Related
- 2008-08-05 CN CN200880106002A patent/CN101796813A/en active Pending
- 2008-08-05 EP EP08789545A patent/EP2188978A2/en not_active Withdrawn
- 2008-08-05 CN CN2008801062224A patent/CN101803363B/en not_active Expired - Fee Related
- 2008-08-05 CN CN200880106215A patent/CN101803362A/en active Pending
- 2008-08-05 WO PCT/IB2008/053121 patent/WO2009034486A2/en active Application Filing
- 2008-08-05 WO PCT/IB2008/053147 patent/WO2009034489A2/en active Application Filing
- 2008-08-05 EP EP08807259A patent/EP2206341A2/en not_active Withdrawn
- 2008-08-22 CN CN2008801061128A patent/CN101836441B/en not_active Expired - Fee Related
- 2008-08-22 WO PCT/IB2008/053373 patent/WO2009034492A2/en active Application Filing
- 2008-08-22 US US12/676,364 patent/US9036082B2/en active Active
- 2008-08-22 EP EP08807405A patent/EP2188990A2/en not_active Withdrawn
- 2008-08-22 WO PCT/IB2008/053374 patent/WO2009034493A2/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4668986A (en) * | 1984-04-27 | 1987-05-26 | Nec Corporation | Motion-adaptive interpolation method for motion video signal and device for implementing the same |
US4891699A (en) * | 1989-02-23 | 1990-01-02 | Matsushita Electric Industrial Co., Ltd. | Receiving system for band-compression image signal |
US4992869A (en) * | 1989-04-27 | 1991-02-12 | Sony Corporation | Motion dependent video signal processing |
US6078618A (en) * | 1997-05-28 | 2000-06-20 | Nec Corporation | Motion vector estimation system |
US20020196362A1 (en) * | 2001-06-11 | 2002-12-26 | Samsung Electronics Co., Ltd. | Apparatus and method for adaptive motion compensated de-interlacing of video data |
US20040101047A1 (en) * | 2002-11-23 | 2004-05-27 | Samsung Electronics Co., Ltd. | Motion estimation apparatus, method, and machine-readable medium capable of detecting scrolling text and graphic data |
US20050271144A1 (en) * | 2004-04-09 | 2005-12-08 | Sony Corporation | Image processing apparatus and method, and recording medium and program used therewith |
US20070040935A1 (en) * | 2005-08-17 | 2007-02-22 | Samsung Electronics Co., Ltd. | Apparatus for converting image signal and a method thereof |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050129312A1 (en) * | 2002-02-06 | 2005-06-16 | Ernst Fabian E. | Unit for and method of segmentation |
US8582882B2 (en) * | 2002-02-06 | 2013-11-12 | Koninklijke Philipse N.V. | Unit for and method of segmentation using average homogeneity |
US20100238355A1 (en) * | 2007-09-10 | 2010-09-23 | Volker Blume | Method And Apparatus For Line Based Vertical Motion Estimation And Compensation |
US20110205438A1 (en) * | 2007-09-10 | 2011-08-25 | Trident Microsystems (Far East) Ltd. | Method and apparatus for motion estimation and motion compensation in video image data |
US8526502B2 (en) * | 2007-09-10 | 2013-09-03 | Entropic Communications, Inc. | Method and apparatus for line based vertical motion estimation and compensation |
US20120195472A1 (en) * | 2011-01-31 | 2012-08-02 | Novatek Microelectronics Corp. | Motion adaptive de-interlacing apparatus and method |
CN102788973A (en) * | 2011-05-17 | 2012-11-21 | 株式会社电装 | Radar device, calibration system and calibration method |
US10595043B2 (en) | 2012-08-30 | 2020-03-17 | Novatek Microelectronics Corp. | Encoding method and encoding device for 3D video |
US20140063031A1 (en) * | 2012-09-05 | 2014-03-06 | Imagination Technologies Limited | Pixel buffering |
US10109032B2 (en) * | 2012-09-05 | 2018-10-23 | Imagination Technologies Limted | Pixel buffering |
US11587199B2 (en) | 2012-09-05 | 2023-02-21 | Imagination Technologies Limited | Upscaling lower resolution image data for processing |
US10083498B2 (en) * | 2016-02-24 | 2018-09-25 | Novatek Microelectronics Corp. | Compensation method for display device and related compensation module |
Also Published As
Publication number | Publication date |
---|---|
EP2188979A2 (en) | 2010-05-26 |
WO2009034486A3 (en) | 2009-04-30 |
WO2009034489A2 (en) | 2009-03-19 |
US20100277644A1 (en) | 2010-11-04 |
WO2009034493A2 (en) | 2009-03-19 |
EP2188978A2 (en) | 2010-05-26 |
WO2009034489A3 (en) | 2009-05-28 |
WO2009034488A2 (en) | 2009-03-19 |
US20110205438A1 (en) | 2011-08-25 |
US20100238355A1 (en) | 2010-09-23 |
WO2009034487A3 (en) | 2009-04-30 |
CN101796813A (en) | 2010-08-04 |
EP2206341A2 (en) | 2010-07-14 |
WO2009034486A2 (en) | 2009-03-19 |
WO2009034487A2 (en) | 2009-03-19 |
CN101836441B (en) | 2012-03-21 |
WO2009034492A2 (en) | 2009-03-19 |
CN101803363B (en) | 2013-09-18 |
CN101803363A (en) | 2010-08-11 |
EP2188990A2 (en) | 2010-05-26 |
WO2009034492A3 (en) | 2009-08-13 |
CN101803361B (en) | 2013-01-23 |
US9036082B2 (en) | 2015-05-19 |
WO2009034493A3 (en) | 2009-08-06 |
EP2206342A2 (en) | 2010-07-14 |
US8526502B2 (en) | 2013-09-03 |
WO2009034488A3 (en) | 2009-07-23 |
CN101803362A (en) | 2010-08-11 |
CN101803361A (en) | 2010-08-11 |
CN101836441A (en) | 2010-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20100271554A1 (en) | Method And Apparatus For Motion Estimation In Video Image Data | |
US8189105B2 (en) | Systems and methods of motion and edge adaptive processing including motion compensation features | |
JP2000134585A (en) | Motion vector deciding method and method and circuit for number of frames of image signal conversion | |
NL1027270C2 (en) | The interlining device with a noise reduction / removal device. | |
KR20070069615A (en) | Motion estimator and motion estimating method | |
EP1557037A1 (en) | Image processing unit with fall-back | |
WO2007119183A2 (en) | Method and system for creating an interpolated image | |
US7356439B2 (en) | Motion detection apparatus and method | |
EP1460847B1 (en) | Image signal processing apparatus and processing method | |
KR20060047638A (en) | Film mode correction in still areas | |
US8761262B2 (en) | Motion vector refining apparatus | |
JP5197374B2 (en) | Motion estimation | |
AU2004200237B2 (en) | Image processing apparatus with frame-rate conversion and method thereof | |
US9324131B1 (en) | Method and apparatus for motion adaptive deinterlacing with reduced artifacts | |
JP3576618B2 (en) | Motion vector detection method and apparatus | |
JPH08223540A (en) | Motion interpolation method using motion vector, motion interpolation circuit, motion vector detection method and motion vector detection circuit | |
KR100949137B1 (en) | Apparatus for video interpolation, method thereof and computer recordable medium storing the method | |
TWI425840B (en) | Robust Adaptive Decoupling Device and Method for Displacement Compensation Using Overlapping Blocks | |
KR20050015189A (en) | Apparatus for de-interlacing based on phase corrected field and method therefor, and recording medium for recording programs for realizing the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TRIDENT MICROSYSTEMS (FAR EAST) LTD., CAYMAN ISLAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NXP B.V.;REEL/FRAME:025075/0064 Effective date: 20100930 |
|
AS | Assignment |
Owner name: ENTROPIC COMMUNICATIONS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TRIDENT MICROSYSTEMS, INC.;TRIDENT MICROSYSTEMS (FAR EAST) LTD.;REEL/FRAME:028146/0178 Effective date: 20120411 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |