
US20060133495A1 - Temporal error concealment for video communications - Google Patents

Temporal error concealment for video communications

Info

Publication number
US20060133495A1
Authority
US
United States
Prior art keywords
motion vectors
frame
macroblock
motion
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/022,362
Inventor
Yan Ye
Gokce Dane
Yen-Chi Lee
Ming-Chang Tsai
Nien-Chung Feng
Karl Ni
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to US11/022,362 (US20060133495A1)
Assigned to QUALCOMM INCORPORATED reassignment QUALCOMM INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TSAI, MING-CHANG, FENG, NIEN-CHANG, NI, KARL, DANE, GOKEE, LEE, YEN-CHI, YE, YAN
Assigned to QUALCOMM INCORPORATED, A DELAWARE CORPORATION reassignment QUALCOMM INCORPORATED, A DELAWARE CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE SECOND AND THE FIFTH ASSIGNOR PREVIOUSLY RECORDED AT REEL 016302 FRAME 0791 Assignors: TSAI, MING-CHANG, FENG, NIEN-CHUNG, NI, KARL, DANE, GOKCE, LEE, YEN-CHI, YE, YAN
Priority to JP2007548507A (JP5021494B2)
Priority to DE602005025808T (DE602005025808D1)
Priority to PCT/US2005/046739 (WO2006069297A1)
Priority to CN2005800480510A (CN101116345B)
Priority to KR1020077015762A (KR100964407B1)
Priority to TW094145929A (TW200637375A)
Priority to EP05855322A (EP1829383B1)
Priority to AT05855322T (ATE494735T1)
Publication of US20060133495A1
Priority to US12/694,522 (US8817879B2)
Priority to JP2011158191A (JP5420600B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H04N19/895 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment

Definitions

  • A motion vector can be estimated for any macroblock of interest in the frame 22 (see FIG. 2) by considering the properly received motion vectors associated with macroblocks in the current frame 22 that neighbor the macroblock of interest, and by considering the properly received motion vectors associated with macroblocks in the reference frame 21 that neighbor a macroblock that is co-located with the macroblock of interest. The array of macroblocks in the window 26 surrounds the macroblock 28 of interest. In one embodiment, the window 26 and the window 27 each include a 3×3 array of macroblocks. Windows of different dimensions, including windows that are not square-shaped, can be selected. Also, a window does not necessarily have to surround the macroblock of interest, in particular for those instances in which the macroblock of interest is at the edge of a frame.
  • FIG. 3 illustrates two consecutive image frames (a first frame 32 and a second frame 34) according to one embodiment. The second frame 34 follows the first frame 32 in display order. The first frame 32 can correspond to, for example, an I-frame or a P-frame, and the second frame 34 can correspond to, for example, a P-frame. The first frame 32 and the second frame 34 are "inter-coded" (that is, encoded dependent on other frames). An object 33 is located at a certain position within the first frame 32, and the same object 33 is located at a different position within the second frame 34.
  • The MPEG compression scheme works by encoding the differences between frames. A motion vector 35 is used as the simplest way of communicating the change in the image between the frames 32 and 34; that is, the image of the object 33 does not have to be sent again just because it moved. A motion vector can be associated with a macroblock in a frame (e.g., the macroblock 23 of FIG. 2).
  • FIG. 4 is a data flow diagram 40 showing the flow of data from an encoder to a decoder according to one embodiment.
  • An encoding process 42 compresses (encodes) data 41 using an encoding scheme such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263 or H.264. The compressed data 43 is sent from the encoder to a decoder (e.g., the system 10 of FIG. 1) via the channel 44, which may be a wired or wireless channel. The received data 45 may include both properly received data and corrupted data; some data may also be lost during transmission or may arrive late at the decoder. The decoding process 46 decompresses (reconstructs) the received data 45 to generate the reconstructed data 47.
  • FIG. 5 is a flowchart 50 of one embodiment of a motion vector domain-based temporal error concealment method. Although specific steps are disclosed in the flowchart 50 of FIG. 5 (as well as in the flowcharts 70, 90, 100, 110 and 130 of FIGS. 7, 9, 10, 11 and 13, respectively), such steps are exemplary; that is, other embodiments may be formulated by performing various other steps or variations of the steps recited in the flowcharts 50, 70, 90, 100, 110 and 130.
  • FIG. 5 is described with reference also to FIG. 6, which shows a 3×3 window 63 of macroblocks selected from a reference frame 61 and a 3×3 window 64 of macroblocks selected from a current frame 62. It is understood that the reference frame 61 and the current frame 62 each include macroblocks in addition to the macroblocks included in the windows 63 and 64, respectively. The window 63 and the window 64 are co-located.
  • The macroblock (MB) 68 of interest, that is, the macroblock for which a motion vector is to be estimated, lies at the center of the window 64, but as mentioned above, that does not have to be the case. The windows 63 and 64 can be other than 3×3 windows; for instance, 5×5 windows may be used. Also, if the macroblock of interest is along one edge of the current frame 62, then a window that is not square in shape (e.g., a 3×2 or a 2×3 window) may be used (a sketch of such edge-clipped window selection follows this discussion).
  • The reference frame 61 precedes the current frame 62 in display order. Alternatively, the reference frame 61 may be a frame that comes after the current frame 62 in display order; that is, the reference frame 61 may be a "future frame." Both the frame preceding the current frame 62 and the future frame following it may be considered for the error concealment methods described herein. Motion vectors from a future frame may introduce delays into the decoding process; however, in applications in which delays can be tolerated, motion vectors from a future frame may be used for error concealment. Also, motion vectors from a future frame may be used in instances in which the current frame 62 is the first frame in a sequence of frames (e.g., an I-frame).
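  • As an illustration of the window selection mentioned above, the following sketch (not from the patent; the function name and the grid representation are assumptions) returns the macroblock positions of a window clipped at the frame edges, so that a 3×3 window shrinks to, for example, 2×3 along an edge or 2×2 in a corner. Because co-located windows occupy the same positions, the same coordinates can be used in the reference frame and the current frame.

```python
def neighborhood(mb_row, mb_col, rows, cols, radius=1):
    """Return the (row, col) grid positions of the macroblocks in the window
    centered on (mb_row, mb_col), clipped to a frame of rows x cols macroblocks."""
    return [(r, c)
            for r in range(max(0, mb_row - radius), min(rows, mb_row + radius + 1))
            for c in range(max(0, mb_col - radius), min(cols, mb_col + radius + 1))]

# 3x3 in the frame interior; 2x2 at the top-left corner.
assert len(neighborhood(5, 5, rows=9, cols=11)) == 9
assert len(neighborhood(0, 0, rows=9, cols=11)) == 4
```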
  • One of the objectives of the method of the flowchart 50 is to intelligently select a set 65 of candidate motion vectors from the properly received motion vectors that are associated with the macroblocks of the frames 61 and 62. A vector median filter (VMF) 66 is applied to the vectors in the set 65; the output of the VMF 66 is an estimated motion vector (MV) 67 for the macroblock 68 of interest.
  • First, the windows 63 and 64 are identified, and the correctly received motion vectors associated with the macroblocks in the window 63 and in the window 64 are accessed. In one embodiment, if motion vectors in the reference frame can be included in the set 65 of candidate motion vectors, then motion vectors from the window 63 and from the window 64 are intelligently selected and included in the set 65. Embodiments of methods used to select motion vectors from the windows 63 and 64 are described in conjunction with FIGS. 11, 12, 13 and 14, below. In a block 54 of FIG. 5, in one embodiment, if motion vectors in the reference frame are not eligible to be included in the set 65 of candidate motion vectors, then only motion vectors from the window 64 are selected and included in the set 65. Note that there may be instances in which the window 64 contains no properly received motion vectors; the method of FIGS. 7 and 8 can be used to address those instances.
  • Next, a statistical measure of the set 65 of candidate motion vectors is determined. The statistical measure defines a motion vector 67 for the macroblock 68 of interest, and the motion vector 67 can then be applied to the macroblock 68 of interest. In one embodiment, the statistical measure is the median of the set 65 of candidate motion vectors.
  • The median, specifically the vector median of the set 65, is determined as follows: the estimated motion vector 67 is the member of the set 65 whose summed distance to all other members of the set is smallest, i.e., v_67 = argmin over v_j in the set of Σ_i ||v_j − v_i||_p, where p denotes the p-norm metric between the vectors. In other words, the estimated motion vector 67 for the macroblock 68 of interest is the median of the set 65 of candidate motion vectors.
  • Statistical measures of the set 65 of candidate motion vectors other than the median can be determined and used for the estimated motion vector 67; for example, the average of the set 65 can be determined and used. A sketch of both computations follows.
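```python
# Illustrative sketch only (not the patent's implementation): motion vectors
# are modeled as (dx, dy) pairs. The vector median is the member of the
# candidate set whose summed p-norm distance to all other members is smallest;
# the mean is computed component-wise. All names are hypothetical.

def vector_median(candidates, p=2):
    """Return the member of `candidates` minimizing the summed p-norm
    distance to all other members (the output of the VMF)."""
    def dist(u, v):
        return (abs(u[0] - v[0]) ** p + abs(u[1] - v[1]) ** p) ** (1.0 / p)
    return min(candidates, key=lambda u: sum(dist(u, v) for v in candidates))

def vector_mean(candidates):
    """Component-wise average of the candidate motion vectors."""
    n = len(candidates)
    return (sum(v[0] for v in candidates) / n, sum(v[1] for v in candidates) / n)

mvs = [(1, 0), (1, 1), (1, 0), (20, -15)]  # one outlier
print(vector_median(mvs))  # (1, 0): the median is robust to the outlier
print(vector_mean(mvs))    # (5.75, -3.5): the mean is pulled toward the outlier
```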
  • To summarize to this point, a set 65 of candidate motion vectors is identified, and the set 65 is then operated on in some manner to determine an estimated motion vector 67 for the macroblock 68 of interest. The estimated motion vector 67 may be one of the motion vectors in the set 65, or it may be a motion vector determined by operating on the set 65.
  • The estimated motion vector 67 is determined in the motion vector domain and not in the pixel domain. Specifically, pixel values are not used for error concealment, and distortion values associated with each of the candidate motion vectors are not calculated for error concealment. Accordingly, computational complexity and associated decoding delays are reduced. Also, there is no need to access the frame buffer to retrieve pixel values, eliminating that source of additional decoding delays. Furthermore, by intelligently selecting motion vectors to be included in the set 65 of candidate motion vectors, peak signal-to-noise ratios (PSNRs) comparable to, if not better than, the PSNRs associated with pixel-based error concealment techniques are achieved.
  • FIG. 7 is a flowchart 70 of one embodiment of a method for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process.
  • The flowchart 70 describes one embodiment of a method for implementing blocks 52, 53 and 54 of FIG. 5.
  • FIG. 7 is described with reference also to FIG. 8, which shows a window 83 in a reference frame 81 and a window 84 in the current frame 82. It is understood that the reference frame 81 and the current frame 82 each include macroblocks in addition to the macroblocks included in the windows 83 and 84, respectively. Properly received motion vectors associated with the macroblocks in the windows 83 and 84 can then be accessed; properly received motion vectors in the window 83 are identified using a letter A, while properly received motion vectors in the window 84 are identified using a letter B.
  • If there is a properly received motion vector for a macroblock in the window 84, that motion vector is included in the set 85 of candidate motion vectors, and the motion vector for the co-located macroblock in the window 83 is not included in the set 85. For example, the motion vector associated with the macroblock 89 (in the current frame 82) is included in the set 85, while the motion vector associated with the co-located macroblock 87 (in the reference frame 81) is not. In a block 74, if there is not a properly received motion vector for a macroblock in the window 84, then the motion vector for the co-located macroblock in the window 83 is included in the set 85 of candidate motion vectors. For example, there is not a properly received motion vector for the macroblock 88 of interest, and so the motion vector associated with the co-located macroblock 86 (in the reference frame 81) is included in the set 85.
  • A statistical measure of the set 85 of candidate motion vectors is then determined (refer to the discussion of FIGS. 5 and 6). A sketch of the selection rule follows.
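```python
# Illustrative sketch only of the per-position preference of FIG. 7. Each
# window is assumed to be a dict mapping macroblock grid positions to (dx, dy)
# motion vectors, with None marking a vector that was not properly received.

def candidate_set(window_ref, window_cur):
    """Prefer the current frame's motion vector at each window position (the
    'B' vectors); fall back to the co-located reference-frame vector (an 'A'
    vector, per the block 74) when the current frame's vector is absent."""
    candidates = []
    for pos, mv_cur in window_cur.items():
        if mv_cur is not None:
            candidates.append(mv_cur)
        elif window_ref.get(pos) is not None:
            candidates.append(window_ref[pos])
    return candidates
```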
  • Motion from one frame to the next frame may not be continuous. For example, a reference frame may include one type of motion, while motion in the current frame may have changed direction or stopped. Also, an object in a reference frame may move out of the neighborhood of a macroblock of interest, and so it may not be suitable to include a motion vector for that object in the set of candidate motion vectors.
  • FIG. 9 is a flowchart 90 of one embodiment of a method for detecting frame-to-frame motion change.
  • FIG. 10 is a flowchart 100 of another embodiment of a method for detecting frame-to-frame motion change. Either or both of the methods of the flowcharts 90 and 100 can be used to determine whether motion vectors from a reference frame should be included in the set of candidate motion vectors, in order to address the points mentioned in the preceding paragraph.
  • The flowchart 90 describes one embodiment of a method for implementing the block 52 of FIG. 5.
  • A first range of values for motion vectors associated with a reference frame is determined, and a second range of values for motion vectors associated with the current frame is determined. The first and second ranges of values are compared, and the motion vectors associated with the reference frame are included in the set of candidate motion vectors according to the results of the comparison.
  • FIG. 9 is described further with reference also to FIG. 2 .
  • Motion vector statistics are calculated for the properly received motion vectors associated with the reference frame 21, and motion vector statistics are calculated for the properly received motion vectors associated with the current frame 22. In one embodiment, all of the motion vectors associated with the reference frame 21 and the current frame 22 are included in the calculations of motion vector statistics. In another embodiment, only subsets of the motion vectors are used instead of all of the motion vectors. The subsets may include only the motion vectors associated with macroblocks for which motion vectors for both frames were properly received. That is, a motion vector for a macroblock in the reference frame 21 is only included in a first subset if the motion vector for the co-located macroblock in the current frame 22 was also properly received. Similarly, a motion vector for a macroblock in the current frame 22 is only included in a second subset if the motion vector for the co-located macroblock in the reference frame 21 was also properly received.
  • In one embodiment, the statistics calculated include the mean and standard deviation of the motion vector dimensions (magnitude/length and direction/angle). Let I be the set of indices of the motion vectors v_i that are included in the calculations of motion vector statistics, and let M be the size of the set I.
  • the ranges of the motion vector magnitudes for the reference frame 21 and for the current frame 22 are compared, and the ranges of the motion vector angles for the reference frame 21 and for the current frame 22 are also compared. In one embodiment, if the range of motion vector magnitudes for the reference frame 21 overlaps the range of motion vector magnitudes for the current frame 22 , and if the range of motion vector angles for the reference frame 21 overlaps the range of motion vector angles for the current frame 22 , then the reference frame 21 and the current frame 22 are judged to have similar motion. Accordingly, motion vectors from the reference frame 21 are eligible for inclusion in the set of candidate motion vectors (e.g., the set 65 of FIG. 6 ).
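  • A sketch of this range test follows. It is illustrative only: the text does not say how a range is derived from the statistics, so a range of the mean plus or minus one standard deviation is assumed here, and the angle statistics ignore wraparound at ±180 degrees for simplicity.

```python
import math

def mv_stats(mvs):
    """Return (mean, std) of the magnitudes and of the angles of (dx, dy) vectors."""
    mags = [math.hypot(dx, dy) for dx, dy in mvs]
    angs = [math.atan2(dy, dx) for dx, dy in mvs]
    def mean_std(xs):
        m = sum(xs) / len(xs)
        return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return mean_std(mags), mean_std(angs)

def ranges_overlap(a, b):
    (m1, s1), (m2, s2) = a, b  # ranges taken as [m - s, m + s]
    return m1 - s1 <= m2 + s2 and m2 - s2 <= m1 + s1

def similar_motion(ref_mvs, cur_mvs):
    """Reference-frame vectors are eligible only if both the magnitude ranges
    and the angle ranges of the two frames overlap."""
    (rm, ra), (cm, ca) = mv_stats(ref_mvs), mv_stats(cur_mvs)
    return ranges_overlap(rm, cm) and ranges_overlap(ra, ca)
```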
  • The flowchart 100 describes another embodiment of a method for implementing the block 52 of FIG. 5.
  • In a block 101, the dimensions of pairs of motion vectors are compared to determine whether the motion vectors in each of the pairs are similar to each other. Each of the pairs includes a first motion vector associated with a first macroblock at a position in a reference frame, and a second motion vector associated with a second macroblock at the same position in the current frame. In a block 102, the number of pairs of motion vectors in the reference and current frames that are similar is counted. In a block 103, motion vectors from the reference frame are made eligible for inclusion in the set of candidate motion vectors if the number exceeds a threshold.
  • FIG. 10 is described further with reference also to FIG. 2 .
  • The dimensions of the motion vectors of each pair of co-located macroblocks are compared; the macroblocks 108 and 109 of FIG. 2 are an example of a pair of co-located macroblocks.
  • In one embodiment, each received motion vector in the reference frame 21 and each received motion vector in the current frame 22 is given a magnitude label and a direction label. The magnitude label has a value of either zero (0) or one (1), depending on the vector's relative magnitude. For example, a motion vector having a magnitude of less than or equal to two (2) pixels is assigned a magnitude label of 0, and a motion vector having a magnitude of more than 2 pixels is assigned a magnitude label of 1. The direction label has a value of 0, 1, 2 or three (3). For example, a motion vector having an angle greater than or equal to −45 degrees but less than 45 degrees could be assigned a direction label of 0, a motion vector having an angle greater than or equal to 45 degrees but less than 135 degrees could be assigned a direction label of 1, and so on. Other schemes for labeling the magnitude and direction of motion vectors can be used.
  • For each pair of co-located macroblocks, the magnitude labels of the two motion vectors in the pair are compared, and the direction labels of the two motion vectors in the pair are compared. In one embodiment, if the magnitude labels are the same and the direction labels are not opposite for the two motion vectors in a pair, then that pair of motion vectors is defined as being similar. Note that, in the present embodiment, the direction labels do not necessarily have to be the same in order for the two motion vectors in a pair to be considered similar; for example, using the scheme described above, a direction label of 0 would be considered similar to a direction label of 0, 1 or 3, but opposite to a direction label of 2. Other rules defining what constitutes similar motion vectors can be used.
  • The number of pairs of co-located macroblocks that contain similar motion vectors, in other words the number of pairs of similar motion vectors, is counted. Motion vectors from the reference frame 21 are eligible for inclusion in the set of candidate motion vectors (e.g., the set 65 of FIG. 6) if the count made in the block 102 exceeds a threshold. In one embodiment, the threshold is equal to one-half of the number of macroblocks in either of the two frames 21 or 22. A sketch of this test follows.
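```python
# Illustrative sketch only of the pair-counting test of FIG. 10, using the
# example labels from the text: magnitude label 0 for vectors no longer than
# 2 pixels, direction labels 0-3 by 90-degree sector, and "opposite" meaning
# direction labels that differ by 2 modulo 4.

import math

def labels(mv):
    """Magnitude label (0 or 1) and direction label (0-3) of a (dx, dy) vector."""
    dx, dy = mv
    mag = 0 if math.hypot(dx, dy) <= 2 else 1
    ang = math.degrees(math.atan2(dy, dx))  # in (-180, 180]
    direc = int(((ang + 45) % 360) // 90)   # 0: [-45, 45), 1: [45, 135), ...
    return mag, direc

def similar(mv_a, mv_b):
    """Same magnitude label, and direction labels that are not opposite."""
    (ma, da), (mb, db) = labels(mv_a), labels(mv_b)
    return ma == mb and (da - db) % 4 != 2

def reference_eligible(ref_mvs, cur_mvs, threshold):
    """ref_mvs and cur_mvs list co-located (dx, dy) vectors (None if absent)."""
    count = sum(1 for a, b in zip(ref_mvs, cur_mvs)
                if a is not None and b is not None and similar(a, b))
    return count > threshold
```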
  • FIG. 11 is a flowchart 110 of one embodiment of a method for locating a motion boundary.
  • The flowchart 110 describes one embodiment of a method of implementing the block 53 of FIG. 5. Note that, in one embodiment, the block 53 (and hence the method of the flowchart 110) is implemented depending on the outcome of the block 52 of FIG. 5.
  • A motion boundary is identified in a reference frame, and the set of candidate motion vectors includes only those motion vectors that are associated with macroblocks in the reference frame that lie on the same side of the motion boundary as the macroblock in the reference frame that is co-located with the macroblock of interest in the current frame.
  • FIG. 11 is described further with reference also to FIG. 12 .
  • FIG. 12 shows a window 125 in a reference frame 121 , and a window 126 in a current frame 122 . It is understood that the reference frame 121 and the current frame 122 each include macroblocks in addition to the macroblocks included in the windows 125 and 126 , respectively.
  • A motion boundary 129 is identified in the reference frame 121. In one embodiment, the motion boundary 129 is identified in the following manner. Each of the motion vectors associated with the macroblocks in the window 125 in the reference frame 121 is assigned a magnitude label and a direction label; the discussion above in conjunction with FIG. 10 describes one method for labeling motion vectors.
  • The motion vector associated with the macroblock 124 in the reference frame 121, which is at the same position as the macroblock 123 of interest in the current frame 122, is classified as class 0. That is, the macroblock 124 is co-located with the macroblock 123 of interest, and as such, the motion vector associated with the macroblock 124 is identified as being the first member of a particular class (e.g., class 0). The magnitude labels of the other motion vectors associated with the window 125 are each compared to the magnitude label of the motion vector associated with the macroblock 124, and the direction labels of the other motion vectors in the window 125 are each compared to the direction label of the motion vector associated with the macroblock 124. If the magnitude label for a motion vector is the same as that of the motion vector associated with the macroblock 124, and if the direction label for that motion vector is not opposite that of the motion vector associated with the macroblock 124, then that motion vector is defined as being similar to the motion vector associated with the macroblock 124, and that motion vector is also classified as class 0. This process is repeated for each motion vector associated with the window 125, to generate the local motion class map 127.
  • In the block 112, only those motion vectors associated with the window 125 that are in the same class as the motion vector associated with the macroblock 124 are included in the set 128 of candidate motion vectors. In other words, the motion vectors in the window 125 in the reference frame 121 that are on the same side of the motion boundary 129 as the macroblock 124 (the macroblock co-located with the macroblock 123 of interest) are included in the set 128; in the example of FIG. 12, only the motion vectors classified as class 0 are included in the set 128.
  • A statistical measure of the set 128 of candidate motion vectors is then determined (refer to the discussion of FIGS. 5 and 6). In one embodiment, properly received motion vectors associated with the window 126 of the current frame 122 can also be included in the set 128 if they are associated with macroblocks that also lie on the same side of the motion boundary as the macroblock 123 of interest. For example, after the map 127 is determined, the macroblocks in the window 126 that are co-located with those macroblocks in the window 125 that are classified as class 0 can also be classified as class 0, and the motion vectors associated with those macroblocks in the window 126 can be included in the set 128. A sketch of the class-map selection follows.
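```python
# Illustrative sketch only of the class-0 selection of FIG. 11. It reuses the
# similar() helper from the previous sketch and assumes the window is a dict
# from grid positions to motion vectors, with None marking an absent vector.

def class0_candidates(window_ref, colocated_pos):
    """Seed class 0 with the vector of the macroblock co-located with the
    macroblock of interest, then admit every window vector similar to it."""
    seed = window_ref[colocated_pos]
    if seed is None:
        return []
    return [mv for mv in window_ref.values()
            if mv is not None and similar(mv, seed)]
```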
  • FIG. 13 is a flowchart 130 of one embodiment of a method that uses the trajectory of a moving object to select a candidate motion vector.
  • The flowchart 130 describes one embodiment of a method of implementing the block 53 of FIG. 5. An object in a first macroblock in a reference frame is identified, and a motion vector associated with the object is included in the set of candidate motion vectors if the object sufficiently overlaps a co-located second macroblock in the current frame (that is, the first macroblock and the second macroblock are in the same position within their respective frames).
  • FIG. 13 is described further with reference also to FIG. 14 .
  • FIG. 14 shows a window 147 of a reference frame 141 and a window 148 of a current frame 142; the macroblock 143 is co-located with the macroblock 146. It is understood that the reference frame 141 and the current frame 142 each include macroblocks in addition to the macroblocks included in the windows 147 and 148, respectively.
  • An object 144 within the reference frame 141, and associated with the macroblock 143 that is co-located with the macroblock 146, is identified. In the current frame 142, the object 144 has moved to a different position and is now associated with a macroblock 145. If the macroblock 145 sufficiently overlaps the macroblock 146, the motion vector associated with the object 144 is included in the set of candidate motion vectors; in one embodiment, an overlap of greater than or equal to 25 percent is considered sufficient. Various techniques can be used to determine whether the macroblock 145 overlaps the macroblock 146 by that amount. In one embodiment, the macroblocks 145 and 146 are each associated with a set of two-dimensional coordinates that define their respective positions within the current frame 142. Using these coordinates, for example, the corners of one of the macroblocks 145 and 146 can be compared to the midpoints of the sides of the other macroblock to determine whether the amount of overlap exceeds 25 percent. Thresholds other than 25 percent can be used. A sketch of one such overlap test follows.
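```python
# Illustrative sketch only: the corner/midpoint comparison described above is
# replaced here by an exact rectangle-intersection computation for 16x16
# macroblocks, and the sign convention used to project the macroblock by its
# motion vector is an assumption.

MB = 16  # macroblock size in pixels

def overlap_fraction(x1, y1, x2, y2):
    """Fraction of the MB x MB block at (x2, y2) covered by the one at (x1, y1)."""
    w = max(0, min(x1 + MB, x2 + MB) - max(x1, x2))
    h = max(0, min(y1 + MB, y2 + MB) - max(y1, y2))
    return (w * h) / float(MB * MB)

def trajectory_candidate(mb_xy, mv, threshold=0.25):
    """Return mv if the block projected along mv from the co-located position
    covers at least `threshold` of the macroblock of interest, else None."""
    x, y = mb_xy
    px, py = x + mv[0], y + mv[1]  # projected position (sign convention assumed)
    return mv if overlap_fraction(px, py, x, y) >= threshold else None
```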
  • FIGS. 9-14 have been described separately in order to more clearly describe certain aspects of the embodiments; however, it is appreciated that the embodiments may be implemented by combining different aspects of these embodiments. In one embodiment, one of the methods described in conjunction with FIGS. 9 and 10 is combined with one of the methods described in conjunction with FIGS. 11-14 .
  • In summary, embodiments in accordance with the present invention provide methods and systems for temporal error concealment using motion vectors in the motion vector domain rather than pixel values in the pixel domain. Accordingly, computational complexity is reduced because distortion evaluations can be eliminated with regard to error concealment; the number of computation steps may be reduced by as much as 85 percent.
  • Decoding delays are reduced from one frame to one slice of macroblocks; that is, in order to use neighboring motion vectors to estimate an absent motion vector, processing of only a slice (e.g., one row) of macroblocks may be delayed.
  • Memory access times, and associated decoding delays, are reduced because memory accesses to retrieve pixel values can be eliminated with regard to error concealment.
  • Embodiments described herein yield PSNRs that are comparable to, if not better than, PSNRs associated with pixel-based error concealment techniques, and they can be implemented without having to make hardware changes.
  • Embodiments can also be used in motion estimation at an encoder: in schemes where motion vectors found in the lowest spatial resolution are used as initial estimates of motion vectors for higher resolutions, motion vectors selected as described above can be used as the initial estimates to speed up motion estimation at the encoder.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

Methods and systems for processing video data are described. A set of candidate motion vectors is selected from motion vectors associated with macroblocks in a first frame of video data and from motion vectors associated with macroblocks in a second frame of the video data. A statistical measure of the set is determined. The statistical measure defines a motion vector for a macroblock of interest in the second frame.

Description

    FIELD
  • Embodiments of the present invention pertain to the processing of multimedia data, and in particular to the decoding (decompressing) of video data.
  • BACKGROUND
  • Media systems transmit media data, such as video data, over wired and/or wireless channels. Data transmitted over such channels may be lost or corrupted or may experience delays along the way, perhaps arriving late at its destination. Late or lost data can be particularly troublesome for video data that are predictively encoded (compressed) using techniques such as but not limited to MPEG (Moving Pictures Experts Group) encoding. Predictive encoding introduces dependencies in the encoded data, so that the decoding of some data depends on the decoding of other data. While predictive encoding generally improves the amount of compression, it can also result in error propagation should data relied on for the decoding of other data be lost or arrive late. Any late or lost data can impact the quality of the reconstructed (decoded or decompressed) video data. However, the impact can be aggravated if the lost or late data is part of a reference frame used for motion compensated prediction because errors will propagate to other frames that are dependent on the reference frame.
  • For instance, consider a moving object that appears in different positions in successive frames of video data. Using predictive encoding techniques, the object is described by data in the first frame, but in the second frame the object is described using a motion vector that describes how the object moved from the first frame to the second frame. Thus, only the data for the motion vector needs to be transmitted in the second frame, improving the amount of compression because the data describing the object does not need to be retransmitted. However, if the motion vector is not received, then the object cannot be properly rendered when the second frame is reconstructed into a video image, thus reducing the quality of the reconstructed video. Subsequent frames in which the object appears may also be affected, because they may depend on proper placement of the object in the second frame.
  • To alleviate the impact of absent (e.g., missing, lost, late or incorrectly received) data on the quality of the reconstructed video, a video decoder can apply an error recovery (e.g., error concealment) process to the received data. Studies have shown that the quality of the reconstructed video can be significantly improved if motion vectors can be recovered (e.g., estimated). Temporal error concealment improves the quality of the reconstructed video by estimating missing or incorrectly received motion vectors in a current frame using properly received information from the current frame and/or preceding frames. In other words, a goal of temporal error concealment is to estimate motion vectors using their spatial as well as temporal associates.
  • Conventional temporal error concealment techniques are based in the pixel domain. Consider a frame (the current frame) in which a motion vector associated with an area (e.g., a macroblock of interest) in the frame is missing. A set of motion vectors is formed by selecting motion vectors associated with macroblocks that surround the macroblock of interest in the current frame and motion vectors associated with macroblocks that surround the co-located macroblock in the reference frame (the co-located macroblock is the macroblock that is at the same position in the reference frame as the macroblock of interest is in the current frame). With a pixel-domain approach, a measure of distortion is calculated for each of the motion vectors in the set. To evaluate the distortion, pixel values are taken from the reconstructed frame buffer. In a motion select technique, the motion vector that minimizes the distortion measure is chosen as the replacement for the absent motion vector. In a motion search technique, a search for a motion vector that minimizes the distortion measure is performed within, for example, a 3×3 window of macroblocks.
  • Pixel-domain error concealment is problematic because it is computationally complex and time-consuming. Evaluating the distortion for each potential motion vector can require a large number of computations, consuming computational resources and causing delays in the decoding process. Pixel-domain error concealment is most effective when performed after the decoder has finished decoding a frame; hence, the delay introduced by error concealment in the pixel domain may be equivalent to one frame duration. Furthermore, accessing the reconstructed frame buffer to retrieve pixel values for the distortion evaluation takes time, which adds to the delays.
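  • For contrast, a sketch of the conventional pixel-domain "motion select" technique follows. It is illustrative only: the distortion measure shown is a simple boundary-matching sum of absolute differences against the row of reconstructed pixels just above the missing macroblock, which is one common choice rather than the measure of any particular decoder, and it assumes that row has already been decoded.

```python
def boundary_sad(recon, ref, x, y, mv, mb=16):
    """Distortion of concealing the mb x mb block at (x, y) with the block
    displaced by mv in the reference frame: SAD between the candidate block's
    top row and the reconstructed row just above (x, y). Frames are 2-D
    arrays of luma samples indexed [row][col]; requires y >= 1."""
    rx, ry = x + mv[0], y + mv[1]
    return sum(abs(recon[y - 1][x + i] - ref[ry][rx + i]) for i in range(mb))

def motion_select(recon, ref, x, y, candidates):
    """Choose the candidate motion vector with the smallest distortion."""
    return min(candidates, key=lambda mv: boundary_sad(recon, ref, x, y, mv))
```

  • Every candidate evaluated this way costs a row of pixel reads from the reconstructed frame buffer, which is exactly the per-pixel work and memory traffic that a motion-vector-domain approach avoids.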
  • Accordingly, a method and/or system that can reduce computational complexity and decoding delays would be desirable.
  • SUMMARY
  • Methods and systems for processing video data are described. In one embodiment, a set of candidate motion vectors is selected from motion vectors associated with macroblocks in a first frame of video data and from motion vectors associated with macroblocks in a second frame of the video data. In one embodiment, the first frame precedes the second frame in order of display. A statistical measure of the set is determined. For example, the average or the median of the candidate motion vectors can be determined. The statistical measure defines a motion vector for a macroblock of interest in the second frame.
  • Various methods can be used to select the candidate motion vectors. The selection of the candidate motion vectors and the determination of a replacement motion vector are performed in the motion vector domain instead of the pixel domain. Consequently, computational complexity and latency are reduced. As an additional benefit, hardware modifications are not required.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of one example of a system for decoding video data.
  • FIG. 2 illustrates an example of two frames of image data organized as macroblocks.
  • FIG. 3 illustrates an example of two image frames showing the motion of an object from one frame to the next.
  • FIG. 4 is a data flow diagram showing the flow of data from a data encoding process to a data decoding process.
  • FIG. 5 is a flowchart of a motion vector domain-based temporal error concealment method.
  • FIG. 6 illustrates the flow of information according to the method of FIG. 5.
  • FIG. 7 is a flowchart of a method for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process.
  • FIG. 8 illustrates the flow of information according to the method of FIG. 7.
  • FIG. 9 is a flowchart of a method for detecting frame-to-frame motion change and used for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process.
  • FIG. 10 is a flowchart of another method for detecting frame-to-frame motion change and used for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process.
  • FIG. 11 is a flowchart of a method for locating a motion boundary within a frame and used for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process.
  • FIG. 12 illustrates the flow of information according to the method of FIG. 11.
  • FIG. 13 is a flowchart of a method that uses the trajectory of a moving object to select a candidate motion vector used in a motion vector domain-based temporal error concealment process.
  • FIG. 14 illustrates the method of FIG. 13.
  • DETAILED DESCRIPTION
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the concepts presented herein. However, it will be recognized by one skilled in the art that embodiments of the invention may be practiced without these specific details or with equivalents thereof. In other instances, well-known methods, procedures and components have not been described in detail as not to unnecessarily obscure aspects of these embodiments.
  • Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed in computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system or similar electronic computing device. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present embodiments, discussions utilizing terms such as “selecting” or “determining” or “comparing” or “counting” or “deciding” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • In one embodiment, a computer-usable medium that has computer-readable program code embodied therein is implemented. A computer system can include, in general, a processor for processing information and instructions, random access (volatile) memory (RAM) for storing information and instructions, read-only (non-volatile) memory (ROM) for storing static information and instructions, a data storage device such as a magnetic or optical disk and disk drive for storing information and instructions, an optional user output device such as a display device (e.g., a monitor) for displaying information to the computer user, an optional user input device including alphanumeric and function keys (e.g., a keyboard) for communicating information and command selections to the processor, and an optional user input device such as a cursor control device (e.g., a mouse) for communicating user input information and command selections to the processor. The computer system may also include an input/output device for providing a physical communication link between the computer system and a network, using either a wired or a wireless communication interface.
  • FIG. 1 is a block diagram of an example of a system 10 upon which embodiments may be implemented. The system 10 shows the components of an execution platform for implementing certain functionality of the embodiments. As depicted in FIG. 1, the system 10 includes a microprocessor 12 (e.g., an Advanced Reduced Instruction Set Computer Machine, or ARM, processor) coupled to a digital signal processor (DSP) 15 via a host interface 11. The host interface 11 translates data and commands passing between the microprocessor 12 and the DSP 15 into their respective formats. In the present embodiment, both the microprocessor 12 and the DSP 15 are coupled to a memory 17 via a memory controller 16. In the system 10 embodiment, the memory 17 is a shared memory, whereby the memory 17 stores instructions and data for both the microprocessor 12 and the DSP 15. Access to the shared memory 17 is through the memory controller 16. The shared memory 17 also includes a video frame buffer for storing pixel data that drives a coupled display 18.
  • As described above, certain processes and steps of the embodiments are realized, in at least one embodiment, as a series of instructions (e.g., a software program or programs) that reside within computer-readable memory (e.g., memory 17) of a computer system (e.g., system 10) and are executed by the microprocessor 12 and DSP 15 of system 10. When executed, the instructions cause the system 10 to implement the functionality of embodiments as described below. In another embodiment, certain processes and steps of the present invention are realized in hardware.
  • The descriptions and examples provided herein are discussed in the context of video-based data (also referred to as media data or multimedia data or content), but the present invention is not so limited. For example, embodiments may also be used with image-based data, Web page-based data, graphic-based data and the like, and combinations thereof.
  • Embodiments of the present invention can be used with Moving Picture Experts Group (MPEG) compression (encoding) schemes such as MPEG-1, MPEG-2 and MPEG-4, and with International Telecommunication Union (ITU) encoding schemes such as H.261, H.263 and H.264; however, the present invention is not so limited. In general, embodiments can be used with encoding schemes that make use of temporal redundancy or motion compensation—in essence, encoding schemes that use motion vectors to increase the amount of compression (the compression ratio).
  • FIG. 2 illustrates an example of two frames 21 and 22 of image or video data. In the example of FIG. 2, the frame 21 (also referred to herein as the first frame or the reference frame) precedes the frame 22 (also referred to herein as the second frame or current frame) in display order. Each of the frames 21 and 22 is organized as a plurality of macroblocks, exemplified by the macroblock 23. In one embodiment, a macroblock has dimensions of 16 pixels by 16 pixels; however, the present invention is not so limited—macroblocks can have dimensions other than 16×16 pixels. Although FIG. 2 shows a certain number of macroblocks, the present invention is not so limited.
  • In the example of FIG. 2, a motion vector is associated with each macroblock. A motion vector has a dimension that describes its length (magnitude) and a dimension that describes its direction (angle). A motion vector may have a magnitude of zero. For purposes of illustration, a motion vector that is properly received at a decoder is represented herein as an arrow (e.g., arrow 24) within a macroblock. Macroblocks (e.g., macroblock 25) for which an associated motion vector was not properly received are indicated in FIG. 2 by shading. The frame 22 also includes a macroblock 28 of interest, indicated by an “X,” where a motion vector was not properly received. A motion vector may not be properly received if the data describing the motion vector is late, corrupted or missing.
  • As indicated in FIG. 2, there may be instances in which motion vectors for several macroblocks (that is, a slice of macroblocks consisting of one or more consecutive macroblocks) are not properly received. As will be seen, according to the embodiments, a motion vector can be estimated for each macroblock in the slice for which a motion vector was not properly received. However, in general, a motion vector can be estimated for any macroblock where there is a desire to do so.
  • In one embodiment, to estimate a motion vector for a macroblock 28 in the current frame 22, a macroblock 29 in the reference frame 21 is identified. The macroblock 29 is in the same position within the frame 21 as the macroblock 28 of interest is in the frame 22. Accordingly, the macroblock 28 and the macroblock 29 are said to be co-located. Further, a first plurality (window 26) of macroblocks in the current frame 22 that neighbor the macroblock 28 is identified, and a second plurality (window 27) of macroblocks in the reference frame 21 that neighbor the macroblock 29 in the reference frame 21 is also identified. In one embodiment, the window 27 is in the same position within the frame 21 as the window 26 is in the frame 22. Accordingly, the window 26 and the window 27 are also said to be co-located. In general, the term “co-located” is used to describe a region (e.g., a macroblock or a window of macroblocks) of one frame and a corresponding region in another frame that are in the same positions within their respective frames. A pair of co-located macroblocks 108 and 109 is also indicated; that is, macroblock 108 is at a position within window 27 that is the same as the position of macroblock 109 within window 26.
  • In general, according to embodiments, a motion vector can be estimated for any macroblock of interest in the frame 22 by considering the properly received motion vectors associated with macroblocks in the current frame 22 that neighbor the macroblock of interest, and by considering the properly received motion vectors associated with macroblocks in the reference frame 21 that neighbor a macroblock that is co-located with the macroblock of interest.
  • In one embodiment, the array of macroblocks in the window 26 surrounds the macroblock 28 of interest. In one such embodiment, the window 26 and the window 27 each include a 3×3 array of macroblocks. Windows of different dimensions, including windows that are not square-shaped, can be selected. Also, a window does not necessarily have to surround the macroblock of interest, in particular for those instances in which the macroblock of interest is at the edge of a frame.
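  • For illustration, the following minimal Python sketch shows one way such a neighbor window can be formed, including the clipped windows used at frame edges. All names are hypothetical and not part of the disclosed embodiments; the example assumes a QCIF frame, which is 11×9 macroblocks of 16×16 pixels.

```python
# Illustrative sketch: gather the indices of a window of macroblocks around
# a macroblock of interest, clipping at the frame edges so that edge
# macroblocks receive a smaller (e.g., 3x2 or 2x3) window. Because the
# windows are co-located, the same index list applies to both the
# reference frame and the current frame.

def neighbor_window(mb_row, mb_col, rows, cols, radius=1):
    """Return (row, col) indices of the macroblocks in the window centered
    on (mb_row, mb_col); radius=1 gives a 3x3 window, radius=2 a 5x5."""
    window = []
    for r in range(max(0, mb_row - radius), min(rows, mb_row + radius + 1)):
        for c in range(max(0, mb_col - radius), min(cols, mb_col + radius + 1)):
            window.append((r, c))
    return window

# A QCIF frame (176x144 pixels) is 11x9 macroblocks of 16x16 pixels.
# A macroblock on the top edge gets a clipped 2x3 window (6 indices):
indices = neighbor_window(mb_row=0, mb_col=5, rows=9, cols=11)
```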
  • FIG. 3 illustrates two consecutive image frames (a first frame 32 and a second frame 34) according to one embodiment. In the example of FIG. 3, the second frame 34 follows the first frame 32 in display order. In an MPEG compression scheme, the first frame 32 can correspond to, for example, an I-frame or a P-frame, and the second frame 34 can correspond to, for example, a P-frame. In general, the first frame 32 and the second frame 34 are "inter-coded" (that is, encoded dependent on other frames).
  • In the example of FIG. 3, an object 33 is located at a certain position within the first frame 32, and the same object 33 is located at a different position within the second frame 34. The MPEG compression scheme works by encoding the differences between frames. A motion vector 35 is used as the simplest way of communicating the change in the image between the frames 32 and 34; that is, the image of the object 33 does not have to be sent again just because it moved. In a similar manner, a motion vector can be associated with a macroblock in a frame (e.g., the macroblock 23 of FIG. 2).
  • FIG. 4 is a data flow diagram 40 showing the flow of data from an encoder to a decoder according to one embodiment. In an encoder, an encoding process 42 compresses (encodes) data 41 using an encoding scheme such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263 or H.264. The compressed data 43 is sent from the encoder to a decoder (e.g., the system 10 of FIG. 1) via the channel 44, which may be a wired or wireless channel. The received data 45 may include both properly received data and corrupted data. Some data may also be lost during transmission or may arrive late at the decoder. The decoding process 46 decompresses (reconstructs) the received data 45 to generate the reconstructed data 47.
  • FIG. 5 is a flowchart 50 of one embodiment of a motion vector domain-based temporal error concealment method. Although specific steps are disclosed in the flowchart 50 of FIG. 5 (as well as in the flowcharts 70, 90, 100, 110 and 130 of FIGS. 7, 9, 10, 11 and 13, respectively), such steps are exemplary. That is, other embodiments may be formulated by performing various other steps or variations of the steps recited in the flowcharts 50, 70, 90, 100, 110 and 130. It is appreciated that the steps in the flowcharts 50, 70, 90, 100, 110 and 130 may be performed in an order different than presented, and that the steps in the flowcharts 50, 70, 90, 100, 110 and 130 are not necessarily performed in the sequence illustrated.
  • FIG. 5 is described with reference also to FIG. 6. FIG. 6 shows a 3×3 window 63 of macroblocks selected from a reference frame 61, and a 3×3 window 64 of macroblocks selected from a current frame 62. It is understood that the reference frame 61 and the current frame 62 each include macroblocks in addition to the macroblocks included in the windows 63 and 64, respectively.
  • The window 63 and the window 64 are co-located. In the present embodiment, the macroblock (MB) 68 of interest—that is, the macroblock for which a motion vector is to be estimated—lies at the center of the window 64, but as mentioned above, that does not have to be the case.
  • It is understood that the windows 63 and 64 can be other than 3×3 windows. For instance, 5×5 windows may be used. Also, if the macroblock of interest is along one edge of the current frame 62, then a window that is not square in shape (e.g., a 3×2 or a 2×3 window) may be used.
  • In one embodiment, the reference frame 61 precedes the current frame 62 in display order. In another embodiment, the reference frame 61 may be a frame that comes after the current frame 62 in display order; that is, the reference frame 61 may be a “future frame.” In yet another embodiment, both the frame preceding the current frame 62 and the future frame following the current frame 62 may be considered for the error concealment methods described herein.
  • The use of a future frame may introduce delays into the decoding process. However, in applications in which delays can be tolerated, motion vectors from a future frame may be used for error concealment. Also, motion vectors from a future frame may be used in instances in which the current frame 62 is the first frame in a sequence of frames (e.g., an I-frame).
  • In overview, one of the objectives of the method of the flowchart 50 is to intelligently select a set 65 of candidate motion vectors from the properly received motion vectors that are associated with the macroblocks of the frames 61 and 62. In one embodiment, once the set 65 of candidate motion vectors is identified, a vector median filter (VMF) 66 is applied to the vectors in the set 65. The output of the VMF 66 is an estimated motion vector (MV) 67 for the macroblock 68 of interest.
  • In one embodiment, in a block 51 of FIG. 5, the windows 63 and 64 are identified. Correctly received motion vectors associated with the macroblocks in the window 63, and correctly received motion vectors associated with the macroblocks in the window 64, are accessed.
  • In a block 52, in one embodiment, a determination is made as to whether motion vectors in the reference frame 61 (specifically, in the window 63) are eligible to be included in the set 65 of candidate motion vectors. Embodiments of methods used to make this determination are described in conjunction with FIGS. 7, 8, 9 and 10, below.
  • In a block 53 of FIG. 5, in one embodiment, if motion vectors in the reference frame can be included in the set 65 of candidate motion vectors, then motion vectors from the window 63 and from the window 64 are intelligently selected and included in the set 65. Embodiments of methods used to select motion vectors from the windows 63 and 64 are described in conjunction with FIGS. 11, 12, 13 and 14, below.
  • In a block 54 of FIG. 5, in one embodiment, if motion vectors in the reference frame are not eligible to be included in the set 65 of candidate motion vectors, then only motion vectors from window 64 are selected and included in the set 65. Note that it is possible that there may be instances in which the window 64 contains no properly received motion vectors. The method of FIGS. 7 and 8 can be used to address those instances.
  • In a block 55 of FIG. 5, in one embodiment, a statistical measure of the set 65 of candidate motion vectors is determined. The statistical measure defines a motion vector 67 for the macroblock 68 of interest. The motion vector 67 can then be applied to the macroblock 68 of interest.
  • In one embodiment, the statistical measure is the median of the set 65 of candidate motion vectors. In one such embodiment, the median (specifically, the median vector) of the set 65 is determined, as follows.
  • For an array of $N$ $m$-dimensional vectors, $V = (\bar{v}_1, \bar{v}_2, \ldots, \bar{v}_N)$, with $\bar{v}_i \in \mathbb{R}^m$ for $i = 1, 2, \ldots, N$, the median vector $\bar{v}_{VM}$ is the vector that satisfies the following constraint:

$$\sum_{i=1}^{N} \left\| \bar{v}_{VM} - \bar{v}_i \right\|_p \le \sum_{i=1}^{N} \left\| \bar{v}_j - \bar{v}_i \right\|_p, \quad j = 1, 2, \ldots, N;$$

  • where $p$ denotes the p-norm metric between the vectors. For simplicity, in one embodiment, $p = 1$ is used. For a two-dimensional vector $\bar{v} = (v(x), v(y))$, the 1-norm distance between $\bar{v}_0$ and $\bar{v}_1$ is:

$$\left\| \bar{v}_0 - \bar{v}_1 \right\|_{p=1} = \left| v_0(x) - v_1(x) \right| + \left| v_0(y) - v_1(y) \right|.$$
  • Thus, in one embodiment, the estimated motion vector 67 for the macroblock 68 of interest is the median of the set 65 of candidate motion vectors. Statistical measures of the set 65 of candidate motion vectors other than the median can be determined and used for the estimated motion vector 67. For example, the average of the set 65 can be determined and used.
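  • For illustration, the 1-norm vector median described above can be computed with the following minimal Python sketch; motion vectors are represented as (x, y) tuples, and the function names are illustrative only.

```python
# Minimal sketch of the 1-norm vector median filter described above.
# The median is the candidate whose summed 1-norm distance to all other
# candidates is smallest.

def l1_distance(a, b):
    """1-norm distance between two 2-D motion vectors."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def vector_median(candidates):
    """Return the vector median of a non-empty candidate set."""
    return min(candidates,
               key=lambda v: sum(l1_distance(v, u) for u in candidates))

# Example: the median of a small candidate set.
mv = vector_median([(2, 1), (2, 2), (8, -3), (3, 1)])  # -> (2, 1)
```

  • Because the candidate set is small (at most the motion vectors of the two windows), the exhaustive pairwise distance computation in the sketch above is inexpensive.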
  • In general, a set 65 of candidate motion vectors is identified. The set 65 is then operated on in some manner to determine an estimated motion vector 67 for the macroblock 68 of interest. The estimated motion vector 67 may be one of the motion vectors in the set 65, or the estimated motion vector 67 may be a motion vector determined by operating on the set 65.
  • Significantly, the estimated motion vector 67 is determined in the motion vector domain and not in the pixel domain. Specifically, pixel values are not used for error concealment, and distortion values associated with each of the candidate motion vectors are not calculated for error concealment. Accordingly, computational complexity and associated decoding delays are reduced. Also, there is no need to access the frame buffer to retrieve pixel values, eliminating that source of additional decoding delays. Furthermore, by intelligently selecting motion vectors to be included in the set 65 of candidate motion vectors, peak signal-to-noise ratios (PSNRs) comparable to if not better than the PSNRs associated with pixel-based error concealment techniques are achieved.
  • FIG. 7 is a flowchart 70 of one embodiment of a method for selecting candidate motion vectors used in a motion vector domain-based temporal error concealment process. Flowchart 70 describes one embodiment of a method for implementing blocks 52, 53 and 54 of FIG. 5. FIG. 7 is described with reference also to FIG. 8.
  • In one embodiment, in a block 71 of FIG. 7, the window 83 (in a reference frame 81) and the window 84 (in the current frame 82) are identified. It is understood that the reference frame 81 and the current frame 82 each include macroblocks in addition to the macroblocks included in the windows 83 and 84, respectively.
  • Properly received motion vectors associated with the macroblocks in the window 83, and properly received motion vectors associated with the macroblocks in the window 84, can then be accessed. Properly received motion vectors in the window 83 are identified using a letter A, while properly received motion vectors in the window 84 are identified using a letter B.
  • In a block 72, for each pair of co-located macroblocks within the windows 83 and 84, a determination is made as to whether there is a properly received motion vector for the macroblock in the window 84.
  • In a block 73, if there is a properly received motion vector for a macroblock in the window 84, that motion vector is included in the set 85 of candidate motion vectors, and the motion vector for the co-located macroblock in the window 83 is not included in the set 85. For example, there is a properly received motion vector for the macroblock 87 (in the window 83 in the reference frame 81) and a properly received motion vector for the macroblock 89 (in the window 84 in the current frame 82). According to one embodiment, the motion vector associated with the macroblock 89 (current frame 82) is included in the set 85, and the motion vector associated with the macroblock 87 (reference frame 81) is not included in the set 85.
  • In a block 74, if there is not a properly received motion vector for a macroblock in the window 84, then the motion vector for the co-located macroblock in the window 83 is included in the set 85 of candidate motion vectors. For example, there is not a properly received motion vector for the macroblock 88 of interest, and so the motion vector associated with the co-located macroblock 86 (in the reference frame 81) is included in the set 85.
  • As described above, a statistical measure of the set 85 of candidate motion vectors is determined (refer to the discussion of FIGS. 5 and 6).
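  • For illustration, a minimal Python sketch of this selection rule follows; None marks a motion vector that was not properly received, and all names are illustrative.

```python
# Sketch of blocks 72-74: prefer the current-frame motion vector for each
# co-located pair; fall back to the reference-frame motion vector when the
# current-frame one was not properly received (None).

def select_candidates(ref_window, cur_window):
    """ref_window and cur_window are equal-length lists of motion vectors
    (or None) for co-located macroblocks in the two windows."""
    candidates = []
    for ref_mv, cur_mv in zip(ref_window, cur_window):
        if cur_mv is not None:
            candidates.append(cur_mv)       # block 73
        elif ref_mv is not None:
            candidates.append(ref_mv)       # block 74
    return candidates

# Example: the center macroblock's MV is missing in the current frame, so
# the co-located reference-frame MV is used in its place.
ref = [(1, 0)] * 9
cur = [(1, 1)] * 4 + [None] + [(1, 1)] * 4
cands = select_candidates(ref, cur)  # eight (1, 1) vectors and one (1, 0)
```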
  • In some instances, motion from one frame to the next frame may not be continuous. For example, a reference frame may include one type of motion, while motion in the current frame may have changed direction or stopped. Furthermore, an object in a reference frame may move out of the neighborhood of a macroblock of interest, and so it may not be suitable to include a motion vector for that object in the set of candidate motion vectors.
  • FIG. 9 is a flowchart 90 of one embodiment of a method for detecting frame-to-frame motion change. FIG. 10 is a flowchart 100 of another embodiment of a method for detecting frame-to-frame motion change. Either or both of the methods of the flowcharts 90 and 100 can be used to determine whether motion vectors from a reference frame should be included in the set of candidate motion vectors, in order to address the points mentioned in the preceding paragraph.
  • With reference first to FIG. 9, the flowchart 90 describes one embodiment of a method for implementing the block 52 of FIG. 5. In a block 91, a first range of values for motion vectors associated with a reference frame is determined. In a block 92, a second range of values for motion vectors associated with the current frame is determined. In a block 93, the first and second ranges of values are compared, and the motion vectors associated with the reference frame are included in the set of candidate motion vectors according to the results of the comparison.
  • FIG. 9 is described further with reference also to FIG. 2. In the block 91, in one embodiment, motion vector statistics are calculated for the properly received motion vectors associated with the reference frame 21.
  • In the block 92, in one embodiment, motion vector statistics are calculated for the properly received motion vectors associated with the current frame 22.
  • In one embodiment, all of the motion vectors associated with the reference frame 21 and the current frame 22 are included in the calculations of motion vector statistics. In another embodiment, only subsets of the motion vectors are used instead of all of the motion vectors. In the latter embodiment, for example, the subsets may include only the motion vectors associated with macroblocks for which motion vectors for both frames were properly received. That is, for example, a motion vector for a macroblock in the reference frame 21 is only included in a first subset if the motion vector for the co-located macroblock in the current frame 22 was also properly received. Similarly, a motion vector for a macroblock in the current frame 22 is only included in a second subset if the motion vector for the co-located macroblock in the reference frame 21 was also properly received.
  • In one embodiment, for each frame, the statistics calculated include the mean and standard deviation of the motion vector dimensions (magnitude/length and direction/angle). Let $I$ be the set of indices of the motion vectors $\bar{v}_i$ that are included in the calculations of motion vector statistics, and let $M$ be the size of the set $I$. Then the means and standard deviations (std) for the magnitudes (mag) and angles (ang) are calculated as follows for the reference frame 21 and the current frame 22:

$$\text{mean}_{mag\_frm} = \frac{1}{M} \sum_{i \in I} \text{mag}\left( \bar{v}_{frm}(i) \right); \qquad \text{mean}_{ang\_frm} = \frac{1}{M} \sum_{i \in I} \text{ang}\left( \bar{v}_{frm}(i) \right);$$

$$\text{std}_{mag\_frm} = \sqrt{ \frac{1}{M} \sum_{i \in I} \left( \text{mag}\left( \bar{v}_{frm}(i) \right) - \text{mean}_{mag\_frm} \right)^2 }; \quad \text{std}_{ang\_frm} = \sqrt{ \frac{1}{M} \sum_{i \in I} \left( \text{ang}\left( \bar{v}_{frm}(i) \right) - \text{mean}_{ang\_frm} \right)^2 };$$

  • where the subscript "frm" refers to either the current frame or the reference frame. Once the means and standard deviations are calculated, the ranges $(\text{mean}_{mag\_frm} - \text{std}_{mag\_frm},\ \text{mean}_{mag\_frm} + \text{std}_{mag\_frm})$ and $(\text{mean}_{ang\_frm} - \text{std}_{ang\_frm},\ \text{mean}_{ang\_frm} + \text{std}_{ang\_frm})$ are formed for each of the current and reference frames.
  • In the block 93, in one embodiment, the ranges of the motion vector magnitudes for the reference frame 21 and for the current frame 22 are compared, and the ranges of the motion vector angles for the reference frame 21 and for the current frame 22 are also compared. In one embodiment, if the range of motion vector magnitudes for the reference frame 21 overlaps the range of motion vector magnitudes for the current frame 22, and if the range of motion vector angles for the reference frame 21 overlaps the range of motion vector angles for the current frame 22, then the reference frame 21 and the current frame 22 are judged to have similar motion. Accordingly, motion vectors from the reference frame 21 are eligible for inclusion in the set of candidate motion vectors (e.g., the set 65 of FIG. 6).
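  • For illustration, a minimal Python sketch of this range-overlap test follows. Magnitude and angle are derived from (x, y) motion vectors; a plain atan2 angle is used, so a production implementation would also handle angle wraparound. All names are illustrative.

```python
# Sketch of blocks 91-93: form mean +/- std ranges for magnitude and angle
# in each frame, and judge the frames to have similar motion if both the
# magnitude ranges and the angle ranges overlap.

import math

def mv_range(mvs, key):
    """(mean - std, mean + std) of key(mv) over the motion vectors."""
    vals = [key(v) for v in mvs]
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((x - mean) ** 2 for x in vals) / len(vals))
    return (mean - std, mean + std)

def ranges_overlap(r0, r1):
    return r0[0] <= r1[1] and r1[0] <= r0[1]

def similar_motion(ref_mvs, cur_mvs):
    """True if reference-frame MVs are eligible for the candidate set."""
    mag = lambda v: math.hypot(v[0], v[1])
    ang = lambda v: math.atan2(v[1], v[0])  # no wraparound handling here
    return (ranges_overlap(mv_range(ref_mvs, mag), mv_range(cur_mvs, mag)) and
            ranges_overlap(mv_range(ref_mvs, ang), mv_range(cur_mvs, ang)))
```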
  • With reference now to FIG. 10, the flowchart 100 describes another embodiment of a method for implementing the block 52 of FIG. 5. In a block 101, the dimensions of pairs of motion vectors are compared to determine whether motion vectors in each of the pairs are similar to each other. Each of the pairs of motion vectors includes a first motion vector associated with a first macroblock at a position in a reference frame, and a second motion vector associated with a second macroblock at the position in the current frame.
  • In a block 102, the number of pairs of motion vectors in the reference and current frames that are similar is counted. In a block 103, motion vectors from the reference frame are eligible for inclusion in the set of candidate motion vectors if the number exceeds a threshold.
  • FIG. 10 is described further with reference also to FIG. 2. In the block 101, in one embodiment, the motion vector dimensions for each pair of co-located macroblocks are compared. The macroblocks 108 and 109 of FIG. 2 are an example of a pair of co-located macroblocks.
  • In one embodiment, to facilitate the comparison, each received motion vector in the reference frame 21 and each received motion vector in the current frame 22 is given a magnitude label and a direction label. In one such embodiment, the magnitude label has a value of either zero (0) or one (1), depending on its relative magnitude. For example, a motion vector having a magnitude of less than or equal to two (2) pixels is assigned a magnitude label of 0, and a motion vector having a magnitude of more than 2 pixels is assigned a magnitude label of 1. In one embodiment, the direction label has a value of zero (0), one (1), two (2) or three (3). For example, relative to a vertical line in a frame, a motion vector having an angle greater than or equal to −45 degrees but less than 45 degrees could be assigned a direction label of 0, a motion vector having an angle greater than or equal to 45 degrees but less than 135 degrees could be assigned a direction label of 1, and so on. Other schemes for labeling the magnitude and direction of motion vectors can be used.
  • In one embodiment, for each pair of co-located macroblocks, the magnitude labels of the 2 motion vectors in the pair are compared, and the direction labels of the 2 motion vectors in the pair are compared. In one embodiment, if the magnitude labels are the same and the direction labels are not opposite for the 2 motion vectors in a pair, then that pair of motion vectors is defined as being similar. Note that, in the present embodiment, the direction labels do not necessarily have to be the same in order for the 2 motion vectors in a pair to be considered similar. For example, using the scheme described above, a direction label of 0 would be considered similar to a direction label of 0, 1 or 3, but opposite to a direction label of 2. Other rules defining what constitutes similar motion vectors can be used.
  • In the block 102 of FIG. 10, in one embodiment, the number of pairs of co-located macroblocks that contain similar motion vectors is counted. In other words, the number of pairs of similar motion vectors is counted.
  • In the block 103, in one embodiment, motion vectors from the reference frame 21 are eligible for inclusion in the set of candidate motion vectors (e.g., the set 65 of FIG. 6) if the count made in the block 102 exceeds a threshold. In one embodiment, the threshold is equal to one-half of the number of macroblocks in either of the two frames 21 or 22.
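  • For illustration, the labeling, similarity and counting scheme above can be sketched as follows in Python. The 2-pixel magnitude threshold and quadrant boundaries match the example labeling given earlier, but they, like all names, are illustrative.

```python
# Sketch of blocks 101-103: a 0/1 magnitude label with a 2-pixel threshold,
# a 0-3 direction label by quadrant about the vertical, a similarity rule
# that accepts equal magnitude labels with non-opposite direction labels,
# and a count compared against half the number of macroblocks.

import math

def labels(mv):
    mag_label = 0 if math.hypot(mv[0], mv[1]) <= 2 else 1
    angle = math.degrees(math.atan2(mv[0], mv[1]))  # angle from vertical
    dir_label = int(((angle + 45) % 360) // 90)     # 0, 1, 2 or 3
    return mag_label, dir_label

def similar(mv_a, mv_b):
    (mag_a, dir_a), (mag_b, dir_b) = labels(mv_a), labels(mv_b)
    return mag_a == mag_b and (dir_a - dir_b) % 4 != 2  # not opposite

def reference_eligible(ref_mvs, cur_mvs):
    """ref_mvs and cur_mvs list the motion vectors (None if not properly
    received) of co-located macroblocks across the whole frame; reference
    MVs are eligible if more than half of the pairs are similar."""
    count = sum(1 for a, b in zip(ref_mvs, cur_mvs)
                if a is not None and b is not None and similar(a, b))
    return count > len(ref_mvs) / 2
```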
  • In the neighborhood of a macroblock of interest, there may be a motion boundary—objects on one side of a motion boundary may move differently from objects on the other side of the motion boundary. FIG. 11 is a flowchart 110 of one embodiment of a method for locating a motion boundary. The flowchart 110 describes one embodiment of a method of implementing the block 53 of FIG. 5. Note that, in one embodiment, the block 53 (and hence the method of the flowchart 110) is implemented depending on the outcome of the block 52 of FIG. 5.
  • In a block 111 of FIG. 11, a motion boundary is identified in a reference frame. In a block 112, the set of candidate motion vectors includes only those motion vectors that are associated with macroblocks in the reference frame that lie on the same side of the motion boundary as a macroblock in the reference frame that is co-located with a macroblock of interest in the current frame.
  • FIG. 11 is described further with reference also to FIG. 12. FIG. 12 shows a window 125 in a reference frame 121, and a window 126 in a current frame 122. It is understood that the reference frame 121 and the current frame 122 each include macroblocks in addition to the macroblocks included in the windows 125 and 126, respectively.
  • In one embodiment, in the block 111, a motion boundary 129 is identified in the reference frame 121. In one embodiment, the motion boundary 129 is identified in the following manner. Each of the motion vectors associated with the macroblocks in the window 125 in the reference frame 121 is assigned a magnitude label and a direction label. The discussion above in conjunction with FIG. 10 describes one method for labeling motion vectors.
  • The motion vector associated with the macroblock 124 in the reference frame 121 that is at the same position as the macroblock 123 of interest in the current frame 122 is classified as class 0. That is, the macroblock 124 is co-located with the macroblock 123 of interest, and as such, the motion vector associated with the macroblock 124 is identified as being the first member of a particular class (e.g., class 0).
  • The magnitude labels of the other motion vectors associated with the window 125 are each compared to the magnitude label of the motion vector associated with the macroblock 124, and the direction labels of the other motion vectors in the window 125 are each compared to the direction label of the motion vector associated with the macroblock 124.
  • In one embodiment, if the magnitude label for a motion vector is the same as that of the motion vector associated with the macroblock 124, and if the angle label for that motion vector is not opposite that of the motion vector associated with the macroblock 124, then that motion vector is defined as being similar to the motion vector associated with the macroblock 124, and that motion vector is also classified as class 0. The process just described is repeated for each motion vector associated with the window 125, to generate the local motion class map 127.
  • In one embodiment, in the block 112, only those motion vectors associated with the window 125 that are in the same class as the motion vector associated with the macroblock 124 are included in the set 128 of candidate motion vectors. In other words, in the present embodiment, only the motion vectors in the window 125 in the reference frame 121 that are on the same side of the motion boundary 129 as the macroblock 124 (the macroblock co-located with the macroblock 123 of interest) are included in the set 128 of candidate motion vectors. That is, in the example of FIG. 12, only the motion vectors classified as class 0 are included in the set 128. As described above, a statistical measure of the set 128 of candidate motion vectors is then determined (refer to the discussion of FIGS. 5 and 6).
  • Note that properly received motion vectors associated with the window 126 of the current frame 122 can also be included in the set 128 if they are associated with macroblocks that also lie on the same side of the motion boundary as the macroblock 123 of interest. For example, after the map 127 is determined, the macroblocks in the window 126 that are co-located with those macroblocks in the window 125 that are classified as class 0 can also be classified as class 0, and the motion vectors associated with those macroblocks in the window 126 can be included in the set 128.
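  • For illustration, the class-map selection can be sketched as follows in Python. The labels() and similar() helpers repeat the earlier sketch so the block stands alone; the sketch assumes the co-located macroblock's motion vector was properly received, and all names are illustrative.

```python
# Sketch of blocks 111-112: label each reference-window MV and keep only
# those similar to (i.e., in class 0 with) the MV of the macroblock that
# is co-located with the macroblock of interest.

import math

def labels(mv):
    mag_label = 0 if math.hypot(mv[0], mv[1]) <= 2 else 1
    dir_label = int(((math.degrees(math.atan2(mv[0], mv[1])) + 45) % 360) // 90)
    return mag_label, dir_label

def similar(mv_a, mv_b):
    (mag_a, dir_a), (mag_b, dir_b) = labels(mv_a), labels(mv_b)
    return mag_a == mag_b and (dir_a - dir_b) % 4 != 2

def class0_motion_vectors(ref_window, center):
    """ref_window lists the window's MVs (None if missing); center is the
    index of the co-located macroblock, whose MV anchors class 0."""
    anchor = ref_window[center]
    return [mv for mv in ref_window if mv is not None and similar(mv, anchor)]
```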
  • FIG. 13 is a flowchart 130 of one embodiment of a method that uses the trajectory of a moving object to select a candidate motion vector. The flowchart 130 describes one embodiment of a method of implementing the block 53 of FIG. 5.
  • In a block 131 of FIG. 13, an object in a first macroblock in a reference frame is identified. In a block 132, a motion vector associated with the object is included in the set of candidate motion vectors if the object sufficiently overlaps a co-located second macroblock in the current frame (that is, the first macroblock and the second macroblock are in the same position within their respective frames).
  • FIG. 13 is described further with reference also to FIG. 14. FIG. 14 shows a window 147 of a reference frame 141 and a window 148 of a current frame 142. The macroblock 143 is co-located with the macroblock 146. It is understood that the reference frame 141 and the current frame 142 each include macroblocks in addition to the macroblocks included in the windows 147 and 148, respectively.
  • In the block 131, in one embodiment, an object 144 within the reference frame 141, associated with the macroblock 143 that is co-located with the macroblock 146, is identified. In the current frame 142, the object 144 has moved to a different position, and is now associated with a macroblock 145.
  • In the block 132, in one embodiment, a determination is made as to whether the macroblock 145 that contains the object 144 overlaps the macroblock 146 by a sufficient amount. If so, the motion vector associated with the object 144 can be included in the set of candidate motion vectors (e.g., the set 65 of FIG. 6). If not, the motion vector associated with the object 144 is not included in the set.
  • Note that the method described in conjunction with FIGS. 13 and 14 can be similarly applied to any of the macroblocks within the windows 147 and 148. That is, although described for the center macroblock of the windows 147 and 148, the present invention is not so limited.
  • In one embodiment, an overlap of greater than or equal to 25 percent is considered sufficient. Various techniques can be used to determine whether the macroblock 145 overlaps the macroblock 146 by that amount. In one embodiment, the macroblocks 145 and 146 are each associated with a set of two-dimensional coordinates that define their respective positions within the current frame 142. Using these coordinates, for example, the corners of one of the macroblocks 145 and 146 can be compared to the midpoints of the sides of the other macroblock to determine whether the amount of overlap exceeds 25 percent. Thresholds other than 25 percent can be used.
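  • For illustration, the overlap test can also be computed exactly from macroblock coordinates, rather than by the corner/midpoint comparison mentioned above. The following Python sketch does so for axis-aligned 16×16 macroblocks; the 25 percent threshold and all names are illustrative.

```python
# Sketch of block 132: project the reference-frame macroblock along its
# motion vector into the current frame and measure how much of the
# co-located macroblock it covers.

MB = 16  # macroblock size in pixels

def overlap_fraction(tl_a, tl_b):
    """Fraction of a macroblock's area covered by the intersection of two
    16x16 macroblocks with top-left pixel coordinates tl_a and tl_b."""
    dx = max(0, MB - abs(tl_a[0] - tl_b[0]))
    dy = max(0, MB - abs(tl_a[1] - tl_b[1]))
    return (dx * dy) / (MB * MB)

def include_object_mv(mb146_tl, mb143_tl, mv, threshold=0.25):
    """Move macroblock 143 by its motion vector (giving macroblock 145)
    and include the MV if the overlap with macroblock 146 is sufficient."""
    mb145_tl = (mb143_tl[0] + mv[0], mb143_tl[1] + mv[1])
    return overlap_fraction(mb145_tl, mb146_tl) >= threshold
```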
  • The embodiments of FIGS. 9-14 have been described separately in order to more clearly describe certain aspects of the embodiments; however, it is appreciated that the embodiments may be implemented by combining different aspects of these embodiments. In one embodiment, one of the methods described in conjunction with FIGS. 9 and 10 is combined with one of the methods described in conjunction with FIGS. 11-14.
  • In summary, embodiments in accordance with the present invention provide methods and systems for temporal error concealment using motion vectors in the motion vector domain rather than pixel values in the pixel domain. Accordingly, computational complexity is reduced because distortion evaluations can be eliminated with regard to error concealment; the number of computation steps may be reduced by as much as 85 percent. Decoding delays are reduced from one frame to one slice of macroblocks; that is, in order to use neighboring motion vectors to estimate an absent motion vector, processing of only a slice (e.g., one row) of macroblocks may be delayed. Memory access times, and associated decoding delays, are reduced because memory accesses to retrieve pixel values can be eliminated with regard to error concealment. Yet the embodiments described herein yield PSNRs that are comparable to if not better than PSNRs associated with pixel-based error concealment techniques. Furthermore, embodiments can be implemented without having to make hardware changes.
  • The concepts described herein can be used for applications other than error concealment. For instance, embodiments can be used in motion estimation at an encoder. For example, in conventional hierarchical motion estimation, motion vectors found in the lowest spatial resolution are used as initial estimates of motion vectors for higher resolutions. Instead, motion vectors selected as described above can be used as the initial estimates to speed up motion estimation at the encoder.
  • Embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the claims below.

Claims (35)

1. A method of processing video data, said method comprising:
selecting a set of motion vectors from a first plurality of motion vectors associated with a first plurality of macroblocks in a first frame of said video data and from a second plurality of motion vectors associated with a second plurality of macroblocks in a second frame of said video data;
determining a statistical measure of said set of motion vectors, said statistical measure defining a motion vector for a macroblock of interest in said second plurality of macroblocks; and
applying said motion vector to said macroblock of interest.
2. The method of claim 1 wherein said determining comprises determining the median of said set, wherein said motion vector for said macroblock of interest is said median.
3. The method of claim 1 wherein said determining comprises determining the average of said set, wherein said motion vector for said macroblock of interest is said average.
4. The method of claim 1 wherein said set comprises:
said second plurality of motion vectors; and
motion vectors for selected macroblocks of said first plurality of macroblocks, wherein if there is a first motion vector for a first macroblock at a position in said first frame and also a second motion vector for a second macroblock at said position in said second frame, said second motion vector is included in said set and said first motion vector is not included in said set.
5. The method of claim 1 further comprising deciding whether motion vectors from said first plurality of motion vectors are eligible for inclusion in said set.
6. The method of claim 5 wherein said deciding comprises:
determining a first range of values for motion vectors associated with said first frame;
determining a second range of values for motion vectors associated with said second frame; and
comparing said first and second ranges of values, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said first and second ranges overlap by a specified amount.
7. The method of claim 5 wherein said deciding comprises:
comparing dimensions of pairs of motion vectors to determine whether motion vectors in each of said pairs are similar to each other, each of said pairs of motion vectors comprising a first motion vector associated with a first macroblock at a position in said first frame and a second motion vector associated with a second macroblock at said position in said second frame, wherein motion vectors are similar provided they satisfy a rule; and
counting the number of pairs of motion vectors in said first and second frames that are similar, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said number exceeds a threshold.
8. The method of claim 5 wherein said set comprises motion vectors associated with macroblocks of said first plurality of macroblocks that lie on the same side of a motion boundary as a macroblock in said first frame at the same position as said macroblock of interest.
9. The method of claim 5 wherein said set comprises motion vectors from said first plurality of motion vectors that are similar to a motion vector for a macroblock in said first plurality of macroblocks that is at the same position as said macroblock of interest, wherein motion vectors are similar provided they satisfy a rule.
10. The method of claim 1 wherein a motion vector associated with an object that is contained in a first macroblock in said first frame at the same position as a second macroblock in said second frame is included in said set provided that in said second frame said object overlaps said second macroblock by a specified amount.
11. The method of claim 1 wherein said first frame precedes said second frame in order of display.
12. A computer-usable medium having computer-readable program code embodied therein for causing a decoding device to perform a video data processing method comprising:
selecting a set of motion vectors from a first plurality of motion vectors associated with a first plurality of macroblocks in a first frame of video data and from a second plurality of motion vectors associated with a second plurality of macroblocks in a second frame of said video data;
determining a statistical measure of said set of motion vectors, said statistical measure defining a motion vector for a macroblock of interest in said second plurality of macroblocks; and
applying said motion vector to said macroblock of interest.
13. The computer-usable medium of claim 12 wherein said set comprises:
said second plurality of motion vectors; and
motion vectors for selected macroblocks of said first plurality of macroblocks, wherein if there is a first motion vector for a first macroblock at a position in said first frame and also a second motion vector for a second macroblock at said position in said second frame, said second motion vector is included in said set and said first motion vector is not included in said set.
14. The computer-usable medium of claim 12 wherein said computer-readable program code embodied therein causes said decoding device to perform said video data processing method further comprising:
determining a first range of values for motion vectors associated with said first frame;
determining a second range of values for motion vectors associated with said second frame; and
comparing said first and second ranges of values, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said first and second ranges overlap by a specified amount.
15. The computer-usable medium of claim 12 wherein said computer-readable program code embodied therein causes said decoding device to perform said video data processing method further comprising:
comparing dimensions of pairs of motion vectors to determine whether motion vectors in each of said pairs are similar to each other, each of said pairs of motion vectors comprising a first motion vector associated with a first macroblock at a position in said first frame and a second motion vector associated with a second macroblock at said position in said second frame, wherein motion vectors are similar provided they satisfy a rule; and
counting the number of pairs of motion vectors in said first and second frames that are similar, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said number exceeds a threshold.
16. The computer-usable medium of claim 12 wherein said set comprises motion vectors associated with macroblocks of said first plurality of macroblocks that lie on the same side of a motion boundary as a macroblock in said first frame at the same position as said macroblock of interest.
17. The computer-usable medium of claim 12 wherein a motion vector associated with an object that is contained in a first macroblock in said first frame at the same position as a second macroblock in said second frame is included in said set provided that in said second frame said object overlaps said second macroblock by a specified amount.
18. The computer-usable medium of claim 12 wherein said first frame precedes said second frame in order of display.
19. A system for processing video data, said system comprising:
means for selecting a set of motion vectors from a first plurality of motion vectors associated with a first plurality of macroblocks in a first frame of said video data and from a second plurality of motion vectors associated with a second plurality of macroblocks in a second frame of said video data;
means for determining a statistical measure of said set of motion vectors, said statistical measure defining a motion vector for a macroblock of interest in said second plurality of macroblocks; and
means for applying said motion vector to said macroblock of interest.
20. The system of claim 19 wherein said set comprises:
said second plurality of motion vectors; and
motion vectors for selected macroblocks of said first plurality of macroblocks, wherein if there is a first motion vector for a first macroblock at a position in said first frame and also a second motion vector for a second macroblock at said position in said second frame, said second motion vector is included in said set and said first motion vector is not included in said set.
21. The system of claim 19 further comprising means for deciding whether motion vectors from said first plurality of motion vectors are eligible for inclusion in said set.
22. The system of claim 21 wherein said means for deciding comprises:
means for determining a first range of values for motion vectors associated with said first frame;
means for determining a second range of values for motion vectors associated with said second frame; and
means for comparing said first and second ranges of values, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said first and second ranges overlap by a specified amount.
23. The system of claim 21 wherein said means for deciding comprises:
means for comparing dimensions of pairs of motion vectors to determine whether motion vectors in each of said pairs are similar to each other, each of said pairs of motion vectors comprising a first motion vector associated with a first macroblock at a position in said first frame and a second motion vector associated with a second macroblock at said position in said second frame, wherein motion vectors are similar provided they satisfy a rule; and
means for counting the number of pairs of motion vectors in said first and second frames that are similar, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said number exceeds a threshold.
24. The system of claim 21 wherein said set comprises motion vectors associated with macroblocks of said first plurality of macroblocks that lie on the same side of a motion boundary as a macroblock in said first frame at the same position as said macroblock of interest.
25. The system of claim 21 wherein said set comprises motion vectors from said first plurality of motion vectors that are similar to a motion vector for a macroblock in said first plurality of macroblocks that is at the same position as said macroblock of interest, wherein motion vectors are similar provided they satisfy a rule.
26. The system of claim 19 wherein a motion vector associated with an object that is contained in a first macroblock in said first frame at the same position as a second macroblock in said second frame is included in said set provided that in said second frame said object overlaps said second macroblock by a specified amount.
27. The system of claim 19 wherein said first frame precedes said second frame in order of display.
28. A device comprising:
a microprocessor; and
a memory unit coupled to said microprocessor, said memory unit containing instructions that when executed by said microprocessor implement a method for processing video data, said method comprising:
selecting a set of motion vectors from a first plurality of motion vectors associated with a first plurality of macroblocks in a first frame of said video data and from a second plurality of motion vectors associated with a second plurality of macroblocks in a second frame of said video data;
determining a statistical measure of said set of motion vectors, said statistical measure defining a motion vector for a macroblock of interest in said second plurality of macroblocks; and
applying said motion vector to said macroblock of interest.
29. The device of claim 28 wherein said set comprises:
said second plurality of motion vectors; and
motion vectors for selected macroblocks of said first plurality of macroblocks, wherein if there is a first motion vector for a first macroblock at a position in said first frame and also a second motion vector for a second macroblock at said position in said second frame, said second motion vector is included in said set and said first motion vector is not included in said set.
30. The device of claim 28 wherein said method further comprises:
determining a first range of values for motion vectors associated with said first frame;
determining a second range of values for motion vectors associated with said second frame; and
comparing said first and second ranges of values, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said first and second ranges overlap by a specified amount.
31. The device of claim 28 wherein said method further comprises:
comparing dimensions of pairs of motion vectors to determine whether motion vectors in each of said pairs are similar to each other, each of said pairs of motion vectors comprising a first motion vector associated with a first macroblock at a position in said first frame and a second motion vector associated with a second macroblock at said position in said second frame, wherein motion vectors are similar provided they satisfy a rule; and
counting the number of pairs of motion vectors in said first and second frames that are similar, wherein motion vectors from said first plurality of motion vectors are eligible for inclusion in said set if said number exceeds a threshold.
32. The device of claim 28 wherein said set comprises motion vectors associated with macroblocks of said first plurality of macroblocks that lie on the same side of a motion boundary as a macroblock in said first frame at the same position as said macroblock of interest.
33. The device of claim 28 wherein said set comprises motion vectors from said first plurality of motion vectors that are similar to a motion vector for a macroblock in said first plurality of macroblocks that is at the same position as said macroblock of interest, wherein motion vectors are similar provided they satisfy a rule.
34. The device of claim 28 wherein a motion vector associated with an object that is contained in a first macroblock in said first frame at the same position as a second macroblock in said second frame is included in said set provided that in said second frame said object overlaps said second macroblock by a specified amount.
35. The device of claim 28 wherein said first frame precedes said second frame in order of display.
US11/022,362 2004-12-22 2004-12-22 Temporal error concealment for video communications Abandoned US20060133495A1 (en)

Priority Applications (11)

Application Number Priority Date Filing Date Title
US11/022,362 US20060133495A1 (en) 2004-12-22 2004-12-22 Temporal error concealment for video communications
AT05855322T ATE494735T1 (en) 2004-12-22 2005-12-22 TEMPORARY ESTIMATION OF A MOTION VECTOR FOR VIDEO COMMUNICATIONS
EP05855322A EP1829383B1 (en) 2004-12-22 2005-12-22 Temporal estimation of a motion vector for video communications
KR1020077015762A KR100964407B1 (en) 2004-12-22 2005-12-22 Temporal estimation of a motion vector for video communications
DE602005025808T DE602005025808D1 (en) 2004-12-22 2005-12-22 TEMPORARY ESTIMATION OF A MOVEMENT VECTOR FOR VIDEO COMMUNICATIONS
PCT/US2005/046739 WO2006069297A1 (en) 2004-12-22 2005-12-22 Temporal estimation of a motion vector for video communications
CN2005800480510A CN101116345B (en) 2004-12-22 2005-12-22 Temporal estimation of a motion vector for video communications
JP2007548507A JP5021494B2 (en) 2004-12-22 2005-12-22 Temporal estimation of motion vectors for video communication
TW094145929A TW200637375A (en) 2004-12-22 2005-12-22 Temporal error concealment for video communications
US12/694,522 US8817879B2 (en) 2004-12-22 2010-01-27 Temporal error concealment for video communications
JP2011158191A JP5420600B2 (en) 2004-12-22 2011-07-19 Temporal estimation of motion vectors for video communication.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/022,362 US20060133495A1 (en) 2004-12-22 2004-12-22 Temporal error concealment for video communications

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/694,522 Division US8817879B2 (en) 2004-12-22 2010-01-27 Temporal error concealment for video communications

Publications (1)

Publication Number Publication Date
US20060133495A1 true US20060133495A1 (en) 2006-06-22

Family

ID=36177977

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/022,362 Abandoned US20060133495A1 (en) 2004-12-22 2004-12-22 Temporal error concealment for video communications
US12/694,522 Expired - Fee Related US8817879B2 (en) 2004-12-22 2010-01-27 Temporal error concealment for video communications

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/694,522 Expired - Fee Related US8817879B2 (en) 2004-12-22 2010-01-27 Temporal error concealment for video communications

Country Status (9)

Country Link
US (2) US20060133495A1 (en)
EP (1) EP1829383B1 (en)
JP (2) JP5021494B2 (en)
KR (1) KR100964407B1 (en)
CN (1) CN101116345B (en)
AT (1) ATE494735T1 (en)
DE (1) DE602005025808D1 (en)
TW (1) TW200637375A (en)
WO (1) WO2006069297A1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060198443A1 (en) * 2005-03-01 2006-09-07 Yi Liang Adaptive frame skipping techniques for rate controlled video encoding
US20060269153A1 (en) * 2005-05-11 2006-11-30 Fang Shi Temporal error concealment for bi-directinally predicted frames
US20070133686A1 (en) * 2005-12-14 2007-06-14 Samsung Electronics Co., Ltd. Apparatus and method for frame interpolation based on motion estimation
US20080095246A1 (en) * 2005-04-20 2008-04-24 Zhong Luo Method, receiver and transmitter for eliminating errors in h.264 compressed video transmission
US20080107180A1 (en) * 2006-11-03 2008-05-08 Samsung Electronics Co., Ltd. Method and apparatus for video predictive encoding and method and apparatus for video predictive decoding
US20080117978A1 (en) * 2006-10-06 2008-05-22 Ujval Kapasi Video coding on parallel processing systems
US20090190037A1 (en) * 2008-01-25 2009-07-30 Mediatek Inc. Method and integrated circuit for video processing
US20090270138A1 (en) * 2008-04-23 2009-10-29 Qualcomm Incorporated Coordinating power management functions in a multi-media device
US20090323809A1 (en) * 2008-06-25 2009-12-31 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US20100034270A1 (en) * 2008-08-05 2010-02-11 Qualcomm Incorporated Intensity compensation techniques in video processing
US20100046631A1 (en) * 2008-08-19 2010-02-25 Qualcomm Incorporated Power and computational load management techniques in video processing
US20100046637A1 (en) * 2008-08-19 2010-02-25 Qualcomm Incorporated Power and computational load management techniques in video processing
US20100118970A1 (en) * 2004-12-22 2010-05-13 Qualcomm Incorporated Temporal error concealment for video communications
US20100128791A1 (en) * 2007-04-20 2010-05-27 Canon Kabushiki Kaisha Video coding method and device
CN102131095A (en) * 2010-01-18 2011-07-20 联发科技股份有限公司 Motion prediction method and video encoding method
CN103124353A (en) * 2010-01-18 2013-05-29 联发科技股份有限公司 Motion prediction method and video coding method
US8879979B2 (en) 2003-10-24 2014-11-04 Qualcomm Incorporated Method and apparatus for seamlessly switching reception between multimedia streams in a wireless communication system
US8948258B2 (en) 2008-10-03 2015-02-03 Qualcomm Incorporated Video coding with large macroblocks
US20150195521A1 (en) * 2014-01-09 2015-07-09 Nvidia Corporation Candidate motion vector selection systems and methods
CN105100798A (en) * 2011-07-02 2015-11-25 三星电子株式会社 Method and apparatus for coding video, and method and apparatus for decoding video accompanied by inter prediction using collocated image
US20160261823A1 (en) * 2015-03-02 2016-09-08 Chih-Ta Star Sung Semiconductor display driver device, mobile multimedia apparatus and method for frame rate conversion

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8948262B2 (en) 2004-07-01 2015-02-03 Qualcomm Incorporated Method and apparatus for using frame rate up conversion techniques in scalable video coding
EP1772017A2 (en) 2004-07-20 2007-04-11 Qualcomm Incorporated Method and apparatus for encoder assisted-frame rate up conversion (ea-fruc) for video compression
US8553776B2 (en) 2004-07-21 2013-10-08 QUALCOMM Inorporated Method and apparatus for motion vector assignment
GB0500332D0 (en) * 2005-01-08 2005-02-16 Univ Bristol Enhanced error concealment
US8401082B2 (en) 2006-03-27 2013-03-19 Qualcomm Incorporated Methods and systems for refinement coefficient coding in video compression
US8848789B2 (en) 2006-03-27 2014-09-30 Qualcomm Incorporated Method and system for coding and decoding information associated with video compression
US8750387B2 (en) 2006-04-04 2014-06-10 Qualcomm Incorporated Adaptive encoder-assisted frame rate up conversion
US8634463B2 (en) 2006-04-04 2014-01-21 Qualcomm Incorporated Apparatus and method of enhanced frame interpolation in video compression
US8683213B2 (en) 2007-10-26 2014-03-25 Qualcomm Incorporated Progressive boot for a wireless device
EP2240905B1 (en) * 2008-01-11 2012-08-08 Zoran (France) Sparse geometry for super resolution video processing
US8634456B2 (en) 2008-10-03 2014-01-21 Qualcomm Incorporated Video coding with large macroblocks
US8619856B2 (en) 2008-10-03 2013-12-31 Qualcomm Incorporated Video coding with large macroblocks
US8976873B2 (en) * 2010-11-24 2015-03-10 Stmicroelectronics S.R.L. Apparatus and method for performing error concealment of inter-coded video frames
JP5649523B2 (en) * 2011-06-27 2015-01-07 日本電信電話株式会社 Video encoding method, apparatus, video decoding method, apparatus, and program thereof
MX2014000160A (en) * 2011-06-27 2014-02-19 Samsung Electronics Co Ltd Method and apparatus for encoding motion information, and method and apparatus for decoding same.
US9888421B2 (en) 2014-09-16 2018-02-06 Mediatek Inc. Method of enhanced bearer continuity for 3GPP system change

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5737022A (en) 1993-02-26 1998-04-07 Kabushiki Kaisha Toshiba Motion picture error concealment using simplified motion compensation
US5621467A (en) * 1995-02-16 1997-04-15 Thomson Multimedia S.A. Temporal-spatial error concealment apparatus and method for video signal processors
EP0897247A3 (en) * 1997-08-14 2001-02-07 Philips Patentverwaltung GmbH Method for computing motion vectors
US6865227B2 (en) * 2001-07-10 2005-03-08 Sony Corporation Error concealment of video data using motion vector data recovery
DE60135036D1 (en) 2001-10-05 2008-09-04 Mitsubishi Electric Corp Method and device for compensating erroneous motion vectors in image data
EP1395061A1 (en) * 2002-08-27 2004-03-03 Mitsubishi Electric Information Technology Centre Europe B.V. Method and apparatus for compensation of erroneous motion vectors in video data
WO2004030369A1 (en) * 2002-09-27 2004-04-08 Videosoft, Inc. Real-time video coding/decoding
US20060133495A1 (en) * 2004-12-22 2006-06-22 Yan Ye Temporal error concealment for video communications

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040141555A1 (en) * 2003-01-16 2004-07-22 Rault Patrick M. Method of motion vector prediction and system thereof

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879979B2 (en) 2003-10-24 2014-11-04 Qualcomm Incorporated Method and apparatus for seamlessly switching reception between multimedia streams in a wireless communication system
US8817879B2 (en) * 2004-12-22 2014-08-26 Qualcomm Incorporated Temporal error concealment for video communications
US20100118970A1 (en) * 2004-12-22 2010-05-13 Qualcomm Incorporated Temporal error concealment for video communications
US20060198443A1 (en) * 2005-03-01 2006-09-07 Yi Liang Adaptive frame skipping techniques for rate controlled video encoding
US8514933B2 (en) 2005-03-01 2013-08-20 Qualcomm Incorporated Adaptive frame skipping techniques for rate controlled video encoding
US20080095246A1 (en) * 2005-04-20 2008-04-24 Zhong Luo Method, receiver and transmitter for eliminating errors in H.264 compressed video transmission
US7660354B2 (en) * 2005-05-11 2010-02-09 Fang Shi Temporal error concealment for bi-directionally predicted frames
WO2006122314A3 (en) * 2005-05-11 2008-11-06 Qualcomm Inc Temporal error concealment for bi-directionally predicted frames
US20060269153A1 (en) * 2005-05-11 2006-11-30 Fang Shi Temporal error concealment for bi-directionally predicted frames
US20070133686A1 (en) * 2005-12-14 2007-06-14 Samsung Electronics Co., Ltd. Apparatus and method for frame interpolation based on motion estimation
US10841579B2 (en) 2006-10-06 2020-11-17 OL Security Limited Liability Company Hierarchical packing of syntax elements
US8259807B2 (en) 2006-10-06 2012-09-04 Calos Fund Limited Liability Company Fast detection and coding of data blocks
US20080117978A1 (en) * 2006-10-06 2008-05-22 Ujval Kapasi Video coding on parallel processing systems
US11665342B2 (en) * 2006-10-06 2023-05-30 Ol Security Limited Liability Company Hierarchical packing of syntax elements
US20210281839A1 (en) * 2006-10-06 2021-09-09 Ol Security Limited Liability Company Hierarchical packing of syntax elements
US20080298466A1 (en) * 2006-10-06 2008-12-04 Yipeng Liu Fast detection and coding of data blocks
US8861611B2 (en) 2006-10-06 2014-10-14 Calos Fund Limited Liability Company Hierarchical packing of syntax elements
US20090003453A1 (en) * 2006-10-06 2009-01-01 Kapasi Ujval J Hierarchical packing of syntax elements
US9667962B2 (en) 2006-10-06 2017-05-30 Ol Security Limited Liability Company Hierarchical packing of syntax elements
US8213509B2 (en) * 2006-10-06 2012-07-03 Calos Fund Limited Liability Company Video coding on parallel processing systems
US20080107180A1 (en) * 2006-11-03 2008-05-08 Samsung Electronics Co., Ltd. Method and apparatus for video predictive encoding and method and apparatus for video predictive decoding
US20100128791A1 (en) * 2007-04-20 2010-05-27 Canon Kabushiki Kaisha Video coding method and device
US9641861B2 (en) * 2008-01-25 2017-05-02 Mediatek Inc. Method and integrated circuit for video processing
US20090190037A1 (en) * 2008-01-25 2009-07-30 Mediatek Inc. Method and integrated circuit for video processing
US20090270138A1 (en) * 2008-04-23 2009-10-29 Qualcomm Incorporated Coordinating power management functions in a multi-media device
US8948822B2 (en) 2008-04-23 2015-02-03 Qualcomm Incorporated Coordinating power management functions in a multi-media device
US20090323809A1 (en) * 2008-06-25 2009-12-31 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US8908763B2 (en) 2008-06-25 2014-12-09 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US20100034270A1 (en) * 2008-08-05 2010-02-11 Qualcomm Incorporated Intensity compensation techniques in video processing
US8599920B2 (en) 2008-08-05 2013-12-03 Qualcomm Incorporated Intensity compensation techniques in video processing
US9462326B2 (en) 2008-08-19 2016-10-04 Qualcomm Incorporated Power and computational load management techniques in video processing
US9565467B2 (en) 2008-08-19 2017-02-07 Qualcomm Incorporated Power and computational load management techniques in video processing
US8948270B2 (en) 2008-08-19 2015-02-03 Qualcomm Incorporated Power and computational load management techniques in video processing
US8964828B2 (en) 2008-08-19 2015-02-24 Qualcomm Incorporated Power and computational load management techniques in video processing
US20100046631A1 (en) * 2008-08-19 2010-02-25 Qualcomm Incorporated Power and computational load management techniques in video processing
US20100046637A1 (en) * 2008-08-19 2010-02-25 Qualcomm Incorporated Power and computational load management techniques in video processing
US11039171B2 (en) 2008-10-03 2021-06-15 Velos Media, Llc Device and method for video decoding video blocks
US9788015B2 (en) 2008-10-03 2017-10-10 Velos Media, Llc Video coding with large macroblocks
US8948258B2 (en) 2008-10-03 2015-02-03 Qualcomm Incorporated Video coding with large macroblocks
US9930365B2 (en) 2008-10-03 2018-03-27 Velos Media, Llc Video coding with large macroblocks
US11758194B2 (en) 2008-10-03 2023-09-12 Qualcomm Incorporated Device and method for video decoding video blocks
US10225581B2 (en) 2008-10-03 2019-03-05 Velos Media, Llc Video coding with large macroblocks
CN103124353A (en) * 2010-01-18 2013-05-29 联发科技股份有限公司 Motion prediction method and video coding method
CN102131095A (en) * 2010-01-18 2011-07-20 联发科技股份有限公司 Motion prediction method and video encoding method
US10034014B2 (en) 2011-07-02 2018-07-24 Samsung Electronics Co., Ltd. Method and apparatus for coding video, and method and apparatus for decoding video accompanied by inter prediction using collocated image
US10397601B2 (en) 2011-07-02 2019-08-27 Samsung Electronics Co., Ltd. Method and apparatus for coding video, and method and apparatus for decoding video accompanied by inter prediction using collocated image
CN105100798A (en) * 2011-07-02 2015-11-25 三星电子株式会社 Method and apparatus for coding video, and method and apparatus for decoding video accompanied by inter prediction using collocated image
US20150195521A1 (en) * 2014-01-09 2015-07-09 Nvidia Corporation Candidate motion vector selection systems and methods
US9800825B2 (en) * 2015-03-02 2017-10-24 Chih-Ta Star Sung Semiconductor display driver device, mobile multimedia apparatus and method for frame rate conversion
US20160261823A1 (en) * 2015-03-02 2016-09-08 Chih-Ta Star Sung Semiconductor display driver device, mobile multimedia apparatus and method for frame rate conversion

Also Published As

Publication number Publication date
KR100964407B1 (en) 2010-06-15
EP1829383B1 (en) 2011-01-05
JP2008526119A (en) 2008-07-17
JP5420600B2 (en) 2014-02-19
US8817879B2 (en) 2014-08-26
WO2006069297A1 (en) 2006-06-29
ATE494735T1 (en) 2011-01-15
TW200637375A (en) 2006-10-16
EP1829383A1 (en) 2007-09-05
KR20070090242A (en) 2007-09-05
CN101116345B (en) 2010-12-22
JP2011254508A (en) 2011-12-15
DE602005025808D1 (en) 2011-02-17
CN101116345A (en) 2008-01-30
JP5021494B2 (en) 2012-09-05
US20100118970A1 (en) 2010-05-13

Similar Documents

Publication Title
US8817879B2 (en) Temporal error concealment for video communications
KR100446235B1 (en) Merging search method of motion vector using multi-candidates
US8879642B2 (en) Methods and apparatus for concealing corrupted blocks of video data
US8320470B2 (en) Method for spatial error concealment
US7072398B2 (en) System and method for motion vector generation and analysis of digital video clips
US20080285651A1 (en) Spatio-temporal boundary matching algorithm for temporal error concealment
US6590934B1 (en) Error concealment method
US20150201209A1 (en) Motion vector detection apparatus and method
KR20080021654A (en) Temporal error concealment for bi-directionally predicted frames
JP2009509413A (en) Adaptive motion estimation for temporal prediction filters for irregular motion vector samples
CN107820085B (en) Method for improving video compression coding efficiency based on deep learning
US8045619B2 (en) Motion estimation apparatus and method
US20110019740A1 (en) Video Decoding Method
US20050129124A1 (en) Adaptive motion compensated interpolating method and apparatus
JP2000224593A (en) Method and device for interpolating frame and recording medium recording the method
JP2001186521A (en) Image decoder and method therefor
CN105992012B (en) Error concealment method and device
EP3637769B1 (en) Method and device for determining video frame complexity measure
CN115086665A (en) Error code masking method, device, system, storage medium and computer equipment
JP2008508787A (en) Error concealment technology for inter-coded sequences
JP2008301270A (en) Moving image encoding device and moving image encoding method
KR20040027047A (en) Encoding/decoding apparatus and method for image using predictive scanning
Kazemi, Refinement of the recovered motion vectors for error concealment in HEVC
CN116847090A (en) Parallel encoding and decoding method and system based on desktop video region of interest
Radmehr et al. PCA-based hierarchical clustering approach for motion vector estimation in H.265/HEVC video error concealment

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YE, YAN;DANE, GOKEE;LEE, YEN-CHI;AND OTHERS;REEL/FRAME:016302/0791;SIGNING DATES FROM 20050513 TO 20050518

AS Assignment

Owner name: QUALCOMM INCORPORATED, A DELAWARE CORPORATION, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE SECOND AND THE FIFTH ASSIGNOR PREVIOUSLY RECORDED AT REEL 016302 FRAME 0791;ASSIGNORS:YE, YAN;DANE, GOKCE;LEE, YEN-CHI;AND OTHERS;REEL/FRAME:016611/0979;SIGNING DATES FROM 20050513 TO 20050518

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION