WO2011033972A1 - Video display device - Google Patents
Video display device
- Publication number
- WO2011033972A1 (PCT/JP2010/065373)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- still image
- image display
- display area
- motion vector
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0127—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
- H04N7/013—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter the incoming video signal comprising different parts having originally different frame rate, e.g. video and graphics
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0257—Reduction of after-image effects
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/10—Special adaptations of display systems for operation with variable images
- G09G2320/106—Determination of movement vectors or equivalent parameters within the image
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0435—Change or adaptation of the frame rate of the video stream
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/16—Determination of a pixel data signal depending on the signal applied in the previous frame
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/16—Calculation or use of calculated indices related to luminance levels in display data
Definitions
- the present invention relates to a video display device that performs frame rate conversion processing, and more particularly to a video display device that prevents video degradation when the frame rate conversion processing is executed.
- FRC processing detects the motion vector of an object from the difference between multiple (usually two) frames included in a video signal, creates an interpolation frame (also called an interpolated frame), and inserts it between the frames.
- Patent Document 1 discloses an image display device provided with technical means for accurately detecting the motion vector of a telop portion that moves vertically or horizontally with respect to the screen, thereby preventing video degradation of the telop portion.
- Patent Document 2 discloses a video processing apparatus provided with technical means for detecting, with high accuracy, motion vectors in a video in which a plurality of moving objects intersect, thereby suppressing breakdown of the entire video.
- However, when FRC processing is performed on a frame that includes a still image that remains stationary for a certain period of time, such as weather information, time information, channel information, an OSD, or captions, video degradation may occur in the still image display area where this still image is displayed.
- In particular, image degradation is likely to occur near the boundary between the still image display area and its periphery.
- FIGS. 6A to 6C are diagrams for explaining such video degradation.
- FIG. 6A schematically shows the video of frame #n, and FIG. 6B schematically shows the video of frame #n+1.
- FIG. 6C schematically shows the video of the interpolation frame #I generated by executing the FRC process.
- In FIG. 6A, 101 is the video of frame #n, 102 is an area in which a predetermined still image is displayed (hereinafter referred to as a still image display area), and 103 is the periphery of the still image display area 102. An object 104 is displayed in the lower left corner of the still image display area 102.
- In FIG. 6B, 101' is the video of frame #n+1, 102' is the still image display area, and 103' is the periphery of the still image display area 102'. An object 104' is displayed in the lower right corner of the still image display area 102'.
- The symbol P indicates a part of the video displayed in the still image display area.
- As is clear from the figures, the object 104 is moving toward the right of the screen.
- In FIG. 6C, 111 is the video of the interpolation frame #I, 112 is the still image display area, 113 is the periphery of the still image display area 112, and 114 is the object.
- The object 104 of frame #n is displayed as the object 114 below the center of the still image display area 112. However, as if dragged by the movement of the object 114, a part P of the still image displayed in the still image display area 102 of frame #n may be displayed above the object 114, and the video in the still image display area 112 may deteriorate.
- Such degradation is difficult to prevent with the motion vector detection processing and interpolation frame generation processing of conventional FRC processing.
- The technique of Patent Document 1 accurately detects the motion vector of a moving telop portion to prevent image degradation of that telop portion.
- The technique of Patent Document 2 detects, with high accuracy, the motion vectors in a video in which a plurality of moving objects intersect in order to suppress breakdown of the entire video. Neither technique can prevent the video degradation described above in the still image display area and at the boundary with its periphery.
- The present invention has been made in view of such a situation, and an object of the present invention is to provide a video display device that performs FRC processing while preventing video degradation in a still image display area and at the boundary portion with its periphery.
- The first technical means is a video display device comprising: a motion vector detection unit that detects a motion vector from the difference between a first video frame and a second video frame that are sequentially input; and an interpolation frame generation unit that generates, based on the detected motion vector, an interpolation frame to be inserted between the first video frame and the second video frame.
- The video display device further comprises a still image display area detection unit that detects an area in which a predetermined still image is displayed in the first video frame and the second video frame.
- When the magnitude of a motion vector around the still image display area detected by the motion vector detection unit is less than a predetermined threshold, the interpolation frame generation unit uses the pixel values of the still image display area of the first video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery; when the magnitude of the motion vector is equal to or greater than the predetermined threshold, it uses the pixel values of the still image display area of the second video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery.
- The second technical means is the first technical means, wherein, when the magnitude of the motion vector around the still image display area detected by the motion vector detection unit is less than the predetermined threshold, the interpolation frame generation unit uses the pixel values of the still image display area of the first video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery, and when the magnitude of the motion vector is equal to or greater than the predetermined threshold, it generates the interpolation pixel values of the still image display area of the interpolation frame and its periphery based on the magnitude of the motion vector and blurs those interpolation pixels.
- According to the present invention, in a video display device that executes FRC processing, video degradation at the boundary portion between the still image display area and its periphery can be prevented, so that the viewer can view the video comfortably.
- FIG. 1 is a functional block diagram of a video display device according to the present invention. FIG. 2 is a diagram explaining an example of the motion vector detection method. FIG. 3 is a diagram explaining the generation of an interpolation frame.
- FIG. 1 is a functional block diagram of a video display apparatus according to the present invention.
- the video display device 1 includes a tuner 11, a decoder 12, a video processing unit 13, an FRC unit 14, a memory 15, an OSD synthesis unit 16, a frame memory 17, an electrode driving unit 18, and a liquid crystal panel 19.
- the tuner 11 selects a desired channel from a digital broadcast signal received by an antenna (not shown), and outputs a digital received signal of the selected channel to the decoder 12.
- the decoder 12 decodes the encoded digital reception signal to generate a video signal, and outputs the generated video signal to the video processing unit 13.
- the video processing unit 13 performs image quality correction such as ⁇ correction and color correction on the video signal input from the decoder 12, and outputs the video signal subjected to the image quality correction to the FRC unit 14.
- The FRC unit 14 extracts two consecutive frames (original frames) from the video signal input from the video processing unit 13 and outputs them to the frame memory 17. It then generates an interpolation frame based on the two frames and converts the frame rate of the input video signal by inserting the interpolation frame between them. The frames stored in the frame memory 17 are output to the electrode driving unit 18.
- The motion vector detection unit 14a of the FRC unit 14 detects the motion vector of the video (object) between two frames from the difference between the two frames sequentially input to the frame memory 17.
- the interpolation frame generation unit 14b generates an interpolation frame to be inserted between two frames based on the motion vector detected by the motion vector detection unit 14a.
- The still image display area detection unit 14c detects the still image display area in a frame. This detection will be described in detail later with reference to FIG. 3.
- the memory 15 stores information such as OSD data 15a composed of bitmap data and the like, still image display area coordinate data 15b, and the like.
- The still image display area coordinate data 15b will be described in detail later.
- The OSD synthesis unit 16 combines the OSD data 15a in the memory 15 with the original frame in the frame memory 17 (also referred to as α blending).
- The frame memory 17 has a storage capacity for storing at least three frames: two original frames and one interpolation frame.
- the electrode driver 18 drives the scan electrodes and data electrodes of the liquid crystal panel 19 based on the frame of the frame memory 17.
- The liquid crystal panel 19 is an active matrix liquid crystal panel having a liquid crystal layer and electrodes for applying scanning signals and data signals to the liquid crystal layer. Instead of a liquid crystal panel, an organic EL panel or the like can also be used.
- FIG. 2 is a diagram illustrating an example of a motion vector detection method.
- In FIG. 2, frame #n (first frame) and frame #n+1 (second frame) are the frames (original frames) sequentially input to the frame memory 17, and the interpolation frame #I is the frame generated from frame #n and frame #n+1 by executing the FRC process.
- X represents the horizontal direction of the frame
- Y represents the vertical direction of the frame
- T represents time.
- The coordinates of the interpolation pixel Pi to be interpolated in the interpolation frame #I are taken to be (0, 0).
- The motion vector detection unit 14a determines a search region Rn and a search region Rn+1, which are the motion vector search regions for frame #n and frame #n+1.
- The search region Rn of frame #n is assumed to have a size of 9 pixels on the X axis and 9 pixels on the Y axis, centered on the pixel Pn of frame #n (see the straight line L) that is spatially located at the same position as the interpolation pixel Pi of the interpolation frame #I.
- Similarly, the search region Rn+1 of frame #n+1 is assumed to have a size of 9 pixels on the X axis and 9 pixels on the Y axis, centered on the pixel Pn+1 of frame #n+1 that is spatially located at the same position as the interpolation pixel Pi.
- The coordinates of the pixel Pn and the pixel Pn+1 are (0, 0).
- The motion vector detection unit 14a sets straight lines passing through the search region Rn of frame #n and the search region Rn+1 of frame #n+1.
- One such straight line connects the pixel Px(-3, 0) at the left center of the search region Rn, the interpolation pixel Pi, and the pixel Px+1(3, 0) at the right center of the search region Rn+1.
- Such straight lines are set for all pixels in the search region Rn and the search region Rn+1.
- For each straight line, the difference between the pixel value (also referred to as the luminance value) of the pixel in the search region Rn through which it passes and the pixel value of the pixel in the search region Rn+1 through which it passes is calculated.
- The straight line whose pair of pixels has the smallest difference is determined to be the motion vector of that pixel of frame #n.
- In the example of FIG. 2, the motion vector detection unit 14a detects, as the motion vector MV of the pixel Px, the straight line connecting the pixel Px of frame #n, the interpolation pixel Pi of the interpolation frame #I, and the pixel Px+1 of frame #n+1. That is, the pixel Px of frame #n moves to the pixel Px+1 of frame #n+1 through the position of the interpolation pixel Pi of the interpolation frame #I, along the direction indicated by the motion vector MV.
- The motion vector detection unit 14a also detects the magnitude of the motion vector MV.
- The interpolation frame generation unit 14b calculates (generates) the pixel value of the interpolation pixel Pi using the motion vector MV. For example, it calculates the pixel value of the interpolation pixel Pi as the average of the pixel value of the pixel Px in frame #n and the pixel value of the pixel Px+1 in frame #n+1 through which the motion vector MV passes. This interpolation pixel creation processing is executed for all pixels of the interpolation frame #I to generate the interpolation frame #I.
- By inserting this interpolation frame #I between frame #n and frame #n+1, the frame rate can be converted to 120 Hz when the frame rate of the input video signal is 60 Hz. The same applies when converting the frame rate to 240 Hz. Note that by replacing the repeated frames of a 2-3 pull-down video signal with interpolation frames, motion can be smoothed while keeping the frame rate at 60 Hz (also referred to as a de-judder function).
- FIG. 3A schematically shows the video of frame #n, FIG. 3B schematically shows the video of frame #n+1, and FIG. 3C schematically shows the video of the interpolation frame #I.
- In FIG. 3A, 51 is the video of frame #n, 52 is an area in which a predetermined still image is displayed (hereinafter referred to as a still image display area), and 53 is the periphery of the still image display area 52. An object 54 is displayed in the lower left corner of the still image display area 52.
- In FIG. 3B, 51' is the video of frame #n+1, 52' is the still image display area, and 53' is the periphery of the still image display area 52'. An object 54' is displayed in the lower left corner of the still image display area 52'.
- The still image display area (52, 52') is an area in which a video that remains stationary for a certain period of time is displayed, such as weather information, time information, channel information, an OSD image, or text information (a telop) displayed in the upper right or upper left corner of the video.
- The still image display area detection unit 14c of the present invention detects the still image display area 52 of frame #n. Similarly, it detects the still image display area 52' of frame #n+1.
- Various conventionally proposed methods can be used for this detection. For example, when a morning news program is being watched, there is a high possibility that weather information, time information, and the like will be displayed in the upper right or upper left corner of the video, so these portions may be regarded as still image display areas.
- the type of program being viewed can be obtained from the EPG data.
- the coordinate value of the still image display area is stored in the memory 15 as still image display area coordinate data 15b.
- When the OSD synthesis unit 16 is operating, that is, when OSD display is being performed, the still image display area may be detected based on the coordinate values of the area in the frame where the OSD is combined.
- The motion vector detection unit 14a of the present invention detects the motion vector of the periphery 53 of the still image display area 52 and the magnitude of this motion vector.
- Here, the motion vector of the object (pixels) 54 displayed in the periphery 53 of the still image display area 52 and its magnitude will be described.
- When the magnitude of the motion vector of the object 54 included in the periphery 53 of the still image display area 52 is less than a predetermined threshold, the interpolation frame generation unit 14b of the present invention uses the pixel values of the still image display area 52 and the periphery 53 of frame #n as the interpolation pixel values of the still image display area 62 and the periphery 63 of the interpolation frame #I, which are spatially located at the same positions as the still image display area 52 and the periphery 53 of frame #n.
- For example, the predetermined threshold may be 50% of the number of horizontal pixels of the still image display area 52; if the still image display area 52 is 100 pixels wide, the predetermined threshold is 50 pixels.
- Similarly, when the object moves in the vertical direction of the screen, the predetermined threshold may be 50% of the number of vertical pixels of the still image display area 52.
- The predetermined threshold can be set arbitrarily.
- In this case, the same video as the still image display area 52 and the periphery 53 of frame #n is displayed in the still image display area 62 and the periphery 63 of the interpolation frame #I (see reference numeral 54 in FIG. 3C).
- On the other hand, when the magnitude of the motion vector is equal to or greater than the predetermined threshold, the interpolation frame generation unit 14b uses the pixel values of the still image display area 52' and the periphery 53' of frame #n+1 as the interpolation pixel values of the still image display area 62 and the periphery 63 of the interpolation frame #I. That is, when the object 54 moves to a relatively large extent with respect to the still image display area 52, the same video as the still image display area 52' and the periphery 53' of frame #n+1 is displayed in the still image display area 62 and the periphery 63 of the interpolation frame #I (see reference numeral 54' in FIG. 3C). Reference numeral 55 in FIG. 3C will be described later.
- The size (range) of the periphery (53, 53', 63) of the still image display area (52, 52', 62) can be determined arbitrarily.
- Furthermore, the magnitude of the motion vector of the object 54 may be obtained not only from the motion vector of the object 54 in frame #n but also, for example, by referring to the average value of the motion vectors of all the pixels constituting the video 51 of frame #n, and the interpolation pixel values of the interpolation frame #I may be determined based on that magnitude as described above.
- the generation of the interpolation pixels in the video area other than the still video display area 62 and the periphery 63 of the interpolation frame #I may be executed by a normal interpolation process.
- When a scene change occurs, the video before and after the scene change is completely different, so the frame interpolation processing of the present invention is not executed.
- For example, as with breaking news, a telop (still image) may be displayed across the full width of the upper part of the screen.
- In this case, as described in the first embodiment, if the predetermined threshold is 50% of the number of horizontal pixels of the still image display area, the magnitude of the motion vector of the object cannot be expected to exceed the predetermined threshold (the object cannot be expected to move at high speed from the left edge of the screen to the right edge between two frames). Therefore, the video of the still image display area 52 and the periphery 53 of frame #n is always displayed in the still image display area 62 and the periphery 63 of the interpolation frame #I, and the video may not be displayed smoothly.
- FIG. 4 is a functional block diagram of the video display apparatus according to the second embodiment.
- the video display apparatus 2 is obtained by adding a blurring processing unit 14d to the video display apparatus 1 of FIG.
- Functional blocks of the video display device 2 that have the same functions as the functional blocks of the video display device 1 described with reference to FIG. 1 are given the same reference numerals, and their description is omitted.
- As in the first embodiment, when the magnitude of the motion vector of the object 54 is less than the predetermined threshold, the interpolation frame generation unit 14b' uses the pixel values of the still image display area 52 and the periphery 53 of frame #n as the interpolation pixel values of the still image display area 62 and the periphery 63 of the interpolation frame #I, which are spatially located at the same positions as the still image display area 52 and the periphery 53 of frame #n.
- the predetermined threshold is assumed to be the same value (number of pixels: 50) as the threshold described in the first embodiment.
- On the other hand, when the magnitude of the motion vector of the object 54 is equal to or greater than the predetermined threshold, the interpolation frame generation unit 14b' generates the interpolation pixel values of the still image display area 62 and the periphery 63 of the interpolation frame #I based on the magnitude of the motion vector. Specifically, as described with reference to FIG. 2, the interpolation frame generation unit 14b' may calculate (generate) the pixel values of the interpolation pixels of the still image display area 62 and the periphery 63 of the interpolation frame #I (see the interpolation pixel Pi in FIG. 2) by averaging the pixel values of the pixels of the still image display area 52 and the periphery 53 of frame #n (see the pixel Px in FIG. 2) and the pixel values of the pixels of the still image display area 52' and the periphery 53' of frame #n+1 (see the pixel Px+1 in FIG. 2).
- In this case, the object 54 is displayed, for example, like the object 55 in FIG. 3C.
- However, when the interpolation pixel values are calculated in this way, as described with reference to FIG. 6, a part of the still image displayed in the still image display area 52 of frame #n may be displayed above the moving object, and the video in the still image display area 62 of the interpolation frame #I may deteriorate.
- Therefore, the blur processing unit 14d of the FRC unit 14 performs processing to blur the video (interpolation pixels) of the still image display area 62 and the periphery 63 in the interpolation frame #I.
- The blur processing may be performed by applying a 3 × 3 filter, a 3 × 1 filter, a general median filter, or the like to the interpolation pixel values of the still image display area 62 and the periphery 63.
- FIG. 5 shows the interpolation frame #I of frame #n (see FIG. 3A) and frame #n+1 (see FIG. 3B) described with reference to FIG. 3, and corresponds to the interpolation frame #I of FIG. 3C.
- the periphery 63 of the still image display area 62 is divided into a first periphery 63a and a second periphery 63b for the sake of explanation.
- The blur processing unit 14d blurs the video in the still image display area 62 with a predetermined blur amount, and blurs the video in the periphery 63 with a blur amount that becomes smaller than the predetermined blur amount as the distance from the still image display area 62 increases. Specifically, the blur processing unit 14d applies a 5-tap filter (1, 1, 1, 1, 1) × 1/5 to the pixels (interpolation pixels) of the still image display area 62, a 3-tap filter (1, 1, 1) × 1/3 to the pixels (interpolation pixels) of the first periphery 63a, and a 3-tap filter (1, 2, 1) × 1/4 to the pixels (interpolation pixels) of the second periphery 63b. Note that the 5-tap filter (1, 1, 1, 1, 1) × 1/5 may be applied only to pixels in the vicinity of the boundary portion 64 instead of to the entire still image display area 62.
- The FRC unit 14 sequentially outputs frame #n, the interpolation frame #I, and frame #n+1 from the frame memory 17 to the electrode driving unit 18 to display the video on the liquid crystal panel 19.
- As a result, the video is displayed smoothly, and even if video degradation occurs due to the interpolation frame, the degradation can be made inconspicuous by blurring the video, so the viewer can watch the video comfortably.
- In the still image display area, the still image of frame #n (without blur), the still image of the interpolation frame #I (with blur), and the still image of frame #n+1 (without blur) are displayed in sequence. Since the unblurred video and the blurred video are visually integrated and the still image is seen clearly, there is no sense of incongruity in the displayed video.
- Note that when the frame rate is increased by replacing the repeated frames of a 2-3 pull-down video signal with interpolation frames, if only the interpolation frame #I is blurred, the sequence of the still image of frame #n (without blur) followed by four interpolation frames #I (with blur) is likely to cause a sense of incongruity, so it is preferable to blur the still image of frame #n as well.
- The interpolation frame generation unit 14b' may also generate the interpolation pixels of the still image display area 62 and the periphery 63 of the interpolation frame #I, and thereby determine their pixel values, by calculation using the following formula.
- Pi = (1 - k) × Px + k × Px+1 … (Equation 1)
- Here, Pi is the pixel value of an interpolation pixel in the still image display area 62 and the periphery 63 of the interpolation frame #I, Px is the pixel value of the corresponding pixel in the still image display area 52 and the periphery 53 of frame #n, and Px+1 is the pixel value of the corresponding pixel in the still image display area 52' and the periphery 53' of frame #n+1.
- k is the ratio of the magnitude of the motion vector of the object 54 (the number of pixels moved) to the number of horizontal pixels of the still image display area 52 when the object 54 moves in the horizontal direction of the screen.
- For example, if the number of horizontal pixels of the still image display area 52 is 100 and the magnitude of the motion vector of the object 54 is 10, then k = (magnitude of the motion vector of the object 54 (10)) / (number of horizontal pixels of the still image display area 52 (100)) = 0.1.
- As described above, the interpolation frame generation unit 14b' may determine the value of k based not only on the motion vector of the object 54 but also on the average value of the motion vectors of all the pixels constituting the video 51 of frame #n. In this way, the value of k can be determined in consideration of various factors.
- 52, 52', 62, 102, 102', 112 ... still image display area; 53, 53', 63, 103, 103', 113 ... periphery of still image display area; 54, 54', 55, 104, 104', 114 ... object; 63a ... first periphery; 63b ... second periphery; 64 ... boundary portion.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Systems (AREA)
- Liquid Crystal Display Device Control (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Abstract
Description
FRC processing detects the motion vector of an object from the difference between multiple (usually two) frames included in a video signal, creates an interpolation frame (also called an interpolated frame), and inserts it between the frames.
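As a non-authoritative illustration of this processing flow, the following minimal Python sketch doubles a 60 Hz sequence to 120 Hz by inserting one generated frame between every pair of original frames. The NumPy frame representation and the plain two-frame average standing in for motion-compensated interpolation are assumptions made for the example, not the FRC unit described in this publication.

```python
# Hypothetical sketch of frame rate conversion (FRC): not this publication's implementation.
# Frames are modeled as NumPy grayscale arrays; "interpolate" is a plain average stand-in,
# whereas a real FRC unit would use the detected motion vectors.
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Placeholder for motion-compensated interpolation between two original frames."""
    return ((frame_a.astype(np.uint16) + frame_b.astype(np.uint16)) // 2).astype(np.uint8)

def double_frame_rate(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Insert one interpolated frame between every pair of consecutive original frames."""
    output = []
    for a, b in zip(frames, frames[1:]):
        output.append(a)                   # original frame #n
        output.append(interpolate(a, b))   # interpolation frame #I
    output.append(frames[-1])              # last original frame
    return output

if __name__ == "__main__":
    frames_60hz = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 60, 120)]
    frames_120hz = double_frame_rate(frames_60hz)
    print(len(frames_60hz), "->", len(frames_120hz))  # 3 -> 5 frames
```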
For example, Patent Document 1 discloses an image display device provided with technical means for accurately detecting the motion vector of a telop portion that moves vertically or horizontally with respect to the screen, thereby preventing video degradation of the telop portion.
Patent Document 2 discloses a video processing apparatus provided with technical means for detecting, with high accuracy, motion vectors in a video in which a plurality of moving objects intersect, thereby suppressing breakdown of the entire video.
In FIG. 6B, 101' indicates the video of frame #n+1, 102' the still image display area, and 103' the periphery of the still image display area 102'; an object 104' is displayed in the lower right corner of the still image display area 102'. The symbol P indicates a part of the video displayed in the still image display area. As is clear from the figures, the object 104 is moving toward the right of the screen.
The object 104 of frame #n is displayed as the object 114 below the center of the still image display area 112; however, as if dragged by the movement of the object 114, a part P of the still image displayed in the still image display area 102 of frame #n is displayed above the object 114, and the video in the still image display area 112 may deteriorate.
FIG. 1 is a functional block diagram of a video display device according to the present invention. The video display device 1 includes a tuner 11, a decoder 12, a video processing unit 13, an FRC unit 14, a memory 15, an OSD synthesis unit 16, a frame memory 17, an electrode driving unit 18, and a liquid crystal panel 19.
The tuner 11 selects a desired channel from a digital broadcast signal received by an antenna (not shown) and outputs the digital reception signal of the selected channel to the decoder 12. The decoder 12 decodes the encoded digital reception signal to generate a video signal and outputs the generated video signal to the video processing unit 13. The video processing unit 13 performs image quality correction such as γ correction and color correction on the video signal input from the decoder 12 and outputs the corrected video signal to the FRC unit 14.
The interpolation frame generation unit 14b generates, based on the motion vector detected by the motion vector detection unit 14a, an interpolation frame to be inserted between the two frames.
The still image display area detection unit 14c detects the still image display area in a frame. This detection will be described in detail later with reference to FIG. 3.
The OSD synthesis unit 16 combines the OSD data 15a in the memory 15 with the original frame in the frame memory 17 (also referred to as α blending).
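α blending itself can be sketched with the standard formula out = α × OSD + (1 − α) × frame. The short Python sketch below is a generic illustration under the assumption of grayscale frames and a per-pixel α map; it is not the actual circuit of the OSD synthesis unit 16.

```python
# Hypothetical sketch of alpha blending an OSD image over an original frame.
# frame, osd, and alpha are same-shaped 2-D arrays; alpha is the OSD opacity
# per pixel (0.0 = frame only, 1.0 = OSD only).
import numpy as np

def alpha_blend(frame: np.ndarray, osd: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    blended = alpha * osd.astype(np.float32) + (1.0 - alpha) * frame.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```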
<Motion Vector Detection>
When the FRC unit 14 executes the FRC process, the motion vector detection unit 14a of the FRC unit 14 detects the motion vector of the video.
FIG. 2 is a diagram explaining an example of the motion vector detection method.
In FIG. 2, frame #n (first frame) and frame #n+1 (second frame) are the frames (original frames) sequentially input to the frame memory 17, and the interpolation frame #I is the frame generated from frame #n and frame #n+1 by executing the FRC process. X indicates the horizontal direction of the frame, Y indicates the vertical direction of the frame, and T indicates time. Here, the coordinates of the interpolation pixel Pi to be interpolated in the interpolation frame #I are taken to be (0, 0).
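The search described above (candidate straight lines through the interpolation pixel Pi, with the pair of pixels having the smallest difference selected) can be illustrated with the following Python sketch. The grayscale frames, the 9×9 window expressed as radius = 4, and the function name are assumptions made for the example; this is not the detection circuit of the motion vector detection unit 14a itself.

```python
# Hypothetical sketch of bidirectional motion vector search for one interpolation pixel.
# frame_n and frame_n1 are 2-D grayscale arrays; the 9x9 search window mirrors the
# example in FIG. 2 (9 pixels on the X axis and 9 on the Y axis around Pn / Pn+1).
import numpy as np

def detect_motion_vector(frame_n, frame_n1, y, x, radius=4):
    """Return ((dy, dx), interpolated_value) for the interpolation pixel Pi at (y, x)."""
    best = None
    h, w = frame_n.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = y - dy, x - dx        # candidate pixel Px in frame #n
            y1, x1 = y + dy, x + dx        # candidate pixel Px+1 in frame #n+1
            if not (0 <= y0 < h and 0 <= x0 < w and 0 <= y1 < h and 0 <= x1 < w):
                continue
            diff = abs(int(frame_n[y0, x0]) - int(frame_n1[y1, x1]))
            if best is None or diff < best[0]:
                # the straight line through Pi with the smallest difference wins
                best = (diff, (2 * dy, 2 * dx),
                        (int(frame_n[y0, x0]) + int(frame_n1[y1, x1])) // 2)
    _, motion_vector, interpolated_value = best
    return motion_vector, interpolated_value
```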
FIG. 3A is a diagram schematically showing the video of frame #n, FIG. 3B is a diagram schematically showing the video of frame #n+1, and FIG. 3C is a diagram schematically showing the video of the interpolation frame #I.
In FIG. 3B, 51' indicates the video of frame #n+1, 52' the still image display area, and 53' the periphery of the still image display area 52'; an object 54' is displayed in the lower left corner of the still image display area 52'.
That is, when the object 54 moves to a relatively large extent with respect to the still image display area 52, the same video as the still image display area 52' and the periphery 53' of frame #n+1 is displayed in the still image display area 62 and the periphery 63 of the interpolation frame #I (see reference numeral 54' in FIG. 3C). Reference numeral 55 in FIG. 3C will be described later.
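The selection rule of the first embodiment (copy the still image display area and its periphery from frame #n when the surrounding motion is below the threshold, and from frame #n+1 otherwise) can be sketched as below. The rectangle representation, the periphery margin, and the NumPy frames are assumptions for illustration; only the 50% threshold follows the example given above.

```python
# Hypothetical sketch of the first embodiment's rule for the still image display area:
# copy the still area and its periphery from frame #n when the surrounding motion is
# small, and from frame #n+1 when it is at or above the threshold.
import numpy as np

def fill_still_region(interp, frame_n, frame_n1, region, mv_magnitude, margin=8):
    """region = (top, left, height, width) of the still image display area.

    interp, frame_n, and frame_n1 are 2-D grayscale frames of identical shape.
    """
    top, left, height, width = region
    threshold = 0.5 * width               # e.g. 50% of the horizontal pixel count
    # expand the rectangle by `margin` pixels so the periphery is covered as well
    y0, y1 = max(0, top - margin), min(interp.shape[0], top + height + margin)
    x0, x1 = max(0, left - margin), min(interp.shape[1], left + width + margin)
    source = frame_n if mv_magnitude < threshold else frame_n1
    interp[y0:y1, x0:x1] = source[y0:y1, x0:x1]
    return interp
```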
When a scene change occurs, the video before and after the scene change is completely different, so the frame interpolation processing of the present invention is not executed.
For example, as with breaking news, a telop (still image) may be displayed across the full width of the upper part of the screen. In this case, as described in the first embodiment, if the predetermined threshold is set to 50% of the number of horizontal pixels of the still image display area, the magnitude of the motion vector of the object cannot be expected to exceed the predetermined threshold (the object cannot be expected to move at high speed from the left edge of the screen to the right edge between two frames). Therefore, the video of the still image display area 52 and the periphery 53 of frame #n is always displayed in the still image display area 62 and the periphery 63 of the interpolation frame #I, and the video may not be displayed smoothly.
Specifically, as described with reference to FIG. 2, the interpolation frame generation unit 14b' may calculate (generate) the pixel values of the interpolation pixels of the still image display area 62 and the periphery 63 of the interpolation frame #I (see the interpolation pixel Pi in FIG. 2) by averaging the pixel values of the pixels of the still image display area 52 and the periphery 53 of frame #n (see the pixel Px in FIG. 2) and the pixel values of the pixels of the still image display area 52' and the periphery 53' of frame #n+1 (see the pixel Px+1 in FIG. 2). In this case, the object 54 is displayed, for example, like the object 55 in FIG. 3C.
FIG. 5 shows the interpolation frame #I of frame #n (see FIG. 3A) and frame #n+1 (see FIG. 3B) described with reference to FIG. 3, and corresponds to the interpolation frame #I of FIG. 3C.
For the sake of explanation, the periphery 63 of the still image display area 62 is divided into a first periphery 63a and a second periphery 63b.
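A minimal sketch of the graduated blur described above for FIG. 5 follows: the 5-tap filter (1, 1, 1, 1, 1) × 1/5 inside the still image display area 62, the 3-tap filter (1, 1, 1) × 1/3 in the first periphery 63a, and the 3-tap filter (1, 2, 1) × 1/4 in the second periphery 63b. Treating the taps as a horizontal convolution applied row by row, and the slice-based region representation, are assumptions made for this example rather than the actual implementation of the blur processing unit 14d.

```python
# Hypothetical sketch of the graduated blur of the second embodiment:
# stronger blur inside the still image display area, weaker blur farther from it.
import numpy as np

def blur_rows(region: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """Convolve each row of `region` with the 1-D filter `taps` (same-length output)."""
    taps = taps / taps.sum()
    return np.vstack([np.convolve(row, taps, mode="same")
                      for row in region.astype(np.float32)])

def apply_graduated_blur(interp, still_area, first_periphery, second_periphery):
    """Each *area argument is a (y0, y1, x0, x1) slice of the 2-D interpolation frame."""
    filters = [
        (still_area,       np.array([1, 1, 1, 1, 1], dtype=np.float32)),  # 5-tap, 1/5 each
        (first_periphery,  np.array([1, 1, 1], dtype=np.float32)),        # 3-tap, 1/3 each
        (second_periphery, np.array([1, 2, 1], dtype=np.float32)),        # 3-tap, 1/4 weights
    ]
    for (y0, y1, x0, x1), taps in filters:
        interp[y0:y1, x0:x1] = blur_rows(interp[y0:y1, x0:x1], taps)
    return interp
```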
Pi = (1 - k) × Px + k × Px+1 … (Equation 1)
Here, Pi is the pixel value of an interpolation pixel in the still image display area 62 and the periphery 63 of the interpolation frame #I, Px is the pixel value of the corresponding pixel in the still image display area 52 and the periphery 53 of frame #n, and Px+1 is the pixel value of the corresponding pixel in the still image display area 52' and the periphery 53' of frame #n+1.
As described above, the interpolation frame generation unit 14b' may determine the value of k based not only on the motion vector of the object 54 but also on the average value of the motion vectors of all the pixels constituting the video 51 of frame #n. In this way, the value of k can be determined in consideration of various factors.
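Equation 1 together with the numerical example above (motion vector magnitude 10, 100 horizontal pixels, k = 0.1) translates directly into the short sketch below. The clamping of k to [0, 1] and the function names are assumptions added for illustration; they are not part of this publication.

```python
# Hypothetical sketch of Equation 1: Pi = (1 - k) * Px + k * Px+1,
# where k is the ratio of the object's motion vector magnitude to the
# horizontal pixel count of the still image display area.
def interpolation_weight(mv_magnitude: float, area_width_px: int) -> float:
    k = mv_magnitude / area_width_px
    return min(max(k, 0.0), 1.0)          # clamp so the blend stays between Px and Px+1

def interpolate_pixel(p_x: float, p_x1: float, k: float) -> float:
    return (1.0 - k) * p_x + k * p_x1     # Equation 1

if __name__ == "__main__":
    k = interpolation_weight(mv_magnitude=10, area_width_px=100)   # k = 0.1, as in the text
    print(k, interpolate_pixel(p_x=200, p_x1=100, k=k))            # 0.1 190.0
```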
Claims (2)
- In a video display device comprising: a motion vector detection unit that detects a motion vector from the difference between a first video frame and a second video frame that are sequentially input; and an interpolation frame generation unit that generates, based on the detected motion vector, an interpolation frame to be inserted between the first video frame and the second video frame,
the video display device comprising a still image display area detection unit that detects an area in which a predetermined still image is displayed in the first video frame and the second video frame,
wherein, when the magnitude of a motion vector around the still image display area detected by the motion vector detection unit is less than a predetermined threshold, the interpolation frame generation unit uses the pixel values of the still image display area of the first video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery, and when the magnitude of the motion vector is equal to or greater than the predetermined threshold, uses the pixel values of the still image display area of the second video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery.
- The video display device according to claim 1, wherein, when the magnitude of the motion vector around the still image display area detected by the motion vector detection unit is less than the predetermined threshold, the interpolation frame generation unit uses the pixel values of the still image display area of the first video frame and its periphery as the interpolation pixel values of the still image display area of the interpolation frame and its periphery, and when the magnitude of the motion vector is equal to or greater than the predetermined threshold, generates the interpolation pixel values of the still image display area of the interpolation frame and its periphery based on the magnitude of the motion vector and blurs those interpolation pixels.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP10817085A EP2479985A4 (en) | 2009-09-18 | 2010-09-08 | VIDEO DISPLAY DEVICE |
MX2012003206A MX2012003206A (es) | 2009-09-18 | 2010-09-08 | Aparato de visualizacion de imagen. |
RU2012115477/07A RU2012115477A (ru) | 2009-09-18 | 2010-09-08 | Устройство отображения изображения |
JP2011531894A JP5255704B2 (ja) | 2009-09-18 | 2010-09-08 | 映像表示装置 |
US13/496,830 US8830257B2 (en) | 2009-09-18 | 2010-09-08 | Image displaying apparatus |
CN201080041920.8A CN102577365B (zh) | 2009-09-18 | 2010-09-08 | 视频显示装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009217423 | 2009-09-18 | ||
JP2009-217423 | 2009-09-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011033972A1 true WO2011033972A1 (ja) | 2011-03-24 |
Family
ID=43758580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/065373 WO2011033972A1 (ja) | 2009-09-18 | 2010-09-08 | 映像表示装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US8830257B2 (ja) |
EP (1) | EP2479985A4 (ja) |
JP (1) | JP5255704B2 (ja) |
CN (1) | CN102577365B (ja) |
MX (1) | MX2012003206A (ja) |
RU (1) | RU2012115477A (ja) |
WO (1) | WO2011033972A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012238009A (ja) * | 2009-12-28 | 2012-12-06 | Canon Inc | 画像処理装置及び画像処理方法 |
JP2012249123A (ja) * | 2011-05-30 | 2012-12-13 | Jvc Kenwood Corp | 映像処理装置及び補間フレーム生成方法 |
CN103139524A (zh) * | 2011-12-05 | 2013-06-05 | 联想(北京)有限公司 | 视频优化方法以及信息处理设备 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9807339B2 (en) * | 2015-06-12 | 2017-10-31 | Sharp Laboratories Of America, Inc. | Frame rate conversion system |
KR102428272B1 (ko) * | 2015-08-24 | 2022-08-03 | 삼성전자주식회사 | 디스플레이 장치, 그 제어 방법 및 컴퓨터 판독가능 기록 매체 |
CN106157919B (zh) * | 2016-09-09 | 2019-01-18 | 京东方科技集团股份有限公司 | 一种液晶显示面板的显示方法及装置 |
CN114664209A (zh) * | 2020-12-23 | 2022-06-24 | 奇景光电股份有限公司 | 影像显示系统 |
KR20230039867A (ko) * | 2021-09-14 | 2023-03-22 | 삼성디스플레이 주식회사 | 잔상 분석부, 표시 장치, 및 표시 장치의 잔상 보상 방법 |
CN115334335B (zh) * | 2022-07-13 | 2024-01-09 | 北京优酷科技有限公司 | 视频插帧方法及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08149421A (ja) * | 1994-11-22 | 1996-06-07 | Oki Electric Ind Co Ltd | 動きベクトルを用いた動き内挿方法および動き内挿回路 |
JP3121519B2 (ja) * | 1994-12-12 | 2001-01-09 | 沖電気工業株式会社 | 動きベクトルを用いた動き内挿方法および動き内挿回路ならびに動きベクトル検出方法および動きベクトル検出回路 |
JP2007267360A (ja) * | 2006-02-28 | 2007-10-11 | Sharp Corp | 画像表示装置及び方法、画像処理装置及び方法 |
JP2008306330A (ja) * | 2007-06-06 | 2008-12-18 | Toshiba Corp | 情報処理装置、動きベクトル生成プログラムおよび補間画像生成プログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1404130A1 (en) * | 2002-09-24 | 2004-03-31 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for processing a video signal mixed with an additional image signal |
JP4157579B2 (ja) | 2006-09-28 | 2008-10-01 | シャープ株式会社 | 画像表示装置及び方法、画像処理装置及び方法 |
JP4303743B2 (ja) * | 2006-10-04 | 2009-07-29 | シャープ株式会社 | 画像表示装置及び方法、画像処理装置及び方法 |
JP2008160591A (ja) * | 2006-12-25 | 2008-07-10 | Hitachi Ltd | テレビジョン受信機及びそのフレームレート変換方法 |
JP4513819B2 (ja) | 2007-03-19 | 2010-07-28 | 株式会社日立製作所 | 映像変換装置、映像表示装置、映像変換方法 |
JP5023893B2 (ja) | 2007-08-31 | 2012-09-12 | ソニー株式会社 | 表示装置 |
KR20090054828A (ko) | 2007-11-27 | 2009-06-01 | 삼성전자주식회사 | Frc된 비디오에 gui를 부가하는 비디오 기기 및 그의gui 제공방법 |
CN101489031A (zh) * | 2009-01-16 | 2009-07-22 | 西安电子科技大学 | 基于运动分类的自适应帧速率上转换方法 |
JP4743912B2 (ja) * | 2009-02-23 | 2011-08-10 | キヤノン株式会社 | 画像表示システム、画像表示装置、および、画像表示装置の制御方法 |
-
2010
- 2010-09-08 US US13/496,830 patent/US8830257B2/en not_active Expired - Fee Related
- 2010-09-08 WO PCT/JP2010/065373 patent/WO2011033972A1/ja active Application Filing
- 2010-09-08 EP EP10817085A patent/EP2479985A4/en not_active Withdrawn
- 2010-09-08 MX MX2012003206A patent/MX2012003206A/es not_active Application Discontinuation
- 2010-09-08 CN CN201080041920.8A patent/CN102577365B/zh not_active Expired - Fee Related
- 2010-09-08 JP JP2011531894A patent/JP5255704B2/ja not_active Expired - Fee Related
- 2010-09-08 RU RU2012115477/07A patent/RU2012115477A/ru not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See also references of EP2479985A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP2479985A4 (en) | 2013-03-13 |
CN102577365A (zh) | 2012-07-11 |
JP5255704B2 (ja) | 2013-08-07 |
CN102577365B (zh) | 2014-12-24 |
JPWO2011033972A1 (ja) | 2013-02-14 |
US20120182311A1 (en) | 2012-07-19 |
RU2012115477A (ru) | 2013-10-27 |
MX2012003206A (es) | 2012-05-29 |
US8830257B2 (en) | 2014-09-09 |
EP2479985A1 (en) | 2012-07-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080041920.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10817085 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011531894 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2012/003206 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010817085 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 3038/CHENP/2012 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13496830 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012115477 Country of ref document: RU |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112012006169 Country of ref document: BR |