
CN116016831B - Low-time-delay image de-interlacing method and device - Google Patents

Low-time-delay image de-interlacing method and device

Info

Publication number
CN116016831B
CN116016831B (application CN202211593963.8A; also published as CN116016831A)
Authority
CN
China
Prior art keywords
pixel
field
interpolation
horizontal displacement
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211593963.8A
Other languages
Chinese (zh)
Other versions
CN116016831A (en)
Inventor
旷文彬
胡红阳
林宇辉
刘琛良
刘芸江
卢海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Bosun Network Technology Co ltd
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Original Assignee
Fuzhou Bosun Network Technology Co ltd
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Bosun Network Technology Co ltd and Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority to CN202211593963.8A
Publication of CN116016831A
Application granted
Publication of CN116016831B
Legal status: Active
Anticipated expiration

Landscapes

  • Television Systems (AREA)

Abstract

The application provides a low-delay image de-interlacing method and device, relating to the field of image processing. The method comprises the following steps: acquiring a current field and at least one field forward of the current field; performing horizontal displacement estimation on a target pixel in the current field according to the forward at least one field to obtain a horizontal displacement vector; determining a target interpolation mode according to whether the horizontal displacement vector is reliable, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel; and determining the pixel value of the pixel at the target position in the output frame according to the interpolation pixel value, the forward at least one field and the current field. Because only fields forward of the current field are used as reference images for de-interlacing, an output frame can be generated as soon as the current field is output, without waiting for field data after the current field, which reduces the time delay.

Description

Low-time-delay image de-interlacing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a low-delay image de-interlacing method and apparatus.
Background
Interlaced scanning (also known as interleaved scanning) is a technique in which the odd and even lines of successive video frames are scanned and transmitted alternately, which doubles the perceived frame rate of a video display without requiring extra bandwidth or acquisition capability. Interlaced standards, especially at high-definition resolutions, remain common playout and storage formats in the broadcast television field.
Since the latest coding schemes such as H.265, as well as mobile phone display screens, do not support the interlaced scanning standard, a de-interlacing (de-interleaving) algorithm is needed to convert interlaced pictures to a progressive format and obtain a better picture. Traditional de-interlacing methods that combine temporal and spatial interpolation use backward fields as reference images; that is, field a can only be de-interlaced after one or two further fields following it have been received. The output picture therefore lags the input picture by one or two fields, which makes such methods poorly suited to video transmission scenarios with strict delay requirements.
Disclosure of Invention
In view of this, the present application provides a low-delay image de-interlacing method and device, which are used for solving the problem of high delay of output pictures in the prior art, and the technical scheme is as follows:
a low latency image de-interlacing method comprising:
acquiring a current field and at least one field of the current field in the forward direction, wherein an output frame is generated when the current field is output;
performing horizontal displacement estimation on a target pixel in the current field according to at least one forward field to obtain a horizontal displacement vector, wherein the target pixel is a pixel at a target position which is not scanned in the current field;
Determining a target interpolation mode according to whether the horizontal displacement vector is credible or not, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of a target pixel;
the pixel value of the pixel at the target location in the output frame is determined based on the interpolated pixel value, the forward at least one field, and the current field.
Optionally, the forward at least one field includes a first field forward adjacent to the current field, a second field forward adjacent to the first field, a third field forward adjacent to the second field, and a fourth field forward adjacent to the third field;
performing horizontal displacement estimation on a target pixel in the current field according to at least one forward field, including:
setting a first pixel reference range in the current field, a second pixel reference range in the first field and a horizontal displacement threshold according to the target pixel;
determining pixel estimated ranges corresponding to the second field and the fourth field respectively according to the first pixel reference range and the horizontal displacement threshold;
according to pixels in a first pixel reference range in the current field, horizontal displacement prediction is carried out in a pixel prediction range corresponding to a second field and a fourth field respectively, and a first horizontal displacement sub-vector corresponding to the second field and a second horizontal displacement sub-vector corresponding to the fourth field are obtained respectively;
Under the condition that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be credible, determining a pixel processing range corresponding to the first field according to the second pixel reference range and the first horizontal displacement sub-vector, and determining a pixel estimated range corresponding to the third field according to the pixel processing range and a horizontal displacement threshold;
according to pixels in the pixel processing range in the first field, horizontal displacement estimation is carried out in the pixel estimation range corresponding to the third field, and a third horizontal displacement sub-vector corresponding to the third field is obtained;
the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are used as horizontal displacement vectors.
Optionally, determining whether the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be authentic includes:
calculating a first difference value between twice the first horizontal displacement sub-vector and the second horizontal displacement sub-vector, and if the absolute value of the first difference value is smaller than a preset first difference value threshold value, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be trusted, otherwise, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not trusted;
determining whether the horizontal displacement vector is authentic comprises:
And calculating a second difference value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector, and if the absolute value of the second difference value is smaller than a preset second difference value threshold value, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, otherwise, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible.
Optionally, determining a target interpolation mode according to whether the horizontal displacement vector is reliable or not, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel, including:
under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, taking a time interpolation mode as a target interpolation mode, and interpolating according to the time interpolation mode to obtain an interpolation pixel value of a target pixel, wherein the time interpolation mode is a mode of obtaining the interpolation pixel value according to at least one forward field interpolation;
under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible, taking a spatial interpolation mode as a target interpolation mode, and interpolating according to the spatial interpolation mode to obtain an interpolation pixel value of a target pixel, wherein the spatial interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation.
Optionally, the method further comprises:
and under the condition that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not credible, interpolating according to a spatial interpolation mode to obtain an interpolation pixel value of the target pixel.
Optionally, interpolating according to a time interpolation mode to obtain an interpolation pixel value of the target pixel, including:
calculating the average value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector;
selecting, from the first field, the pixel at the position shifted from the target position by half the average value as a first temporal interpolation reference pixel;
selecting, from the third field, the pixel at the position shifted from the target position by one and a half times the average value as a second temporal interpolation reference pixel;
an average pixel value of the first temporal interpolation reference pixel and the second temporal interpolation reference pixel is determined as an interpolation pixel value of the target pixel.
Optionally, interpolating according to a spatial interpolation mode to obtain an interpolation pixel value of the target pixel, including:
setting a third pixel reference range in the current field according to the target pixel;
and carrying out interpolation processing on the target pixel based on the pixel in the third pixel reference range in the current field to obtain an interpolation pixel value of the target pixel.
Optionally, determining the pixel value of the pixel at the target position in the output frame according to the interpolated pixel value, the forward at least one field and the current field includes:
determining interpolation variation tolerance according to the first field, the second field, the third field and the current field;
selecting a pixel at a target position from the first field, and taking the pixel value of the pixel as a candidate pixel value;
calculating a third difference value between the candidate pixel value and the interpolation pixel value;
if the absolute value of the third difference value is smaller than the interpolation variation tolerance, taking the interpolation pixel value as the pixel value of the pixel at the target position in the output frame;
and if the absolute value of the third difference value is greater than or equal to the interpolation variation tolerance, taking the value obtained by offsetting the candidate pixel value by the interpolation variation tolerance in the direction of the sign of the third difference value as the pixel value of the pixel at the target position in the output frame.
Optionally, determining the interpolation variation tolerance according to the first field, the second field, the third field and the current field includes:
selecting each pixel in the reference range of the first pixel from the current field, taking each selected pixel as a third pixel, and selecting the pixels at the same positions of each third pixel from the second field;
determining absolute values of pixel difference values of the third pixels at the same positions in the second field respectively, and taking the maximum value in the determined absolute values as a first maximum difference value;
Selecting each pixel in the second pixel reference range from the first field, taking each selected pixel as a fourth pixel, and selecting the pixels at the same positions of each fourth pixel from the third field;
determining absolute values of pixel difference values of the fourth pixels at the same positions in the third field respectively, and taking the maximum value in the determined absolute values as a second maximum difference value;
and determining the maximum value of the first maximum difference value and the second maximum difference value as interpolation variation tolerance.
A low-latency image de-interlacing device comprising:
the field acquisition module is used for acquiring a current field and at least one field in the forward direction of the current field, wherein an output frame is generated when the current field is output;
the horizontal displacement estimation module is used for carrying out horizontal displacement estimation on a target pixel in the current field according to at least one forward field to obtain a horizontal displacement vector, wherein the target pixel is a pixel at a target position which is not scanned in the current field;
the interpolation pixel value determining module is used for determining a target interpolation mode according to whether the horizontal displacement vector is credible or not and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel;
And the output frame pixel value determining module is used for determining the pixel value of the pixel at the target position in the output frame according to the interpolation pixel value, the forward at least one field and the current field.
According to the technical scheme, the low-delay image de-interlacing method provided by the application first acquires a current field and at least one field forward of the current field, then performs horizontal displacement estimation on a target pixel in the current field according to the forward at least one field to obtain a horizontal displacement vector, then determines a target interpolation mode according to whether the horizontal displacement vector is reliable and interpolates according to the target interpolation mode to obtain an interpolation pixel value of the target pixel, and finally determines the pixel value of the pixel at the target position in the output frame according to the interpolation pixel value, the forward at least one field and the current field. Because the application uses only fields forward of the current field as reference images for de-interlacing, an output frame can be generated as soon as the current field is output, without waiting for field data after the current field, and the time delay is therefore reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a low-delay image de-interlacing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of video data including a current field and the four forward neighboring fields;
FIG. 3 is a schematic diagram of a temporal interpolation result and a spatial interpolation result according to an embodiment of the present application;
fig. 4 is a schematic diagram of an output frame Pr according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a low-delay image de-interlacing device according to an embodiment of the present application;
fig. 6 is a hardware block diagram of an image de-interlacing device with low latency according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application provides a low-delay image de-interlacing method, which is described in detail by the following embodiments.
Referring to fig. 1, a flow chart of a low-latency image de-interlacing method according to an embodiment of the present application is shown, where the low-latency image de-interlacing method may include:
step S101, acquiring a current field and at least one field forward of the current field.
Wherein an output frame is generated when the current field is output.
It will be appreciated that, in interlaced scanning, each video frame in the video data contains two fields: one field is formed by the pixels of the odd lines (defined as the odd field), and the other is formed by the pixels of the even lines (defined as the even field). The two fields are transmitted alternately during transmission of the video data. For example, if the video data contains 5 video frames, the transmission order is: the odd field of video frame 1, the even field of video frame 1, the odd field of video frame 2, the even field of video frame 2, the odd field of video frame 3, the even field of video frame 3, the odd field of video frame 4, the even field of video frame 4, the odd field of video frame 5, the even field of video frame 5.
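As a small illustration of the field structure described above, the following sketch splits a progressive frame into its odd and even fields (a hypothetical NumPy helper; the function name and the convention that row 0 belongs to the odd field are assumptions, not part of the patent):

```python
import numpy as np

def split_into_fields(frame: np.ndarray):
    """Split a progressive frame (H x W) into its two interlaced fields.

    Row 0 is treated as the first (odd) scan line, so the odd field holds
    rows 0, 2, 4, ... and the even field holds rows 1, 3, 5, ...
    """
    odd_field = frame[0::2, :]   # odd display lines
    even_field = frame[1::2, :]  # even display lines
    return odd_field, even_field

# Transmission order for two frames: odd(1), even(1), odd(2), even(2), ...
```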
In this step, the current field may be a field formed by pixels in an odd line or a field formed by pixels in an even line, and the current field and at least one field in a forward direction of the current field may be obtained in this step, where the forward field of the current field refers to a field whose transmission time is earlier than that of the current field.
Optionally, the at least one field forward of the current field may include a first field adjacent to the current field in the forward direction, a second field adjacent to the first field in the forward direction, a third field adjacent to the second field in the forward direction, and a fourth field adjacent to the third field in the forward direction. If the current field is defined as the T field, then, ordered from nearest to farthest in time, the first field is the T-1 field, the second field is the T-2 field, the third field is the T-3 field, and the fourth field is the T-4 field.
For example, referring to the schematic diagram of video data including the current field and the four forward neighboring fields shown in FIG. 2, (a) is the T-4 field, (b) is the T-3 field, (c) is the T-2 field, (d) is the T-1 field, and (e) is the T field. In FIG. 2, scan lines 11, 31 and 51 are the (N-1)th scan lines in field T, field T-2 and field T-4, respectively; scan lines 12, 22 and 42 are the Nth scan lines in field T, field T-1 and field T-3, respectively; scan lines 13, 33 and 53 are the (N+1)th scan lines in field T, field T-2 and field T-4, respectively.
Of course, at least one of the fields in the forward direction obtained in this step may also include other fields, which are not particularly limited in the present application.
Step S102, horizontal displacement estimation is carried out on a target pixel in the current field according to at least one forward field, and a horizontal displacement vector is obtained.
The target pixel is a pixel at a target position not scanned in the current field; for example, as shown in (e) of fig. 2, the target pixel 100 to be interpolated is located on the Nth scan line in the current field T.
It should be appreciated that video data is made up of a plurality of consecutive video frames, and some content in the video will appear in multiple fields (although its position may vary). This embodiment defines pixels that depict the same content in different fields as corresponding pixels. For example, if field 1 and field 2 of a video of a person both contain a "face" pixel, then the "face" pixel in field 1 and the "face" pixel in field 2 are corresponding pixels.
This step can determine, from the pixels of the forward at least one field, the corresponding pixels of several pixels in the neighborhood of the target pixel, and obtain the horizontal displacement vector based on the distances between those neighborhood pixels and their corresponding pixels.
The forward at least one field obtained in the above step includes the first, second, third and fourth fields. Optionally, the fields actually used in this step may be the second field and the fourth field, or may be the first, second, third and fourth fields.
And step S103, determining a target interpolation mode according to whether the horizontal displacement vector is reliable or not, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel.
Because the calculation of the horizontal displacement vector is easily disturbed by various factors, it may be estimated incorrectly. To avoid an inaccurate interpolation pixel value caused by a wrongly estimated horizontal displacement vector, this step judges whether the horizontal displacement vector is reliable, selects a suitable target interpolation mode from a plurality of preset interpolation modes according to the judgment result, and then interpolates the target pixel according to the selected target interpolation mode to obtain the interpolated pixel value.
For convenience of the following description, the pixel value obtained by interpolation is defined as the interpolation pixel value. If the horizontal displacement vector is reliable, the horizontal displacement vector determined in the foregoing steps is relatively accurate; if it is not reliable, the horizontal displacement vector determined in the foregoing steps is inaccurate.
Optionally, the plurality of interpolation modes preset in this embodiment include a temporal interpolation mode and a spatial interpolation mode, where the temporal interpolation mode refers to a mode of obtaining an interpolation pixel value of the target pixel according to interpolation of pixels in at least one field in a forward direction, and the spatial interpolation mode refers to a mode of obtaining an interpolation pixel value of the target pixel according to interpolation of pixels in the current field.
It will be appreciated that if this step determines that the horizontal displacement vector is reliable, the corresponding pixels of the target pixel found in the forward at least one field according to the horizontal displacement vector are relatively accurate, and the interpolation pixel value can be obtained by interpolating those pixels. In this case, the temporal interpolation mode can be used as the target interpolation mode; that is, when the horizontal displacement vector is reliable, this step can interpolate the interpolation pixel value of the target pixel according to the temporal interpolation mode.
Otherwise, if this step determines that the horizontal displacement vector is not reliable, the corresponding pixel of the target pixel may not be found in the forward at least one field, or the found corresponding pixel may have a large error. In this case, the interpolation pixel value of the target pixel can be interpolated from the pixels of the current field; that is, the spatial interpolation mode can be used as the target interpolation mode, and when the horizontal displacement vector is not reliable, this step can interpolate the interpolation pixel value of the target pixel according to the spatial interpolation mode.
Referring to fig. 3, a schematic diagram of a temporal interpolation result and a spatial interpolation result is shown, where (a) is a spatial interpolation result Ps, and (b) is a temporal interpolation result Pt.
As shown in fig. 3, scanning lines 61 and 71 are the (N-1)th scanning lines of Ps and Pt, respectively; scanning lines 62 and 72 are the Nth scanning lines of Ps and Pt, respectively; scanning lines 63 and 73 are the (N+1)th scanning lines of Ps and Pt, respectively. Ps and Pt generated by the de-interlacing method in this embodiment are output in a progressive scan manner, where the (N-1)th scan line and the (N+1)th scan line belong to the same field and the Nth scan line belongs to the other field.
The position of the pixel 101 in Ps and the position of the pixel 102 in Pt are the same as the position of the target pixel 100 in the T field, and are target positions (for example, the target positions may be coordinates). The pixels in the scanning line 61 of Ps and the pixels in the scanning line 71 of Pt are the same as the pixels in the scanning line 11 in fig. 2 (e), and the pixels in the scanning line 63 of Ps and the pixels in the scanning line 73 of Pt are the same as the pixels in the scanning line 13 in fig. 2 (e).
In this embodiment, when the horizontal displacement vector is reliable, the interpolation pixel value of the target pixel 100 obtained by the temporal interpolation method is filled into pixel 102 of Pt; when the horizontal displacement vector is not reliable, the interpolation pixel value of the target pixel 100 obtained by the spatial interpolation method is filled into pixel 101 of Ps. In this case, Ps shown in fig. 3 (a) contains the interpolation pixel value of the target pixel obtained by the spatial interpolation method, and Pt shown in fig. 3 (b) contains the interpolation pixel value obtained by the temporal interpolation method.
Alternatively, in order to facilitate the processing in the subsequent step, in the case where the horizontal displacement vector is not reliable, the interpolated pixel value of the target pixel 100 obtained by the spatial interpolation method may be filled into the pixel 101 of Ps, and then the interpolated pixel value in the pixel 101 of Ps is filled into the pixel 102 of Pt, that is, in this case, the temporal interpolation result includes the interpolated pixel value of the target pixel obtained by the spatial interpolation method.
Step S104, determining the pixel value of the pixel at the target position in the output frame according to the interpolation pixel value, the forward at least one field and the current field.
The step may determine a pixel value for a pixel at a target location in the output frame based on interpolated pixel values in Ps or Pt, in combination with the forward at least one field and the current field.
Of course, if the interpolated pixel value in Ps is filled to Pt, this step may determine the pixel value of the pixel at the target position in the output frame based on the interpolated pixel value in Pt in combination with the forward at least one field and the current field.
For example, referring to fig. 4, a schematic diagram of an output frame Pr provided in this embodiment is shown. Scan line 81 is the (N-1)th scan line in Pr; scan line 82 is the Nth scan line in Pr; scan line 83 is the (N+1)th scan line in Pr. The output frame Pr finally obtained by the de-interlacing method in this embodiment is output in a progressive scan mode; in Pr, the (N-1)th scan line and the (N+1)th scan line belong to the same field, and the Nth scan line belongs to the other field.
The pixels in the scanning line 81 of Pr are the same as those in the scanning line 11 in (e) of fig. 2, and the pixels in the scanning line 83 of Pr are the same as those in the scanning line 13 in (e) of fig. 2.
The pixel 103 is a pixel at a target location in the output frame, and the pixel value of the pixel 103 can be determined in this step.
It should be noted that, in this embodiment, each pixel that is not scanned in the current field T shown in fig. 2 (e) is taken as a target pixel, and the pixel value of the pixel at the same position in the output frame Pr can be obtained according to the steps S101 to S104, so that the complete output frame Pr can be obtained.
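For orientation only, the sketch below shows a heavily simplified version of the overall flow of steps S101 to S104: the missing lines of the current field are filled from either a forward field or a vertical average of the current field, with a placeholder per-pixel criterion standing in for the horizontal-displacement reliability test of the patent. The function name, the helper logic and the criterion itself are assumptions, not the claimed method:

```python
import numpy as np

def deinterlace_simplified(current_field: np.ndarray, prev_field: np.ndarray,
                           current_is_top: bool) -> np.ndarray:
    """Simplified stand-in for steps S101-S104: fill the missing lines of the
    current field from the previous (forward) field or from a vertical average
    of the current field, per pixel."""
    h, w = current_field.shape
    cf = current_field.astype(np.float64)
    frame = np.empty((2 * h, w), dtype=np.float64)
    own = 0 if current_is_top else 1          # rows scanned by the current field
    other = 1 - own                           # rows that must be interpolated
    frame[own::2] = cf
    temporal = prev_field.astype(np.float64)  # forward field holds exactly the missing lines
    if current_is_top:
        neighbour = np.vstack([cf[1:], cf[-1:]])   # missing row k lies between rows k and k+1
    else:
        neighbour = np.vstack([cf[:1], cf[:-1]])   # missing row k lies between rows k-1 and k
    spatial = (cf + neighbour) / 2.0
    reliable = np.abs(temporal - spatial) < 32.0   # placeholder reliability criterion
    frame[other::2] = np.where(reliable, temporal, spatial)
    return frame
```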
The application provides a low-delay image de-interlacing method which first acquires a current field and at least one field forward of the current field, then performs horizontal displacement estimation on a target pixel in the current field according to the forward at least one field to obtain a horizontal displacement vector, then determines a target interpolation mode according to whether the horizontal displacement vector is reliable and interpolates according to the target interpolation mode to obtain an interpolation pixel value of the target pixel, and finally determines the pixel value of the pixel at the target position in the output frame according to the interpolation pixel value, the forward at least one field and the current field. Because the application uses only fields forward of the current field as reference images for de-interlacing, an output frame can be generated as soon as the current field is output, without waiting for field data after the current field, and the time delay is therefore reduced.
In one possible implementation, taking as an example that the at least one field includes a first field T-1 forward adjacent to the current field T, a second field T-2 forward adjacent to the first field, a third field T-3 forward adjacent to the second field, and a fourth field T-4 forward adjacent to the third field, a procedure of "estimating horizontal displacement of a target pixel in the current field according to the at least one field forward" will be described.
In the present embodiment, the process of step S102 includes a plurality of implementations, and the following two implementations are provided but not limited thereto.
The first implementation mode: and performing horizontal displacement estimation on a target pixel in the current field according to the second field and the fourth field. Optionally, the implementation of this implementation may include:
a1, setting a first pixel reference range and a horizontal displacement threshold value in the current field according to the target pixel.
Here, the first pixel reference range refers to a range made up of several pixels around the target pixel in the current field. For example, the pixels within the first pixel reference range may include pixels 112, 113 and 114 in scan line 11 and pixels 122, 123 and 124 in scan line 13 shown in (e) of fig. 2.
The horizontal displacement threshold is denoted λ.
A2, determining pixel estimated ranges corresponding to the second field and the fourth field respectively according to the first pixel reference range and the horizontal displacement threshold.
Specifically, the process of determining the pixel estimated range corresponding to the second field according to the first pixel reference range and the horizontal displacement threshold includes: determining the pixel range at the same position as the first pixel reference range in the second field, horizontally shifting the determined pixel range by ±λ, and forming the pixel estimated range corresponding to the second field from all pixels in the shifted range. For example, the pixels in the first pixel reference range include pixels 112, 113 and 114 in scan line 11 and pixels 122, 123 and 124 in scan line 13 shown in (e) of fig. 2, and the pixels at the same positions in the second field are pixels 312, 313 and 314 in scan line 31 and pixels 322, 323 and 324 in scan line 33 shown in (c) of fig. 2; if λ = 2, the pixels in the pixel estimated range corresponding to the second field include pixels 310-316 in scan line 31 and pixels 320-326 in scan line 33 shown in (c) of fig. 2.
Correspondingly, the process of determining the pixel estimated range corresponding to the fourth field according to the first pixel reference range and the horizontal displacement threshold includes: determining the pixel range at the same position as the first pixel reference range in the fourth field, horizontally shifting the determined pixel range by ±2λ, and forming the pixel estimated range corresponding to the fourth field from all pixels in the shifted range.
A3, horizontal displacement estimation is carried out in the pixel estimation range corresponding to the second field and the fourth field respectively according to the pixels in the first pixel reference range in the current field, and a first horizontal displacement sub-vector corresponding to the second field and a second horizontal displacement sub-vector corresponding to the fourth field are obtained respectively.
In the embodiment, the corresponding pixel of the pixel in the first pixel reference range in the current field can be determined in the pixel estimated range corresponding to the second field, and then the first horizontal displacement sub-vector is determined according to the determined corresponding pixel and the pixel in the pixel estimated range; similarly, the corresponding pixel of the pixel in the first pixel reference range in the current field can be determined in the pixel estimated range corresponding to the fourth field, and then the second horizontal displacement sub-vector is determined according to the determined corresponding pixel and the pixel in the pixel estimated range.
For convenience of the following description, the first horizontal displacement sub-vector is denoted λ1 and the second horizontal displacement sub-vector is denoted λ2.
A4, taking the first horizontal displacement sub-vector and the second horizontal displacement sub-vector as horizontal displacement vectors.
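A minimal sketch of the kind of horizontal block matching described in A1 to A4 is given below. It is hypothetical: the sum-of-absolute-differences cost, the window size and the function name are assumptions and are not taken from the patent:

```python
import numpy as np

def horizontal_displacement(ref_rows: np.ndarray, cand_rows: np.ndarray,
                            x: int, half_width: int, max_shift: int) -> int:
    """Find the horizontal shift of the reference block centred at column x that best
    matches cand_rows, searching shifts in [-max_shift, max_shift].

    ref_rows: the reference pixel range (e.g. scan lines 11 and 13 of the current field).
    cand_rows: the same scan lines of the field being searched (e.g. the second field).
    """
    width = ref_rows.shape[1]
    assert half_width <= x < width - half_width, "reference block must fit in the line"
    ref_block = ref_rows[:, x - half_width: x + half_width + 1].astype(np.int64)
    best_shift, best_cost = 0, None
    for s in range(-max_shift, max_shift + 1):
        lo, hi = x - half_width + s, x + half_width + 1 + s
        if lo < 0 or hi > width:
            continue                                   # window falls outside the field
        cost = np.abs(ref_block - cand_rows[:, lo:hi].astype(np.int64)).sum()
        if best_cost is None or cost < best_cost:
            best_shift, best_cost = s, int(cost)
    return best_shift

# e.g. lambda1 = horizontal_displacement(cur_lines, second_field_lines, x, 1, lam)
#      lambda2 = horizontal_displacement(cur_lines, fourth_field_lines, x, 1, 2 * lam)
```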
The second implementation mode: and performing horizontal displacement estimation on a target pixel in the current field according to the first field, the second field, the third field and the fourth field. Optionally, the implementation of this implementation may include:
B1, setting a first pixel reference range in the current field, a second pixel reference range in the first field and a horizontal displacement threshold according to the target pixel.
In this step, the description of the first pixel reference range and the horizontal displacement threshold may refer to the description in A1, and will not be described herein.
The second pixel reference range refers to a range made up of several pixels in the first field, for example, the pixels within the second pixel reference range include pixels 212, 213 and 214 in scan line 22 shown in (d) of fig. 2.
And B2, determining pixel estimated ranges corresponding to the second field and the fourth field respectively according to the first pixel reference range and the horizontal displacement threshold.
And B3, according to the pixels in the first pixel reference range in the current field, horizontal displacement estimation is carried out in the pixel estimation range corresponding to the second field and the fourth field respectively, and a first horizontal displacement sub-vector corresponding to the second field and a second horizontal displacement sub-vector corresponding to the fourth field are obtained respectively.
B2 and B3 correspond one-to-one to A2 and A3 above; refer to the detailed description of those steps, which is not repeated here.
And B4, under the condition that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be credible, determining a pixel processing range corresponding to the first field according to the second pixel reference range and the first horizontal displacement sub-vector, and determining a pixel estimated range corresponding to the third field according to the pixel processing range and the horizontal displacement threshold.
This step may first determine whether the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be reliable.
It will be appreciated by those skilled in the art that if the first horizontal displacement sub-vector λ1 and the second horizontal displacement sub-vector λ2 determined in the preceding steps are accurate, then λ2 and 2λ1 should be approximately equal. Therefore, this embodiment can calculate the first difference between twice the first horizontal displacement sub-vector and the second horizontal displacement sub-vector; if the absolute value of the first difference is smaller than the preset first difference threshold, the first and second horizontal displacement sub-vectors are determined to be suspected to be reliable, otherwise they are determined to be unreliable.
That is, this embodiment can calculate abs(λ2 - 2λ1); if the calculated value is smaller than the first difference threshold, the first and second horizontal displacement sub-vectors are considered suspected to be reliable; if the calculated value is greater than or equal to the first difference threshold, they are considered unreliable. Here abs denotes the absolute value.
In this embodiment, a confidence threshold Th may be set according to the actual situation; optionally, the first difference threshold is then 2Th.
When this step determines, according to the above process, that the first horizontal displacement sub-vector λ1 and the second horizontal displacement sub-vector λ2 are suspected to be reliable, a pixel processing range can be determined in the first field according to the second pixel reference range and the first horizontal displacement sub-vector.
Specifically, the horizontal coordinates of the pixels within the second pixel reference range are shifted by λ1/2 to obtain the pixel processing range corresponding to the first field T-1. For example, if the pixels in the second pixel reference range include pixels 212, 213 and 214 in scan line 22 shown in (d) of fig. 2 and λ1 = 8, then the pixels within the pixel processing range corresponding to the first field T-1 include pixels 216, 217 and 218 in scan line 22 of the first field T-1. A code sketch of this range derivation is given after B6 below.
The step can also determine a pixel estimated range corresponding to the third field according to the pixel processing range and the horizontal displacement threshold.
Specifically, the pixel range at the same position as the pixel processing range is first found in the third field T-3, then the pixels in the found pixel range are horizontally shifted by ±λ, and the pixels in the shifted range form the pixel estimated range corresponding to the third field.
For example, if the pixels in the pixel processing range include pixels 216, 217, and 218 in the 22 scan line of the first field T-1, then the pixels in the pixel range found in the third field T-3 include pixels 416, 417, and 418 in the 42 scan line of the T-3 field, and then the pixels 416, 417, and 418 may be horizontally shifted by ±λ to determine the pixel estimation range corresponding to the third field.
And B5, performing horizontal displacement estimation in a pixel estimation range corresponding to the third field according to pixels in a pixel processing range in the first field, and obtaining a third horizontal displacement sub-vector corresponding to the third field.
In this embodiment, the corresponding pixel of the pixel within the pixel processing range in the first field T-1 may be determined within the pixel estimation range corresponding to the third field, and then the third horizontal displacement sub-vector may be determined according to the determined corresponding pixel and the pixel within the pixel processing range.
For example, the pixels in the pixel processing range include pixels 216, 217 and 218 in the 22 scan line of the first field T-1, and then the horizontal displacement estimation of the pixel 217 can be performed in the pixel estimation range corresponding to the third field in this step, so as to obtain the third horizontal displacement sub-vector.
For convenience of the following description, the third horizontal displacement sub-vector is denoted λ3.
And B6, taking the first horizontal displacement sub-vector and the third horizontal displacement sub-vector as horizontal displacement vectors.
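The range derivation in B4 (shifting the second pixel reference range by λ1/2 and then widening by ±λ to obtain the search range used in B5) can be sketched as follows. The names are illustrative; integer column indices and the clipping to the field width are assumptions:

```python
def ranges_for_third_field(second_ref_cols, lambda1, lam, width):
    """Shift the second pixel reference range by lambda1/2 to get the pixel processing
    range in T-1, then widen it by +/-lam to get the estimation range searched in T-3
    (integer column indices, clipped to the field width)."""
    processing = [min(max(c + lambda1 // 2, 0), width - 1) for c in second_ref_cols]
    lo = max(min(processing) - lam, 0)
    hi = min(max(processing) + lam, width - 1)
    estimation = list(range(lo, hi + 1))
    return processing, estimation

# Example from the text: the columns of pixels 212-214, shifted by lambda1 // 2 = 4,
# give the columns of pixels 216-218; widening by +/-lam gives the T-3 search range.
```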
After determining the horizontal displacement vector via the above-mentioned A1 to A4 or B1 to B6 or other embodiments, it may be determined whether the horizontal displacement vector is reliable or not through step S103, and the target interpolation mode is determined according to whether the horizontal displacement vector is reliable or not, and the interpolation pixel value of the target pixel is obtained by interpolation according to the target interpolation mode.
If A4 takes the first horizontal displacement sub-vector λ1 and the second horizontal displacement sub-vector λ2 as the horizontal displacement vector, this embodiment can determine whether λ1 and λ2 are reliable; the specific judging process is the same as the process in B4 for judging whether λ1 and λ2 are suspected to be reliable, and is not repeated here.
If B6 takes the first horizontal displacement sub-vector λ1 and the third horizontal displacement sub-vector λ3 as the horizontal displacement vector, this embodiment can, for example, determine whether λ1 and λ3 are reliable.
It will be appreciated by those skilled in the art that if the first horizontal displacement sub-vector λ1 and the third horizontal displacement sub-vector λ3 determined in the preceding steps are accurate, then λ1 and λ3 should be approximately equal. Therefore, this embodiment can calculate the second difference between the first horizontal displacement sub-vector and the third horizontal displacement sub-vector; if the absolute value of the second difference is smaller than the preset second difference threshold, the first and third horizontal displacement sub-vectors are determined to be reliable, otherwise they are determined to be unreliable.
That is, this embodiment can calculate abs(λ3 - λ1); if the calculated value is smaller than the second difference threshold, the first and third horizontal displacement sub-vectors are considered reliable; if the calculated value is greater than or equal to the second difference threshold, they are considered unreliable.
Optionally, the second difference threshold is the confidence threshold Th described above.
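Both reliability tests can be written in a couple of lines; a sketch, assuming the thresholds 2Th and Th given above:

```python
def suspected_reliable(lambda1: float, lambda2: float, th: float) -> bool:
    """B4's pre-check: lambda2 should be close to twice lambda1 (first difference < 2*Th)."""
    return abs(lambda2 - 2.0 * lambda1) < 2.0 * th

def displacement_vector_reliable(lambda1: float, lambda3: float, th: float) -> bool:
    """Final check used in step S103: lambda1 and lambda3 should agree to within Th."""
    return abs(lambda3 - lambda1) < th
```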
As described above, "step S103, determining the target interpolation mode according to whether the horizontal displacement vector is reliable or not, and interpolating according to the target interpolation mode to obtain the interpolation pixel value of the target pixel" may include: under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, taking a time interpolation mode as a target interpolation mode, and interpolating according to the time interpolation mode to obtain an interpolation pixel value of a target pixel, wherein the time interpolation mode is a mode of obtaining the interpolation pixel value according to at least one forward field interpolation; under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible, taking a spatial interpolation mode as a target interpolation mode, and interpolating according to the spatial interpolation mode to obtain an interpolation pixel value of a target pixel, wherein the spatial interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation.
In an alternative embodiment, in this embodiment, in the case that it is determined that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not authentic, the interpolation pixel value of the target pixel may be obtained by interpolation in a spatial interpolation manner.
The following describes a procedure of "interpolation pixel value of the target pixel obtained by interpolation in the time interpolation manner" and "interpolation pixel value of the target pixel obtained by interpolation in the spatial interpolation manner".
Alternatively, the process of interpolating the interpolated pixel value of the target pixel in a time interpolation manner may include:
and C1, calculating the average value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector.
The foregoing step has determined that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are reliable, in which case an average of the two may be taken as the horizontal displacement vector for determining the corresponding pixel of the target pixel.
This embodiment denotes the average value as λ4, i.e. λ4 = (λ1 + λ3)/2.
C2, selecting, from the first field, the pixel at the position shifted from the target position by half the average value as the first temporal interpolation reference pixel.
Specifically, the pixel obtained by shifting pixel 213 (the pixel at the same position as the target pixel 100 in scan line 22 of the T-1 field shown in fig. 2 (d)) by λ4/2 serves as the first temporal interpolation reference pixel of the target pixel 100; this pixel is the corresponding pixel of the target pixel in the first field.
C3, selecting, from the third field, the pixel at the position shifted from the target position by one and a half times the average value as the second temporal interpolation reference pixel.
Specifically, the pixel obtained by shifting pixel 413 (the pixel at the same position as the target pixel 100 in scan line 42 of the T-3 field shown in fig. 2 (b)) by 3λ4/2 serves as the second temporal interpolation reference pixel of the target pixel 100; this pixel is the corresponding pixel of the target pixel in the third field.
And C4, determining the average pixel value of the first time interpolation reference pixel and the second time interpolation reference pixel as the interpolation pixel value of the target pixel.
In summary, for the temporal interpolation scheme, the corresponding pixels of the target pixel can be determined from the first field and the third field respectively based on λ4, and the interpolation pixel value under the temporal interpolation mode is then calculated from the two determined corresponding pixels. Since λ1 and λ3 are reliable, the determined λ4 is relatively accurate; and with a reliable λ4, the two corresponding pixels found in C2 and C3 are accurate, so the finally interpolated pixel value is more accurate.
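A minimal sketch of C1 to C4 follows. It assumes the shifts are rounded to whole pixels and applied in the positive column direction; the patent does not spell out either convention, so both are assumptions:

```python
import numpy as np

def temporal_interpolation(first_field_row: np.ndarray, third_field_row: np.ndarray,
                           x: int, lambda1: float, lambda3: float) -> float:
    """Temporal interpolation of the target pixel at column x (steps C1-C4).

    first_field_row / third_field_row: the scan lines of T-1 and T-3 that lie at the
    target pixel's vertical position (e.g. the lines containing pixels 213 and 413).
    """
    lambda4 = (lambda1 + lambda3) / 2.0               # C1: average of the two sub-vectors
    width = first_field_row.shape[0]
    x1 = int(round(x + lambda4 / 2.0))                # C2: shift by lambda4/2 in T-1
    x3 = int(round(x + 3.0 * lambda4 / 2.0))          # C3: shift by 3*lambda4/2 in T-3
    x1 = min(max(x1, 0), width - 1)
    x3 = min(max(x3, 0), width - 1)
    return (float(first_field_row[x1]) + float(third_field_row[x3])) / 2.0   # C4
```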
Alternatively, the process of interpolating the interpolated pixel value of the target pixel by the spatial interpolation method may include:
d1, setting a third pixel reference range in the current field according to the target pixel.
For example, the pixels within the third pixel reference range set in this step include pixels 110 to 116 in scan line 11 and pixels 120 to 126 in scan line 13 shown in (e) of fig. 2.
And D2, carrying out interpolation processing on the target pixel based on the pixel in the third pixel reference range in the current field to obtain an interpolation pixel value of the target pixel.
In summary, in the present embodiment, under the condition that the horizontal displacement vector is not reliable, the interpolation pixel value is obtained through a spatial interpolation mode, so that the problem that the calculated interpolation pixel value is inaccurate due to the estimation error of the horizontal displacement vector is avoided.
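The patent does not fix a particular spatial kernel for D1 and D2, so the sketch below simply averages the scan lines directly above and below the target pixel; treating that average as the spatial interpolation is an assumption:

```python
import numpy as np

def spatial_interpolation(line_above: np.ndarray, line_below: np.ndarray, x: int) -> float:
    """Spatial interpolation of the target pixel at column x from the scan lines directly
    above and below it in the current field (e.g. scan lines 11 and 13)."""
    return (float(line_above[x]) + float(line_below[x])) / 2.0
```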
In another embodiment of the present application, a description is given of a process of "determining a pixel value of a pixel at a target position in an output frame based on the interpolated pixel value, the forward at least one field, and the current field" in step S104.
Optionally, "step S104, determining the pixel value of the pixel at the target position in the output frame based on the interpolated pixel value, the forward at least one field, and the current field" may include:
And E1, determining interpolation variation tolerance according to the first field, the second field, the third field and the current field.
Specifically, the process of this step may include:
and E11, selecting each pixel in the reference range of the first pixel from the current field, taking each selected pixel as a third pixel, and selecting the pixels at the same positions of each third pixel from the second field.
For example, the pixels in the first pixel reference range include pixels 112, 113 and 114 in scan line 11 and pixels 122, 123 and 124 in scan line 13 shown in (e) of fig. 2; the pixels then selected in the second field are pixels 312, 313 and 314 in scan line 31 and pixels 322, 323 and 324 in scan line 33 shown in (c) of fig. 2.
And E12, determining absolute values of pixel difference values of the third pixels and the pixels at the same positions in the second field respectively, and taking the maximum value in the determined absolute values as a first maximum difference value.
For example, for the example given in step E11 above, the first maximum difference value determined in this step is diff1 = max(abs(V112 - V312), abs(V113 - V313), abs(V114 - V314), abs(V122 - V322), abs(V123 - V323), abs(V124 - V324)), where V followed by a pixel number denotes the value of that pixel, abs denotes the absolute value, max denotes the maximum value, and diff1 denotes the first maximum difference value.
E13, selecting each pixel in the second pixel reference range from the first field, taking each selected pixel as a fourth pixel, and selecting the pixels at the same positions of each fourth pixel from the third field.
For example, the pixels in the second pixel reference range include pixels 212, 213 and 214 in scan line 22 shown in fig. 2 (d), and the pixels selected in the third field are pixels 412, 413 and 414 in scan line 42 shown in fig. 2 (b).
And E14, determining absolute values of pixel difference values of the fourth pixels and the pixels at the same positions in the third field respectively, and taking the maximum value in the determined absolute values as a second maximum difference value.
For example, for the example given in step E13 above, the second maximum difference value determined in this step is diff2 = max(abs(V212 - V412), abs(V213 - V413), abs(V214 - V414)), where diff2 denotes the second maximum difference value.
And E15, determining the maximum value of the first maximum difference value and the second maximum difference value as interpolation variation tolerance.
That is, the interpolation variation tolerance is denoted diff, where diff = max(diff1, diff2). A combined code sketch of E11 to E15 and E2 to E4b is given after E4b below.
And E2, selecting a pixel at the target position from the first field, and taking the pixel value of the pixel as a candidate pixel value.
For example, for the first field shown in fig. 2 (d), the pixel at the target position is pixel 213, and the candidate pixel value is the pixel value of pixel 213.
And E3, calculating a third difference value between the candidate pixel value and the interpolation pixel value.
For example, if the interpolation pixel value obtained in the spatial interpolation method is filled into Pt, the difference between the candidate pixel value and the interpolation pixel value at the target position in Pt (i.e., the value of the pixel 102 shown in fig. 3) can be calculated, and the difference calculated in this step is referred to as a third difference in order to be distinguished from the difference calculated in the previous step.
And E4a, if the absolute value of the third difference value is smaller than the interpolation variation tolerance, taking the interpolation pixel value as the pixel value of the pixel at the target position in the output frame.
For example, referring to fig. 3 and 4, if the absolute value of the third difference is smaller than the tolerance of interpolation variation, the interpolated pixel value (for example, the value of the pixel 102 in Pt in fig. 3) is filled into the position 103 corresponding to the output frame Pr shown in fig. 4, so as to obtain the pixel value of the pixel 103.
And E4b, if the absolute value of the third difference value is greater than or equal to the interpolation variation tolerance, taking the value of the candidate pixel value after the interpolation variation tolerance is shifted according to the sign of the third difference value as the pixel value of the pixel at the target position in the output frame.
For example, the pixel value of pixel 103 in the output frame Pr of fig. 4 is V103 = V213 + Sign(V102 - V213) × diff, where Sign takes the sign (positive or negative) of its argument and diff is the interpolation variation tolerance calculated in E15.
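Steps E1 to E4b can be sketched together: the interpolation variation tolerance diff is the larger of the two maximum differences, and the interpolated value is then clamped so that it never deviates from the candidate value (the pixel at the target position in the first field) by more than diff. The helpers below are hypothetical; the pixel reference ranges are passed in as NumPy arrays of matching shape:

```python
import numpy as np

def interpolation_variation_tolerance(first_ref_cur: np.ndarray, first_ref_second: np.ndarray,
                                      second_ref_first: np.ndarray, second_ref_third: np.ndarray) -> float:
    """diff = max(diff1, diff2): diff1 compares the first pixel reference range of the
    current field with the same positions in the second field (E11-E12); diff2 compares
    the second pixel reference range of the first field with the same positions in the
    third field (E13-E14)."""
    diff1 = np.max(np.abs(first_ref_cur.astype(np.int64) - first_ref_second.astype(np.int64)))
    diff2 = np.max(np.abs(second_ref_first.astype(np.int64) - second_ref_third.astype(np.int64)))
    return float(max(diff1, diff2))                        # E15

def output_pixel_value(interp: float, candidate: float, diff: float) -> float:
    """E2-E4b: keep the interpolated value if it lies within diff of the candidate value,
    otherwise offset the candidate value by diff in the direction of the difference
    (V103 = V213 + Sign(V102 - V213) * diff in the text's notation)."""
    third_diff = interp - candidate                        # E3
    if abs(third_diff) < diff:                             # E4a
        return interp
    return candidate + (diff if third_diff >= 0 else -diff)   # E4b
```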
In summary, in the low-delay image de-interlacing method provided by the present application, when video data is de-interlaced, an output frame can be generated at the moment the current field is output, based only on at least one field forward of the current field; that is, the application can generate de-interlaced image data in real time while the image pixel sequence is being received. Because generating the output frame corresponding to the current field does not depend on any field backward of the current field, the time delay is reduced, and the method performs better in video transmission scenarios with strict delay requirements.
The embodiment of the application also provides a low-delay image de-interlacing device, which is described below, and the low-delay image de-interlacing device and the low-delay image de-interlacing method described below can be referred to correspondingly.
Referring to fig. 5, a schematic structural diagram of a low-latency image de-interlacing device according to an embodiment of the present application is shown, and as shown in fig. 5, the low-latency image de-interlacing device may include: a field acquisition module 501, a horizontal displacement estimation module 502, an interpolation pixel value determination module 503, and an output frame pixel value determination module 504.
The field acquisition module 501 is configured to acquire the current field and at least one field forward of the current field, wherein an output frame is generated when the current field is output.
The horizontal displacement estimation module 502 is configured to perform horizontal displacement estimation on a target pixel in the current field according to at least one forward field to obtain a horizontal displacement vector, where the target pixel is a pixel at a target position that is not scanned in the current field.
The interpolation pixel value determining module 503 is configured to determine a target interpolation mode according to whether the horizontal displacement vector is reliable, and interpolate according to the target interpolation mode to obtain an interpolation pixel value of the target pixel.
An output frame pixel value determination module 504 for determining a pixel value of a pixel at a target location in the output frame based on the interpolated pixel value, the forward at least one field, and the current field.
The application provides a low-delay image de-interlacing device. The device first obtains the current field and at least one forward field of the current field, then performs horizontal displacement estimation on a target pixel in the current field according to the at least one forward field to obtain a horizontal displacement vector, then determines a target interpolation mode according to whether the horizontal displacement vector is reliable and interpolates according to the target interpolation mode to obtain an interpolated pixel value of the target pixel, and finally determines the pixel value of the pixel at the target position in the output frame according to the interpolated pixel value, the at least one forward field, and the current field. The application uses at least one field forward of the current field as the reference image for de-interlacing, so that an output frame can be generated when the current field is output without waiting for field data after the current field, and the delay is therefore reduced.
In one possible implementation, the at least one field includes a first field forward adjacent to the current field, a second field forward adjacent to the first field, a third field forward adjacent to the second field, and a fourth field forward adjacent to the third field.
Then, the horizontal displacement estimation module 502 may include: a first reference information setting unit, a first pixel estimated range determining unit, a first horizontal displacement estimating unit, a second pixel estimated range determining unit, a second horizontal displacement estimating unit, and a horizontal displacement vector determining unit.
A first reference information setting unit for setting a first pixel reference range in the current field, a second pixel reference range in the first field, and a horizontal displacement threshold according to the target pixel.
The first pixel estimated range determining unit is used for determining the pixel estimated ranges corresponding to the second field and the fourth field respectively according to the first pixel reference range and the horizontal displacement threshold value.
The first horizontal displacement estimating unit is used for carrying out horizontal displacement estimation in the pixel estimating range corresponding to the second field and the fourth field respectively according to the pixels in the first pixel reference range in the current field, and respectively obtaining a first horizontal displacement sub-vector corresponding to the second field and a second horizontal displacement sub-vector corresponding to the fourth field.
The second pixel estimated range determining unit is configured to determine, when the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be authentic, a pixel processing range corresponding to the first field according to the second pixel reference range and the first horizontal displacement sub-vector, and determine a pixel estimated range corresponding to the third field according to the pixel processing range and the horizontal displacement threshold.
The second horizontal displacement estimating unit is used for carrying out horizontal displacement estimation in the pixel estimating range corresponding to the third field according to the pixels in the pixel processing range in the first field, so as to obtain a third horizontal displacement sub-vector corresponding to the third field.
And the horizontal displacement vector determining unit is used for taking the first horizontal displacement sub-vector and the third horizontal displacement sub-vector as horizontal displacement vectors.
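One possible reading of the horizontal displacement estimation performed by these units is a one-dimensional block match along the scan line; the matching criterion is not spelled out in this passage, so the sum-of-absolute-differences cost and the helper name estimate_subvector below are assumptions made only for illustration.

```python
import numpy as np

def estimate_subvector(ref_pixels, search_line, center, max_shift):
    """Search a scan line of a forward field for the horizontal shift, bounded by
    the horizontal displacement threshold, that best matches the reference pixels."""
    ref = np.asarray(ref_pixels, dtype=np.int32)
    line = np.asarray(search_line, dtype=np.int32)
    half = len(ref) // 2
    best_shift, best_cost = 0, None
    for shift in range(-max_shift, max_shift + 1):
        start = center + shift - half
        if start < 0 or start + len(ref) > len(line):
            continue                                  # stay inside the pixel estimated range
        cost = int(np.sum(np.abs(line[start:start + len(ref)] - ref)))
        if best_cost is None or cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift
```

Under this reading, the first and second horizontal displacement sub-vectors come from running the search against the second and fourth fields, and the third sub-vector from running it against the third field over the pixel estimated range derived from the pixel processing range.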
In one possible implementation manner, the process of the second pixel estimated range determining unit determining whether the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be trusted may include: calculating a first difference value between twice the first horizontal displacement sub-vector and the second horizontal displacement sub-vector; if the absolute value of the first difference value is smaller than a preset first difference value threshold, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be trusted; otherwise, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not trusted.
In one possible implementation manner, the process of the interpolation pixel value determining module 503 determining whether the horizontal displacement vector is trusted may include: calculating a second difference value between the first horizontal displacement sub-vector and the third horizontal displacement sub-vector; if the absolute value of the second difference value is smaller than a preset second difference value threshold, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible; otherwise, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible.
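Both consistency tests amount to simple threshold comparisons; the sketch below uses hypothetical function names and assumes the sub-vectors and thresholds are plain numbers.

```python
def suspected_credible(mv1, mv2, first_threshold):
    """First check: the shift over four field intervals (mv2) should be close to
    twice the shift over two field intervals (mv1)."""
    return abs(2 * mv1 - mv2) < first_threshold

def credible(mv1, mv3, second_threshold):
    """Second check: the two estimates covering the same two-field span should agree."""
    return abs(mv1 - mv3) < second_threshold
```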
In one possible implementation, the interpolation pixel value determining module 503 may include: a temporal interpolation unit and a spatial interpolation unit.
And the time interpolation unit is used for taking the time interpolation mode as a target interpolation mode and interpolating to obtain an interpolation pixel value of the target pixel according to the time interpolation mode under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, wherein the time interpolation mode refers to a mode of obtaining the interpolation pixel value according to at least one forward field interpolation.
And the spatial interpolation unit is used for taking the spatial interpolation mode as a target interpolation mode and interpolating according to the spatial interpolation mode to obtain an interpolation pixel value of the target pixel under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible, wherein the spatial interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation.
In one possible implementation manner, the second pixel estimated range determining unit may be further configured to interpolate the interpolated pixel value of the target pixel in the spatial interpolation mode if the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not trusted.
In one possible implementation manner, the temporal interpolation unit may include: a sub-vector average value calculation unit, a first temporal interpolation reference pixel determination unit, a second temporal interpolation reference pixel determination unit, and a reference pixel mean value calculation unit.
And the sub-vector average value calculation unit is used for calculating the average value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector.
A first temporal interpolation reference pixel determination unit, configured to select, from the first field, the pixel at the position obtained by shifting the target position by one half of the average value, as the first temporal interpolation reference pixel.
And a second temporal interpolation reference pixel determination unit, configured to select, from the third field, the pixel at the position obtained by shifting the target position by three times the average value, as the second temporal interpolation reference pixel.
And a reference pixel mean value calculation unit configured to determine an average pixel value of the first temporal interpolation reference pixel and the second temporal interpolation reference pixel as an interpolation pixel value of the target pixel.
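For illustration, the three temporal-interpolation units might be combined as below. The sign convention of the displacement, the rounding to integer columns, and the assumption that each field is stored as a full-height array containing the target scan line are simplifications, not something the application specifies.

```python
def temporal_interpolate(first_field, third_field, row, col, mv1, mv3):
    """Temporal interpolation of the target pixel at (row, col)."""
    avg = (mv1 + mv3) / 2.0                                   # average of the two sub-vectors
    p1 = int(first_field[row, int(round(col + avg / 2.0))])   # first temporal interpolation reference pixel
    p2 = int(third_field[row, int(round(col + 3.0 * avg))])   # second temporal interpolation reference pixel
    return (p1 + p2) // 2                                     # average pixel value of the two references
```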
In one possible implementation manner, the spatial interpolation unit may include: a second reference information setting unit and a spatial interpolation calculation unit.
And the second reference information setting unit is used for setting a third pixel reference range in the current field according to the target pixel.
And the spatial interpolation calculation unit is used for carrying out interpolation processing on the target pixel based on the pixel in the third pixel reference range in the current field to obtain an interpolation pixel value of the target pixel.
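The exact filter applied over the third pixel reference range is not given here, so the sketch below simply averages the scan lines directly above and below the target pixel; treat it as a placeholder for whatever intra-field interpolation the application actually uses.

```python
def spatial_interpolate(cur_field, row, col):
    """Intra-field (spatial) interpolation of the target pixel at (row, col),
    assuming the current field holds the scan lines adjacent to the missing one."""
    above = int(cur_field[row - 1, col])
    below = int(cur_field[row + 1, col])
    return (above + below) // 2
```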
In one possible implementation, the output frame pixel value determining module 504 may include: an interpolation variation tolerance determining unit, a candidate pixel value determining unit, a third difference value calculating unit, a first output frame pixel value determining subunit, and a second output frame pixel value determining subunit.
And the interpolation variation tolerance determining unit is used for determining the interpolation variation tolerance according to the first field, the second field, the third field and the current field.
And a candidate pixel value determining unit configured to select a pixel at the target position from the first field, and take a pixel value of the pixel as a candidate pixel value.
And a third difference value calculating unit for calculating a third difference value between the candidate pixel value and the interpolation pixel value.
And the first output frame pixel value determining subunit is used for taking the interpolation pixel value as the pixel value of the pixel at the target position in the output frame if the absolute value of the third difference value is smaller than the interpolation variation tolerance.
And the second output frame pixel value determining subunit is used for taking, if the absolute value of the third difference value is greater than or equal to the interpolation variation tolerance, the candidate pixel value offset by the interpolation variation tolerance in the direction given by the sign of the third difference value as the pixel value of the pixel at the target position in the output frame.
In one possible implementation manner, the interpolation variation tolerance determining unit may include: a first pixel selection unit, a first maximum difference value determination unit, a second pixel selection unit, a second maximum difference value determination unit, and a maximum difference value comparison unit.
The first pixel selecting unit is used for selecting each pixel in the first pixel reference range from the current field, taking each selected pixel as a third pixel, and selecting the pixels at the same positions of each third pixel from the second field.
And a first maximum difference value determining unit configured to determine absolute values of pixel difference values of the respective third pixels and pixels at the same positions in the second field, and to take a maximum value of the determined absolute values as a first maximum difference value.
And the second pixel selection unit is used for selecting each pixel in the second pixel reference range from the first field, taking each selected pixel as a fourth pixel, and selecting the pixels at the same positions of each fourth pixel from the third field.
And a second maximum difference value determining unit for determining absolute values of pixel difference values of the respective fourth pixels at the same positions as the pixels in the third field, and taking the maximum value of the determined absolute values as a second maximum difference value.
And the maximum difference value comparison unit is used for determining the maximum value of the first maximum difference value and the second maximum difference value as the interpolation variation tolerance.
The embodiment of the application also provides low-delay image de-interlacing equipment. Optionally, fig. 6 shows a block diagram of the hardware structure of the low-delay image de-interlacing equipment. Referring to fig. 6, the hardware structure may include: at least one processor 601, at least one communication interface 602, at least one memory 603 and at least one communication bus 604.
In the embodiment of the present application, there is at least one of each of the processor 601, the communication interface 602, the memory 603 and the communication bus 604, and the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604.
The processor 601 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 603 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one disk memory.
The memory 603 stores a program, and the processor 601 may call the program stored in the memory 603, the program being for:
acquiring a current field and at least one field of the current field in the forward direction, wherein an output frame is generated when the current field is output;
performing horizontal displacement estimation on a target pixel in the current field according to at least one forward field to obtain a horizontal displacement vector, wherein the target pixel is a pixel at a target position which is not scanned in the current field;
determining a target interpolation mode according to whether the horizontal displacement vector is credible or not, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of a target pixel;
the pixel value of the pixel at the target location in the output frame is determined based on the interpolated pixel value, the forward at least one field, and the current field.
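Putting the four program steps together, a hedged end-to-end sketch for a single missing pixel might look as follows. It reuses the hypothetical helpers sketched earlier in this description (interp_tolerance, output_pixel, estimate_subvector, suspected_credible, credible, temporal_interpolate and spatial_interpolate), assumes every field is stored as a full-height array in which only its own scan lines are populated, places the reference ranges on the scan line directly above the target pixel, and omits image-boundary handling; none of these choices is prescribed by the application, and the names deinterlace_pixel, half_range, first_threshold and second_threshold are made up for the example.

```python
import numpy as np

def deinterlace_pixel(cur, f1, f2, f3, f4, row, col, half_range,
                      max_shift, first_threshold, second_threshold):
    """Reconstruct the missing pixel at (row, col) of the current field `cur`
    using only the forward fields f1..f4 (f1 being the closest)."""
    cols = np.arange(col - half_range, col + half_range + 1)   # first pixel reference range
    ref_cur = cur[row - 1, cols]                                # adjacent scan line of the current field

    # Horizontal displacement estimation against the second and fourth fields
    mv1 = estimate_subvector(ref_cur, f2[row - 1], col, max_shift)
    mv2 = estimate_subvector(ref_cur, f4[row - 1], col, max_shift)

    trusted, mv3 = False, 0
    if suspected_credible(mv1, mv2, first_threshold):
        center = col + mv1 // 2                                 # pixel processing range (assumed offset)
        cols_f1 = np.arange(center - half_range, center + half_range + 1)
        mv3 = estimate_subvector(f1[row, cols_f1], f3[row], center, max_shift)
        trusted = credible(mv1, mv3, second_threshold)

    # Choose the target interpolation mode
    if trusted:
        interp = temporal_interpolate(f1, f3, row, col, mv1, mv3)
    else:
        interp = spatial_interpolate(cur, row, col)

    # Clamp the result with the interpolation variation tolerance
    tol = interp_tolerance(ref_cur, f2[row - 1, cols], f1[row, cols], f3[row, cols])
    return output_pixel(int(f1[row, col]), interp, tol)
```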
Optionally, for the refined and extended functions of the program, reference may be made to the description above.
The embodiment of the application also provides a readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the low-latency image de-interlacing method as described above.
Optionally, for the refined and extended functions of the program, reference may be made to the description above.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for identical or similar parts between the embodiments, reference may be made to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A low latency image de-interlacing method comprising:
acquiring a current field and at least two forward fields of the current field, wherein an output frame is generated when the current field is output;
performing horizontal displacement estimation on a target pixel in the current field according to the at least two forward fields to obtain a horizontal displacement vector, wherein the target pixel is a pixel at a target position which is not scanned in the current field; the horizontal displacement vector comprises two horizontal displacement sub-vectors;
determining a target interpolation mode according to whether the horizontal displacement vector is credible or not, and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel; determining that the horizontal displacement vector is reliable under the condition that two horizontal displacement sub-vectors are approximately equal, taking a time interpolation mode as the target interpolation mode under the condition that the horizontal displacement vector is reliable, and interpolating to obtain an interpolation pixel value of the target pixel according to the time interpolation mode, wherein the time interpolation mode refers to a mode of obtaining the interpolation pixel value according to the forward at least two-field interpolation, and taking a space interpolation mode as the target interpolation mode under the condition that the horizontal displacement vector is not reliable, and interpolating to obtain the interpolation pixel value of the target pixel according to the space interpolation mode, wherein the space interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation;
Determining a pixel value of a pixel at the target position in the output frame according to the interpolation pixel value, the forward at least two fields and the current field;
the forward at least two fields including a first field forward adjacent to the current field, a second field forward adjacent to the first field, and a third field forward adjacent to the second field, the determining a pixel value of a pixel at the target location in the output frame based on the interpolated pixel value, the forward at least two fields, and the current field, comprising:
determining interpolation variation tolerance according to the first field, the second field, the third field and the current field;
selecting a pixel at the target position from the first field, and taking the pixel value of the pixel as a candidate pixel value;
calculating a third difference value between the candidate pixel value and the interpolation pixel value;
if the absolute value of the third difference value is smaller than the interpolation variation tolerance, the interpolation pixel value is used as the pixel value of the pixel at the target position in the output frame;
and if the absolute value of the third difference value is greater than or equal to the interpolation variation tolerance, the candidate pixel value is offset by the value of the interpolation variation tolerance according to the sign of the third difference value and is used as the pixel value of the pixel at the target position in the output frame.
2. The low-latency image deinterlacing method of claim 1, wherein the forward at least two fields comprise a first field forward adjacent to the current field, a second field forward adjacent to the first field, a third field forward adjacent to the second field, and a fourth field forward adjacent to the third field;
the estimating the horizontal displacement of a target pixel in the current field according to the at least two forward fields includes:
setting a first pixel reference range in the current field, a second pixel reference range in the first field and a horizontal displacement threshold according to the target pixel;
determining pixel estimated ranges respectively corresponding to the second field and the fourth field according to the first pixel reference range and the horizontal displacement threshold;
according to the pixels in the first pixel reference range in the current field, horizontal displacement estimation is carried out in the pixel estimation range corresponding to the second field and the fourth field respectively, and a first horizontal displacement sub-vector corresponding to the second field and a second horizontal displacement sub-vector corresponding to the fourth field are obtained respectively;
under the condition that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be credible, determining a pixel processing range corresponding to the first field according to the second pixel reference range and the first horizontal displacement sub-vector, and determining a pixel estimated range corresponding to the third field according to the pixel processing range and the horizontal displacement threshold;
According to the pixels in the pixel processing range in the first field, horizontal displacement estimation is carried out in the pixel estimation range corresponding to the third field, and a third horizontal displacement sub-vector corresponding to the third field is obtained;
and taking the first horizontal displacement sub-vector and the third horizontal displacement sub-vector as the horizontal displacement vectors.
3. The low-latency image de-interlacing method according to claim 2, wherein determining whether the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be authentic comprises:
calculating a first difference value between twice of the first horizontal displacement sub-vector and the second horizontal displacement sub-vector, if the absolute value of the first difference value is smaller than a preset first difference value threshold, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are suspected to be trusted, otherwise, determining that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not trusted;
judging whether the horizontal displacement vector is credible or not comprises the following steps:
calculating a second difference value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector, if the absolute value of the second difference value is smaller than a preset second difference value threshold value, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, otherwise, determining that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible.
4. The method for deinterlacing a low-latency image of claim 3, wherein determining a target interpolation mode according to whether the horizontal displacement vector is authentic and interpolating the target pixel according to the target interpolation mode to obtain an interpolated pixel value of the target pixel comprises:
under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are credible, taking a time interpolation mode as the target interpolation mode, and interpolating according to the time interpolation mode to obtain an interpolation pixel value of the target pixel, wherein the time interpolation mode is a mode of obtaining the interpolation pixel value according to the forward at least two-field interpolation;
and under the condition that the first horizontal displacement sub-vector and the third horizontal displacement sub-vector are not credible, taking a spatial interpolation mode as the target interpolation mode, and interpolating according to the spatial interpolation mode to obtain an interpolation pixel value of the target pixel, wherein the spatial interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation.
5. The low-latency image deinterlacing method of claim 4, further comprising:
And under the condition that the first horizontal displacement sub-vector and the second horizontal displacement sub-vector are not credible, interpolating according to the spatial interpolation mode to obtain the interpolation pixel value of the target pixel.
6. The method for deinterlacing a low-latency image of claim 4, wherein interpolating the interpolated pixel value for the target pixel according to the temporal interpolation scheme comprises:
calculating an average value of the first horizontal displacement sub-vector and the third horizontal displacement sub-vector;
selecting a pixel at a position of the target position shifted by one half of the average value from the first field as a first temporal interpolation reference pixel;
selecting a pixel at a position of the target position shifted by three times the average value from the third field as a second temporal interpolation reference pixel;
and determining average pixel values of the first time interpolation reference pixel and the second time interpolation reference pixel as interpolation pixel values of the target pixel.
7. The method for deinterlacing a low-latency image of claim 5, wherein interpolating the interpolated pixel value for the target pixel according to the spatial interpolation scheme comprises:
Setting a third pixel reference range in the current field according to the target pixel;
and carrying out interpolation processing on the target pixel based on the pixel in the third pixel reference range in the current field to obtain an interpolation pixel value of the target pixel.
8. The low-latency image deinterlacing method of claim 2, wherein the determining an interpolation variation tolerance from the first field, the second field, the third field, and the current field comprises:
selecting each pixel in the first pixel reference range from the current field, taking each selected pixel as a third pixel, and selecting the pixel at the same position of each third pixel from the second field;
determining absolute values of pixel difference values of the third pixels and the pixels at the same positions in the second field respectively, and taking the maximum value in the determined absolute values as a first maximum difference value;
selecting each pixel in the second pixel reference range from the first field, taking each selected pixel as a fourth pixel, and selecting the pixel at the same position of each fourth pixel from the third field;
Determining absolute values of pixel difference values of the fourth pixels and the pixels at the same positions in the third field respectively, and taking the maximum value in the determined absolute values as a second maximum difference value;
and determining the maximum value of the first maximum difference value and the second maximum difference value as the interpolation variation tolerance.
9. A low-latency image de-interlacing device, comprising:
the field acquisition module is used for acquiring a current field and at least two forward fields of the current field, wherein an output frame is generated when the current field is output;
the horizontal displacement estimation module is used for carrying out horizontal displacement estimation on a target pixel in the current field according to the at least two forward fields to obtain a horizontal displacement vector, wherein the target pixel is a pixel at a target position which is not scanned in the current field; the horizontal displacement vector comprises two horizontal displacement sub-vectors;
the interpolation pixel value determining module is used for determining a target interpolation mode according to whether the horizontal displacement vector is credible or not and interpolating according to the target interpolation mode to obtain an interpolation pixel value of the target pixel; determining that the horizontal displacement vector is reliable under the condition that two horizontal displacement sub-vectors are approximately equal, taking a time interpolation mode as the target interpolation mode under the condition that the horizontal displacement vector is reliable, and interpolating to obtain an interpolation pixel value of the target pixel according to the time interpolation mode, wherein the time interpolation mode refers to a mode of obtaining the interpolation pixel value according to the forward at least two-field interpolation, and taking a space interpolation mode as the target interpolation mode under the condition that the horizontal displacement vector is not reliable, and interpolating to obtain the interpolation pixel value of the target pixel according to the space interpolation mode, wherein the space interpolation mode refers to a mode of obtaining the interpolation pixel value according to the current field interpolation;
An output frame pixel value determining module, configured to determine a pixel value of a pixel at the target position in the output frame according to the interpolated pixel value, the forward at least two fields, and the current field;
the forward at least two fields include a first field forward adjacent to the current field, a second field forward adjacent to the first field, and a third field forward adjacent to the second field, and the output frame pixel value determining module determines a pixel value of a pixel at the target position in the output frame based on the interpolated pixel value, the forward at least two fields, and the current field, including:
determining interpolation variation tolerance according to the first field, the second field, the third field and the current field;
selecting a pixel at the target position from the first field, and taking the pixel value of the pixel as a candidate pixel value;
calculating a third difference value between the candidate pixel value and the interpolation pixel value;
if the absolute value of the third difference value is smaller than the interpolation variation tolerance, the interpolation pixel value is used as the pixel value of the pixel at the target position in the output frame;
and if the absolute value of the third difference value is greater than or equal to the interpolation variation tolerance, the candidate pixel value is offset by the value of the interpolation variation tolerance according to the sign of the third difference value and is used as the pixel value of the pixel at the target position in the output frame.