
GB2244887A - Spatial transformation of video images - Google Patents

Spatial transformation of video images

Info

Publication number
GB2244887A
GB2244887A
Authority
GB
United Kingdom
Prior art keywords
image
input
transformation
pixels
output
Prior art date
Legal status
Withdrawn
Application number
GB9107732A
Other versions
GB9107732D0 (en)
Inventor
Matthew Raymond Starr
Martin Fairhurst
Current Assignee
Rank Cintel Ltd
Original Assignee
Rank Cintel Ltd
Priority date
Filing date
Publication date
Application filed by Rank Cintel Ltd filed Critical Rank Cintel Ltd
Publication of GB9107732D0 publication Critical patent/GB9107732D0/en
Publication of GB2244887A publication Critical patent/GB2244887A/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A digital video effects system, that can be used as a subsystem of a digital video system, performs real-time, anti-aliased spatial transforms of input video i.e. it provides an output signal where the picture is a spatially distorted version of the input picture. Any defined area of the input video frame can be placed anywhere in the output frame with independent changes in size and direction along either axis, by manipulation first vertically and then horizontally. For vertical transformation a vertical sequence controller 14 is operative to select and order regions to which transformation parameters are applied from a vertical parameter controller 18. Transformation is undertaken in a vertical spatial interpolator 30. The intermediate vertically transformed image is then similarly horizontally transformed in a horizontal spatial interpolator 32 under control of a sequence controller 22 and parameter controller 26.

Description

TRANSFORMATION OF VIDEO IMAGES

Background of the Invention

The present invention relates to the transformation of video images, such as may be used in special effects and compositing of digital video signals, and is particularly suitable for use in systems where real-time spatial transformations must be generated without artifacts. Simulated perspective is a typical example; the mapping of flat images onto non-linear surfaces is another.
In this specification, real-time data processing means processing at full NTSC/PAL data rates, digitally encoded to CCIR Recommendations 601 and 656 (27 MBytes/second), and trapezoid means a quadrilateral with at least one pair of parallel sides.
Prior art special effects systems such as the Quantel Mirage (registered trade mark) are flexible, but effects must be pre-compiled. Other real-time systems have a limited range of image transformation before undesirable artifacts, including aliasing effects, become visible. A method of transformation is described in IEEE (USA) Computer Graphics and Applications, Volume 6, No. 1, Jan. 1986, pp 71-80, in an article by Karl M. Fant. This method is of very restricted scope.
Summary of the Invention

The invention is defined in the appended claims 1 and 12, to which reference should now be made. Advantageous features of the invention are set out in the subclaims.
In a preferred digital video effects system embodying the invention, any defined area of the input video frame can be placed anywhere in the output frame with independent changes in size and direction along either or both axes, by manipulation first vertically and then horizontally. For vertical transformation a vertical sequence controller is operative to select and order regions to which transformation parameters are applied from a vertical parameter controller. Vertical transformation is undertaken in a vertical spatial interpolator.
The intermediate vertically transformed image is then similarly horizontally transformed in a horizontal spatial interpolator under control of a sequence controller and parameter controller.
The digital video effects system can be used as a subsystem of a digital video system and performs real-time, anti-aliased spatial transforms of input video i.e. it provides an output signal where the picture is a spatially distorted version of the input picture.
Brief Description of the Drawings

An embodiment of the invention will now be described, by way of example, with reference to the drawings, in which:

Figure 1 is a schematic diagram showing transformation along one line of a video image in a first direction,
Figure 2 is a schematic diagram showing transformation of a 2D region of a video image in a first direction,
Figure 3 is a schematic diagram illustrating transformation of a video image in a first direction and a second direction perpendicular to the first,
Figure 4 is a block diagram of apparatus embodying the invention for transformation of video images,
Figure 5 is a block diagram of sequence controller and memory (director) hardware,
Figure 5a is a schematic diagram of the director controller,
Figure 6 is a detailed block diagram of sequence controller and memory (director) hardware,
Figure 7 is a block diagram of the parameter controller and memory,
Figure 8 is a schematic diagram of pixel scaling at boundaries of an image transformation region,
Figure 9 is a schematic diagram of translation of a string of pixels as part of a transformation,
Figure 10 is a schematic diagram of expansion of a string of pixels as part of a transformation (expand mode),
Figure 11 is a further schematic diagram of pixel line expansion,
Figure 12 is a schematic example of pixel line expansion,
Figure 13 is another schematic example of pixel line expansion,
Figure 14 is a schematic diagram of compression of a string of pixels as part of a transformation (compress mode),
Figure 15 is a schematic example of pixel line compression,
Figure 16 is a block diagram of pixel line transformation hardware,
Figure 17 is a further block diagram of pixel line transformation hardware,
Figure 18 is a block diagram of the fraction generator,
Figure 19 is a simplified block diagram of the fraction generator operating in expand mode,
Figure 20 is a simplified block diagram of the fraction generator operating in compress mode,
Figure 21 is a schematic diagram showing lighting transformation, and
Figure 22 is a schematic diagram showing a method of obtaining simulated perspective on a video image.
Description of a Preferred Embodiment

An embodiment of the invention will now be described in which an original two-dimensional image is manipulated in one dimension at a time, once in the vertical direction and once in the horizontal direction, with the combination providing the required overall spatial rearrangement in two dimensions. The actual pixel interpolation process must be performed for every output pixel of every frame, requiring fast hardware. To mix completely in both dimensions simultaneously, over a variable number of pixels, would require complex hardware, whereas to mix over one dimension simply requires a serial mixer, which can more easily achieve the speed required. Such a hardware unit performs two operations in series or cascade, a vertical interpolation along "columns" followed by a horizontal interpolation along "rows". Rows correspond to the actual video horizontal scan lines.
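By way of illustration only, the following Python sketch models this two-pass, separable structure: a one-dimensional resampling function is applied down every column, and the intermediate result is then resampled along every row. The function and parameter names are assumptions made for this sketch and do not appear in the hardware description; resample_line stands in for the spatial interpolators described below.

    def transform_image(image, resample_line, v_params, h_params):
        # Vertical pass: resample every column of the input image.
        columns = [resample_line(col, v_params) for col in zip(*image)]
        intermediate = [list(row) for row in zip(*columns)]  # back to rows
        # Horizontal pass: resample every row of the intermediate image.
        return [resample_line(row, h_params) for row in intermediate]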
Accordingly, the apparatus consists of two major sections that are nearly identical, one for the vertical pass and one for the horizontal pass. The vertical processing section consists of video and matte inputs, a set of framestores and the actual arithmetic circuits. Matte is sometimes referred to as shape in this specification. The output of the vertical processing section goes to a set of intermediate framestores, which act as the input to the horizontal processing section, while the output of the horizontal processing section goes directly to a video bus output. The whole system is synchronized to video timing, and there is a two-frame delay from input to output.
An arbitrary segment of linearly arranged input pixels is mapped onto an arbitrary segment of linearly arranged output pixels, by specifying start position, finish position, processing direction, size change and run-length (number of pixels to process). Since the input and output pixels are the same size, the ratio of output to input pixels determines whether an apparent expansion or compression takes place. In Figure 1, such expansion and compression are shown for segments 1A and 3A of a line of pixels respectively. As shown in that figure, segment 1A is expanded to be output as segment 1B. Segment 3A is compressed to form output segment 3B, and segment 2A is output as segment 2B with an unchanged number of pixels.
Fractions are generated in the hardware (see below) and are multiplied by the luma/chroma values of input pixels, and the results are added to generate the luma/chroma values of new output pixels. It is possible to generate apparent fractional displacements in this way, giving high quality output images that are relatively free of artifacts and which preserve most of the information contained in the input image. In this description, the concept of a fractional pixel is used although in reality there is no such thing; the illusion of fractional pixels is achieved by scaling.
Each column/row can be subdivided into regions that can be transformed differently. The transition between these regions can occur with minimal overhead. Furthermore, adjacent parts of the columns/rows can be defined to lie in the same transformation region. Thus 2-dimensional transformation regions can be specified as shown in Figure 2, where the transformation of one source region 10 to a destination region 10A in one direction is shown. It is possible to automatically vary the start position, destination position, run-length and size change from one column/row to the next. Thus it is possible to define trapezoidal transformation regions, which are necessary to fully subdivide polygonal input areas of the input image, degenerating to triangles or lines one row/column wide.
The transformation of three regions of an original input image 42 is illustrated in Figure 3. Selected regions 4, 5, 6 are first respectively transformed in a vertical direction, shown as regions 4A, 5A, 6A. The resulting intermediate image 44, 46 is then redivided into a new set of transformation regions 7, 8, 9, and transformed horizontally 7A, 8A, 9A to provide a processed image 48.
The spatial transformation hardware is shown in Figure 4.
It consists of an original image input framestore 12 with inputs for luminance, two chrominance or colour-difference signals and a shape signal (see below). These original input image framestores are similarly connected to a vertical spatial interpolator 30.
A vertical sequence controller 14, having an associated RAM memory 16, determines the sequence and form of the individual regions for transformation in the vertical direction. The controller 14 is connected to a vertical parameter controller 18 also having an associated RAM 20, which supplies the transformation parameters for each region to the vertical spatial interpolator 30 which includes an interpolation controller 29.
The spatial interpolator 30 includes hardware for pixel string expansion and compression.
The vertical spatial interpolator 30 is connected with column stores 36. The column stores are connected to vertically processed image framestores 34 which hold an intermediate image resulting from vertical processing. The vertically processed image framestores 34 are connected to a horizontal spatial interpolator 32 which includes a horizontal interpolation controller 31.
As described above for vertical processing, for horizontal processing, there is a horizontal sequence controller 22 having an associated RAM 24 and which determines the sequence and form of regions for transformation in the horizontal direction. The horizontal sequence controller 22 is connected to a horizontal parameter controller 26 which also has an associated RAM 28. The RAM memories 16, 20, 24, 28 are connected to a system bus 21, which is connected to a software-controlled host processor 19. The horizontal parameter controller 26 supplies the transformation parameters for each region to the horizontal spatial interpolator 32. The horizontal spatial interpolator 32 also includes hardware for pixel string expansion and compression.
The horizontal spatial interpolator 32 is connected to row stores 38 which have a light signal input in addition to luma, chroma B and chroma R and shape signal inputs.
The row stores 38 are connected via lighting maps 40 to luma, chroma B, chroma R and shape signal outputs. The lighting maps 40 are programmable and apply an 8-bit "light" parameter to each signal. The light parameters have the effect of signal transfer functions as described below. The light maps 40 are connected to mixing circuit 41 by luma, chroma B and chroma R and shape outputs/inputs. A background video signal is also input to the mixing circuit 41.
The original image input stores 12, vertical spatial interpolator 30, column stores 36, vertically processed image framestores 34, horizontal spatial interpolator 32, row stores 38, output maps 40 and mixing circuit 41 are interconnected by outputs and inputs for luma, chroma B, chroma R, and shape signals. Within the transformation hardware, the luma, Cb, Cr and Shape signals all have the same sample period of 74 ns (4:4:4:4 format). There are four parallel internal video paths for luma (Y), two chroma-difference (Cr and Cb) and shape (Sh) signals. The apparent increase in chroma resolution comes about by using each input chroma byte twice.
The structure and functions of these various hardware elements are considered below:

i) Original Image Input Framestores 12: A pair of identical framestores is used so that while reading takes place from one, writing takes place into the other.
At the end of the frame, each framestore switches modes. A second pair of framestores operates in a similar fashion, except that when writing into it the video lines are effectively transposed by 90°, where transposition is either a 90° turn or a rotation around a diagonal axis. This is necessary for rotational transformations of greater than ±45° as described below.
Each framestore 12 comprises dynamic RAMs (DRAMs) with a 4 to 1 interleave. The input 0° and 90° framestores 12 (and the intermediate processed framestore 34) all employ this interleaving scheme. There are different requirements for reading and writing of pixels, so there are two distinct modes for reading and writing. When writing to a framestore, the data is continuous and ordered, and so the incoming pixels can be accommodated by cycling through the 4 banks. When reading, the accesses are at random addresses within the row/column, so fast page mode read is used. Frames are written and stored with the fields interleaved, and the spatially higher field is always stored above the lower field regardless of whether it is the odd or even field that is the higher field. The 0° and 90° input framestores 12 each consist of 4 banks of DRAM pairs, and each bank is divided into two halves (1 DRAM), with the lower half starting at integrated circuit 'chip' row address 0, and the upper half starting at chip row address 256. The chip row and column addresses are not the same as the image frame row and column numbers.
For the 0° input framestores the incoming data is in the form of horizontal scan lines, each line consisting of 720 pixels, with interlaced fields. Each field is divided into four on a column basis, so that as the input pixels of field 1 arrive, the first pixel of each line goes to bank 0, the second to bank 1, the third to bank 2, and the fourth to bank 3. At the fifth pixel, the sequence repeats. Pixels in different fields that are aligned vertically must be in different banks, so for field 2 the sequence is different, with the first pixel of the line going to bank 2, the second to bank 3, the third to bank 0, and the fourth to bank 1. When reading columns from the 0° framestores, fast page mode is used, with two banks being enabled simultaneously for page mode reads. Two banks are read together: either banks 0 and 2, or banks 1 and 3.
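A minimal sketch of this interleaving rule, assuming the bank assignment follows directly from the pixel position and field number as just described (the function name is illustrative):

    def bank_for_pixel(x, field):
        """Bank (0-3) for pixel x of a line in field 1 or field 2."""
        offset = 0 if field == 1 else 2   # field 2 lines are offset two banks
        return (x + offset) % 4           # so vertical neighbours differ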
In the 90° framestore, the incoming lines are written in the same direction that they are read out, that is as columns.
Hence there is a transposition of 90°. When writing, the banks are selected for writing cyclically. When reading, all four banks are enabled for column mode reading simultaneously.
In the vertically processed image framestore 34, see below, pixel columns from the column stores are written into the banks which are selected in a cyclic fashion. The order of bank selection varies with alternate columns. For reading of image rows, pairs of banks are simultaneously enabled for fast page mode reading.
ii) Sequence Controller/Memory (Director) 14, 16 and 22, 24: These system components are referred to as "directors", one for each direction. The function of each sequence controller 14, 22 is to control the order of processing of the individual transformation regions and the number of iterations of rows/columns within the individual transformation regions of the input image by passing a sequence of transformation region numbers for each column/row to the parameter controller 18, 26. Each region number passed to the parameter controller 18, 26 points to an area of parameter memory 20, 28 where the appropriate parameters are stored for the processing of that region. The sequence controller 14, 22 derives this sequence of numbers from a table stored in dedicated SRAM 16, 24 by the system software.
There are two independent directors which are substantially identical in function, one for vertical processing 14, 16 and one for horizontal processing 22, 24.
The structure of a director is shown in Figures 5, 5a and 6. The director is controlled by a director controller, shown in Figure 5a, which is a state machine with inputs for a clock signal and handshake signals from the associated parameter controller 18, 26, fraction generator 80 and column or row stores. The director includes an access address latch 52 which stores the address of that portion of SRAM 16 to be accessed at the start of the first line of pixels. The SRAM 16 holds information of region numbers and iteration counts, i.e. the numbers of pixels in line segments of various regions. At the start of a line segment of a transformation region, an address counter 56 holds the address of the iteration count. The number of lines to be transformed is stored in a loop counter 54. The address counter 56 then holds the address of the region number of the first transformation region. For each region, the address stored in the director address counter 56 is incremented until the number of regions to be transformed is reached, and the region numbers are passed from the SRAM via a latch 60 with clock enable to the parameter controller 18, 26.
When the end of a line is reached (as indicated by a flag) a loopback latch 58 becomes operative. This stores the SRAM address of the region number of the first transformation region of a line, such that the SRAM is again correctly addressed at the start of the next subsequent line. The address stored by the director address counter 56 is then reloaded from the latch 58 unless the number of iterations which remain is zero, in which case the director address counter is incremented.
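The director's behaviour can be modelled, in outline, as replaying a stored region-number sequence once per line for the programmed number of lines. The table layout below (an iteration count followed by a list of region numbers) is an assumption made only for this Python sketch; the actual SRAM format is not specified here.

    def director_sequence(table):
        """table: list of (line_count, region_numbers) entries."""
        for line_count, region_numbers in table:
            for _ in range(line_count):        # loop counter / loopback latch
                for region in region_numbers:  # passed to parameter controller
                    yield region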
iii) Parameter Controller/Memory 18, 20 and 26, 28: For each transformation region there is a set of parameters to define the nature of the transformation from input region to output region. Specifically these include the source and destination addresses, including both integer pixel (i.e. which column/row segment to operate upon) and fractional (i.e. address within the column/row) parts, the number of pixels to process (run-length), the size change values, and various flag bits. As well, the amounts by which the parameters are to change laterally, i.e. in adjacent rows/columns of that particular region, are specified. Parameters are in fixed point form, i.e. they consist of integer and fraction parts. The use of "fractional" pixels is important in eliminating aliasing effects, as described later.
There are two parameter controllers 18, 26, one for each direction. The parameter controller 18, 26 accepts the transformation region number from the director (sequence controller 14, 22) and then prepares and updates the associated set of parameters for use by the interpolation controller 29, 31.
These parameters are stored in dedicated SRAM 20, 28 by the software. Once every iteration the parameter controller 18, 26 updates these items by adding incremental "delta" values back into them so that they change from output column/row to output column/row.
As shown in Figure 7, the SRAM 20, 28 of each parameter controller 18, 26 stores both initial (main) parameter values and corresponding incremental ('delta') values. These are added in the parameter adder 112 to give current parameter values which are fed back to the parameter SRAM 20, 28 for further increments.
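In outline, the update amounts to a fixed-point add of each delta into its main parameter once per output column/row, which is what lets a region's source, destination, run length and size drift linearly across the region (and hence makes trapezoidal regions possible). The dictionary-based sketch below is illustrative and its field names are assumptions:

    def update_parameters(params, deltas):
        """One iteration of the parameter adder 112 (fixed point in hardware)."""
        for key in ("source", "destination", "run_length", "size"):
            params[key] += deltas[key]   # main value + delta, written back
        return params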
Another function of the parameter controller is calculation of a weighted average of the size change value at the common boundary of two contiguous regions in order to smooth the transition between them (i.e. to reduce aliasing effects).
It is necessary that when the first and/or last pixel in an output segment of a line (a run) is a 'partial' pixel, scaling of intensity should occur to minimize aliasing effects.
This has the effect of smoothing the transition between contiguous transformation regions. The scaling factor is derived from the destination fraction in the case of the first output pixel, and from the run length fraction for the last output pixel and is applied to the shape signal only. The scaling factor, which is known as weighted average size must also take into account the size change associated with that particular pixel run, and this is done automatically in the hardware shown in Figure 7.
In Figure 8 a run of output pixels P0' to P3' is shown, as well as the run length 100 and start destination position 102.
The destination start position has a fractional component, and this is the main factor in determining the scaling factor of the first pixel 106 in the line segment. In particular, for expand mode only (discussed below), the destination fraction 104 is multiplied by the size to give a scaled size value as the scaling factor for the first output pixel.
For the last pixel, the sum of the destination fraction 104 and run length 100, itself composed of an integer and fraction, gives a quantity known as the modified run length, of which the fraction component is the scaling factor for the last pixel 108.
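A sketch of these two scale-factor calculations for expand mode, using floating point in place of the hardware's fixed-point arithmetic (the function name is illustrative):

    def run_scales(dest_frac, run_length, size):
        """Shape scale factors for the first and last pixels of a run."""
        first = dest_frac * size                 # scaled size value
        modified_run = dest_frac + run_length    # integer + fraction
        last = modified_run - int(modified_run)  # fraction part only
        return first, last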
It should be remembered that ends-of-run scaling (rescaling at region boundaries) only takes place on the shape output. The luma and chroma values are modified by the shape when they pass through the mixing circuit 41, as described later.
In terms of hardware, as shown in Figure 7, the parameter SRAM 20, 28 has an output for run length and destination fraction which is connected to an ALU 114. The ALU 114 has its output 116 fed back to a secondary input and connected to a latch 118 for truncation to give the modified run length fraction at its output 120.
The output 120 is connected to an input at a multiplier/accumulator 122 which has one input also connected to a destination fraction (or source fraction) output 124 from a latch 126. A second input to the multiplier/accumulator has a size input 128 from a latch 130.
In expand mode, the size is multiplied by the destination fraction in the multiplier/accumulator 122.
A scaled size (i.e. destination fraction x size) is output to an adder 132 where it is either added to or subtracted from the start source address from an input 134. A corrected source value is output in expand mode to a latch 136 which provides an FB value with which the fraction generator is initialised, as described below.
For compress mode, the destination fraction output is input to a latch 138 to provide a REM signal which initialises the fraction generator, as also described later. A scaled size is generated by multiplying source fraction x size. This scaled size is used in compress mode initialisation which is also described below.
iv) Spatial Interpolators 30, 32: The two spatial interpolators (horizontal 32 and vertical 30) perform substantially identical functions. For each output column in turn, the vertical spatial interpolator 30 takes segments of input columns from the original input store 12 and one-dimensionally interpolates these according to the sequence programmed for each frame. The horizontal spatial interpolator 32 produces output rows by interpolation from segments of rows stored in the vertically processed store 34.
The spatial interpolator controller 29, 31 accepts the parameters from the parameter controller 18, 26 for each transformation region and processes strings of input pixels to produce strings of output pixels. Two separate algorithms are used for compression and expansion of a string of pixels.
Segments of pixel lines are translated, expanded and compressed, as described in turn below:

Translation

Figure 9 is a representation of a line of input pixels with pixel values P0 to P4, represented by the lengths of the arrows. A sequence of translated output pixel values P0', P1', etc. are generated by adding weighted proportions of pairs of adjacent pixels. Thus,

P0' = (FA x P0) + (FB x P1)
P1' = (FA x P1) + (FB x P2)
P2' = (FA x P2) + (FB x P3)

etc., where FA + FB = 1.
The weighting coefficients, FA and FB, are fractions and if the same values of FA and FB are used for each new pixel, then when the new pixels are displayed there will be an apparent fractional displacement, relative to the original pixels. The extent of the apparent displacement depends on the values of FA and FB.
Translation is considered by the hardware as an expansion with a 1:1 expansion ratio. This hardware is described below.
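A minimal sketch of the translation case, using floating-point weights in place of the hardware's 2048-based fixed point; with constant FA and FB the whole line acquires an apparent sub-pixel shift of FB:

    def translate_line(pixels, fb):
        """Shift a line of pixel values by the fraction fb of one pixel."""
        fa = 1.0 - fb
        return [fa * a + fb * b for a, b in zip(pixels, pixels[1:])]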
Expansion

Expansion is shown in Figure 10. New pixel values P0', P1', P2', etc., are generated from an original sequence P0, P1, P2, etc., by combining pairs of adjacent original pixel values.
However the fractional coefficients FA and FB are different for each new generated pixel. Furthermore, the same pair of original pixels can be used more than once, so more new pixel values can be generated from fewer original pixels, although the information contained is of course essentially the same. Hence an apparent expansion has taken place.
Figure 11 is an alternate representation of the expansion algorithm, which shows it as a 1 pixel wide "window" 101 that moves along and samples the input pixels such as P0, P1, P2. At each new position of the input window a new output pixel is generated from two input pixels. Thus,

P0' = (FA0*P0) + (FB0*P1),
P1' = (FA1*P0) + (FB1*P1),
P2' = (FA2*P1) + (FB2*P2).
The proportions of the input pixels depend on the position of the window, and the displacement Z of the window at each sample governs the rate at which input pixels are processed, so that the apparent size change depends on Z. The quantity Z is a fraction, and the expansion ratio is 1/Z.
The expansion algorithm can use a one input pixel wide averaging window to smooth transitions between input pixels, as shown in the example of Figure 12. A half input pixel averaging window may be used as shown in the example of Figure 13. In these figures, IP1 denotes the input signal for pixel 1, for example.
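The sliding-window view of expansion can be sketched as follows, again in floating point rather than the hardware's fixed point; the window advances by Z per output pixel, giving an expansion ratio of 1/Z (the clamping at the end of the line is an assumption about boundary handling):

    def expand_line(pixels, z, n_out):
        """Generate n_out output pixels with expansion ratio 1/z (z <= 1)."""
        out, pos = [], 0.0
        for _ in range(n_out):
            k = min(int(pos), len(pixels) - 1)   # earlier pixel of the pair
            k2 = min(k + 1, len(pixels) - 1)     # clamp at the line end
            fb = pos - int(pos)                  # window position in the pair
            out.append((1.0 - fb) * pixels[k] + fb * pixels[k2])
            pos += z                             # advance the sampling window
        return out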
Compression

A group of adjacent pixel values can be multiplied by weighting coefficients and summed, to produce a resultant output pixel. In the example shown in Figure 14, input pixel values P0 and P3 have smaller weighting than P1 and P2, and it can be said that the output pixel P0' is made up of two "partial" pixels (P0 and P3) and two "whole" pixels (P1 and P2). This means that a non-integer number of pixels has gone into producing a single output pixel, so effectively causing compression, with a "compression ratio" that is non-integral. If an input pixel is only partially overlapped by a given output pixel, then the remainder of the pixel can be used for the next output pixel, so that all input pixels contribute equally to the output (except for the input pixels on the boundaries of each input string, which are treated as a special case). In Figure 14, P0 to P6 are input pixels, P0' and P1' are output pixels and the compression ratio is 1/Z : 1. Thus,

P0' = (FB0*P0) + (Z*P1) + (Z*P2) + (FB3*P3)

where FB1 = Z, FB2 = Z, FB0 + Z + Z + FB3 = 1 and FA4 + FB3 = Z; similarly

P1' = (FA4*P3) + (Z*P4) + (Z*P5) + (FB6*P6), and so on.
The compression algorithm combines values from input pixels to make up complete output pixels, as shown in Figure 15 for a ratio of 2:1, where IP1 denotes the signal for input pixel 1, for example.
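A floating-point sketch of the compression accumulator: each whole input pixel contributes weight Z, contributions are accumulated until one output pixel's worth of weight (1.0) has been filled, and the leftover fraction of the straddling input pixel carries into the next output pixel. End-of-run handling is simplified and the function name is illustrative.

    def compress_line(pixels, z):
        """Compress a line by the ratio 1/z : 1 (z < 1)."""
        out, acc, rem = [], 0.0, 1.0      # rem: output pixel left to fill
        for p in pixels:
            if rem > z:                   # whole input pixel fits
                acc += z * p
                rem -= z
            else:                         # pixel straddles an output boundary
                acc += rem * p
                out.append(acc)
                acc = (z - rem) * p       # leftover starts the next output
                rem = 1.0 - (z - rem)
        return out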
Although the expansion and compression algorithms operate differently, they produce the same results in the 1:1 case. The mechanism is implemented with a continuous scale of expansion/compression. Transitions between the two algorithms are smooth and do not require special programming.
The algorithms as implemented require a minimum amount of hardware for each video path. Hardware for one video path is shown in Figure 16, and for one video path and the shape path in Figure 17. Two consecutive pixel input values are input to the hardware, namely a current value B and a previous value A. These two values are held by respective series connected latches 86, 88. A common clock is connected to elements of the hardware, so that each element performs one task in each clock cycle. Each latch has a clock-enable, and stores the pixel value present at its input during the clock cycle, when the clock-enable is on.
The fraction generator 80 produces a pair of numbers for each cycle of its operation called FA and FB. The FA and FB values are 12 bits in length, i.e. they range from 0 to 2048 which is 2 to the power of 11. They are regarded as fractions, since there is in the hardware an implicit divide by 2048 at a later stage. Hence an FB value of 2048 is equivalent to 100%, while an FB value of 1024 is 50% and so on. FB is the coefficient of the most recently read input pixel value, while FA is the coefficient of the previous pixel value.
These two numbers are fed to each video path in parallel, such that FB and current signal input value B are fed to a first Multiplier/Accumulator unit 84 and FA and previous input signal value A are fed to a second Multiplier/Accumulator unit 82.
The fraction generator 80 is a synchronous i.e. clocked arithmetic unit in which for each clock cycle a new FA and FB pair is generated. Each fraction generator 80 is initialised by its respective parameter controller 18, 26.
For each cycle, video signals related to B*FB and A*FA are respectively accumulated. These signals are added in an adder 90 until an output pixel is provided to a video output by way of a hard-wired binary division unit 92, as shown in Figure 16. Both the Expand and the Compress algorithms allow the final result to be scaled by the binary division unit 92.
Operation

The operation of the hardware shown in Figure 16 in expansion mode is as follows. For any expansion, no more than two input pixels can contribute to any output pixel. At the start of the processing of an output pixel, the first contributing pixel is latched into one 88 of the latches, while the next input pixel is latched into the other 86. The fraction generator provides the fractions appropriate to the contribution from each input pixel to the output pixel. These fractions are multiplied by the appropriate input pixel values in the multiplier/accumulators 82, 84, added in the adder 90 and output to the following stage as the output pixel value. Then the value of the second input pixel replaces the first pixel value in the latch 88 while the third input pixel is latched to replace the second in latch 86. The fractions are changed to suit the new pixels.
These fractions are multiplied by the input pixel values, and the result is output. This continues, outputting one pixel per clock cycle, until the final input pixel which contributes to this output pixel has been processed.
The operation of the hardware shown in Figure 16 in compression mode is as follows: at the start of processing of an output pixel, the fraction generator 80 calculates the fraction of the first input pixel that contributes to the output pixel. In the example in Figure 15 this is IP1/2/2, which is 1/4. This factor is multiplied by the input pixel value and put in the accumulator of the multiplier/accumulator 84.
Then the next input pixel is processed, being multiplied by its fraction, and added to the accumulator. This continues until the final input pixel which contributes to this output pixel has been processed (in Figure 15, this is input pixel 3).
At this point, the value in the accumulator is equal to the value for that output pixel. This output is passed out via the adder 90. Then the next sequence of input pixels is processed, starting with the last processed pixel (if it contributes to two output pixels).
The shape video path hardware of the spatial interpolators 30, 32 is basically similar to that for each video path, as shown in Figures 16 and 17. However there is additionally a multiplier 91 connected between the adder 90 and the hardware output. Fractional antialiasing corrections are selectively applied to the multiplier in order to produce shape signals applicable in the mixing circuit 41, see later, to produce apparent fractional pixels by scaling.
Fraction Generator

The basic hardware of the fraction generator 80 is shown in Figure 18. The fraction generator includes an ALU (arithmetic logic unit) 140 with an "o" size input 141. As discussed below, the value "o" is essentially unity in expand mode and is the compression factor (lying between zero and unity) in compress mode. The ALU 140 is only operative in compress mode. It has an output 142 which provides a remainder REM-"o" value. The REM-"o" signal is input to a multiplexer 144 with clock enable which has a second input 148 for a 'fractional' signal FB. FB = REM if REM-"o" is negative; otherwise FB = "o". The output of the multiplexer 144 is an unlatched REM signal which is input to a latch 150. The REM signal is initialised by the parameter controller at the input 146 to the latch 150. The latch 150 acts to buffer the REM signal and provide that REM signal at its output 152. The REM signal is input to a further multiplexer 154 with clock enable and is fed back to a second input of the ALU 140.
The multiplexer 154 also has a size "o" input and an output 158 for the FB values. After initialisation, the multiplexer 154 is output-enabled only during compress mode. The FB signal output by the multiplexer 154 is latched by a latch 160 which has an initialising FB value input 156. The latch 160 has an output 162 for the latched FB signal.
The latched FB signal from output 162 is passed via an input 164 to a second ALU 166. This ALU 166 has a second input 167 for size "i" in expand mode or size "o" in compress mode. As discussed below, the value "i" is essentially the complement with respect to two of the expansion factor, in expand mode, and is unity in compress mode. A combined value FB+"i" (expand mode) or FB-"o" (compress mode) is provided at the output 168 of the ALU 166, and passed to a buffer 172, which is output-enabled in expand mode only to provide the FB+"i" value to the latch 160.
The unlatched FB signal is provided to selective inverter 174 which provides an unlatched FA value at its output 176. The FA value is latched by a latch 180.
Fractions are basically produced as follows. In expand mode as shown in Figure 19, size "i" and a fed-back FB value are input to the second ALU 166. This adds the two input signals to output a combined FB+"i" value. After passing through the buffer 172 and latch 160, this is output as the FB value. The FB+"i" value from the ALU 166 is inverted by the inverter 174 and passes via the latch 180 to be output as the FA value, in the same clock cycle.
In compress mode, as shown in Figure 20, size "o" is subtracted from an FB remainder REM value in the ALU 140 and latched by the latch 150 to give the new REM value, which is fed back to the ALU 140. The REM value is input to the multiplexer 154 together with the "o" size value. The output of the multiplexer 154 is the FB value such that FB=REM if REM-"o" is negative, else FB="o". FA is produced one cycle later when size "o" is applied to the ALU 166 (shown in Figure 18) and FB-"o" is output from it.
Initialisation

Expand initialisation depends on knowing the source fraction and the destination fraction, i.e. the fractional components of the respective addresses of the first input and output pixels of a run. These fractions are required for calculating the proportion of the first input pixel needed to produce the first output pixel, and hence the proportion of the second input pixel required. To calculate this initial value, the source fraction is read from parameter RAM, and is modified by adding or subtracting a scaled version of the destination fraction. The scaling of the destination fraction is done by multiplying it by a scale factor "size" which is inversely proportional to the expansion ratio, such that for a 1:1 transformation the destination fraction is unchanged, while for n:1 expansion the scale factor is 1/n. This scaled destination fraction is either added to or subtracted from the source fraction, depending on how the source/destination direction flags are set.
These flags indicate the directions in which the input/output pixel strings are to be respectively read/generated during expansion or compression. One string may be read in the opposite direction to the other, for example in order to reflect an input image. Note that it is possible, if the scaled destination fraction is sufficiently large, for the integer part of the start position to change by ±1.
For compress mode initialisation, the fractional components of the destination and source addresses stored in parameter RAM determine the initial values for the fraction generator. From the destination fraction is derived a quantity REM(0) = (1 - destination fraction) corresponding to the amount of the output pixel that needs to be "filled". However, in a similar way to the expand case, for compress mode initialisation the source fraction value is likely to be non-zero, so there will be some amount of destination displacement corresponding to that source fraction. In practice, the value of (1 - source fraction) is scaled by multiplication by the size value and used to "correct" the destination fraction.
Fraction Generator Operation

If "size" is the actual quantity specified in the list of parameters, then:

2048 <= size < 4096: o = 2048, i = 4096 - size  (Expansion mode)
0 < size < 2048:     o = size, i = 2048         (Compression mode)

The "zoom factor", or apparent magnification, is o/i in each case. (The hardware uses the most significant bit of the "size" parameter to determine which mode to use.)
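A sketch of this mode selection, keying off the most significant bit of a 12-bit size value as described (the function name is an assumption):

    def size_to_mode(size):
        """Derive mode and the (o, i) pair from the 12-bit size parameter."""
        if size & 0x800:                       # 2048 <= size < 4096: expansion
            return "expand", 2048, 4096 - size
        return "compress", size, 2048          # 0 < size < 2048: compression
    # The apparent magnification ("zoom factor") is o / i in either case.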
For an expansion ratio of 2048/i, a sequence of coefficients FB(0), FB(1), ... FB(n) and FA(0), FA(1), ... FA(n) for expand mode is generated as follows. At each cycle, or iteration, a pair of FA and FB values is generated. An initial value FB(0) that is derived from the corrected source address is passed to the fraction generator. FA(0) is given by:

FA(0) = 2048 - FB(0)

On the next iteration:

FB(1) = FB(0) + i           if FB(0) + i < 2048
      = FB(0) + i - 2048    if FB(0) + i >= 2048
FA(1) = 2048 - (FB(0) + i)  if FB(0) + i < 2048
      = 4096 - (FB(0) + i)  if FB(0) + i >= 2048

In general:

FB(n) = FB(n-1)             (Expand Initialize)
      = FB(n-1) + i         if FB(n-1) + i < 2048
      = FB(n-1) + i - 2048  if FB(n-1) + i >= 2048
      = FB(n-1)             (Hold mode)

Usually i is constant during a run, but under certain circumstances, such as a transition between contiguous runs, it will have a different value.
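Since FA(n) is always 2048 - FB(n), the expand-mode recurrence reduces to accumulating i into FB modulo 2048, with a wrap past 2048 marking the step to the next input pixel pair. A minimal Python sketch (the initial FB value would come from the corrected source address, as described above):

    def expand_fractions(i, fb, n_cycles):
        """Yield (FA, FB) pairs for an expansion ratio of 2048/i."""
        for _ in range(n_cycles):
            yield 2048 - fb, fb   # weights sum to 2048, i.e. to 1.0
            fb += i
            if fb >= 2048:        # window crosses into the next input pixel
                fb -= 2048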
In compress mode, a quantity REM is defined to be the amount of an output pixel that remains to be "filled". From this is derived FB, and then FA is derived from FB. For a compression of o/2048, where 0 < o < 2048, a sequence REM(0), REM(1), ... REM(n) and coefficients FB(0), FB(1), ... FB(n) and FA(0), FA(1), ... FA(n) is generated as follows. An initial value REM(0) is derived from the corrected destination address, and passed to the fraction generator. In the first cycle, the first REM, FB and FA values are given by:

REM(1) = REM(0) - o         if REM(0) > o
       = REM(0) - o + 2048  if REM(0) <= o
FB(1) = o                   if REM(0) > o
      = REM(0)              if REM(0) <= o
FA(1) = 0

In the next cycle, the following values are generated:

REM(2) = REM(1) - o         if REM(1) > o
       = REM(1) - o + 2048  if REM(1) <= o
FB(2) = o                   if REM(1) > o
      = REM(1)              if REM(1) <= o
FA(2) = o - FB(1)

In general:

REM(n) = REM(n-1) - o         if REM(n-1) > o
       = REM(n-1) - o + 2048  if REM(n-1) <= o
FB(n) = o                     if REM(n-1) > o
      = REM(n-1)              if REM(n-1) <= o
FA(n) = o - FB(n-1)           if REM(n-1) <= o
      = 0                     if REM(n-1) > o

Pixel String Compression - Example

By way of example, the compression of an input pixel string by a size ratio Z illustrated in Figure 14 can now be described in more detail. The compression of the seven input pixels shown in Figure 14 takes seven clock cycles, during which the values of REM, FA and FB are generated as shown in Table 1.
Table 1

Clock Cycle   REM         FB           FA
0             REM0        FB0          -
1             REM1 > o    FB1 = Z      FA1 = 0
2             REM2 > o    FB2 = Z      FA2 = 0
3             REM3 < o    FB3 = REM3   FA3 = 0
4             REM4 > o    FB4 = Z      FA4 = Z - REM3
5             REM5 > o    FB5 = Z      FA5 = 0
6             REM6 < o    FB6 = REM6   FA6 = 0

At the start of the process, REM0 and FB0 are set according to the initialisation process described above using the source and destination fractions, size ratio and source/destination direction flags. Compression then proceeds as illustrated as pairs of pixel values are clocked through the series connected latches 86, 88 of the spatial interpolator shown in Figure 16, a pair of fractions FA and FB being generated by the fraction generator 80 during each clock cycle. The fractions are numbered in this example according to their clock cycle.
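The compress-mode recurrences above can be exercised with a short software model of the fraction generator. The example values here (o = 585, REM(0) = 2048, i.e. a zero destination fraction) are assumptions chosen only to show the pattern seen in Table 1: runs of whole contributions FB = o, a partial FB = REM at an output boundary, and FA = o - FB on the following cycle.

    def compress_fractions(o, rem, n_cycles):
        """Yield (REM(n), FB(n), FA(n)) for n = 1..n_cycles."""
        fb_prev, prev_partial = None, False
        for _ in range(n_cycles):
            fa = (o - fb_prev) if prev_partial else 0   # FA(n) = o - FB(n-1)
            if rem > o:                 # whole input pixel contributes o
                fb, prev_partial = o, False
                rem -= o
            else:                       # partial pixel completes this output
                fb, prev_partial = rem, True
                rem = rem - o + 2048    # start filling the next output pixel
            yield rem, fb, fa
            fb_prev = fb

    for n, (rem, fb, fa) in enumerate(compress_fractions(585, 2048, 7), 1):
        print(n, rem, fb, fa)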
When the entire output string, in this example only comprising two pixels, PO' and P1', has been generated, the fraction generator 80 may be reinitialised. The details of reinitialisation depend on whether the next output string to be generated is contiguous with the first or not. If it is contiguous but has a different size ratio, the parameter controller 18,26 (Figure 4) causes a weighted size value intermediate between the previous and new size values to be used to initialise generation of the new output pixel string so as to reduce aliasing artefacts at the boundary between the output strings.
v) Vertically Processed Image Framestores 34:
While the original input image is being scanned column by column, the output columns from the vertical processing must be stored in intermediate framestores, known as the vertically processed image framestores 34, until the complete frame has been scanned and processed. As with the original input framestores 12, there are two, which alternate between read and write mode once per frame.
vi) Line Stores 36, 38: There are two sets of line stores, one for columns 36 and one for rows 38. The column stores 36 convert the erratically timed and ordered outputs from the vertical spatial interpolator 30 into the ordered sequence required for operation of the interleaved writes into the vertically processed store 34.
Similarly the row-stores 38 convert the randomly-timed horizontally processed outputs into a form consistent with horizontal video timing. Flash-clear RAMs are used to allow unwritten areas to assume a zero value.
vii) Lighting Maps 40 and Outputs: All of the output video paths pass through the lighting maps, which consist of separate but similar programmable RAM maps for each of Luma, Chroma B, Chroma R and Shape, as shown in Figure 21. These store possible output values of the video and shape signals. For each horizontal transformation region there is an associated 8-bit "Light" parameter passed from the horizontal parameter controller 26 that determines which of the 256 possible transfer functions is applicable for that region.
The 8-bit light parameter and 8-bit input video (or shape) signal values are combined to produce 16-bit addresses in order to access a RAM map. Effectively, the 'light' parameters alter the values of the video and shape signals for each region in what is essentially an image modulation technique. Since the luma, Cr, Cb and shape maps are independent, a range of effects is possible.
For example, this configuration allows different objects to be highlighted by using non-linear transfer functions, or apparent colour shifts to take place, or "shadows" to be created by reducing the shape value for the shadow region.
viii) Mixing Circuit 41: At the outputs of the lighting maps 40, the shape output signal is in synchronism with the video output signal. However, the shape path is configured so that the edges of the output shape are anti-aliased, i.e. "smoothed". The output video signal is "keyed", i.e. multiplied by, the shape signal in mixing circuit 41 so that the edges of the output video become anti-aliased.
The shape signal controls the mixture of fractional intensities of two video signals, one of which is the transformed video image signal, and the other of which is a signal received at a video input 42 (Figure 4) which may be a background signal.
This may be considered as a 'fade in/fade out' operation.
Considering a transformed region and an untransformed background region, for example, aliasing is avoided by smoothing the transition of pixel lines across the boundary between them. The desired smoothing is put into effect by the application of appropriate fractional shape signals at the boundary. The mixing circuit 41 has two programmable memory look up tables (one for each video signal) by which the shape signal is applied to produce fade in/fade out, before a composite output signal is produced in an adder within the mixing circuit 41.
Possible Effects

Real-time, fully anti-aliased spatial transforms of input video in digital component 4:2:2 format (per CCIR Recommendations 601 and 656) are possible, i.e. to produce an output signal where the picture is a spatially distorted version of the input picture. Output frames are generated at the same rate as the input frames with a two-frame-time delay from input to output.
Any defined area of the input video frame can be placed anywhere in the output frame with independent changes in size and direction along either axis, vertically and horizontally. Many small areas of the image can be manipulated to provide complex effects such as folding, rotation and shattering. The hardware has been designed for high flexibility, high image quality, and to operate with low software overhead.
A variety of image effects are achievable using the present system. These include perspective views, rotations, the display of previously flat regions as non-linear surfaces, and a "tiles" effect as described below.
a) Perspective Transformations: This method relates to the area of special effects in processing of digital video signals, where apparent movement of the image in 3D-space is required. The implementation can involve digital computer hardware of the type described above, or can be purely in software.
A 2-dimensional image can be made to appear to be at any distance and orientation in 3D-space with respect to the viewer by performing the appropriate subdivision and transformations.
The method of simulating perspective by video image subdivision and vertical processing followed by image subdivision and horizontal processing is shown in Figure 22. An original image 62 showing an object 64 is divided into regions 66. The image is transformed vertically to provide an intermediate image 68. The intermediate image 68 is again subdivided into a new set of regions 70. Horizontal transformation then results in an output image 72 showing the object in simulated perspective 74. The image regions may be trapezoidal (as previously defined), triangular or even a segment of one line. The regions may be contiguous or otherwise.
This process can also be carried out simultaneously so that a particular group of pixels can be replicated at each of a number of locations in the output. A further enhancement is to use an interactive device, such as a spaceball or tracker ball, to control the apparent position of the image in real-time.
To summarize, a 2-dimensional image with an arbitrary polygonal outline is subdivided into trapezoid shaped regions, or triangles in the special case, and each of these input regions undergoes a different spatial transformation to map onto an output region which is also trapezoidal (or triangular) to give the variation in size over the output image that would occur in a true perspective view. The nature of the transformation depends on the required position and orientation of the output region in 3-space relative to the input region. If a sufficient number of regions is used then a good approximation to a perspective view is attained.
The original image transformation is carried out in two passes as described above, once in the vertical direction to give an intermediate output which is subsequently processed in the horizontal direction. The combination of the two passes gives rise to the required arrangement of the output in two dimensions.
In the one-dimensional transform, an arbitrary number of linearly arranged input pixels is mapped onto an arbitrary number of linearly arranged output pixels, by a process of interpolation and accumulation of the input pixels to generate new output pixels. Depending on the ratio of input to output pixels, an apparent expansion or contraction occurs. More than one segment of the line may be involved, with each segment having a different transformation onto the output segment. The segments may or may not be contiguous.
The trapezoids are formed by adjacent groups of the line segments of varying length, where the variation in length follows an arithmetic progression. Hence the parallel sides of the trapezoids are parallel to the direction of processing. The non-parallel sides of the trapezoids correspond to the ends of the segments, and these may be contiguous with the segments of other trapezoids.
b) Rotations: When an effect such as a rotating rectangle is performed, the size of the columns and rows must be changed as the rectangle rotates to provide a constant apparent size on the screen. This function is an inverse cosine function, and so it is apparent that it is impossible to rotate a picture by 90° in one pass. The picture is manipulated to allow a continuous rotation that has no obvious discontinuity. This is achieved by writing into input framestores 12 at 90° to normal and so effectively rotating the picture by 90°. A continuous rotation is then possible by changing the amount and direction of the skew at the same time as changing the input by 90°. In practice two sets of input framestores 12 are used with the 90° rotated image stored in one, so that both images are available at all times, enabling simultaneous asynchronous rotations of various parts of the picture. The framestores 12 are capable of being either written to in one plane or read from in the other plane at full pixel rate.
c) Non-linear Surfaces: The transformation on each region can be done in such a way that the overall transformed image appears to be wrapped over a non-linear surface. Examples of such effects could be described as warps, ripples, cylindrical wraps or page turns according to the subjective effect. They can be further enhanced by using the lighting maps to simulate illumination and shadowing. These effects usually require more transformation regions than ordinary perspective views, the transformation regions being trapezoidal, triangular or linear.
d) Tiles Effect: The original image is first subdivided into rectangular "tiles" which then rotate independently about their horizontal axes, while moving away from the viewer. A possible variation of this effect is to reverse the sequence of operation of the effect, so that small rotating tiles move towards the viewer and stop to form a complete picture.

Claims (17)

CLAIMS:
1. Apparatus for transformation of video images comprising: input means for receiving a signal representative of a video image; first image region selection means (14) adapted to select a plurality of regions of an image; first direction transformation means (18, 29, 30) coupled to the input means and to the first image region selection means and adapted to scan each region in a first direction by mapping a selected number of input pixels to a selected number of output pixels that form scan lines in the first direction, and to combine each scanned region to form an intermediate image; second image region selection means (22) for selection of a plurality of regions of the intermediate image; second direction transformation means (26, 31, 32) coupled to the output of the first direction transformation means and to the second image region selection means and adapted to scan each region of the intermediate image in a second direction by mapping a second selected number of input pixels to a second selected number of output pixels that form scan lines in the second direction, and to combine each scanned region of the intermediate image to form a transformed image.
2. Apparatus according to claim 1, in which each direction transformation means comprises control means (18, 26) and spatial interpolator means (30, 32), the control means (18, 26) being adapted to provide at least one transformation control parameter describing transformation in the respective direction, to the spatial interpolator means (30, 32), and the spatial interpolator means (30, 32) being adapted to transform the image in the respective direction.
3. Apparatus according to claim 2, in which each spatial interpolator means (30, 32) comprises a directional spatial interpolator which is capable of smoothing the transition at a boundary between two contiguous transformation regions.
4. Apparatus according to claim 2 or claim 3, in which transformation control parameters comprise one or more of: (i) source column/row segment to operate on, (ii) source and destination address for selected pixels, (iii) the number of pixels to operate on (run length), (iv) size change values, and (v) selected bit flags.
5. Apparatus according to claim 1, 2, 3 or 4 in which one of the said directions is vertical in the image and the other direction is horizontal in the image.
6. Apparatus according to any preceding claim, in which each direction transformation means is operative to control the sequence of processing of image regions for transformation and the sequence of pixels in each region.
7. Apparatus according to any preceding claim, for real-time transformation of video images, in which output frames are produced at substantially the same rate as input frames are read.
8. Apparatus according to claim 7, in which there is a two-frame delay between input and output frames.
9. Apparatus according to any preceding claim, in which the received signals are digital video signals and the apparatus operates with digital signals.
10. Apparatus according to any preceding claim, in which, upon expansion or compression of a region, samples of one input pixel size are weightedly averaged so as to smooth transitions between pixels of that expanded or compressed region.
11. Apparatus according to any preceding claim further comprising light map means (40) operative to selectively adjust brightness and/or colour of each of a plurality of transformed image regions.
12. A method of transformation of video images comprising the steps of: (a) dividing an input image into a first plurality of regions; (b) scanning each region in a first direction by mapping a selected number of input pixels to a selected number of output pixels that form scan lines in the first direction, and combining each scanned region to form an intermediate image; (c) dividing the intermediate image into a second plurality of regions; and (d) scanning each region of the intermediate image in a second direction different from the first direction by mapping a second selected number of input pixels to a second selected number of output pixels that form scan lines in the second direction, and combining each scanned region of the intermediate image to form a transformed image.
13. A method according to claim 12, in which the first direction is substantially vertical.
14. A method according to claim 12 or 13, in which the first and second directions are substantially orthogonal.
15. A method according to claim 12, 13 or 14, in which each region is trapezoidal or triangular or a segment of a pixel line.
16. A method according to any of claims 12 to 15, in which at least some of the regions of the input image are not contiguous with any of the other regions.
17. A method according to any of claims 12 to 16, in which brightness and/or colour of each of a plurality of transformed image regions are selectively adjusted.
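As a rough illustration of the two-pass method of claims 1 and 12, with the weighted averaging of claim 10, the Python sketch below resamples a set of regions along one axis per pass, the vertical pass producing the intermediate image consumed by the horizontal pass. The names (Region, resample_line, directional_pass, two_pass_transform) and the fixed per-region segments are invented for the example; they stand in loosely for the run-length and size-change parameters of claim 4 and are not the hardware pipeline itself.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Region:
        """Simplified per-region parameters (cf. claim 4)."""
        src_start: int  # first source row/column of the segment
        src_len: int    # number of source pixels to operate on
        dst_start: int  # destination address in the output
        dst_len: int    # run length written to the output

    def resample_line(line, out_len):
        """Map len(line) input pixels to out_len output pixels,
        weightedly averaging adjacent samples (claim 10)."""
        src = np.linspace(0.0, len(line) - 1.0, out_len)
        lo = np.floor(src).astype(int)
        hi = np.minimum(lo + 1, len(line) - 1)
        frac = src - lo
        return (1.0 - frac) * line[lo] + frac * line[hi]

    def directional_pass(image, regions, axis):
        """Scan each region along one axis and combine the scanned
        regions into a single image (steps (a)-(b) or (c)-(d))."""
        out = np.zeros(image.shape, dtype=np.float64)
        n_lines = image.shape[1] if axis == 0 else image.shape[0]
        for r in regions:
            for i in range(n_lines):
                if axis == 0:  # vertical pass: columns are the scan lines
                    seg = image[r.src_start:r.src_start + r.src_len, i]
                    out[r.dst_start:r.dst_start + r.dst_len, i] = \
                        resample_line(seg, r.dst_len)
                else:          # horizontal pass: rows are the scan lines
                    seg = image[i, r.src_start:r.src_start + r.src_len]
                    out[i, r.dst_start:r.dst_start + r.dst_len] = \
                        resample_line(seg, r.dst_len)
        return out

    def two_pass_transform(image, v_regions, h_regions):
        intermediate = directional_pass(image, v_regions, axis=0)
        return directional_pass(intermediate, h_regions, axis=1)

    # Example: squeeze the top half of a 512x512 frame into the top third
    # of the output, the bottom half into the rest, then stretch the left
    # half of the intermediate image across the full width.
    v_regions = [Region(0, 256, 0, 170), Region(256, 256, 170, 342)]
    h_regions = [Region(0, 256, 0, 512)]
    # transformed = two_pass_transform(frame, v_regions, h_regions)

In the real apparatus the region parameters may change from scan line to scan line (which is how trapezoidal and triangular regions arise), and the two passes can operate on successive frames, which is consistent with the two-frame delay of claim 8.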
GB9107732A 1990-04-11 1991-04-11 Spatial transformation of video images Withdrawn GB2244887A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AUPJ958290 1990-04-11
AUPK067490 1990-06-18
AUPK067590 1990-06-18
AUPK098690 1990-07-03

Publications (2)

Publication Number Publication Date
GB9107732D0 GB9107732D0 (en) 1991-05-29
GB2244887A (en) 1991-12-11

Family

ID=27424284

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9107732A Withdrawn GB2244887A (en) 1990-04-11 1991-04-11 Spatial transformation of video images
GB9107726A Withdrawn GB2245124A (en) 1990-04-11 1991-04-11 Spatial transformation of video images

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB9107726A Withdrawn GB2245124A (en) 1990-04-11 1991-04-11 Spatial transformation of video images

Country Status (1)

Country Link
GB (2) GB2244887A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260654B (en) * 2018-11-30 2024-03-19 西安诺瓦星云科技股份有限公司 Video image processing method and video processor

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1455822A (en) * 1973-05-23 1976-11-17 British Broadcasting Corp Sampling rate changer
GB2047040B (en) * 1978-03-08 1982-10-06 Secr Defence Scan converter for a television display
US4419686A (en) * 1981-02-04 1983-12-06 Ampex Corporation Digital chrominance filter for digital component television system
GB2181923B (en) * 1985-10-21 1989-09-20 Sony Corp Signal interpolators
JPH02137473A (en) * 1988-11-17 1990-05-25 Dainippon Screen Mfg Co Ltd Interpolating method in image scan recording
GB2236452B (en) * 1989-07-14 1993-12-08 Tektronix Inc Coefficient reduction in a low ratio sampling rate converter
KR920009609B1 (en) * 1989-09-07 1992-10-21 삼성전자 주식회사 Video signal scene-definition using interpolation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1981002939A1 (en) * 1980-04-11 1981-10-15 Ampex System for spatially transforming images
EP0162501A2 (en) * 1984-04-26 1985-11-27 Philips Electronics Uk Limited Video signal processing arrangement

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6091446A (en) * 1992-01-21 2000-07-18 Walker; Bradley William Consecutive frame scanning of cinematographic film
US6239812B1 (en) 1992-06-29 2001-05-29 International Business Machines Corporation Apparatus and method for high speed 2D/3D image transformation and display using a pipelined hardware
EP0576696A1 (en) * 1992-06-29 1994-01-05 International Business Machines Corporation Apparatus and method for high speed 2D/3D image transformation and display using a pipelined hardware
EP0703706A2 (en) * 1994-09-20 1996-03-27 Kabushiki Kaisha Toshiba Digital television set
EP0703706A3 (en) * 1994-09-20 1997-01-15 Toshiba Kk Digital television set
US5712689A (en) * 1994-09-20 1998-01-27 Kabushiki Kaisha Toshiba Digital television set
EP0762760A2 (en) * 1995-08-25 1997-03-12 Matsushita Electric Industrial Co., Ltd. Circuit and method for converting a video signal format
EP0762760A3 (en) * 1995-08-25 1998-08-26 Matsushita Electric Industrial Co., Ltd. Circuit and method for converting a video signal format
US5764311A (en) * 1995-11-30 1998-06-09 Victor Company Of Japan, Ltd. Image processing apparatus
EP0777198A1 (en) * 1995-11-30 1997-06-04 Victor Company Of Japan, Limited Image processing apparatus
GB2323991B (en) * 1997-04-04 1999-05-12 Questech Ltd Improvements in and relating to the processing of digital video images
GB2323991A (en) * 1997-04-04 1998-10-07 Questech Ltd Processing digital video images
EP1072154A1 (en) * 1998-03-25 2001-01-31 Intel Corporation Method and apparatus for improving video camera output quality
EP1072154A4 (en) * 1998-03-25 2002-11-20 Intel Corp Method and apparatus for improving video camera output quality
WO2002093483A1 (en) * 2001-05-15 2002-11-21 Koninklijke Philips Electronics N.V. Method and apparatus for adjusting an image to compensate for an offset position of an observer

Also Published As

Publication number Publication date
GB9107726D0 (en) 1991-05-29
GB2245124A (en) 1991-12-18
GB9107732D0 (en) 1991-05-29

Similar Documents

Publication Publication Date Title
US5369735A (en) Method for controlling a 3D patch-driven special effects system
US4602285A (en) System and method for transforming and filtering a video image
DE3177299T4 (en) Image processing system for spatial image transformation.
US4631750A (en) Method and system for spacially transforming images
US4472732A (en) System for spatially transforming images
US4468688A (en) Controller for system for spatially transforming images
US5173948A (en) Video image mapping system
US4774581A (en) Television picture zoom system
JPS61285576A (en) Method and apparatus for generating fractal
EP0287331B1 (en) Sampled data memory system eg for a television picture magnification system
US6166773A (en) Method and apparatus for de-interlacing video fields to progressive scan video frames
JPH069377B2 (en) Video signal processor
GB2244887A (en) Spatial transformation of video images
US5646696A (en) Continuously changing image scaling performed by incremented pixel interpolation
JP2813881B2 (en) Video signal processing device
EP0449469A2 (en) Device and method for 3D video special effects
JPH0440176A (en) Television special effect device
JPH03128583A (en) Method and apparatus for spatially deforming image
CA1236924A (en) Memory system for sequential storage and retrieval of video data
GB2248534A (en) Digital video processing apparatus
EP0201130A2 (en) System for spatially transforming images
Jung Digital HDTV image manipulation: architecture of a transformation frame store
JPS62140549A (en) Image editing processor
JPS59100975A (en) Output device of interpolated picture
JPH0589236A (en) Image processor

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)