
AU2009222438A1 - Anti-aliased polygon rendering - Google Patents

Anti-aliased polygon rendering

Info

Publication number
AU2009222438A1
Authority
AU
Australia
Prior art keywords
pixel
edge
value
area
edges
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009222438A
Inventor
Philip Andrew Chambers
Alan Valev Tonisson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2009222438A priority Critical patent/AU2009222438A1/en
Publication of AU2009222438A1 publication Critical patent/AU2009222438A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Description

S&F Ref: 920322

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Alan Valev Tonisson, Philip Andrew Chambers
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Anti-aliased polygon rendering

The following statement is a full description of this invention, including the best method of performing it known to me/us:

ANTI-ALIASED POLYGON RENDERING

TECHNICAL FIELD

The present invention relates to a method and apparatus for rendering images and, in particular, to the anti-aliased rendering of polygons for printing or display on a display device. The invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for the anti-aliased rendering of polygons for printing or display on a display device.

BACKGROUND

A real world image can be represented as a mathematical continuous description. When this is rendered with limited resolution, for example on a computer display, aliasing artefacts become visible. Aliasing is caused by high frequency components of the original image appearing as low frequency components in the rendered image. When rendering polygons, aliasing is visible as staircasing or "jaggies" along the edges of the polygons.

An image may be described in terms of the coordinates of line segments. Fig. 3 shows an example of a line segment 310, marking one edge of an object 320, passing through two points 330 and 340. The square tiles, e.g. 360, of Fig. 3 represent pixels. Fig. 3 is based on the assumption that pixels are square and that the pixel boundaries have integer x or y coordinates. When the image is rendered, the line segment 310 will appear jagged because of the discrete pixels used to represent the image.

The process of removing or reducing the effects of aliasing is referred to as anti-aliasing. Aliasing can be removed or reduced by filtering or blurring all or part of the image before sampling. The filtering or blurring removes the high frequency components of the image prior to sampling. Anti-aliasing removes or reduces the jagged edges but produces a less sharp image. The loss in sharpness is generally preferable to the jagged edges.

Anti-aliasing techniques can be applied at various stages in image generation. For example, when producing an image composed of multiple objects, each object can be individually anti-aliased before the objects are composited together to form the final image. In this case the pixels of the images of the individual objects will typically have associated opacity information. Anti-aliasing techniques may be applied to the opacity information to determine the best opacity values for the pixels in each object.

There are a number of methods available for anti-aliasing images, but they are all effectively equivalent to filtering or blurring all or part of the image in some way. The methods differ in how they calculate the resulting filtered and sampled image. One such anti-aliasing technique is described below: for each pixel through which an edge of an object passes, the intensity of that pixel is set according to the proportion of the pixel's area covered by the object.
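The arithmetic behind this rule is a straightforward weighted blend. The following Python fragment is an illustrative sketch only (the function name and the 8-bit intensity range are assumptions, not part of the specification); it reproduces the centre-pixel calculation of the Fig. 3 example that follows.

```python
def coverage_intensity(object_intensity, background_intensity, coverage):
    """Blend object and background intensities by the fraction of the
    pixel's area that the object covers (0.0 <= coverage <= 1.0)."""
    return object_intensity * coverage + background_intensity * (1.0 - coverage)

# Centre pixel of Fig. 3: the object covers 9/32 of the pixel's area.
print(int(coverage_intensity(128, 0, 9 / 32)))  # -> 36
```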
In Fig. 3 an object 320 with an intensity of 128 out of 255 is placed on top of a background of intensity 0. The object completely covers the three pixels in the upper right corner: 350, 352 and 354. Therefore the intensity of these pixels is set to the intensity of the object, i.e. 128. The object does not cover any of the pixel in the bottom left hand corner, such as pixel 360, and so the intensity of this pixel is set to the intensity of the background, i.e. 0. As the object 320 covers 9/32 of the area of the centre pixel 370, the intensity of this pixel is set to (128 * 9/32) + (0 * 23/32) = 36, where the zero value is the intensity of the background. This technique therefore requires calculating, for each pixel through which the edge of an object passes, the area of the pixel covered by the object.

One technique for approximating this area calculation consists of pre-calculating tables of coverage areas and performing a lookup using properties of the edge, such as orientation and intercept with the x-axis of the pixel, as indices. In particular, Fig. 5 shows the edge of an object 510 of intensity 128 on a background of intensity 0 passing through 4 pixels lying on a scan line bordered by lines y0 and y1. The object intersects scan line y0 in the left-most pixel 530 and scan line y1 in the right-most pixel 570. For each pixel two indices are generated: the first is the number of pixels to the left of the pixel at which a scan line intersect lies, and the second is the number of pixels to the right at which a scan line intersect lies. For pixel 540 the indices are 1 and 2 respectively. For pixel 550 the indices are 2 and 1 respectively. Pixels containing intersect points are subdivided into N vertical strips (in this example 3) and the strip in which an intersect point lies is used as a subindex. Pixel 530 therefore has indices 0, 1 and 3, and pixel 570 has indices 3 and 0, 0.

A significant problem with the table lookup method is that a large amount of memory is required to hold the table, and the accuracy of the area result returned is limited by the size of the table.

When a pixel contains area fragments from a plurality of polygons, the contributions from each fragment must be blended together to produce an overall output value for the pixel in a process known as compositing. One method for performing such compositing is the Porter-Duff method. Porter-Duff compositing is designed for compositing semi-transparent objects pixel by pixel without considering exact boundaries of the objects. The Porter-Duff model of opacity is based on the assumption that each pixel of each object is composed of a large number of randomly distributed sub-pixels, and each of these sub-pixels is treated as being either fully opaque or fully transparent. The probability of a particular sub-pixel being opaque is determined by the percentage of opaque sub-pixels, and this is given by the opacity value of each object. When a first object is composited on top of a second object, the probability that the resulting sub-pixel will consist of an opaque sub-pixel from the first object sitting on top of an opaque sub-pixel from the second object is given by the percentage of opaque sub-pixels in the first object multiplied by the percentage of opaque sub-pixels in the second object. Therefore the area of overlap of the two objects can be assumed to be the product of the opacities of the overlapping objects.
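To make the overlap assumption concrete, here is a minimal sketch of a Porter-Duff 'over' operation on (intensity, opacity) pairs. It is not code from the specification; the function name and representation are assumptions, and it reproduces the Fig. 18 example described next.

```python
def porter_duff_over(top, bottom):
    """Composite 'top' over 'bottom'; each operand is an (intensity, opacity)
    pair.  The overlap of the two fragments is taken to be the product of
    their opacities, per the Porter-Duff sub-pixel model."""
    ti, ta = top
    bi, ba = bottom
    overlap = ta * ba
    alpha = ta + ba - overlap                   # total covered area
    intensity = ti * ta + bi * (ba - overlap)   # the top fragment wins inside the overlap
    return intensity, alpha

# Fig. 18: fragment B (200, 30%) over fragment A (100, 50%).
print(porter_duff_over((200, 0.30), (100, 0.50)))  # -> (95.0, 0.65), up to float round-off
```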
Fig. 18 shows an example of compositing using the Porter-Duff method for a pixel that contains two polygon fragments. Fragment A 1810 has an intensity of 100 and an opacity of 50%. Fragment B 1820 has an intensity of 200 and an opacity of 30% and is to be composited on top of fragment A using a Porter-Duff 'over' operation. The area of overlap 1830 is assumed to be the product of the object opacities and is given by 30% * 50% = 15%. Therefore the area of A not overlapped by B, 1840, is given by 50% - 15% = 35%. The area of B not overlapping A, 1850, is given by 30% - 15% = 15%. The area of the pixel covered by objects, which is taken as the opacity of the composited pixel, is calculated by summing these three areas, i.e. 35% + 15% + 15% = 65%. The intensity of the composited pixel is calculated by multiplying the area of each region by its corresponding intensity, i.e. (200 * 15%) + (200 * 15%) + (100 * 35%) = 95. Therefore the resulting composited pixel can be represented by 1860.

A limitation of the Porter-Duff method arises from the assumption of overlap based on object opacities. This can result in calculated opacities being less than the correct value, leading to some bleed-through of the background colour in the rendered image. To prevent this effect occurring it is not sufficient to obtain just pixel fragment areas. Information on fragment overlap, or the visible area of each pixel fragment, is required so that the pixel can be composited taking true visible fragment areas into account. For the table lookup method described previously, this requires a greatly increased size of lookup table to provide estimates of not only individual area fragments but also fragment overlap.

Another technique for approximating fragment area calculation, which provides a finer estimate of fragment overlap, is termed the super-sampling technique. It consists of dividing each output pixel into N x M sub-pixels, determining whether the centre of each sub-pixel lies inside or outside of the object, and setting the luminance of the output pixel according to the number of sub-pixels lying inside or outside the object. In particular, Fig. 4 shows the edge of an object 410 of intensity 128 on a background of intensity 0 passing through a pixel 420. The pixel is divided into 4x4 sub-pixels. The centres of 6 of these sub-pixels lie within the object whilst 10 do not. Therefore the intensity of the output pixel is set to ((6 * 128) + (10 * 0)) / 16 = 48. A sketch of this sub-pixel counting appears below.

A significant problem with super-sampling is that it is inefficient because of the large amount of processing required to handle sub-pixel resolutions. If each output pixel corresponds to a four by four block of super-sampled pixels, then sixteen sub-pixel operations need to be performed for each output pixel. Furthermore, as super-sampling still relies on the Porter-Duff method to perform compositing on the sub-pixel regions, bleed-through can still be present in the rendered image.
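For concreteness, the sub-pixel counting just described can be sketched as follows. This is an illustrative sketch only; the function name, the predicate-based object test and the example half-plane edge are assumptions, chosen so that 6 of the 16 sub-pixel centres fall inside the object, as in Fig. 4.

```python
def supersample_intensity(px, py, inside, obj_intensity, bg_intensity, n=4):
    """Estimate the intensity of the unit pixel with top-left corner (px, py)
    by testing the centre of each of its n x n sub-pixels.

    'inside' is a predicate returning True when a point lies in the object."""
    hits = sum(
        inside(px + (i + 0.5) / n, py + (j + 0.5) / n)
        for j in range(n)
        for i in range(n)
    )
    return (hits * obj_intensity + (n * n - hits) * bg_intensity) / (n * n)

# A half-plane object cutting the pixel as in Fig. 4: 6 of the 16 sub-pixel
# centres lie inside, so the result is ((6 * 128) + (10 * 0)) / 16 = 48.
print(supersample_intensity(0.0, 0.0, lambda x, y: x + y < 0.8, 128, 0))  # -> 48.0
```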
It is an object of the present invention to ameliorate one or more of the limitations of the methods described above.

SUMMARY

Disclosed is a method of rendering anti-aliased edges of a polygon. The method processes a description of the polygon to produce a plurality of line segments and scans each of the line segments to determine a plurality of values for a triangle function of the line segment. The method calculates a value of the triangle function for a corner of a pixel which is intersected by one of the plurality of line segments by determining a value for an area function of the pixel based on a value of an area function of a previous pixel. The method then determines an opacity value for the pixel, utilising the calculated value; and determines a colour value for each pixel, utilising the opacity value.

Other aspects are disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described with reference to the following drawings, where:

Fig. 1 is a flow diagram showing a two pass operation for rendering an image;
Fig. 2a is a flow diagram showing a first pass of a method for rendering an image;
Fig. 2b is a flow diagram showing a second pass of a method for rendering an image;
Fig. 3 shows a polygon sitting on a grid of pixels;
Fig. 4 shows a method for approximating pixel fragment areas;
Fig. 5 shows a method for approximating pixel fragment areas;
Fig. 6 shows a plurality of edges generated from a polygon description;
Fig. 7 shows the splitting of edges for intersecting polygons;
Fig. 8 shows the sorting of edges of polygons in scan order;
Fig. 9 shows an example of processing an image;
Fig. 10 shows a triangle function for a given point with respect to a given edge;
Fig. 11 shows the change in triangle function as a point is stepped horizontally;
Fig. 12 shows the change in triangle function as a point is stepped vertically;
Fig. 13 shows a pixel fragment calculation for an edge entering a pixel at the left edge and exiting at the right edge;
Fig. 14 shows a pixel fragment calculation for an edge entering a pixel at the left edge and exiting at the bottom edge;
Fig. 15 shows a pixel fragment calculation for an edge entering a pixel at the top edge and exiting at the right edge;
Fig. 16 is a flow diagram showing a method of processing a plurality of edges in a pixel;
Fig. 17 is a flow diagram showing a method of calculating a pixel fragment area for an active edge;
Fig. 18 shows the compositing of overlapping polygon fragments;
Fig. 19 shows the compositing of abutting polygon fragments;
Fig. 20 shows a polygon edge having a negative slope;
Fig. 21 shows the stepping operations for edges having a negative slope;
Fig. 22 is a flow diagram showing a method of calculating a pixel fragment area for an active edge with a negative slope;
Fig. 23 shows the calculation of initial values for an edge with a non-integer start point;
Fig. 24 shows the calculation of initial values for an edge with a non-integer start point;
Fig. 25 shows fragments in a pixel cut by a plurality of edges;
Fig. 26 shows the calculation of the exit side of a pixel for an edge;
Fig. 27 shows the fragment area for a pixel containing an edge end point, the edge entering the pixel from the top;
Fig. 28 shows the fragment area for a pixel containing an edge end point, the edge entering the pixel from the left hand side;
Fig. 29 shows the placement of pseudo start and end points for a line of negative slope; and
Figs. 30A and 30B form a schematic block diagram of a general purpose computer system upon which the arrangements described can be practiced.
DETAILED DESCRIPTION INCLUDING BEST MODE

The present invention is a method of rendering polygons represented by a plurality of line segments, incorporating a fast and accurate means of calculating areas of pixel fragments, enabling high quality anti-aliasing of polygon edges.

Figs. 30A and 30B collectively form a schematic block diagram of a general purpose computer system 3000, upon which the various arrangements described can be practiced.

As seen in Fig. 30A, the computer system 3000 is formed by a computer module 3001, input devices such as a keyboard 3002, a mouse pointer device 3003, a scanner 3026, a camera 3027, and a microphone 3080, and output devices including a printer 3015, a display device 3014 and loudspeakers 3017. An external Modulator-Demodulator (Modem) transceiver device 3016 may be used by the computer module 3001 for communicating to and from a communications network 3020 via a connection 3021. The network 3020 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 3021 is a telephone line, the modem 3016 may be a traditional "dial-up" modem. Alternatively, where the connection 3021 is a high capacity (eg: cable) connection, the modem 3016 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 3020.

The computer module 3001 typically includes at least one processor unit 3005, and a memory unit 3006, for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The module 3001 also includes a number of input/output (I/O) interfaces including an audio-video interface 3007 that couples to the video display 3014, loudspeakers 3017 and microphone 3080, an I/O interface 3013 for the keyboard 3002, mouse 3003, scanner 3026, camera 3027 and optionally a joystick (not illustrated), and an interface 3008 for the external modem 3016 and printer 3015. In some implementations, the modem 3016 may be incorporated within the computer module 3001, for example within the interface 3008. The computer module 3001 also has a local network interface 3011 which, via a connection 3023, permits coupling of the computer system 3000 to a local computer network 3022, known as a Local Area Network (LAN). As also illustrated, the local network 3022 may also couple to the wide network 3020 via a connection 3024, which would typically include a so-called "firewall" device or device of similar functionality. The interface 3011 may be formed by an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement.

The interfaces 3008 and 3013 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 3009 are provided and typically include a hard disk drive (HDD) 3010. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 3012 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (eg: CD-ROM, DVD), USB-RAM, and floppy disks, for example, may then be used as appropriate sources of data to the system 3000.
The components 3005 to 3013 of the computer module 3001 typically communicate via an interconnected bus 3004 and in a manner which results in a conventional mode of operation of the computer system 3000 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or alike computer systems evolved therefrom.

The method of rendering polygons may be implemented using the computer system 3000 wherein the processes of Figs. 1 to 29, to be described, may be implemented as one or more software application programs 3033 executable within the computer system 3000. In particular, the steps of the method of rendering polygons are effected by instructions 3031 in the software 3033 that are carried out within the computer system 3000. The software instructions 3031 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the rendering methods and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 3033 is generally loaded into the computer system 3000 from a computer readable medium, and is then typically stored in the HDD 3010, as illustrated in Fig. 30A, or the memory 3006, after which the software 3033 can be executed by the computer system 3000. In some instances, the application programs 3033 may be supplied to the user encoded on one or more CD-ROMs 3025 and read via the corresponding drive 3012 prior to storage in the memory 3010 or 3006. Alternatively the software 3033 may be read by the computer system 3000 from the networks 3020 or 3022 or loaded into the computer system 3000 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 3000 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 3001. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 3001 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs 3033 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 3014. Through manipulation of typically the keyboard 3002 and the mouse 3003, a user of the computer system 3000 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 3017 and user voice commands input via the microphone 3080.

Fig. 30B is a detailed schematic block diagram of the processor 3005 and a "memory" 3034. The memory 3034 represents a logical aggregation of all the memory devices (including the HDD 3010 and semiconductor memory 3006) that can be accessed by the computer module 3001 in Fig. 30A.

When the computer module 3001 is initially powered up, a power-on self-test (POST) program 3050 executes. The POST program 3050 is typically stored in a ROM 3049 of the semiconductor memory 3006. A program permanently stored in a hardware device such as the ROM 3049 is sometimes referred to as firmware. The POST program 3050 examines hardware within the computer module 3001 to ensure proper functioning, and typically checks the processor 3005, the memory (3009, 3006), and a basic input-output systems software (BIOS) module 3051, also typically stored in the ROM 3049, for correct operation. Once the POST program 3050 has run successfully, the BIOS 3051 activates the hard disk drive 3010. Activation of the hard disk drive 3010 causes a bootstrap loader program 3052 that is resident on the hard disk drive 3010 to execute via the processor 3005. This loads an operating system 3053 into the RAM memory 3006, upon which the operating system 3053 commences operation. The operating system 3053 is a system level application, executable by the processor 3005, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

The operating system 3053 manages the memory (3009, 3006) in order to ensure that each process or application running on the computer module 3001 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 3000 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 3034 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 3000 and how such is used.

The processor 3005 includes a number of functional modules including a control unit 3039, an arithmetic logic unit (ALU) 3040, and a local or internal memory 3048, sometimes called a cache memory. The cache memory 3048 typically includes a number of storage registers 3044-3046 in a register section. One or more internal buses 3041 functionally interconnect these functional modules. The processor 3005 typically also has one or more interfaces 3042 for communicating with external devices via the system bus 3004, using a connection 3018.

The application program 3033 includes a sequence of instructions 3031 that may include conditional branch and loop instructions. The program 3033 may also include data 3032 which is used in execution of the program 3033. The instructions 3031 and the data 3032 are stored in memory locations 3028-3030 and 3035-3037 respectively.
Depending upon the relative size of the instructions 3031 and the memory locations 3028-3030, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 3030. Alternatively, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 3028-3029.

In general, the processor 3005 is given a set of instructions which are executed therein. The processor 3005 then waits for a subsequent input, to which it reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 3002, 3003, data received from an external source across one of the networks 3020, 3022, data retrieved from one of the storage devices 3006, 3009 or data retrieved from a storage medium 3025 inserted into the corresponding reader 3012. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 3034.

The disclosed rendering arrangements use input variables 3054, which are stored in the memory 3034 in corresponding memory locations 3055-3058. The rendering arrangements produce output variables 3061, which are stored in the memory 3034 in corresponding memory locations 3062-3065. Intermediate variables may be stored in memory locations 3059, 3060, 3066 and 3067.

The register section 3044-3046, the arithmetic logic unit (ALU) 3040, and the control unit 3039 of the processor 3005 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 3033. Each fetch, decode, and execute cycle comprises:

(a) a fetch operation, which fetches or reads an instruction 3031 from a memory location 3028;
(b) a decode operation in which the control unit 3039 determines which instruction has been fetched; and
(c) an execute operation in which the control unit 3039 and/or the ALU 3040 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 3039 stores or writes a value to a memory location 3032.

Each step or sub-process in the processes of Figs. 1 - 29 is associated with one or more segments of the program 3033, and is performed by the register section 3044-3047, the ALU 3040, and the control unit 3039 in the processor 3005 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 3033.

The method of rendering polygons may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of polygon rendering. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

The method starts with a description of an image including a set of polygons to be composited, onto a page for printing or displaying, with a set of compositing operations. Each polygon is associated with a colour and a transparency value, transparency being the complement of opacity.
The method describes how the image may be rendered at a specified resolution by calculating a colour value for each pixel in the image. In the following description it is assumed that pixels are square, that pixel boundaries have integer x and y coordinates, that x coordinates increase to the right and that y coordinates increase downwards. The line segments of the polygons being rendered can have integer or non-integer end points.

Fig. 1 is a flow diagram showing the top level operations performed by the application program when anti-aliasing the edges of polygons in an image in accordance with an embodiment of the present invention. The process begins at step 110 where a description of the polygons in an image is converted into a plurality of records each containing details relating to the edges of the polygons. This process will be described in more detail below. The next step 120 is to render the image. This process will also be described in more detail below.

Fig. 2a is a flow diagram of the step of converting the description of polygons into a set of edge records (step 110 of Fig. 1). The process begins at step 210 where a description of the polygons in an image is converted into a plurality of edge records. The number of edge records will vary depending on the number of edges which, in turn, is dependent on the number of polygons and the number of sides the polygons have. The edge records comprise coordinate values representing the start and end points of a particular edge, where start and end refer to raster scan order. The next step 215 calculates any points where edges of polygons intersect and splits such edges into separate edges at the point of intersection. The next step 220 generates and stores pseudo start and end points for edges with negative slope. Such a case is shown in Fig. 20. More detail on these pseudo start and end points will be given below. The next step 225 sorts the edges by start point in scan order. A point (x,y) precedes point (x',y') in scan order if and only if y < y', or y = y' and x < x'. An example of scan ordering is shown in Fig. 8, where two polygons containing a total of six edges have their edges labelled in ascending scan order, 810 to 860. A sketch of this comparison appears below. Referring back to Fig. 2a, the final step 230 calculates initial values for the variables associated with each edge. These variables and calculations will be explained in more detail below.

The steps in Fig. 2a will now be illustrated using examples. The conversion of polygons to edges (step 210 of Fig. 2a) is illustrated in Fig. 6. A polygon 610 has three corners: 620, 630 and 640. Three edge records are recorded: 650 beginning at 620 and ending at 630, 660 beginning at 620 and ending at 640, and 670 beginning at 630 and ending at 640.

The splitting of intersecting edges into separate edges (step 215 of Fig. 2a) is illustrated in Fig. 7. An edge starting at point 710 and ending at point 720 intersects with an edge starting at point 730 and ending at point 740. The two edges intersect at point 750. The edge that originally ran from 710 to 720 is split into two edges 760 and 770. Similarly the edge that originally ran from 730 to 740 is split into two edges 780 and 790.
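A minimal sketch of an edge record and the scan-order comparison used in step 225 follows. The record layout and names are assumptions for illustration; the specification does not prescribe a particular data structure.

```python
from dataclasses import dataclass

@dataclass
class EdgeRecord:
    # Start and end points of the edge in raster scan order (y down, x right).
    xs: float
    ys: float
    xe: float
    ye: float

def scan_order_key(edge):
    """(x, y) precedes (x', y') in scan order iff y < y', or y = y' and x < x'."""
    return (edge.ys, edge.xs)

edges = [EdgeRecord(3, 2, 4, 5), EdgeRecord(1, 2, 0, 6), EdgeRecord(2, 1, 3, 4)]
edges.sort(key=scan_order_key)  # start points now in the order (2,1), (1,2), (3,2)
```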
The following section describes the generation of pseudo start and end points (step 220 of Fig. 2a). Fig. 29 shows an edge with negative slope. The edge 2910 has its start point located in the top right corner pixel 2920 and its end point located in the bottom left corner pixel 2930. Using the coordinates of the start and end points, the first pixel where the edge exits through the bottom of the pixel is calculated and marked as a pseudo start point. In Fig. 29 this is pixel 2940. Using the coordinates of the start and end points, the last pixel where the edge enters from the top of the pixel is calculated and marked as a pseudo end point. In Fig. 29 this is pixel 2950. The purpose of these pseudo start and end points will be described below.

Once the conversion of polygons to edges has been performed, the image is rendered in step 120 of Fig. 1. Fig. 2b is a flow diagram of the process of rendering the image. The process begins at step 250 where the scan variables, used for keeping track of position within the image, are initialised to the top left hand corner of the image and the current colour is set to the background colour of the image. An active edge list is used to store information related to each active edge in the image. An edge is active, and thus has an entry in the active edge list, if, inside the flow detailed in Fig. 2b, the start point of the edge has been passed and the end point of the edge has not. This will be explained in more detail below. The next step 255 clears the active edge list of all entries. The next step 260 identifies the next pixel, in raster scan order, that will be intersected by an edge identified during the first pass operation illustrated in Fig. 2a. Referring back to Fig. 2b, in the next step 265 the current colour is output and the image scanned in raster scan order until the edge pixel identified in step 260 is reached. The next step 270 processes this edge pixel. This operation will be described in detail later. At the decision step 275 the process then loops back to step 260, and this loop continues until all the pixels in the image have been processed according to the yes condition 280.

The process of rendering an image, shown as a flow diagram in Fig. 2b, will now be illustrated using an example. Fig. 9 contains a 6 by 2 grid of pixels containing a single polygon 910. The operation starts in pixel 920 containing a background colour. The next pixel intersected by an edge is identified as pixel 930. The image is scanned and the background colour output until pixel 930 is reached. This edge pixel is processed and the resultant colour output. The next pixel containing an edge is identified, 940, and the object colour output until pixel 940 is reached. This edge pixel is processed and the resultant colour output. This process continues, handling edge pixels 950 and 960, until the last pixel of the image 970 has been processed.
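The scan loop of Fig. 2b can be sketched for a single scan line as follows. This is an illustrative skeleton only: the function name and the dictionaries standing in for the identified edge pixels (step 260) and their resolved colours (step 270) are assumptions, and the colour numbers in the example are arbitrary.

```python
def render_scanline(width, edge_pixels, edge_colours, background):
    """Output one scan line in raster order: emit the current colour until the
    next edge pixel is reached, process that pixel, and continue.

    edge_pixels maps x positions to the colour resolved for that edge pixel
    (in the full method, step 270 composites fragment areas to obtain it);
    edge_colours maps x positions to the flat colour to the right of the edge."""
    out = []
    current = background
    for x in range(width):
        if x in edge_pixels:
            out.append(edge_pixels[x])              # processed edge pixel
            current = edge_colours.get(x, current)  # colour after the edge
        else:
            out.append(current)                     # plain run of current colour
    return out

# e.g. render_scanline(6, {1: 36, 4: 36}, {1: 128, 4: 0}, 0)
#      -> [0, 36, 128, 128, 36, 0], in the spirit of the Fig. 9 walk-through.
```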
Fig. 16 is a flow diagram showing how a pixel containing one or more edges is processed (step 270 in Fig. 2b). A variable, PL, is used to store the pixel luminance value calculated from the edges that have been processed so far in the pixel. A variable, FC, stores the area of the current pixel that is covered by edges that have been processed so far. PL and FC are cleared at the start of processing the pixel 1610. The flow then enters a loop processing each edge identified in the pixel.

If the current edge is identified as a start point 1615, is not a pseudo start point 1620 and the slope of the edge is greater than zero 1630, then the edge is added to the active edge list 1635 together with its T, A, B, dA and dB values, which were pre-calculated during step 230 of Fig. 2a. The purpose of these values will be described below. If the edge start point is identified as a pseudo start point 1620, then the edge is added to the active edge list 1635 together with its T, A, B, dA and dB values. If the edge is identified as a start point, is not a pseudo start point and the slope of the edge is less than zero, then the associated edge will previously have been added to the active edge list and so no action is taken. For all start points the fragment area for the pixel, R, has been pre-calculated during step 230 of Fig. 2a.

If the current edge is identified as not being a start point, the fragment area for the pixel, R, must be calculated 1640. This will be described below. If the edge is identified as an end point 1645 and is either a pseudo end point 1650 or has a slope greater than zero 1655, then the edge is removed from the active edge list 1660.

Fig. 25 shows a pixel which is cut by two active edges, 2510 and 2520, resulting in two fragment areas, 2530 and 2540. The operations for calculating fragment areas, described below, return the area, R, of the pixel lying above the edge. For edge 2510 this will be area 2530. For edge 2520 this will be area 2540 plus area 2530. Therefore, to obtain the fragment area 2540, the sum of all previous fragment areas, FC, is subtracted from R to obtain the fragment area FA. Returning to Fig. 16, the new fragment coverage value for the pixel, FC, takes the value of R. The fragment area FA is multiplied by the colour value of the edge associated with the object to produce a luminance value L. The overall pixel luminance value, PL, is then incremented by L 1665.

If there are more edges in the pixel 1670, the flow loops back to the edge identification decision 1620. Once all edges have been processed, a luminance value for the area of the pixel not covered by any objects is calculated by multiplying (1 - FC) by the background colour and adding the result to PL. This results in the final luminance value for the pixel, PL 1680.

It can be seen from the previous description that it is necessary to calculate pixel fragment areas. A mechanism will now be described to calculate the pixel fragment area for an active edge of positive slope. All the arithmetic described for this mechanism is performed modulo one. Two numbers in modulo-one arithmetic are equal if the difference between the numbers is an integer.

For a line through two points (xs, ys) and (xe, ye) such that dy = ye - ys and dx = xe - xs, a triangle function T(x,y) is defined with reference to Fig. 10, where T(x,y) represents the area 1010 bounded by a line segment 1020 and the horizontal 1030 and vertical 1040 lines through a given point (x,y). This area is T(x,y) = (1/2)|Δx·Δy|, where Δx and Δy are the horizontal and vertical distances from the point (x,y) to the line. To calculate T(x,y) efficiently, horizontal and vertical difference functions are defined:

A(x,y) = T(x+1, y) - T(x,y),
B(x,y) = T(x, y+1) - T(x,y).

With reference to Fig. 11, A(x,y) represents the area 1110. With reference to Fig. 12, B(x,y) represents the area 1210.
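The triangle function and its differences can be evaluated numerically. The sketch below is illustrative only (it fixes an edge of positive slope through the origin, an assumption for the demonstration); it shows the modulo-one observation relied on in the text that follows, namely that the fractional part of A does not depend on the integer y.

```python
from math import floor

def frac(v):
    return v - floor(v)

# A line of positive slope m = dy/dx through (xs, ys) = (0, 0).
m = 3.0 / 7.0

def T(x, y):
    """Triangle function of Fig. 10: area between the line, the horizontal
    line through (x, y) and the vertical line through (x, y)."""
    x0 = y / m                   # where the line crosses height y
    return 0.5 * m * (x - x0) ** 2

def A(x, y):                     # horizontal difference, Fig. 11
    return T(x + 1, y) - T(x, y)

def B(x, y):                     # vertical difference, Fig. 12
    return T(x, y + 1) - T(x, y)

# The fractional part of A is independent of the integer y.
print(frac(A(2, 0.0)), frac(A(2, 5.0)))  # -> the same value, up to float round-off
```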
Ignoring the integer parts of A and B (as arithmetic is performed modulo one), the fractional part of A depends only on x and the fractional part of B depends only on y, therefore:

A(x) = frac(A(x,y)) = frac(T(x+1, y) - T(x,y)) for any integer y    (1)
B(y) = frac(B(x,y)) = frac(T(x, y+1) - T(x,y)) for any integer x    (2)

Five equations will now be introduced which describe how these quantities are related. These equations allow incremental pixel areas to be calculated, as will be described below.

T(xs, ys) = T(xe, ye) = 0    (3)
A(x+1) = frac(A(x) + dA), where dA = frac(dy/dx)    (4)
B(y+1) = frac(B(y) + dB), where dB = frac(dx/dy)    (5)

Furthermore:

A(xs) = A(xe) = frac((1/2)(dy/dx))    (6)
B(ys) = B(ye) = frac((1/2)(dx/dy))    (7)

The following examples will illustrate how the triangle function T, constructed from the difference functions A and B, is used to calculate areas of pixels cut by an edge as the edge is followed using a modification of Bresenham's algorithm, which is described below. Three different methods will be described to handle the three different ways that an edge can enter and exit a pixel.

Case 1: Fig. 13 shows one pixel step in the x-direction for the case where the edge enters the pixel from the left hand side of the pixel and exits from the right hand side of the pixel. Before the step, T(x,y) relates to area 1310. After the step, T(x+1,y) includes the area 1320. Therefore the area of just 1320 is the difference between these triangle functions, i.e. T(x+1,y) - T(x,y), which from formula (1) is equal to A(x). As all the area fragments within a pixel sum to one, this area is also given by T(x+1, y+1) - T(x, y+1) when working in modulo-one arithmetic. As a step has been performed in the horizontal direction it is also necessary to update the difference function variable A according to formula (4).

Case 2: Fig. 14 shows one pixel step in the x-direction for the case where the edge enters the pixel from the left hand side of the pixel and exits from the bottom of the pixel. T(x, y+1) relates to the area 1410 and therefore the area 1420 is given by 1 - T(x, y+1). As a step has been performed in the horizontal direction it is also necessary to update the difference function variable A according to formula (4).

Case 3: Fig. 15 shows one pixel step in the y-direction for the case where the edge enters the pixel from the top of the pixel and exits from the right hand side of the pixel. The area 1510 is given by T(x+1, y). As a step has been performed in the vertical direction it is also necessary to update the difference function variable B according to formula (5).

Following the demonstration of how the triangle function T is used to calculate areas, a mechanism for calculating a pixel fragment area for an active edge with positive slope will be described with reference to the flow diagram in Fig. 17. For each active edge, four variables are maintained, T, A, B and enter-top, together with two constants, dA and dB, where:

T represents the triangle function value for the edge;
A represents the horizontal difference value for the edge;
B represents the vertical difference value for the edge;
enter-top is true if the edge enters the pixel on the top edge, false otherwise;
dA represents the increment of A when stepping horizontally;
dB represents the increment of B when stepping vertically.
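Before stepping through the Fig. 17 flow, the cases above can be cross-checked against plain geometry. The sketch below is not the incremental mechanism of Fig. 17: it computes the area of a pixel lying above an edge of positive slope directly, by integrating the clamped edge height, which is what the triangle-function differences of Cases 1-3 evaluate to modulo one. All names are assumptions for illustration.

```python
def area_above(xs, ys, m, px, py):
    """Area of the unit pixel with top-left corner (px, py) that lies above an
    edge through (xs, ys) with positive slope m = dy/dx (y increases
    downwards, so 'above' means towards the top of the pixel)."""
    def h(x):  # edge depth below the pixel top at x, clamped to the pixel
        return min(max(ys + (x - xs) * m - py, 0.0), 1.0)

    # The clamped integrand is piecewise linear; split the integration at the
    # x positions where the edge crosses the pixel's top and bottom boundaries.
    cuts = [float(px), float(px + 1)]
    for boundary in (py, py + 1):
        xc = xs + (boundary - ys) / m
        if px < xc < px + 1:
            cuts.append(xc)
    cuts.sort()
    # The trapezoid rule is exact on each linear piece.
    return sum((b - a) * (h(a) + h(b)) / 2 for a, b in zip(cuts, cuts[1:]))

# Case 1 pixel (enter left, exit right): the result equals
# A(x) = T(x+1, y) - T(x, y) of formula (1), modulo one.
print(round(area_above(0.0, 0.0, 3 / 7, 0, 0), 4))  # -> 0.2143
```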
The variables associated with the active edge are read from the active edge list 1710. The side of the pixel where the edge exits the pixel is then calculated 1715 using a variation on Bresenham's algorithm; this will be described below. If enter-top is set 1720, the mechanism returns the current value of T 1725, sets enter-top to false 1730, increments T by the current value of B and increments B by dB 1735. If enter-top is not set 1740 and the calculation 1715 of the exit side indicates that the edge exits the pixel at the side 1745, the mechanism returns the current value of A 1750 and sets enter-top to true 1755. If enter-top is not set and the calculation of the exit side indicates that the edge exits the pixel at the bottom 1760, the mechanism returns the value of (1 - T) 1765 and sets enter-top to false 1770. For both path 1745 and path 1760, T is incremented by the current value of A and A is incremented by dA 1775. For all paths the updated variables are written into the active edge location in the active edge list 1780.

The modified version of Bresenham's algorithm, used to calculate the exit side of an edge from a pixel, will now be described. Fig. 26 shows a pixel being cut by an edge 2610. If y1 is the y value where the edge enters the pixel 2620 and y2 is the y value where the edge exits the pixel 2630, then the area of the pixel lying above the edge is the horizontal difference value for the edge, A, and from simple geometry, for a pixel of fixed width of one, this area is given by:

A = (y1 + y2)/2

Furthermore:

y2 = y1 + dA

Therefore:

y2 = A + dA/2

This expression is evaluated (without reducing modulo one) and if the result is greater than one then exit-bottom is set true, otherwise it is set false.

When processing an image for compositing onto a page for printing or displaying, it is efficient to process pixels in raster scan order. Fig. 20 shows an edge 2010 of negative slope (Δy/Δx < 0) passing through a set of pixels. Applying the method described so far, the order of processing pixels would be 2020 followed by 2030 followed by 2040 followed by 2050. To allow pixels to be processed in raster scan order, it is beneficial to modify the method to process pixels in the order 2020 followed by 2050 followed by 2040 followed by 2030. Such a modified method will now be described.

An additional variable C(x,y) is maintained for each active edge of negative slope, where:

C(x,y) = T(x+N, y) - T(x,y)

i.e. C represents the change in the triangle function when we step N pixels in the horizontal axis. It should be noted that:

C(x, y+1) = C(x,y) + N and
B(x+N, y) = B(x,y) + N for integer N.

Since we are only interested in the fractional parts of areas, we can ignore the integer part of C. Note that the fractional part of C depends only on x (for integer y). Therefore:

C(x) = frac(C(x,y)) = frac(T(x+N, y) - T(x,y)) for any integer y.

Furthermore:

C(x) = Σ A(x+n), the sum taken over n = 0, ..., N-1.

As we step through pixels in the x-direction, the value of C is calculated according to:

C(x+1) - C(x) = Σ (A(x+n+1) - A(x+n)) = N·dA
C(x+N) - C(x) = Σ (A(x+n+N) - A(x+n)) = N²·dA

with each sum again taken over n = 0, ..., N-1.

A set of equations will now be introduced which describe how the variables associated with the edge are related (modulo one). These equations allow incremental pixel areas for edges of negative slope to be calculated, as will be described below.

T(xs, ys) = T(xe, ye) = 0
A(xs) = A(xe) = frac((1/2)(dy/dx))    (8)
B(ys) = B(ye) = frac((1/2)(dx/dy))    (9)
A(x+1) = frac(A(x) + dA), where dA = frac(dy/dx)    (10)
A(x+N) = frac(A(x) + N·dA)    (11)
B(y+1) = frac(B(y) + dB), where dB = frac(dx/dy)    (12)
C(x+1) = frac(C(x) + N·dA)    (13)
C(x+N) = frac(C(x) + N²·dA)    (14)
T(x+1, y) = frac(T(x,y) + A(x))    (15)
T(x, y+1) = frac(T(x,y) + B(y))    (16)
T(x+N, y) = frac(T(x,y) + C(x))    (17)
N = ⌈dx/dy⌉    (18)

The following example will illustrate how the method is modified to calculate areas of pixels cut by an edge of negative slope. Fig. 21 shows the steps used to process the pixels in raster scan order. Starting in pixel 2110, where the edge enters the pixel from the top, a step of one pixel is performed in the y-direction, and B is updated according to formula (12) and T according to formula (16). N backward steps are then performed in the x-direction to arrive at pixel 2120, the pixel one scan line down from pixel 2110 in which the edge exits from the bottom of the pixel. A is updated according to formula (11), C according to formula (14) and T according to formula (17). The fragment area of pixel 2120 can then be calculated utilising T in the usual manner.

A single step in the x-direction is then performed to arrive at pixel 2130. A is updated according to formula (10), C according to formula (13) and T according to formula (15). The fragment area of pixel 2130 can then be calculated using T in the usual manner. These single steps in the x-direction are repeated N/2 times, until a pixel having the edge enter its top edge is reached (pixel 2140), which is analogous to the starting position one scan line down.
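The bookkeeping for C can be verified numerically. The sketch below is illustrative only; it assumes N = ceil(|dx/dy|) for the demonstration edge, and it checks formulas (13) and (14) against direct evaluation of the signed triangle-function value, of which only the fractional part is used, as in the specification.

```python
from math import ceil, floor

def frac(v):
    return v - floor(v)

# An illustrative negative-slope edge through (xs, ys): m = dy/dx < 0.
xs, ys = 5.0, 0.0
m = -3.0 / 7.0
N = ceil(abs(1.0 / m))         # pixels stepped backward per scan line (assumed)
y = 2.0                        # an arbitrary integer scan line

def T(x):
    x0 = xs + (y - ys) / m     # where the edge crosses height y
    return 0.5 * m * (x - x0) ** 2

def C(x):                      # N-step horizontal difference of T, modulo one
    return frac(T(x + N) - T(x))

dA = frac(m)                   # formula (10)
for x in range(-3, 3):
    # Formula (13): C(x+1) = frac(C(x) + N*dA), against direct evaluation.
    assert abs(frac(C(x) + N * dA) - C(x + 1)) < 1e-9
    # Formula (14): C(x+N) = frac(C(x) + N*N*dA).
    assert abs(frac(C(x) + N * N * dA) - C(x + N)) < 1e-9
```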
Now that an example of handling edges of negative slope has been demonstrated, a mechanism for calculating a pixel fragment area for an active edge with negative slope will be described with reference to the flow diagram in Fig. 22. For each active edge, five variables are maintained, T, A, B, C and exit-bottom, together with four constants, dA, NdA, N²dA and dB, where:

T represents the triangle function value for the edge;
A represents the single step horizontal difference value for the edge;
B represents the single step vertical difference value for the edge;
C represents the multiple step horizontal difference value for the edge;
exit-bottom is true if the edge exits the pixel on the bottom edge, false otherwise;
dA represents the increment of A when single stepping horizontally;
dB represents the increment of B when stepping vertically;
NdA represents the increment of A when multi-stepping horizontally;
N²dA represents the increment of C when multi-stepping horizontally.

The variables associated with the active edge are read from the active edge list 2210. The side of the pixel where the edge enters the pixel is then calculated using Bresenham's algorithm 2220. If the edge enters from the top 2230, the mechanism returns the current value of T 2235, increments T by the current value of B plus C, increments A by dA, increments B by dB, increments C by N²dA 2240 and sets exit-bottom to true 2245. If the edge does not enter the pixel from the top 2250 and exit-bottom is not set 2255, the mechanism returns the current value of A 2260. If the edge does not enter the pixel from the top and exit-bottom is set 2265, the mechanism returns the value of (1 - T) 2270. For both path 2255 and path 2265, T is incremented by the current value of A, A is incremented by dA, C is incremented by NdA 2275 and exit-bottom is set to false 2280. For all paths the updated variables are written into the active edge location in the active edge list 2290.

The method above has described how to calculate fragment areas of a pixel when the pixel is cut by an edge by using variables associated with the edge. As stated in Fig. 2a, it is necessary to calculate the initial values for these variables (step 230), and the process for doing this is now described with reference to Fig. 23 and Fig. 24. When an edge start point has integer coordinates, and so lies on a pixel boundary, the initial values for the edge variables can be calculated directly from formula (3), formula (4) and formula (5). When an edge start point does not have integer coordinates, additional calculation is required. The calculation differs depending on whether the edge exits the pixel at the bottom or side, and these two calculations are now described. Fig. 23 shows a start point with non-integer coordinates (xs, ys) for an edge of positive slope where the edge exits on the bottom of the pixel. The initial values for the area of the fragment 2310, T 2320, A (2320 - 2330) and B (-2340 - 2310) can be calculated from (xs, ys) and the slope of the line dy/dx. Fig. 24 shows an edge start point with non-integer coordinates (xs, ys) for an edge of positive slope where the edge exits on the right hand side of the pixel. The initial values for the area of the fragment 2410, T 2420, A (-2430) and B (2420 - 2410 - 2440) can be calculated from (xs, ys) and the slope of the line dy/dx.
To complete the method it is necessary to have a process for calculating fragment areas for pixels where the edge ends. This process will now be described with reference to Fig. 27 and Fig. 28. When an edge end point has integer coordinates, and so lies on a pixel boundary, the final value for the fragment area can be calculated directly using the method described previously. When an edge end point does not have integer coordinates, additional calculation is required. The calculation differs depending on whether the edge enters the pixel from the top or the left hand side, and these two calculations are now described.

Fig. 27 shows an edge 2710 with an end point with non-integer coordinates (xe, ye) where the edge enters from the left hand edge of the pixel. If xef and yef represent the fractional parts of xe and ye respectively, then the fragment area for the edge 2720 is calculated using:

area = yef - (xef²/2)(dy/dx)

Fig. 28 shows an edge 2810 with an end point with non-integer coordinates (xe, ye) where the edge enters from the top edge of the pixel. If xef and yef represent the fractional parts of xe and ye respectively, then the fragment area for the edge 2820 is calculated using:

area = yef((yef/2)(dx/dy) + 1 - xef)

The method described thus far suffers from accumulating round-off errors, and as a result the areas calculated get less accurate towards the end of each line segment. This problem can be reduced by increasing the number of bits used to represent the quantities calculated in the algorithm. However this requires more hardware and still does not eliminate errors in the results. An improved method will now be described.

Given a rational number A = a/d, where d > 0, there exist integers q and r such that:

A·d = a = q·d + r, where 0 ≤ r < d

i.e. q = ⌊a/d⌋ and r = a (mod d).

Therefore A can be represented exactly using the pair of integers (q, r), i.e. q is A rounded down to the nearest integer and r represents the fractional part.

The algorithm described thus far is concerned only with the fractional parts of variables, as calculations are performed modulo one. The format described above can be used to represent the fractional parts of variables as n-bit fixed point numbers of the form (q, r), where:

q = ⌊2ⁿ·a/d⌋ and r = 2ⁿ·a (mod d)

Two numbers in this form can be added together using the following rule:

(q0, r0) + (q1, r1) = (q0 + q1 (mod 2ⁿ), r0 + r1)          if r0 + r1 < d
(q0, r0) + (q1, r1) = (q0 + q1 + 1 (mod 2ⁿ), r0 + r1 - d)  if r0 + r1 ≥ d

Representing variables in this form ensures that calculations are exact and there is no accumulated rounding error.

INDUSTRIAL APPLICABILITY

The arrangements described are applicable to the computer and data processing industries, and particularly to the rendering of images, especially images containing polygons.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

(Australia Only) In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
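As a closing illustration, the exact (q, r) representation described above can be sketched with Python integers; the function names are assumptions. Seven exact additions of 3/7 come back to zero modulo one with a zero remainder, which a plain binary fixed-point approximation of 3/7 cannot achieve.

```python
def to_qr(a, d, n):
    """Represent a/d exactly as an n-bit fixed point pair (q, r), where
    q = floor(2**n * a / d) and r = (2**n * a) mod d."""
    return (2**n * a) // d, (2**n * a) % d

def qr_add(x, y, d, n):
    """Add two (q, r) values sharing the denominator d, modulo 2**n,
    using the carry rule given in the description."""
    (q0, r0), (q1, r1) = x, y
    if r0 + r1 < d:
        return ((q0 + q1) % 2**n, r0 + r1)
    return ((q0 + q1 + 1) % 2**n, r0 + r1 - d)

# dA = frac(dy/dx) for dy = 3, dx = 7, held exactly in n = 8 bits.
n, d = 8, 7
dA = to_qr(3, d, n)            # (109, 5): 3/7 = (109 + 5/7) / 256
acc = (0, 0)
for _ in range(7):
    acc = qr_add(acc, dA, d, n)
print(acc)  # -> (0, 0): seven additions of 3/7 are exactly 3, i.e. 0 modulo one
```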

Claims (5)

1. A method of rendering anti-aliased edges of a polygon, said method comprising the steps of:
processing a description of said polygon to produce a plurality of line segments;
scanning each of said line segments to determine a plurality of values for a triangle function of the line segment;
calculating a value of the triangle function for a corner of a pixel which is intersected by one of the plurality of line segments by determining a value for an area function of the pixel based on a value of an area function of a previous pixel;
determining an opacity value for the pixel, utilising said calculated value; and
determining a colour value for each pixel, utilising said opacity value.
2. A method of rendering anti-aliased edges of a polygon substantially as described herein with reference to the drawings.
3. Computer apparatus adapted to perform the method of claim 1 or 2.
4. A computer readable storage medium having a computer program recorded thereon, the program being executable by computer apparatus to perform a method according to claim 1 or 2.

Dated this 25th day of September 2009
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2009222438A 2009-09-25 2009-09-25 Anti-aliased polygon rendering Abandoned AU2009222438A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009222438A AU2009222438A1 (en) 2009-09-25 2009-09-25 Anti-aliased polygon rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009222438A AU2009222438A1 (en) 2009-09-25 2009-09-25 Anti-aliased polygon rendering

Publications (1)

Publication Number Publication Date
AU2009222438A1 (en) 2011-04-14

Family

ID=43857441

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009222438A Abandoned AU2009222438A1 (en) 2009-09-25 2009-09-25 Anti-aliased polygon rendering

Country Status (1)

Country Link
AU (1) AU2009222438A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779526B2 (en) 2012-11-27 2017-10-03 Canon Kabushiki Kaisha Method, system and apparatus for determining area of a pixel covered by a scalable definition for a character
EP2854108A3 (en) * 2013-09-26 2015-07-01 Intel Corporation Anti-aliasing for graphics hardware
US10134160B2 (en) 2013-09-26 2018-11-20 Intel Corporation Anti-aliasing for graphics hardware

Similar Documents

Publication Publication Date Title
AU2010202390B2 (en) Rasterising disjoint regions of a page in parallel
EP1642240B1 (en) A method for tracking depths in a scanline based raster image processor
US20110285724A1 - Conversion of dashed strokes into quadratic Bézier segment sequences
US7292256B2 (en) Optimising compositing calculations for a run of pixels
US7280120B2 (en) Compositing with a sub-pixel mask in graphic object rendering
US10186068B2 (en) Method, apparatus and system for rendering an image
US7148897B2 (en) Rendering successive frames in a graphic object system
US7532222B2 (en) Anti-aliasing content using opacity blending
JPH0322188A (en) Method of changing graphic image into raster and expressing said image
JP2005100176A (en) Image processor and its method
CN115330986B (en) Method and system for processing graphics in block rendering mode
US9401035B2 (en) Text rendering method with improved clarity of corners
AU2011205085B2 (en) 2D region rendering
CN109064483B (en) Picture anti-aliasing method and device for LCD screen, single chip microcomputer and storage medium
AU2009222438A1 (en) Anti-aliased polygon rendering
EP0855682B1 (en) Scan line rendering of convolutions
WO2016025980A1 (en) Rendering diffusion curve images using multigrid laplacian smoothing with boundary constraint pixels
US9779526B2 (en) Method, system and apparatus for determining area of a pixel covered by a scalable definition for a character
US9269031B2 (en) Method of detecting regions in an edge-based representation
Ajorian et al. Fast image resizing for more efficient device adaptation
AU2004202825B2 (en) Optimising Compositing Calculations for a Run of Pixels
AU2004202827B2 (en) Rendering Successive Frames in a Graphic Object System
CN115660935A (en) Method and system for processing graphics in block rendering mode
EP1700271A1 (en) Computer graphics processor and method of rendering images
AU2014277651A1 (en) Generating and rendering an anti-aliased page representation

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application