
AU2008264171A1 - Print quality assessment method - Google Patents


Info

Publication number
AU2008264171A1
Authority
AU
Australia
Prior art keywords
image
tile
bitmap version
bitmap
scanned image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008264171A
Inventor
Eric Wau-Shing Chong
Stephen James Hardy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2008264171A
Publication of AU2008264171A1
Legal status: Abandoned


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00002 Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
    • H04N1/00007 Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for relating to particular apparatus or devices
    • H04N1/00015 Reproducing apparatus
    • H04N1/00023 Colour systems
    • H04N1/00026 Methods therefor
    • H04N1/00031 Testing, i.e. determining the result of a trial
    • H04N1/00045 Methods therefor using a reference pattern designed for the purpose, e.g. a test chart
    • H04N1/00053 Methods therefor out of service, i.e. outside of normal operation
    • H04N1/00063 Methods therefor using at least a part of the apparatus itself, e.g. self-testing
    • H04N1/00068 Calculating or estimating
    • H04N1/00071 Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for characterised by the action taken
    • H04N1/00082 Adjusting or controlling
    • H04N1/00087 Setting or calibrating
    • H04N1/0009 Storage
    • H04N1/40 Picture signal circuits

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Description

S&F Ref: 884734

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Stephen James Hardy, Eric Wau-Shing Chong
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Print quality assessment method

The following statement is a full description of this invention, including the best method of performing it known to me/us:

PRINT QUALITY ASSESSMENT METHOD

FIELD OF INVENTION

The current invention relates generally to the assessment of the quality of printed documents, and particularly to a system for detecting print defects on the printed medium.

DESCRIPTION OF BACKGROUND ART

There is a general need for measuring the output quality of a printing system. The results from such quality measurement may be used for fine-tuning and configuring its system parameters for improved performance. Traditionally, this has been performed in an offline fashion through manual inspection. With ever-increasing printing speeds and volumes, the need for automatic real-time detection of print defects to maintain print quality has increased.

Timely identification of print defects allows immediate corrective action, such as re-printing, to be taken, which in turn reduces waste of paper and ink or toner, while improving efficiency.

A number of automatic print defect detection systems have been developed. In some arrangements, these involve the use of an image acquisition device, such as a CCD (charge-coupled device) camera, to capture an image of a document printout, which is then compared to the original document image. Any discrepancies identified during the comparison are flagged as print defects. Such approaches are typically computationally expensive and inefficient.
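The baseline approach described in the background (capture the printout, compare it pixel-by-pixel against the original, and flag any discrepancy) can be sketched as follows. This is an illustrative reconstruction only: the function name, the threshold value, and the assumption of pre-aligned greyscale images are not taken from the specification.

```python
import numpy as np

def naive_defect_map(original: np.ndarray, scanned: np.ndarray,
                     threshold: int = 32) -> np.ndarray:
    """Flag every pixel whose absolute difference exceeds a fixed threshold.

    Assumes both inputs are pre-aligned uint8 greyscale images of the same
    shape. The cast to int16 avoids uint8 wrap-around when subtracting.
    """
    diff = np.abs(original.astype(np.int16) - scanned.astype(np.int16))
    return diff > threshold
```

Comparing whole pages this way costs one comparison per pixel regardless of content type, which is the computational expense the specification's attribute-driven comparison selection is designed to avoid.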
SUMMARY OF THE INVENTION

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.

Disclosed are arrangements, referred to as Adaptive Print Verification (APV), which seek to address the above problems by selecting appropriate comparison methods, based upon attributes of the region of the document being considered, for comparing a bitmap version of a source document to the printed version thereof.

According to a first aspect of the present invention, there is provided a method for assessing the quality of output of a printing device, said method comprising the steps of:
(a) rendering (120) a source document (166) to generate a bitmap version (160) of the source document and attribute data (161) associated with the source document (166);
(b) printing (130) said bitmap version (160) on a print medium to form a printed medium;
(c) scanning (140) said printed medium using an imaging device (2114) to form a scanned image (164);
(d) aligning (240) the bitmap version (160) and the scanned image (164);
(e) selecting (520), dependent upon the attribute data (161), a comparison method for comparing the bitmap version (160) to the scanned image (164); and
(f) detecting, using said selected comparison method, print defects on the printed medium.
According to another aspect of the present invention, there is provided a method for assessing the quality of output of a printing device, said method comprising the steps of:
(a) rendering (120) a source document (166) to generate a bitmap version (160) of the source document and attribute data (161) associated with the source document (166);
(b) printing (130) said bitmap version (160) on a print medium to form a printed medium;
(c) scanning (140) said printed medium using an imaging device (2026) to form a scanned image (164), wherein the scanned image (164) is aligned with the bitmap version (160);
(d) selecting (520), dependent upon the attribute data (161), a comparison method for comparing the bitmap version (160) to the scanned image (164); and
(e) detecting, using said selected comparison method, print defects on the printed medium.

According to another aspect of the present invention, there is provided an apparatus for assessing the quality of output of a printing device, said apparatus comprising:
(a) a renderer for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) a printer for printing said bitmap version on a print medium to form a printed medium;
(c) a scanner for scanning said printed medium to form a scanned image;
(d) means for aligning the bitmap version and the scanned image;
(e) a selector for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) a plurality of comparison means for detecting, based upon said selected comparison method, print defects on the printed medium.
According to another aspect of the present invention, there is provided an apparatus for assessing the quality of output of a printing device, said apparatus comprising:
a memory for storing a program and data during program execution;
a printer;
an imaging device; and
a processor for executing the program, said program comprising:
(a) code for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) code for printing said bitmap version on a print medium using the printer to form a printed medium;
(c) code for scanning said printed medium using the imaging device to form a scanned image;
(d) code for aligning the bitmap version and the scanned image;
(e) code for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) code for detecting, using said selected comparison method, print defects on the printed medium.

According to another aspect of the present invention, there is provided a computer program product including a computer readable medium having recorded thereon a computer program for directing a processor to execute a method for assessing the quality of output of a printing device, said program comprising:
(a) code for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) code for printing said bitmap version on a print medium using the printer to form a printed medium;
(c) code for scanning said printed medium using the imaging device to form a scanned image;
(d) code for aligning the bitmap version and the scanned image;
(e) code for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) code for detecting, using said selected comparison method, print defects on the printed medium.
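The attribute-dependent selection step common to the aspects above can be sketched as a dispatch on region attributes. The attribute names echo the tile types discussed later in the specification (blank, text, image), but the comparison bodies, thresholds, and tile representation here are invented placeholders, not the specification's actual comparison methods:

```python
def select_comparator(attribute: str):
    """Return a per-tile comparison function appropriate to the attribute.

    Each comparator takes (original_value, scanned_value) and returns True
    when the tile should be flagged as defective. Thresholds are arbitrary.
    """
    comparators = {
        "blank": lambda orig, scan: scan != 0,             # any mark on a blank tile is a defect
        "text":  lambda orig, scan: orig != scan,          # text demands an exact match
        "image": lambda orig, scan: abs(orig - scan) > 16, # images tolerate small colour drift
    }
    return comparators.get(attribute, comparators["image"])

def detect_defects(tiles):
    """tiles: iterable of (attribute, original_value, scanned_value) triples.

    Returns the indices of tiles flagged as defective by the comparator
    selected for each tile's attribute.
    """
    return [i for i, (attr, orig, scan) in enumerate(tiles)
            if select_comparator(attr)(orig, scan)]
```

The point of the dispatch is that an exact-match test appropriate for text would flood an image region with false positives, while a tolerant colour test would miss broken characters; selecting per attribute avoids both failure modes.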
Other aspects of the invention are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described with reference to the following drawings, in which:

Fig. 1 is a flow chart of a process for performing colour imaging according to an embodiment of the invention;
Fig. 2 is a flow chart of a process for performing print defect detection according to an embodiment of the invention;
Fig. 3 is an illustration of a PDL document image to which the embodiment shown in Fig. 1 is applied;
Fig. 4 is a flow chart of a process for performing image alignment according to an embodiment of the invention;
Fig. 5 is a flow chart of a process for performing image comparison according to an embodiment of the invention;
Fig. 6 is a flow chart of a process for performing tile based image comparison according to an embodiment of the invention;
Fig. 7 is a flow chart of a process for processing blank type tiles according to an embodiment of the invention;
Fig. 8 is a flow chart of a process for processing text type tiles according to an embodiment of the invention;
Fig. 9 is a flow chart of a process for processing image type tiles according to an embodiment of the invention;
Fig. 10 is a flow chart of a process for processing line-art type tiles according to an embodiment of the invention;
Fig. 11 is a flow chart of a process for processing hybrid type tiles according to an embodiment of the invention;
Fig. 12 illustrates two image strips and alignable regions within those strips when performing region based correlation;
Fig. 13 illustrates image tiles with text characters;
Fig. 14 is a flow chart of a process for detecting colour misregistration in text type tiles according to an embodiment of the invention;
Fig. 15 is a flow chart of a process for detecting text character errors in text type tiles according to an embodiment of the invention;
Fig. 16 is a flow chart of a process for detecting colour errors in image type tiles according to an embodiment of the invention;
Fig. 17 is a flow chart of a process for detecting line errors in line-art type tiles according to an embodiment of the invention;
Figs. 18A to 18F illustrate parts shown in Fig. 3 in units of attributes;
Fig. 19 is a table showing the relationship between the image attributes and attribute map;
Figs. 20A and 20B form a schematic block diagram of a general purpose computer system upon which the APV arrangements described can be practiced;
Fig. 21 is a schematic block diagram of an embedded computer system in a suitably configured printer upon which the APV arrangements described can be practiced; and
Fig. 22 illustrates a pipelined processing arrangement according to an embodiment of the invention.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have, for the purposes of this description, the same function(s) or operation(s), unless the contrary intention appears.

It is to be noted that the discussions contained in the "Background" section, and that above relating to prior art arrangements, relate to discussions of devices which may form public knowledge through their use. Such discussions should not be interpreted as a representation by the present inventor(s) or the patent applicant that such documents or devices in any way form part of the common general knowledge in the art.

Fig. 21 is a schematic block diagram of an embedded computer system 2100 in a suitably configured printer upon which the APV arrangements described can be practiced.
The arrangement 2100 includes a printer 2107 equipped with a print system engine 2112 that incorporates rollers, toner and other modules (not shown) necessary to print information onto a suitable print medium 2120. Information for printing is provided over a network 2101, via an input/output interface 2105 and a bus system 2106, to a processor 2111. The processor communicates over the bus system and an input/output interface 2110, responsive, for example, to commands provided by a user via a keypad 2108 in response to information displayed on an LCD display 2109. The processor 2111 controls the print system engine 2112 to thereby print a strip 2116 of one or more scanlines onto the print medium 2120. As the printed medium 2120 feeds out of the printer 2107 in the direction 2119, a scanner system 2114 captures an image of a previously printed strip 2117 and provides scanned image information to the processor 2111 over the bus system 2106. The processor 2111 communicates with a rendering engine 2102 and an APV Application Specific Integrated Circuit (ASIC) 2118, and/or with an APV software application 2103 in a memory 2104, in order to practice the APV method.

Figs. 20A and 20B collectively form a schematic block diagram of a general purpose computer system 2000 upon which, in an alternate arrangement, the various APV arrangements described can be practiced.

As seen in Fig. 20A, the computer system 2000 is formed by a computer module 2001, input devices such as a keyboard 2002, a mouse pointer device 2003, a scanner 2026, a camera 2027 and a microphone 2080, and output devices including a printer 2015, a display device 2014 and loudspeakers 2017. An external Modulator-Demodulator (Modem) transceiver device 2016 may be used by the computer module 2001 for communicating to and from a communications network 2020 via a connection 2021. The network 2020 may be a wide-area network (WAN), such as the Internet or a private WAN.
Where the connection 2021 is a telephone line, the modem 2016 may be a traditional "dial-up" modem. Alternatively, where the connection 2021 is a high-capacity (e.g. cable) connection, the modem 2016 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 2020.

The computer module 2001 typically includes at least one processor unit 2005, and a memory unit 2006 formed, for example, from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The module 2001 also includes a number of input/output (I/O) interfaces, including an audio-video interface 2007 that couples to the video display 2014, loudspeakers 2017 and microphone 2080, an I/O interface 2013 for the keyboard 2002, mouse 2003, scanner 2026, camera 2027 and optionally a joystick (not illustrated), and an interface 2008 for the external modem 2016 and printer 2015. In some implementations, the modem 2016 may be incorporated within the computer module 2001, for example within the interface 2008. The computer module 2001 also has a local network interface 2011 which, via a connection 2023, permits coupling of the computer system 2000 to a local computer network 2022, known as a Local Area Network (LAN). As also illustrated, the local network 2022 may couple to the wide network 2020 via a connection 2024, which would typically include a so-called "firewall" device or a device of similar functionality. The interface 2011 may be formed by an Ethernet™ circuit card, a Bluetooth wireless arrangement or an IEEE 802.11 wireless arrangement.

The interfaces 2008 and 2013 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 2009 are provided and typically include a hard disk drive (HDD) 2010.
Other storage devices, such as a floppy disk drive and a magnetic tape drive (not illustrated), may also be used. An optical disk drive 2012 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g. CD-ROM, DVD), USB RAM and floppy disks, may then be used as appropriate sources of data to the system 2000.

The components 2005 to 2013 of the computer module 2001 typically communicate via an interconnected bus 2004, in a manner which results in a conventional mode of operation of the computer system 2000 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or alike computer systems evolved therefrom.

The APV method may be implemented using the computer system 2000, wherein the processes of Figs. 1-2, 4-11 and 14-17, to be described, may be implemented as one or more software application programs 2033 executable within the computer system 2000. In particular, the steps of the APV method are effected by instructions 2031 in the software 2033 that are carried out within the computer system 2000. The software instructions 2031 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the APV methods, and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 2033 is generally loaded into the computer system 2000 from a computer readable medium, and is then typically stored in the HDD 2010, as illustrated in Fig. 20A, or the memory 2006, after which the software 2033 can be executed by the computer system 2000.
In some instances, the application programs 2033 may be supplied to the user encoded on one or more CD-ROMs 2025 and read via the corresponding drive 2012, prior to storage in the memory 2010 or 2006. Alternatively, the software 2033 may be read by the computer system 2000 from the networks 2020 or 2022, or loaded into the computer system 2000 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 2000 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 2001. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 2001 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets, including e-mail transmissions and information recorded on Websites and the like.

The second part of the APV application programs 2033, and the corresponding code modules mentioned above, may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 2014. Through manipulation of typically the keyboard 2002 and the mouse 2003, a user of the computer system 2000 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilising speech prompts output via the loudspeakers 2017 and user voice commands input via the microphone 2080.

Fig. 20B is a detailed schematic block diagram of the processor 2005 and a "memory" 2034. The memory 2034 represents a logical aggregation of all the memory devices (including the HDD 2010 and semiconductor memory 2006) that can be accessed by the computer module 2001 in Fig. 20A.

When the computer module 2001 is initially powered up, a power-on self-test (POST) program 2050 executes. The POST program 2050 is typically stored in a ROM 2049 of the semiconductor memory 2006. A program permanently stored in a hardware device such as the ROM 2049 is sometimes referred to as firmware. The POST program 2050 examines hardware within the computer module 2001 to ensure proper functioning, and typically checks the processor 2005, the memory (2009, 2006) and a basic input-output systems software (BIOS) module 2051, also typically stored in the ROM 2049, for correct operation. Once the POST program 2050 has run successfully, the BIOS 2051 activates the hard disk drive 2010. Activation of the hard disk drive 2010 causes a bootstrap loader program 2052 that is resident on the hard disk drive 2010 to execute via the processor 2005. This loads an operating system 2053 into the RAM memory 2006, upon which the operating system 2053 commences operation. The operating system 2053 is a system level application, executable by the processor 2005, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
The operating system 2053 manages the memory (2009, 2006) in order to ensure that each process or application running on the computer module 2001 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 2000 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 2034 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 2000 and how such is used.

The processor 2005 includes a number of functional modules, including a control unit 2039, an arithmetic logic unit (ALU) 2040, and a local or internal memory 2048, sometimes called a cache memory. The cache memory 2048 typically includes a number of storage registers 2044-2046 in a register section. One or more internal buses 2041 functionally interconnect these functional modules. The processor 2005 typically also has one or more interfaces 2042 for communicating with external devices via the system bus 2004, using a connection 2018.

The application program 2033 includes a sequence of instructions 2031 that may include conditional branch and loop instructions. The program 2033 may also include data 2032 which is used in execution of the program 2033. The instructions 2031 and the data 2032 are stored in memory locations 2028-2030 and 2035-2037 respectively. Depending upon the relative size of the instructions 2031 and the memory locations 2028-2030, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 2030. Alternately, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 2028-2029.
In general, the processor 2005 is given a set of instructions which are executed therein. The processor 2005 then waits for a subsequent input, to which it reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 2002, 2003, data received from an external source across one of the networks 2020, 2022, data retrieved from one of the storage devices 2006, 2009, or data retrieved from a storage medium 2025 inserted into the corresponding reader 2012. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 2034.
The disclosed APV arrangements use input variables 2054, which are stored in the memory 2034 in corresponding memory locations 2055-2058. The APV arrangements produce output variables 2061, which are stored in the memory 2034 in corresponding memory locations 2062-2065. Intermediate variables may be stored in memory locations 2059, 2060, 2066 and 2067.

The register section 2044-2046, the arithmetic logic unit (ALU) 2040 and the control unit 2039 of the processor 2005 work together to perform the sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 2033. Each fetch, decode, and execute cycle comprises:
(a) a fetch operation, which fetches or reads an instruction 2031 from a memory location 2028;
(b) a decode operation, in which the control unit 2039 determines which instruction has been fetched; and
(c) an execute operation, in which the control unit 2039 and/or the ALU 2040 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed, by which the control unit 2039 stores or writes a value to a memory location 2032.

Each step or sub-process in the processes of Figs. 1-2, 4-11 and 14-17 is associated with one or more segments of the program 2033, and is performed by the register section 2044-2047, the ALU 2040 and the control unit 2039 in the processor 2005 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 2033.

The APV method may alternatively be implemented in dedicated hardware, such as one or more integrated circuits performing the APV functions or sub-functions. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
Methods, apparatuses and computer program products are disclosed for processing and inspecting digital images. In the following description, numerous specific details, including particular image alignment techniques, colour spaces, spatial resolutions, tile sizes and the like, are set forth. However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.
In the following description, colour is specified in terms of red (R), green (G) and blue (B) components, this being a commonly used colour configuration referred to as RGB. Different colour spaces (e.g., CMYK), grey-scale blends, and other configurations are also possible for use with the described methods by changing the number and names of the components. Fig. 1 provides a high-level overview of a flow chart of a process for performing colour imaging according to a preferred APV arrangement running on the system 2100 in Fig. 21. The colour imaging system which performs the noted process employs an image inspection device (e.g., the scanner 2114) to assess the quality of printouts by detecting visually significant print defects arising from the print system engine 2112, which performs a printing step 130 in the arrangement in Fig. 1. The source input 166 to the system is, in the present example, a digital document expressed in the form of a page description language (PDL) script, which describes the appearance of document pages containing text, graphical and sampled images in terms of objects. The source document 166 can also be referred to as a source image, source image data and so on. Each object has a plurality of properties such as colour, location, shape, and other object specific properties. These objects are assembled to form compound objects describing the content of the document. In a step 120 the document is rendered using a rasteriser (under control of the APV ASIC 2118 and/or the APV software application 2103), by processing the list of objects, to generate a two-dimensional bitmap image 160 of the document (this two-dimensional bitmap version of the source document 166 being referred to as the original image 160 hereinafter).
In addition, the rasteriser generates an attribute map 161 of the image attributes of the document objects, and alignment information (also referred to as alignment hints) that can take the form of a list 162 of regions of the rendered bitmap with intrinsic alignment structure (referred to as "alignable" regions hereinafter). The rendered bitmap image 160, the associated attribute data 161 and the list of alignable regions 162 are temporarily stored in an image memory (e.g., 2104). Upon completing processing in the step 120, the rendered bitmap 160 is sent to a colour printer process 130. The colour printer process 130 uses a known electrophotographic or ink-jet recording image forming unit (e.g., the print system engine 2112), and prints out a printed image 163 by forming a visible image on a print medium such as a paper sheet using such unit 2112. The rendered bitmap 160 in the image memory is transferred in synchronism with a sync signal and clock signal required for operating the image forming unit 2112, a transfer request of a specific colour component signal, or the like. The rendered bitmap 160 is also sent to the print defect detection process 150, together with all the generated attribute 161 and alignment 162 data in the memory, as shown in Fig. 1. The printout 163 generated by the colour print process 130 is scanned by an image capturing process 140 running on the scanner 2114, for example. The image capturing device 2114 is preferably a colour line scanner for real-time processing. However, any image capturing device that is capable of producing a high-quality digital copy of printouts can be used. In one APV arrangement as depicted in Fig. 21, the scanner 2114 can be configured to capture the image of the printed image 163 on a scan-line by scan-line basis, or on a strip by strip basis where each strip comprises a number of scan lines.
The captured image 164 (referred to as the scan image hereinafter) is sent to the print defect detection process 150 (performed by the APV ASIC 2118 and/or the APV software application 2103), which aligns and compares the original image 160 and the scan image 164 using the attribute data 161 and the alignment data 162 from the rendering process 120 in order to locate and identify print defects. Upon completion, the print defect detection process 150 outputs a decision signal or defect map 165 indicating defect types and locations of all detected defects. Fig. 22 shows how, in a preferred APV arrangement, the printing process 130, the scanning process 140 and the defect detection process 150 can be arranged in a pipeline. In this arrangement, a section (preferably a strip) of the rendered bitmap 160 is printed by the print system engine 2112 to form a section of the printed image 163. The printed section on the printed image 163 is scanned by the scanning process 140 using the scanner 2114 to form a section of the scanned image 164. The scanned section, as a strip of scan-lines, is sent to the print defect detection process 150 for alignment and comparison with the rendered section that was sent to the print engine 2112. A next section of the rendered bitmap 160 is printed by the print system engine 2112 to form a next section of the printed image 163. The next printed section on the printed image 163 is scanned by the scanning process 140 using the scanner 2114 to form a next section of the scanned image 164. The next scanned section is sent to the print defect detection process 150 for alignment and comparison with the next rendered section that was sent to the print system engine 2112. A further next section of the rendered bitmap 160 is processed in the same manner, as shown in Fig. 22. Thus, the pipeline arrangement allows all three processing stages to occur concurrently after the first two sections. Fig.
19 shows an example of an attribute map 161, showing the relationship between image attributes 191 and attribute map information 192 in this APV arrangement. The attribute map 161 comprises a 3-bit flag at each pixel location (depicted by 192), and is generated in accordance with six different part attributes 191. The attribute map information includes three attributes, namely a vector attribute, a character attribute, and a colour attribute. The vector attribute assumes "0" when a given part (object or pixel) is a natural image, or "1" when it is a graphic object such as a character, figure, or the like. The character attribute assumes "0" when a given object is a natural image or graphic object, or "1" when it is a character object. The colour attribute is determined based on whether an object is colour (chromatic) or monochrome (achromatic), and is set at "0" when a given object is a colour object irrespective of whether it is a natural image, character, or graphic object, or "1" in the case of a monochrome object. Note that these attributes are definitions for one object. For example, a colour attribute="0" indicates that the entire object is chromatic, but does not indicate that the pixel of interest is chromatic. Fig. 3 shows an example of a source input document 166 described in the PDL 110 in Fig. 1. This document 166 is composed of a plurality of objects having different attributes. These objects may be further decomposed according to attributes (such as 191) as shown in Figs. 18A to 18F. Fig. 18A depicts a colour natural image object, and Fig. 18B depicts a monochrome natural image object. Fig. 18C depicts a colour character object, Fig. 18D a monochrome character object, Fig. 18E a colour graphic object, and Fig. 18F a monochrome graphic object. When all these objects are superposed on each other, the document shown in Fig. 3 is obtained. Areas occupied by the objects shown in Figs. 18A to 18F have corresponding individual attributes in the left column 191 in Fig.
19, and attribute map information 192 is generated at the coordinate position of each object on a pixel-by-pixel basis, in accordance with the rules shown in Fig. 19, and is written in the attribute map memory 2104. For example, attribute values, i.e., vector attribute=0, character attribute=0, and colour attribute=0, are written as attribute map information on a per-pixel basis over the rectangular area 181 in the attribute map memory where the image shown in Fig. 18A is present. Also, in the monochrome natural image area shown in Fig. 18B, colour attribute=1 is generated in a rectangular area 182 where the image is present (the other attributes, i.e., the vector and character attributes, are "0"). Likewise, vector attribute=1, character attribute=1, and colour attribute=0 are generated in the attribute map memory corresponding to the area 183 where the image shown in Fig. 18C is rendered, and vector attribute=1, character attribute=1, and colour attribute=1 are generated for Fig. 18D. It is noted that each object (a text character, a line, a photo) is described in the input PDL 110. The rasteriser interprets all objects in the PDL and renders each object in its precise location and colour. So the area associated with each object is defined in the PDL explicitly. The rasteriser simply interprets the PDL and renders pixels accordingly. Furthermore, vector attribute=1, character attribute=0, and colour attribute=0 are generated for Fig. 18E, and vector attribute=1, character attribute=0, and colour attribute=1 are generated for Fig. 18F. It is advantageous that during rasterisation in the step 120, an image analysis is performed on the rendered bitmap image 160 in order to identify the alignable regions 162, which provide valuable alignment hints to the print defect detection step 150 as shown in Fig. 1. Accurate registration of the original image 160 and the printout scan image 164 enables image quality metric evaluation to be performed on a pixel-to-pixel basis.
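For illustration, the 3-bit per-pixel attribute flag described above can be sketched as follows. The bit ordering and the function names are assumptions for illustration only; the description specifies only that each pixel carries three 1-bit attributes (vector, character, colour):

```python
# Illustrative packing of the three 1-bit attributes into one 3-bit flag.
# The bit positions are assumptions; the patent only defines the attributes.
VECTOR_BIT, CHARACTER_BIT, COLOUR_BIT = 0, 1, 2

def pack_attributes(vector, character, colour):
    """Pack three 1-bit attributes into a single 3-bit per-pixel flag."""
    return (vector << VECTOR_BIT) | (character << CHARACTER_BIT) | (colour << COLOUR_BIT)

def unpack_attributes(flag):
    """Recover (vector, character, colour) from a 3-bit flag."""
    return ((flag >> VECTOR_BIT) & 1,
            (flag >> CHARACTER_BIT) & 1,
            (flag >> COLOUR_BIT) & 1)

# Monochrome character object (Fig. 18D): vector=1, character=1, colour=1.
mono_char = pack_attributes(1, 1, 1)
# Colour natural image object (Fig. 18A): all attributes 0.
colour_image = pack_attributes(0, 0, 0)
```

Such a packed flag keeps the attribute map at 3 bits per pixel, matching the per-pixel flag depicted by 192.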
One of the significant advantages of such an approach is that precise image alignment can be performed without the need to embed special registration marks or patterns explicitly in the source input document 166 and/or the original image 160. The image analysis performed in the step 120 to determine the alignment hints is preferably based on Harris corners. One of the outputs of the image analysis in the step 120 is the list 162 of alignable regions. Each region is described by a data structure comprising three data fields for storing the x-coordinate of the centre of the region, the y-coordinate of the centre of the region, and the corner strength of the region. The process of detecting Harris corners is described as follows. Given an A4 size document rendered at 300dpi, the rasteriser process in the step 120 generates the bitmap image 160 of size 2490 by 3510 pixels. The first step for detecting Harris corners is to determine the gradient or spatial derivatives of the bitmap 160 in both the x and y directions, denoted Ix and Iy. In practice, this can be achieved by applying the Sobel operator to the bitmap 160. The Sobel operator uses the following kernels:

    Sx = [ -1  0  1 ]        Sy = [ -1 -2 -1 ]
         [ -2  0  2 ]             [  0  0  0 ]
         [ -1  0  1 ]             [  1  2  1 ]

Edge detection is performed with the following operations:

    Ix = Sx * I
    Iy = Sy * I

where * is the convolution operator, I is the image data, Sx and Sy are the kernels defined above, and Ix and Iy are images containing the strength of the edge in the x and y directions respectively. From Ix and Iy, three images are produced as follows:

    Ixx = Ix o Ix,    Ixy = Ix o Iy,    Iyy = Iy o Iy

where o is a pixel-wise multiplication. This allows a local structure matrix A to be calculated over a neighbourhood around each pixel, using the following relationship:

    A = Sum w(x, y) [ Ixx  Ixy ]
                    [ Ixy  Iyy ]

where w(x, y) is a windowing function for spatial averaging over the neighbourhood. In a preferred APV arrangement, w(x, y) can be implemented as a box filter.
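The gradient and structure-matrix computation described above can be sketched in Python with NumPy. This is a minimal sketch under stated assumptions (zero padding at the borders, a 3x3 box window for w(x, y), and correlation rather than strict convolution, which for the Sobel kernels only flips the gradient sign); it is not the disclosed implementation:

```python
import numpy as np

# Sobel kernels as given in the description.
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)

def conv3(img, kernel):
    """Same-size 3x3 correlation with zero padding (the kernel flip of a
    true convolution is omitted; for Sobel this only changes the sign)."""
    padded = np.pad(img, 1)
    out = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def cornerness(img):
    """Minimum eigenvalue of the box-filtered local structure matrix A,
    evaluated at every pixel (the 'cornerness' measure)."""
    ix, iy = conv3(img, SX), conv3(img, SY)
    box = np.ones((3, 3)) / 9.0          # assumed 3x3 averaging window
    axx = conv3(ix * ix, box)
    axy = conv3(ix * iy, box)
    ayy = conv3(iy * iy, box)
    # Closed-form minimum eigenvalue of the symmetric 2x2 matrix per pixel.
    return 0.5 * (axx + ayy) - np.sqrt((0.5 * (axx - ayy)) ** 2 + axy ** 2)
```

A flat region yields zero cornerness, a straight edge yields zero minimum eigenvalue, and only two-directional structure (a corner) produces a positive response, which is what makes such regions intrinsically alignable.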
The next step is to form a "cornerness" image by determining the minimum eigenvalue of the local structure matrix at each pixel location. The cornerness image is a 2D map of the likelihood that each pixel is a corner. A pixel is classified as a corner pixel if it is the local maximum in its eight-pixel neighbourhood. A list, C, of all the corner points detected, together with the strength (cornerness) at each point, is created. The list of corner points C is further filtered by deleting points which are within spread pixels of another corner point. In the current APV arrangement, spread = 64 is used. The list of accepted corners, Ce, is output to the defect detection step 150 for image alignment. Alternatively, other suitable methods for determining the image structure of the original image 160, such as the Gradient Structure Tensor or the Scale-Invariant Feature Transform (SIFT), can also be used. In another APV arrangement, the original image 160 is represented as a multi-scale image pyramid in the step 120 prior to determining the alignable regions 162. The image pyramid is a hierarchical structure composed of a sequence of copies of the original image 160 in which both sample density and resolution are decreased in regular steps. This approach allows image alignment to be performed at different resolutions, providing an efficient and effective method for handling printouts 163 on different paper sizes or printout scans 164 at different resolutions. Fig. 2 illustrates in detail the step 150 of Fig. 1. The process 150 works on strips 2116 of the original image 160 and the scan image 164. A strip of an image, for example, is a number of consecutive image lines stored in the memory buffer 2104. The height of each strip is preferably 256 scanlines in the present APV arrangement, and the width of each strip may be the width of the input image 160. In the case of an A4 document image 160 at 300dpi, the width is 2490 pixels.
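The corner filtering described above, deleting points within spread pixels of another corner, might look like the following sketch. Keeping the strongest corners first and using the Chebyshev (maximum-coordinate) distance are assumptions, since the description does not specify the tie-breaking order or the distance measure:

```python
def filter_corners(corners, spread=64):
    """Filter a list of (x, y, strength) corner tuples, as in the
    alignable-region data structure described above.  A corner is kept
    only if it is at least `spread` pixels (Chebyshev distance, an
    assumption) from every already accepted, stronger corner."""
    accepted = []
    for x, y, s in sorted(corners, key=lambda c: -c[2]):  # strongest first
        if all(max(abs(x - ax), abs(y - ay)) >= spread
               for ax, ay, _ in accepted):
            accepted.append((x, y, s))
    return accepted
```

With spread = 64 as in the current APV arrangement, this guarantees a minimum spacing between the accepted corners Ce, so the alignable regions passed to the defect detection step 150 do not overlap excessively.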
Image data in the buffer is updated continuously in a rolling buffer arrangement, where a fixed number of scanlines are acquired by the scanning sensors in the step 140 and stored in the buffer 2104 by flushing an equal number of scanlines off the buffer in a first-in-first-out (FIFO) manner. In a preferred APV arrangement, the number of scanlines acquired at each scanner sampling instance is 64. Processing of the step 150 begins at a step 210, where the memory buffer 2104 is filled with a strip of image data from the scanned image 164 fed by the scanning step 140. In a preferred APV arrangement, the scan strip is optionally downsampled in a step 230 using a separable Burt-Adelson filter, as is commonly known in the art, to reduce the amount of data to be processed. At the same time, a strip of the original image 160 and its attribute map 161, at the corresponding resolution and location as the scan strip, are obtained in a step 220. Furthermore, the list of corner points generated during rendering in the step 120 for image alignment is passed to the step 220. A scan strip 235 and an original strip 225 are then processed by a step 240, performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV software application program 2103, which performs image alignment on the strips using the alignment hints derived in the step 120. The purpose of this step 240 is to establish pixel-to-pixel correspondence between the scan strip 235 and the original strip 225 prior to a comparison process in a step 270. It is noted that in order to perform real-time print defect detection, a fast and accurate image alignment method is desirable. A block-based correlation technique, where correlation is performed for every block in a regular grid, is inefficient. Furthermore, block-based correlation does not take into account whether a block contains image structure that is intrinsically alignable.
Inclusion of unreliable correlation results can affect the overall image alignment accuracy. The present APV arrangement overcomes the above disadvantages of block-based correlation by employing a sparse image alignment technique that accurately estimates a geometrical transformation between the images using alignable regions. The alignment process 240 will be described in greater detail with reference to Fig. 4 below.

In a following step 250, a test is performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV software application program 2103 to determine if any geometric errors indicating a misalignment condition (e.g., shift, skew, etc.) were detected in the step 240. If the result is Yes, processing moves to a step 2110. Otherwise processing continues at a step 260. As a result of processing in the step 240, the two image strips are accurately aligned with pixel-to-pixel correspondence.

Once the strips have been registered (i.e., aligned in the step 240), the next step 260 performs matching of the colours of the aligned scan and original strips. The colours of a document can be changed considerably by the process of printing and scanning. In order to detect only the significant differences between two images, it is useful to attempt to match their colours. The colour matching process assumes the colour of the image changes in a predictable way according to a predetermined model. In a preferred APV arrangement, it is assumed that the colour undergoes an affine transformation. However, other suitable models can be used, e.g., a gamma correction model or an nth order polynomial model. If the colour has undergone an affine transformation, it has been transformed according to the following equation:

    [ Rpred ]   [ A11 A12 A13 ] [ Rorig ]   [ C1 ]        [ Rorig ]
    [ Gpred ] = [ A21 A22 A23 ] [ Gorig ] + [ C2 ]  =  A  [ Gorig ] + C
    [ Bpred ]   [ A31 A32 A33 ] [ Borig ]   [ C3 ]        [ Borig ]

where (Rpred, Gpred, Bpred) are the predicted RGB values of the original image 160 after printing in the step 130 and scanning in the step 140 according to this predefined model, (Rorig, Gorig, Borig) are the RGB values of the original image 160, and A and C are the affine transformation parameters.

In the step 260, the aligned original strip is transformed using the above affine transformation to the scan strip colour space. The aligned image strips are further processed by a step 270, performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV software application program 2103, which compares the strip contents to locate and identify print defects adaptively based on the content attributes as indicated by the associated attribute map 161. The step 270 will be described in greater detail with reference to Fig. 5 below.

Following the step 270, a check is made at a step 280 to determine if any print defects were detected in the step 270. If the result of the step 280 is No, processing continues at a step 290. Otherwise processing continues at the step 2110. The step 290 determines if there are any new scanlines from the scanner 2114 from the step 140 to be processed. If the result of the step 290 is Yes, processing continues at the step 210, where the existing strip in the buffer is rolled. That is, the top 16 scanlines are removed and the rest of the scanlines in the buffer are moved up by 16 lines, with the final 16 lines replaced by the newly acquired scanlines from the step 140. If the result of the step 290 is No, processing continues at a step 2100. In the step 2100, the affine transformation colour conversion model used in the step 260 is updated based on information acquired during the strip comparison step 270. Following the step 2100, the defect map 165 is updated and output in the step 2110.
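The affine colour model above, and one plausible way the step 2100 might update it, can be sketched as follows. The least-squares fitting routine is an assumption for illustration; the description states only that the model is updated from information acquired during the strip comparison:

```python
import numpy as np

def apply_colour_model(rgb, A, C):
    """Predict printed-and-scanned RGB from original RGB under the
    affine colour model RGB_pred = A @ RGB_orig + C described above.
    rgb is an (N, 3) array of RGB triples."""
    return rgb @ A.T + C

def fit_colour_model(orig, scanned):
    """Least-squares estimate of (A, C) from matched pixel pairs -- a
    sketch of how the model of step 2100 might be refreshed; the patent
    does not specify the fitting procedure.  orig, scanned: (N, 3)."""
    X = np.hstack([orig, np.ones((len(orig), 1))])   # [R G B 1] per pixel
    M, *_ = np.linalg.lstsq(X, scanned, rcond=None)  # (4, 3) solution
    return M[:3].T, M[3]                             # A is 3x3, C is 3-vector
```

Fitting requires at least four colour-independent pixel pairs; in practice the aligned strips supply many more, making the estimate robust to per-pixel noise.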
When evaluating a colour printer, such as a CMYK printer, it is desirable to also measure the alignment of the different colour channels. For example, the C channel of an image 163 printed in the CMYK colour space may be several pixels offset from the other channels due to mechanical inaccuracy in the printer. This mis-registration leads to noticeable visual defects in the printer's output 163, namely visible lines of white between objects of different colour that should not be present. Detecting such errors is an important stage in a print defect detection system. Colour registration errors can be detected by comparing the relative spatial transformations between the colour channels of the scan strip 235 and those of the original strip 225. This is achieved by first converting the input strips from the RGB colour space to CMYK. The alignment process of the step 240 is then performed between each of the C, M, Y and K channels of the scan strip 235 and those of the original strip 225 in order to produce an affine transformation for each of the C, M, Y and K channels. Each transformation shows the mis-registration of the corresponding colour channel relative to the other colour channels. These transformations may be supplied to a field engineer to allow physical correction of the mis-registration problems, or alternatively, they may be input to the printer for use in a correction circuit that digitally corrects for the printer colour channel mis-registration. Fig. 4 depicts the alignment process 240 in greater detail, depicting a flow diagram of the steps for performing the image alignment step 240 in Fig. 2. The step 240 operates on two image strips, those being the scan image strip 235 and the original image strip 225, and makes use of the additional attribute information 161 and the alignment hint data 162 derived in the step 120.
In a step 410, an alignable region 415 is selected, based upon the list of alignable regions 162, from a number of pre-determined alignable regions from the original image strip. The alignable region 415 is described by a data structure comprising three data fields for storing the x-coordinate of the centre of the region, the y-coordinate of the centre of the region, and the corner strength of the region. In a step 420 a region 425 from the scan image strip 235, corresponding to the alignable region, is selected from the scan image strip. The corresponding image region 425 is determined using a transformation derived from a previous alignment operation on a previous document image or strip to transform the x and y coordinates of the alignable region 415 to its corresponding location (x and y coordinates) in the scan image strip 235. This transformed location is the centre of the corresponding image region 425. Fig. 12 illustrates examples of the original image strip 225 and the scan image strip 235. Relative positions of an example alignable region 415 in the original image strip 225 and its corresponding region 425 in the scan image strip 235 are shown. Phase-only correlation (hereinafter known as phase correlation) is then performed, by the processor 2111 as directed by the APV ASIC 2118 and/or the APV arrangement software program 2103, on the two regions 415 and 425 to determine the translation that best relates the two regions 415 and 425. A next pair of regions, shown as 417 and 427 in Fig. 12, are then selected from the original image strip 225 and the scan image strip 235. The region 417 is another alignable region and the region 427 is the corresponding region as determined by the transformation between the two images. Correlation is then repeated between this new pair of regions 417 and 427. These steps are repeated until all the alignable regions in the strip 225 have been processed.
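The per-region phase correlation used here can be sketched as follows. The Hanning windowing matches the description of step 430; the peak-to-shift sign convention and the small regularising epsilon in the normalisation are assumptions:

```python
import numpy as np

def phase_correlate(a, b):
    """Phase-only correlation of two equal-size regions: window each
    region, take FFTs, normalise the cross-power spectrum to unit
    magnitude (phase only), and inverse-transform.  Returns the
    (dy, dx) translation of b relative to a, taken from the highest
    peak of the resulting real-valued raster array."""
    win = np.outer(np.hanning(a.shape[0]), np.hanning(a.shape[1]))
    fa, fb = np.fft.fft2(a * win), np.fft.fft2(b * win)
    cross = np.conj(fa) * fb
    # Keep only the phase; the epsilon guards against division by zero.
    corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around and represent negative shifts.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx
```

Because only the phase of the cross-power spectrum is retained, the peak is sharp and largely insensitive to the overall brightness differences between the rendered and scanned regions.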
In a preferred APV arrangement, the size of an alignable region is 64 by 64 pixels. The output of the correlation is a set of displacement vectors that represents the transformation that is required to map the pixels of the original image strip 225 to the scan image strip 235. Returning to Fig. 4, a following step 430 begins by applying a window function such as a Hanning window to each of the two regions 415 and 425, and the two windowed regions are then phase correlated. The result of the phase correlation in the step 430 is a raster array of real values. In a following step 440 the location of a highest peak is determined within the raster array, with the location being relative to the centre of the alignable region. The location of the peak and the centre of the alignable region are then stored in a system memory location 2104 in a step 450. If it is determined in a following step 460 that more alignable regions exist, then processing moves back to the steps 410 and 420, where a next pair of regions is selected. Otherwise processing continues to a step 470. In an alternative APV arrangement, binary correlation may be used in place of phase correlation. Processing in the step 470 determines a transformation from the displacement vectors. In a preferred APV arrangement, the transformation is an affine transformation with a set of linear transform parameters (b11, b12, b21, b22, Δx, Δy) that best relates the displacement vectors in the Cartesian coordinate system as follows:

    [ x̃ij ]   [ b11 b12 ] [ xij ]   [ Δx ]
    [ ỹij ] = [ b21 b22 ] [ yij ] + [ Δy ]

where (xij, yij) are alignable region centres and (x̃ij, ỹij) are the affine transformed points.
In addition, the points (xij, yij) are displaced by the displacement vectors D(i, j) to give the displaced points (x̂ij, ŷij) as follows:

    (x̂ij, ŷij) = (xij, yij) + D(i, j)

The best fitting affine transformation is determined by minimising the error between the displaced coordinates (x̂, ŷ) and the affine transformed points (x̃, ỹ) by changing the affine transform parameters (b11, b12, b21, b22, Δx, Δy). The error functional to be minimised is the Euclidean norm measure E as follows:

    E = Sum over n = 1..N of [ (x̂n − x̃n)² + (ŷn − ỹn)² ]

The minimising solution is as follows:

    [ b11 ]        [ Sum x̂n·xn ]          [ b21 ]        [ Sum ŷn·xn ]
    [ b12 ] = M⁻¹ [ Sum x̂n·yn ]    and   [ b22 ] = M⁻¹ [ Sum ŷn·yn ]
    [ Δx  ]        [ Sum x̂n    ]          [ Δy  ]        [ Sum ŷn    ]

with

    M = [ Sxx Sxy Sx ]        Sxx = Sum xn², Sxy = Sum xn·yn, Syy = Sum yn²,
        [ Sxy Syy Sy ]        Sx = Sum xn, Sy = Sum yn, S = Sum 1
        [ Sx  Sy  S  ]

    M⁻¹ = (1/|M|) [ S·Syy − Sy²      Sx·Sy − S·Sxy     Sxy·Sy − Sx·Syy ]
                  [ Sx·Sy − S·Sxy    S·Sxx − Sx²       Sx·Sxy − Sxx·Sy ]
                  [ Sxy·Sy − Sx·Syy  Sx·Sxy − Sxx·Sy   Sxx·Syy − Sxy²  ]

and

    |M| = det M = S·Sxx·Syy + 2·Sx·Sxy·Sy − S·Sxy² − Sxx·Sy² − Syy·Sx²

where the sums are carried out over all displacement pixels with non-zero confidence estimates on the displacement vectors. Following the step 470, the set of linear transform parameters (b11, b12, b21, b22, Δx, Δy) is examined in a step 480 to identify geometric errors such as rotation, scaling, shearing and translation. The set of linear transform parameters, when considered without the translation, is a 2x2 matrix as follows:

    A = [ b11 b12 ]
        [ b21 b22 ]

which can be decomposed into individual transformations, assuming a particular order of transformations, as follows:

    [ b11 b12 ]   [ sx 0  ] [ 1 0 ] [ cosθ −sinθ ]
    [ b21 b22 ] = [ 0  sy ] [ h 1 ] [ sinθ  cosθ ]

Scaling is defined as follows:

    [ sx 0  ]
    [ 0  sy ]

where sx and sy specify the scale factor along the x-axis and y-axis, respectively. Shearing is defined as follows:

    [ 1  0 ]        [ 1 hx ]
    [ hy 1 ]   or   [ 0 1  ]

where hx and hy specify the shear factor along the x-axis and y-axis, respectively. Rotation is defined as follows:

    [ cosθ −sinθ ]
    [ sinθ  cosθ ]

where θ specifies the angle of rotation.
The parameters sx, sy, h, and θ can be computed from the above matrix coefficients as follows:

    sx = sqrt(b11² + b12²)
    sy = det(A) / sx
    h  = (b11·b21 + b12·b22) / det(A)
    tanθ = −b12 / b11

In a preferred APV arrangement, the maximum allowable horizontal or vertical displacement magnitude Δmax is 4 for images at 300dpi, the acceptable scale factor range for (sx, sy) is (0.98, 1.02), the maximum allowable shear factor magnitude hmax is 0.01, and the maximum allowable angle of rotation is 0.1 degree. However, it will be apparent to those skilled in the art that suitable alternative parameters may be used without departing from the scope and spirit of the APV arrangement, such as allowing for greater translation or rotation. If the derived transformation obtained in the step 470 satisfies the above affine transformation criteria, then the scan strip 235 is deemed to be free of geometric errors in a following step 490, and processing continues at a step 4100. Otherwise processing moves to a step 4110, where the step 240 terminates and the process 150 in Fig. 2 proceeds to the step 250. In the step 4100, the set of registration parameters is used to map the original image strip 225 and its corresponding attribute map strip to the scan image space. In particular, the RGB value at coordinate (xt, yt) in the transformed image strip is the same as the RGB value at coordinate (x, y) in the original image strip, where coordinate (x, y) is determined by an inverse of the linear transformation represented by the registration parameters as follows:

    [ x ]           1           [  b22 −b12 ] [ xt − Δx ]
    [ y ] = ─────────────────── [ −b21  b11 ] [ yt − Δy ]
            b11·b22 − b12·b21

For coordinates (x, y) that do not correspond to pixel positions, bi-cubic interpolation is used to calculate the RGB value for that position from neighbouring values. Following the step 4100, processing terminates at the step 4110, and the process 150 in Fig. 2 proceeds to the step 250.
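The decomposition and acceptance test of the step 480 can be sketched as follows. The factor order (scale, then shear, then rotation) follows the reconstruction above; the function names and the exact aggregation of the threshold checks are assumptions:

```python
import math

def decompose_affine(b11, b12, b21, b22):
    """Decompose the 2x2 linear part A into scale (sx, sy), shear h and
    rotation theta, assuming A = scale * shear * rotation as above."""
    det = b11 * b22 - b12 * b21
    sx = math.hypot(b11, b12)              # sqrt(b11^2 + b12^2)
    sy = det / sx
    h = (b11 * b21 + b12 * b22) / det
    theta = math.atan2(-b12, b11)          # tan(theta) = -b12 / b11
    return sx, sy, h, theta

def geometry_ok(sx, sy, h, theta, dx, dy,
                max_shift=4.0, scale_range=(0.98, 1.02),
                max_shear=0.01, max_rot_deg=0.1):
    """Check registration parameters against the 300dpi acceptance
    thresholds quoted above; returns False on any geometric error."""
    lo, hi = scale_range
    return (abs(dx) <= max_shift and abs(dy) <= max_shift
            and lo <= sx <= hi and lo <= sy <= hi
            and abs(h) <= max_shear
            and abs(math.degrees(theta)) <= max_rot_deg)
```

Because the decomposition is exact for any invertible 2x2 matrix, round-tripping a matrix built from known (sx, sy, h, θ) recovers those parameters, which makes the threshold test in geometry_ok well defined.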
In an alternative APV arrangement, in the step 4100 the set of registration parameters is used to map the scan image strip 235 to the original image space. As a result of the mapping in the step 4100, the original strip 225 and the scan strip 235 are aligned. Fig. 5 depicts the comparison process 270 in more detail, showing a schematic flow diagram of the steps for performing the image comparison. The step 270 operates on three image strips, those being the scan strip 235, the transformed original strip 502 and its transformed attribute map strip 504, with the latter two resulting from processing in the step 480. Processing in the step 270 operates in a tile raster order, in which tiles are made available for processing from top-to-bottom and left-to-right, one at a time. Beginning in a step 510, a Q by Q pixel tile is selected from each of the three strips, with the tiles having corresponding positions in the respective strips. The three tiles, namely an attribute tile 512, an original tile 514 and a scan tile 516, are then processed by a following step 520. In a preferred APV arrangement, Q is 32 pixels. The purpose of the step 520, performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV software application 2103, is to adaptively examine a printed region according to its document object type, so that customised techniques can be applied to more effectively identify print defects whilst reducing the false alarm rate. The human visual system is particularly sensitive to sharp edges, thus print defects in text regions are much more noticeable than those in image regions. For example, a small misprint like missing the dot on the letter "i" is a very obvious error. In contrast, a misprint of similar size to the "i" dot in an image region may be hard to detect for a person. It is clear that a print defect detection method that does not take content type into account cannot effectively identify and quantify defects.
However, determining document contents from a scanned image is both difficult and computationally expensive. The present APV arrangement addresses these issues by sending the attribute map 161 of the rendered image 160 to the print defect detection step 150. The attribute map 161 allows pixel-accurate classification of the rendered image 160 to be performed. The step 520 is described in greater detail with reference to Fig. 6. Following processing in the step 520, any detected print defects are stored in the defect map 165 in the step 530. In a following step 540, a check is made to determine if any print defects were stored in the step 530. It is noted that the step 530 stores defect type and location information in a 2D map, and this allows the user to see what is wrong with the printout. The step 540 is a decision, like a switch to break out of the loop once a defect has been detected, after which no further processing is necessary. If the result of the step 540 is No, processing continues at a following step 550. Otherwise processing continues at a step 560. The step 550 determines if there are any remaining tiles to be processed. If the result of the step 550 is Yes, processing continues at the step 510 by selecting a next set of tiles. If the result of the step 550 is No, processing terminates at the step 560. Fig. 6 depicts the step 520 in more detail. Processing begins with the attribute tile 512 being sent for processing by a step 610. A test is performed in the step 610 to determine whether the tile 512 contains only blank-type pixels. If the result of the step 610 is Yes, the three tiles, 512, 514 and 516, are sent to a step 620 for blank tile processing. Following the step 620, the process 520 terminates at a step 6100. If the result of the step 610 is No, processing moves to a step 630.
A test, performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV software application 2103, is performed in the step 630 to determine whether the tile 512 contains only text type pixels. If the result of the step 630 is Yes, the three tiles 512, 514 and 516 are sent to a step 640 for text tile processing. Following the step 640, the process 520 terminates at the step 6100. If the result of the step 630 is No, processing moves to a step 650.

A test is performed in the step 650 to determine whether the tile 512 contains only image type pixels. If the result of the step 650 is Yes, the three tiles 512, 514 and 516 are sent to a step 660 for image processing. Following the step 660, the process 520 terminates at the step 6100. If the result of the step 650 is No, processing moves to a step 670.

A test is performed in the step 670 to determine whether the tile 512 contains only line-art type pixels. If the result of the step 670 is Yes, the three tiles 512, 514 and 516 are sent to a step 680 for line-art processing. Following the step 680, the process 520 terminates at the step 6100. If the result of the step 670 is No, processing moves to a step 690 for hybrid tile processing. A hybrid tile is an image region with two or more different document attributes, such as text and image, or image and line-art (blank pixels are ignored in classifying hybrid tiles). Following the step 690, the process 520 terminates at the step 6100.

It is clear in Fig. 6 that document image tiles are adaptively processed according to their content types. Specifically, five different types of tiles are identified based on their attributes, resulting in five specialised processing steps 620, 640, 660, 680 and 690. These specialised steps are described below with reference to Figs. 7-11, respectively.

Fig. 7 shows the step 620 in Fig. 6 in more detail.
A blank tile on a printout is considered to be an image region that is free of any visible marks when viewed directly by a human observer under good lighting conditions. The purpose of this step 620 is to verify that a current scan image tile is a blank tile as suggested by its corresponding attribute map tile 512. Processing starts with the scan tile 516 being sent to a step 710, which determines the level of luminance activity within the tile 516. The activity level is then compared with a predefined threshold in a following step 720. If the activity level is less than the threshold, the scan tile 516 is considered a blank tile. Processing is complete and terminates at a step 740. If the activity level is greater than the threshold, a print defect is detected. As a result, a step 730 outputs a print defect signal to the step 530 in Fig. 5 and processing terminates at the step 740.

In a preferred APV arrangement, luminance activities in the step 710 are measured at both pixel and tile levels. Within each tile 516, a local background luminance level, Ybg, is first established by determining the median luminance value of the scan tile 516. Luminance values are preferably floating point values in the range of 0 to 1. The luminance activity at each pixel location (i,j) is defined as the absolute difference between the luminance at location (i,j) and the local background luminance Ybg. In the step 720, this activity is compared with a predefined threshold θ through the following condition:

    |Y(i,j) - Ybg| < θ

In addition, a strip background luminance level Ybg(strip) is maintained during processing of the tile 516. This background luminance value is used to detect changes in luminance at the tile level.
A tile is considered to have a significant change in luminance relative to the strip to which the tile belongs if it fails the following condition:

    |Ybg(strip) - Ybg| < θ

The strip background luminance level Ybg(strip) is updated at the end of the step 720 through the following:

    Ybg(strip) = ((n - 1) * Ybg(strip) + Ybg) / n

where n is the number of processed tiles in the current strip. In a preferred APV arrangement, θ is 0.08.

However, it will be apparent to those skilled in the art that suitable alternative ways of combining tests with different parameters may be practised without departing from the scope and spirit of the APV arrangement, such as measuring gradient or edge activities within a tile, or comparing luminance mean, variance, contrast, etc.

Fig. 8 depicts the step 640, for detecting print defects in a text type tile, in more detail. The inputs for the process 640 are the attribute tile 512, the original strip tile 514 and the scan strip tile 516. Processing begins by the processor 2111 examining the attribute tile 512 in a step 810, as directed by the APV ASIC 2118 and/or the APV arrangement software application 2103, to determine whether colour processing is required. If it is, processing continues at a step 820, or otherwise at a step 840.

Fig. 13 provides an illustration of text tile processing. It shows three tiles, namely the attribute map tile 512, the original image tile 514, and the scan image tile 516. As described above, an attribute map contains 3 bits of information per pixel location. For text processing, the attribute map is binarised to a 1-bit bitmask, so that text pixel locations are labelled with "1" and non-text pixels are labelled with "0". Thus, a background region 1310 is represented by label 0, and text regions 1320 are represented by label 1. It can be seen in Fig. 13 that the original image tile 514 contains two characters of different colours, namely 1330 and 1340.
The corresponding characters are shown in the scan image tile 516 as 1350 and 1360. Now let A(x,y) represent the attribute tile 512, O(x,y) the original image tile 514, and S(x,y) the scan image tile 516. The colour error detection step 820 begins processing on the tiles in raster order. If A(x,y) is 0, the process 820 proceeds to a next pixel location until A(x,y) is 1. At the first instance of A(x,y) equalling 1, a text colour list is generated by adding, to a list of text colours within a tile, the colour co1 at O(x,y), and initialising a pixel count for that colour to 1. For other instances of A(x,y) equalling 1, the colour at O(x,y) is compared against all existing colours on the text colour list. If the colour is unique, it is added to the list and its pixel count is set to one. Otherwise, the matched colour's pixel count is incremented by one. For each colour coi on the text colour list, there is a corresponding colour csi, which is the average colour of the pixels in S(x,y) corresponding to those pixels of colour coi in O(x,y). Thus, for the example in Fig. 13, there would be two text colour values co1 and co2, and two scan text colour values cs1 and cs2, representing the colours of "B" and "9" in 1330, 1340, 1350 and 1360, respectively.

In a preferred APV arrangement, colour values are expressed in 24-bit RGB format, and each csi value is preferably stored as a running total of individual RGB values. These csi values are converted to average scan text colours by dividing the running totals by their pixel counts at the end of the raster processing. Finally, a colour distance for each text colour pair, coi and csi, is determined. If the distance satisfies the following condition:

    (Ro - Rs)^2 + (Go - Gs)^2 + (Bo - Bs)^2 < Θ

where Θ is a predefined colour threshold, then the scan text colour csi is deemed to be sufficiently close to the original text colour coi. Otherwise, the printed text is considered to have a colour print defect.
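The text colour comparison of the step 820 described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function name, the list-of-lists tile representation, and the squared-distance threshold value are assumptions; the colour-list accumulation, running RGB totals, averaging by pixel count, and squared RGB distance test follow the text.

```python
# Sketch of step 820: build a per-tile list of original text colours,
# average the corresponding scanned pixels, and test each pair's
# squared RGB distance against a threshold (threshold value assumed).
def text_colour_defects(attr, orig, scan, threshold_sq=100.0):
    """Return (original colour, average scanned colour) pairs whose
    squared RGB distance meets or exceeds the threshold."""
    totals = {}  # original colour -> [sumR, sumG, sumB, pixel count]
    h, w = len(attr), len(attr[0])
    for y in range(h):
        for x in range(w):
            if attr[y][x] != 1:          # only text pixels (label 1)
                continue
            co = orig[y][x]
            t = totals.setdefault(co, [0, 0, 0, 0])
            r, g, b = scan[y][x]
            t[0] += r; t[1] += g; t[2] += b; t[3] += 1
    defects = []
    for co, (sr, sg, sb, n) in totals.items():
        cs = (sr / n, sg / n, sb / n)    # average scanned text colour
        d = sum((a - b) ** 2 for a, b in zip(co, cs))
        if d >= threshold_sq:
            defects.append((co, cs))
    return defects
```

The same accumulation structure also serves the line-art colour check described later, with the line-art bitmask in place of the text bitmask.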
Returning to Fig. 8, following the step 820, processing continues at a following step 830. The purpose of the step 830 is to detect registration errors in text regions. The colour registration error detection step 830 of Fig. 8 is further expanded upon in Fig. 14.

Fig. 14 is a flow chart of a process for detecting colour misregistration in text type tiles according to one APV arrangement. Processing starts in a step 1410 by converting the scan tile 516 (see Fig. 5) from the RGB colour space to the CMYK colour space through the following relationships:

    C = (255 - R) / 255
    M = (255 - G) / 255
    Y = (255 - B) / 255
    K = min(C, M, Y)
    C' = C - K
    M' = M - K
    Y' = Y - K

This results in four different colour planes, one for each of the CMYK channels. A colour plane is then selected in a following step 1420. The colour plane is binarised according to a predefined threshold in a following step 1430, so that background and non-background pixels become 0s and 1s, respectively. In a subsequent step 1440, a bitwise logical AND operation is performed between the binarised colour plane and a text bitmask derived from the attribute tile 512. The total sum of the AND operation output bitmask is determined and compared with the sum of the binarised colour plane in a following step 1450. If the two sums differ by more than a predefined threshold, a registration error is detected. This completes the processing for a single colour plane, and processing continues to a step 1460, which determines if another unprocessed colour plane is available. If the result is Yes, processing moves back to the step 1420. Otherwise, processing terminates at a step 1470.

Referring again to Fig. 8, following the step 830, processing continues at a step 840 where the scan tile 516 is analysed for text errors. The step 840 is further expanded upon in Fig. 15. Fig. 15 is a flow chart of a process for detecting text character errors in text type tiles according to one APV arrangement.
Processing starts in a step 1510 by binarising the scan tile 516 to a 1-bit bitmask, so that background and non-background pixels become 0s and 1s, respectively. In a following step 1520, performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV arrangement software application 2103, a bitwise logical XOR operation is performed between the binarised scan tile and a text bitmask derived from the attribute tile 512. The total sum of the XOR operation output bitmask is determined. This sum is then compared to a predefined threshold in a following step 1530. If the sum is less than the threshold, no text error is detected and processing terminates at a step 1550. If the sum is greater than the threshold, the defect map is updated in a step 1540. Processing then terminates at the step 1550.

In an alternative APV arrangement, connected components (CCs) may be formed from the binarised scan tile and the text bitmask. Each connected component comprises a group of pixels that are spatially connected and semantically related. Furthermore, statistics of the connected components may be gathered for comparison. The statistics may comprise any one or more of bounding boxes, pixel count, border length, fill ratio, number of connected components, connected component width and edge ratio. These statistics may be used to detect missing characters, breaks in character strokes, fill-ins, incorrect text weight, etc. This concludes the text tile processing in the step 640 of Fig. 6.

Fig. 9 is a flow chart of a process for processing image type tiles according to an APV arrangement, describing the image tile processing of the step 660 in greater detail. The step 660 (see Fig. 6) takes three image tiles, namely the attribute tile 512, the original tile 514 and the scan tile 516, as inputs.
Processing starts in a step 910 by examining the attribute tile 512 to determine if colour processing is required. If the attribute tile 512 indicates that the original tile 514 is a colour image tile, then processing continues at a following step 920. Otherwise, the tiles 514 and 516 will be processed by a step 930.

In the step 920, a spatial colour metric is used to evaluate and measure colour reproduction errors in the scan tile 516. The spatial colour metric is based on the S-CIELAB colour space, which is a spatial extension to the standard CIELAB system. The S-CIELAB colour metric is specially designed for predicting local colour reproduction errors in patterned images. The spatial colour error detection step 920 will now be described in more detail with reference to Fig. 16.

Fig. 16 is a flow chart of a process for detecting colour errors in image type tiles according to an APV arrangement. Processing in the step 920 begins by converting the original tile 514 and the scan tile 516 to the CIE-XYZ colour space in a step 1610 through the following relationship:

    X = 0.4124 R + 0.3576 G + 0.1805 B
    Y = 0.2126 R + 0.7152 G + 0.0722 B
    Z = 0.0193 R + 0.1192 G + 0.9505 B

where (R, G, B) are the red, green and blue colour channel values at each pixel location. In a preferred APV arrangement, both the image tiles 514 and 516 are in the sRGB colour space. The image tiles 514 and 516 are further transformed into three opponent colour planes (O1, O2, O3) that represent luminance, red-green and blue-yellow images in a following step 1620. The linear transformation from XYZ to opponent colours is as follows:

    O1 =  0.279 X + 0.72 Y - 0.107 Z
    O2 = -0.449 X + 0.29 Y - 0.077 Z
    O3 =  0.086 X - 0.59 Y + 0.501 Z

In a following step 1630, the three colour planes are convolved with a specific spatial filter, which consists of parameters that are determined by the human visual sensitivity to each of the opponent colour planes.
The 2-D spatial filter is defined by the following:

    f = k * Σi wi * Ei

where

    Ei = ki * exp(-(x^2 + y^2) / σi^2)

k is a scaling factor, and (wi, σi) are kernel parameters as given in the table below.
    Plane         Weights (wi)             Spreads (σi)
    Luminance     0.921, 0.105, -0.108     0.0283, 0.133, 4.336
    Red-green     0.531, 0.330             0.0392, 0.494
    Blue-yellow   0.488, 0.371             0.0536, 0.386

The two image tiles are now both in the S-CIELAB colour space. A colour distance ΔC is determined in a following step 1640 through the following relationship:

    ΔC = sqrt((Oorg,1 - Oscan,1)^2 + (Oorg,2 - Oscan,2)^2 + (Oorg,3 - Oscan,3)^2)

The resulting distance ΔC is then compared against a predefined threshold in a following step 1650. The two image tiles are considered visually similar if ΔC is less than the threshold, in which case processing terminates at a step 1670. Otherwise, a colour print defect is registered in a step 1660 before terminating at the step 1670.

Referring again to Fig. 9, error detection for a greyscale image tile is performed in the step 930. The image tiles 514 and 516 are preferably 8-bit greyscale regions. In the step 930, the root mean square error (RMSE) is determined by the following:

    RMSE = sqrt( Σi (Yo(i) - Ys(i))^2 / n )

where Yo(i) is the luminance value of the original tile 514 at pixel location i, Ys(i) is the luminance value of the scan tile 516 at pixel location i, and n is the number of pixels in the tile. Alternatively, other image quality models may be employed. Processing in Fig. 9 then terminates at a step 940.

Fig. 10 describes the step 680, for detecting print defects in a line-art type tile, in more detail. The inputs for the step 680 are the attribute tile 512, the original strip tile 514 and the scan strip tile 516. Processing begins by examining the attribute tile 512 in a step 1010 to determine whether colour processing is required. If it is, processing continues at a step 1020. Otherwise processing proceeds to a step 1030.

For line-art processing, the attribute map is binarised to a 1-bit bitmask so that line-art pixel locations are labelled with "1" and non-line-art pixels are labelled with "0". Now let A(x,y) represent the attribute tile 512, O(x,y) the original image tile 514, and S(x,y) the scan image tile 516.
The colour error detection step 1020 begins processing on the tiles in raster order, this being performed by the processor 2111 as directed by the APV ASIC 2118 and/or the APV arrangement software application 2103. If A(x,y) is 0, the step 1020 proceeds to a next pixel location until A(x,y) is 1. At the first instance of A(x,y) equalling 1, a line-art colour list is generated by adding the colour at O(x,y), i.e. co1, and initialising a pixel count for that colour to 1. For other instances of A(x,y) equalling 1, the colour at O(x,y) is compared against all existing colours on the line-art colour list. If the colour is unique, the colour is added to the list and its pixel count is set to one. Otherwise, the matched colour's pixel count is incremented by one. For each colour coi on the line-art colour list, there is a corresponding colour csi, which is the average colour of the pixels in S(x,y) corresponding to those pixels of colour coi in O(x,y).

In a preferred APV arrangement, colour values are expressed in 24-bit RGB format, and each csi value is preferably stored as a running total of individual RGB values. These csi values are converted to average scan line-art colours by dividing by their pixel counts at the end of the raster processing. Finally, a colour distance for each line-art colour pair, coi and csi, is determined. If the distance satisfies the following condition:

    (Ro - Rs)^2 + (Go - Gs)^2 + (Bo - Bs)^2 < Θ

where Θ is a predefined colour threshold, then the scan line-art colour csi is deemed to be sufficiently close to the original line-art colour coi. Otherwise, the printed line-art is considered to have a colour print defect.

Following the colour error detection in the step 1020, processing moves to the step 1030, where the quality of printed line-art is evaluated. Fig. 17 depicts the line error detection process 1030 in more detail, in which a schematic flow diagram of the steps for performing the line error detection is shown.
Processing starts by binarising the attribute tile 512 and the scan tile 516 in a step 1710, so that background and non-background pixels become 0s and 1s, respectively. In a following step 1720, a bitwise logical XOR operation is performed between the attribute and scan tile bitmasks. The sum of the resulting bitmask is compared against a predefined threshold in a following step 1730. If the sum is less than the threshold, the line-art in the scan tile 516 is considered sufficiently similar at the pixel level and the process 1030 proceeds to a following step 1740. However, having a large number of corresponding pixels does not guarantee that the scan region is free of breaks and voids. Accordingly, in addition to pixel correspondence, the present APV arrangement provides several other defect detection mechanisms to ensure line-art quality. These are described in the following steps.

In the step 1740, connected components are formed in the bitmasks determined in the step 1710. Statistics of each connected component are generated in the process. The statistics may comprise any one or more of bounding box, pixel count, border length, and fill and edge ratios. Corresponding connected components in the attribute and scan bitmasks are compared. If any of the statistics differs by more than its predefined threshold, a line-art print defect is detected and registered in a step 1770.

In a step 1750, each connected component is decomposed into smaller connected components, which can be identified as lines. The width of each line is then determined by measuring the average pixel separation between the two longer sides of the line, without including its end points. Furthermore, corresponding lines in the attribute and scan bitmasks are compared to identify print defects in line width. If the width of any printed line is greater or less than a predefined acceptable deviation, a line-art print defect is detected and registered in the step 1770.
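The connected-component comparison of the step 1740 described above can be sketched as follows. This is an illustrative sketch under assumptions: it uses only one of the statistics the text lists (pixel count), 4-connectivity, and a simple sorted pairing of components; the function names and the tolerance value are invented for illustration.

```python
# Sketch of step 1740: label connected components in each bitmask with a
# 4-connected flood fill, then compare component counts and pixel counts
# between the attribute and scan bitmasks.
def component_sizes(mask):
    """Return the sorted pixel counts of the 4-connected components."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, n = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(n)
    return sorted(sizes)

def lineart_defect(attr_mask, scan_mask, max_diff=2):
    """Flag a defect if the component counts differ (e.g. a broken line)
    or any paired component's pixel count deviates by more than max_diff."""
    a, s = component_sizes(attr_mask), component_sizes(scan_mask)
    if len(a) != len(s):
        return True
    return any(abs(x - y) > max_diff for x, y in zip(a, s))
```

A break in a printed line splits one component into two, so the component counts differ and the comparison flags a defect even when the XOR pixel count of the step 1730 is small.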
End points of a line, such as arrowheads, are particularly important to the print quality of illustrations, thus special attention is paid to their integrity. Accordingly, in a step 1760, each line identified in the step 1750 is further analysed to extract its end points by removing pixels along the uniform-width section of the line. The extracted end points in the attribute bitmask are compared with those in the scan bitmask to identify print defects in the end points. Any identified defects are updated in the defect map in the step 1770. Processing of the process 1030 then terminates at a step 1780.

Fig. 11 describes the hybrid tile processing step 690 in Fig. 6, in which a schematic flow diagram of the steps for performing the hybrid tile processing is shown. The inputs for the step 690 are the attribute tile 512, the original strip tile 514 and the scan strip tile 516. In a step 1110, the attribute tile 512 acts as a selection mask that separates pixel data in the original strip tile 514 and the scan strip tile 516 into individual tile buffers according to the pixel attributes.

Processing in the step 1110 proceeds in raster order in regard to each strip, and at each pixel location (x,y), the attribute A(x,y) is examined. If A(x,y) is a text attribute, the pixel values at O(x,y) and S(x,y) are copied to text buffers To(x,y) and Ts(x,y), respectively. If A(x,y) is a line-art attribute, the pixel values at O(x,y) and S(x,y) are copied to line-art buffers Lo(x,y) and Ls(x,y), respectively. If A(x,y) is an image attribute, the pixel values at O(x,y) and S(x,y) are copied to image buffers Io(x,y) and Is(x,y), respectively. If A(x,y) is a blank pixel, processing skips to the next pixel location. In a preferred APV arrangement, the individual tile buffers are initialised to white, or (255, 255, 255) in RGB values. Following tile data separation in the step 1110, the resulting text, line-art and image tiles are processed by the steps 640, 680 and 660, respectively.
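The hybrid-tile separation of the step 1110 described above can be sketched as follows. This is an illustrative sketch: the attribute encoding (plain strings here) and the function name are assumptions; the routing of original and scanned pixels into per-attribute buffers initialised to white (255, 255, 255), with blank pixels skipped, follows the text.

```python
# Sketch of step 1110: use the attribute tile as a selection mask to
# split the original and scan tiles into text, line-art and image
# buffer pairs, each initialised to white as described.
WHITE = (255, 255, 255)
TEXT, LINEART, IMAGE, BLANK = 'text', 'lineart', 'image', 'blank'

def separate(attr, orig, scan):
    """Return {attribute: (original buffer, scan buffer)} for the three
    non-blank attributes, copying only the pixels of that attribute."""
    h, w = len(attr), len(attr[0])
    bufs = {k: ([[WHITE] * w for _ in range(h)],
                [[WHITE] * w for _ in range(h)])
            for k in (TEXT, LINEART, IMAGE)}
    for y in range(h):
        for x in range(w):
            a = attr[y][x]
            if a == BLANK:
                continue  # blank pixels are skipped
            o_buf, s_buf = bufs[a]
            o_buf[y][x] = orig[y][x]
            s_buf[y][x] = scan[y][x]
    return bufs
```

The text, line-art and image buffer pairs can then be handed to the specialised processing of the steps 640, 680 and 660, each of which sees white wherever the tile carried a different attribute.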
Second APV arrangement

In the above APV arrangement, attribute map data is generated by the rasteriser in the step 120 using information derived from an input PDL document; however, the APV approach is not restricted to PDL documents. In an alternative APV arrangement, the source input document 166 is a bitmap image comprising a variety of document object types such as text, natural image and graphics. In the absence of the PDL command data 166 (also referred to as the PDL file 166), the rasteriser process in the step 120 is unable to generate any attribute data (such as 161) of the input image 166 directly. In the second APV arrangement, therefore, a document layout analysis is performed in the step 120 to identify regions of different document object types such as text, natural image and graphics. As a result, the input bitmap image is partitioned into rectangular regions that are defined by bounding box coordinates. Each region is given a label indicating its content type.

Once the document layout analysis is complete, an attribute map 161 in the same format as described above can be generated. Starting with each 3-bit flag initialised to 0, the attribute map 161 is generated in accordance with six different part attributes as shown in Fig. 19. The vector attribute assumes a value of "1" when a given region is bi-level in tone, or a value of "0" when it is a busy area with multi-level colours and various levels of contrast. The character attribute assumes a value of "1" when a given region is bi-level in tone and rich in text-like characteristics, such as high contrast and high edge ratio, or a value of "0" otherwise. The colour attribute is set to a value of "0" if a given region has colours other than grey or black irrespective of region classification, and is set to a value of "1" if that region contains only different shades of grey and/or black. Upon generating the attribute map 161, processing continues as in the first APV arrangement.
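The packing of the vector, character and colour attributes into a per-pixel 3-bit flag described above can be sketched as follows. This is a sketch under assumptions: the bit positions and names are invented for illustration (the text fixes only the meanings of the three attributes and the initial value of 0).

```python
# Sketch of building a 3-bit attribute flag, starting from 0 as
# described. Bit assignments are assumptions, not from the text.
VECTOR, CHARACTER, COLOUR = 1 << 0, 1 << 1, 1 << 2

def make_flag(is_vector, is_character, is_grey_only):
    flag = 0
    if is_vector:
        flag |= VECTOR      # "1" = region is bi-level in tone
    if is_character:
        flag |= CHARACTER   # "1" = text-like: high contrast, edge-rich
    if is_grey_only:
        flag |= COLOUR      # "1" = only shades of grey and/or black
    return flag
```

A region classified by the layout analysis then contributes the same flag value at every pixel location inside its bounding box.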
The second APV arrangement allows printouts 163 of bitmap images to be monitored for print defects in the same way, and with the same effectiveness, as in the first APV arrangement.

Variations

In the first two APV arrangements, three different attributes 192, respectively indicating whether the pixel of interest is a vector image or not, a character or not, and colour or monochrome, have been described (see Fig. 19). Of course, the present APV arrangement is not limited to these specific attributes. For example, various other kinds of flag information can be used, e.g., a flag that identifies the hue or saturation of a given colour of a colour object, a flag indicating if the pixel of interest belongs to an edge portion of a given object, a flag indicating if the pixel is part of a gradient blend region, and the like.

Also, the adaptive defect detection methods to be switched based on the attribute map information 161 are not limited to those in the above APV arrangements. For example, various other image processes, such as topological skeletonisation of text regions for shape comparison, morphological operations for filtering visually insignificant defects, and the like, can be applied.

The recording process 130 is not limited to any particular type of printer or apparatus. For example, an engine that ejects ink droplets, or any other engine, may be used. That is, an optimal recording process can be selected as needed in accordance with the attribute information 161 stored in the attribute map memory.

Furthermore, in the above APV arrangements, the rasteriser process in the step 120 can be incorporated in the housing of the printer 2107 in the printing process 130, or may be implemented as one of the processes of the processor 2111. In the latter case, the rasteriser can be built in, especially as one of the functions of the printer driver.
Hence, a number of processing steps in the present APV arrangement can also be achieved by supplying, to a system or apparatus, a storage medium that stores program codes of the software 2103, which implements those processing steps of the above APV arrangements, and reading out and executing the program codes 2103 stored in the storage medium 2104 by the processor 2111 (or a CPU or MPU) of the system or apparatus.

In the above APV arrangements, image alignment is a necessary step prior to image comparison. However, this step can become optional if the paper feed and image formation processes are highly accurate, allowing direct comparison between the original input 160 and its printout image 163.

Furthermore, the printing medium may contain text and graphical objects, such as logos and address information in a letterhead, prior to printing. An additional scanner (not shown) is used to capture a preprint scan of the printing medium. Image comparison in the step 270 is then performed between the final printout scan 164 and the combined original and preprint image.

Note that the arrangement of each of the above APV arrangements can be applied to either a system constituted by a plurality of devices (e.g., the host computer 2001, the interface devices 2008 and 2013, the reader 2026, the printer 2015, and the like), or an apparatus consisting of a single item of equipment (e.g., the printer 2107, a facsimile apparatus, or the like).

The above APV arrangements are also provided by supplying a storage medium 2025, which records a program code of the software program 2033 that can implement the functions of the above-mentioned APV arrangements, to the system or apparatus, and reading out and executing the program code stored in the storage medium by the computer 2001 (or a CPU or MPU) of the system or apparatus.
In this case, the program code 2033 itself, read out from the storage medium 2025, implements the functions of the above-mentioned APV arrangements, and the storage medium which stores the program code constitutes the present APV arrangement. As the storage medium for supplying the program code, for example, a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, non-volatile memory card, ROM, and the like may be used.

The functions of the above-mentioned APV arrangements can be implemented not only by executing the readout program code on the computer, but also by some or all of the actual processing operations executed by an OS (operating system) running on the computer on the basis of an instruction of the program code.

Furthermore, the functions of the above-mentioned APV arrangements can be implemented by some or all of the actual processing operations executed by a CPU or the like arranged in a function extension board or a function extension unit, which is inserted in or connected to the computer, after the program code read out from the storage medium is written in a memory of the extension board or unit.

As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof.

Industrial Applicability

The arrangements described are applicable to the computer and data processing industries, and particularly to the quality measurement and assurance industries.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (14)

1. A method for assessing the quality of output of a printing device, said method comprising the steps of:
(a) rendering (120) a source document (166) to generate a bitmap version (160) of the source document and attribute data (161) associated with the source document (166);
(b) printing (130) at least a part of said bitmap version (160) on a print medium to form a printed medium;
(c) scanning (140) said printed medium using an imaging device (2026) to form a scanned image (164);
(d) aligning (240) the bitmap version (160) and the scanned image (164);
(e) selecting, dependent upon the attribute data (161), a comparison method for comparing the bitmap version (160) to the scanned image (164); and
(f) detecting, using said selected comparison method, print defects on the printed medium.
2. A method according to claim 1, wherein the rendering step comprises:
rasterising the source document to form the bitmap version if the source document is expressed as a PDL file (166); or
performing a document analysis of the source document to identify regions of different document object types if the source document is expressed as a bitmap, said document analysis being used to form the bitmap version.
3. A method according to claim 1, wherein:
the rendering step generates alignment information (162) for the source document; and
the aligning step uses the alignment information to align the bitmap version and the scanned image.
4. The method according to claim 1, wherein said attribute data comprises an attribute map that specifies attributes for pixels of the bitmap version, said attributes defining a said pixel as belonging to one of a natural image, a character object, a graphic object, a colour object and a monochrome object.
5. The method according to claim 3, wherein the alignment information comprises a list (162) of alignable regions within the bitmap version, said aligning step comprising the sub-steps of:
(a) arranging the bitmap version, the attribute map and the scanned image into a plurality of strips each comprising a predetermined number of consecutive lines of pixels; and
(b) processing the bitmap version strips and the scanned image strips in pairs, wherein said processing step comprises, in relation to each said pair, the sub-steps of:
(i) using a plurality of the alignable regions to identify corresponding regions in the scanned image;
(ii) correlating said alignable region pairs to determine a geometric transformation that maps pixels of the bitmap version and the attribute map to corresponding pixels of the scanned image; and
(iii) detecting misalignment between the bitmap version and the scanned document according to predetermined transformation parameters.
6. The method according to claim 1, wherein said selecting step comprises the sub-steps of:
(a) arranging the bitmap version, the attribute map and the scanned document strips into a plurality of sets of tiles; and
(b) processing, according to tile attributes within each attribute map tile, said bitmap version tiles, said attribute map tiles and said scanned document tiles on a per-tile basis, wherein each said set of tiles comprises one bitmap version tile, one attribute map tile and one scanned document tile in corresponding spatial locations.
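The tiling in claim 6 can be sketched as cutting each page buffer into non-overlapping tiles at corresponding spatial locations and labelling each tile from its attribute-map contents. The tile size and the "most common non-blank attribute wins" rule are illustrative choices, not specified by the patent.

```python
# Sketch of claim 6: tiling the attribute map and classifying each tile.
# Tile size and classification rule are assumptions for illustration.
from collections import Counter

def tiles(grid, size):
    """Yield (tile_row, tile_col, tile) for non-overlapping size x size tiles."""
    for r in range(0, len(grid), size):
        for c in range(0, len(grid[0]), size):
            yield r // size, c // size, [row[c:c + size] for row in grid[r:r + size]]

def classify(attr_tile):
    """Label a tile 'blank' if wholly blank, else by its most common attribute."""
    counts = Counter(a for row in attr_tile for a in row)
    if set(counts) == {"blank"}:
        return "blank"
    counts.pop("blank", None)
    return counts.most_common(1)[0][0]

attr_map = [
    ["text",  "text",  "blank", "blank"],
    ["text",  "text",  "blank", "blank"],
    ["image", "image", "blank", "blank"],
    ["image", "image", "blank", "blank"],
]
labels = {(r, c): classify(t) for r, c, t in tiles(attr_map, 2)}
print(labels)  # → {(0, 0): 'text', (0, 1): 'blank', (1, 0): 'image', (1, 1): 'blank'}
```

The same `(tile_row, tile_col)` indices would address the bitmap-version and scanned-image tiles, keeping the three buffers in correspondence as the claim requires.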
7. The method according to claim 6, where said tile attributes comprise at least one of blank, text, image and line-art.
8. The method according to claim 1, where said comparison method comprises at least one of blank tile processing, text tile processing, image tile processing, line-art tile processing and hybrid tile processing.
9. A method for assessing the quality of output of a printing device, said method comprising the steps of:
(a) rendering (120) a source document (166) to generate a bitmap version (160) of the source document and attribute data (161) associated with the source document (166);
(b) printing (130) said bitmap version (160) on a print medium to form a printed medium;
(c) scanning (140) said printed medium using an imaging device (2026) to form a scanned image (164), wherein the scanned image (164) is aligned with the bitmap version (160);
(d) selecting, dependent upon the attribute data (161), a comparison method for comparing the bitmap version (160) to the scanned image (164); and
(e) detecting, using said selected comparison method, print defects on the printed medium.
10. An apparatus for assessing the quality of output of a printing device, said apparatus comprising:
(a) a renderer for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) a printer for printing said bitmap version on a print medium to form a printed medium;
(c) a scanner for scanning said printed medium to form a scanned image;
(d) means for aligning the bitmap version and the scanned image;
(e) a selector for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) a plurality of comparison means for detecting, based upon said selected comparison method, print defects on the printed medium.
11. An apparatus for assessing the quality of output of a printing device, said apparatus comprising:
a memory for storing a program;
a printer;
an imaging device; and
a processor for executing the program, said program comprising:
(a) code for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) code for printing said bitmap version on a print medium using the printer to form a printed medium;
(c) code for scanning said printed medium using the imaging device to form a scanned image;
(d) code for aligning the bitmap version and the scanned image;
(e) code for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) code for detecting, using said selected comparison method, print defects on the printed medium.
12. A computer program product including a computer readable medium having recorded thereon a computer program for directing a processor to execute a method for assessing the quality of output of a printing device, said program comprising:
(a) code for rendering a source document to generate a bitmap version of the source document and attribute data associated with the source document;
(b) code for printing said bitmap version on a print medium using the printer to form a printed medium;
(c) code for scanning said printed medium using the imaging device to form a scanned image;
(d) code for aligning the bitmap version and the scanned image;
(e) code for selecting, dependent upon the attribute data, a comparison method for comparing the bitmap version to the scanned image; and
(f) code for detecting, using said selected comparison method, print defects on the printed medium.
13. A method for assessing the quality of output of a printing device, said method comprising the steps of:
(a) rendering (120) a source document (166) to generate a bitmap version (160) of the source document and attribute data (161) associated with the source document (166);
(b) printing (130) at least a first strip of said bitmap version (160) on a print medium to form a printed medium, said first strip being characterized by a first height;
(c) scanning (140) at least a strip of the printed medium using an imaging device to form a scan strip (235), said scan strip being characterized by a second height, and said second height being at most equal to said first height;
(d) obtaining an original strip (225) from the bitmap version (160), said original strip at least partly corresponding to the scan strip;
(e) aligning (240) the original strip (225) and the scan strip (235);
(f) selecting, dependent upon the attribute data (161), a comparison method for comparing the original strip (225) to the scan strip (235); and
(g) detecting, using said selected comparison method, print defects on the printed medium.
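Steps (c) and (d) of claim 13 amount to walking the page in scan strips and slicing out the corresponding rows of the rendered bitmap. The sketch below uses an invented 12-line "page" and a fixed strip height; in the claim the scan-strip height need only be at most the printed-strip height, and correspondence may be partial.

```python
# Sketch of claim 13 steps (c)-(d): per-strip extraction of the original
# (rendered) rows that correspond to each scanned strip. Values are invented.

def original_strip(bitmap, scan_top, scan_height):
    """Slice the rendered page rows corresponding to a scan strip."""
    return bitmap[scan_top:scan_top + scan_height]

page = [[row] * 4 for row in range(12)]  # 12-line rendered "page", 4 px wide
scan_height = 3                          # second height <= first height
strips = [original_strip(page, top, scan_height)
          for top in range(0, len(page), scan_height)]
print(len(strips), strips[1][0])  # → 4 [3, 3, 3, 3]
```

Processing strip by strip, rather than whole pages, is what lets alignment and comparison start before the full page has emerged from the printer.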
14. A method according to claim 1 or 13, wherein said selection is performed tile by tile.

Dated 23 December, 2008
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant/Nominated Person
SPRUSON & FERGUSON
AU2008264171A 2008-12-23 2008-12-23 Print quality assessment method Abandoned AU2008264171A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008264171A AU2008264171A1 (en) 2008-12-23 2008-12-23 Print quality assessment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2008264171A AU2008264171A1 (en) 2008-12-23 2008-12-23 Print quality assessment method

Publications (1)

Publication Number Publication Date
AU2008264171A1 true AU2008264171A1 (en) 2010-07-08

Family

ID=42313413

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008264171A Abandoned AU2008264171A1 (en) 2008-12-23 2008-12-23 Print quality assessment method

Country Status (1)

Country Link
AU (1) AU2008264171A1 (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2642738A1 (en) * 2012-03-21 2013-09-25 Ricoh Company, Ltd. Apparatus, system, and method of inspecting image, and carrier medium storing image inspection control program
US20130250370A1 (en) * 2012-03-21 2013-09-26 Ricoh Company, Ltd. Apparatus, system, and method of inspecting image, and recording medium storing image inspection control program
US9172824B2 (en) 2012-03-21 2015-10-27 Ricoh Company, Ltd. Apparatus, system, and method of inspecting image, and recording medium storing image inspection control program
CN104062301A (en) * 2013-03-22 2014-09-24 富士施乐株式会社 Image Inspection System And Image Inspection Apparatus
EP2782324A1 (en) * 2013-03-22 2014-09-24 Fuji Xerox Co., Ltd. Image inspection system and image inspection apparatus
US8908232B2 (en) 2013-03-22 2014-12-09 Fuji Xerox Co., Ltd. Image inspection system and image inspection apparatus
AU2013245454B2 (en) * 2013-03-22 2015-09-24 Fujifilm Business Innovation Corp. Image inspection system and image inspection apparatus
CN104062301B (en) * 2013-03-22 2018-06-15 富士施乐株式会社 Image review systems and image testing device
US10999452B2 (en) 2018-01-25 2021-05-04 Hewlett-Packard Development Company, L.P. Predicting depleted printing device colorant from color fading
EP4124939A1 (en) * 2021-07-30 2023-02-01 Ricoh Company, Ltd. Printing system, image processing apparatus, and comparison method
US11797804B2 (en) 2021-07-30 2023-10-24 Ricoh Company, Ltd. Printing system, image processing apparatus, and comparison method

Similar Documents

Publication Publication Date Title
US6898316B2 (en) Multiple image area detection in a digital image
US9088673B2 (en) Image registration
AU2009251147B2 (en) Dynamic printer modelling for output checking
US8931700B2 (en) Four dimensional (4D) color barcode for high capacity data encoding and decoding
US8331670B2 (en) Method of detection document alteration by comparing characters using shape features of characters
EP1327955A2 (en) Text extraction from a compound document
US20030118234A1 (en) Image processing device, image processing method, program for executing image processing, and computer readable recording medium on which the program is stored
WO2007127085A1 (en) Generating a bitonal image from a scanned colour image
CA2676283C (en) A method for aligning a modified document and an original document for comparison and difference highlighting
JP2008252862A (en) Image processing apparatus, image processing method, and image processing program
US10715683B2 (en) Print quality diagnosis
US8913852B2 (en) Band-based patch selection with a dynamic grid
US6360006B1 (en) Color block selection
AU2008264171A1 (en) Print quality assessment method
AU2009202451B2 (en) Image processing apparatus, image forming apparatus and program
US8340409B2 (en) Systems and methods for outlining image differences
JP4208520B2 (en) Image processing apparatus, image processing method, program, and storage medium
US8990681B2 (en) Method for aligning a modified document and an original document for comparison and difference highlighting
JP5517028B2 (en) Image processing device
JP2009105541A (en) Image processing apparatus, method and program
JP4228905B2 (en) Image processing apparatus and program
JP2009060216A (en) Image processor, and image processing program
US20120200896A1 (en) Method for Optimizing the Search for Trapping Regions
JP4311183B2 (en) Image processing apparatus and program
AU2011203230A1 (en) Variable patch size alignment hints

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application