WO2014126187A1 - Image processing device, image processing method, image reading device, and program - Google Patents
- Publication number
- WO2014126187A1 (PCT/JP2014/053430)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- scanning direction
- sub
- image
- image data
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/04—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
- H04N1/19—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
- H04N1/191—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements
- H04N1/192—Simultaneously or substantially simultaneously scanning picture elements on one main scanning line
- H04N1/193—Simultaneously or substantially simultaneously scanning picture elements on one main scanning line using electrically scanned linear arrays, e.g. linear CCD arrays
- H04N1/1932—Simultaneously or substantially simultaneously scanning picture elements on one main scanning line using electrically scanned linear arrays, e.g. linear CCD arrays using an array of elements displaced from one another in the sub scan direction, e.g. a diagonally arranged array
- H04N1/1933—Staggered element arrays, e.g. arrays with elements arranged in a zigzag
Definitions
- the present invention relates to an image processing apparatus, an image processing method, and an image reading apparatus that combine image data obtained by scanning a read object with a plurality of line sensors to generate composite image data corresponding to the read object.
- the present invention also relates to a program for causing a computer to execute processing for combining the image data.
- the image reading apparatus includes a first-row line sensor group and a second-row line sensor group, each including a plurality of line sensors arranged in a line at intervals in the main scanning direction, and a telecentric optical system that forms an erect image on each of the first-row line sensor group and the second-row line sensor group.
- JP 2009-246623 A paragraphs 0047 to 0049
- the document is read at a reading magnification (hereinafter also referred to as the "variable magnification") corresponding to the copying magnification, for example by changing the scanning speed in the sub-scanning direction (the sub-scanning speed), so that the image is read while being enlarged or reduced (hereinafter also referred to as "performing scaling processing") (for example, see Patent Document 2).
- when scaling is performed with a scaling factor R, the number of read lines is multiplied by R (that is, the image is enlarged or reduced).
- a common method is to keep the reading cycle (the line sensor drive timing) constant and to change the relative movement speed between the document and the line sensor (or between the document and the optical system) to 1/R of the relative movement speed used when the magnification is 1.
- as a result, the number of lines read by the first-row line sensor and the second-row line sensor (hereinafter also referred to as the "number of read lines") changes according to the scaling factor.
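As a concrete illustration of this relationship, the following sketch computes the sub-scanning speed and the number of read lines for a given scaling factor R, assuming a constant line-sensor read cycle; the function name and the numeric values are hypothetical, not taken from the patent.

```python
# Sketch of the relationship between the scaling factor R, the sub-scanning
# speed, and the number of read lines, under a constant read cycle.
# The concrete numbers below are illustrative only.

def scaled_reading(base_speed_mm_s: float, base_lines: int, r: float):
    """Return (sub-scanning speed, number of read lines) for scaling factor r."""
    speed = base_speed_mm_s / r    # move at 1/R of the unit-magnification speed
    lines = round(base_lines * r)  # the same document length yields R times as many lines
    return speed, lines

speed, lines = scaled_reading(base_speed_mm_s=100.0, base_lines=1000, r=4.0)
# speed == 25.0 (mm/s), lines == 4000
```

At R = 1 the function returns the base values unchanged, which matches the unit-magnification case described above.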
- in the search range for detecting the amount of positional deviation, that is, the range in the sub-scanning direction from a reference position to the farthest position examined, processing is performed to detect the position in the sub-scanning direction at which the line sensor in the first row and the line sensor in the second row read the same image (that is, the image with the highest similarity) in the overlap region.
- if the search range used during scaling processing is kept the same as the search range used when the scaling ratio R is 1, the search range may be too narrow, and the following problem may occur.
- the position in the sub-scanning direction at which the image data read by the line sensor in the first row and the image data read by the line sensor in the second row match in the overlap region may then not be found within the search range, so the amount of positional deviation (number of lines) in the sub-scanning direction between the two sets of image data cannot be obtained with high accuracy. For this reason, in enlargement/reduction processing, it is desirable to expand the search range for detecting the positional deviation amount (number of lines), that is, to increase the number of lines searched, in accordance with the scaling factor, in order to detect the positional deviation amount in the sub-scanning direction with high accuracy.
- An object of the present invention is to provide an image processing apparatus, an image processing method, and an image reading apparatus capable of generating high-quality composite image data corresponding to an object to be read by combining the divided images indicated by the image data generated by the line sensors belonging to the first-row line sensor group and the divided images indicated by the image data generated by the line sensors belonging to the second-row line sensor group through a processing procedure in which positional deviation in the sub-scanning direction does not accumulate, and to provide a program for causing a computer to execute the processing for combining the image data.
- Another object of the present invention is to provide an image processing apparatus, an image processing method, and an image reading apparatus in which, even when the reading magnification is changed, the search range for detecting the amount of positional deviation in the sub-scanning direction between the image data of the line sensors in each row does not need to be expanded and the content of the deviation amount detection processing does not need to be changed, so that the amount of positional deviation can be obtained accurately without increasing the circuit scale and high-quality composite image data can be generated, and to provide a program for causing a computer to execute the processing for combining the image data.
- An image processing apparatus according to the present invention processes image data generated by an imaging unit that includes a first-row line sensor group including a plurality of first line sensors arranged in a line at a first interval in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors arranged at a position different from the first-row line sensor group in the sub-scanning direction and arranged in a line at a second interval in the main scanning direction, wherein the plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals in the first-row line sensor group, and each pair of adjacent first and second line sensors has an overlap region in which their end portions overlap in the main scanning direction. The apparatus includes: an image memory for storing first image data based on the detection signals generated by the first-row line sensor group and second image data based on the detection signals generated by the second-row line sensor group; a similarity calculation unit that performs the process of comparing reference data, which has a predetermined first width in the sub-scanning direction and is selected from the overlap region of the first image data read from the image memory, with comparison data, which has the same width as the first width and is selected from the same overlap region of the second image data read from the image memory, for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data; a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and a combination processing unit that reads the first image data and the second image data from the image memory and combines them by shifting the combining position in the sub-scanning direction by the shift amount indicated by the shift amount data.
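The similarity calculation and shift amount estimation described above can be sketched roughly as follows. This illustration uses the sum of absolute differences (SAD) as the similarity measure, which the patent does not commit to at this point; the function name, the search range, and the test data are hypothetical.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, cmp_region: np.ndarray, search: int) -> int:
    """Find the sub-scanning offset of the comparison data that best matches
    the reference data inside the overlap region.

    ref:        (width, pixels) block taken from the first image data.
    cmp_region: (width + 2*search, pixels) block from the second image data,
                centered on the nominal position of ref.
    Returns the signed offset (in lines) with the highest similarity,
    i.e. the smallest sum of absolute differences.
    """
    w = ref.shape[0]
    best_offset, best_sad = 0, float("inf")
    for off in range(-search, search + 1):
        cand = cmp_region[search + off : search + off + w]  # comparison data moved by `off` lines
        sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
        if sad < best_sad:
            best_sad, best_offset = sad, off
    return best_offset

# Example: the comparison block is taken 3 lines late, so the best match
# is found at offset -3.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(40, 16))
ref = img[10:15]                            # reference data, first width = 5 lines
cmp_region = img[10 - 5 + 3 : 15 + 5 + 3]   # same content displaced by 3 lines, search = 5
print(estimate_shift(ref, cmp_region, search=5))  # -> -3
```

The shift amount data then corresponds to this best offset, i.e. the difference between the reference position and the best-matching comparison position in the sub-scanning direction.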
- An image processing method according to the present invention processes image data generated by an imaging unit that includes a first-row line sensor group including a plurality of first line sensors arranged in a line at a first interval in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors arranged at a position different from the first-row line sensor group in the sub-scanning direction and arranged in a line at a second interval in the main scanning direction, wherein the plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals in the first-row line sensor group, and each pair of adjacent first and second line sensors has an overlap region in which their end portions overlap in the main scanning direction. The method performs the process of comparing reference data, which has a predetermined first width in the sub-scanning direction and is selected from the overlap region of the first image data read from an image memory storing first image data based on the detection signals generated by the first-row line sensor group, with comparison data, which has the same width as the first width and is selected from the same overlap region of the second image data based on the detection signals generated by the second-row line sensor group read from the image memory, for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data.
- An image reading apparatus according to the present invention includes an imaging unit that includes a first-row line sensor group including a plurality of first line sensors arranged in a line at a first interval in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors arranged at a position different from the first-row line sensor group in the sub-scanning direction and arranged in a line at a second interval in the main scanning direction, wherein the plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals in the first-row line sensor group, and each pair of adjacent first and second line sensors has an overlap region in which their end portions overlap in the main scanning direction. The apparatus further includes an image memory for storing first image data based on the detection signals generated by the first-row line sensor group and second image data based on the detection signals generated by the second-row line sensor group, and uses reference data having a predetermined first width in the sub-scanning direction in the overlap region of the first image data read from the image memory.
- A program according to the present invention processes image data generated by an imaging unit that includes a first-row line sensor group including a plurality of first line sensors arranged in a line at a first interval in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors arranged at a position different from the first-row line sensor group in the sub-scanning direction and arranged in a line at a second interval in the main scanning direction, wherein the plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals in the first-row line sensor group, and each pair of adjacent first and second line sensors has an overlap region in which their end portions overlap in the main scanning direction. The program causes a computer to execute: a process of calculating the similarity between the reference data and the comparison data; a process of calculating shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and a process of reading the first image data and the second image data from the image memory and combining the first image data and the second image data by changing the combining position in the sub-scanning direction based on the shift amount data.
- An image processing apparatus according to another aspect of the present invention processes image data generated by an imaging unit in which at least two line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, are arranged at different positions in the sub-scanning direction so that adjacent line sensors in different rows have overlap regions in which their end portions overlap in the main scanning direction. The apparatus includes: an image memory for storing image data based on the outputs of the line sensors; a read control unit that, based on a thinning rate set according to the reading magnification in the sub-scanning direction, reads, from the image data stored in the image memory, reference data at a predetermined position in the sub-scanning direction in the overlap region and comparison data in the region overlapping the overlap region of the reference data; a similarity calculation unit that performs the process of comparing the reference data read by the read control unit with the comparison data for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at the plurality of positions in the sub-scanning direction; a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the comparison data at the plurality of positions; a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combination processing unit that determines the position in the sub-scanning direction of the image data read from the image memory based on the enlarged shift amount data, reads the image data at the determined position from the image memory, and combines the image data read by adjacent line sensors in different rows to generate composite image data.
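The enlarging-and-interpolating step above can be sketched as follows. This is a minimal illustration using linear interpolation via NumPy; the patent does not mandate a specific interpolation method, and the thinning rate and sample shift values here are hypothetical.

```python
import numpy as np

def enlarge_shift_data(shift: np.ndarray, thinning_rate: int) -> np.ndarray:
    """Expand shift amount data estimated on thinned lines back to the
    full line count by linear interpolation in the sub-scanning direction."""
    n = len(shift)
    coarse_pos = np.arange(n) * thinning_rate          # line positions the estimates refer to
    fine_pos = np.arange((n - 1) * thinning_rate + 1)  # every line position
    return np.interp(fine_pos, coarse_pos, shift)

# Shift amounts estimated on every 4th line (thinning rate 4) ...
coarse = np.array([0.0, 2.0, 4.0])
# ... interpolated to per-line enlarged shift amount data:
enlarged = enlarge_shift_data(coarse, 4)
# enlarged == [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4]
```

Linear interpolation keeps the unit simple; a higher-order method could be substituted without changing the surrounding data flow.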
- An image processing method according to another aspect of the present invention processes image data generated by an imaging unit in which at least two line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, are arranged at different positions in the sub-scanning direction so that adjacent line sensors in different rows have overlap regions in which their end portions overlap in the main scanning direction. The method includes: a similarity calculation step of calculating the similarity between the reference data and the comparison data for a plurality of positions in the sub-scanning direction; a shift amount calculation step of calculating shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the comparison data at the plurality of positions; a shift amount enlargement step of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining step of determining the position in the sub-scanning direction of the image data read from the image memory based on the enlarged shift amount data, reading the image data at the determined position from the image memory, and combining the image data read by adjacent line sensors in different rows to generate composite image data.
- An image reading apparatus according to another aspect of the present invention includes: an imaging unit in which at least two line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, are arranged at different positions in the sub-scanning direction so that adjacent line sensors in different rows have overlap regions in which their end portions overlap in the main scanning direction; an image memory storing image data based on the outputs of the line sensors; a read control unit that, based on a thinning rate set according to the reading magnification in the sub-scanning direction by the line sensors, reads, from the image data stored in the image memory, reference data at a predetermined position in the sub-scanning direction in the overlap region and comparison data in the region overlapping the overlap region of the reference data; a similarity calculation unit that performs the process of comparing the reference data read by the read control unit with the comparison data for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at the plurality of positions in the sub-scanning direction; a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the comparison data at the plurality of positions; a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combination processing unit that determines the position in the sub-scanning direction of the image data read from the image memory based on the enlarged shift amount data, reads the image data at the determined position from the image memory, and combines the image data read by adjacent line sensors in different rows to generate composite image data.
- A program according to another aspect of the present invention processes image data generated by an imaging unit in which at least two line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, are arranged at different positions in the sub-scanning direction so that the end portions of adjacent line sensors in different rows overlap in the main scanning direction. The program causes a computer to execute: a read control process of reading reference data and comparison data in the overlap region; a similarity calculation process of performing the comparison between the reference data read in the read control process and the comparison data for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data; a shift amount calculation process of calculating shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the comparison data at the plurality of positions; a shift amount enlargement process of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining process of determining the position in the sub-scanning direction of the image data read from the image memory based on the enlarged shift amount data, reading the image data at the determined position from the image memory, and combining the image data read by adjacent line sensors in different rows to generate composite image data.
- According to the present invention, the position in the sub-scanning direction of each divided image indicated by the image data generated by a line sensor belonging to the second-row line sensor group is shifted relative to a reference position in the sub-scanning direction of the divided image indicated by the image data generated by a line sensor belonging to the first-row line sensor group, so that positional deviation does not accumulate and high-quality composite image data corresponding to the object to be read can be generated.
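The effect described above, where each second-row divided image is placed relative to a fixed first-row reference position so that a deviation estimated for one strip never propagates to the others, can be sketched in simplified form as follows; the array sizes and shift values are hypothetical.

```python
import numpy as np

def place_second_row(strip, base_line, shift, canvas, col0):
    """Write one second-row strip into the composite image at the first-row
    reference line plus that strip's own estimated shift amount.
    Shifts are not chained from strip to strip, so they cannot accumulate."""
    top = base_line + shift
    canvas[top : top + strip.shape[0], col0 : col0 + strip.shape[1]] = strip
    return canvas

canvas = np.zeros((20, 8), dtype=int)
place_second_row(np.full((4, 4), 1), base_line=5, shift=2, canvas=canvas, col0=0)
place_second_row(np.full((4, 4), 2), base_line=5, shift=1, canvas=canvas, col0=4)
# The first strip occupies lines 7-10 and the second lines 6-9; each is
# positioned from the same reference line 5, independently of the other
# strip's shift, so a per-strip error stays local.
```

Chaining the shifts instead (each strip relative to its neighbor) would sum the per-strip errors across the page, which is exactly the accumulation the invention avoids.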
- FIG. 1 is a functional block diagram schematically showing a configuration of an image reading apparatus according to Embodiment 1 of the present invention.
- (a) is a plan view schematically showing the first-row line sensor group and the second-row line sensor group, (b) is a plan view of a document, and (c) shows the read image data.
- (a) to (c) show a state where the document is floating above the glass surface: (a) is a schematic side view of the imaging unit, (b) is a schematic plan view of the imaging unit and a plan view of the document, and (c) is a view showing the read image data.
- It is a diagram for explaining the digital data stored in the image memory.
- It is an explanatory diagram showing an operation.
- FIG. 10 is a functional block diagram schematically showing the configuration of the image reading apparatus and the image processing apparatus according to Embodiment 2 of the present invention.
- It is a flowchart schematically illustrating an example of the processing procedure performed by the arithmetic device according to Embodiment 2.
- (a) and (b) are side views schematically showing the configuration of the imaging unit of the image reading apparatus according to Embodiment 3 of the present invention.
- (a) and (b) are diagrams for explaining the imaging unit in Embodiment 4: (a) is a plan view schematically showing the first-row line sensor group and the second-row line sensor group, and (b) is a view showing a document as the read target read by the first-row line sensor group and the second-row line sensor group.
- It is an enlarged schematic plan view of one of the line sensors shown in FIG. 18(a).
- (a) to (c) show a state where the document is in close contact with the glass surface: (a) is a schematic side view of the imaging unit, (b) is a schematic plan view of the imaging unit and a plan view of the document, and (c) is a diagram conceptually showing the image data of the read image.
- FIG. 10 is a diagram for explaining an operation when a document is in close contact with a glass surface and is read at a variable magnification of 1 (equal magnification) in the read control unit of the fourth embodiment.
- FIG. 10 is a diagram for explaining an operation when a document is separated from a glass surface and read at a variable magnification of 1 (equal magnification) in the read control unit of the fourth embodiment.
- FIG. 10 is a diagram for explaining an operation when a document is in close contact with a glass surface and is read at a scaling factor of 4, which is an example of scaling processing, in the read control unit of the fourth embodiment.
- FIG. 10 is a diagram for explaining an operation when a document is separated from a glass surface and is read at a scaling factor of 4, which is an example of scaling processing, in the read control unit of the fourth embodiment.
- (a) and (b) are diagrams showing the positional relationship of data for explaining an operation.
- FIG. 15 is a diagram for explaining an operation of calculating enlargement shift amount data when the variable magnification is 1 (equal magnification) in the shift amount enlargement unit of the fourth embodiment.
- FIG. 18 is an explanatory diagram illustrating an example of the operation of calculating enlargement shift amount data when the scaling factor is 4, which is an example of scaling processing, in the shift amount enlargement unit of the fourth embodiment.
- FIG. 16 is an explanatory diagram illustrating another example of the operation of calculating enlargement shift amount data when the scaling factor is 4, which is an example of scaling processing, in the shift amount enlargement unit of the fourth embodiment.
- FIG. 20 is an explanatory diagram illustrating an example of a combining operation in a combining processing unit according to the fourth embodiment.
- FIG. 20 is a diagram for explaining the relationship between the similarity data (correlation data) and the search range of the positional deviation amount in the sub-scanning direction in the shift amount estimation unit of the fifth embodiment.
- FIG. 20 is a diagram illustrating an example of image data of an image read by the imaging unit according to the seventh embodiment, and a diagram for explaining the image data after correction in Embodiment 7.
- FIG. 1 is a functional block diagram schematically showing a configuration of an image reading apparatus 1 according to Embodiment 1 of the present invention.
- the image reading apparatus 1 according to the first embodiment includes an imaging unit 2, an A / D conversion unit 3, and an image processing unit 4.
- the image processing unit 4 is the image processing apparatus according to Embodiment 1 (an apparatus that can perform the image processing method according to Embodiment 1), and includes an image memory 41, a similarity calculation unit 42, a shift amount estimation unit 43, and a combination processing unit 44.
- the image reading apparatus 1 includes a first-row line sensor group including a plurality of first line sensors (for example, 21O or 21E in FIG. 2A) arranged in a line at a first interval (for example, 22 or 23 in FIG. 2A) in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors (for example, 21E or 21O in FIG. 2A) arranged at a position different from the first-row line sensor group in the sub-scanning direction and arranged in a line at a second interval (for example, 23 or 22 in FIG. 2A) in the main scanning direction. The plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals (for example, 23 or 22 in FIG. 2A) in the second-row line sensor group, and the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals (for example, 22 or 23 in FIG. 2A) in the first-row line sensor group. The adjacent end portions (for example, sr and sl in FIG. 2A) of adjacent first and second line sensors overlap each other in the main scanning direction (for example, in the overlap regions A k,k and the like in FIG. 2A).
- the image reading apparatus 1 includes the image memory 41, which stores first image data based on the detection signals generated by the first-row line sensor group and second image data based on the detection signals generated by the second-row line sensor group, and the similarity calculation unit 42, which performs the process of comparing reference data having a predetermined first width in the sub-scanning direction in the overlap region of the first image data read from the image memory 41 with comparison data having the same width as the first width selected from the same overlap region of the second image data read from the image memory 41, for a plurality of positions obtained by moving the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data.
- the image reading apparatus 1 also includes the shift amount estimation unit 43, which calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data, and the combination processing unit 44, which reads the first image data and the second image data from the image memory 41 and combines the first image data and the second image data by changing the combining position in the sub-scanning direction based on the shift amount data.
- FIG. 2A is a plan view schematically showing the imaging unit 2
- FIG. 2B is a plan view showing a document 60 as an object to be read.
- FIG. 2A shows, for example, a state in which an original table glass (hereinafter referred to as “glass surface”) 26 of the copying machine is viewed from above.
- FIG. 3 is a diagram for explaining the configuration of a line sensor, taking the line sensor 21O 1 , one of the components of the imaging unit 2, as an example.
- the imaging unit 2 includes a sensor substrate 20.
- a plurality of line sensors are arranged in two rows on the sensor substrate 20.
- the line sensors 21O 1 , ..., 21O k , ..., 21O n located at the odd-numbered positions counted from one end (for example, the left side) are arranged in a straight line in the main scanning direction with intervals 22 between them, and the line sensors 21E 1 , ..., 21E k , ..., 21E n located at the even-numbered positions from the left are arranged in a straight line in the main scanning direction with intervals 23 between them, at positions different from the odd-numbered line sensors 21O 1 , ..., 21O k , ..., 21O n in the sub-scanning direction (Y direction).
- n is an integer of 2 or more, and k is an integer from 1 to n.
- the odd-numbered line sensors 21O 1 , ..., 21O k , ..., 21O n constitute the first-row line sensor group including a plurality of first line sensors (or the second-row line sensor group including a plurality of second line sensors), and the even-numbered line sensors 21E 1 , ..., 21E k , ..., 21E n constitute the second-row line sensor group including a plurality of second line sensors (or the first-row line sensor group including a plurality of first line sensors).
- the plurality of first line sensors belonging to the first-row line sensor group are arranged to face the second intervals (for example, 23, ..., 23) in the second-row line sensor group, and the plurality of second line sensors belonging to the second-row line sensor group are arranged to face the first intervals (for example, 22, ..., 22) in the first-row line sensor group, with the adjacent end portions of adjacent first and second line sensors overlapping in the main scanning direction.
- the imaging unit 2 is moved in the sub-scanning direction (Y direction) by the transport unit 24, and reads a document 60 as a read object.
- the transport unit 24 may be a device that transports the document 60 in a direction opposite to the sub-scanning direction ( ⁇ Y direction). In each embodiment of the present application, a case where the imaging unit 2 is moved by the transport unit 24 will be described.
- the sub-scanning direction indicates the moving direction of the imaging unit 2, and the main scanning direction indicates the direction in which the odd-numbered line sensors 21O 1 , ..., 21O k , ..., 21O n are arranged, or the direction in which the even-numbered line sensors 21E 1 , ..., 21E k , ..., 21E n are arranged.
- the line sensor 21O 1 includes a plurality of red photoelectric conversion elements (R photoelectric conversion elements) 26R that convert the red component of the received light into electrical signals, a plurality of green photoelectric conversion elements (G photoelectric conversion elements) 26G that convert the green component of the received light into electrical signals, and a plurality of blue photoelectric conversion elements (B photoelectric conversion elements) 26B that convert the blue component of the received light into electrical signals. As shown in FIG. 3, the plurality of R photoelectric conversion elements 26R are arranged linearly in the main scanning direction (X direction), the plurality of G photoelectric conversion elements 26G are arranged linearly in the main scanning direction (X direction), and the plurality of B photoelectric conversion elements 26B are arranged linearly in the main scanning direction (X direction).
- the line sensor having the configuration shown in FIG. 3 will be described.
- the present invention may also be configured such that monochrome photoelectric conversion elements that do not distinguish colors are arranged in a line.
- the arrangement of the plurality of R photoelectric conversion elements 26R, the plurality of G photoelectric conversion elements 26G, and the plurality of B photoelectric conversion elements 26B is not limited to the example of FIG. 3.
- the line sensor 21O 1 outputs the received information as an electric signal SI (O 1 ).
- the line sensors 21E 1, 21O 2, ..., 21O n, 21E n similarly output their received information as electrical signals SI(E 1), SI(O 2), ..., SI(O n), SI(E n). The electrical signals output from all line sensors are collectively denoted as electrical signal SI.
- the electrical signal SI output from the imaging unit 2 is input to the A / D conversion unit 3.
- the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n have overlap regions A 1,1, A 1,2, ..., A k,k, A k,k+1, A k+1,k+1, ..., A n,n in which they read partially overlapping portions of the document 60. Details of the overlap regions will be described later.
- the A / D conversion unit 3 converts the electric signal SI output from the imaging unit 2 into digital data DI.
- the digital data DI is input to the image processing unit 4 and stored in the image memory 41 of the image processing unit 4.
- FIGS. 4A to 4C are diagrams for explaining the digital data DI stored in the image memory 41.
- FIG. 4(a) is a diagram showing the positional relationship between the document 60 and the line sensors when the document 60 is at the position where the optical axes of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n intersect.
- FIG. 4B is a diagram illustrating an example of the document 60.
- FIG. 4C is a diagram showing digital data DI corresponding to the document 60 of FIG. 4B when the document 60 and the line sensor are in the positional relationship of FIG.
- FIG. 4A is a schematic side view of the image reading apparatus 1 and shows a state where the copying machine is viewed from the side.
- as shown in FIG. 2(a), the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n are also collectively expressed as line sensor 21O, and the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n are also collectively expressed as line sensor 21E.
- the reflected light from the document 60, irradiated by the illumination light source 25 such as a light emitting diode (LED), is guided to the line sensor 21O along the optical axis 27O and to the line sensor 21E along the optical axis 27E.
- the imaging unit 2 conveyed in the sub-scanning direction (Y direction) sequentially photoelectrically converts the reflected light of the document 60 placed on the glass surface 26 and outputs the converted electric signal SI.
- the A/D conversion unit 3 converts the electrical signal SI into digital data DI and outputs it.
- the digital data DI is stored in the image memory 41 as shown in FIG. 4(c).
- the digital data DI includes the digital data DI(O 1), ..., DI(O k), ..., DI(O n) generated by the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n, and the digital data DI(E 1), ..., DI(E k), ..., DI(E n) generated by the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n.
- as the imaging unit 2 is conveyed in the sub-scanning direction (Y direction), the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n read partially overlapping regions (overlap regions).
- the right end sr of the line sensor 21O 1 and the left end sl of the line sensor 21E 1 both read the area A 1,1 of the document 60.
- the right end sr of the line sensor 21E 1 and the left end sl of the line sensor 21O 2 both read the area A 1,2 of the document 60.
- the right end sr of the line sensor 21O k and the left end sl of the line sensor 21E k both read the area A k,k of the document 60, the right end sr of the line sensor 21E k and the left end sl of the line sensor 21O k+1 both read the area A k,k+1 of the document 60, and the right end sr of the line sensor 21O k+1 and the left end sl of the line sensor 21E k+1 both read the area A k+1,k+1 of the document 60.
- the digital data DI(O k) corresponding to the line sensor 21O k includes digital data d r corresponding to the area A k,k of the document 60.
- the digital data DI(E k) corresponding to the line sensor 21E k includes digital data d l corresponding to the area A k,k of the document 60.
- FIGS. 5A to 5C are diagrams for explaining the digital data DI stored in the image memory 41.
- FIG. 5(a) is a diagram showing the positional relationship between the document 60 and the line sensors when the document 60 floats above the glass surface 26 and is at a position different from the position where the optical axes of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n intersect.
- FIG. 5(b) is a diagram showing an example of the document 60.
- FIG. 5(c) is a diagram showing the digital data DI corresponding to the document 60 of FIG. 5(b) when the document 60 and the line sensors are in the positional relationship of FIG. 5(a).
- the right end sr of the line sensor 21O k and the left end sl of the line sensor 21E k both read the area A k,k of the document 60, the right end sr of the line sensor 21E k and the left end sl of the line sensor 21O k+1 both read the area A k,k+1 of the document 60, and the right end sr of the line sensor 21O k+1 and the left end sl of the line sensor 21E k+1 both read the area A k+1,k+1 of the document 60.
- as shown in FIG. 5, when the imaging unit 2 is viewed from the side, because the document 60 floats above the glass surface 26, the position where the optical axis 27O of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n intersects the document 60 differs from the position where the optical axis 27E of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n intersects the document 60. Therefore, when the document 60 floats above the glass surface 26, the reading positions differ in the sub-scanning direction (Y direction).
- since each line sensor sequentially performs photoelectric conversion, the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n obtain an image of the same position temporally later than the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n.
- as a result, the digital data DI(O k) and DI(O k+1) corresponding to the odd-numbered line sensors 21O k and 21O k+1 and the digital data DI(E k) and DI(E k+1) corresponding to the even-numbered line sensors 21E k and 21E k+1 are stored in the image memory 41 shifted relative to one another.
- FIGS. 6 and 7 are diagrams for explaining the operation of the similarity calculation unit 42.
- FIG. 6 corresponds to FIG. 4(c) and shows the positional relationship between the image data MO and the image data ME at the position Y m in the sub-scanning direction (Y direction).
- FIG. 7 corresponds to FIG. 5(c) and shows the positional relationship between the image data MO and the image data ME at the position Y m in the sub-scanning direction (Y direction).
- the similarity calculation unit 42 generates correlation data D42 based on the image data MO and the image data ME.
- the data around the position Y m in the sub-scanning direction (Y direction) in the region d r of the digital data DI(O k) corresponding to the line sensor 21O k is denoted as image data MO(O k, d r, Y m), and the data around the position Y m in the sub-scanning direction (Y direction) in the region d l of the digital data DI(E k) corresponding to the line sensor 21E k is denoted as image data ME(E k, d l, Y m).
- the image data (comparison data) ME is set to be wider in the sub-scanning direction (Y direction) than the image data (reference data) MO.
- FIGS. 8A and 8B are diagrams for explaining the operation of the similarity calculation unit 42 in more detail.
- the image data MO (O k , d r , Y m ) and the image data ME (E k , d l , Y m ) are composed of a plurality of pixel data.
- the similarity calculation unit 42 extracts, from the image data ME(E k, d l, Y m), a plurality of pieces of image data ME(E k, d l, Y m, ΔY) having the same size (the same width in the sub-scanning direction) as the image data MO(O k, d r, Y m).
- ΔY is the amount of deviation from the image data MO(O k, d r, Y m) and takes a value within the range of −y to y.
- the data whose center position coincides with that of the image data MO(O k, d r, Y m) is taken as image data ME(E k, d l, Y m, 0), the data shifted by one pixel as ME(E k, d l, Y m, 1), and so on one pixel at a time; the data shifted by y pixels is image data ME(E k, d l, Y m, y), and the data shifted by y pixels in the reverse direction is image data ME(E k, d l, Y m, −y).
- the similarity between the image data MO(O k, d r, Y m) and each of the equally sized image data ME(E k, d l, Y m, −y) to ME(E k, d l, Y m, y) is calculated as correlation data D42(O k, E k, Y m).
- the shift amount estimation unit 43 outputs the shift amount ΔY corresponding to the data with the highest similarity as the shift amount data D43.
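- the matching performed by the similarity calculation unit 42 and the shift amount estimation unit 43 can be sketched as follows: a reference window MO from the odd sensor's overlap strip is compared with comparison windows ME extracted at every shift ΔY in [−y, y], and the shift with the highest similarity is output. The function names and the use of the sum of absolute differences as the (dis)similarity measure are illustrative assumptions; the embodiment does not fix a particular measure.

```python
import numpy as np

def estimate_shift(ref, cmp_strip, y):
    """Estimate the sub-scanning shift between two overlap-region strips.

    ref:       reference data MO, shape (h, w), a window around position Y_m
    cmp_strip: comparison data ME, shape (h + 2*y, w), wider in the
               sub-scanning direction and centered on the same position
    y:         maximum shift to test, so dY ranges over [-y, y]
    Returns (best_shift, d42) where d42[i] is the dissimilarity (SAD)
    for shift dY = i - y, playing the role of the correlation data D42.
    """
    h = ref.shape[0]
    d42 = []
    for i in range(2 * y + 1):                       # dY = i - y
        window = cmp_strip[i:i + h]                  # ME(..., dY), same size as ref
        d42.append(int(np.abs(window.astype(int) - ref.astype(int)).sum()))
    best = int(np.argmin(d42)) - y                   # highest similarity = lowest SAD
    return best, d42
```

Here D42 is represented as a list of dissimilarities, so the highest similarity corresponds to the smallest entry, consistent with the dashed/solid correlation curves described below.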
- the dashed line corresponds to the correlation data D42(O k, E k, Y m) of FIG. 6, and the solid line corresponds to the correlation data D42(O k, E k, Y m) of FIG. 7.
- when the image read by the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n is shifted upward relative to the image read by the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n, the similarity is highest (that is, the dissimilarity is lowest) at a negative shift value (ΔY < 0).
- the combination processing unit 44 performs a process of shifting data based on the shift amount data.
- FIG. 9 and FIG. 10 are diagrams for explaining the operation of the combination processing unit 44. As shown in FIG. 9, only the data corresponding to the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n may be shifted, or, as shown in FIG. 10, both the data corresponding to the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the data corresponding to the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n may be shifted so that the combined shift amount is equivalent to the shift amount indicated by the shift amount data.
- the combination processing unit 44 reads out, from the image memory 41, the first image data and the second image data that are adjacent to (partially overlap) each other in the main scanning direction, divides the shift amount indicated by the shift amount data between the first image data and the second image data into first shift amount data indicating one of the divided shift amounts and second shift amount data indicating the other divided shift amount, corrects the position of the first image data in the sub-scanning direction based on the first shift amount data, corrects the position of the second image data in the sub-scanning direction based on the second shift amount data, and may then combine the first image data and the second image data.
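- the variant in which the shift is divided between both images can be sketched as below. All names are hypothetical, and simple averaging of the overlap columns stands in for whatever blending the combination processing unit 44 actually applies.

```python
import numpy as np

def split_and_combine(first, second, shift, overlap):
    """Divide the estimated shift between the two images and combine them.

    Instead of shifting only the second image by `shift` lines, shift the
    first image by shift//2 and the second by -(shift - shift//2), so the
    combined relative displacement equals `shift`.  `overlap` is the number
    of overlapping pixels in the main scanning direction.  Edge lines wrapped
    in by np.roll are left as-is in this sketch.
    """
    s1 = shift // 2                    # first shift amount data
    s2 = -(shift - s1)                 # second shift amount data
    a = np.roll(first, s1, axis=0)     # correct first image in sub-scanning dir
    b = np.roll(second, s2, axis=0)    # correct second image in sub-scanning dir
    left = a[:, :-overlap]
    # average the overlapping columns of the two corrected images
    mid = (a[:, -overlap:].astype(int) + b[:, :overlap].astype(int)) // 2
    right = b[:, overlap:]
    return np.hstack([left, mid, right])
```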
- the combination processing unit 44 calculates the reading position RP of the image data based on the shift amount data D43, reads out the image data M44 corresponding to the reading position, and generates image data D44 by combining the data of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the data of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n where they overlap.
- the similarity calculation unit 42 calculates, for a position Y m in the sub-scanning direction (Y direction), similarity data (correlation data) D42 by shifting the image data corresponding to the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n relative to the image data corresponding to the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n; the shift amount estimation unit 43 calculates, for that position Y m, the shift amount corresponding to the highest similarity (the largest correlation) as shift amount data D43; and the combination processing unit 44 reads the image data based on the shift amount data D43 and outputs the combined image data D44.
- FIG. 11 shows an image obtained by combining the images of FIG. 4(c) or FIG. 5(c). In particular, in the case of FIG. 5(c), the image data of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the image data of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n are shifted in the sub-scanning direction (Y direction), but an image identical to the document 60 shown in FIG. 5(b) can still be output.
- FIG. 11 is an explanatory diagram conceptually showing the image data D44 after the combination process output from the combination processing unit 44.
- FIGS. 12A and 12B are diagrams illustrating an example in which the position of the document 60 with respect to the glass surface 26 is changed while the imaging unit 2 is being transported.
- at the position Y m in the sub-scanning direction (Y direction), the document 60 floats above the glass surface 26, and at the position Y u, the document 60 is in close contact with the glass surface 26. Since the imaging unit 2 sequentially processes the image data at each position in the sub-scanning direction (Y direction), even if the position of the document 60 changes during conveyance, the amount of deviation at each position is calculated, so the images can be combined correctly.
- FIGS. 13A and 13B are diagrams for explaining a case where the distance between the document 60 and the glass surface 26 changes according to the position in the main scanning direction.
- in the region F1 including the overlap region of the line sensors 21O k and 21E k, the document 60 floats above the glass surface 26 (the gap G1 has a positive value), while in the region F3 including the overlap region of the line sensors 21E k and 21O k+1 and the overlap region of the line sensors 21O k+1 and 21E k+1, the document 60 is in close contact with the glass surface 26 (the value of the gap G3 is 0). Therefore, in the example of FIG. 13, the shift amount (gap G2) gradually changes in the digital data DI(E k) generated by the line sensor 21E k that reads the portion corresponding to the region F2 of the document 60.
- the combined position of the image data gradually changes in the sub-scanning direction according to the position in the main scanning direction (main scanning position).
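- because the shift is estimated per overlap region, a per-column shift across the whole main scanning width can be obtained by interpolating between neighboring overlap regions, matching the gradual change of the combining position described above. A minimal sketch, with hypothetical names:

```python
import numpy as np

def per_column_shift(overlap_centers, overlap_shifts, width):
    """Interpolate a sub-scanning shift for every main-scanning position.

    overlap_centers: x positions (columns) of the overlap regions
    overlap_shifts:  shift estimated at each overlap region (e.g. D43 values)
    width:           number of columns of the image data
    Columns between two overlap regions get a linearly interpolated shift,
    so the combining position changes gradually in the sub-scanning
    direction according to the main scanning position.
    """
    cols = np.arange(width)
    return np.interp(cols, overlap_centers, overlap_shifts)
```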
- the combining processing unit is configured to combine the first image data and the second image data.
- in the image reading device 1, the image processing device 4, and the image processing method according to the first embodiment, the position in the sub-scanning direction of the divided image indicated by the image data generated by the line sensors belonging to the first row line sensor group (the reference data in FIG. 8(a)) is used as the reference position, and the position in the sub-scanning direction of the divided image indicated by the image data generated by the line sensors belonging to the second row line sensor group is shifted relative to it, so that high-quality combined image data corresponding to the object to be read can be generated.
- Embodiment 2. A part of the functions of the image reading apparatus may be realized by a hardware configuration, or may be realized by a computer program executed by a microprocessor including a CPU (central processing unit). When a part of the functions is realized by a computer program, the microprocessor loads the computer program from a computer-readable storage medium or via communication such as the Internet and executes it, whereby that part of the functions can be realized.
- FIG. 14 is a functional block diagram showing a configuration when a part of the functions of the image reading apparatus 1a is realized by a computer program.
- the image reading device 1 a includes an imaging unit 2, an A / D conversion unit 3, and an arithmetic device 5.
- the arithmetic device 5 includes a processor 51 including a CPU, a RAM (random access memory) 52, a nonvolatile memory 53, a large-capacity storage medium 54, and a bus 55.
- as the nonvolatile memory 53, for example, a flash memory can be used.
- as the large-capacity storage medium 54, for example, a hard disk (magnetic disk), an optical disk, or a semiconductor storage device can be used.
- the A/D conversion unit 3 has the same function as the A/D conversion unit 3 in FIG. 1; it converts the electrical signal SI output from the imaging unit 2 into digital data and stores the digital data in the RAM 52 via the processor 51.
- the processor 51 can realize the function of the image processing unit 4 by loading and executing a computer program from the nonvolatile memory 53 or the mass storage medium 54.
- FIG. 15 is a flowchart schematically showing an example of processing by the arithmetic device 5 according to the second embodiment.
- the processor 51 first executes similarity calculation processing (step S1). Thereafter, the processor 51 executes a shift amount estimation process (step S2). Finally, the processor 51 executes a combination process (step S3). Note that the processing in steps S1 to S3 by the arithmetic device 5 is the same as the processing performed by the image processing unit 4 in the first embodiment.
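- the flow of steps S1 to S3 can be sketched as a small driver, assuming Python and NumPy; the three step functions are simplified stand-ins for the similarity calculation, shift amount estimation, and combination processing of Embodiment 1 (the combination step here simply abuts the two strips without blending the overlap, and SAD stands in for the unspecified similarity measure).

```python
import numpy as np

def similarity_step(ref, cmp_strip, y):
    """Step S1: correlation data D42 for shifts dY in [-y, y] (SAD here)."""
    h = ref.shape[0]
    return [int(np.abs(cmp_strip[i:i + h].astype(int) - ref.astype(int)).sum())
            for i in range(2 * y + 1)]

def shift_estimation_step(d42, y):
    """Step S2: shift amount data D43 = argmin of the dissimilarity."""
    return int(np.argmin(d42)) - y

def combination_step(odd_img, even_img, d43):
    """Step S3: align the even-sensor image by D43 lines and abut the two
    strips in the main scanning direction (no overlap blending here)."""
    return np.hstack([odd_img, np.roll(even_img, -d43, axis=0)])

def process(odd_img, even_img, ref, cmp_strip, y):
    d42 = similarity_step(ref, cmp_strip, y)           # S1
    d43 = shift_estimation_step(d42, y)                # S2
    return combination_step(odd_img, even_img, d43)    # S3
```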
- in the image reading apparatus 1a, even if the position of the document 60 changes during conveyance of the imaging unit 2, the amount of deviation at that position is calculated, so that the images can be combined correctly.
- as shown in FIG. 4(a), the case where the optical axis 27O of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n counted from one end (e.g., the left end) and the optical axis 27E of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n intersect has been described.
- in the third embodiment, by contrast, the optical axis 28O of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n counted from one end (e.g., the left end) and the optical axis 28E of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n are parallel.
- FIG. 1 is also referred to in the description of the third embodiment.
- FIGS. 16(a) and 16(b) are schematic side views showing the positional relationship between the document 60 as the object to be read and the line sensors in the case where the optical axis 28O of the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n of the imaging unit 2 of the image reading apparatus according to the third embodiment and the optical axis 28E of the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n are parallel.
- FIG. 16(a) shows the case where the document 60 is in close contact with the glass surface 26, which is the document table mounting surface, and FIG. 16(b) shows the case where the document 60 is slightly lifted away from the glass surface 26.
- in this image reading apparatus, whether the document 60 is in close contact with the glass surface 26 as shown in FIG. 16(a) or floats above the glass surface 26 as shown in FIG. 16(b), the image of the document 60 read by the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n hardly changes, and similarly, the image of the document 60 read by the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n hardly changes. Therefore, the image processing unit 4 performs a process of shifting one or both of the image read by the odd-numbered line sensors 21O 1, ..., 21O k, ..., 21O n and the image read by the even-numbered line sensors 21E 1, ..., 21E k, ..., 21E n by a substantially constant amount in the sub-scanning direction.
- even when the conveyance speed of the conveyance mechanism that moves one or both of the document 60 and the imaging unit 2 in the sub-scanning direction fluctuates over time (that is, when there is speed fluctuation), combined image data can be generated by correctly combining the images, as in the first embodiment.
- FIG. 17 is a functional block diagram schematically showing the configuration of the image reading apparatus 101 according to the fourth embodiment of the present invention.
- the image reading apparatus 101 according to the fourth embodiment includes an imaging unit 102, an A/D conversion unit 103, an image processing unit 104, and a controller 107 that controls the operations of the imaging unit 102 and the image processing unit 104.
- the image processing unit 104 is an image processing apparatus according to the fourth embodiment (an apparatus capable of performing the image processing method according to the fourth embodiment), and includes an image memory 141, a read control unit 142, a similarity calculation unit 143, a shift amount estimation unit 144, a shift amount enlargement unit 145, and a combination processing unit 146.
- the imaging unit 102 has a first row line sensor group including a plurality of (for example, n) first row line sensors arranged in a line at intervals in the main scanning direction, and a second row line sensor group including a plurality of (for example, n) second row line sensors arranged in a line at intervals in the main scanning direction.
- n is a positive integer.
- the positions of the plurality of first row line sensors in the main scanning direction are regions where the plurality of second row line sensors are not provided (that is, regions between the second row line sensors adjacent in the main scanning direction). It is a position facing.
- the positions of the plurality of second row line sensors in the main scanning direction are regions where the plurality of first row line sensors are not provided (that is, regions between the first row line sensors adjacent in the main scanning direction). It is a position facing.
- the plurality of first row line sensors and the plurality of second row line sensors are arranged in a staggered pattern on the sensor substrate.
- the first row line sensor and the second row line sensor that are adjacent to each other are arranged so that their adjacent end portions have regions overlapping in the main scanning direction (hereinafter referred to as "overlap regions").
- the imaging unit 102 optically reads an image of a document as an object to be read and generates an electrical signal (image data) SI corresponding to the image of the document.
- the electrical signal (image data) SI generated by the imaging unit 102 includes first image data output from the plurality of first row line sensors constituting the first row line sensor group, and second image data output from the plurality of second row line sensors constituting the second row line sensor group. Note that an optical system such as a lens that forms, for example, an erect image on each of the first row line sensor group and the second row line sensor group may be provided between the line sensor groups and the document.
- the present invention can also be applied when the number of lines in the line sensor group is three or more.
- the present invention can be applied to a case where an image of an object to be read is read by an imaging unit having two or more line sensors having an overlap region. Therefore, the present invention can also be applied to the case where the first row line sensor group is composed of one line sensor and the second row line sensor group is composed of one line sensor.
- the A / D conversion unit 103 converts the electrical signal (image data) SI output from the imaging unit 102 into digital data (image data) DI.
- the image data DI is input to the image processing unit 104 and stored in the image memory 141 in the image processing unit 104.
- the image data DI stored in the image memory 141 in the image processing unit 104 includes first image data based on the image data output from the plurality of first row line sensors constituting the first row line sensor group, and second image data based on the image data output from the plurality of second row line sensors constituting the second row line sensor group.
- the read control unit 142 in the image processing unit 104 reads out, from the image data DI stored in the image memory 141, among the image data of the overlap region, reference data at a position in a predetermined sub-scanning direction (also referred to as a "predetermined sub-scanning position" or "predetermined line") (for example, the reference data MO(O k, d r) in FIG. 26(b) and FIG. 27(b) described later) and comparison data (for example, the comparison data ME(E k, d l) in FIG. 26(b) and FIG. 27(b) described later).
- the similarity calculation unit 143 in the image processing unit 104 performs a process of comparing, for the same overlap region, the reference data and the comparison data read by the read control unit 142, and calculates the similarity in the overlap region between the reference data and the comparison data at a plurality of positions in the sub-scanning direction (that is, a plurality of lines of comparison data) obtained from the comparison data. That is, the similarity calculation unit 143 calculates, with respect to the reference data, the similarity for each of the plurality of lines of comparison data (that is, a plurality of similarities corresponding to the plurality of pieces of comparison data).
- the shift amount estimation unit 144 in the image processing unit 104 calculates, from the comparison data having the highest similarity among the plurality of similarities calculated by the similarity calculation unit 143, shift amount data d sh corresponding to the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity.
- the shift amount enlargement unit 145 in the image processing unit 104 performs a process (interpolation process) of enlarging the shift amount data d sh based on the thinning rate M, thereby converting the shift amount data d sh into the enlarged shift amount data Δy b.
- the combination processing unit 146 in the image processing unit 104 changes, based on the enlarged shift amount data Δy b, the positions in the sub-scanning direction of the image data read from the image memory 141, that is, the first image data (image data corresponding to the image read by the first row line sensor group) and the second image data (image data corresponding to the image read by the second row line sensor group), and combines the first image data and the second image data with each other after the positional deviation in the sub-scanning direction is eliminated.
- the controller 107 controls the operations of the imaging unit 102 and the image processing unit 104.
- the controller 107 controls the reading operation by the imaging unit 102 and the image processing by the image processing unit 104 by sending input setting information, such as the scaling factor R, or instruction information to the imaging unit 102 and the image processing unit 104. When scaling (zooming) processing is performed by the image reading apparatus 101 and a scaling factor is designated to the controller 107, the controller 107 sends setting information for setting the reading magnification (scaling factor) R to the imaging unit 102, and sends instruction information such as the thinning rate M set based on the scaling factor to the image processing unit 104.
- the imaging unit 102 reads the document at a scaling factor other than equal magnification (that is, a scaling factor of 1), and generates and outputs image data (an image signal) whose number of lines is expanded or reduced in the sub-scanning direction according to the scaling factor.
- the thinning rate M set in the controller 107 of the image reading apparatus 101 is set, for example, to an integer value of 1 or more close to the scaling factor R, which is the reading magnification. For example, the thinning rate M is set to the integer value equal to the scaling factor R (e.g., M = 2 when R = 2, and M = 4 when R = 4) or, for non-integer scaling factors, to the integer value closest to the scaling factor R (e.g., M = 1 or M = 2).
- the method of setting the thinning rate M corresponding to the scaling factor R is not limited to the above example. With such a setting, the thinning process in the read control unit 142 and the enlargement process in the shift amount enlargement unit 145 of the image processing unit 104 need not be performed by complicated processing such as enlargement or reduction by a filter operation such as line interpolation, but can be performed by simplified processing such as line thinning at every integer number of lines, or double-writing interpolation (repeated insertion) and averaging, so there is no need to increase the circuit scale of the image processing unit 104.
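- one plausible reading of the relationship between the scaling factor R, the thinning rate M, and the shift amount enlargement is sketched below; the rounding rule and function names are assumptions consistent with the description, not the patent's exact formulation.

```python
def thinning_rate(r):
    """Choose the thinning rate M: an integer of 1 or more close to the
    scaling factor R (here, the nearest integer, clamped to at least 1)."""
    return max(1, round(r))

def enlarge_shift(d_sh, m):
    """Shift amount enlargement: a shift measured on lines thinned by M
    corresponds to M times as many lines in the full-resolution data."""
    return d_sh * m
```

Because M is an integer, the thinning reduces to dropping every M-th group of lines and the enlargement to an integer multiplication, which is what keeps the circuit scale small.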
- FIG. 18A and 18B are diagrams for explaining the imaging unit 102.
- FIG. 18A is a plan view schematically showing the imaging unit 102
- FIG. 18B is a plan view showing a document 160 as an object to be read.
- FIG. 18A shows, for example, a state in which an original table (hereinafter referred to as “original table glass” or “glass surface”) 126 of a copying machine on which an original as an object to be read is placed is viewed from above.
- FIG. 19 is a diagram for explaining the structure of one of the plurality of line sensors provided in the imaging unit 102 (the illustrated line sensor 121O 1). Note that the document table 126 is not limited to a glass surface and may have another structure as long as it can position the document 160 at a predetermined position.
- the imaging unit 102 includes a sensor substrate 120. Two rows of line sensor groups, consisting of a first row line sensor group and a second row line sensor group, are arranged on the sensor substrate 120. On the sensor substrate 120, the odd-numbered line sensors 121O 1, ..., 121O k, ..., 121O n counted from one end (e.g., the left side) are arranged in a straight line in the main scanning direction (X direction) at intervals.
- the even-numbered line sensors 121E 1, ..., 121E k, ..., 121E n counted from the same end are arranged in a straight line in the main scanning direction at intervals, at positions in the main scanning direction different from those of the odd-numbered line sensors 121O 1, ..., 121O k, ..., 121O n, so that they partially face one another and form a staggered pattern.
- n is an integer of 2 or more
- k is an integer of 1 to n.
- the odd-numbered line sensors 121O 1, ..., 121O k, ..., 121O n constitute a first row line sensor group including a plurality of first row line sensors (or a second row line sensor group including a plurality of second row line sensors), and the even-numbered line sensors 121E 1, ..., 121E k, ..., 121E n constitute a second row line sensor group including a plurality of second row line sensors (or a first row line sensor group including a plurality of first row line sensors).
- a plurality of first row line sensors (eg, 121E 1 ,..., 121E k ,..., 121E n ) belonging to the first row line sensor group
- the line sensors are arranged so as to face regions between line sensors adjacent to each other in the main scanning direction.
- a plurality of line sensors belonging to the second column line group of sensors (e.g., 121O 1, ..., 121O k , ..., 121O n) during the line sensors adjacent to each other the main scanning direction in the first column line group of sensors It arrange
- The imaging unit 102 is moved in the sub-scanning direction (Y direction) by the transport unit 124 (for example, shown in FIG. 20A) and reads the document 160 as the reading object.
- Alternatively, the device may read the document 160 by fixing the imaging unit 102 and transporting the document 160 by the transport unit 124 in the direction opposite to the sub-scanning direction (the −Y direction).
- In the following, a case will be described in which the imaging unit 102 is moved in the direction of the arrow Dy (for example, shown in FIG. 18A) by the transport unit 124.
- The sub-scanning direction (Y direction) indicates the moving direction of the imaging unit 102 (the arrow Dy direction in FIG. 18A).
- The main scanning direction indicates the direction in which the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On are arranged, or the direction in which the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are arranged.
- Setting information for setting the scanning magnification, that is, the magnification ratio R, is sent to the imaging unit 102.
- The magnification of the imaging unit 102 is changed by changing the moving speed of the imaging unit 102, driven by the transport unit 124, to 1/R of the speed at a magnification of 1×, while keeping the reading period of the line sensors constant.
- Scaling processing can also be performed by fixing the imaging unit 102 and changing the speed at which the document 160 is conveyed to 1/R of the speed at a magnification of 1×.
- As shown in FIG. 19, the line sensor 121O1 includes a plurality of red photoelectric conversion elements (R photoelectric conversion elements) 126R that convert the red component of received light into an electrical signal, a plurality of green photoelectric conversion elements (G photoelectric conversion elements) 126G that convert the green component of the received light into an electrical signal, and a plurality of blue photoelectric conversion elements (B photoelectric conversion elements) 126B that convert the blue component of the received light into an electrical signal.
- The plurality of R photoelectric conversion elements 126R are arranged linearly in the main scanning direction (X direction), the plurality of G photoelectric conversion elements 126G are arranged linearly in the main scanning direction (X direction), and the plurality of B photoelectric conversion elements 126B are arranged linearly in the main scanning direction (X direction).
- In the following, a line sensor having the configuration shown in FIG. 19 will be described.
- However, the present invention is also applicable to an image reading apparatus provided with line sensors in which monochrome photoelectric conversion elements that do not distinguish colors are arranged in a line in the main scanning direction (X direction). Further, the arrangement of the plurality of R photoelectric conversion elements 126R, the plurality of G photoelectric conversion elements 126G, and the plurality of B photoelectric conversion elements 126B is not limited to the arrangement shown in FIG. 19, and other arrangements may be adopted.
- The line sensor 121O1 outputs the received light information as an electrical signal SI(O1).
- Similarly, the line sensors 121E1, 121O2, ..., 121On, 121En output their received light signals as electrical signals SI(E1), SI(O2), ..., SI(On), SI(En).
- When the electrical signals SI(O1), SI(E1), SI(O2), ..., SI(On), SI(En) are referred to collectively, the electrical signals (image data) output from the respective line sensors are also written as SI.
- the electrical signal SI output from the imaging unit 102 is input to the A / D conversion unit 103.
- the electrical signal SI input to the A / D conversion unit 103 is converted into digital data (image data) DI and stored in the image memory 141 of the image processing unit 104.
- FIG. 20A is a diagram showing the positional relationship between the document 160 and the line sensors in the case where the optical axes 127O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the optical axes 127E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En intersect each other on the glass surface 126 (that is, the optical axes 127O and 127E intersect on the glass surface 126 when viewed in the Y direction).
- FIG. 20B is a diagram illustrating an example of the document 160.
- FIG. 20C conceptually shows the image data DI corresponding to the document 160 of FIG. 20B, read when the document 160 and the line sensors are in the positional relationship of FIG. 20A.
- FIG. 20A is a schematic side view of the image reading apparatus 101 and shows a state in which an apparatus (for example, a copying machine) including the image reading apparatus 101 is viewed from the side.
- The odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On shown in FIG. 18A are also collectively referred to as the line sensors 121O, and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are also collectively referred to as the line sensors 121E.
- The imaging unit 102, conveyed in the sub-scanning direction (Y direction), sequentially photoelectrically converts the reflected light of the document 160 placed on the glass surface 126 and outputs the converted electrical signal SI.
- The A/D conversion unit 103 converts the electrical signal SI into image data DI and outputs it.
- The direction of the optical axes 127O can be set to a desired direction by the manner of installation of the line sensors 121O1, ..., 121Ok, ..., 121On, and the direction of the optical axes 127E can be set to a desired direction by the manner of installation of the line sensors 121E1, ..., 121Ek, ..., 121En.
- The direction of the optical axes 127O may also be set by an optical system, such as a lens, disposed between the line sensors 121O1, ..., 121Ok, ..., 121On and the glass surface 126, and the direction of the optical axes 127E may be set by an optical system, such as a lens, disposed between the line sensors 121E1, ..., 121Ek, ..., 121En and the glass surface 126.
- FIG. 20C shows the image data DI(Ok) and DI(Ok+1) generated by the odd-numbered line sensors 121Ok and 121Ok+1 and the image data DI(Ek) and DI(Ek+1) generated by the even-numbered line sensors 121Ek and 121Ek+1.
- The overlap areas A1,1, A1,2, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n will now be described.
- The regions on the document 160 read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the regions on the document 160 read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En partially overlap each other; these are the overlap regions.
- Both the right end (end portion) sr of the line sensor 121O1 and the left end (end portion) sl of the line sensor 121E1 read the overlap area A1,1 of the document 160.
- Similarly, the right end sr of the line sensor 121Ok and the left end sl of the line sensor 121Ek both read the overlap area Ak,k of the document 160; the right end sr of the line sensor 121Ek and the left end sl of the line sensor 121Ok+1 both read the overlap area Ak,k+1 of the document 160; and the right end sr of the line sensor 121Ok+1 and the left end sl of the line sensor 121Ek+1 both read the overlap area Ak+1,k+1 of the document 160.
- The image data DI(Ok) corresponding to the line sensor 121Ok includes digital data dr corresponding to the overlap area Ak,k of the document 160, and the image data DI(Ek) corresponding to the line sensor 121Ek includes digital data dl corresponding to the overlap area Ak,k of the document 160.
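The overlap areas Ak,k follow directly from the staggered geometry. A minimal sketch, with hypothetical sensor width, overlap width, and pitch (the embodiment does not give numeric values):

```python
# Sketch: X-ranges covered by staggered line sensors and their overlap
# regions A_{k,k}. Width, overlap, and pitch are hypothetical values.

SENSOR_WIDTH = 100   # pixels covered by one line sensor in X (assumed)
OVERLAP = 8          # pixels shared by adjacent sensors (assumed)
PITCH = SENSOR_WIDTH - OVERLAP  # X offset between consecutive sensors

def sensor_range(i):
    """X interval [start, end) read by the i-th sensor (0-based),
    alternating odd row (even i) and even row (odd i)."""
    start = i * PITCH
    return (start, start + SENSOR_WIDTH)

def overlap_region(i):
    """Overlap interval shared by sensor i and sensor i + 1."""
    s0, e0 = sensor_range(i)
    s1, e1 = sensor_range(i + 1)
    return (max(s0, s1), min(e0, e1))

# The right end of sensor 0 (odd row) and the left end of sensor 1
# (even row) both read the same 8-pixel-wide strip of the document.
print(overlap_region(0))  # -> (92, 100)
```

The same strip of the document thus appears in two data streams, which is what makes the later similarity search between dr and dl possible.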
- FIGS. 21A to 21C are diagrams for explaining the image data DI stored in the image memory 141.
- FIG. 21A shows the positional relationship between the document 160 and the line sensors in the case where the document 160 is lifted from the glass surface 126, so that the document 160 is at a position different from the position where the optical axes of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the optical axes of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En intersect.
- FIG. 21B is a diagram illustrating an example of the document 160.
- FIG. 21C conceptually shows the image data DI corresponding to the document 160 of FIG. 21B when the document 160 and the line sensors are in the positional relationship of FIG. 21A.
- In the main scanning direction (X direction), the positional relationship between the line sensors and the document 160 when viewed in plan does not change. That is, the image data acquired by reading the document 160 in close contact with the glass surface 126 and the image data acquired by reading the document 160 lifted from the glass surface 126 are the same in the main scanning direction (X direction). Accordingly, in FIG. 21B, as in the case of FIG. 20B, the right end sr of the line sensor 121Ok and the left end sl of the line sensor 121Ek both read the overlap region Ak,k.
- In the sub-scanning direction, however, the position where the optical axes 127O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On intersect the document 160 differs from the position where the optical axes 127E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En intersect the document 160. Therefore, when the document 160 is lifted from the glass surface 126, the reading positions in the sub-scanning direction (Y direction) differ.
- Thus, the image data obtained by reading the document 160 in close contact with the glass surface 126 and the image data obtained by reading the document 160 lifted from the glass surface 126 are different in the sub-scanning direction (Y direction).
- This is because, as the imaging unit 102 is transported in the sub-scanning direction (Y direction), each line sensor sequentially performs photoelectric conversion, so the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En acquire image data of the same position temporally later than the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On. Accordingly, as shown in FIG. 21C, the image data DI(Ok) and DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok and 121Ok+1 and the image data DI(Ek) and DI(Ek+1) corresponding to the even-numbered line sensors 121Ek and 121Ek+1 are stored in the image memory 141 with their positions in the sub-scanning direction shifted.
- That is, the positions in the sub-scanning direction of the image data DI(Ok) and DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok and 121Ok+1 differ (are on different lines) from the positions in the sub-scanning direction of the image data DI(Ek) and DI(Ek+1) corresponding to the even-numbered line sensors 121Ek and 121Ek+1.
- Next, the image data DI stored in the image memory 141 when scaling processing is performed will be described.
- When the magnification ratio R is changed, the image data DI is stored in the image memory 141 accordingly.
- For example, when the magnification ratio R is 4, the image data DI corresponding to the document 160 shown in FIGS. 20C and 21C is stored in the image memory 141 as an image having four times the number of lines in the sub-scanning direction compared with the case where R is 1.
- When the magnification ratio R is 0.8, the read image is reduced, and an image having 0.8 times the number of lines in the sub-scanning direction is stored in the image memory 141.
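The relationship described above (transport speed 1/R at a constant line period, giving R times the stored lines) can be sketched as follows; the base speed and line count are hypothetical illustration values:

```python
# Sketch: relation between magnification ratio R, transport speed, and the
# number of sub-scanning lines stored in the image memory. The base speed
# and document line count are hypothetical illustration values.

BASE_SPEED = 120.0   # mm/s at magnification R = 1 (assumed)
BASE_LINES = 1000    # lines stored for the document at R = 1 (assumed)

def transport_speed(r):
    """Imaging unit (or document) speed for magnification ratio R."""
    return BASE_SPEED / r

def stored_lines(r):
    """Lines stored in the sub-scanning direction for the same document."""
    return round(BASE_LINES * r)

print(transport_speed(4), stored_lines(4))      # -> 30.0 4000
print(transport_speed(0.8), stored_lines(0.8))  # faster transport, fewer lines
```

Since the line period is fixed, a slower pass over the same document length yields proportionally more line readouts, which is exactly why the stored line count scales with R.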
- The image processing unit 104 reads the image data in the overlap regions from the image memory 141 based on the thinning rate M set according to the scaling ratio R, compares the reference data with a plurality of pieces of comparison data, and calculates a plurality of similarities.
- The shift amount data dsh is calculated using the position in the sub-scanning direction of the comparison data that yields the highest of the calculated similarities.
- The image processing unit 104 then performs a process of enlarging the shift amount data dsh in the sub-scanning direction using the thinning rate M to generate enlarged shift amount data Δyb, reads from the image memory 141 the image data at the position in the sub-scanning direction indicated by the enlarged shift amount data Δyb, and executes the image data combining process.
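The sequence above (thinned readout, similarity search, enlarging the best shift by M) can be sketched as follows. The list-of-lines memory representation and the SAD-based similarity are assumptions for illustration; the embodiment does not fix a particular similarity measure.

```python
# Sketch: find the sub-scanning shift between two overlap regions using
# M-line-thinned readout, then enlarge the best shift back to full lines.
# Memory is modeled as a list of lines, each line a list of pixel values.

def similarity(ref, cmp_block):
    """Negative sum of absolute differences: higher means more similar."""
    return -sum(abs(a - b) for row_a, row_b in zip(ref, cmp_block)
                for a, b in zip(row_a, row_b))

def find_shift(memory_ref, memory_cmp, y_m, m, y_range, height):
    """Return enlarged shift Δy_b for reference line y_m, thinning rate m."""
    half = height // 2
    # Reference data MO: lines around y_m taken at M-line intervals.
    ref = [memory_ref[y_m + m * (i - half)] for i in range(height)]
    best_dy, best_sim = 0, float("-inf")
    for dy in range(-y_range, y_range + 1):   # ΔY within "-y to +y"
        center = y_m + m * dy                 # ΔYa = M × ΔY at readout
        cmp_block = [memory_cmp[center + m * (i - half)]
                     for i in range(height)]
        s = similarity(ref, cmp_block)
        if s > best_sim:
            best_sim, best_dy = s, dy
    return m * best_dy                        # enlarged shift Δy_b

# Demo: comparison memory whose content lags the reference by 4 lines.
ref_mem = [[float(i)] * 4 for i in range(200)]
cmp_mem = [[float(i - 4)] * 4 for i in range(200)]
print(find_shift(ref_mem, cmp_mem, y_m=100, m=2, y_range=3, height=3))  # -> 4
```

Note that the search loop only visits 2·y_range + 1 candidates regardless of M; the scaling factor is absorbed by the M-line stride at readout.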
- Hereinafter, the processing of the image processing unit 104 will be specifically described.
- The read control unit 142 in the image processing unit 104 receives the thinning rate M, set according to the scaling factor R, from the controller 107. Based on the thinning rate M (for example, at every M-line interval indicated by the thinning rate M), the read control unit 142 reads, from the image data DI in the overlap region in the image memory 141, the image data at a predetermined position Ym in the sub-scanning direction (Y direction) and its periphery (for example, the read reference line Ym and the lines at M-line intervals indicated by cross-hatching in FIGS. 26A and 27A, described later) as reference data rMO.
- The read control unit 142 also reads, from the other overlap region that duplicates the same region as the reference data rMO, the image data at positions within a predetermined range before and after the line position Ym of the reference data rMO in the sub-scanning direction (Y direction) and their periphery (for example, the lines at M-line intervals indicated by hatching in FIGS. 26A and 27A, described later) as comparison data rME.
- The read control unit 142 outputs reference data MO and comparison data ME having a predetermined number of lines.
- The predetermined range is the range from −y to +y centered on the reference line Ym, that is, the search range "−y to +y".
- the + y direction is the sub-scanning direction (Y direction), and the -y direction is the opposite direction to the + y direction.
- The predetermined search range "−y to +y" is the search range of the positional deviation amount (also referred to as the "shift amount" and denoted as the positional deviation amount ΔYa or the positional deviation amount ΔY), which is the difference from the center line position of the reference data rMO.
- the predetermined search range “ ⁇ y to + y” is a search range for detecting a position (line) in the sub-scanning direction in which image data is shifted in the combination processing, that is, a position shift amount in the sub-scanning direction.
- the deviation amount ⁇ Y takes a value within a predetermined search range “ ⁇ y to + y”.
- The positional deviation amount at the time of reading from the image memory 141 (the difference between the center position of the reference data and the line position) is defined as the positional deviation amount ΔYa, and the positional deviation amount at the time of output from the read control unit 142 is defined as the positional deviation amount ΔY.
- The positional deviation amount ΔYa takes a value within a range of M times the number of lines according to the thinning rate M, which changes with the scaling factor R, that is, within the search range from −(M×y) to +(M×y) centered on the reference line.
- This search range is also expressed as “ ⁇ (M ⁇ y) to + (M ⁇ y)”.
- the positional deviation amount ⁇ Y when output from the read control unit 142 is always a value within the positional deviation search range “ ⁇ y to + y”.
- When the thinning rate M is 1, that is, in equal-magnification processing with the magnification ratio R equal to 1 or in reduction processing with the magnification ratio R smaller than 1, the read control unit 142 reads one line at a time based on the thinning rate M.
- It outputs the data rMO, which is the image data of the line Ym in the overlap region in the image memory 141 and its periphery, as the read reference data MO, and reads, from the corresponding other overlap region, the data within the search range "−y to +y" centered on the line Ym, that is, the data in the range from line (Ym−y) to line (Ym+y) and the surrounding image data.
- The read control unit 142 outputs the reference data MO and the comparison data ME of the lines in the search range "−y to +y" centered on the line Ym.
- The possible range of the positional deviation amount ΔY is the search range for detecting the positional deviation amount in the sub-scanning direction, and the reference data MO and the comparison data ME within the search range "−y to +y" centered on the line Ym are sent to the similarity calculation unit 143.
- When performing scaling processing with the thinning rate M, the read control unit 142 reads, based on the thinning rate M, the image data rMO in the overlap region of the image memory 141 near the line Ym at every M-line interval and outputs it as the reference data MO, and reads, from the corresponding overlap region, the image data rME at every M lines within the search range "−(M×y) to +(M×y)" centered on the line Ym, that is, in the range from line (Ym−(M×y)) to line (Ym+(M×y)), and outputs it as the comparison data ME.
- the positional deviation amount ⁇ Ya at the time of reading is a line within the search range “ ⁇ (M ⁇ y) to + (M ⁇ y)” with the line Y m as the center.
- the search range “ ⁇ (M ⁇ y) to + (M ⁇ y)” centered on the line Y m is a range of M times the number of lines based on the thinning rate M.
- Since the read control unit 142 reads at M-line intervals according to the thinning rate M (that is, reads one line for every M lines), the image data output from the read control unit 142 falls within the number of lines of the search range "−y to +y" centered on the line Ym.
- the read control unit 142 outputs the reference data MO and the comparison data ME of the lines in the search range “ ⁇ y to + y” with the line Y m as the center.
- The possible range of the positional deviation amount ΔY thus serves as the search range for detecting the positional deviation amount in the sub-scanning direction.
- In other words, using the thinning rate M, the read control unit 142 reads data at every M-line interval from the image data stored in the image memory 141 as the reference data rMO and outputs it as the reference data MO. From the image data stored in the image memory 141, it reads data at intervals of M lines, with the positional deviation amount ΔYa within the search range "−(M×y) to +(M×y)", as the comparison data rME and outputs it as the comparison data ME. For this reason, in the read control unit 142, the positional deviation amount ΔYa at the time of reading covers a range of M times the number of lines, using the thinning rate M, so as to cover the line range that changes with the scaling factor R.
- On the other hand, at the time of output from the read control unit 142, the positional deviation amount ΔY lies within the search range "−y to +y" centered on the line Ym, in units of one line, so the read control unit 142 can output the reference data MO, which does not change with the scaling factor, and the comparison data ME of the lines in the search range "−y to +y". Therefore, the lines corresponding to the range in the sub-scanning direction can be searched accurately, and accurate image data can be obtained, without increasing the circuit scale, for example by adding line memories to expand the search range of the positional deviation amount.
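The effect of the M-line thinning on the search range can be illustrated with plain index arithmetic: the span read from memory scales with M, but the number of candidate lines output stays at 2y+1. The values of Ym, M, and y below are hypothetical:

```python
# Sketch: line indices read by the read control unit for thinning rate M.
# The span read from memory grows with M, but the number of comparison
# candidates output (the ΔY search range "-y to +y") does not.

def comparison_lines(y_m, m, y):
    """Lines read as comparison data: ΔYa = M·ΔY for ΔY in -y..+y."""
    return [y_m + m * dy for dy in range(-y, y + 1)]

lines_m1 = comparison_lines(100, 1, 2)   # equal magnification, M = 1
lines_m4 = comparison_lines(100, 4, 2)   # scaling with M = 4

print(lines_m1)  # -> [98, 99, 100, 101, 102]   span "-y to +y"
print(lines_m4)  # -> [92, 96, 100, 104, 108]   span "-(M*y) to +(M*y)"
```

Both lists have the same length, which is why the downstream similarity search and its line memories need not grow with the scaling factor.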
- <<4-2>> Operation of Embodiment 4. <<4-2-1>> Operation of the Read Control Unit 142
- FIGS. 22 and 23 are diagrams for explaining the operation in which the read control unit 142 reads the reference data rMO from the image memory 141 using the thinning rate M.
- FIG. 22 corresponds to FIG. 20C (the case where the document 160 is in close contact with the glass surface 126) and shows, for equal-magnification processing, the positional relationship between the image data rMO read out as the reference data at the position Ym (line Ym) in the sub-scanning direction (Y direction) and the image data rME read out as the comparison data.
- The image data rME read out as the comparison data has the positional deviation amount ΔYa within the search range "−y to +y" centered on the line Ym, that is, it is read from line (Ym−y) to line (Ym+y).
- FIG. 23 corresponds to FIG. 21C (the case where the document 160 is separated from the glass surface 126) and shows, for equal-magnification processing, the positional relationship between the image data rMO read out as the reference data at the position Ym (line Ym) in the sub-scanning direction (Y direction) and the image data rME within the search range "−y to +y" of the positional deviation amount ΔYa centered on the line Ym, read out as the comparison data.
- For scaling processing, the positional relationship between the image data rMO read out as the reference data at the position Ym4 (the reading position of the document 160 corresponding to the line Ym in FIG. 22) and the image data rME read out as the comparison data is shown in the same manner.
- In this case, the image data rME read out as the comparison data has the positional deviation amount ΔYa within the search range "−4y to +4y" centered on the line Ym4, that is, it is read from line (Ym4−4y) to line (Ym4+4y).
- That is, the read control unit 142 reads the data rMO stored in the image memory 141 based on the thinning rate M and outputs the reference data MO, and reads the data rME stored in the image memory 141 and outputs the comparison data ME.
- The read control unit 142 reads, from the overlap region dr of the image data DI(Ok) corresponding to the line sensor 121Ok, the peripheral image data centered on the line Ym at one-line intervals as the reference data rMO(Ok, dr, Ym), and reads, from the overlap region dl of the image data DI(Ek) corresponding to the line sensor 121Ek, the lines in the search range "−y to +y" centered on the line Ym and the surrounding image data as the comparison data rME(Ek, dl, Ym).
- Since the read control unit 142 reads the image data of the lines in the search range "−y to +y" centered on the line Ym, the image data (comparison data) rME is wider in the sub-scanning direction (Y direction) than the image data (reference data) rMO.
- Similarly to the reading of the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym) from the overlap region dr of the image data DI(Ok) and the overlap region dl of the image data DI(Ek), the read control unit 142 sequentially reads the reference data rMO(Ok+1, dl, Ym) and the comparison data rME(Ek, dr, Ym) from the overlap region dl of the image data DI(Ok+1) and the overlap region dr of the image data DI(Ek), and the reference data rMO(Ok+1, dr, Ym) and the comparison data rME(Ek+1, dl, Ym) from the overlap region dr of the image data DI(Ok+1) and the overlap region dl of the image data DI(Ek+1).
- In the scaling case, since the read control unit 142 reads the image data of the lines at 4-line intervals within the search range "−4y to +4y" centered on the line Ym4, the image data (comparison data) rME is likewise wider in the sub-scanning direction (Y direction) than the image data (reference data) rMO.
- In this case as well, the read control unit 142 sequentially reads the reference data rMO(Ok+1, dl, Ym4) and the comparison data rME(Ek, dr, Ym4) from the overlap region dl of the image data DI(Ok+1) and the overlap region dr of the image data DI(Ek), and the reference data rMO(Ok+1, dr, Ym4) and the comparison data rME(Ek+1, dl, Ym4) from the overlap region dr of the image data DI(Ok+1) and the overlap region dl of the image data DI(Ek+1).
- Although the positional deviation amount ΔYa is read from within the search range "−4y to +4y", the read control unit 142 reads every four lines according to the thinning rate M, so the number of lines in the comparison data rME is equal to the number of lines in the one-line-unit range of the search range "−y to +y" in which the positional deviation amount ΔY lies.
- FIGS. 26A and 26B and FIGS. 27A and 27B are diagrams for explaining the operation of the read control unit 142 in more detail.
- FIGS. 26A and 27A show the positional relationship between the lines from which the reference data rMO is read with respect to the read reference lines Ym and Ym4 (the lines indicated by grid-like shading (cross-hatching) in FIGS. 26A and 27A) and the lines from which the comparison data rME is read with the positional deviation amount ΔYa (within the search range "−y to +y" in FIG. 26A and within the search range "−4y to +4y" in FIG. 27A; the lines indicated by diagonal hatching in FIGS. 26A and 27A).
- FIGS. 26B and 27B show the positional relationship in the sub-scanning direction (line direction) between the reference data MO and the comparison data ME at the time of output from the read control unit 142.
- As shown in FIGS. 26A and 27A, the read control unit 142 reads the reference data rMO and the comparison data rME at every M lines according to the thinning rate M, using the line Ym and the line Ym4 as reference lines, respectively.
- The reference data rMO and the comparison data rME are usually composed of data of a plurality of pixels in the main scanning direction and the sub-scanning direction. Further, as shown in FIGS. 26B and 27B, the reference data MO and the comparison data ME output from the read control unit 142 are composed of data of a plurality of pixels in the main scanning direction and the sub-scanning direction centered on the detection reference line Y0.
- The positional deviation amount ΔYa here is the positional deviation amount at the time of reading the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym), and takes values at one-line intervals within the search range "−y to +y".
- In the comparison data rME(Ek, dl, Ym), the data whose center position coincides with the reference line Ym is denoted as image data rME(Ek, dl, Ym, 0), the data at the position shifted by one line interval as rME(Ek, dl, Ym, 1), and so on one line interval at a time; the data shifted by y line intervals is denoted as image data rME(Ek, dl, Ym, +y), and the data shifted by y line intervals in the opposite direction as image data rME(Ek, dl, Ym, −y).
- The read control unit 142 reads the image data rME(Ek, dl, Ym, −y) through rME(Ek, dl, Ym, +y) as the comparison data rME(Ek, dl, Ym).
- The one-line interval is a value indicating the width in the sub-scanning direction of a one-line image read by the line sensor, and the M-line interval (M is a positive integer) is a value indicating the width in the sub-scanning direction of M sequentially read lines, expressed as the number of lines arranged in the sub-scanning direction.
- The read control unit 142 outputs the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym) as the reference data MO(Ok, dr) and the comparison data ME(Ek, dl), together with information indicating their positional relationship in the sub-scanning direction.
- The comparison data rME(Ek, dl, Ym) read in this way is the image data of each line centered on the lines within the search range "−y to +y" of the positional deviation amount ΔYa at the time of reading, and the search range of the positional deviation amount ΔY at the time of output is likewise "−y to +y". Therefore, the image data rME(Ek, dl, Ym, ΔYa) in the read comparison data rME(Ek, dl, Ym) is arranged as it is and output as the reference data MO and the comparison data ME for the number of lines in the search range "−y to +y".
- That is, the read control unit 142 outputs the image data rME(Ek, dl, Ym, 0) as ME(Ek, dl, 0), the data whose center position coincides with the detection reference line Y0; outputs the data rME(Ek, dl, Ym, 1) at the position shifted by one line interval as ME(Ek, dl, 1); outputs the data rME(Ek, dl, Ym, +y) shifted by y line intervals as ME(Ek, dl, +y); and outputs the data rME(Ek, dl, Ym, −y) shifted by y line intervals in the opposite direction as ME(Ek, dl, −y).
- In the scaling case, the image data of a width of bh lines centered on the line Ym4 in the sub-scanning direction (read at 4-line intervals from 4×bh lines) is used as the reference data rMO(Ok, dr, Ym4).
- The positional deviation amount ΔYa here is the positional deviation amount at the time of reading the reference data rMO(Ok, dr, Ym4) and the comparison data rME(Ek, dl, Ym4), and takes values at four-line intervals within the search range "−4y to +4y".
- In the comparison data rME(Ek, dl, Ym4), the data whose center position coincides with the reference line Ym4 is denoted as image data rME(Ek, dl, Ym4, 0), the data at the position shifted by four line intervals as rME(Ek, dl, Ym4, 4), and so on four line intervals at a time; the data shifted by 4y line intervals is denoted as image data rME(Ek, dl, Ym4, 4y), and the data shifted by 4y line intervals in the opposite direction as image data rME(Ek, dl, Ym4, −4y).
- The read control unit 142 reads the image data at 4-line intervals from rME(Ek, dl, Ym4, −4y) through rME(Ek, dl, Ym4, 4y) as the comparison data rME(Ek, dl, Ym4).
- The read control unit 142 outputs the reference data rMO(Ok, dr, Ym4) and the comparison data rME(Ek, dl, Ym4) as the reference data MO(Ok, dr) and the comparison data ME(Ek, dl).
- The reference data MO(Ok, dr) is obtained by reading, from the reference data rMO(Ok, dr, Ym4), the image data of a width of bh lines in the sub-scanning direction at every 4 lines in 4×bh lines.
- The comparison data ME(Ek, dl) output from the read control unit 142 is the image data ME(Ek, dl, ΔY) with the positional deviation amount ΔY within the search range "−y to +y".
- The read control unit 142 outputs the image data rME(Ek, dl, Ym4, 0) as ME(Ek, dl, 0); outputs the data rME(Ek, dl, Ym4, 4) at the position shifted by 4 line intervals as ME(Ek, dl, 1), shifted by one line interval; outputs the data rME(Ek, dl, Ym4, 4y) shifted by 4y line intervals as ME(Ek, dl, +y); and outputs the data rME(Ek, dl, Ym4, −4y) shifted by 4y line intervals in the opposite direction as ME(Ek, dl, −y).
- Although the comparison data rME is read from within the search range "−4y to +4y" of the positional deviation amount ΔYa, it is read every four lines according to the thinning rate M, so the lines of the positional deviation amount ΔY in the comparison data can be made the same as the lines in one-line units within the search range "−y to +y".
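The relabeling described above (readout offset ΔYa = 4k becomes output offset ΔY = k) is a division by the thinning rate. A sketch, using a dict keyed by offset as an assumed stand-in for the line data:

```python
# Sketch: relabeling comparison data from readout offsets (ΔYa, at M-line
# intervals) to output offsets (ΔY, in one-line units). The dict-of-offsets
# representation is an illustration, not the embodiment's data structure.

def relabel(rme, m):
    """rme maps ΔYa (multiples of M) to line data; return ME keyed by ΔY."""
    return {dya // m: data for dya, data in rme.items()}

M = 4
rme = {dya: f"line@{dya}" for dya in range(-8, 9, 4)}  # ΔYa in -4y..+4y, y = 2
me = relabel(rme, M)
print(sorted(me))  # -> [-2, -1, 0, 1, 2], i.e. ΔY in "-y to +y"
```

Because every ΔYa is an exact multiple of M, the floor division is exact and the output offsets are consecutive one-line steps, matching the fixed "−y to +y" search range downstream.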
- The read control unit 142 reads the image data at the center of the overlap region and the surrounding image data from the image memory 141 with a width of bh lines in the sub-scanning direction at M-line intervals according to the thinning rate M (M-line intervals within the M×bh lines); the width bh may, however, be only one line (that is, only the center line).
- the read control unit 142 reads out image data at intervals of M lines based on the thinning rate M.
- The read control unit 142 may also read the image data in units of M lines at the thinning rate M and use data averaged in units of M lines as the reference data rMO and the comparison data rME.
- The positional deviation amount ΔYa at the time of reading is within the search range −(M×y) to +(M×y).
- The positional deviation amount ΔY at the time of output can be treated in units of one line within the search range −y to +y.
- By averaging in units of M lines while shifting one line at a time, updating each average from the previous one rather than recomputing it from scratch, the averaging process can be performed efficiently.
- By using the reference data MO(O_k, d_r) and the comparison data ME(E_k, d_l), which carry more accurate information about the surrounding image data, the similarity can be calculated more accurately in the processing of the similarity calculation unit 143.
- The similarity calculation unit 143 in the image processing unit 104 receives from the read control unit 142 the reference data MO and the comparison data ME for positional deviation amounts ΔY within the search range −y to +y.
- The similarity calculation unit 143 compares the reference data MO and the comparison data ME at a plurality of positions within the search range −y to +y of the positional deviation amount ΔY and generates the correlation data D142, which represents the similarity.
- FIGS. 28A and 28B are diagrams for explaining the operation of the similarity calculation unit 143.
- Shown is the case where the reference data MO(O_k, d_r) and the comparison data ME(E_k, d_l) centered on the detection reference line Y0 are input to the similarity calculation unit 143.
- The similarity calculation unit 143 first searches over the positional deviation amount ΔY, centered on the detection reference line Y0, with respect to the input reference data MO(O_k, d_r).
- The positional deviation amount ΔY is the deviation in the sub-scanning direction of the comparison data ME(E_k, d_l) from the reference data MO(O_k, d_r), and lies within the search range −y to +y.
- The similarity calculation unit 143 calculates the similarity between the reference data MO(O_k, d_r) and the image data ME(E_k, d_l, −y) to ME(E_k, d_l, +y) in the comparison data ME(E_k, d_l) as the correlation data D142(O_k, E_k, ΔY).
- For example, the similarity calculation unit 143 calculates, as the similarity, the sum of absolute differences (or the sum of squared differences) for each pixel between the reference data MO(O_k, d_r) and the image data ME(E_k, d_l, −y) to ME(E_k, d_l, +y), and outputs it as the correlation data D142(O_k, E_k, ΔY).
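The sum-of-absolute-differences form of this correlation computation might look as follows (a 1-D sketch with hypothetical names; the real data is 2-D, pixels over the main-scanning position times lines):

```python
def sad(ref, block):
    """Sum of absolute differences per pixel (smaller = more similar)."""
    return sum(abs(a - b) for a, b in zip(ref, block))

def correlation_data(ref, cmp_lines, y):
    """D142-style values: SAD between the reference block and the comparison
    block shifted by each deviation dY in -y..+y. `cmp_lines` carries y extra
    lines of context on each side of the centered block."""
    n = len(ref)
    return {dY: sad(ref, cmp_lines[dY + y: dY + y + n]) for dY in range(-y, y + 1)}

ref = [10, 20, 30]
cmp_lines = [0, 0, 10, 20, 30, 0, 0]     # best alignment at dY = 0
d142 = correlation_data(ref, cmp_lines, y=2)
print(d142[0])  # 0  (exact match at zero deviation)
```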
- The similarity calculation unit 143 sequentially calculates the similarities for the next reference data MO(O_k+1, d_l) and comparison data ME(E_k, d_r), for the reference data MO(O_k+1, d_r) and comparison data ME(E_k+1, d_l), and so on, and outputs the correlation data D142(O_k+1, E_k, ΔY), D142(O_k+1, E_k+1, ΔY), and so on.
- The search range of the positional deviation amount ΔY between the reference data MO and the comparison data ME remains −y to +y, and the number of lines of the image data is not changed by the scaling factor R. The similarity calculation unit 143 therefore need not change either the similarity calculation for detecting the positional deviation amount or the number of correlation data D142 generated, and the circuit scale does not change with the scaling factor.
- Although the similarity calculation unit 143 calculates the similarity (correlation data D142) using image data ME(E_k, d_l, ΔY) shifted at one-line intervals within the search range −y to +y as the positional deviation amount ΔY, it may also obtain positions in the sub-scanning direction between lines (referred to as sub-lines) by interpolation from the image data of the preceding and following lines, calculate the similarity to the reference data MO there, and generate the correlation data D142.
- FIG. 28B shows an example of the values of the correlation data D142(O_k, E_k, ΔY) for the reference data MO(O_k, d_r) and the comparison data ME(E_k, d_l) centered on the detection reference line Y0 in the similarity calculation unit 143.
- The curve indicated by the broken line is the correlation data D142(O_k, E_k, ΔY) corresponding to FIGS. 22 and 24.
- For the correlation data D142(O_k, E_k, ΔY), the positional deviation amounts ΔY are calculated for the same number of lines within the search range −y to +y, and neither the number of correlation data D142 generated nor the processing changes with the scaling factor.
- The shift amount estimation unit 144 receives from the similarity calculation unit 143 the correlation data D142 for the positional deviation amounts ΔY of the plurality of lines within the search range −y to +y, and outputs the positional deviation amount ΔY corresponding to the data with the highest similarity among the correlation data D142 of the plurality of lines to the shift amount enlargement unit 145 as the shift amount data d_sh.
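Selecting the best deviation then reduces to a simple argmin/argmax over the correlation values (a sketch with hypothetical names; with SAD or SSD values, "highest similarity" means the smallest entry):

```python
def estimate_shift(d142):
    """Return the deviation dY with the highest similarity; for SAD/SSD
    correlation values, that is the smallest entry."""
    return min(d142, key=d142.get)

d142 = {-2: 50, -1: 30, 0: 5, 1: 30, 2: 60}   # toy correlation data
d_sh = estimate_shift(d142)
print(d_sh)  # 0
```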
- The shift amount enlargement unit 145 in the image processing unit 104 receives the thinning rate M set based on the scaling factor from the controller 107 and receives the shift amount data d_sh from the shift amount estimation unit 144. Based on the thinning rate M, the shift amount enlargement unit 145 multiplies the value of the shift amount data d_sh obtained at the position Y_m (or Y_m4) in the sub-scanning direction (Y direction) by M, expanding it to (M×d_sh), repeatedly inserts the expanded data (M×d_sh) between the line intervals indicated by the thinning rate M (for example, between line Y_m and line Y_m+M) so as to interpolate (enlarge) it in the sub-scanning direction, and converts and outputs it as the enlarged shift amount data Δy_b.
- The shift amount data d_sh input for each line Y_m is output as the enlarged shift amount data Δy_b as it is.
- When the shift amount enlargement unit 145 performs a scaling process with a thinning rate M (M ≥ 2), the shift amount data d_sh is input at every M-line interval, and since the search range −(M×y) to +(M×y) has been thinned at M-line intervals (to 1/M of its line values), the shift amount data d_sh is multiplied by M according to the thinning rate M to give (M×d_sh), and the shift amount data expanded to (M×d_sh) is repeatedly inserted between M-line intervals (for example, between lines Y_m and Y_m+M) and enlarged in the sub-scanning direction.
- The enlarged and interpolated enlarged shift amount data Δy_b becomes the positional deviation amount in the sub-scanning direction for each line used to combine the image data, and this enlarged shift amount data Δy_b is sent to the combination processing unit 146.
- Since the shift amount enlargement unit 145 expands the value of the shift amount data d_sh by M times to (M×d_sh) according to the thinning rate M and inserts the (M×d_sh) shift amount data at M-line intervals, the enlarged shift amount data Δy_b is obtained as a value indicating the positional deviation amount corresponding to the line range changed by the scaling factor R. Shift amount data corresponding to values in the sub-scanning direction according to the scaling factor can thus be obtained accurately without increasing the circuit scale, for example without adding line memory to expand the search range of the positional deviation amount.
- Although here the value of the shift amount data d_sh is first multiplied by M to (M×d_sh) and then repeatedly inserted during the M-line interval and interpolated in the sub-scanning direction, the shift amount data d_sh may instead first be repeatedly inserted during the M-line interval and interpolated and enlarged in the sub-scanning direction, and then multiplied by M to (M×d_sh) and converted into the enlarged shift amount data Δy_b; the same effect as with the shift amount enlargement unit 145 described above is obtained.
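The multiply-then-repeat enlargement can be sketched as follows (hypothetical names; one d_sh value per M-line interval becomes one (M×d_sh) value per line):

```python
def enlarge_shift_data(d_sh_list, M):
    """Multiply each shift value by M and repeat it M times, turning one value
    per M-line interval into one value per line (nearest-neighbour insertion)."""
    out = []
    for d in d_sh_list:
        out.extend([M * d] * M)
    return out

print(enlarge_shift_data([1, 2], 4))  # [4, 4, 4, 4, 8, 8, 8, 8]
```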
- FIG. 30 is a diagram for explaining the operation in which the value of the shift amount data d_sh is multiplied by M based on the thinning rate M in the shift amount enlargement unit 145, expanded as (M×d_sh), and inserted between the line intervals indicated by the thinning rate M.
- FIG. 29 is a diagram showing the values (vertical axis) of the enlarged shift amount data Δy_b converted from the shift amount data d_sh for each position Y_m (horizontal axis) in the sub-scanning direction in the same-magnification process, for the case where the shift amount data d_sh is directly converted into the enlarged shift amount data Δy_b for each line and output.
- The black circles in FIG. 30 indicate the expanded shift amount data (4×d_sh) for the lines from which the shift amount data d_sh, input at every four line intervals, was obtained.
- Since the enlargement requires only the simple operation of multiplying by M, no interpolation by filter operations such as line interpolation over several lines is needed, and the data can be converted into the enlarged shift amount data Δy_b by processing with a small circuit scale.
- Although the shift amount enlargement unit 145 of FIG. 30 repeatedly inserts the shift amount data set to (M×d_sh) between the line intervals indicated by the thinning rate M (for example, between lines Y_m and Y_m+M), it may instead convert the data into enlarged shift amount data Δy_b interpolated (enlarged) in the sub-scanning direction so as to vary smoothly from line to line; interpolation such as linear interpolation or averaging between the line intervals indicated by the thinning rate M can likewise convert the data into the enlarged shift amount data Δy_b with small-circuit-scale processing.
- The combination processing unit 146 in the image processing unit 104 receives the enlarged shift amount data Δy_b output from the shift amount enlargement unit 145.
- The combination processing unit 146 calculates the read position RP of the image data based on the enlarged shift amount data Δy_b, reads the image data M146 corresponding to the read position, performs a process of moving the image data in the sub-scanning direction based on the enlarged shift amount data Δy_b, and combines the data in the overlap regions of the image data to generate combined image data.
- FIG. 32 and FIGS. 33A and 33B are diagrams for explaining the operation of the combination processing unit 146. As shown in FIG. 32, only the data corresponding to the even-numbered line sensors 121E_1, ..., 121E_k, ..., 121E_n may be shifted in the sub-scanning direction by an amount corresponding to the enlarged shift amount data Δy_b; alternatively, the data corresponding to the odd-numbered line sensors 121O_1, ..., 121O_k, ..., 121O_n and the data corresponding to the even-numbered line sensors 121E_1, ..., 121E_k, ..., 121E_n may both be shifted in the sub-scanning direction so that the two shift amounts together correspond to the enlarged shift amount data Δy_b.
- The combination processing unit 146 calculates the read position RP of the image data based on the enlarged shift amount data Δy_b and reads the image data M146 corresponding to the read position from the image memory 141.
- Since the image data read position RP is calculated based on the enlarged shift amount data Δy_b, the read position corresponding to each line according to the scaling factor can be obtained even when the scaling process is performed. At this point, without thinning or the like, the image data D146 corresponding to the enlarged shift amount data Δy_b is generated for each line in the sub-scanning direction corresponding to the scaling factor, using the image data of that position and of the surrounding lines, so the combined image does not deteriorate.
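Reading at a shifted position, interpolating between the two neighbouring lines when the read position RP falls between lines, can be sketched as follows (hypothetical names; one value per line stands in for a whole line of pixels):

```python
def read_shifted(lines, y, dy_b):
    """Value for output line y shifted by dy_b in the sub-scanning direction;
    the read position RP = y + dy_b is linearly interpolated from the two
    neighbouring lines when it falls between lines."""
    rp = y + dy_b
    lo = int(rp)          # index of the line at or just below RP (rp >= 0 assumed)
    frac = rp - lo
    if frac == 0:
        return lines[lo]
    return lines[lo] * (1 - frac) + lines[lo + 1] * frac

lines = [0, 10, 20, 30]
print(read_shifted(lines, 1, 0.5))  # 15.0
```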
- In the image processing unit 104, the read control unit 142, taking the position Y_m in the sub-scanning direction (Y direction) as the reference line, reads at every M lines, according to the thinning rate M, the image data in the overlap regions corresponding to the odd-numbered line sensors 121O_1, ..., 121O_k, ..., 121O_n and the image data in the overlap regions corresponding to the even-numbered line sensors 121E_1, ..., 121E_k, ..., 121E_n, and outputs the reference data MO and the comparison data ME of a predetermined number of lines. The similarity calculation unit 143 compares the reference data MO and the comparison data ME to calculate the similarity data (correlation data) D143; the shift amount estimation unit 144 calculates, as the shift amount data d_sh, the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity (largest correlation) for the detection reference line Y0; the shift amount enlargement unit 145, based on the thinning rate M, enlarges the value of the shift amount data d_sh, repeatedly inserts it in the sub-scanning direction, and interpolates and converts it into the enlarged shift amount data Δy_b; and the combination processing unit 146 reads out the image data based on the enlarged shift amount data Δy_b and outputs the combined image data D146. This is performed sequentially in the sub-scanning direction (Y direction).
- The image data of the odd-numbered line sensors 121O_1, ..., 121O_k, ..., 121O_n and of the even-numbered line sensors 121E_1, ..., 121E_k, ..., 121E_n are shifted in the sub-scanning direction (Y direction), and image data of the same image as the original 160 is obtained, as shown in FIG. 34.
- FIG. 34 is also an explanatory diagram conceptually showing the image data D146 after the combination process output from the combination processing unit 146.
- The displacement of the sensors 121E_1, ..., 121E_k, ..., 121E_n in the sub-scanning direction is corrected, and the read overlap-region images are combined.
- FIGS. 35A and 35B are diagrams illustrating an example in which the position of the document 160 with respect to the glass surface 126 changes while the imaging unit 102 is being conveyed.
- At the position Y_m in the sub-scanning direction (Y direction), the document 160 is lifted off the glass surface 126, and at the position Y_u, the document 160 is in close contact with the glass surface 126. Since the imaging unit 102 sequentially processes the image data at each position in the sub-scanning direction (Y direction), even if the position of the document 160 changes during conveyance, the image processing unit 104 calculates the shift amount at each position, and the images can be combined correctly. Even in the case of the scaling process, the image data in the overlap region is extracted by thinning out lines based on the thinning rate M set according to the scaling factor in order to calculate the positional deviation amount, and the read position corresponding to the scaling factor at that position is obtained by enlarging the obtained positional deviation amount, so the images can be combined.
- Since the shift amount is calculated individually for all of the odd-numbered line sensors 121O_1, ..., 121O_n and the even-numbered line sensors 121E_1, ..., 121E_k, ..., 121E_n, even if the position of the document 160 with respect to the glass surface 126 varies along the main scanning direction (X direction), the images can be combined correctly.
- The read control unit 142 reads the data in the overlap regions of the image data read by the first-row line sensors and the second-row line sensors, thinning out the positions (lines) in the sub-scanning direction to be read at every M lines with the thinning rate M set according to the scaling factor, and, taking the position Y_m in the sub-scanning direction (Y direction) as the reference line, outputs the reference data MO and the comparison data ME of a predetermined number of lines. The similarity calculation unit 143 compares the reference data MO and the comparison data ME to calculate the similarity data (correlation data) D143; the shift amount estimation unit 144 calculates, as the shift amount data d_sh, the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity; the shift amount enlargement unit 145, based on the thinning rate M, enlarges the value of the shift amount data d_sh and converts it into the enlarged shift amount data Δy_b, repeatedly inserted and interpolated in the sub-scanning direction; and the positions of the divided images are shifted in the sub-scanning direction based on the enlarged shift amount data Δy_b to generate combined image data.
- The data in the overlap region is read at every M lines with the thinning rate M, so the reference data MO and the comparison data ME, with a number of lines in the search range −y to +y that is not changed by the scaling factor, can be obtained; without expanding the search range of the positional deviation and without changing, according to the scaling factor, the number of similarity data generated or the processing, the shift amount data d_sh representing the shift amount can be calculated, and by enlarging and interpolating the shift amount data d_sh, a positional deviation amount corresponding to the line range changed based on the scaling factor is obtained, so that combined image data can be generated.
- Even in the scaling process, since the detection range of the positional deviation amount in the sub-scanning direction is not expanded according to the scaling factor, the positional deviation amount in the sub-scanning direction between the image data can be obtained accurately without increasing the circuit scale, and high-quality combined image data corresponding to the object being read can be generated.
- Since the thinning rate M set according to the scaling factor is an integer of 1 or more close to the scaling factor R, which is the reading magnification, the thinning processing in the read control unit 142 and the enlargement processing in the shift amount enlargement unit 145 need not be performed by enlargement/reduction using filter operations such as line interpolation, and can be performed with small-circuit-scale processing.
- Embodiment 5. (5-1) Configuration of Embodiment 5
- As in the fourth embodiment, the thinning rate M set based on the scaling factor is sent, in the scaling process, to the read control unit 142 and the shift amount enlargement unit 145 in the image processing unit 104; the image data in the overlap region is read from the image memory 141 based on the thinning rate M, the shift amount data is enlarged based on the thinning rate M and converted into enlarged shift amount data by interpolation, and image data of the combined image is generated using the obtained enlarged shift amount data.
- In addition to the thinning rate M set based on the scaling factor, the image reading apparatus 101a acquires the search range excluded line number LMT, obtained from the scaling factor and the thinning rate M, which limits the search range of the positional deviation amount; the image data in the overlap region is read from the image memory 141 based on the thinning rate M, the shift amount data is enlarged and converted into enlarged shift amount data by interpolation, and the shift amount data d_sh representing the positional deviation amount can also be calculated with the search range of the positional deviation amount used at shift amount estimation limited (reduced) by the search range excluded line number LMT.
- FIG. 36 is a functional block diagram schematically showing the configuration of the image reading apparatus 101a according to the fifth embodiment of the present invention.
- constituent elements that are the same as or correspond to those shown in FIG. 17 (Embodiment 4) are assigned the same reference numerals.
- The image reading apparatus 101a according to the fifth embodiment differs from the image reading apparatus 101 according to the fourth embodiment in the configuration and operation of the image processing unit 104a and the controller 107a; in other respects it is the same as the image reading apparatus according to the fourth embodiment. As shown in FIG. 36, the controller 107a in the fifth embodiment sends the thinning rate M and the search range excluded line number LMT, set based on the scaling factor designated by the user or the like, to the image processing unit 104a.
- The image processing unit 104a uses the received thinning rate M and search range excluded line number LMT to perform the scaling process and the detection of the positional deviation amount in the sub-scanning direction, and combines the image data using the detected positional deviation amount.
- The image reading apparatus 101a includes the imaging unit 102, the A/D conversion unit 103, the image processing unit 104a, and the controller 107a, which controls the imaging unit 102 and the image processing unit 104a.
- The image processing unit 104a is an image processing apparatus according to the fifth embodiment (an apparatus that can perform the image processing method according to the fifth embodiment), and includes the image memory 141, the read control unit 142, the similarity calculation unit 143, a shift amount estimation unit 144a, the shift amount enlargement unit 145, and the combination processing unit 146.
- The shift amount estimation unit 144a calculates the shift amount data using the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity (highest correlation), based on the search range excluded line number LMT.
- The configurations and operations of the other components, for example the imaging unit 102, the A/D conversion unit 103, the image memory 141, the read control unit 142, the similarity calculation unit 143, the shift amount enlargement unit 145, and the combination processing unit 146, are the same as those shown in the fourth embodiment.
- The controller 107a of the image reading apparatus 101a in FIG. 36 receives setting information or instruction information, such as a scaling factor specified by the user, sends it to the image processing unit 104a, and controls the imaging unit 102 and the image processing unit 104a.
- The controller 107a sends setting information for setting the reading magnification (scaling factor) R to the imaging unit 102, and sends the thinning rate M set based on the scaling factor to the image processing unit 104a.
- the controller 107a sets the search range excluded line number LMT based on the scaling factor, and sends the search range excluded line number LMT to the image processing unit 104a.
- the controller 107a sets, as the thinning rate M, an integer of 1 or more close to the scaling factor R that is the reading magnification.
- The search range excluded line number LMT specifies the number of lines that limits the search range of the positional deviation.
- The search range excluded line number LMT is a value that can limit the search range of the positional deviation amount when estimating the shift amount; it may be a number of lines such that similarity values outside the range are invalidated, or a signal indicating invalidity or validity at each position.
- When the scaling factor R is 0.8, the thinning rate M is 1; the positions in the sub-scanning direction lying outside the required search range then follow from the difference between the scaling factor R and the thinning rate M.
- a document 160 is read by the imaging unit 102, converted into digital data (image data) DI by the A / D conversion unit 103, input to the image processing unit 104a, and stored in the image memory 141 of the image processing unit 104a.
- The read control unit 142 in the image processing unit 104a reads the data in the overlap regions, thinning out the positions (lines) in the sub-scanning direction to be read at every M lines with the thinning rate M set based on the scaling factor, and outputs the reference data MO and the comparison data ME; the similarity calculation unit 143 compares the reference data MO and the comparison data ME to calculate the similarity data (correlation data) D143.
- The configuration and operation described so far for the fifth embodiment are the same as those of the fourth embodiment shown in FIG. 17.
- The shift amount estimation unit 144a receives from the similarity calculation unit 143 the correlation data D142, that is, the similarity data for the positional deviation amounts ΔY of the plurality of lines within the search range −y to +y, and receives from the controller 107a the search range excluded line number LMT set based on the scaling factor.
- the shift amount estimation unit 144a calculates a positional deviation amount ⁇ Y corresponding to the data with the highest similarity among the correlation data D142 of the plurality of lines.
- The shift amount estimation unit 144a limits (reduces) the search range in which the data with the highest similarity is determined, based on the search range excluded line number LMT, and outputs the positional deviation amount ΔY corresponding to the data with the highest similarity within the limited (reduced) search range to the shift amount enlargement unit 145 as the shift amount data d_sh.
- When the thinning rate M contains the scaling factor R, that is, when it is set to an integer value close to the scaling factor, the positional deviation amounts ΔY of the plurality of lines within the search range −y to +y from the similarity calculation unit 143 are data in which the original positional deviation amount ΔYa within −(M×y) to +(M×y) has been read at M-line intervals; if the thinning rate M is larger than the scaling factor R, the search range has an excess (that is, it is too wide), and an excessive positional deviation amount may be detected.
- For example, with a thinning rate M of 2, the positional deviation amount ΔYa is read at two-line intervals as the search range −2y to +2y.
- Compared with the search range −1.5y to +1.5y that corresponds to the actual scaling factor, the search range −2y to +2y contains lines outside the search range −1.5y to +1.5y.
- The controller 107a sets a value indicating the region corresponding to the 0.5y lines outside the search range −1.5y to +1.5y as the search range excluded line number LMT and supplies it to the shift amount estimation unit 144a.
- The shift amount estimation unit 144a invalidates part of the search range so that the correlation data D142 at the positional deviation amounts indicated by the search range excluded line number LMT is not included, and within the lines limited to the range −(y−LMT) to +(y−LMT), takes the positional deviation amount ΔY corresponding to the data with the highest similarity as the shift amount data d_sh.
- The search range for ΔY is limited, as indicated by the broken-line curve, to the lines within the range −(y−LMT) to +(y−LMT) inside the search range −y to +y.
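Limiting the argmin to the range −(y−LMT) to +(y−LMT) can be sketched as follows (hypothetical names; toy SAD-style values where smaller means more similar):

```python
def estimate_shift_limited(d142, y, lmt):
    """Best deviation dY with the outer `lmt` lines on each side of the search
    range -y..+y invalidated; with SAD-style values, best = smallest."""
    valid = {dY: v for dY, v in d142.items() if -(y - lmt) <= dY <= (y - lmt)}
    return min(valid, key=valid.get)

# A spurious best match at dY = -2 is excluded by the limited range.
d142 = {-2: 1, -1: 30, 0: 5, 1: 30, 2: 60}
print(estimate_shift_limited(d142, y=2, lmt=1))  # 0
```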
- The shift amount enlargement unit 145 enlarges and interpolates the shift amount data d_sh to convert it into the enlarged shift amount data Δy_b, and the combination processing unit 146 combines the image data based on the enlarged shift amount data Δy_b.
- (5-3) Effects of Embodiment 5
- The data in the overlap region is read at every M lines with the thinning rate M, so the reference data MO and the comparison data ME, with a number of lines in the search range −y to +y that is not changed by the scaling factor, can be obtained; the shift amount data d_sh can be calculated without expanding the search range of the positional deviation amount and without changing, based on the scaling factor, the number of similarity data generated or the processing. Even if the search range is excessive, the search range in which the data with the highest similarity is determined is limited (reduced) based on the search range excluded line number LMT, so the shift amount data d_sh representing the positional deviation can be calculated within an appropriate range; by enlarging and interpolating the shift amount data d_sh, a value indicating the positional deviation amount corresponding to the line range changed based on the scaling factor is obtained, and combined image data can be generated. Therefore, according to the image reading apparatus 101a, the image processing apparatus 104a, and the image processing method of the fifth embodiment, even in the scaling process the detection range of the positional deviation amount in the sub-scanning direction is not expanded according to the scaling factor, so the positional deviation amount in the sub-scanning direction between the image data can be obtained accurately without increasing the circuit scale, and high-quality combined image data corresponding to the object being read can be generated.
- Embodiment 6. (6-1) Configuration of Embodiment 6
- The functions of the image reading apparatuses 101 and 101a according to Embodiments 4 and 5 can be realized by a hardware configuration. However, some of the functions of the image reading apparatuses 101 and 101a may be realized by a computer program executed by a microprocessor including a CPU. When part of the functions of the image reading apparatuses 101 and 101a is realized by a computer program, the microprocessor acquires the computer program by reading program data stored on a computer-readable information storage medium or by downloading it via communication means such as the Internet, and executes processing according to the acquired computer program.
- FIG. 38 is a functional block diagram showing the configuration of the image reading apparatus 101b according to the sixth embodiment, in which part of its functions is realized by a computer program.
- the image reading apparatus 101b according to the sixth embodiment includes an imaging unit 102, an A / D conversion unit 103, and an arithmetic device 105.
- The arithmetic device 105 includes a processor (for example, a microprocessor) 151 including a CPU, a RAM 152 as a volatile memory, a nonvolatile memory 153, a mass storage medium 154 as an information recording medium, and a bus 155 connecting the above-described components 151 to 154.
- the non-volatile memory 153 is, for example, a flash memory.
- the mass storage medium 154 is, for example, a hard disk (magnetic disk), an optical disk, or a semiconductor storage device.
- the A / D converter 103 shown in FIG. 38 has the same function as the A / D converter 103 shown in FIGS. 17 and 36, and converts the electrical signal SI output from the imaging unit 102 into digital data DI.
- the processor 151 stores the supplied digital data in the RAM 152.
- The processor 151 loads a computer program from the nonvolatile memory 153 or the mass storage medium 154, sets the thinning rate M determined based on the scaling factor (or the thinning rate M and the search range excluded line number LMT), and executes image processing.
- In this way, image processing similar to that performed by the image processing units 104 and 104a shown in FIGS. 17 and 36 can be realized.
- FIG. 39 is a flowchart schematically showing an example of processing by the arithmetic unit 105 of the sixth embodiment.
- The processor 151 first executes a read control process that, using the thinning rate M set based on the scaling factor (or using the thinning rate M and the search range excluded line number LMT), reads the data of the overlap region and extracts the reference data MO and the comparison data ME with a predetermined number of lines (step S101).
- Next, a similarity calculation process that compares the reference data MO with the comparison data ME is executed (step S102).
- The processor 151 then executes a shift amount estimation process (step S103) and a shift amount enlargement process that enlarges and interpolates the shift amount data using the thinning rate M (or using the thinning rate M and the search range excluded line number LMT) (step S104).
- Finally, the processor 151 executes a combination process (step S105). Note that the processing in steps S101 to S105 by the arithmetic device 105 is the same as the processing performed by the image processing unit 104 in the fourth embodiment or by the image processing unit 104a in the fifth embodiment.
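The flow of steps S102 to S104 above can be sketched in code. This is an illustrative outline only: the function names are ours, and the use of a sum-of-absolute-differences similarity measure is an assumption, since the text does not fix the measure here.

```python
# Illustrative sketch of steps S102-S104 (similarity measure assumed: SAD).
def estimate_shift(reference, comparisons):
    # S102: similarity between reference data MO and each comparison data ME
    # (smaller sum of absolute differences = higher similarity).
    sims = [sum(abs(r - c) for r, c in zip(reference, comp))
            for comp in comparisons]
    # S103: shift amount = sub-scanning position of the best-matching data.
    return min(range(len(sims)), key=lambda i: sims[i])

def enlarge_shift(shift, thinning_rate_m):
    # S104: expand the shift amount using the thinning rate M.
    return shift * thinning_rate_m

reference = [10, 20, 30]
comparisons = [[0, 0, 0], [10, 20, 30], [50, 50, 50]]  # candidate positions
shift = estimate_shift(reference, comparisons)          # best match: index 1
print(enlarge_shift(shift, 4))                          # 4 (with M = 4)
```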
- As described above, the shift amount data d_sh can be calculated without changing, according to the magnification ratio, the number of similarity data generated or the processing that generates them by expanding the search range of the positional deviation amount; by enlarging and interpolating the shift amount data d_sh, the positional deviation amount corresponding to the line range that changes with the magnification can be obtained, and combined image data can be generated.
- Therefore, according to the image reading apparatus 101b of the sixth embodiment, even in scaling processing, the positional deviation amount in the sub-scanning direction between the image data can be obtained accurately without increasing the circuit scale and without expanding the detection range of the positional deviation amount in the sub-scanning direction according to the scaling ratio, and high-quality composite image data corresponding to the object to be read can be generated.
- Embodiment 7
- 7-1 Configuration of Embodiment 7
- In the seventh embodiment, a case will be described in which the line sensors are installed so that the optical axis 128O of the line sensors 121O1, ..., 121Ok, ..., 121On located at the odd-numbered positions counted from one end (for example, the left end) of the sensor substrate 120 does not intersect the optical axis 128E of the line sensors 121E1, ..., 121Ek, ..., 121En located at the even-numbered positions, the two axes being parallel.
- An optical system such as a lens for forming an image of the document on the line sensors may be provided between the line sensors 121O1, ..., 121Ok, ..., 121On and the glass surface 126, and between the line sensors 121E1, ..., 121Ek, ..., 121En and the glass surface 126.
- the direction of the optical axis 128O and the direction of the optical axis 128E can also be set by an optical system provided between the line sensor and the glass surface 126.
- FIG. 17 is also referred to in the description of the seventh embodiment.
- FIGS. 40(a) and 40(b) show how the document 160, the object to be read, is read when the optical axis 128O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On of the imaging unit 102 of the image reading apparatus according to Embodiment 7 and the optical axis 128E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are parallel to each other.
- FIG. 40A shows a case where the document 160 is in close contact with the glass surface 126, which is the document table mounting surface, and FIG. 40B shows a case where the document 160 is slightly lifted away from the glass surface 126.
- Whether the document 160 is in close contact with the glass surface 126 as shown in FIG. 40A or lifted away from the glass surface 126 as shown in FIG. 40B, the image of the document 160 read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On hardly changes; similarly, the image of the document 160 read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En hardly changes.
- The odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On obtain image data of the same position temporally later than the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, and the shift amount YL in the sub-scanning direction between the positions of the images read by the two rows can be obtained as a substantially constant amount.
- The positional deviation amount α is a value that includes a deviation caused by misalignment of the optical axis 128O or the optical axis 128E due to line sensor attachment error, and a deviation based on temporal fluctuation (that is, speed fluctuation) of the conveyance speed of the imaging unit 102 in the sub-scanning direction.
- FIG. 41 shows the image data DI(Ok) and DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok and 121Ok+1 when the document is read, and the image data DI(Ek) and DI(Ek+1) corresponding to the even-numbered line sensors 121Ek and 121Ek+1.
- The image data DI(Ok) and DI(Ok+1) and the corresponding image data DI(Ek) and DI(Ek+1) are displaced from each other in the sub-scanning direction (line) by a distance (YL + α), obtained by adding the positional deviation amount α to the substantially constant amount YL.
- Therefore, by performing a shift process of the substantially constant amount YL in the sub-scanning direction on one, or both, of the image read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the image read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, the positional deviation in the sub-scanning direction between the two images is decreased.
- For example, when the image data is written to the image memory 141, the position (line) in the sub-scanning direction is shifted by a distance equal to the substantially constant amount YL.
- When scaling processing is performed, the image is enlarged or reduced by the magnification ratio R, and the predetermined amount YL and the positional deviation amount α are also enlarged or reduced by the factor R in the sub-scanning direction. Therefore, instead of shifting by the predetermined amount YL when writing the image data to the image memory 141, the image processing unit 104 shifts by (R × YL) lines, the number of lines obtained by multiplying YL by the magnification R.
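As a numeric sketch of this write-time shift (the function and variable names are ours, not the patent's):

```python
def write_shift_lines(y_l, magnification_r):
    # At magnification R, the substantially constant sub-scanning offset YL
    # between the two sensor rows scales to (R x YL) lines.
    return magnification_r * y_l

print(write_shift_lines(8, 1))  # 8 lines at 100% (R = 1)
print(write_shift_lines(8, 2))  # 16 lines at 200% (R = 2)
```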
- Alternatively, the read control unit 142 and the connection processing unit 146 may be configured to determine the read position by adding the predetermined amount YL as an offset to the position to be read when reading the image data.
- After that, the positional deviation amount α is obtained and the images are combined: the image data of the overlap region is read from the image memory 141, the reference data is compared with the comparison data to calculate the similarity, the shift amount data is calculated from the sub-scanning position of the comparison data with the highest similarity, enlarged shift amount data is obtained by enlarging the shift amount data in the sub-scanning direction based on the thinning rate M, and the image data read from the image memory 141 is combined using the enlarged shift amount data.
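The enlargement of the shift amount data based on the thinning rate M can be sketched as follows. Linear interpolation between the thinned samples is our assumption; the text only states that the shift amount data is enlarged and interpolated.

```python
def enlarge_shift_data(thinned, m):
    # Expand shift data computed on every M-th line back to full line
    # resolution by linearly interpolating between the thinned samples.
    full = []
    for a, b in zip(thinned, thinned[1:]):
        for i in range(m):
            full.append(a + (b - a) * i / m)
    full.append(thinned[-1])
    return full

# Shift values sampled every M = 2 lines, expanded to per-line values.
print(enlarge_shift_data([0, 2, 4], 2))  # [0.0, 1.0, 2.0, 3.0, 4]
```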
- the configuration and operation for performing such processing are the same as those in the fourth embodiment.
- 7-3 Effects of Embodiment 7
- According to the image reading apparatus of the seventh embodiment, even when there is temporal fluctuation (that is, speed fluctuation) in the speed of the transport mechanism that moves the document 160 or the imaging unit 102, or both, in the sub-scanning direction, the positional deviation amount in the sub-scanning direction between the image data can be obtained accurately according to the magnification ratio, as in the fourth embodiment, and combined image data can be generated by accurately combining the image data.
- the image processing apparatus, image processing method, image reading apparatus, and program according to the present invention can be applied to information processing apparatuses such as a copying machine, a scanner, a facsimile, and a personal computer.
- 1, 1a image reading apparatus, 2 imaging unit, 3 A/D conversion unit, 4 image processing unit (image processing apparatus), 5 arithmetic unit, 20 sensor substrate, 21O1, ..., 21Ok, ..., 21On odd-numbered line sensors, 21E1, ..., 21Ek, ..., 21En even-numbered line sensors, 22 spacing between odd-numbered adjacent line sensors, 23 spacing between even-numbered adjacent line sensors, 25.
Abstract
This image processing device (1) has: a similarity calculation unit (42) that calculates the similarity between comparison data and baseline data in an overlap region using first image data generated by a first column line sensor group and second image data generated by a second column line sensor group; a shift amount estimation unit (43) that calculates shift amount data on the basis of the position in the sub-scanning direction of comparison data having the highest similarity; and a linking processing unit (44) that links the first image data and second image data by altering the linking position in the sub-scanning direction on the basis of the shift amount data.
Description
The present invention relates to an image processing apparatus, an image processing method, and an image reading apparatus that combine image data obtained by scanning an object to be read with a plurality of line sensors to generate composite image data corresponding to the object to be read, and to a program that causes a computer to execute the process of combining the image data.
Image reading apparatuses that scan an object to be read with line sensors (one-dimensional imaging elements), each having a plurality of imaging elements arranged in a line in the main scanning direction, to generate image data corresponding to the object are widely used in copying machines, scanners, facsimiles, and the like. Typical examples are the contact image sensor type, in which the line sensor is brought into close contact with the object, and the reduction transfer type, in which a reduction optical system generates a reduced image of the object that is then read by the line sensor. The contact image sensor type has the advantage of being easy to downsize but the disadvantage of a shallow depth of focus; the reduction transfer type has the advantage of a deep depth of focus but the disadvantage of being difficult to downsize.
An image reading apparatus that is easy to downsize and has a deep depth of focus has therefore been proposed in, for example, Patent Document 1 (Japanese Patent Application Laid-Open No. 2009-246623). This image reading apparatus includes a first row line sensor group and a second row line sensor group, each consisting of a plurality of line sensors arranged in a line at intervals in the main scanning direction, and a telecentric optical system that forms an erect image on each of the first row and second row line sensor groups.
From the plurality of image data generated by scanning with the first row line sensors belonging to the first row line sensor group and the second row line sensors belonging to the second row line sensor group, this image reading apparatus obtains the amount of positional deviation in the sub-scanning direction between the image data of the first row line sensors and that of the second row line sensors, using the image read redundantly by both rows, and shifts the position of the image data in the sub-scanning direction according to the obtained positional deviation amount, thereby generating composite image data corresponding to the object to be read.
However, the image reading apparatus of Patent Document 1 performs the following processing when generating the composite image data.
(Process 1) First, using as a reference the sub-scanning position of the divided image indicated by the image data generated by the first line sensor counted from one end (for example, the left end) of the first row line sensor group, the sub-scanning position of the divided image indicated by the image data generated by the first line sensor from the left of the second row line sensor group is shifted.
(Process 2) Next, using as a reference the divided image indicated by the image data generated by the first line sensor from the left of the second row line sensor group, the sub-scanning position of the divided image indicated by the image data generated by the second line sensor from the left of the first row line sensor group is shifted.
(Process 3) Thereafter, the line sensor to be processed is moved one position to the right at a time, and the same processing as Processes 1 and 2 is repeated.
When Processes 1 to 3 are performed in this order, each time the line sensor to be processed moves toward the other end (for example, the right end), the sub-scanning positional deviations between the divided images generated by adjacent line sensors accumulate, and it can become difficult to align the image data in the sub-scanning direction at the other end (for example, the right end) of the line sensor row.
There are also image reading apparatuses that change the document reading magnification (hereinafter also called the "variable magnification") according to the copy magnification and read the image while enlarging or reducing it (hereinafter also called "performing scaling processing"), as when the scanning speed in the sub-scanning direction (sub-scanning speed) changes in a copying machine with a reduction/enlargement function (see, for example, Patent Document 2). When scaling processing is performed, the number of pixels read in the sub-scanning direction (that is, the number of read lines read by the line sensor) is scaled (that is, enlarged or reduced) to the number of lines at a magnification of 1 (that is, 100%) multiplied by the magnification R. A common way to change the magnification from 1 to R is to keep the reading cycle of the line sensor (its drive timing) constant while changing the relative movement speed between the document and the line sensor (or between the document and the optical system) to 1/R of the relative movement speed at a magnification of 1.
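The relationship above can be illustrated with a small worked example (our own illustration, not from the patent): with the line sensor's read cycle fixed, moving at relative speed 1/R takes R times as many line periods, so the line count scales by R.

```python
def lines_read(lines_at_unity, magnification_r):
    # Fixed read cycle + relative speed 1/R  ->  R times as many read lines.
    return int(lines_at_unity * magnification_r)

print(lines_read(1000, 1.0))  # 1000 lines at 100%
print(lines_read(1000, 2.0))  # 2000 lines at 200%
```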
When scaling processing is performed, the number of lines (hereinafter also "reading lines") read by the first row line sensors and the second row line sensors is changed according to the magnification. The number of lines read by the line sensor when the magnification R is R0 (R = R0) is R0 times the number of lines read when R is 1 (R = 1). Therefore, when the image reading apparatus of Patent Document 1 performs enlargement scaling processing, the amount of positional deviation (number of lines) in the sub-scanning direction between the image data of the image read by the first row line sensors and that read by the second row line sensors increases according to the magnification R.
When obtaining the amount of positional deviation in the sub-scanning direction between the image data of the image read by the first row line sensors and that read by the second row line sensors, processing is performed to detect the sub-scanning position at which both rows of line sensors read the same image (that is, the image with the highest similarity) in the overlap region, within the range from the reference sub-scanning position to the farthest position at which positional deviation is detected, that is, within the search range for detecting the positional deviation amount. However, in enlargement scaling processing with a magnification R greater than 1, if the number of lines included in this search range is kept the same as the number of lines in the search range at a magnification of 1, the search range is too narrow and the following problem can occur: the sub-scanning position at which the image data of the two rows match in the overlap region is not detected within the search range, and the positional deviation amount (number of lines) in the sub-scanning direction between the image data cannot be obtained accurately.
Therefore, to detect the positional deviation amount in the sub-scanning direction accurately in enlargement scaling processing, it is desirable to widen the search range for detecting the positional deviation amount (number of lines) according to the magnification, that is, to increase the number of lines.
However, widening the search range for positional deviation detection according to the magnification requires switching the content of the positional deviation detection processing according to the magnification, which leads to an increase in the circuit scale required for the positional deviation detection processing.
An object of the present invention is to provide an image processing apparatus, an image processing method, and an image reading apparatus that can generate high-quality composite image data corresponding to an object to be read by a processing procedure that does not accumulate the sub-scanning positional deviations between the divided images indicated by the image data generated by the line sensors belonging to the first row line sensor group and those generated by the line sensors belonging to the second row line sensor group, and a program that causes a computer to execute the process of combining the image data.
Another object of the present invention is to provide an image processing apparatus, an image processing method, an image reading apparatus, and a program that, even when the reading magnification is changed, do not require widening the search range for detecting the sub-scanning positional deviation between the image data of the line sensors of each row, do not require changing the content of the positional deviation detection processing, can obtain the positional deviation amount accurately without increasing the circuit scale, and can generate high-quality composite image data.
An image processing apparatus according to one aspect of the present invention processes image data generated by an imaging unit that has a first row line sensor group including a plurality of first line sensors arranged in a line at a first interval in the main scanning direction, and a second row line sensor group including a plurality of second line sensors arranged at a position different from that of the first row line sensor group in the sub-scanning direction and arranged in a line at a second interval in the main scanning direction, wherein the plurality of first line sensors belonging to the first row line sensor group are arranged so as to face the second intervals in the second row line sensor group, the plurality of second line sensors belonging to the second row line sensor group are arranged so as to face the first intervals in the first row line sensor group, and the adjacent ends of neighboring first and second line sensors form overlap regions that overlap in the main scanning direction. The image processing apparatus includes: an image memory that stores first image data based on detection signals generated by the first row line sensor group and second image data based on detection signals generated by the second row line sensor group; a similarity calculation unit that performs a process of comparing reference data, which has a predetermined first width in the sub-scanning direction in an overlap region of the first image data read from the image memory, with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data; a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of comparison data; and a combination processing unit that reads the first image data and the second image data from the image memory, divides the shift amount indicated by the shift amount data, generates first shift amount data indicating one of the divided shift amounts and second shift amount data indicating the other divided shift amount, corrects the position of the first image data in the sub-scanning direction based on the first shift amount data, corrects the position of the second image data in the sub-scanning direction based on the second shift amount data, and combines the first image data and the second image data.
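The similarity calculation and shift amount estimation of this aspect can be sketched as follows. This is a minimal sketch under assumptions: image data is modeled as lists of pixel lines, and the similarity measure is a sum of absolute differences, which the text above does not specify.

```python
def best_shift(first_overlap, second_overlap, width, search_range):
    """Find the sub-scanning shift at which the comparison data from the
    second image data best matches the reference data (the first `width`
    lines of the first image data's overlap region)."""
    reference = first_overlap[:width]
    best, best_score = 0, float("inf")
    for shift in range(search_range):
        comparison = second_overlap[shift:shift + width]
        if len(comparison) < width:
            break  # comparison window ran past the overlap region
        # Smaller sum of absolute differences = higher similarity.
        score = sum(abs(a - b)
                    for ref_line, cmp_line in zip(reference, comparison)
                    for a, b in zip(ref_line, cmp_line))
        if score < best_score:
            best, best_score = shift, score
    return best

first = [[1, 2], [3, 4], [5, 6], [0, 0]]
second = [[9, 9], [1, 2], [3, 4], [5, 6]]  # same content, one line later
print(best_shift(first, second, width=3, search_range=2))  # 1
```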
An image processing method according to another aspect of the present invention processes image data generated by an imaging unit that has a first-row line sensor group including a plurality of first line sensors arranged in a line with first gaps between them in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors disposed at a position differing from the first-row line sensor group in the sub-scanning direction and arranged in a line with second gaps between them in the main scanning direction, the plurality of first line sensors belonging to the first-row line sensor group being disposed so as to face the second gaps in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group being disposed so as to face the first gaps in the first-row line sensor group, and adjacent ends of each adjacent first line sensor and second line sensor forming an overlap region in which they overlap in the main scanning direction. The method includes: a step of comparing reference data having a predetermined first width in the sub-scanning direction in the overlap region, taken from first image data read out of an image memory that stores the first image data based on detection signals generated by the first-row line sensor group and second image data based on detection signals generated by the second-row line sensor group, with comparison data of the same width as the first width selected, in the same overlap region, from the second image data read out of the image memory, the comparison being performed at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data; a step of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the plurality of pieces of comparison data; and a step of reading the first image data and the second image data out of the image memory and combining the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
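The similarity-calculation and shift-estimation steps described above can be sketched in code. The following Python function is an illustrative assumption, not the patent's specified implementation: it uses the sum of absolute differences (SAD) as an inverse similarity measure, and all names (`estimate_shift`, `search_range`, and so on) are hypothetical.

```python
# Hypothetical sketch of the shift-estimation step: a block of "reference
# data" from the first image is compared against "comparison data" windows
# taken from the second image at several sub-scanning (row) offsets, and the
# offset with the highest similarity (here, the lowest sum of absolute
# differences) gives the shift amount.

def estimate_shift(first_overlap, second_overlap, ref_row, width, search_range):
    """Return the sub-scan shift of second_overlap relative to first_overlap.

    first_overlap / second_overlap: 2-D lists (rows x columns) of pixel
    values from the shared overlap region; ref_row: starting row of the
    reference block; width: number of rows in the block; search_range:
    maximum offset (in rows) to try in each direction.
    """
    reference = first_overlap[ref_row:ref_row + width]
    best_offset, best_score = 0, float("inf")
    for offset in range(-search_range, search_range + 1):
        start = ref_row + offset
        if start < 0 or start + width > len(second_overlap):
            continue  # comparison window would fall outside the image
        comparison = second_overlap[start:start + width]
        # Sum of absolute differences: smaller means more similar.
        sad = sum(abs(a - b)
                  for ref_line, cmp_line in zip(reference, comparison)
                  for a, b in zip(ref_line, cmp_line))
        if sad < best_score:
            best_score, best_offset = sad, offset
    return best_offset
```

The returned offset corresponds to the "shift amount data" of the claim: the difference between the sub-scanning position of the reference data and that of the best-matching comparison data.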
An image reading apparatus according to another aspect of the present invention includes: an imaging unit that has a first-row line sensor group including a plurality of first line sensors arranged in a line with first gaps between them in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors disposed at a position differing from the first-row line sensor group in the sub-scanning direction and arranged in a line with second gaps between them in the main scanning direction, the plurality of first line sensors belonging to the first-row line sensor group being disposed so as to face the second gaps in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group being disposed so as to face the first gaps in the first-row line sensor group, and adjacent ends of each adjacent first line sensor and second line sensor forming an overlap region in which they overlap in the main scanning direction; an image memory that stores first image data based on detection signals generated by the first-row line sensor group and second image data based on detection signals generated by the second-row line sensor group; a similarity calculation unit that compares reference data having a predetermined first width in the sub-scanning direction in the overlap region, taken from the first image data read out of the image memory, with comparison data of the same width as the first width selected, in the same overlap region, from the second image data read out of the image memory, performing the comparison at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data; a shift amount estimation unit that calculates shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the plurality of pieces of comparison data; and a combining processing unit that reads the first image data and the second image data out of the image memory and combines the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
A program according to another aspect of the present invention is a computer-executable program for processing image data generated by an imaging unit that has a first-row line sensor group including a plurality of first line sensors arranged in a line with first gaps between them in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors disposed at a position differing from the first-row line sensor group in the sub-scanning direction and arranged in a line with second gaps between them in the main scanning direction, the plurality of first line sensors belonging to the first-row line sensor group being disposed so as to face the second gaps in the second-row line sensor group, the plurality of second line sensors belonging to the second-row line sensor group being disposed so as to face the first gaps in the first-row line sensor group, and adjacent ends of each adjacent first line sensor and second line sensor forming an overlap region in which they overlap in the main scanning direction. The program causes a computer to execute: a process of comparing reference data having a predetermined first width in the sub-scanning direction in the overlap region, taken from first image data read out of an image memory that stores the first image data based on detection signals generated by the first-row line sensor group and second image data based on detection signals generated by the second-row line sensor group, with comparison data of the same width as the first width selected, in the same overlap region, from the second image data read out of the image memory, the comparison being performed at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data; a process of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the plurality of pieces of comparison data; and a process of reading the first image data and the second image data out of the image memory and combining the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
An image processing apparatus according to another aspect of the present invention processes image data generated by an imaging unit that has at least two rows of line sensors, each line sensor including a plurality of photoelectric conversion elements aligned in the main scanning direction, the rows being disposed at different positions in the sub-scanning direction and arranged so that adjacent ends of adjacent line sensors in different rows form overlap regions in which they overlap in the main scanning direction. The apparatus includes: an image memory that stores image data based on outputs of the line sensors; a readout control unit that, based on a thinning rate set according to the reading magnification of the line sensors in the sub-scanning direction, reads out of the image data stored in the image memory reference data at a predetermined sub-scanning position in an overlap region and comparison data in an area overlapping the overlap region of the reference data; a similarity calculation unit that compares the reference data and the comparison data read by the readout control unit at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at the plurality of sub-scanning positions; a shift amount estimation unit that calculates shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the comparison data at the plurality of sub-scanning positions; a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining processing unit that determines, based on the enlarged shift amount data, the sub-scanning positions of the image data to be read out of the image memory, reads the image data at the determined positions out of the image memory, and generates composite image data by combining the image data read by the adjacent line sensors in the different rows.
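The shift-amount-enlargement idea above can be illustrated with a short sketch. This is a minimal illustration under assumed conventions (the function name and the handling of the thinning factor are not taken from the patent text): shift data estimated on rows thinned out by the thinning rate is stretched back to full resolution by linear interpolation, so the matching search range never has to grow with the reading magnification.

```python
# Minimal sketch of "shift amount enlargement": per-row shift values measured
# on a decimated image are expanded by the thinning factor, with linearly
# interpolated values filled in between the measured ones.

def enlarge_shift_data(shift_data, factor):
    """Expand a per-row shift list by `factor` using linear interpolation."""
    if len(shift_data) == 1:
        return [shift_data[0]] * factor
    enlarged = []
    for i in range(len(shift_data) - 1):
        a, b = shift_data[i], shift_data[i + 1]
        for step in range(factor):
            # Interpolate `factor` values between consecutive measurements.
            enlarged.append(a + (b - a) * step / factor)
    enlarged.append(shift_data[-1])
    return enlarged
```

For example, shifts `[0, 2]` measured on a 2:1 thinned image expand to `[0, 1, 2]` at full resolution.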
An image processing method according to another aspect of the present invention processes image data generated by an imaging unit that has at least two rows of line sensors, each line sensor including a plurality of photoelectric conversion elements aligned in the main scanning direction, the rows being disposed at different positions in the sub-scanning direction and arranged so that adjacent ends of adjacent line sensors in different rows form overlap regions in which they overlap in the main scanning direction. The method includes: a storage step of storing image data based on outputs of the line sensors in an image memory; a readout control step of reading, based on a thinning rate set according to the reading magnification of the line sensors in the sub-scanning direction, out of the image data stored in the image memory, reference data at a predetermined sub-scanning position in an overlap region and comparison data in an area overlapping the overlap region of the reference data; a similarity calculation step of comparing the reference data and the comparison data read in the readout step at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data at the plurality of sub-scanning positions; a shift amount calculation step of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the comparison data at the plurality of sub-scanning positions; a shift amount enlargement step of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining processing step of determining, based on the enlarged shift amount data, the sub-scanning positions of the image data to be read out of the image memory, reading the image data at the determined positions out of the image memory, and generating composite image data by combining the image data read by the adjacent line sensors in the different rows.
An image reading apparatus according to another aspect of the present invention includes: an imaging unit that has at least two rows of line sensors, each line sensor including a plurality of photoelectric conversion elements aligned in the main scanning direction, the rows being disposed at different positions in the sub-scanning direction and arranged so that adjacent ends of adjacent line sensors in different rows form overlap regions in which they overlap in the main scanning direction; an image memory that stores image data based on outputs of the line sensors; a readout control unit that, based on a thinning rate set according to the reading magnification of the line sensors in the sub-scanning direction, reads out of the image data stored in the image memory reference data at a predetermined sub-scanning position in an overlap region and comparison data in an area overlapping the overlap region of the reference data; a similarity calculation unit that compares the reference data and the comparison data read by the readout control unit at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at the plurality of sub-scanning positions; a shift amount estimation unit that calculates shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the comparison data at the plurality of sub-scanning positions; a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining processing unit that determines, based on the enlarged shift amount data, the sub-scanning positions of the image data to be read out of the image memory, reads the image data at the determined positions out of the image memory, and generates composite image data by combining the image data read by the adjacent line sensors in the different rows.
A program according to another aspect of the present invention causes a computer to execute processing of image data generated by an imaging unit that has at least two rows of line sensors, each line sensor including a plurality of photoelectric conversion elements aligned in the main scanning direction, the rows being disposed at different positions in the sub-scanning direction and arranged so that adjacent ends of adjacent line sensors in different rows form overlap regions in which they overlap in the main scanning direction. The program causes the computer to execute: a storage process of storing image data based on outputs of the line sensors in an image memory; a readout control process of reading, based on a thinning rate set according to the reading magnification of the line sensors in the sub-scanning direction, out of the image data stored in the image memory, reference data at a predetermined sub-scanning position in an overlap region and comparison data in an area overlapping the overlap region of the reference data; a similarity calculation process of comparing the reference data and the comparison data read in the readout process at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data at the plurality of sub-scanning positions; a shift amount calculation process of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the comparison data at the plurality of sub-scanning positions; a shift amount enlargement process of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and a combining process of determining, based on the enlarged shift amount data, the sub-scanning positions of the image data to be read out of the image memory, reading the image data at the determined positions out of the image memory, and generating composite image data by combining the image data read by the adjacent line sensors in the different rows.
According to one aspect of the present invention, the sub-scanning position of the divided image represented by the image data generated by the line sensors belonging to the first-row line sensor group is used as the reference position, and the sub-scanning position of the divided image represented by the image data generated by the line sensors belonging to the second-row line sensor group is shifted; since positional deviations do not accumulate, high-quality composite image data corresponding to the object being read can be generated.
According to another aspect of the present invention, even when the reading magnification is changed, there is no need to widen the search range used to detect the amount of sub-scanning positional deviation between the image data of the line sensors in each row; therefore the content of the positional-deviation detection processing does not have to be changed, and the circuit scale does not have to be increased. Furthermore, according to the present invention, even when the reading magnification is changed, the amount of positional deviation can be obtained accurately, so high-quality composite image data can be generated.
<< 1 >> Embodiment 1
FIG. 1 is a functional block diagram schematically showing the configuration of an image reading apparatus 1 according to Embodiment 1 of the present invention. As shown in FIG. 1, the image reading apparatus 1 according to Embodiment 1 includes an imaging unit 2, an A/D conversion unit 3, and an image processing unit 4. The image processing unit 4 is the image processing apparatus according to Embodiment 1 (an apparatus capable of carrying out the image processing method according to Embodiment 1) and includes an image memory 41, a similarity calculation unit 42, a shift amount estimation unit 43, and a combining processing unit 44.
The image reading apparatus 1 includes the imaging unit 2, which has a first-row line sensor group including a plurality of first line sensors (e.g., 21O or 21E in FIG. 2(a)) arranged in a line with first gaps (e.g., 22 or 23 in FIG. 2(a)) between them in the main scanning direction, and a second-row line sensor group including a plurality of second line sensors (e.g., 21E or 21O in FIG. 2(a)) disposed at a position differing from the first-row line sensor group in the sub-scanning direction and arranged in a line with second gaps (e.g., 23 or 22 in FIG. 2(a)) between them in the main scanning direction. In the imaging unit 2, the plurality of first line sensors (e.g., 21O or 21E in FIG. 2(a)) belonging to the first-row line sensor group are disposed so as to face the second gaps (e.g., 23 or 22 in FIG. 2(a)) in the second-row line sensor group, the plurality of second line sensors (e.g., 21E or 21O in FIG. 2(a)) belonging to the second-row line sensor group are disposed so as to face the first gaps (e.g., 22 or 23 in FIG. 2(a)) in the first-row line sensor group, and adjacent ends (e.g., sr and sl in FIG. 2(a)) of each adjacent first line sensor and second line sensor form overlap regions (e.g., Ak,k in FIG. 2(a)) in which they overlap in the main scanning direction. The image reading apparatus 1 also includes the image memory 41, which stores first image data based on detection signals generated by the first-row line sensor group and second image data based on detection signals generated by the second-row line sensor group, and the similarity calculation unit 42, which compares reference data having a predetermined first width in the sub-scanning direction in an overlap region, taken from the first image data read out of the image memory 41, with comparison data of the same width as the first width selected, in the same overlap region, from the second image data read out of the image memory 41, performing the comparison at a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data. The image reading apparatus 1 further includes the shift amount estimation unit 43, which calculates shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data having the highest similarity among the plurality of pieces of comparison data, and the combining processing unit 44, which reads the first image data and the second image data out of the image memory 41 and combines the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
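The operation of the combining processing unit 44 can be illustrated with a short sketch. The following Python function is a hypothetical simplification, not the patent's circuit: two divided images that share an overlap in the main scanning direction are joined side by side, with the second image's rows offset in the sub-scanning direction by the estimated shift amount so that features line up at the seam. The zero padding at image edges and the half-overlap seam position are assumptions made for illustration.

```python
# Hypothetical sketch of the combining step: join two equal-width divided
# images that share `overlap` columns, taking the second image's rows from a
# sub-scanning position offset by `shift`.

def combine_images(first, second, shift, overlap):
    """Join two divided images, shifting `second` by `shift` rows.

    first / second: 2-D lists (rows x cols) of pixel values; `overlap` is
    the number of columns shared by both images. The seam is taken at the
    middle of the overlap region; rows shifted past the edge are zero-filled.
    """
    rows, cols = len(first), len(first[0])
    keep_first = cols - overlap // 2          # columns taken from `first`
    skip_second = overlap - overlap // 2      # columns dropped from `second`
    combined = []
    for y in range(rows):
        src = y + shift                       # shifted sub-scan position
        second_row = second[src] if 0 <= src < rows else [0] * cols
        combined.append(first[y][:keep_first] + second_row[skip_second:])
    return combined
```

With a shift of 0 the images are simply butted together at the middle of the overlap; a nonzero shift moves the second divided image up or down in the sub-scanning direction before joining, which is the role of the shift amount data.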
FIG. 2(a) is a plan view schematically showing the imaging unit 2, and FIG. 2(b) is a plan view showing a document 60 as the object to be read. FIG. 2(a) shows, for example, a platen glass (hereinafter "glass surface") 26 of a copying machine as seen from above. FIG. 3 is a diagram for explaining the configuration of a line sensor, using the line sensor 21O1, one of the components of the imaging unit 2, as an example.
As shown in FIG. 2(a), the imaging unit 2 has a sensor substrate 20, on which a plurality of line sensors are arranged in two rows. On the sensor substrate 20, the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On, counting from one end (e.g., the left side), are arranged in a straight line in the main scanning direction with gaps 22 between them, and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En, counting from the left, are arranged in a straight line in the main scanning direction with gaps 23 between them, at positions differing in the main scanning direction (X direction) from the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On. Here, n is an integer equal to or greater than 2, and k is an integer from 1 to n. The odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On constitute the first-row line sensor group including the plurality of first line sensors (or the second-row line sensor group including the plurality of second line sensors), and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En constitute the second-row line sensor group including the plurality of second line sensors (or the first-row line sensor group including the plurality of first line sensors).
As shown in FIG. 2(a), the plurality of first line sensors (e.g., 21E1, ..., 21Ek, ..., 21En) belonging to the first-row line sensor group are disposed so as to face the second gaps (e.g., 23, ..., 23) in the second-row line sensor group, the plurality of line sensors (e.g., 21O1, ..., 21Ok, ..., 21On) belonging to the second-row line sensor group are disposed so as to face the first gaps (e.g., 22, ..., 22) in the first-row line sensor group, and the adjacent ends (ends sr and sl) of each adjacent first line sensor and second line sensor form overlapping regions (overlap regions) in which they overlap in the main scanning direction.
As shown in FIG. 2(a), the imaging unit 2 is moved in the sub-scanning direction (Y direction) by a transport unit 24 and reads the document 60 as the object to be read. Alternatively, the transport unit 24 may be a device that transports the document 60 in the direction opposite to the sub-scanning direction (-Y direction). In the embodiments of the present application, the case in which the imaging unit 2 is moved by the transport unit 24 will be described. The sub-scanning direction (Y direction) indicates the direction of movement of the imaging unit 2, and the main scanning direction indicates the direction in which the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On are arrayed, or the direction in which the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En are arrayed.
As shown in FIG. 3, the line sensor 21O1 includes a plurality of red photoelectric conversion elements (R photoelectric conversion elements) 26R that convert the red component of the received light into electrical signals, a plurality of green photoelectric conversion elements (G photoelectric conversion elements) 26G that convert the green component of the received light into electrical signals, and a plurality of blue photoelectric conversion elements (B photoelectric conversion elements) 26B that convert the blue component of the received light into electrical signals. As shown in FIG. 3, the plurality of R photoelectric conversion elements 26R are arrayed in a straight line in the main scanning direction (X direction), the plurality of G photoelectric conversion elements 26G are arrayed in a straight line in the main scanning direction (X direction), and the plurality of B photoelectric conversion elements 26B are arrayed in a straight line in the main scanning direction (X direction). Embodiment 1 will be described with reference to a line sensor configured as in FIG. 3, but in the present invention, monochrome photoelectric conversion elements that do not distinguish colors may be arranged in a single line. The arrangement of the plurality of R photoelectric conversion elements 26R, the plurality of G photoelectric conversion elements 26G, and the plurality of B photoelectric conversion elements 26B is not limited to the example in FIG. 3. The line sensor 21O1 outputs the received light information as an electrical signal SI(O1). Similarly, the line sensors 21E1, 21O2, ..., 21On, 21En output their received light information as electrical signals SI(E1), SI(O2), ..., SI(On), SI(En). The electrical signals output from all the line sensors are collectively denoted as the electrical signal SI. The electrical signal SI output from the imaging unit 2 is input to the A/D conversion unit 3.
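As a small illustration of the data layout described above, the outputs of the three color element rows of one line sensor can be combined into a single row of RGB pixels. The function name and the tuple representation are assumptions for illustration only; the patent does not prescribe this representation.

```python
# Illustrative sketch: merge one line sensor's per-color element outputs
# (one value per element, for the same main-scanning positions) into a
# single row of (R, G, B) pixel tuples.

def merge_rgb_rows(red_row, green_row, blue_row):
    """Zip the R, G, and B element outputs into (R, G, B) pixel tuples."""
    return list(zip(red_row, green_row, blue_row))
```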
The odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En have overlap regions A1,1, A1,2, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n in which they read partially overlapping portions of the document 60. The overlap regions are described in detail later.
The A/D conversion unit 3 converts the electric signal SI output from the imaging unit 2 into digital data DI. The digital data DI is input to the image processing unit 4 and stored in the image memory 41 of the image processing unit 4.
FIGS. 4(a) to 4(c) are diagrams for explaining the digital data DI stored in the image memory 41. FIG. 4(a) shows the positional relationship between the document 60 and the line sensors when the document 60 is at the position where the optical axes of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En intersect. FIG. 4(b) shows an example of the document 60. FIG. 4(c) shows the digital data DI corresponding to the document 60 of FIG. 4(b) when the document 60 and the line sensors are in the positional relationship of FIG. 4(a).
FIG. 4(a) is a schematic side view of the image reading apparatus 1, showing the copying machine viewed from the side. The odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On shown in FIG. 2(a) are also collectively denoted as line sensor 21O, and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En are also collectively denoted as line sensor 21E. Light reflected from the document 60, which is illuminated by an illumination light source 25 such as a light-emitting diode (LED), is guided to the line sensor 21O along the optical axis 27O and to the line sensor 21E along the optical axis 27E. The imaging unit 2, conveyed in the sub-scanning direction (Y direction), sequentially photoelectrically converts the light reflected from the document 60 placed on the glass surface 26 and outputs the converted electric signal SI, and the A/D conversion unit 3 converts the electric signal SI into digital data DI and outputs it.
When a document 60 such as that shown in FIG. 4(b) is sequentially photoelectrically converted by the imaging unit 2 and converted into digital data by the A/D conversion unit 3, digital data DI as shown in FIG. 4(c) is stored in the image memory 41. The digital data DI consists of the digital data DI(O1), ..., DI(Ok), ..., DI(On) generated by the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the digital data DI(E1), ..., DI(Ek), ..., DI(En) generated by the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En. FIG. 4(c) shows the digital data DI(Ok) and DI(Ok+1) generated by the odd-numbered line sensors 21Ok and 21Ok+1, and the digital data DI(Ek) and DI(Ek+1) generated by the even-numbered line sensors 21Ek and 21Ek+1.
The overlap regions A1,1, A1,2, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n are now described. As shown in FIG. 2(a), when the imaging unit 2 is conveyed in the sub-scanning direction (Y direction), the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En partially read the same regions (overlap regions). For example, the right end sr of the line sensor 21O1 and the left end sl of the line sensor 21E1 both read the region A1,1 of the document 60. Similarly, the right end sr of the line sensor 21E1 and the left end sl of the line sensor 21O2 both read the region A1,2 of the document 60.
In the example shown in FIG. 4(b), the right end sr of the line sensor 21Ok and the left end sl of the line sensor 21Ek both read the region Ak,k of the document 60; the right end sr of the line sensor 21Ek and the left end sl of the line sensor 21Ok+1 both read the region Ak,k+1 of the document 60; and the right end sr of the line sensor 21Ok+1 and the left end sl of the line sensor 21Ek+1 both read the region Ak+1,k+1 of the document 60.
Accordingly, the digital data DI(Ok) corresponding to the line sensor 21Ok includes digital data dr corresponding to the region Ak,k of the document 60, and the digital data DI(Ek) corresponding to the line sensor 21Ek includes digital data dl corresponding to the region Ak,k of the document 60. When the document 60 is in close contact with the glass surface 26 as shown in FIG. 4(a), the reading positions of the line sensor 21O and the line sensor 21E in the sub-scanning direction (Y direction) of the document 60 are approximately the same; therefore, as shown in FIG. 4(c), the adjacent digital data dr and dl have no positional shift in the sub-scanning direction (Y direction) of the document 60.
Next, the case where the document 60 is separated from the glass surface 26, so that the positional relationship between the document 60 and the line sensors differs from that of FIG. 4(a), is described. FIGS. 5(a) to 5(c) are diagrams for explaining the digital data DI stored in the image memory 41. FIG. 5(a) shows the positional relationship between the document 60 and the line sensors when the document 60 is lifted off the glass surface 26 and is therefore at a position different from the position where the optical axes of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En intersect. FIG. 5(b) shows an example of the document 60, and FIG. 5(c) shows the digital data DI corresponding to the document 60 of FIG. 4(b) when the document 60 and the line sensors are in the positional relationship of FIG. 5(a).
Even if the document 60 is lifted off the glass surface 26, the positional relationship between the line sensors and the document 60 as viewed in plan does not change. That is, data at the same positions is acquired in the main scanning direction (X direction). Therefore, in FIG. 5(b), as in FIG. 4(b), the right end sr of the line sensor 21Ok and the left end sl of the line sensor 21Ek both read the region Ak,k of the document 60; the right end sr of the line sensor 21Ek and the left end sl of the line sensor 21Ok+1 both read the region Ak,k+1 of the document 60; and the right end sr of the line sensor 21Ok+1 and the left end sl of the line sensor 21Ek+1 both read the region Ak+1,k+1 of the document 60.
On the other hand, as shown in the side view of the imaging unit 2 in FIG. 5(a), because the document 60 is lifted off the glass surface 26, the positions where the optical axes 27O of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On intersect the document 60 differ from the positions where the optical axes 27E of the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En intersect the document 60. Therefore, when the document 60 is lifted off the glass surface 26, the reading positions differ in the sub-scanning direction (Y direction). Because each line sensor performs photoelectric conversion sequentially while the imaging unit 2 is conveyed in the sub-scanning direction (Y direction), the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En acquire the image of a given position later in time than the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On. Accordingly, as shown in FIG. 5(c), the digital data DI(Ok) and DI(Ok+1) corresponding to the odd-numbered line sensors 21Ok and 21Ok+1 and the digital data DI(Ek) and DI(Ek+1) corresponding to the even-numbered line sensors 21Ek and 21Ek+1 are stored in the image memory 41 shifted relative to each other.
FIGS. 6 and 7 are diagrams for explaining the operation of the similarity calculation unit 42. FIG. 6 corresponds to FIG. 4(c) and shows the positional relationship between the image data MO and the image data ME at the position Ym in the sub-scanning direction (Y direction). FIG. 7 corresponds to FIG. 5(c) and shows the positional relationship between the image data MO and the image data ME at the position Ym in the sub-scanning direction (Y direction). As shown in FIG. 1, the similarity calculation unit 42 generates correlation data D42 based on the image data MO and the image data ME.
The similarity calculation unit 42 reads out the data around the position Ym in the sub-scanning direction (Y direction) in the region dr of the digital data DI(Ok) corresponding to the line sensor 21Ok as image data MO(Ok, dr, Ym), and reads out the data around the position Ym in the sub-scanning direction (Y direction) in the region dl of the digital data DI(Ek) corresponding to the line sensor 21Ek as image data ME(Ek, dl, Ym). The image data (comparison data) ME is set to be wider in the sub-scanning direction (Y direction) than the image data (reference data) MO.
FIGS. 8(a) and 8(b) are diagrams for explaining the operation of the similarity calculation unit 42 in more detail. The image data MO(Ok, dr, Ym) and the image data ME(Ek, dl, Ym) each consist of the data of a plurality of pixels. The similarity calculation unit 42 first extracts, from the image data ME(Ek, dl, Ym), a plurality of image data ME(Ek, dl, Ym, ΔY) of the same size (the same width in the sub-scanning direction) as the image data MO(Ok, dr, Ym). Here ΔY is the amount of shift relative to the image data MO(Ok, dr, Ym) and takes a value in the range from -y to y. The data whose center position coincides with that of the image data MO(Ok, dr, Ym) is denoted ME(Ek, dl, Ym, 0), the data shifted by one pixel is denoted ME(Ek, dl, Ym, 1), and so on, one pixel at a time; the data shifted by y pixels is denoted ME(Ek, dl, Ym, y), and the data shifted by y pixels in the opposite direction is denoted ME(Ek, dl, Ym, -y).
Next, the correlation data D42(Ok, Ek, Ym) between the image data MO(Ok, dr, Ym) and the image data ME(Ek, dl, Ym, -y) to ME(Ek, dl, Ym, y) is calculated. Since the image data MO(Ok, dr, Ym) and each of the image data ME(Ek, dl, Ym, -y) to ME(Ek, dl, Ym, y) have the same size, for example, the sum of the absolute values of the pixel-by-pixel differences, or the sum of the squares of the pixel-by-pixel differences, between the image data MO(Ok, dr, Ym) and each of the image data ME(Ek, dl, Ym, -y) to ME(Ek, dl, Ym, y) is calculated and output as the correlation data D42(Ok, Ek, Ym). Correlation data D42(Ek, Ok+1, Ym) and D42(Ok+1, Ek+1, Ym) are calculated in the same way for the image data MO(Ek, dr, Ym) and image data ME(Ok+1, dl, Ym), and for the image data MO(Ok+1, dr, Ym) and image data ME(Ek+1, dl, Ym).
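As a rough illustration, the sum-of-absolute-differences computation described above can be sketched as follows (a minimal sketch in Python with NumPy; the array names and the window-extraction convention are assumptions made for illustration, not details fixed by the patent):

```python
import numpy as np

def correlation_data(mo, me, y_range):
    """Compute a dissimilarity value for each candidate shift dY.

    mo: reference window (H x W), from the overlap strip of an odd-numbered sensor.
    me: comparison strip ((H + 2*y_range) x W), from the even-numbered sensor;
        it is wider in the sub-scanning direction so every shifted window fits.
    Returns a dict mapping dY in [-y_range, y_range] to the sum of absolute
    pixel differences (smaller value = more similar).
    """
    h = mo.shape[0]
    d42 = {}
    for dy in range(-y_range, y_range + 1):
        top = y_range + dy  # window is centered on mo when dy == 0
        window = me[top:top + h, :]
        d42[dy] = np.abs(mo.astype(int) - window.astype(int)).sum()
    return d42
```

For a comparison strip that is the reference window shifted by one line, the dissimilarity curve reaches zero at ΔY = -1, matching the behavior described for FIG. 8(b).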
The shift amount estimation unit 43 outputs, as shift amount data D43, the shift amount ΔY corresponding to the shifted data with the highest similarity. In FIG. 8(b), the broken line is the correlation data D42(Ok, Ek, Ym) corresponding to FIG. 6, and the solid line is the correlation data D42(Ok, Ek, Ym) corresponding to FIG. 7. In FIG. 6, the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En read the same positions in the sub-scanning direction (Y direction), so their data are not shifted relative to each other. Accordingly, the similarity is highest (that is, the dissimilarity is lowest) at ΔY = 0, as indicated by the broken line in FIG. 8(b). In FIG. 7, the image read by the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En is shifted upward relative to that read by the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On, so the similarity is highest (that is, the dissimilarity is lowest) at a negative value (ΔY = -α).
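Since the correlation values here are dissimilarities (sums of differences), the shift estimation step reduces to taking the argument of the minimum. A minimal sketch, assuming the correlation data is given as a mapping from candidate shifts to dissimilarity values:

```python
def estimate_shift(d42):
    """Return the shift dY with the lowest dissimilarity (shift amount data D43).

    d42: mapping from candidate shift dY to a dissimilarity value, as produced
    by the similarity-calculation step (smaller value = higher similarity).
    """
    return min(d42, key=d42.get)
```

With the solid-line curve of FIG. 7, whose minimum lies at a negative shift, this returns ΔY = -α; with the broken-line curve of FIG. 6 it returns 0.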
The combination processing unit 44 shifts the data based on the shift amount data. FIGS. 9 and 10 are diagrams for explaining the operation of the combination processing unit 44. Only the data corresponding to the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En may be shifted, as in FIG. 9, or the data corresponding to the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the data corresponding to the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En may both be shifted, as in FIGS. 10(a) and 10(b), so that their combined shift amounts equal the shift amount data. In FIG. 10(a), the position Ym in the sub-scanning direction (Y direction) used by the similarity calculation unit 42 is shifted, and in FIG. 10(b), the data located away from the position Ym is further shifted by Ym. For example, the combination processing unit 44 may read out, from the image memory 41, first image data and second image data that are adjacent to (partially overlap) each other in the main scanning direction, divide the shift amount indicated by the shift amount data between the first image data and the second image data to generate first shift amount data indicating one of the divided shift amounts and second shift amount data indicating the other divided shift amount, correct the position of the first image data in the sub-scanning direction based on the first shift amount data, correct the position of the second image data in the sub-scanning direction based on the second shift amount data, and combine the first image data and the second image data.
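The split-shift variant can be sketched as follows (a hedged illustration in Python with NumPy; the half-and-half split and the use of `np.roll` to model a sub-scanning shift are assumptions for the sketch, not details fixed by the patent):

```python
import numpy as np

def split_shift(total_shift):
    """Divide a total shift into two parts whose sum equals the total."""
    first = total_shift // 2
    second = total_shift - first
    return first, second

def combine_strips(first, second, total_shift):
    """Shift two horizontally adjacent strips in opposite sub-scanning
    directions so that their combined displacement equals total_shift,
    then place them side by side (overlap handling omitted for brevity)."""
    s1, s2 = split_shift(total_shift)
    a = np.roll(first, -s1, axis=0)   # move the first strip up by s1 lines
    b = np.roll(second, s2, axis=0)   # move the second strip down by s2 lines
    return np.hstack([a, b])
```

Splitting the shift between both strips, rather than moving only one, keeps each strip closer to its original position, which corresponds to the two-sided adjustment of FIGS. 10(a) and 10(b).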
The combination processing unit 44 calculates the readout position RP of the image data based on the shift amount data D43, reads out the image data M44 corresponding to that readout position, and generates the image data D44 by joining the data of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the data of the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En where they overlap.
For a given position Ym in the sub-scanning direction (Y direction), the similarity calculation unit 42 calculates the similarity data (correlation data) D42 by shifting the image data corresponding to the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En relative to the image data corresponding to the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On, which serves as the reference; the shift amount estimation unit 43 calculates, as the shift amount data D43, the shift amount for that position Ym corresponding to the highest similarity (the largest correlation); and the combination processing unit 44 reads out the image data based on the shift amount data D43 and outputs the combined image data D44. This is performed sequentially in the sub-scanning direction (Y direction). FIG. 11 shows an image obtained by combining the images of FIG. 4(c) or FIG. 5(c). In particular, in the case of FIG. 5(c), the image data of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the image data of the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En were shifted in the sub-scanning direction (Y direction), yet an image identical to the document 60 shown in FIG. 5(b) can be output.
FIG. 11 is an explanatory diagram conceptually showing the image data D44 output from the combination processing unit 44 after the combination processing. The shift between the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En is corrected, and the redundantly read images are joined.
FIGS. 12(a) and 12(b) show an example in which the position of the document 60 relative to the glass surface 26 changes while the imaging unit 2 is being conveyed. At the position Ym in the sub-scanning direction (Y direction), the document 60 is lifted off the glass surface 26, and at the position Yu, the document 60 is in close contact with the glass surface 26. Since the imaging unit 2 processes the image data sequentially at each position in the sub-scanning direction (Y direction), the shift amount is calculated at each position even if the position of the document 60 changes during conveyance, so the images can be combined correctly. Moreover, as explained with reference to FIGS. 6 and 7, the shift amount is calculated individually between every pair of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En, so the images can be combined correctly even if the position of the document 60 relative to the glass surface 26 varies in the main scanning direction (X direction).
FIGS. 13(a) and 13(b) are diagrams for explaining the case where the distance between the document 60 and the glass surface 26 varies with the position in the main scanning direction. In the example of FIG. 13(a), in the region F1, which includes the overlap region of the line sensors 21Ok and 21Ek, the document 60 is lifted off the glass surface 26 (the gap G1 has a positive value), whereas in the region F3, which includes the overlap region of the line sensors 21Ek and 21Ok+1 and the overlap region of the line sensors 21Ok+1 and 21Ek+1, the document 60 is in close contact with the glass surface 26 (the value of the gap G3 is 0). Therefore, in the example of FIG. 13(a), the shift amount (gap G2) varies gradually within the digital data DI(Ek) generated by the line sensor 21Ek, which reads the portion corresponding to the region F2 of the document 60. In this case, as shown in FIG. 13(b), the joining position of the image data changes gradually in the sub-scanning direction according to the position in the main scanning direction (main scanning position). Thus, even if the position of the document 60 relative to the glass surface 26 changes gradually with the position in the main scanning direction, the combination processing unit can combine the images correctly by varying, according to the position in the main scanning direction, the position of the first image data in the sub-scanning direction and the position of the second image data in the sub-scanning direction when combining the first image data and the second image data.
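One way to realize a joining position that varies with the main scanning position is to interpolate between the shift amounts estimated at the two ends of a strip. The following is a sketch under assumptions: linear interpolation and rounding to whole lines are illustrative choices, not prescribed by the patent.

```python
def per_column_shift(shift_left, shift_right, width):
    """Linearly interpolate a sub-scanning shift for each main-scanning
    column of a strip, given the shifts estimated at its left and right
    overlap regions (e.g. the gap-G1 side and the gap-G3 side of region F2)."""
    if width == 1:
        return [shift_left]
    return [round(shift_left + (shift_right - shift_left) * x / (width - 1))
            for x in range(width)]
```

A strip whose left edge is lifted (shift 4) and whose right edge is in contact (shift 0) then gets a smoothly decreasing per-column correction, mirroring the gradual change of gap G2.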
As described above, according to the image reading apparatus 1, the image processing device 4, and the image processing method of Embodiment 1, the position in the sub-scanning direction of the divided image indicated by the image data generated by the line sensors belonging to the first-row line sensor group is used as the reference position (the reference data in FIG. 8(a)), and the position in the sub-scanning direction of the divided image indicated by the image data generated by the line sensors belonging to the second-row line sensor group is shifted; since no accumulation of positional error occurs, high-quality composite image data corresponding to the object being read can be generated.
<< 2 >> Embodiment 2
A part of the functions of the image reading apparatus may be realized by a hardware configuration, or may be realized by a computer program executed by a microprocessor including a CPU (central processing unit). When a part of the functions is realized by a computer program, the microprocessor can realize that part of the functions by loading and executing the computer program from a computer-readable storage medium or via communication such as over the Internet.
FIG. 14 is a functional block diagram showing a configuration in which a part of the functions of the image reading apparatus 1a is realized by a computer program. As shown in FIG. 14, the image reading apparatus 1a according to Embodiment 2 includes an imaging unit 2, an A/D conversion unit 3, and an arithmetic device 5. The arithmetic device 5 includes a processor 51 including a CPU, a RAM (random access memory) 52, a nonvolatile memory 53, a large-capacity storage medium 54, and a bus 55. As the nonvolatile memory 53, for example, a flash memory can be used. As the large-capacity storage medium 54, for example, a hard disk (magnetic disk), an optical disk, or a semiconductor storage device can be used.
The A/D conversion unit 3 has the same function as the A/D conversion unit 3 of FIG. 1; it converts the electric signal SI output by the imaging unit 2 into digital data, which is stored in the RAM 52 via the processor 51.
The processor 51 can realize the functions of the image processing unit 4 by loading and executing a computer program from the nonvolatile memory 53 or the large-capacity storage medium 54.
FIG. 15 is a flowchart schematically showing an example of the processing performed by the arithmetic device 5 of Embodiment 2. As shown in FIG. 15, the processor 51 first executes the similarity calculation processing (step S1). Next, the processor 51 executes the shift amount estimation processing (step S2). Finally, the processor 51 executes the combination processing (step S3). The processing of steps S1 to S3 by the arithmetic device 5 is the same as the processing performed by the image processing unit 4 in Embodiment 1.
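The three steps of the flowchart can be connected as a single software pipeline (a minimal sketch; the function names and the per-position loop structure are assumptions made for illustration, not taken from the patent):

```python
def run_pipeline(positions, similarity, estimate, combine):
    """Run the flowchart of FIG. 15 once per sub-scanning position Ym:
    S1 similarity calculation -> S2 shift estimation -> S3 combination.

    similarity(ym) returns correlation data D42 (shift -> dissimilarity),
    estimate(d42) returns the shift amount data D43 for that position, and
    combine(ym, d43) produces the joined image data D44 for that position.
    """
    combined = []
    for ym in positions:
        d42 = similarity(ym)               # step S1: correlation data D42
        d43 = estimate(d42)                # step S2: shift amount data D43
        combined.append(combine(ym, d43))  # step S3: joined image data D44
    return combined
```

Keeping the three steps as separate callables mirrors the division of labor between the similarity calculation unit 42, the shift amount estimation unit 43, and the combination processing unit 44.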
According to the image reading apparatus 1a of Embodiment 2, even if the position of the document 60 changes while the imaging unit 2 is being conveyed, the shift amount at that position is calculated, so the images can be combined correctly.
<< 3 >> Embodiment 3
In Embodiment 1, as shown in FIG. 4(a), the case was described in which the optical axes 27O of the line sensors 21O1, ..., 21Ok, ..., 21On located at the odd-numbered positions counted from one end (for example, the left end) intersect the optical axes 27E of the line sensors 21E1, ..., 21Ek, ..., 21En located at the even-numbered positions. In Embodiment 3, the case is described in which the optical axes 28O of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On counted from one end (for example, the left end) and the optical axes 28E of the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En do not intersect but are parallel. The image reading apparatus according to Embodiment 3 is substantially the same as the image reading apparatus 1 according to Embodiment 1, except that the optical axes 28O and 28E are parallel. Therefore, FIG. 1 is also referred to in the description of Embodiment 3.
FIGS. 16(a) and 16(b) are schematic side views showing the positional relationship between the document 60, as the object being read, and the line sensors in the case where the optical axes 28O of the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the optical axes 28E of the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En of the imaging unit 2 of the image reading apparatus according to Embodiment 3 are parallel. FIG. 16(a) shows the case where the document 60 is in close contact with the glass surface 26, which is the document placement surface, and FIG. 16(b) shows the case where the document 60 is slightly lifted away from the glass surface 26.
In the image reading apparatus according to the third embodiment, whether the document 60 is in close contact with the glass surface 26 as shown in FIG. 16(a) or is lifted away from the glass surface 26 as shown in FIG. 16(b), the image of the document 60 read by the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On hardly changes, and likewise the image of the document 60 read by the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En hardly changes. For this reason, the image processing unit 4 performs a process of shifting either one or both of the image read by the odd-numbered line sensors 21O1, ..., 21Ok, ..., 21On and the image read by the even-numbered line sensors 21E1, ..., 21Ek, ..., 21En by a substantially constant amount in the sub-scanning direction.
According to the image reading apparatus of the third embodiment, even when the conveyance speed of the conveyance mechanism that moves either one or both of the document 60 and the imaging unit 2 in the sub-scanning direction fluctuates over time (that is, when there is speed variation) during imaging by the imaging unit 2, the images can be correctly combined to generate composite image data, as in the first embodiment.
<< 4 >> Embodiment 4
<< 4-1 >> Configuration of Embodiment 4
<< 4-1-1 >> Image Reading Device 101
FIG. 17 is a functional block diagram schematically showing the configuration of the image reading apparatus 101 according to the fourth embodiment of the present invention. As shown in FIG. 17, the image reading apparatus 101 according to the fourth embodiment includes an imaging unit 102, an A/D conversion unit 103, an image processing unit 104, and a controller 107 that controls the operations of the imaging unit 102 and the image processing unit 104. The image processing unit 104 is an image processing apparatus according to the fourth embodiment (an apparatus capable of carrying out the image processing method according to the fourth embodiment), and includes an image memory 141, a read control unit 142, a similarity calculation unit 143, a shift amount estimation unit 144, a shift amount enlargement unit 145, and a combination processing unit 146.
The imaging unit 102 includes a first-row line sensor group including a plurality of (for example, n) first-row line sensors arranged in a line at intervals in the main scanning direction, and a second-row line sensor group including a plurality of (for example, n) second-row line sensors arranged in a line at intervals in the main scanning direction. Here, n is a positive integer. The positions of the first-row line sensors in the main scanning direction face the regions where no second-row line sensors are provided (that is, the regions between second-row line sensors adjacent in the main scanning direction). The positions of the second-row line sensors in the main scanning direction face the regions where no first-row line sensors are provided (that is, the regions between first-row line sensors adjacent in the main scanning direction). As a result, the first-row line sensors and the second-row line sensors are arranged in a staggered pattern on the sensor substrate. Further, each pair of mutually adjacent first-row and second-row line sensors is arranged so that their adjacent end portions have an overlapping region in the main scanning direction (hereinafter referred to as an "overlap region"). The imaging unit 102 optically reads an image of a document as an object to be read and generates an electric signal (image data) SI corresponding to the image of the document. The electric signal (image data) SI generated by the imaging unit 102 includes first image data output from the first-row line sensors constituting the first-row line sensor group and second image data output from the second-row line sensors constituting the second-row line sensor group. Note that an optical system, such as lenses that form an erect image on each of the first-row and second-row line sensor groups, may be provided between the line sensor groups and the document. In the following description, an example in which the imaging unit 102 includes two rows of line sensor groups is described; however, the present invention is also applicable when there are three or more rows of line sensor groups. In addition, the present invention is applicable to any case where an image of an object to be read is read by an imaging unit having two or more line sensors with an overlap region. Therefore, the present invention is also applicable when the first-row line sensor group consists of a single line sensor and the second-row line sensor group consists of a single line sensor.
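The staggered two-row arrangement described above can be sketched as a minimal geometric model. This is not the patent's implementation: the sensor width W and overlap width V (in pixels) are hypothetical parameters, and the sensors are assumed to be evenly spaced.

```python
def staggered_layout(n_sensors, W=512, V=32):
    """Return the (start, end) pixel range of each line sensor in the
    main scanning direction, plus the overlap regions shared by each
    adjacent pair.  W and V are assumed, not from the patent."""
    sensors = []
    for i in range(n_sensors):
        start = i * (W - V)            # each sensor begins V pixels before
        row = 1 if i % 2 == 0 else 2   # the previous one ends; rows alternate
        sensors.append({"row": row, "start": start, "end": start + W})
    # overlap of sensor i and sensor i+1: from the later start to the
    # earlier end, which is exactly V pixels wide
    overlaps = [(s2["start"], s1["end"])
                for s1, s2 in zip(sensors, sensors[1:])]
    return sensors, overlaps
```

With n_sensors = 4 this yields four strips in rows 1, 2, 1, 2 and three overlap regions of width V, mirroring the regions A1,1, A1,2, ... described later in the text.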
The A/D conversion unit 103 converts the electric signal (image data) SI output from the imaging unit 102 into digital data (image data) DI. The image data DI is input to the image processing unit 104 and stored in the image memory 141 in the image processing unit 104.
The image data DI stored in the image memory 141 in the image processing unit 104 includes first image data based on the image data output from the first-row line sensors constituting the first-row line sensor group, and second image data based on the image data output from the second-row line sensors constituting the second-row line sensor group.
Based on the thinning rate M, which is set in accordance with the scaling factor R, the read control unit 142 in the image processing unit 104 reads out, from the image data DI stored in the image memory 141, reference data at a predetermined position in the sub-scanning direction (also referred to as a "predetermined sub-scanning position" or a "predetermined line") among the image data of an overlap region (for example, reference data MO(Ok, dr) in FIGS. 26(b) and 27(b) described later), and comparison data of a predetermined number of lines of the same overlap region read in duplicate with the reference data (for example, comparison data ME(Ek, dl) in FIGS. 26(b) and 27(b) described later).
The similarity calculation unit 143 in the image processing unit 104 compares the reference data and the comparison data for the same overlap region read out by the read control unit 142, at each of a plurality of positions given by the comparison data, thereby calculating similarities in the overlap region between the reference data and the comparison data at a plurality of positions in the sub-scanning direction (that is, a plurality of lines of comparison data). In other words, the similarity calculation unit 143 calculates, with respect to the reference data, a similarity for each of the plurality of lines of comparison data (that is, a plurality of similarities corresponding to the plurality of pieces of comparison data).
The shift amount estimation unit 144 in the image processing unit 104 calculates, based on the comparison data with the highest similarity among the plurality of similarities calculated by the similarity calculation unit 143, shift amount data dsh corresponding to the difference between the position of the reference data in the sub-scanning direction and the position of the comparison data with the highest similarity in the sub-scanning direction.
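The matching performed by the similarity calculation unit 143 and the shift amount estimation unit 144 can be sketched as follows. The sum of absolute differences (SAD) is used here as an assumed similarity measure; the text does not fix a particular measure at this point.

```python
import numpy as np

def estimate_shift(reference_line, comparison_lines, positions):
    """Pick the comparison line most similar to the reference line and
    return the sub-scanning offset between them.

    reference_line   : 1-D array of overlap-region pixels of one line
    comparison_lines : 2-D array of candidate lines from the adjacent
                       sensor's overlap region (one row per line)
    positions        : sub-scanning position of each candidate line,
                       relative to the reference line

    Similarity is taken as negative SAD (an assumption): the smallest
    sum of absolute differences marks the most similar line.
    """
    sad = np.abs(comparison_lines - reference_line).sum(axis=1)
    best = int(np.argmin(sad))      # highest similarity = smallest SAD
    return positions[best]          # shift amount dsh
```

The returned offset plays the role of the shift amount data dsh: the sub-scanning displacement between the reference line and its best match.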
The shift amount enlargement unit 145 in the image processing unit 104 converts the shift amount data dsh into enlarged shift amount data Δyb by performing processing (interpolation processing) for enlarging the shift amount data dsh based on the thinning rate M.
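One way to sketch this enlargement is repeated insertion followed by averaging, the simplified scheme mentioned later in the text in connection with the thinning rate M; the exact smoothing window length is an assumption here.

```python
import numpy as np

def enlarge_shift(d_sh, M):
    """Expand per-line shift data dsh (estimated on lines thinned by
    the rate M) back to full resolution: repeat each value M times,
    then smooth each sample with the mean of a window of about M
    neighbouring lines.  The window length is an assumed detail."""
    repeated = np.repeat(np.asarray(d_sh, dtype=float), M)
    smoothed = np.empty_like(repeated)
    for i in range(len(repeated)):
        lo = max(0, i - M // 2)
        hi = min(len(repeated), i + M // 2 + 1)
        smoothed[i] = repeated[lo:hi].mean()   # local average
    return smoothed                            # enlarged data Δyb
```

Repetition restores the line count removed by thinning, and the averaging softens the resulting staircase so that Δyb varies gradually between estimated shift values.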
The combination processing unit 146 in the image processing unit 104 changes the positions in the sub-scanning direction of the image data read out from the image memory 141 based on the enlarged shift amount data Δyb; that is, it eliminates the positional deviation in the sub-scanning direction between the first image data read out from the image memory 141 (image data corresponding to the images read by the first-row line sensor group) and the second image data (image data corresponding to the images read by the second-row line sensor group), and then combines the first image data and the second image data.
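The align-and-join step of this combination can be sketched under strong simplifications: a single integer shift for the whole strip (whereas the text applies position-dependent Δyb) and a fixed overlap width in pixels, both assumptions for illustration.

```python
import numpy as np

def combine(first, second, shift, overlap):
    """Align `second` to `first` by moving it `shift` lines in the
    sub-scanning direction (axis 0), then join the two strips in the
    main scanning direction (axis 1), keeping the overlap columns
    only once.  Arrays are (lines, pixels)."""
    aligned = np.roll(second, shift, axis=0)   # cancel sub-scan offset
    return np.hstack([first, aligned[:, overlap:]])
```

In the full scheme each sub-scanning position would be shifted by its own Δyb value before joining; here a constant shift stands in for that per-line correction.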
The controller 107 controls the operations of the imaging unit 102 and the image processing unit 104. For example, the controller 107 controls the reading operation of the imaging unit 102 and the image processing of the image processing unit 104 by sending setting information, such as an input scaling factor R, or instruction information to the imaging unit 102 and the image processing unit 104. When the image reading apparatus 101 performs scaling processing, if a scaling factor is designated in the controller 107 of the image reading apparatus 101, the controller 107 sends setting information for setting the reading magnification (scaling factor) R to the imaging unit 102, and sends instruction information, such as the thinning rate M set based on the scaling factor, to the image processing unit 104. As a result, the imaging unit 102 reads the document at a scaling factor other than unity (that is, other than a scaling factor of 1), and generates and outputs image data (an image signal) whose number of lines is enlarged or reduced in the sub-scanning direction in accordance with the scaling factor.
Here, the thinning rate M set in the controller 107 of the image reading apparatus 101 is set to an integer value of 1 or more close to the scaling factor R, which is the reading magnification. For example, when the scaling factor R is 2 (R = 2), the thinning rate M is set to 2, and when the scaling factor R is 4 (R = 4), the thinning rate M is set to 4. Further, for example, when the scaling factor R is 0.8 (R = 0.8), the thinning rate M is set to 1, and when the scaling factor R is 1.5 (R = 1.5), an intermediate magnification between 1 and 2, the thinning rate M is set to 2. Thus, the thinning rate M is set, for example, to an integer value equal to the scaling factor R or to the integer value closest to the scaling factor R. Note that the method of setting the thinning rate M corresponding to the scaling factor R is not limited to this example. As a result, the image processing unit 104 does not need to perform the thinning processing in the read control unit 142 and the enlargement processing in the shift amount enlargement unit 145 by complicated processing such as enlargement or reduction by filter operations like line interpolation; these can be performed by simplified processing such as line thinning at integer-line intervals, or double-writing interpolation (repeated insertion) and averaging, so there is no need to increase the circuit scale of the image processing unit 104.
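The mapping from the scaling factor R to the thinning rate M described above can be written directly; rounding halves upward is an assumption, chosen to be consistent with the R = 1.5 example.

```python
def thinning_rate(R):
    """Map the scaling factor R to the thinning rate M: the integer
    value (>= 1) closest to R, matching the examples in the text
    (R=2 -> 2, R=4 -> 4, R=0.8 -> 1, R=1.5 -> 2).  Half values are
    assumed to round upward."""
    return max(1, int(R + 0.5))
```

Keeping M an integer is what allows the later thinning and enlargement steps to use integer-line skipping and repeated insertion instead of filter-based resampling.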
<< 4-1-2 >> Imaging Unit 102
FIGS. 18(a) and 18(b) are diagrams for explaining the imaging unit 102. FIG. 18(a) is a plan view schematically showing the imaging unit 102, and FIG. 18(b) is a plan view showing a document 160 as an object to be read. FIG. 18(a) shows, for example, a state in which a document table of a copying machine (hereinafter referred to as the "document table glass" or "glass surface") 126, on which a document as an object to be read is placed, is viewed from above. FIG. 19 is a diagram for explaining the configuration of one of the line sensors provided in the imaging unit 102 (the line sensor 121O1 is illustrated). Note that the document table 126 is not limited to a glass surface and may have another structure as long as the structure can fix the position of the document 160 at a predetermined position.
As shown in FIG. 18(a), the imaging unit 102 includes a sensor substrate 120. On the sensor substrate 120, two rows of line sensor groups, a first-row line sensor group and a second-row line sensor group, are arranged. On the sensor substrate 120, the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On counted from one end (for example, the left side) are arranged linearly at intervals in the main scanning direction (X direction). On the sensor substrate 120, the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En counted from the same end are arranged linearly at intervals in the main scanning direction, at positions differing in the main scanning direction from the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On, so that the two rows partially face each other in a staggered pattern. Here, n is an integer of 2 or more, and k is an integer from 1 to n. The odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On constitute the first-row line sensor group including a plurality of first-row line sensors (or the second-row line sensor group including a plurality of second-row line sensors), and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En constitute the second-row line sensor group including a plurality of second-row line sensors (or the first-row line sensor group including a plurality of first-row line sensors).
As shown in FIG. 18(a), the line sensors belonging to the first-row line sensor group (for example, 121E1, ..., 121Ek, ..., 121En) are arranged so as to face the regions between line sensors adjacent to each other in the main scanning direction in the second-row line sensor group. The line sensors belonging to the second-row line sensor group (for example, 121O1, ..., 121Ok, ..., 121On) are arranged so as to face the regions between line sensors adjacent to each other in the main scanning direction in the first-row line sensor group. Further, the adjacent (closest) end portions (end portions sr and sl) of each adjacent pair of a first-row line sensor and a second-row line sensor have an overlapping region (overlap region) in the main scanning direction.
As shown in FIG. 18(a), the imaging unit 102 is moved in the sub-scanning direction (Y direction) by a conveyance unit 124 (shown, for example, in FIG. 20(a)) and reads the document 160 as an object to be read. Alternatively, the imaging unit 102 may be fixed, and the conveyance unit 124 may convey the document 160 in the direction opposite to the sub-scanning direction (-Y direction) to read the document 160. In each embodiment of the present application, the case where the imaging unit 102 is moved by the conveyance unit 124 in the direction of the arrow Dy (shown, for example, in FIG. 18(a)) is described. The sub-scanning direction (Y direction) indicates the moving direction of the imaging unit 102 (the arrow Dy direction in FIG. 18(a)), and the main scanning direction indicates the arrangement direction of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On or the arrangement direction of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En.
When the image reading apparatus 101 performs scaling processing, if a scaling factor is designated in the controller 107 by a user or the like, setting information for setting the reading magnification, that is, the scaling factor R, is sent to the imaging unit 102. The scaling factor in the imaging unit 102 is changed, for example, by keeping the reading period of the line sensors constant and changing the speed at which the conveyance unit 124 moves the imaging unit 102 to 1/R of the speed at a scaling factor of 1. Note that the scaling processing can also be performed by fixing the imaging unit 102 and changing the speed at which the document 160 is conveyed to 1/R of the speed at a scaling factor of 1.
As shown in FIG. 19, the line sensor 121O1 includes a plurality of red photoelectric conversion elements (R photoelectric conversion elements) 126R that convert the red component of received light into an electric signal, a plurality of green photoelectric conversion elements (G photoelectric conversion elements) 126G that convert the green component of received light into an electric signal, and a plurality of blue photoelectric conversion elements (B photoelectric conversion elements) 126B that convert the blue component of received light into an electric signal. As shown in FIG. 19, the R photoelectric conversion elements 126R are arranged linearly in the main scanning direction (X direction), the G photoelectric conversion elements 126G are arranged linearly in the main scanning direction (X direction), and the B photoelectric conversion elements 126B are arranged linearly in the main scanning direction (X direction). In the fourth embodiment, the line sensor having the configuration shown in FIG. 19 is described; however, the present invention is also applicable to an image reading apparatus provided with line sensors in which monochrome photoelectric conversion elements that do not distinguish colors are arranged in a single row in the main scanning direction (X direction). Further, the arrangement of the R photoelectric conversion elements 126R, the G photoelectric conversion elements 126G, and the B photoelectric conversion elements 126B is not limited to the arrangement shown in FIG. 19, and other arrangements may be adopted. The line sensor 121O1 outputs the received light information as an electric signal SI(O1). Similarly, the line sensors 121E1, 121O2, ..., 121On, 121En each output the received light information as electric signals SI(E1), SI(O2), ..., SI(On), SI(En). When the electric signals SI(E1), SI(O2), ..., SI(On), SI(En) are referred to collectively, the electric signal (image data) output from each line sensor is also written as SI. The electric signal SI output from the imaging unit 102 is input to the A/D conversion unit 103.
The odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En partially overlap each other at their end portions in the main scanning direction. That is, the line sensors 121O1, ..., 121Ok, ..., 121On and the line sensors 121E1, ..., 121Ek, ..., 121En have overlap regions A1,1, A1,2, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n in which the document 160 is read. Details of the overlap regions are described later.
The electric signal SI input to the A/D conversion unit 103 is converted into digital data (image data) DI and stored in the image memory 141 of the image processing unit 104.
<< 4-1-3 >> Image Processing Unit 104
FIGS. 20(a) to 20(c) are diagrams for explaining the image data DI stored in the image memory 141 in the image processing unit 104. FIG. 20(a) is a diagram showing the positional relationship between the document 160 and the line sensors when the document 160 is at the position where the optical axis 127O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the optical axis 127E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En intersect (that is, when the optical axes 127O and 127E, viewed in the Y direction, intersect on the glass surface 126). FIG. 20(b) is a diagram showing an example of the document 160. FIG. 20(c) is a diagram conceptually showing the image data DI corresponding to the document 160 of FIG. 20(b) read when the document 160 and the line sensors are in the positional relationship of FIG. 20(a).
FIG. 20(a) is a schematic side view of the image reading apparatus 101, showing a state in which an apparatus including the image reading apparatus 101 (for example, a copying machine) is viewed from the side. The odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On shown in FIG. 18(a) are also written as line sensors 121O, and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are also written as line sensors 121E. Reflected light from the document 160 illuminated by an illumination light source 125, such as a light-emitting diode (LED), enters the line sensors 121O along the optical axis 127O and enters the line sensors 121E along the optical axis 127E. The imaging unit 102 conveyed in the sub-scanning direction (Y direction) sequentially photoelectrically converts the light reflected from the document 160 placed on the glass surface 126 and outputs the converted electric signal SI, and the A/D conversion unit 103 converts the electric signal SI into image data DI and outputs it. The direction of the optical axis 127O can be set to a desired direction by the way the line sensors 121O1, ..., 121Ok, ..., 121On are installed, and the direction of the optical axis 127E can be set to a desired direction by the way the line sensors 121E1, ..., 121Ek, ..., 121En are installed. The direction of the optical axis 127O can also be set by an optical system, such as a lens, arranged between the line sensors 121O1, ..., 121Ok, ..., 121On and the glass surface 126, and the direction of the optical axis 127E can also be set by an optical system, such as a lens, arranged between the line sensors 121E1, ..., 121Ek, ..., 121En and the glass surface 126.
When a document 160 as shown in FIG. 20(b) is sequentially photoelectrically converted by the imaging unit 102 and the resulting electric signal SI is converted into image data DI by the A/D conversion unit 103, image data DI as shown in FIG. 20(c) is stored in the image memory 141. The image data DI consists of the image data DI(O1), ..., DI(Ok), ..., DI(On) generated by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the image data DI(E1), ..., DI(Ek), ..., DI(En) generated by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En. FIG. 20(c) shows the image data DI(Ok) and DI(Ok+1) generated by the odd-numbered line sensors 121Ok and 121Ok+1, and the image data DI(Ek) and DI(Ek+1) generated by the even-numbered line sensors 121Ek and 121Ek+1.
Here, the overlap areas A1,1, A1,2, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n will be described. As shown in FIG. 18A, when the imaging unit 102 reads the original 160 while moving in the sub-scanning direction (Y direction), the areas on the original 160 read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the areas on the original 160 read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En partially overlap each other (overlap areas). For example, the right end (end portion) sr of the line sensor 121O1 and the left end (end portion) sl of the line sensor 121E1 both read the overlap area A1,1 of the original 160. Similarly, the right end sr of the line sensor 121E1 and the left end sl of the line sensor 121O2 both read the overlap area A1,2 of the original 160.
In the example shown in FIG. 20B, the right end sr of the line sensor 121Ok and the left end sl of the line sensor 121Ek both read the overlap area Ak,k of the original 160; the right end sr of the line sensor 121Ek and the left end sl of the line sensor 121Ok+1 both read the overlap area Ak,k+1; and the right end sr of the line sensor 121Ok+1 and the left end sl of the line sensor 121Ek+1 both read the overlap area Ak+1,k+1.
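The alternating pairing of sensor ends and overlap areas described above can be enumerated mechanically. The following Python sketch is purely illustrative: the tuple labels such as "O1.sr" are ad hoc names for "right end of odd sensor 1", not notation from the embodiment.

```python
# Pairing of sensor ends with overlap areas A_{i,j} (labels are illustrative).
# Odd sensor O_k's right end and even sensor E_k's left end share A_{k,k};
# E_k's right end and O_{k+1}'s left end share A_{k,k+1}.
def overlap_pairs(n):
    pairs = []
    for k in range(1, n + 1):
        pairs.append((f"O{k}.sr", f"E{k}.sl", f"A{k},{k}"))
        if k < n:
            pairs.append((f"E{k}.sr", f"O{k+1}.sl", f"A{k},{k+1}"))
    return pairs

# With n = 2 sensors of each kind, three overlap areas are shared:
assert overlap_pairs(2) == [
    ("O1.sr", "E1.sl", "A1,1"),
    ("E1.sr", "O2.sl", "A1,2"),
    ("O2.sr", "E2.sl", "A2,2"),
]
```

For n sensor pairs this yields 2n - 1 overlap areas, matching the staggered arrangement of FIG. 18A.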
Therefore, the image data DI(Ok) corresponding to the line sensor 121Ok includes digital data dr corresponding to the overlap area Ak,k of the original 160, and the image data DI(Ek) corresponding to the line sensor 121Ek includes digital data dl corresponding to the same overlap area Ak,k. When the original 160 is in close contact with the glass surface 126 as shown in FIG. 20A, the reading positions of the line sensors 121O and 121E in the sub-scanning direction (Y direction) of the original 160 are substantially the same, so that, as shown in FIG. 20C, the adjacent digital data dr and dl have no positional shift in the sub-scanning direction (Y direction) of the original 160.
Next, a case will be described in which the original 160 is separated from the glass surface 126, so that the positional relationship between the original 160 and the line sensors differs from that shown in FIG. 20A. FIGS. 21A to 21C are diagrams for explaining the image data DI stored in the image memory 141. FIG. 21A shows the positional relationship between the original 160 and the line sensors when the original 160 is lifted off the glass surface 126 and is located at a position different from the position where the optical axes of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On intersect the optical axes of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En. FIG. 21B shows an example of the original 160, and FIG. 21C conceptually shows the image data DI corresponding to the original 160 of FIG. 21B when the original 160 and the line sensors are in the positional relationship of FIG. 21A.
Even when the original 160 is lifted off the glass surface 126, the positional relationship between the line sensors and the original 160 as viewed in plan does not change. That is, the image data acquired by each line sensor reading the image of the original 160 in close contact with the glass surface 126 and the image data acquired by each line sensor reading the image of the original 160 lifted off the glass surface 126 are the same with respect to the main scanning direction (X direction). Therefore, in FIG. 21B, as in FIG. 20B, the right end sr of the line sensor 121Ok and the left end sl of the line sensor 121Ek both read the overlap area Ak,k of the original 160; the right end sr of the line sensor 121Ek and the left end sl of the line sensor 121Ok+1 both read the overlap area Ak,k+1; and the right end sr of the line sensor 121Ok+1 and the left end sl of the line sensor 121Ek+1 both read the overlap area Ak+1,k+1.
On the other hand, as shown in FIG. 21A, when the imaging unit 102 is viewed from the side, since the original 160 is lifted off the glass surface 126, the position where the optical axis 127O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On intersects the original 160 differs from the position where the optical axis 127E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En intersects the original 160. Therefore, when the original 160 is lifted off the glass surface 126, the reading positions in the sub-scanning direction (Y direction) differ. That is, the image data acquired by each line sensor reading the image of the original 160 in close contact with the glass surface 126 and the image data acquired by each line sensor reading the image of the original 160 lifted off the glass surface 126 are different with respect to the sub-scanning direction (Y direction). This is because, when the imaging unit 102 is conveyed in the sub-scanning direction (Y direction), each line sensor performs photoelectric conversion sequentially, so that the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En acquire the image data of an image at the same position later in time than the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On. Therefore, as shown in FIG. 21C, the image data DI(Ok), DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok, 121Ok+1 and the image data DI(Ek), DI(Ek+1) corresponding to the even-numbered line sensors 121Ek, 121Ek+1 are stored in the image memory 141 with their positions in the sub-scanning direction shifted. That is, the position in the sub-scanning direction of the image data DI(Ok), DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok, 121Ok+1 and the position in the sub-scanning direction of the image data DI(Ek), DI(Ek+1) corresponding to the even-numbered line sensors 121Ek, 121Ek+1 are different positions (different lines).
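The time lag between the odd-numbered and even-numbered sensors can be illustrated numerically. The following Python sketch is illustrative only: the toy "document" array and the shift value `delta_lines` are assumptions for demonstration, not values from the embodiment. It shows that when the even-numbered sensors acquire each document line a few lines later, the same memory line index ends up holding data from different document positions.

```python
# Toy model of the sub-scanning shift between odd (121O) and even (121E)
# line sensors when the original is lifted off the glass surface.
# 'delta_lines' is an illustrative shift, not a value from the embodiment.
document = list(range(20))   # document lines 0..19 in the sub-scanning direction
delta_lines = 3              # even sensors see each position 3 lines later

# Memory line m of the odd sensor holds document line m;
# memory line m of the even sensor holds document line m - delta_lines.
mem_odd = [document[m] for m in range(delta_lines, 20)]
mem_even = [document[m - delta_lines] for m in range(delta_lines, 20)]

# The same memory index now refers to different document positions:
assert mem_odd[5] - mem_even[5] == delta_lines
```

When the original lies flat on the glass (`delta_lines = 0`), the two memories coincide, which corresponds to the shift-free case of FIG. 20C.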
Here, the image data DI stored in the image memory 141 when scaling processing is performed will be described. In both the case where the original 160 is in close contact with the glass surface 126 (see FIG. 20A) and the case where it is separated from the glass surface 126 (see FIG. 21A), when the original 160 is read by the imaging unit 102 with the scaling ratio R changed, the number of lines read in the sub-scanning direction is enlarged or reduced by the factor R in the sub-scanning direction, with the case where the scaling ratio R is 1 (R = 1) as the reference, and the result is stored in the image memory 141 as the image data DI. For example, when the scaling ratio R is 4 (R = 4), the image data DI corresponding to the original 160 shown in FIGS. 20C and 21C is stored in the image memory 141 as an image having four times the number of lines in the sub-scanning direction as when the scaling ratio R is 1. When the scaling ratio R is smaller than 1 (for example, R = 0.8), the read image is reduced, and an image whose number of lines in the sub-scanning direction is multiplied by 0.8 is stored in the image memory 141.
That is, relative to the distance between a certain line Ym and the line Ym+1 one line below it at the normal scaling ratio (equal magnification, R = 1), referred to as one line or one line interval, during scaling processing, that is, when R ≠ 1, the distance between the lines obtained by reading the same positions on the original 160 as the lines Ym and Ym+1 becomes a number of lines enlarged or reduced by the factor R. Therefore, when the original 160 is in close contact with the glass surface 126, the adjacent digital data dr and dl shown in FIG. 20C have no positional shift in the sub-scanning direction of the original 160 (that is, the positional shift amount is 0), and likewise have no positional shift in the sub-scanning direction during scaling processing. On the other hand, when the original 160 is separated from the glass surface 126, the image data DI(Ok), DI(Ok+1) and the image data DI(Ek), DI(Ek+1) shown in FIG. 21C are read with their positions in the sub-scanning direction of the original 160 shifted, so that during scaling processing the digital data dr and dl are stored in the image memory 141 as data having a positional shift equal to the positional shift amount at a scaling ratio of 1 multiplied by the value of the scaling ratio R.
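The two scaling relationships above reduce to one-line arithmetic. The following is a minimal Python sketch; the function names and the concrete numbers are illustrative assumptions, not values from the embodiment.

```python
def scaled_lines(base_lines, r):
    """Number of stored sub-scanning lines at scaling ratio R (R = 1 is the reference)."""
    return round(base_lines * r)

def scaled_shift(base_shift, r):
    """Positional shift between d_r and d_l at scaling ratio R."""
    return base_shift * r

# R = 4 quadruples both the line count and the shift; R = 0.8 reduces them.
assert scaled_lines(100, 4) == 400
assert scaled_lines(100, 0.8) == 80
assert scaled_shift(2, 4) == 8   # a 2-line shift at R = 1 becomes 8 lines at R = 4
assert scaled_shift(0, 4) == 0   # a flat original stays shift-free at any R
```

The last assertion reflects the key point of the paragraph: scaling enlarges an existing shift but never introduces one when the original is in close contact with the glass surface.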
The image processing unit 104 reads the image data in the overlap areas from the image memory 141 based on a thinning rate M set according to the scaling ratio R, compares the reference data with a plurality of pieces of comparison data to calculate a plurality of similarities, and calculates shift amount data dsh using the position in the sub-scanning direction of the comparison data yielding the highest of the calculated similarities. Further, the image processing unit 104 generates enlarged shift amount data Δyb by performing processing that enlarges the shift amount data dsh in the sub-scanning direction using the thinning rate M, reads from the image memory 141 the image data at the position in the sub-scanning direction based on the enlarged shift amount data Δyb, and executes combining processing of the image data. The processing of the image processing unit 104 will be described concretely below.
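The shift estimation step described above can be sketched in a few lines of Python. This is a hedged illustration only: the embodiment does not fix a similarity measure, so negative sum of absolute differences is used here as a stand-in, and the helper names, the sample pixel values, and the final multiplication by M as the "enlargement" of dsh are all assumptions.

```python
# Sketch of the shift-amount estimation in the image processing unit 104.
# The similarity measure (negative sum of absolute differences) and all
# helper names are illustrative assumptions.
def estimate_shift(reference, candidates):
    """Return the candidate offset whose data is most similar to the reference.

    reference:  list of pixel values (the reference data MO)
    candidates: dict mapping offset dY (in lines) -> list of pixel values (ME)
    """
    def similarity(a, b):
        return -sum(abs(x - y) for x, y in zip(a, b))
    return max(candidates, key=lambda dy: similarity(reference, candidates[dy]))

ref = [10, 20, 30, 40]
cands = {-1: [0, 10, 20, 30], 0: [11, 19, 31, 40], 1: [20, 30, 40, 50]}
d_sh = estimate_shift(ref, cands)   # shift amount data d_sh
assert d_sh == 0                    # offset 0 matches the reference best

M = 4
delta_yb = d_sh * M                 # enlarged shift amount (assumed to scale by M)
assert delta_yb == 0
```

The combining processing would then read the image data at the line offset `delta_yb` from the image memory, which is the role of Δyb in the text above.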
The read control unit 142 in the image processing unit 104 receives from the controller 107 the thinning rate M set according to the scaling ratio R. Based on the thinning rate M (for example, at every M-line interval indicated by the thinning rate M), the read control unit 142 reads, from the image data in the overlap areas in the image memory 141, that is, from the image data DI, the image data at and around a predetermined position Ym in the sub-scanning direction (Y direction) (for example, the read reference line Ym and the lines shown by cross-hatching at every M-line interval in FIGS. 26A and 27A described later) as reference data rMO. In addition, for the same overlap area read in duplicate with the reference data rMO, the read control unit 142 reads, as comparison data rME, the image data at and around positions within a predetermined range of lines before and after the position Ym of the reference data rMO in the sub-scanning direction (Y direction) (for example, the read reference line Ym and the lines shown by hatching at every M-line interval in FIGS. 26A and 27A described later). The read control unit 142 outputs reference data MO and comparison data ME each having a predetermined number of lines. The predetermined range is the range from -y to +y centered on the reference line Ym, that is, the search range "-y to +y". Here, y is a value whose unit is a predetermined number of lines at the scaling ratio R = 1. The +y direction is the sub-scanning direction (Y direction), and the -y direction is the direction opposite to the +y direction.
The predetermined search range "-y to +y" is the search range for the difference from the center line position of the reference data rMO, that is, the positional shift amount (also called the "shift amount"; denoted as positional shift amount ΔYa or positional shift amount ΔY). The predetermined search range "-y to +y" is determined in consideration of the amount of positional shift in the sub-scanning direction that can occur, due to the distance between the original 160 and the glass surface 126 or a deviation between the optical axes 127O and 127E, when the original 160 is read by the imaging unit 102 of the image reading apparatus 101 at equal magnification (R = 1). The predetermined search range "-y to +y" is the search range over which the position (line) in the sub-scanning direction by which the image data is shifted in the combining processing, that is, the positional shift amount in the sub-scanning direction, is detected, and the positional shift amount ΔY takes a value within the predetermined search range "-y to +y". Here, the positional shift amount when reading from the image memory 141 (the difference between the center position of the reference data and the line position) is denoted as the positional shift amount ΔYa, and the positional shift amount at the output of the read control unit 142 is denoted as the positional shift amount ΔY. The positional shift amount ΔYa lies within a range of M times the number of lines according to the thinning rate M, which changes with the scaling ratio R, that is, within the search range from -(M × y) to +(M × y) centered on the reference line. This search range is also written as "-(M × y) to +(M × y)". The positional shift amount ΔY at the output of the read control unit 142 is always a value within the positional shift search range "-y to +y".
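The relationship between the memory-side shift ΔYa and the output-side shift ΔY is a pure change of units by the thinning factor M. The following Python sketch uses illustrative values for M and y to make that relationship concrete.

```python
# Relationship between the memory-side shift dYa and the output-side shift dY.
# dYa is measured in stored lines, dY in thinned (M-line) units.
# The concrete values of M and y are illustrative.
M, y = 4, 2   # thinning rate and half search range

# Offsets actually visited in the image memory: multiples of M.
memory_offsets = [dy * M for dy in range(-y, y + 1)]
assert memory_offsets == [-8, -4, 0, 4, 8]   # within "-(M*y) .. +(M*y)"

# Offsets as seen at the output of the read control unit: always -y..+y.
output_offsets = [dya // M for dya in memory_offsets]
assert output_offsets == [-2, -1, 0, 1, 2]   # within "-y .. +y"
```

However large M becomes, the output offsets stay in the fixed range "-y to +y", which is why the downstream similarity calculation never needs a wider buffer.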
When performing equal-magnification processing in which the thinning rate M is 1 and the scaling ratio R is equal to 1, or reduction scaling processing in which the scaling ratio R is smaller than 1, the read control unit 142, based on the thinning rate M, reads at every one-line interval (that is, at the normal line interval) the data rMO consisting of the line Ym and its surrounding image data in one overlap area in the image memory 141 and outputs it as reference data MO; for the corresponding other overlap area, it reads the data rME consisting of the lines within the search range "-y to +y" centered on the line Ym, that is, the lines in the range from line (Ym - y) to line (Ym + y), and their surrounding image data, and outputs it as comparison data ME. In this way, in equal-magnification processing, the read control unit 142 outputs the reference data MO and the comparison data ME of the lines within the search range "-y to +y" centered on the line Ym. The range that the positional shift amount ΔY can take is the search range over which the positional shift amount in the sub-scanning direction is detected, and the reference data MO and the comparison data ME within the search range "-y to +y" centered on the line Ym are sent to the similarity calculation unit 143.
When performing scaling processing with a thinning rate M, the read control unit 142, based on the thinning rate M, reads at every M-line interval the image data rMO at and around the line Ym in one overlap area in the image memory 141 and outputs it as reference data MO; for the corresponding overlap area, it reads the image data rME at every M lines within the search range "-(M × y) to +(M × y)" centered on the line Ym, that is, within the range from line (Ym - (M × y)) to line (Ym + (M × y)), and outputs it as comparison data ME. The positional shift amount ΔYa at the time of reading corresponds to a line within the search range "-(M × y) to +(M × y)" centered on the line Ym. The search range "-(M × y) to +(M × y)" centered on the line Ym is a range of M times the number of lines, based on the thinning rate M. However, since the read control unit 142 reads at every M-line interval according to the thinning rate M (that is, reads one line out of every M lines), the image data output from the read control unit 142 consists of lines within a number of lines corresponding to the search range "-y to +y" centered on the line Ym. Thus, in scaling processing, the read control unit 142 outputs the reference data MO and the comparison data ME of the lines within the search range "-y to +y" centered on the line Ym. The range that the positional shift amount ΔY can take is the search range over which the positional shift amount in the sub-scanning direction is detected, and the reference data MO and the comparison data ME within the search range "-y to +y" centered on the line Ym are sent to the similarity calculation unit 143.
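The key property of the thinned read, that the number of comparison lines output is the same for every M, can be demonstrated with a short sketch. The memory array, the center position, and the values of y and M below are illustrative assumptions.

```python
# Reading comparison data at every M-th line keeps the output line count
# constant regardless of the thinning rate M (all values are illustrative).
def read_comparison_lines(memory, center, y, m):
    """Lines read as rME: every m-th line in center-(m*y) .. center+(m*y)."""
    return [memory[center + dy * m] for dy in range(-y, y + 1)]

memory = list(range(100))   # stand-in for one overlap region in image memory
y = 2                       # half search range at R = 1

equal_mag = read_comparison_lines(memory, center=50, y=y, m=1)   # R = 1
scaled = read_comparison_lines(memory, center=50, y=y, m=4)      # R = 4

assert equal_mag == [48, 49, 50, 51, 52]
assert scaled == [42, 46, 50, 54, 58]
assert len(equal_mag) == len(scaled) == 2 * y + 1   # same count either way
```

The scaled read covers a 4x wider span of the memory, yet delivers the same 2y + 1 lines to the similarity calculation unit 143, which is the basis of the circuit-scale argument that follows.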
As described above, using the thinning rate M, the read control unit 142 reads, from the image data of one overlap area stored in the image memory 141, the data at every M-line interval as reference data rMO and outputs it as reference data MO, and reads, from the image data of the other overlap area stored in the image memory 141, the data at every M-line interval whose positional shift amount ΔYa is within the search range "-(M × y) to +(M × y)" as comparison data rME and outputs it as comparison data ME. For this reason, in the read control unit 142, the positional shift amount ΔYa at the time of reading spans a range of M times the number of lines, using the thinning rate M, so as to cover the line range changed by the scaling ratio R. However, at the output of the read control unit 142, the search range over which the positional shift amount in the sub-scanning direction is detected is such that the positional shift amount ΔY lies, in units of one line, within the search range "-y to +y" centered on the line Ym, so that the read control unit 142 can output the reference data MO, which is not changed by the scaling ratio, and the comparison data ME of the lines within the search range "-y to +y". Therefore, image data of lines corresponding to a sub-scanning range over which the positional shift amount can be searched accurately can be obtained without increasing the circuit scale, for example by adding line memories to expand the search range of the positional shift amount.
<< 4-2 >> Operation of Embodiment 4
<< 4-2-1 >> Operation of the Read Control Unit 142
FIGS. 22 to 25 are diagrams for explaining the operation by which the read control unit 142 reads the reference data rMO and the comparison data rME from the image memory 141 using the thinning rate M.
FIGS. 22 and 23 show the case of equal-magnification processing (R = 1), and FIGS. 24 and 25 show the case of scaling processing (for example, R = 4). When R = 4, the one-line interval in the case of R = 1 becomes a four-line interval, four times as large. FIG. 22 corresponds to FIG. 20C (the case where the original 160 is in close contact with the glass surface 126), and shows, for equal-magnification processing, the positional relationship between the image data rMO read as reference data at the position Ym (line Ym) in the sub-scanning direction (Y direction) and the image data rME read as comparison data. The image data rME read as comparison data is the image data whose positional shift amount ΔYa is within the search range "-y to +y" centered on the line Ym, that is, the image data within the range from line (Ym - y) to line (Ym + y). FIG. 23 corresponds to FIG. 21C (the case where the original 160 is separated from the glass surface 126), and shows, for equal-magnification processing, the positional relationship between the image data rMO read as reference data at the position Ym (line Ym) in the sub-scanning direction (Y direction) and the image data rME read as comparison data, whose positional shift amount ΔYa is within the search range "-y to +y" centered on the line Ym.
Similarly, FIG. 24 corresponds to FIG. 20C (the case where the original 160 is in close contact with the glass surface 126), and shows, for scaling processing (R = 4), the positional relationship between the image data rMO read as reference data at the position Ym4 in the sub-scanning direction (Y direction) (the reading position on the original 160 corresponding to the line Ym in FIG. 22) and the image data rME read as comparison data. The image data rME read as comparison data is the image data whose positional shift amount ΔYa is within the search range "-4y to +4y" centered on the line Ym4, that is, the image data within the range from line (Ym4 - 4y) to line (Ym4 + 4y). FIG. 25 corresponds to FIG. 21C (the case where the original is separated from the glass surface 126), and shows, for scaling processing (R = 4), the positional relationship between the image data rMO read as reference data at the position Ym4 in the sub-scanning direction (Y direction) (the reading position on the original 160 corresponding to the line Ym in FIG. 23) and the image data rME read as comparison data, whose positional shift amount ΔYa is within the search range "-4y to +4y".
As described with reference to FIG. 17, the read control unit 142 reads the data rMO stored in the image memory 141 based on the thinning rate M and outputs the reference data MO, and reads the data rME stored in the image memory 141 and outputs the comparison data ME. In the case of equal-magnification processing (R = 1) shown in FIGS. 22 and 23, the read control unit 142 reads, in the overlap region dr of the image data DI(Ok) corresponding to the line sensor 121Ok, the image data at and around the line Ym at every one-line interval as reference data rMO(Ok, dr, Ym), and reads, in the overlap region dl of the image data DI(Ek) corresponding to the line sensor 121Ek, the lines within the search range "-y to +y" centered on the line Ym and their surrounding image data as comparison data rME(Ek, dl, Ym). Since the read control unit 142 reads, as the comparison data rME(Ek, dl, Ym), the image data of the lines within the search range "-y to +y" centered on the line Ym, the image data (comparison data) rME is wider in the sub-scanning direction (Y direction) than the image data (reference data) rMO. In FIGS. 22 and 23, just as it reads the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym) from the overlap region dr of the image data DI(Ok) and the overlap region dl of the image data DI(Ek), the read control unit 142 successively reads the reference data rMO(Ok+1, dl, Ym) and the comparison data rME(Ek, dr, Ym) from the overlap region dl of the image data DI(Ok+1) and the overlap region dr of the image data DI(Ek), and reads the reference data rMO(Ok+1, dr, Ym) and the comparison data rME(Ek+1, dl, Ym) from the overlap region dr of the image data DI(Ok+1) and the overlap region dl of the image data DI(Ek+1).
In the case of scaling processing (R = 4) shown in FIGS. 24 and 25, the read control unit 142, based on the thinning rate M = 4, reads, in the overlap region dr of the image data DI(Ok) corresponding to the line sensor 121Ok, the image data at and around the line Ym4 at every four-line interval as reference data rMO(Ok, dr, Ym4), and reads, in the overlap region dl of the image data DI(Ek) corresponding to the line sensor 121Ek, the lines at every four-line interval within the search range "-4y to +4y" centered on the line Ym4 and their surrounding image data as comparison data rME(Ek, dl, Ym4). Since the read control unit 142 reads the image data of the lines at every four-line interval within the search range "-4y to +4y" centered on the line Ym4, the image data (comparison data) rME is wider in the sub-scanning direction (Y direction) than the image data (reference data) rMO. Also in FIGS. 24 and 25, just as it reads the reference data rMO(Ok, dr, Ym4) and the comparison data rME(Ek, dl, Ym4) from the overlap region dr of the image data DI(Ok) and the overlap region dl of the image data DI(Ek), the read control unit 142 successively reads the reference data rMO(Ok+1, dl, Ym4) and the comparison data rME(Ek, dr, Ym4) from the overlap region dl of the image data DI(Ok+1) and the overlap region dr of the image data DI(Ek), and reads the reference data rMO(Ok+1, dr, Ym4) and the comparison data rME(Ek+1, dl, Ym4) from the overlap region dr of the image data DI(Ok+1) and the overlap region dl of the image data DI(Ek+1). At the time of reading, the comparison data rME is read from the range in which the positional shift amount ΔYa lies within the search range "-4y to +4y", but since the read control unit 142 reads every fourth line according to the thinning rate M, the number of lines in the comparison data rME is equal to the number of lines in the one-line-unit range in which the positional shift amount ΔY lies within the search range "-y to +y".
FIGS. 26(a) and 26(b) and FIGS. 27(a) and 27(b) are diagrams for explaining the operation of the read control unit 142 in more detail. FIGS. 26(a) and 26(b) show the case of the equal magnification process (R = 1), and FIGS. 27(a) and 27(b) show an example of the scaling process (R = 4). They show, in the overlap region dr of the image data DI(Ok) and the overlap region dl of the image data DI(Ek), the positional relationship between the reference data rMO for the lines Ym and Ym4 (the readout reference lines) (the lines indicated by cross-hatching in FIGS. 26(a) and 27(a)) and the lines from which the comparison data rME for the positional deviation amount ΔYa (within the search range "-y to +y" in FIG. 26(a) and within the search range "-4y to +4y" in FIG. 27(a)) are read (the lines indicated by diagonal hatching in FIGS. 26(a) and 27(a)), as well as the positional relationship in the sub-scanning direction (line direction) between the reference data MO and the comparison data ME as output from the read control unit 142. As shown in FIGS. 26(a) and 27(a), the read control unit 142 reads the reference data rMO and the comparison data rME at every M-th line according to the thinning rate M, with the line Ym and the line Ym4 as the respective reference lines. The reference data rMO and the comparison data rME are normally composed of data of a plurality of pixels in the main scanning direction and the sub-scanning direction. Also, as shown in FIGS. 26(b) and 27(b), the reference data MO and the comparison data ME output from the read control unit 142 are composed of data of a plurality of pixels in the main scanning direction and the sub-scanning direction, centered on the detection reference line Y0.
First, the case of the equal magnification process (R = 1) shown in FIGS. 26(a) and 26(b) will be described. In the equal magnification process (R = 1), the read control unit 142 reads, at 1-line intervals, the image data of a width of bh lines in the sub-scanning direction centered on the reference line Ym as the reference data rMO(Ok, dr, Ym) (the lines indicated by cross-hatching in FIG. 26(a)), and reads, in the overlap region dl of the image data DI(Ek), for each line within the search range "-y to +y" centered on the reference line Ym, image data rME(Ek, dl, Ym, ΔYa) each having the same width of bh lines as rMO(Ok, dr, Ym), as the comparison data rME(Ek, dl, Ym) (the lines indicated by diagonal hatching in FIG. 26(a)). Here, the positional deviation amount ΔYa is the positional deviation at readout between the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym), and takes values at 1-line intervals within the search range "-y to +y". In FIG. 26(a), the data whose center position coincides with the reference line Ym are denoted image data rME(Ek, dl, Ym, 0); the data at the position shifted by one line interval are denoted rME(Ek, dl, Ym, 1); shifting further by one line interval at a time, the data shifted by y line intervals are denoted image data rME(Ek, dl, Ym, +y), and the data shifted by y line intervals in the opposite direction are denoted image data rME(Ek, dl, Ym, -y). The read control unit 142 reads the image data from rME(Ek, dl, Ym, -y) through rME(Ek, dl, Ym, +y) as the comparison data rME(Ek, dl, Ym). Here, the 1-line interval is a value expressing the width in the sub-scanning direction of a one-line image read by a line sensor in terms of the number of lines, and the M-line interval (M is a positive integer) is a value expressing the width in the sub-scanning direction of an M-line image sequentially read by a line sensor in terms of the number of lines arranged in the sub-scanning direction.
The read control unit 142 then arranges and outputs the reference data rMO(Ok, dr, Ym) and the comparison data rME(Ek, dl, Ym) as the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) centered on the detection reference line Y0. Here, "arranges and outputs them as the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) centered on the detection reference line Y0" means that they are output together with information indicating the positional relationship in the sub-scanning direction between the reference data MO(Ok, dr) and the comparison data ME(Ek, dl). The read comparison data rME(Ek, dl, Ym) consist of per-line image data centered on lines whose positional deviation amount ΔYa lies within the search range "-y to +y"; therefore, in the comparison data ME(Ek, dl) output from the read control unit 142, the image data rME(Ek, dl, Ym, ΔYa) in the read comparison data rME(Ek, dl, Ym) are arranged as they are, so that the search range of the positional deviation amount ΔY corresponds to the image data ME(Ek, dl, ΔY) of each line within the search range "-y to +y", and they are output as the reference data MO and comparison data ME covering the number of lines within the search range "-y to +y". Specifically, in FIGS. 26(a) and 26(b), the read control unit 142 arranges and outputs the image data rME(Ek, dl, Ym, 0), whose center position coincides with the detection reference line Y0, as ME(Ek, dl, 0); the data rME(Ek, dl, Ym, 1) at the position shifted by one line interval as ME(Ek, dl, 1); the data rME(Ek, dl, Ym, +y) shifted by y line intervals as ME(Ek, dl, +y); and the data rME(Ek, dl, Ym, -y) shifted by y line intervals in the opposite direction as ME(Ek, dl, -y).
In the case of the scaling process (R = 4) shown in FIGS. 27(a) and 27(b), the read control unit 142, based on the thinning rate M = 4, reads, at 4-line intervals, the image data of a width of bh lines (every fourth line within 4 × bh lines) in the sub-scanning direction centered on the reference line Ym4 as the reference data rMO(Ok, dr, Ym4) (the lines indicated by cross-hatching in FIG. 27(a)), and reads, in the overlap region dl of the image data DI(Ek), for each of the lines at 4-line intervals within the search range "-4y to +4y" centered on the reference line Ym4, image data rME(Ek, dl, Ym4, ΔYa) each forming a width of bh lines at the same 4-line intervals as the reference data rMO(Ok, dr, Ym4), as the comparison data rME(Ek, dl, Ym4) (the lines indicated by diagonal hatching in FIG. 27(a)). The positional deviation amount ΔYa is the positional deviation at readout between the reference data rMO(Ok, dr, Ym4) and the comparison data rME(Ek, dl, Ym4), and takes values at 4-line intervals within the search range "-4y to +4y". In FIG. 27(a), the data whose center position coincides with the reference line Ym4 are denoted image data rME(Ek, dl, Ym4, 0); the data at the position shifted by four line intervals are denoted rME(Ek, dl, Ym4, 4); shifting further by four line intervals at a time, the data shifted by 4y line intervals are denoted image data rME(Ek, dl, Ym4, 4y), and the data shifted by 4y line intervals in the opposite direction are denoted image data rME(Ek, dl, Ym4, -4y). The read control unit 142 reads the image data at 4-line intervals from rME(Ek, dl, Ym4, -4y) through rME(Ek, dl, Ym4, 4y) as the comparison data rME(Ek, dl, Ym4).
The read control unit 142 then arranges and outputs the reference data rMO(Ok, dr, Ym4) and the comparison data rME(Ek, dl, Ym4) as the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) centered on the detection reference line Y0. The read control unit 142 reads, from the reference data rMO(Ok, dr, Ym4), the image data of a width of bh lines in the sub-scanning direction taken at every fourth line within 4 × bh lines, so the read image data have the same number of lines as the reference data MO(Ok, dr) output from the read control unit 142 in FIG. 26(b). Also, the read comparison data rME(Ek, dl, Ym4) are read with the positional deviation amount ΔYa taken from within the search range "-4y to +4y", but read at every fourth line according to the thinning rate M = 4; they are therefore image data at every fourth line centered on the line of the positional deviation amount ΔYa within the search range "-4y to +4y". Consequently, in the comparison data ME(Ek, dl) output from the read control unit 142, the image data ME(Ek, dl, ΔY) for a positional deviation amount ΔY within the search range "-y to +y" can be replaced by the image data rME(Ek, dl, Ym4, ΔYa) in the read comparison data rME(Ek, dl, Ym4), and they are output as the reference data MO and the comparison data ME of the lines within the search range "-y to +y". Specifically, in FIG. 27(b), the read control unit 142 substitutes and outputs the image data rME(Ek, dl, Ym4, 0), whose center position coincides with the detection reference line Y0, as ME(Ek, dl, 0); the data rME(Ek, dl, Ym4, 4) at the position shifted by four line intervals as ME(Ek, dl, 1), shifted by one line interval; the data rME(Ek, dl, Ym4, 4y) shifted by 4y line intervals as ME(Ek, dl, +y); and the data rME(Ek, dl, Ym4, -4y) shifted by 4y line intervals in the opposite direction as ME(Ek, dl, -y).
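The substitution from readout deviations ΔYa (in M-line steps) to output deviations ΔY (in 1-line steps) amounts to dividing each index by M. The following is a minimal sketch of that relabeling; the dict-based bookkeeping and the function name are assumptions for illustration, not part of the patent:

```python
def relabel_deviations(candidates_by_dYa, M):
    # rME(..., dYa) read at dYa = -M*y .. +M*y (step M) is arranged
    # as ME(..., dY) with dY = dYa / M, a one-line-unit index
    return {dYa // M: window
            for dYa, window in candidates_by_dYa.items()}
```

With M = 1 the mapping is the identity, matching the equal magnification case in which the read data are arranged as they are.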
As described above, at readout the comparison data rME is read from within the search range "-4y to +4y" of the positional deviation amount ΔYa, but since it is read at every fourth line according to the thinning rate M, the lines for the positional deviation amount ΔY within the comparison data rME can be made the same as the lines of the one-line-unit range within the search range "-y to +y". Furthermore, since the thinning rate M is set to an integer of 1 or more close to the scaling factor R, which is the reading magnification, even for a scaling factor smaller than 1 (for example, R = 0.8) or an intermediate magnification with a fractional part (for example, R = 1.5), the image data can be read out thinned at line intervals corresponding to the thinning rate M; no enlargement or reduction processing by filter operations such as line interpolation is required, and the readout can be performed with small-circuit-scale processing.
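Although the patent gives no source code, the thinned readout described above can be sketched as follows. The array layout (a 2-D NumPy array indexed [line, pixel] per overlap region) and all function names are assumptions for illustration only:

```python
import numpy as np

def read_reference(region, y_center, bh, M):
    # bh lines of reference data rMO around y_center,
    # taking one line per M lines (thinning rate M)
    offsets = (np.arange(bh) - bh // 2) * M
    return region[y_center + offsets, :]

def read_comparison(region, y_center, bh, M, y):
    # one same-shaped candidate window per thinned deviation
    # dYa = -M*y .. +M*y in steps of M; the candidate count
    # (2*y + 1) is independent of the thinning rate M
    return [read_reference(region, y_center + dYa, bh, M)
            for dYa in range(-M * y, M * y + 1, M)]
```

For M = 1 this reduces to the equal magnification readout at 1-line intervals; for M = 4 it spans the search range "-4y to +4y" with the same number of candidate windows as the "-y to +y" case, which is the property the text relies on.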
In the description of FIGS. 26(a) and 26(b) and FIGS. 27(a) and 27(b), when reading the image data of the overlap regions from the image memory 141, the read control unit 142 reads the center line and its surrounding image data with a width of bh lines in the sub-scanning direction at M-line intervals according to the thinning rate M (every M-th line within M × bh lines); however, the width bh may be a single line (that is, the center line only). By taking the width bh in the sub-scanning direction, the reference data and the comparison data carry information on the surrounding image data, and the processing in the similarity calculation unit 143 described later can calculate the similarity more accurately.
In FIGS. 22 through 27(a) and (b), the case has been described in which the read control unit 142 thins out and reads the image data at M-line intervals based on the thinning rate M. However, the read control unit 142 may instead read the image data in units of M lines according to the thinning rate M and use data averaged in M-line units as the reference data rMO and the comparison data rME. In this case, since the averaging is performed in M-line units, the positional deviation amount ΔYa at readout lies within the search range "-(M × y) to +(M × y)", but the positional deviation amount ΔY at output from the read control unit 142 can still be expressed as lines of the one-line-unit range within the search range "-y to +y". For example, as a method of averaging in M-line units, there is a method of performing sequential averaging one line at a time, resetting the values every M lines, while computing the average of adjacent lines and the average of the next line and the previous line. This method allows the averaging to be performed efficiently. Moreover, averaging in M-line units makes it possible to use the image data between the M lines, so that the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) carry more accurate information on the surrounding image data, and the processing in the similarity calculation unit 143 can calculate the similarity more accurately.
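The M-line averaging variant can be sketched as a simple block mean over each group of M lines; the patent's one-line-at-a-time sequential formulation is an efficiency refinement of the same idea, and the array layout here is an assumption:

```python
import numpy as np

def block_average(region, M):
    # average each group of M consecutive lines into a single line,
    # resetting at every M-line boundary; this uses all the data
    # between the M lines instead of discarding the thinned-out lines
    n_lines, n_pixels = region.shape
    n = (n_lines // M) * M  # drop any incomplete trailing group
    return region[:n].reshape(-1, M, n_pixels).mean(axis=1)
```

For M = 1 the data pass through unchanged, so the same path serves the equal magnification case.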
《4-2-2》 Operation of the Similarity Calculation Unit 143
The similarity calculation unit 143 in the image processing unit 104 receives, from the read control unit 142, the reference data MO and the comparison data ME composed of the lines within the search range "-y to +y" of the positional deviation amount ΔY. The similarity calculation unit 143 performs the process of comparing the reference data MO with the comparison data ME for a plurality of positions within the search range "-y to +y" of the positional deviation amount ΔY, and calculates and generates the correlation data D142, which represents the similarity.
FIGS. 28(a) and 28(b) are diagrams for explaining the operation of the similarity calculation unit 143. FIGS. 28(a) and 28(b) show the case in which the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) centered on the detection reference line Y0 are input to the similarity calculation unit 143. As shown in FIG. 28(a), for the input reference data MO(Ok, dr), the similarity calculation unit 143 first extracts, from the comparison data ME(Ek, dl) within the search range "-y to +y" of the positional deviation amount ΔY centered on the detection reference line Y0, a plurality of image data ME(Ek, dl, ΔY) of the same size as the reference data MO(Ok, dr) (the same width of bh lines in the sub-scanning direction). The positional deviation amount ΔY is the deviation in the sub-scanning direction of the comparison data ME(Ek, dl) from the reference data MO(Ok, dr), and takes values within the search range "-y to +y". The data at the same position as the detection reference line Y0, which is the center position of the reference data MO(Ok, dr), are denoted image data ME(Ek, dl, 0); the data shifted by one line interval in the positive sub-scanning direction are denoted ME(Ek, dl, 1); shifting further by one line interval at a time in the positive sub-scanning direction, the data shifted by y line intervals are denoted image data ME(Ek, dl, +y). Likewise, the data shifted by y line intervals in the negative sub-scanning direction are denoted image data ME(Ek, dl, -y).
Next, the similarity calculation unit 143 calculates the similarity between the reference data MO(Ok, dr) and the image data ME(Ek, dl, -y) through ME(Ek, dl, +y) in the comparison data ME(Ek, dl), and takes it as the correlation data D142(Ok, Ek). Since the reference data MO(Ok, dr) and each of the image data ME(Ek, dl, -y) through ME(Ek, dl, +y) have the same size, the similarity calculation unit 143 calculates as the similarity, for example, the sum of the absolute values of the pixel-by-pixel differences, or the sum of the squares of the pixel-by-pixel differences, between the reference data MO(Ok, dr) and the image data ME(Ek, dl, -y) through ME(Ek, dl, +y), and outputs it as the correlation data D142(Ok, Ek, ΔY). Similarly, the similarity calculation unit 143 sequentially calculates the similarity for the next reference data MO(Ok+1, dl) and comparison data ME(Ek, dr), then the reference data MO(Ok+1, dr) and comparison data ME(Ek+1, dl), and so on, generating the correlation data D142(Ok+1, Ek, ΔY), D142(Ok+1, Ek+1, ΔY), and so on.
The range over which the positional deviation amount ΔY between the reference data MO and the comparison data ME is searched is the search range "-y to +y", and the number of lines of the image data is not changed by the scaling factor R; therefore, the similarity calculation unit 143 need not change the number of correlation data D142 it generates when calculating the similarity for detecting the positional deviation amount, and its circuit scale does not change with the scaling factor.
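The pixel-wise comparison named above (a sum of absolute differences, or alternatively a sum of squared differences) can be sketched as follows; the shapes and names are illustrative assumptions:

```python
import numpy as np

def correlation_data(MO, ME_candidates):
    # dissimilarity for each candidate deviation dY: the sum of
    # absolute pixel differences (SAD); the patent equally allows
    # the sum of squared pixel differences (SSD)
    return {dY: float(np.abs(MO - ME).sum())
            for dY, ME in ME_candidates.items()}
```

Because the candidate count is fixed at 2y + 1 lines regardless of the scaling factor, this loop body never changes size, which mirrors the fixed-circuit-scale argument in the text.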
In FIG. 28(a), the similarity calculation unit 143 takes the image data ME(Ek, dl, ΔY) from the comparison data ME(Ek, dl) within the search range "-y to +y", shifting by one line interval at a time as the positional deviation amount ΔY, and calculates the similarity (correlation data D142); however, the similarity calculation unit 143 can also obtain positions in the sub-scanning direction between lines (referred to as sub-lines) by interpolation from the image data of the preceding and following lines, calculate the similarity to the reference data MO in the same manner, and generate the correlation data D142.
FIG. 28(b) shows an example of the values of the correlation data D142(Ok, Ek, ΔY) against the positional deviation amount ΔY (within the search range "-y to +y") when the similarity calculation unit 143 calculates the similarity between the reference data MO(Ok, dr) centered on the detection reference line Y0 and the image data ME(Ek, dl, -y) through ME(Ek, dl, +y) in the comparison data ME(Ek, dl). In FIG. 28(b), the curve drawn with a broken line is the correlation data D142(Ok, Ek, ΔY) corresponding to FIGS. 22 and 24, and the curve drawn with a solid line is the correlation data D142(Ok, Ek, ΔY) corresponding to FIGS. 23 and 25. In FIGS. 22 and 24, the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En read the same position in the sub-scanning direction (Y direction), so their data are not displaced in the sub-scanning direction. Therefore, as in the broken-line curve in FIG. 28(b), the similarity is high (that is, the dissimilarity is low) at the position ΔY = 0. In FIGS. 23 and 25, the images read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are separated upward from the glass surface 126 relative to the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On, so the similarity is highest (that is, the dissimilarity is lowest) at a negative position (ΔY = -α). Even when the scaling process is performed, the reference data MO(Ok, dr) and the comparison data ME(Ek, dl) input from the read control unit 142 are read from the image memory 141 at every M-th line according to the thinning rate M, with the range of the positional deviation amount ΔY expressed as lines within the search range "-y to +y"; therefore, whether in the equal magnification process or the scaling process, as shown in FIG. 28(b), the correlation data D142(Ok, Ek, ΔY) are calculated for the same number of lines of the positional deviation amount ΔY within the search range "-y to +y", and neither the number of correlation data D142 generated nor the processing is changed according to the scaling factor.
《4-2-3》 Operation of the Shift Amount Estimation Unit 144
The shift amount estimation unit 144 receives from the similarity calculation unit 143 the correlation data D142 for the positional deviation amounts ΔY of the plurality of lines within the search range "-y to +y". The shift amount estimation unit 144 outputs to the shift amount enlargement unit 145, as the shift amount data dsh, the positional deviation amount ΔY corresponding to the data with the highest similarity among the correlation data D142 of the plurality of lines.
That is, in FIG. 28(b), when the similarity is high (that is, the dissimilarity is low) at the position ΔY = 0, as in the curve drawn with a broken line, ΔY = 0 is taken as the shift amount data dsh; and when the similarity is highest (that is, the dissimilarity is lowest) at the position ΔY = -α, as in the curve drawn with a solid line, ΔY = -α is taken as the shift amount data dsh.
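With the dissimilarity convention used here (a lower value means more similar data), selecting the shift amount data dsh is an argmin over the correlation data. A minimal sketch, with the dict representation being an assumption:

```python
def estimate_shift(correlation):
    # correlation maps each deviation dY to its dissimilarity;
    # the dY with the smallest value (highest similarity)
    # becomes the shift amount data d_sh
    return min(correlation, key=correlation.get)
```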
《4-2-4》 Operation of the Shift Amount Enlargement Unit 145
The shift amount enlargement unit 145 in the image processing unit 104 receives from the controller 107 the thinning rate M set based on the scaling factor, and receives the shift amount data dsh from the shift amount estimation unit 144. Based on the thinning rate M, the shift amount enlargement unit 145 enlarges the shift amount data dsh obtained from the shift amount estimation unit 144 at the position Ym (or Ym4) in the sub-scanning direction (Y direction) by multiplying its value by M to obtain (M × dsh), and interpolates (enlarges) in the sub-scanning direction by repeatedly inserting the M-fold shift amount data (M × dsh) over the line interval indicated by the thinning rate M (for example, from the line Ym to the line Ym+M), thereby converting it into the enlarged shift amount data Δyb and outputting it.
That is, when performing the equal magnification process with scaling factor R = 1, for which the thinning rate M = 1, or a scaling process with a scaling factor smaller than 1, the shift amount data dsh from the shift amount estimation unit 144 are input at M = 1 line intervals (at the normal line interval) and are obtained as positional deviation amounts within the search range "-y to +y" without line thinning, so the shift amount enlargement unit 145 outputs the shift amount data dsh input for each line (for each line Ym) as the enlarged shift amount data Δyb as they are. When performing a scaling process for which the thinning rate M is 2 or more (M ≥ 2), the shift amount data dsh are input at M-line intervals and are values obtained by thinning the search range "-(M × y) to +(M × y)" at M-line intervals (values reduced to 1/M lines); therefore, from the thinning rate M, the shift amount enlargement unit 145 enlarges the value of the shift amount data dsh by multiplying it by M to obtain (M × dsh), interpolates by repeatedly inserting the shift amount data enlarged to (M × dsh) over the M-line interval (for example, from the line Ym to Ym+M) so as to enlarge in the sub-scanning direction, and converts and outputs them as the enlarged shift amount data Δyb. The enlarged and interpolated enlarged shift amount data Δyb become the positional deviation amount in the sub-scanning direction on each line for combining the image data, and the enlarged shift amount data Δyb are sent to the combination processing unit 146.
Since the shift amount enlargement unit 145 enlarges the value of the shift amount data dsh by multiplying it by M to obtain (M × dsh), and interpolates in the sub-scanning direction by repeatedly inserting the value (M × dsh) throughout each M-line interval, the enlarged shift amount data Δyb is obtained as a value representing the positional deviation amount over the line range changed by the scaling factor R. With this simple enlargement process, shift amount data corresponding to the sub-scanning-direction value appropriate to the scaling factor can be obtained accurately, without having to enlarge the search range of the positional deviation amount and thus without increasing the circuit scale, for example by adding line memories. In the above description of the shift amount enlargement unit 145, the value of the shift amount data dsh is first multiplied by M to obtain (M × dsh) and then repeatedly inserted throughout each M-line interval to interpolate and enlarge it in the sub-scanning direction; however, the shift amount data dsh may instead first be repeatedly inserted throughout each M-line interval to interpolate and enlarge it in the sub-scanning direction, and then multiplied by M to obtain (M × dsh), converting it into the enlarged shift amount data Δyb. This yields the same effect as the shift amount enlargement unit 145 described above.
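The paragraph above notes that the multiplication by M and the repeated insertion can be performed in either order with the same result. A minimal sketch confirming that the two orders produce identical Δyb (the function names are illustrative, not from the patent):

```python
def scale_then_repeat(d_sh, M):
    # multiply each sample by M first, then repeat it over the M-line interval
    out = []
    for d in d_sh:
        out.extend([d * M] * M)
    return out

def repeat_then_scale(d_sh, M):
    # repeat each sample over the M-line interval first, then multiply by M
    repeated = []
    for d in d_sh:
        repeated.extend([d] * M)
    return [v * M for v in repeated]
```

Both paths commute because scaling and repetition act on each sample independently.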
FIG. 29 and FIG. 30 are diagrams for explaining the operation in which, based on the thinning rate M, the shift amount enlargement unit 145 enlarges the value of the shift amount data dsh by multiplying it by M to obtain (M × dsh) and repeatedly inserts it throughout the line interval indicated by the thinning rate M, thereby enlarging and interpolating the shift amount data dsh.
FIG. 29 shows the case of unity-magnification processing (R = 1), and FIG. 30 shows the case of scaling processing (for example, R = 4, where one line at unity magnification corresponds to an interval of four lines). FIG. 29 plots, for unity-magnification processing, the value (vertical axis) of the enlarged shift amount data Δyb converted from the shift amount data dsh against each sub-scanning-direction position Ym (horizontal axis); here, the shift amount data dsh of each line is converted into and output as the enlarged shift amount data Δyb without modification. FIG. 30 plots, for scaling processing at R = 4, the value (vertical axis) of the enlarged shift amount data Δyb converted from the shift amount data dsh against each sub-scanning-direction position Ym4 (horizontal axis; lines Ym4-4, Ym4, Ym4+4, ...). The shift amount data dsh is input at every four-line interval as a positional deviation amount obtained by thinning at four-line intervals (a value reduced to 1/4 lines); therefore, from the thinning rate M = 4, the value of the shift amount data dsh is multiplied by 4 and enlarged to (4 × dsh), and the enlarged value (4 × dsh) is repeatedly inserted throughout each four-line interval (for example, between line Ym4 and line Ym4+4) to interpolate in the sub-scanning direction, converting it into the enlarged shift amount data Δyb for output. The black circles in FIG. 30 indicate the enlarged shift amount data (4 × dsh) at the lines for which the shift amount data dsh, input at every four-line interval, is obtained.
Accordingly, although the shift amount data dsh is a value obtained at every four-line interval by thinning the positional deviation amount at four-line intervals (a value reduced to 1/4 lines), the shift amount enlargement unit 145 enlarges the value of the shift amount data dsh to (4 × dsh) based on the thinning rate M = 4 and interpolates by repeatedly inserting the value (4 × dsh) in the sub-scanning direction, converting it into the enlarged shift amount data Δyb; the enlarged shift amount data Δyb can therefore be obtained as a value representing the positional deviation amount over the four-fold line range resulting from scaling at the scaling factor R. Furthermore, since the thinning rate M is set to an integer of 1 or more close to the scaling factor R (the reading magnification), even for a scaling factor smaller than 1 (for example, R = 0.8) or an intermediate magnification with a fractional part (for example, R = 1.5), the enlargement of the shift amount data dsh and the interpolation by repeated insertion in the sub-scanning direction can be performed by simple integer multiplication according to the thinning rate M; there is no need for interpolation by filter operations such as line interpolation using several lines, and the conversion into the enlarged shift amount data Δyb can be performed with small-scale circuitry.
Although the shift amount enlargement unit 145 in FIG. 30 has been described as interpolating in the sub-scanning direction by repeatedly inserting the shift amount data at each sub-scanning-direction position based on the thinning rate M, as shown in FIG. 31, the shift amount enlargement unit 145 may instead interpolate (enlarge) the shift amount data (M × dsh) in the sub-scanning direction over each line interval indicated by the thinning rate M (for example, between lines Ym and Ym+M) by an interpolation process, such as linear interpolation, that varies smoothly from line to line, and convert it into the enlarged shift amount data Δyb. In this case, since the interpolation is performed in units of M lines, the enlarged shift amount data Δyb changes smoothly and the positional deviation amount can be obtained more accurately, without abrupt changes in the shift amount. Since the thinning rate M is set to an integer of 1 or more close to the scaling factor R (the reading magnification), the conversion into the enlarged shift amount data Δyb can be performed with small-scale circuitry such as linear interpolation or averaging over the line interval indicated by the thinning rate M.
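The FIG. 31 variant replaces step insertion with a line-by-line ramp between consecutive enlarged samples. A sketch of such linear interpolation follows; the exact interpolation formula is an assumption, since the patent also permits other smoothing such as averaging.

```python
def enlarge_shift_data_linear(d_sh, M):
    """Scale each thinned sample by M, then fill each M-line interval with
    linearly interpolated values so Delta-y_b changes smoothly per line."""
    points = [d * M for d in d_sh]            # enlarged samples (M x d_sh)
    out = []
    for a, b in zip(points, points[1:]):
        for i in range(M):
            out.append(a + (b - a) * i / M)   # ramp from a toward b in M steps
    out.append(points[-1])                    # keep the final sample
    return out
```

Compared with repeated insertion, the output ramps through intermediate values instead of jumping at each M-line boundary.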
<< 4-2-5 >> Operation of the Combination Processing Unit 146
The combination processing unit 146 in the image processing unit 104 receives the enlarged shift amount data Δyb output from the shift amount enlargement unit 145. The combination processing unit 146 calculates the read position RP of the image data based on the enlarged shift amount data Δyb, reads the image data M146 corresponding to that read position, shifts the image data in the sub-scanning direction based on the enlarged shift amount data Δyb, and combines the data in the overlap regions of the image data to generate composite image data.
FIG. 32 and FIGS. 33(a) and 33(b) are diagrams for explaining the operation of the combination processing unit 146. As shown in FIG. 32, only the data corresponding to the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En may be shifted in the sub-scanning direction by an amount corresponding to the enlarged shift amount data Δyb; alternatively, as shown in FIGS. 33(a) and 33(b), the positions in the sub-scanning direction may be shifted so that the combined shift amounts of the data corresponding to the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the data corresponding to the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En together equal the enlarged shift amount data Δyb. In FIG. 33(a), the image data at the sub-scanning-direction (Y-direction) position Ym read by the read control unit 142 as the reference line is shifted at the adjacent line sensor by an amount corresponding to the enlarged shift amount data Δyb; in FIG. 33(b), the data at the position separated by an amount corresponding to the enlarged shift amount data Δyb at the adjacent line sensor is shifted to the position of line Ym.
That is, the combination processing unit 146 calculates the read position RP of the image data based on the enlarged shift amount data Δyb, reads the image data M146 corresponding to that read position from the image memory 141, and combines the overlap regions in which the data of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the data of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En overlap, generating the image data D146. Since the enlarged shift amount data Δyb is obtained as a value representing the positional deviation amount over the line range changed by the scaling factor R, calculating the read position RP of the image data from the enlarged shift amount data Δyb yields, even when scaling has been performed, the read position corresponding to each line according to the scaling factor. In this case, no thinning is performed: at each line in the sub-scanning direction corresponding to the scaling factor, the image data D146 is generated using the image data at the position shifted by an amount corresponding to the enlarged shift amount data Δyb and at the lines around that position, so the combined image does not deteriorate.
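The read-position computation and overlap combination described above can be roughly sketched as follows. The row-list image layout, the rounding of RP to the nearest line, and the simple averaging of overlap pixels are all assumptions for illustration; the patent does not fix these details.

```python
def combine_overlap(odd_img, even_img, dy_b, overlap_px):
    """For each output line y, compute the read position RP = y + dy_b[y]
    in the even-sensor data, then join the two lines by averaging the
    overlapping pixels and concatenating the rest."""
    combined = []
    for y, odd_line in enumerate(odd_img):
        rp = int(round(y + dy_b[y]))               # read position RP
        rp = max(0, min(len(even_img) - 1, rp))    # clamp to valid lines
        even_line = even_img[rp]
        blended = [(a + b) // 2 for a, b in
                   zip(odd_line[-overlap_px:], even_line[:overlap_px])]
        combined.append(odd_line[:-overlap_px] + blended + even_line[overlap_px:])
    return combined
```

With dy_b all zero the lines are joined without displacement; a nonzero Δyb selects a displaced even-sensor line before the overlap is merged.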
In the image processing unit 104, the read control unit 142 takes a certain sub-scanning-direction (Y-direction) position Ym as the reference line and, at every M lines according to the thinning rate M, reads the image data in the overlap regions corresponding to the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the image data in the overlap regions corresponding to the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, outputting them as the reference data MO and the comparison data ME, each with a predetermined number of lines. The similarity calculation unit 143 compares the reference data MO with the comparison data ME to calculate the similarity data (correlation data) D143. The shift amount estimation unit 144 calculates, as the shift amount data dsh, the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity (greatest correlation) with respect to the detection reference line Y0. The shift amount enlargement unit 145 enlarges the value of the shift amount data dsh based on the thinning rate M and interpolates it by repeated insertion in the sub-scanning direction, converting it into the enlarged shift amount data Δyb. The combination processing unit 146 then reads the image data based on the enlarged shift amount data Δyb and outputs the combined image data D146. This processing is performed sequentially in the sub-scanning direction (Y direction).
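The shift-estimation step of this pipeline can be sketched as follows. The patent only requires selecting the comparison data with the highest similarity; the sum-of-absolute-differences metric and the dictionary of candidate offsets below are assumptions for illustration.

```python
def estimate_shift(reference_mo, comparisons_me):
    """comparisons_me maps each candidate sub-scan offset (in thinned-line
    units) to its comparison data ME; return the offset whose data is most
    similar to the reference data MO (here: minimum sum of absolute
    differences, i.e. maximum similarity)."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(comparisons_me, key=lambda off: sad(reference_mo, comparisons_me[off]))
```

The returned offset corresponds to the shift amount data dsh, which the enlargement unit then multiplies by M.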
FIG. 34 shows an image obtained by combining the images of FIG. 20(c) or FIG. 21(c) and FIGS. 22 to 25. Even in the case of scaling processing (FIGS. 24 and 25), the positional deviation amount in the sub-scanning direction (Y direction) is corrected in the same way as in unity-magnification processing (R = 1) (FIGS. 22 and 23), and image data of the same image as the document 160 shown in FIGS. 20(b) and 21(b) can be output. In particular, in the case of FIG. 21(c) or FIGS. 23 and 25, although the image data of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the image data of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En were displaced in the sub-scanning direction (Y direction), image data of the same image as the document 160 shown in FIG. 21(b) can be output.
FIG. 34 is also an explanatory diagram conceptually showing the image data D146 after the combination processing output from the combination processing unit 146. In both unity-magnification processing (R = 1) and scaling processing, the read position corresponding to the scaling factor is obtained from the enlarged shift amount data Δyb, and the image data D146 is generated using the image data at the position shifted by an amount corresponding to the enlarged shift amount data Δyb and at the lines around that position; as a result, the sub-scanning-direction displacement between the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En is corrected, and the images of the overlap regions that were read redundantly are combined.
<< 4-2-6 >> Another Example of the Document 160
FIGS. 35(a) and 35(b) are diagrams showing an example in which the position of the document 160 relative to the glass surface 126 changes while the imaging unit 102 is being conveyed. At the sub-scanning-direction (Y-direction) position Ym, the document 160 is lifted off the glass surface 126, while at the position Yu, the document 160 is in close contact with the glass surface 126. Since the imaging unit 102 sequentially processes the image data at each position in the sub-scanning direction (Y direction), even if the position of the document 160 changes during conveyance, the image processing unit 104 calculates the positional deviation amount at each position and can therefore combine the images correctly. Also, in the case of scaling processing, the image data of the overlap regions are extracted with lines thinned out based on the thinning rate M set according to the scaling factor, the positional deviation amount is calculated, and the read position at that location corresponding to the scaling factor is obtained by enlarging the calculated positional deviation amount; the images can therefore be combined correctly, just as when reading at unity magnification. Furthermore, as explained with reference to FIGS. 22 to 25, since the deviation amounts are calculated individually between all of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, the images can be combined correctly even if the position of the document 160 relative to the glass surface 126 changes in the main scanning direction (X direction).
<< 4-3 >> Effects of the Fourth Embodiment
As described above, in the image reading apparatus 101, the image processing apparatus 4, and the image processing method according to the fourth embodiment, the read control unit 142 thins out the sub-scanning-direction positions (lines) to be read at every M lines according to the thinning rate M set according to the scaling factor, reads the data in the overlap regions from the image data of the images read by the line sensors in the first row and the line sensors in the second row, and, taking a certain sub-scanning-direction (Y-direction) position Ym as the reference line, outputs a predetermined number of lines as the reference data MO and the comparison data ME; the similarity calculation unit 143 compares the reference data MO with the comparison data ME to calculate the similarity data (correlation data) D143; the shift amount estimation unit 144 calculates, as the shift amount data dsh, the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity; the shift amount enlargement unit 145 enlarges the value of the shift amount data dsh based on the thinning rate M and interpolates it by repeated insertion in the sub-scanning direction, converting it into the enlarged shift amount data Δyb; and the positions of the divided images in the sub-scanning direction are shifted based on this enlarged shift amount data Δyb to generate the combined image data.
Accordingly, data in the overlap regions is read at every M lines according to the thinning rate M set according to the scaling factor, and the reference data MO, which is unaffected by the scaling factor, and the comparison data ME for the number of lines within the search range "-y to +y" can be obtained. The shift amount data dsh, which is the positional deviation amount, can thus be calculated without enlarging the search range of the positional deviation amount and without changing the number of similarity data generated or the processing according to the scaling factor; and by enlarging and interpolating the shift amount data dsh, a value representing the positional deviation amount over the line range changed according to the scaling factor is obtained, from which the combined image data can be generated. Therefore, even in scaling processing, the positional deviation amount in the sub-scanning direction between image data can be obtained accurately without widening the detection range of the positional deviation amount according to the scaling factor and without increasing the circuit scale, and high-quality composite image data corresponding to the object being read can be generated.
Furthermore, since the thinning rate M set according to the scaling factor is an integer of 1 or more close to the scaling factor R (the reading magnification), the thinning processing in the read control unit 142 and the enlargement processing in the shift amount enlargement unit 145 can be performed with small-scale circuitry, without requiring enlargement or reduction processing by filter operations such as line interpolation.
<< 5 >> Fifth Embodiment
<< 5-1 >> Configuration of the Fifth Embodiment
In the image reading apparatus 101, the image processing apparatus 4, and the image processing method according to the fourth embodiment, during scaling processing, the thinning rate M set based on the scaling factor is sent to the read control unit 142 and the shift amount enlargement unit 145 in the image processing unit 104; the image data in the overlap regions is read from the image memory 141 based on the thinning rate M; the shift amount data is converted into the enlarged shift amount data by enlargement and interpolation based on the thinning rate M; and the image data of the combined image is generated using the resulting enlarged shift amount data. In contrast, as shown in FIG. 36, the image reading apparatus 101a according to the fifth embodiment can be configured to acquire, in addition to the thinning rate M set based on the scaling factor, a search range exclusion line count LMT, obtained from the scaling factor and the thinning rate M, which limits the search range of the positional deviation amount; to read the image data in the overlap regions from the image memory 141 based on the thinning rate M; and, in addition to converting the shift amount data into the enlarged shift amount data by enlargement and interpolation, to limit (reduce) the search range of the positional deviation amount used in shift amount estimation according to the search range exclusion line count LMT when calculating the shift amount data dsh, which is the positional deviation amount.
FIG. 36 is a functional block diagram schematically showing the configuration of the image reading apparatus 101a according to the fifth embodiment of the present invention. In FIG. 36, components identical or corresponding to those shown in FIG. 17 (fourth embodiment) are given the same reference numerals. The image reading apparatus 101a according to the fifth embodiment differs from the image reading apparatus 101 according to the fourth embodiment in the configuration and operation of the image processing unit 104a and the controller 107a. Except for these points, the image reading apparatus 101a according to the fifth embodiment is the same as the image reading apparatus according to the fourth embodiment. As shown in FIG. 36, the controller 107a in the fifth embodiment sends the thinning rate M, set based on the scaling factor specified by the user or the like, and the search range exclusion line count LMT to the image processing unit 104a. The image processing unit 104a performs scaling processing and detection of the positional deviation amount in the sub-scanning direction using the received thinning rate M and search range exclusion line count LMT, and combines the image data using the detected positional deviation amount.
As shown in FIG. 36, the image reading apparatus 101a according to the fifth embodiment includes the imaging unit 102, the A/D conversion unit 103, the image processing unit 104a, and the controller 107a, which controls the operation of the imaging unit 102 and the image processing unit 104a. The image processing unit 104a is the image processing apparatus according to the fifth embodiment (an apparatus capable of carrying out the image processing method according to the fifth embodiment) and includes the image memory 141, the read control unit 142, the similarity calculation unit 143, a shift amount estimation unit 144a, the shift amount enlargement unit 145, and the combination processing unit 146. In the fifth embodiment, the shift amount estimation unit 144a calculates the shift amount data from the positional deviation amount corresponding to the sub-scanning-direction position of the comparison data with the highest similarity (greatest correlation), based on the search range exclusion line count LMT. The configurations and operations of the other components, such as the imaging unit 102, the A/D conversion unit 103, the image memory 141, the read control unit 142, the similarity calculation unit 143, the shift amount enlargement unit 145, and the combination processing unit 146, are the same as those described in the fourth embodiment.
<< 5-2 >> Operation of the Fifth Embodiment
図36における画像読取装置101aのコントローラ107aは、図17におけるコントローラ107と同様に、ユーザなどによる変倍率などの設定情報又は指示情報を撮像部102及び画像処理部104aに送り、撮像部102及び画像処理部104aの制御を行う。すなわち、変倍処理を行う場合、ユーザが変倍率を指定すると、コントローラ107aは、読み取り倍率(変倍率)Rを設定する設定情報を撮像部102に送り、変倍率に基づいて設定される間引き率Mを画像処理部104aに送る。このとき、コントローラ107aは、変倍率に基づいて検索範囲除外ライン数LMTを設定し、検索範囲除外ライン数LMTを画像処理部104aに送る。 << 5-2 >> Operation of Embodiment 5 As with thecontroller 107 in FIG. 17, the controller 107a of the image reading apparatus 101a in FIG. 36 receives setting information or instruction information such as a variable magnification by the user, etc. The image is sent to the processing unit 104a, and the imaging unit 102 and the image processing unit 104a are controlled. In other words, when performing a scaling process, if the user specifies a scaling ratio, the controller 107a sends setting information for setting a reading magnification (magnification ratio) R to the imaging unit 102, and a thinning rate set based on the scaling ratio. M is sent to the image processing unit 104a. At this time, the controller 107a sets the search range excluded line number LMT based on the scaling factor, and sends the search range excluded line number LMT to the image processing unit 104a.
As in the fourth embodiment, the controller 107a sets the thinning rate M to an integer of 1 or more that is close to the scaling factor R, i.e., the reading magnification. In addition, when the scaling factor R is smaller than 1 (for example, R = 0.8) or is an intermediate magnification with a fractional part (for example, R = 1.5), and the range of sub-scanning positions (positional deviation amounts) covered before lines are thinned out at the rate M exceeds the line range determined by the scaling factor, the controller 107a sets the search range exclusion line count LMT, a number of lines that limits the range of positional deviation amounts to be searched. The value LMT may be expressed as a positional deviation amount that bounds the search range during shift amount estimation, as a number of lines whose similarity values are to be invalidated from the outside of the range inward, or as a signal indicating validity or invalidity at each position. For example, when the scaling factor R is 0.8, the thinning rate M is 1, so the shift amount estimation unit 144a would otherwise search sub-scanning positions outside the required range by an amount corresponding to the difference between the thinning rate M and the scaling factor R, namely 0.2 (= M - R = 1 - 0.8). In the fifth embodiment, therefore, the controller 107a sends a search range exclusion line count LMT indicating a value that limits (reduces) the search range by the amount corresponding to this difference of 0.2.
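The text does not give an explicit formula for deriving M and LMT from the scaling factor R, but its worked examples (R = 0.8 with M = 1, and R = 1.5 with M = 2) are consistent with a rule like the following sketch. The rounding choices here are assumptions: `round` picks the nearest integer thinning rate, and `floor` trims the search window conservatively so it never excludes offsets that the scaling factor actually requires.

```python
import math

def thinning_and_exclusion(R, y):
    """Pick a thinning rate M (an integer >= 1 close to the scaling
    factor R) and a search-range exclusion count LMT for a thinned
    search window of +/- y positions. LMT is the number of outermost
    thinned positions on each side that exceed what R requires."""
    M = max(1, round(R))  # integer thinning rate close to R
    if M > R:
        # The thinned window covers +/- M*y original lines but only
        # +/- R*y are needed; floor() keeps the trimmed window wide
        # enough to cover every offset R can produce.
        LMT = math.floor(y * (M - R) / M)
    else:
        LMT = 0
    return M, LMT
```

With R = 1.5 and y = 8 this yields M = 2 and LMT = 2, i.e. the thinned search is trimmed from ±8 to ±6 positions, matching the 0.5y excess worked through later in this section.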
In FIG. 36, the document 160 is read by the imaging unit 102, converted into digital data (image data) DI by the A/D conversion unit 103, input to the image processing unit 104a, and stored in the image memory 141 of the image processing unit 104a. The read control unit 142 in the image processing unit 104a reads out the data in the overlap region while thinning out the sub-scanning positions (lines) to be read, taking every M-th line according to the thinning rate M set based on the scaling factor, and outputs the result as reference data MO and comparison data ME; the similarity calculation unit 143 compares the reference data MO with the comparison data ME to calculate similarity data (correlation data) D143. The configuration and operation up to this point are the same as those of the fourth embodiment shown in FIG. 17.
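As a rough illustration of the read control and similarity calculation just described (not the patent's actual implementation), the sketch below extracts every M-th line of an overlap region and scores a candidate block against the reference block. The similarity metric is left unspecified in the text, so the sum-of-absolute-differences measure here is an assumption; each "line" is simplified to a single number standing in for one line's pixel data.

```python
def read_thinned(lines, start, count, M):
    """Read `count` lines from `lines`, taking every M-th line starting
    at index `start` (the read control unit's thinning by rate M)."""
    return [lines[start + i * M] for i in range(count)]

def dissimilarity(reference, candidate):
    """Sum of absolute differences over two equal-length line blocks;
    a lower value means a higher similarity (stronger correlation)."""
    return sum(abs(a - b) for a, b in zip(reference, candidate))
```

For example, `read_thinned(overlap, 0, 4, 2)` picks lines 0, 2, 4, 6 as reference data; shifting `start` by one thinned step at a time yields the comparison-data candidates whose dissimilarity values form the correlation data.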
The shift amount estimation unit 144a receives from the similarity calculation unit 143 the correlation data D143, i.e., the similarity data for the positional deviation amounts ΔY of the lines within the search range "-y to +y", and receives from the controller 107a the search range exclusion line count LMT set based on the scaling factor. The shift amount estimation unit 144a determines the positional deviation amount ΔY corresponding to the data with the highest similarity among the correlation data D143 for the plurality of lines. In doing so, it limits (reduces), based on the search range exclusion line count LMT, the search range over which the highest-similarity data is sought, and outputs the positional deviation amount ΔY corresponding to the highest-similarity data within the limited (reduced) search range to the shift amount enlargement unit 145 as the shift amount data dsh.
Because the thinning rate M is set to an integer value that contains or is close to the scaling factor R, the positional deviation amounts ΔY within the search range "-y to +y" supplied by the similarity calculation unit 143 correspond to original positional deviation amounts ΔYa read out at M-line intervals over the range "-(M × y) to +(M × y)". When M is larger than the scaling factor R, the search range therefore has an excess (that is, it is too wide), and superfluous positional deviation amounts are examined. For example, with a scaling factor R = 1.5 and a thinning rate M = 2, the comparison data for the positional deviation amount ΔYa is read out at two-line intervals over the search range "-2y to +2y". Compared with the search range "-1.5y to +1.5y" actually required by the scaling factor, the region outside "-1.5y to +1.5y", corresponding to 0.5y lines in the sub-scanning direction (0.5y = 2y - 1.5y), involves the detection of superfluous positional deviation amounts. This region of 0.5y lines outside "-1.5y to +1.5y" is an excessive search range and can cause erroneous detection of the shift amount data (positional deviation amount). In this case, therefore, the controller 107a sets a value indicating the region corresponding to the 0.5y lines outside the search range "-1.5y to +1.5y" as the search range exclusion line count LMT and supplies it to the shift amount estimation unit 144a.
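The excess worked out above (2y covered versus 1.5y required, leaving 0.5y) is simple arithmetic that can be restated as follows; this is only a check of the example's numbers, not code from the patent.

```python
def search_span(R, M, y):
    """Return the un-thinned span covered by a thinned +/- y search
    (M * y), the span the scaling factor actually requires (R * y),
    and the excess that the exclusion count LMT must suppress."""
    covered, needed = M * y, R * y
    return covered, needed, covered - needed
```

For R = 1.5, M = 2, and a unit half-window y = 1, this returns (2, 1.5, 0.5): the 0.5y excess on each side of the search range.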
The shift amount estimation unit 144a then invalidates the correlation data D143 at the positional deviation amounts indicated by the search range exclusion line count LMT so that they are not included in the search and, among the lines limited to the range "-(y - LMT) to +(y - LMT)", takes the positional deviation amount ΔY corresponding to the data with the highest similarity as the shift amount data dsh. In FIG. 37, with the search limited to the range "-(y - LMT) to +(y - LMT)" within "-y to +y", when the similarity is highest (that is, the dissimilarity is lowest) at the position ΔY = 0, as in the curve drawn with a broken line, ΔY = 0 becomes the shift amount data dsh; when the similarity is highest (the dissimilarity lowest) at the position ΔY = -α, as in the curve drawn with a solid line, ΔY = -α becomes the shift amount data dsh.
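The restricted search performed by the shift amount estimation unit 144a can be pictured as the sketch below. It assumes the correlation data is stored as one dissimilarity value per offset (lower = more similar, as in FIG. 37), in a list whose index 0 corresponds to the offset -y.

```python
def estimate_shift(corr, y, LMT):
    """Return the signed offset dY in [-(y - LMT), +(y - LMT)] whose
    dissimilarity is lowest. `corr` holds one value per offset from
    -y to +y (so len(corr) == 2 * y + 1); the outermost LMT entries
    on each side are excluded from the search."""
    lo, hi = LMT, 2 * y - LMT            # index bounds of the limited range
    best = min(range(lo, hi + 1), key=lambda i: corr[i])
    return best - y                      # convert index back to signed dY
```

With LMT = 0 this degenerates to the full-range search of the fourth embodiment; with LMT > 0 a spuriously low dissimilarity at an excluded outer offset can no longer cause a false detection.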
As described above, the shift amount estimation unit 144a limits (reduces) the range over which the highest-similarity data is sought to "-(y - LMT) to +(y - LMT)" based on the search range exclusion line count LMT, and obtains the positional deviation amount ΔY corresponding to the highest-similarity data within that limited range as the shift amount data dsh. Consequently, even when the scaling factor is smaller than 1 (for example, R = 0.8) or is an intermediate magnification with a fractional part (for example, R = 1.5), so that the search range determined by the thinning rate M is excessive and covers superfluous positional deviation amounts, the positional deviation amount ΔY corresponding to the highest-similarity data can still be obtained as the shift amount data dsh within an appropriate range. The simple operation of limiting the range thus allows the positional deviation amount to be searched for accurately without any increase in circuit scale.
Thereafter, in the image processing unit 104a, the shift amount enlargement unit 145 enlarges and interpolates the shift amount data dsh to convert it into enlarged shift amount data Δyb, and the combination processing unit 146 shifts the sub-scanning positions of the divided images based on the enlarged shift amount data Δyb to generate the combined image data; this configuration and operation are the same as those of the fourth embodiment (FIG. 17).
<< 5-3 >> Effects of Embodiment 5
As described above, in the image reading apparatus 101a, the image processing apparatus 104a, and the image processing method according to the fifth embodiment, the data in the overlap region is read out every M lines according to the thinning rate M set based on the scaling factor, so that the reference data MO, which is not changed by the scaling factor, and comparison data ME for the number of lines within the search range "-y to +y" can be obtained. The search range of the positional deviation amount is thereby effectively expanded, and the shift amount data dsh can be calculated without changing, based on the scaling factor, the number of similarity data items to be generated or the processing that generates them. Furthermore, even when the search range is excessive, the search range over which the highest-similarity data is sought is limited (reduced) based on the search range exclusion line count LMT, so that the shift amount data dsh, i.e., the positional deviation amount, is calculated over an appropriate range. By enlarging and interpolating the shift amount data dsh, a value indicating the positional deviation amount corresponding to the line range changed based on the scaling factor is obtained, and the combined image data can be generated.
Therefore, according to the image reading apparatus 101a, the image processing apparatus 104a, and the image processing method of the fifth embodiment, even during scaling, the positional deviation amount in the sub-scanning direction between image data can be obtained accurately without expanding the detection range of the positional deviation amount based on the scaling factor and without increasing the circuit scale, and high-quality composite image data corresponding to the object being read can be generated.
<< 6 >> Embodiment 6
<< 6-1 >> Configuration of Embodiment 6
The functions of the image reading apparatuses 101 and 101a according to the fourth and fifth embodiments can be realized in hardware. However, some of the functions of the image reading apparatuses 101 and 101a may instead be realized by a computer program executed by a microprocessor including a CPU. In that case, the microprocessor acquires the computer program by reading program data stored in a computer-readable information storage medium, or by downloading it via a communication means such as the Internet, and executes processing in accordance with the acquired computer program.
FIG. 38 is a functional block diagram showing the configuration of the image reading apparatus 101b according to the sixth embodiment in the case where some of its functions are realized by a computer program. As illustrated in FIG. 38, the image reading apparatus 101b includes an imaging unit 102, an A/D conversion unit 103, and an arithmetic device 105. The arithmetic device 105 includes a processor (for example, a microprocessor) 151 including a CPU, a RAM 152 as a volatile memory, a nonvolatile memory 153, a mass storage medium 154 as an information recording medium, and a bus 155 to which the components 151 to 154 are connected. The nonvolatile memory 153 is, for example, a flash memory, and the mass storage medium 154 is, for example, a hard disk (magnetic disk), an optical disk, or a semiconductor storage device.
The A/D conversion unit 103 shown in FIG. 38 has the same function as the A/D conversion unit 103 shown in FIGS. 17 and 36; it converts the electric signal SI output by the imaging unit 102 into digital data DI and supplies the digital data to the processor 151. The processor 151 stores the supplied digital data in the RAM 152.
The processor 151 loads a computer program from the nonvolatile memory 153 or the mass storage medium 154, sets the thinning rate M determined based on the scaling factor (or sets both the thinning rate M and the search range exclusion line count LMT), and executes image processing. This image processing can realize the same processing as that performed by the image processing units 104 and 104a shown in FIGS. 17 and 36.
<< 6-2 >> Operation of Embodiment 6
FIG. 39 is a flowchart schematically showing an example of the processing performed by the arithmetic device 105 of the sixth embodiment. As shown in FIG. 39, the processor 151 first executes read control processing: using the thinning rate M set based on the scaling factor (or using both the thinning rate M and the search range exclusion line count LMT), it reads out the data in the overlap region of the image data and extracts reference data MO and comparison data ME with a predetermined number of lines (step S101). The processor 151 then executes similarity calculation processing, which compares the reference data MO with the comparison data ME (step S102). Thereafter, the processor 151 executes shift amount estimation processing (step S103) and, using the thinning rate M (or the thinning rate M and the search range exclusion line count LMT), executes shift amount enlargement processing that enlarges and interpolates the shift amount data (step S104). Finally, the processor 151 executes combination processing (step S105). The processing of steps S101 to S105 by the arithmetic device 105 is the same as the processing performed by the image processing unit 104 in the fourth embodiment or by the image processing unit 104a in the fifth embodiment.
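The shift amount enlargement of step S104 can be sketched as follows. The text only says the thinned shift samples are "enlarged and interpolated", so the use of linear interpolation between neighbouring samples is an assumption.

```python
def enlarge_shift(dsh, M):
    """Expand thinned shift-amount samples `dsh` by the factor M,
    linearly interpolating M - 1 values between neighbouring samples
    (a stand-in for the shift amount enlargement of step S104)."""
    out = []
    for a, b in zip(dsh, dsh[1:]):
        out.extend(a + (b - a) * j / M for j in range(M))
    out.append(dsh[-1])  # keep the final sample
    return out
```

For example, with M = 2 the thinned samples [0, 2] become [0, 1, 2]: one interpolated value is placed between each pair of measured shift amounts, restoring the per-line resolution that the thinning removed.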
<< 6-3 >> Effects of Embodiment 6
As described above, in the image reading apparatus 101b according to the sixth embodiment, the search range of the positional deviation amount is effectively expanded, and the shift amount data dsh can be calculated without changing, according to the scaling factor, the number of similarity data items to be generated or the processing that generates them; by enlarging and interpolating the shift amount data dsh, a value indicating the positional deviation amount corresponding to the line range changed based on the scaling factor is obtained, and the combined image data can be generated. Therefore, according to the image reading apparatus 101b of the sixth embodiment, even during scaling, the positional deviation amount in the sub-scanning direction between image data can be obtained accurately without expanding the detection range of the positional deviation amount according to the scaling factor and without increasing the circuit scale, and high-quality composite image data corresponding to the object being read can be generated.
<< 7 >> Embodiment 7
<< 7-1 >> Configuration of Embodiment 7
In the fourth embodiment, as shown in FIG. 20(a), the optical axis 127O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On, counted from one end (for example, the left end) of the sensor substrate 120, intersects the optical axis 127E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En. The seventh embodiment describes the case where the line sensors are installed so that the optical axis 128O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the optical axis 128E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En do not intersect but are parallel. An optical system such as a lens that forms the image of the document on the line sensors may be provided between the line sensors 121O1, ..., 121Ok, ..., 121On and the glass surface 126, and between the line sensors 121E1, ..., 121Ek, ..., 121En and the glass surface 126; the directions of the optical axes 128O and 128E can also be set by such an optical system. Except that the optical axes 128O and 128E are parallel to each other, the image reading apparatus according to the seventh embodiment is substantially the same as the image reading apparatus 101 according to the fourth embodiment. Therefore, FIG. 17 is also referred to in the description of the seventh embodiment.
<< 7-2 >> Operation of Embodiment 7
FIGS. 40(a) and 40(b) are schematic side views showing the positional relationship between the document 160, as the object being read, and the line sensors 121O and 121E in the case where the optical axis 128O of the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On of the imaging unit 102 and the optical axis 128E of the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En are parallel to each other. FIG. 40(a) shows the case where the document 160 is in close contact with the glass surface 126, which is the document table mounting surface, and FIG. 40(b) shows the case where the document 160 is lifted slightly away from the glass surface 126.
In the image reading apparatus according to the seventh embodiment, the image of the document 160 read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On hardly changes whether the document 160 is in close contact with the glass surface 126 as shown in FIG. 40(a) or is separated from the glass surface 126 as shown in FIG. 40(b); likewise, the image read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En hardly changes. When the imaging unit 102 is conveyed in the sub-scanning direction (Y direction), the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On acquire the image data of the same position later in time than the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En. Because the optical axes 128O and 128E are parallel, the amount L1 by which the sub-scanning positions of the images read by the odd-numbered line sensors and by the even-numbered line sensors differ can be treated as approximately constant. The amount L1 consists of a fixed amount YL, determined by the sub-scanning interval (distance) between the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On and the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, and a further positional deviation amount β. That is,

L1 = YL + β.

Here, the positional deviation amount β includes a deviation of the optical axis 128O or 128E caused, for example, by line sensor mounting errors, and a deviation caused by temporal fluctuation (that is, speed variation) of the conveyance speed of the imaging unit 102 in the sub-scanning direction.
FIG. 41 shows the image data DI(Ok) and DI(Ok+1) corresponding to the odd-numbered line sensors 121Ok and 121Ok+1, and the image data DI(Ek) and DI(Ek+1) corresponding to the even-numbered line sensors 121Ek and 121Ek+1, obtained when a document is read. As shown in FIG. 41, the sub-scanning positions (lines) of the image data DI(Ok) and DI(Ok+1) and those of the image data DI(Ek) and DI(Ek+1) are offset from each other by the distance (YL + β), that is, by the fixed amount YL plus the positional deviation amount β.
For this reason, when writing the image data into the image memory 141, the image processing unit 104 shifts either the image read by the odd-numbered line sensors 121O1, ..., 121Ok, ..., 121On or the image read by the even-numbered line sensors 121E1, ..., 121Ek, ..., 121En, or both, by approximately the fixed amount YL in the sub-scanning direction, thereby reducing the sub-scanning positional deviation between the two images. With this processing, as shown in FIG. 42, the image data stored in the image memory 141 is offset in sub-scanning position (lines) only by a distance equal to the remaining positional deviation amount β.
During scaling, as in the fourth embodiment, the image is enlarged or reduced in the sub-scanning direction by the scaling factor R, and the fixed amount YL and the positional deviation amount β are likewise scaled by the factor R. Therefore, when writing the image data into the image memory 141, the image processing unit 104 shifts the data not by the fixed amount YL but by (R × YL) lines, that is, YL multiplied by R, the scaling factor. Although the above description illustrates the case where the image data is shifted by an amount corresponding to the fixed amount YL when written into the image memory 141, the read control unit 142 and the combination processing unit 146 may instead add this fixed amount YL as an offset to the read positions when reading out the image data.
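A minimal sketch of this write-time compensation is given below, under two stated assumptions: that the odd-side image leads the even-side image by the sensor gap (the text leaves open which side is shifted, and in which direction), and that the scaled gap R × YL is rounded to whole lines. After the drop, only the residual deviation β remains for the correlation search.

```python
def compensate_gap(odd_rows, Y_L, R):
    """Drop the first round(R * Y_L) rows of the odd sensors' image so
    that row i afterwards corresponds (up to the residual deviation
    beta) to row i of the even sensors' image. Y_L is the fixed
    sensor gap in lines; R is the scaling factor applied to it."""
    offset = round(R * Y_L)
    return odd_rows[offset:]
```

At unity magnification (R = 1.0) this reduces to dropping exactly YL rows, the fixed-gap compensation described for the non-scaling case.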
In the image processing unit 104, the positional deviation amount β is obtained and the images are combined. Based on the thinning rate M, which is set in accordance with the magnification ratio R, the image data in the overlap region are read out from the image memory 141, the reference data and the comparison data are compared to calculate the similarity, the shift amount data are calculated from the position in the sub-scanning direction of the comparison data with the highest similarity, enlarged shift amount data are obtained by enlarging the shift amount data in the sub-scanning direction based on the thinning rate M, and the image data are read out from the image memory 141 according to the enlarged shift amount data and combined. The configuration and operation for performing this processing are the same as those in Embodiment 4.
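The matching pipeline summarized above can be sketched roughly as follows (a simplified illustration assuming a sum-of-absolute-differences similarity measure and small synthetic data; all names, shapes, and values are assumptions, not the patent's implementation):

```python
import numpy as np

def estimate_shift(ref, cmp_region, search_range, m):
    """Return the sub-scanning shift (in un-thinned lines) that best aligns
    cmp_region to ref, using sum of absolute differences (SAD)."""
    h = ref.shape[0]
    best_offset, best_sad = 0, float("inf")
    for offset in range(-search_range, search_range + 1):
        start = search_range + offset
        candidate = cmp_region[start:start + h, :]
        sad = np.abs(ref.astype(int) - candidate.astype(int)).sum()
        if sad < best_sad:               # smaller SAD = higher similarity
            best_sad, best_offset = sad, offset
    return best_offset * m               # enlarge the shift by the thinning rate

rng = np.random.default_rng(0)
m = 2                                    # thinning rate (assumed)
overlap = rng.integers(0, 255, (20, 8))  # thinned overlap-region data
ref = overlap[8:12, :]                   # reference block from one sensor row
search = 3
# The other row is stored one thinned line lower in this toy example; the
# comparison region covers offsets -3..+3 around the same position.
cmp_region = np.roll(overlap, 1, axis=0)[8 - search:12 + search, :]

shift = estimate_shift(ref, cmp_region, search, m)
print(shift)  # best thinned offset is 1, enlarged by M -> 2
```

The key point mirrored here is that the search is carried out on thinned data, so the detection range stays small, and only the resulting shift is scaled back up by M before combining.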
<< 7-3 >> Effects of Embodiment 7
As described above, according to the image reading apparatus of Embodiment 7, even when there is temporal fluctuation (that is, speed variation) in the conveyance speed of the conveyance mechanism that moves either the document 160 or the imaging unit 102, or both, in the sub-scanning direction during imaging by the imaging unit 102, the amount of positional deviation in the sub-scanning direction between the image data can be obtained accurately, as in Embodiment 4, without widening the detection range of the positional deviation amount in the sub-scanning direction in accordance with the magnification ratio and without increasing the circuit scale, so that the image data can be combined accurately to generate combined image data.
The image processing apparatus, image processing method, image reading apparatus, and program according to the present invention can be applied to information processing apparatuses such as copying machines, scanners, facsimile machines, and personal computers.
1, 1a image reading apparatus, 2 imaging unit, 3 A/D conversion unit, 4 image processing unit (image processing apparatus), 5 arithmetic unit, 20 sensor substrate, 21O1, ..., 21Ok, ..., 21On odd-numbered line sensors, 21E1, ..., 21Ek, ..., 21En even-numbered line sensors, 22 spacing between adjacent odd-numbered line sensors, 23 spacing between adjacent even-numbered line sensors, 25 illumination light source, 26R photoelectric conversion element for red light, 26G photoelectric conversion element for green light, 26B photoelectric conversion element for blue light, 27O, 28O optical axes of the odd-numbered line sensors, 27E, 28E optical axes of the even-numbered line sensors, 41 image memory, 42 similarity calculation unit, 43 shift amount estimation unit, 44 combination processing unit, 51 processor, 52 RAM, 53 nonvolatile memory, 54 mass storage medium, 60 document, D42 correlation data, D43 shift amount data, M44 image data, D44 image data, 101, 101a, 101b image reading apparatus, 102 imaging unit, 103 A/D conversion unit, 104, 104a image processing unit (image processing apparatus), 105 arithmetic unit, 107, 107a controller, 120 sensor substrate, 121O1, ..., 121Ok, ..., 121On, 121O odd-numbered line sensors, 121E1, ..., 121Ek, ..., 121En, 121E even-numbered line sensors, sr right-end region of a line sensor, sl left-end region of a line sensor, A1,1, ..., Ak,k, Ak,k+1, Ak+1,k+1, ..., An,n overlap regions, 125 illumination light source, 126 glass surface, 126R photoelectric conversion element for red light, 126G photoelectric conversion element for green light, 126B photoelectric conversion element for blue light, 127O, 128O optical axes of the odd-numbered line sensors, 127E, 128E optical axes of the even-numbered line sensors, 141 image memory, 142 read control unit, 143 similarity calculation unit, 144, 144a shift amount estimation unit, 145 shift amount enlargement unit, 146 combination processing unit, 151 processor, 152 RAM, 153 nonvolatile memory, 154 mass storage medium, 160 document, SI electric signal (image data), DI digital data (image data), rMO image data, rME image data, MO image data, ME image data, D143 correlation data, dsh shift amount data, Δyb enlarged shift amount data, RP read position data, M146 image data, D146 image data, Dy conveyance direction of the imaging unit.
Claims (22)
- An image processing apparatus for processing image data generated by an imaging unit having a first row of line sensors including a plurality of first line sensors lined up at a first interval in the main scanning direction, and a second row of line sensors including a plurality of second line sensors arranged at positions differing from the first row of line sensors in the sub-scanning direction and lined up at a second interval in the main scanning direction, the plurality of first line sensors belonging to the first row of line sensors being arranged so as to face the second intervals in the second row of line sensors, the plurality of second line sensors belonging to the second row of line sensors being arranged so as to face the first intervals in the first row of line sensors, and adjacent ends of adjacent first and second line sensors having overlap regions in which they overlap each other in the main scanning direction, the image processing apparatus comprising:
an image memory that stores first image data based on detection signals generated by the first row of line sensors and second image data based on detection signals generated by the second row of line sensors;
a similarity calculation unit that performs a process of comparing reference data, having a predetermined first width in the sub-scanning direction in an overlap region, from the first image data read from the image memory with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data;
a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and
a combination processing unit that reads the first image data and the second image data from the image memory, divides the shift amount indicated by the shift amount data, generates first shift amount data indicating one of the divided shift amounts and second shift amount data indicating the other divided shift amount, corrects the position of the first image data in the sub-scanning direction based on the first shift amount data, corrects the position of the second image data in the sub-scanning direction based on the second shift amount data, and combines the first image data and the second image data.
- The image processing apparatus according to claim 1, wherein, when combining the first image data and the second image data, the combination processing unit changes the position of the first image data in the sub-scanning direction and the position of the second image data in the sub-scanning direction in accordance with the position in the main scanning direction.
- The image processing apparatus according to claim 1, wherein the similarity calculation unit uses, as the similarity, the absolute value of the per-pixel difference between the reference data and the comparison data.
- The image processing apparatus according to claim 1, wherein the similarity calculation unit uses, as the similarity, the sum of squares of the per-pixel differences between the reference data and the comparison data.
- The image processing apparatus according to any one of claims 1 to 4, wherein the combination processing unit corrects either one of the first row of line sensors and the second row of line sensors based on the shift amount data.
- The image processing apparatus according to any one of claims 1 to 4, wherein the combination processing unit corrects both the first row of line sensors and the second row of line sensors based on the shift amount data.
- An image processing apparatus for processing image data generated by an imaging unit having a first row of line sensors including a plurality of first line sensors lined up at a first interval in the main scanning direction, and a second row of line sensors including a plurality of second line sensors arranged at positions differing from the first row of line sensors in the sub-scanning direction and lined up at a second interval in the main scanning direction, the plurality of first line sensors belonging to the first row of line sensors being arranged so as to face the second intervals in the second row of line sensors, the plurality of second line sensors belonging to the second row of line sensors being arranged so as to face the first intervals in the first row of line sensors, and adjacent ends of adjacent first and second line sensors having overlap regions in which they overlap each other in the main scanning direction, the image processing apparatus comprising:
an image memory that stores first image data based on detection signals generated by the first row of line sensors and second image data based on detection signals generated by the second row of line sensors;
a similarity calculation unit that performs a process of comparing reference data, having a predetermined first width in the sub-scanning direction in an overlap region, from the first image data read from the image memory with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data;
a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and
a combination processing unit that reads the first image data and the second image data from the image memory and combines the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
- An image processing method for processing image data generated by an imaging unit having a first row of line sensors including a plurality of first line sensors lined up at a first interval in the main scanning direction, and a second row of line sensors including a plurality of second line sensors arranged at positions differing from the first row of line sensors in the sub-scanning direction and lined up at a second interval in the main scanning direction, the plurality of first line sensors belonging to the first row of line sensors being arranged so as to face the second intervals in the second row of line sensors, the plurality of second line sensors belonging to the second row of line sensors being arranged so as to face the first intervals in the first row of line sensors, and adjacent ends of adjacent first and second line sensors having overlap regions in which they overlap each other in the main scanning direction, the image processing method comprising:
a step of performing a process of comparing reference data, having a predetermined first width in the sub-scanning direction in an overlap region, from the first image data read from an image memory that stores first image data based on detection signals generated by the first row of line sensors and second image data based on detection signals generated by the second row of line sensors, with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data;
a step of calculating shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and
a step of reading the first image data and the second image data from the image memory and combining the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
- An image reading apparatus comprising:
an imaging unit having a first row of line sensors including a plurality of first line sensors lined up at a first interval in the main scanning direction, and a second row of line sensors including a plurality of second line sensors arranged at positions differing from the first row of line sensors in the sub-scanning direction and lined up at a second interval in the main scanning direction, the plurality of first line sensors belonging to the first row of line sensors being arranged so as to face the second intervals in the second row of line sensors, the plurality of second line sensors belonging to the second row of line sensors being arranged so as to face the first intervals in the first row of line sensors, and adjacent ends of adjacent first and second line sensors having overlap regions in which they overlap each other in the main scanning direction;
an image memory that stores first image data based on detection signals generated by the first row of line sensors and second image data based on detection signals generated by the second row of line sensors;
a similarity calculation unit that performs a process of comparing reference data, having a predetermined first width in the sub-scanning direction in an overlap region, from the first image data read from the image memory with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data;
a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and
a combination processing unit that reads the first image data and the second image data from the image memory and combines the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
- A program executable by a computer for processing image data generated by an imaging unit having a first row of line sensors including a plurality of first line sensors lined up at a first interval in the main scanning direction, and a second row of line sensors including a plurality of second line sensors arranged at positions differing from the first row of line sensors in the sub-scanning direction and lined up at a second interval in the main scanning direction, the plurality of first line sensors belonging to the first row of line sensors being arranged so as to face the second intervals in the second row of line sensors, the plurality of second line sensors belonging to the second row of line sensors being arranged so as to face the first intervals in the first row of line sensors, and adjacent ends of adjacent first and second line sensors having overlap regions in which they overlap each other in the main scanning direction, the program causing the computer to execute:
a process of comparing reference data, having a predetermined first width in the sub-scanning direction in an overlap region, from the first image data read from an image memory that stores first image data based on detection signals generated by the first row of line sensors and second image data based on detection signals generated by the second row of line sensors, with comparison data of the same width as the first width, selected from the second image data read from the image memory in the same overlap region, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data;
a process of calculating shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the plurality of pieces of comparison data; and
a process of reading the first image data and the second image data from the image memory and combining the first image data and the second image data while changing the combining position in the sub-scanning direction based on the shift amount data.
- An image processing apparatus for processing image data generated by an imaging unit that has, at different positions in the sub-scanning direction, at least two rows of line sensors each including a plurality of photoelectric conversion elements lined up in the main scanning direction, the line sensors being arranged so that the ends of adjacent line sensors in mutually different rows have overlap regions in which they overlap each other in the main scanning direction, the image processing apparatus comprising:
an image memory that stores image data based on outputs from the line sensors;
a read control unit that reads, from the image data stored in the image memory, based on a thinning rate set in accordance with the reading magnification in the sub-scanning direction of the line sensors, reference data at a predetermined position in the sub-scanning direction in an overlap region and comparison data in a region overlapping the overlap region of the reference data;
a similarity calculation unit that performs a process of comparing the reference data read by the read control unit with the comparison data for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at the plurality of positions in the sub-scanning direction;
a shift amount estimation unit that calculates shift amount data based on the difference between the position of the reference data in the sub-scanning direction and the position in the sub-scanning direction of the comparison data having the highest similarity among the comparison data at the plurality of positions in the sub-scanning direction;
a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and
a combination processing unit that determines, based on the enlarged shift amount data, the position in the sub-scanning direction of the image data to be read from the image memory, reads the image data at the determined position from the image memory, and generates combined image data by combining the image data read by the adjacent line sensors in the mutually different rows.
- The read control unit
, when reading and outputting the reference data and the comparison data from the image memory based on the thinning rate, outputs
the reference data at a predetermined position in the sub-scanning direction in the overlap region, and
comparison data at a predetermined number of lines of positions in the sub-scanning direction, selected from the image data at the same position in the sub-scanning direction as the reference data and at positions before and after it in the sub-scanning direction, in the overlap region,
in the image processing apparatus according to claim 11.
- The thinning rate is set based on the number of reading lines in the sub-scanning direction enlarged or reduced in accordance with the reading magnification, and
the read control unit,
when reading image data at predetermined positions in the sub-scanning direction in the overlap regions of the image data from the image memory,
reads the image data while thinning it so that, at every line interval in the sub-scanning direction of the number of lines corresponding to the thinning rate, reference data at a predetermined position in the sub-scanning direction in the overlap region and comparison data at a predetermined number of lines of positions in the sub-scanning direction, selected from the image data at the same position as the sub-scanning position of the reference data and at positions before and after it in the sub-scanning direction, are read out, and outputs the reference data and the comparison data,
in the image processing apparatus according to claim 11 or 12.
- The thinning rate is set based on the number of reading lines in the sub-scanning direction enlarged or reduced in accordance with the reading magnification, and
the read control unit,
when reading image data at predetermined positions in the sub-scanning direction in the overlap regions of the image data from the image memory,
reads the image data at every line interval in the sub-scanning direction of the number of lines corresponding to the thinning rate, averages the image data in the sub-scanning direction in units of the line interval, and outputs the result as the reference data at a predetermined position in the sub-scanning direction in the overlap region and as the comparison data at a predetermined number of positions in the sub-scanning direction selected from the image data at the same position as the reference data or at positions before and after it in the sub-scanning direction,
in the image processing apparatus according to claim 11 or 12.
- The image processing apparatus according to claim 11, wherein the shift amount enlargement unit converts the shift amount data into the enlarged shift amount data by converting the value of the shift amount data using a magnification corresponding to the thinning rate and, for each line interval in the sub-scanning direction of the number of lines corresponding to the thinning rate, repeatedly inserting the shift amount data until the next line for which shift amount data is obtained, thereby performing interpolation in the sub-scanning direction.
- The shift amount enlargement unit converts the values of the shift amount data using a magnification corresponding to the thinning rate, and converts the shift amount data into the enlarged shift amount data by performing interpolation in the sub-scanning direction, from the shift amount data of the lines for which it is obtained, over the number of sub-scanning lines corresponding to the thinning rate.
The image processing device according to claim 11.
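The two enlargement variants above — repeating each measured shift value until the next measured line, versus interpolating between measured lines — can be contrasted in a short NumPy sketch. This is a hedged illustration, not the patented implementation; for simplicity it assumes the magnification applied to the shift values equals the thinning rate.

```python
import numpy as np

def enlarge_repeat(shift, thinning_rate):
    """Repeat-insertion variant: scale each measured shift value, then
    repeat it for every sub-scanning line until the next measured line."""
    return np.repeat(shift * thinning_rate, thinning_rate)

def enlarge_interp(shift, thinning_rate):
    """Interpolation variant: scale the measured values, then linearly
    interpolate over the intervening sub-scanning lines."""
    scaled = shift * thinning_rate
    coarse = np.arange(len(scaled)) * thinning_rate  # lines where shift was measured
    fine = np.arange(len(scaled) * thinning_rate)    # every output line
    return np.interp(fine, coarse, scaled)
```

Both variants produce one enlarged shift value per output line; the repeat-insertion form is cheaper, while the interpolated form varies smoothly between measured lines.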
- The thinning rate is an integer of 1 or more.
The image processing device according to any one of claims 11 to 16.
- The shift amount estimation unit
further limits the range of sub-scanning positions of the comparison data used for calculating the shift amount data, according to a number of search-range-excluded lines set based on the sub-scanning reading magnification of the line sensors and the thinning rate, and
calculates the shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data with the highest similarity among the plurality of positions.
The image processing device according to any one of claims 11 to 17.
- The combination processing unit
calculates, based on the enlarged shift amount data, the sub-scanning position of the image data to be read from the image memory, and
generates the composite image data using the image data at the calculated sub-scanning position and the image data of the lines adjacent to that position.
The image processing device according to any one of claims 11 to 18.
- An image processing method for processing image data generated by an imaging unit that has at least two rows of line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, at different positions in the sub-scanning direction, the line sensors being arranged so as to have an overlap region in which the ends of adjacent line sensors in mutually different rows overlap each other in the main scanning direction, the method comprising:
a storage step of storing image data based on the outputs of the line sensors in an image memory;
a read control step of reading, based on a thinning rate set according to the sub-scanning reading magnification of the line sensors, reference data at a predetermined sub-scanning position in the overlap region and comparison data in a region overlapping the overlap region of the reference data, from the image data stored in the image memory;
a similarity calculation step of performing a process of comparing the reference data read in the read control step with the comparison data, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data at each of the plurality of sub-scanning positions;
a shift amount calculation step of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data with the highest similarity among the plurality of sub-scanning positions;
a shift amount enlargement step of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and
a combination processing step of determining, based on the enlarged shift amount data, the sub-scanning position of the image data to be read from the image memory, reading the image data at the determined position from the image memory, and generating composite image data by combining the image data read out by adjacent line sensors in the mutually different rows.
- An image reading device comprising: an imaging unit that has at least two rows of line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, at different positions in the sub-scanning direction, the line sensors being arranged so as to have an overlap region in which the ends of adjacent line sensors in mutually different rows overlap each other in the main scanning direction;
an image memory that stores image data based on the outputs of the line sensors;
a read control unit that reads, based on a thinning rate set according to the sub-scanning reading magnification of the line sensors, reference data at a predetermined sub-scanning position in the overlap region and comparison data in a region overlapping the overlap region of the reference data, from the image data stored in the image memory;
a similarity calculation unit that performs a process of comparing the reference data read by the read control unit with the comparison data, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculates the similarity between the reference data and the comparison data at each of the plurality of sub-scanning positions;
a shift amount estimation unit that calculates shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data with the highest similarity among the plurality of sub-scanning positions;
a shift amount enlargement unit that converts the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and
a combination processing unit that determines, based on the enlarged shift amount data, the sub-scanning position of the image data to be read from the image memory, reads the image data at the determined position from the image memory, and generates composite image data by combining the image data read out by adjacent line sensors in the mutually different rows.
- A program for causing a computer to execute processing of image data generated by an imaging unit that has at least two rows of line sensors, each including a plurality of photoelectric conversion elements arranged in the main scanning direction, at different positions in the sub-scanning direction, the line sensors being arranged so as to have an overlap region in which the ends of adjacent line sensors in mutually different rows overlap each other in the main scanning direction, the program causing the computer to execute:
a storage process of storing image data based on the outputs of the line sensors in an image memory;
a read control process of reading, based on a thinning rate set according to the sub-scanning reading magnification of the line sensors, reference data at a predetermined sub-scanning position in the overlap region and comparison data in a region overlapping the overlap region of the reference data, from the image data stored in the image memory;
a similarity calculation process of performing a process of comparing the reference data read in the read control process with the comparison data, for a plurality of positions obtained by moving the position of the comparison data in the sub-scanning direction, and calculating the similarity between the reference data and the comparison data at each of the plurality of sub-scanning positions;
a shift amount calculation process of calculating shift amount data based on the difference between the sub-scanning position of the reference data and the sub-scanning position of the comparison data with the highest similarity among the plurality of sub-scanning positions;
a shift amount enlargement process of converting the shift amount data into enlarged shift amount data by enlarging and interpolating the shift amount data in the sub-scanning direction based on the thinning rate; and
a combination process of determining, based on the enlarged shift amount data, the sub-scanning position of the image data to be read from the image memory, reading the image data at the determined position from the image memory, and generating composite image data by combining the image data read out by adjacent line sensors in the mutually different rows.
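Taken together, the similarity search and combination steps shared by the method, device, and program claims can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions — integer shifts, sum-of-absolute-differences as the similarity measure, and a wrap-around `np.roll` standing in for reading shifted addresses from the image memory; all names are illustrative, not from the patent.

```python
import numpy as np

def estimate_shift(ref, comp_window, window_start, ref_pos):
    """Compare the reference line against each candidate comparison line in
    the window (candidates differ only by their sub-scanning position).
    The shift is the difference between the most similar candidate's
    position and the reference position. SAD is used as the (dis)similarity
    measure here; the smaller the SAD, the higher the similarity."""
    sad = np.abs(comp_window - ref).sum(axis=1)
    best = window_start + int(np.argmin(sad))
    return best - ref_pos

def combine(left, right, shift, overlap):
    """Align the right-hand sensor's strip by the estimated integer shift
    in the sub-scanning direction (axis 0) and butt-join the two strips,
    dropping the duplicated overlap columns from the right strip. np.roll
    wraps at the edges, which a real device would avoid by computing the
    memory address to read instead."""
    aligned = np.roll(right, -shift, axis=0)
    return np.concatenate([left, aligned[:, overlap:]], axis=1)
```

With per-line shifts taken from the enlarged shift amount data, the same alignment would be applied line by line rather than to the whole strip at once.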
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015500302A JP6058115B2 (en) | 2013-02-18 | 2014-02-14 | Image processing apparatus, image processing method, image reading apparatus, and program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013029146 | 2013-02-18 | ||
JP2013-029146 | 2013-02-18 | ||
JP2013-175740 | 2013-08-27 | ||
JP2013175740 | 2013-08-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014126187A1 (en) | 2014-08-21 |
Family
ID=51354186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/053430 WO2014126187A1 (en) | 2013-02-18 | 2014-02-14 | Image processing device, image processing method, image reading device, and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6058115B2 (en) |
WO (1) | WO2014126187A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0698101A (en) * | 1992-09-11 | 1994-04-08 | Dainippon Screen Mfg Co Ltd | Read area connector for image reader |
JP2010206607A (en) * | 2009-03-04 | 2010-09-16 | Mitsubishi Electric Corp | Image reading apparatus |
2014
- 2014-02-14 WO PCT/JP2014/053430 patent/WO2014126187A1/en active Application Filing
- 2014-02-14 JP JP2015500302A patent/JP6058115B2/en active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108347540A (en) * | 2017-01-23 | 2018-07-31 | 精工爱普生株式会社 | The production method of scanner, scanner program and scan data |
CN108347540B (en) * | 2017-01-23 | 2019-11-08 | 精工爱普生株式会社 | The production method of scanner, scanner program and scan data |
CN111400359A (en) * | 2020-03-17 | 2020-07-10 | 创新奇智(北京)科技有限公司 | Similar k-line retrieval method and system for stock trend prediction |
CN111400359B (en) * | 2020-03-17 | 2023-11-10 | 创新奇智(北京)科技有限公司 | Stock trend prediction-oriented similar k-line retrieval method and retrieval system |
Also Published As
Publication number | Publication date |
---|---|
JP6058115B2 (en) | 2017-01-11 |
JPWO2014126187A1 (en) | 2017-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6076552B1 (en) | Image reading apparatus and image reading method | |
JP4709084B2 (en) | Image processing apparatus and image processing method | |
JP6553826B1 (en) | Image processing apparatus, image processing method, and image processing program | |
JP5388558B2 (en) | Image processing apparatus and image processing method | |
JP5627215B2 (en) | Image processing apparatus and control method thereof | |
US8320715B2 (en) | Device and method for interpolating image, and image scanner | |
US9614996B2 (en) | Image processing apparatus, method therefor, and image reading apparatus | |
US8553293B2 (en) | Image interpolation apparatus and a computer readable storage medium storing instructions of a computer program | |
JP2008147976A (en) | Image inclination correction device and image inclination correcting method | |
JP6058115B2 (en) | Image processing apparatus, image processing method, image reading apparatus, and program | |
JP6422428B2 (en) | Image processing apparatus, image processing method, image reading apparatus, and program | |
JP4913089B2 (en) | Image reading device | |
JP6246379B2 (en) | Image processing apparatus, image processing method, image reading apparatus, and program | |
JP2010056961A (en) | Image processing apparatus and information processing apparatus, method and program | |
JP5590911B2 (en) | Image reading apparatus and method | |
JP2013005176A (en) | Image reading device, image formation device and line image sensor control method | |
JPH10285379A (en) | Original angle correction method and original angle correction system | |
JP2009177798A (en) | Image processing apparatus, image processing method, and program to execute the same | |
JP2006166106A (en) | Picture reader | |
JP2019161635A (en) | Image reading apparatus and image data generation method | |
JP2005122257A (en) | Image processing apparatus and electronic apparatus | |
JP2004112705A (en) | Image reading device | |
JP2006107382A (en) | Image processing controller and image processing control method | |
JP2007148881A (en) | Pixel interpolation method and device, and image reader | |
JP2006262153A (en) | Imaging apparatus |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14751779; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015500302; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14751779; Country of ref document: EP; Kind code of ref document: A1 |