
WO2011010431A1 - Image processing device, image processing method, and image capturing device - Google Patents

Image processing device, image processing method, and image capturing device

Info

Publication number
WO2011010431A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
data
raw data
compression
image
Prior art date
Application number
PCT/JP2010/004388
Other languages
French (fr)
Japanese (ja)
Inventor
秦野敏信
久方和之
橋永寿彦
吉廣秀章
Original Assignee
Panasonic Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Publication of WO2011010431A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/46: Colour picture communication systems
    • H04N 1/64: Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor

Definitions

  • the present invention relates to an image processing apparatus and an image processing method for compressing RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array in accordance with a certain rule.
  • the present invention relates to a technique for improving the compression efficiency of color-specific plane data.
  • The present invention also relates to an image pickup apparatus that incorporates the image processing apparatus described above together with an image sensor of a type that performs color separation when capturing images. Examples of the target image processing apparatus and imaging apparatus include a digital still camera, a digital video camera, a stand-alone image scanner, and an image scanner incorporated in a copying machine.
  • “Plane data” refers to data having an array form that is expanded two-dimensionally when the data is developed on an image memory.
  • A CCD-type or MOS-type image sensor for obtaining a color image is provided with a color filter array (color separation filter) in which color filters of a plurality of types of color components are repeatedly arranged according to a certain rule corresponding to the two-dimensional pixel array of the image sensor.
  • For example, in a Bayer-array color separation filter, in which RGB primary color filters are arranged in a checkered pattern corresponding to the pixels of the image sensor, the filters for each RGB color component are arranged every other pixel along both the horizontal and vertical directions (BGgR).
  • the information obtained from each pixel is only information of one color component. Therefore, in order to enhance the expressive power, each pixel is interpolated using the color information of the surrounding pixels, and information on a plurality of color components is obtained for each pixel. That is, information on all color components is obtained for all pixels. This is called synchronized color interpolation processing.
  • color image data of each color component having the same number of pixels as that of the image sensor is obtained.
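As background only, a minimal NumPy sketch of the idea behind this synchronized color interpolation, assuming an 8-bit or float BGgR Bayer mosaic and plain 4-neighbour averaging for the green plane; the function name and the averaging rule are illustrative and are not the method of any particular camera or of this patent:

```python
import numpy as np

def fill_missing_green(raw: np.ndarray) -> np.ndarray:
    """Estimate a full-resolution green plane from a BGgR Bayer mosaic.

    In a BGgR mosaic, every red or blue location has four green direct
    neighbours, so a plain 4-neighbour average suffices for illustration.
    """
    raw = raw.astype(np.float32)
    padded = np.pad(raw, 1, mode="edge")
    neighbour_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0

    green = raw.copy()
    # Green samples already sit at (even row, odd col) and (odd row, even col);
    # the remaining (even, even) blue and (odd, odd) red sites get the average.
    green[0::2, 0::2] = neighbour_avg[0::2, 0::2]
    green[1::2, 1::2] = neighbour_avg[1::2, 1::2]
    return green
```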
  • Signal processing such as white balance (WB) adjustment, gamma correction, and edge enhancement is then performed, and the data is converted into component signals consisting of a luminance signal (Y) and two types of color difference signals (Cr, Cb). The data is then compressed with a compression encoding algorithm such as JPEG (Joint Photographic Experts Group) to reduce the file size and recorded on a recording medium.
  • In some cases, raw (RAW) digital data immediately after A/D conversion of the imaging analog signal, with no image processing applied, is recorded so that high-quality development processing and retouch processing can be performed after shooting.
  • RAW data is reversibly compressed and recorded on a recording medium.
  • Image reproduction (development) processing is performed by an external device such as a personal computer, thereby realizing high-quality printing and image editing that matches the user's purpose.
  • RAW data without image processing is used because, if image data to which image processing has already been applied were recorded and then decoded for development or retouch processing and subjected to further image processing, the image quality would deteriorate.
  • Patent Document 1 describes a method in which a common compression processing step is used to compress independently in two modes: a mode for recording after performing the synchronized color interpolation process, and a mode for recording in the RAW data state without performing the synchronized color interpolation process (see FIGS. 13A and 13B).
  • Patent Document 2 describes a method of separating RAW data into component data for each color component without going through the synchronized color interpolation process, and compressing each piece of component data (see FIGS. 14 and 15).
  • In this method, the RAW data is divided into color-specific plane data in which the correlation between adjacent pixels is high, and the plane data is compressed; higher compression efficiency is therefore said to be obtained than when RAW data that has not been separated by color is compressed as it is.
  • In Patent Document 2, however, the plurality of color plane data obtained by dividing the RAW data are compressed individually; that is, the same compression process is repeated four times. Switching between plane data during this repetition takes time: there are three switches, from the compression of the first plane data to that of the second, from the second to the third, and from the third to the fourth. Compressing the individual color-specific plane data one after another in this way takes a long time in total, so the compression efficiency is low.
  • Patent Document 2 also describes using a plurality of compression processing steps that can operate in parallel (see paragraph [0032]). In that case, however, a CPU (Central Processing Unit) with very high processing capability is required, and when a hardware configuration is adopted, the circuit scale increases significantly. If the compression efficiency is low, it is disadvantageous for high-speed processing in the compressed RAW data recording mode, given the recent increase in the number of pixels of image sensors.
  • In an imaging apparatus having a continuous shooting function, this may interfere with high-speed continuous shooting in the compressed RAW data recording mode.
  • the possibility of occurrence of block noise is increased.
  • The present invention was made in view of such circumstances, and its purpose is to increase the compression efficiency of a plurality of color-specific plane data while avoiding an increase in circuit scale and in the required CPU processing capacity.
  • A further, more desirable object is to suppress the occurrence of block noise.
  • the present invention solves the above problems by taking the following measures.
  • An image processing apparatus of the present invention compresses RAW data (image data immediately after A/D conversion, to which no signal processing has been applied) in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule.
  • It comprises a RAW data reconstructor and a compression processor, configured as follows.
  • the RAW data reconstructor inputs RAW data that has not been subjected to signal processing after A / D conversion of the analog signal of the image, and processes it as follows. That is, the input RAW data is decomposed for each color component and reassembled to generate a plurality of color-specific plane data. For example, it is assumed that the RAW data is obtained by repeatedly arranging the first to fourth color components on the pixel array according to a certain rule.
  • The RAW data is decomposed into the first, second, third, and fourth color components, and first color-specific plane data collecting only the first color component, second color-specific plane data collecting only the second color component, third color-specific plane data collecting only the third color component, and fourth color-specific plane data collecting only the fourth color component are generated.
  • the plane data is data having an array form that is expanded two-dimensionally when expanded on the image memory.
  • This includes the case where two of the color-specific plane data are of the same color, as in the Bayer array (BGgR).
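A minimal sketch of this decomposition step, assuming the RAW data is held as a 2-D NumPy array whose BGgR phase starts at the top-left pixel (the function name and the fixed phase are assumptions made for illustration):

```python
import numpy as np

def split_bayer_into_planes(raw: np.ndarray):
    """Decompose a BGgR Bayer mosaic of shape (2n, 2m) into four
    color-specific plane data, each of shape (n, m)."""
    plane_b  = raw[0::2, 0::2]   # first color component  (B)
    plane_g  = raw[0::2, 1::2]   # second color component (G, first green)
    plane_g2 = raw[1::2, 0::2]   # third color component  (g, second green)
    plane_r  = raw[1::2, 1::2]   # fourth color component (R)
    return plane_b, plane_g, plane_g2, plane_r
```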
  • the RAW data reconstructor arranges the plurality of color-specific plane data and collects them as one file that is a compression processing unit.
  • one file which is a compression processing unit, has a plurality of arrangement areas divided for each color component.
  • the RAW data reconstructor distributes and arranges the plurality of color-specific plane data in a plurality of arrangement areas divided for each color component, and generates reconstructed RAW data.
  • That is, the first color-specific plane data is arranged in the first arrangement area, the second color-specific plane data in the second arrangement area, the third color-specific plane data in the third arrangement area, and the fourth color-specific plane data in the fourth arrangement area, thereby generating reconstructed RAW data collected as one file that is the compression processing unit.
  • the generated reconstructed RAW data is passed to the compression processor.
  • the compression processor inputs the reconstructed RAW data of the compression processing unit generated by the RAW data reconstructor, and performs the compression process.
  • the relative positional relationship between the first to fourth arrangement areas is arbitrary.
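To illustrate the reconstruction into one file, the sketch below tiles the four planes into a 2 x 2 layout of arrangement areas inside a single 2-D array. The 2 x 2 layout, the helper name, and the constant fill value for the unused pixels are assumptions made for the example, since the relative positions of the areas are left arbitrary here:

```python
import numpy as np

def build_reconstructed_raw(planes, area_h, area_w, fill_value=128):
    """Place four color-specific planes into four arrangement areas of one
    frame (the compression processing unit); pixels not covered by a plane
    are filled with constant "fixed data"."""
    recon = np.full((2 * area_h, 2 * area_w), fill_value, dtype=planes[0].dtype)
    anchors = [(0, 0), (0, area_w), (area_h, 0), (area_h, area_w)]  # areas a1..a4
    for plane, (top, left) in zip(planes, anchors):
        ph, pw = plane.shape          # each plane must fit: ph <= area_h, pw <= area_w
        recon[top:top + ph, left:left + pw] = plane
    return recon
```

Each arrangement area only has to be at least as large as one color plane; any extra room simply remains fixed data.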
  • To summarize, the image processing apparatus of the present invention is an apparatus that compresses RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, and comprises: a RAW data reconstructor that receives the RAW data, decomposes and reassembles it for each color component to generate a plurality of color-specific plane data, and arranges the plurality of color-specific plane data to generate reconstructed RAW data collected as one file that is a compression processing unit; and a compression processor that receives and compresses the reconstructed RAW data of the compression processing unit generated by the RAW data reconstructor.
  • The image processing method of the present invention is a method of compressing RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule.
  • In this method, the RAW data is input, separated for each color component, and reassembled to generate a plurality of color-specific plane data; the plurality of color-specific plane data are then distributed and arranged in a plurality of arrangement areas divided for each color component to generate reconstructed RAW data collected as one file that is the compression processing unit, and the reconstructed RAW data of the compression processing unit is compressed.
  • the compression processing target of the compression processor is “reconstructed RAW data in which a plurality of color plane data are arranged and collected as one file as a compression processing unit”.
  • The plane data for each color has a high correlation between adjacent pixels, so the compression process provides higher compression efficiency than when RAW data, in which pixels of different color components are adjacent, is compressed as it is.
  • This is also recognized in the prior art as described above.
  • not only that, but also reconstructed RAW data configured by arranging a plurality of types of color-specific plane data on one file which is a unit of compression processing is targeted for compression processing. Therefore, a plurality of color-specific plane data on the reconstructed RAW data can be compressed at once, and higher compression efficiency can be obtained.
  • Since the compression of the plane data for all color components is realized by a single compression process on one file (the reconstructed RAW data) that is the compression processing unit, the compression efficiency is greatly improved compared with the prior art, in which compression is repeated while sequentially switching among the individual color-specific plane data.
  • An image pickup apparatus includes an image pickup unit that converts an optical image input by an image sensor that performs color separation and picks up an image into an analog electric signal, and further converts it into digital RAW data, and the image processing device described above. It is equipped with.
  • In this image pickup apparatus as well, a RAW data reconstructor is provided and compression processing is performed on reconstructed RAW data configured by arranging a plurality of types of color-specific plane data having different color components on one file as the compression processing unit.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an image processing apparatus that compresses reconstructed RAW data generated from RAW data according to an embodiment of the present invention.
  • FIG. 2A is an illustrative diagram of the RAW data to be compressed according to the embodiment of the present invention.
  • FIG. 2B is an illustrative diagram in which four color-specific plane data are adjacently arranged in a two-dimensional direction according to the embodiment of the present invention.
  • FIG. 2C is an illustrative diagram in which four color-specific plane data are arranged at appropriate intervals according to the embodiment of the present invention.
  • FIG. 3 is an explanatory diagram (part 1) showing reconstructed RAW data reconstructed as image data for one frame by two-dimensionally arranging four color-specific plane data of a Bayer array according to the embodiment of the present invention.
  • FIG. 4 is an explanatory diagram of block noise occurrence positions in the image data when the compressed RAW data of FIG. 3 is decompressed, according to the embodiment of the present invention.
  • FIG. 5 is an explanatory diagram (part 2) showing reconstructed RAW data reconstructed as image data for one frame by two-dimensionally arranging four color-specific plane data of a Bayer array according to the embodiment of the present invention.
  • FIG. 6 is an explanatory diagram of block noise occurrence positions in the image data when the compressed RAW data of FIG. 5 is decompressed, according to the embodiment of the present invention.
  • FIG. 7A is a diagram showing the appearance of block noise corresponding to FIGS. 5 and 6 according to the embodiment of the present invention.
  • FIG. 7B is a diagram showing the appearance of block noise corresponding to FIGS. 3 and 4 according to the embodiment of the present invention.
  • FIG. 8A is an explanatory diagram in which the block noise occurrence positions of FIG. 6 are applied to FIG. 7A according to the embodiment of the present invention.
  • FIG. 8B is an explanatory diagram in which the block noise occurrence positions of FIG. 4 are applied to FIG. 7B according to the embodiment of the present invention.
  • FIG. 9 is an explanatory diagram showing other forms of the four independent arrangement areas according to the embodiment of the present invention.
  • FIG. 10 is a block diagram illustrating a schematic configuration of an imaging apparatus including an image processing apparatus that compresses reconstructed RAW data generated from RAW data according to an embodiment of the present invention.
  • FIG. 11A is an explanatory diagram showing an example of a color filter arrangement (Bayer arrangement) according to the embodiment of the present invention.
  • FIG. 11B is an explanatory diagram showing an example of a color filter arrangement (honeycomb arrangement) according to the embodiment of the present invention.
  • In one aspect, when arranging the plurality of color-specific plane data in the one file, the RAW data reconstructor arranges them so that the reference position of each color-specific plane data is shifted by a predetermined number of pixels in both the vertical and horizontal directions with respect to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  • Correspondingly, in the image processing method, the step of generating the reconstructed RAW data may include arranging the plurality of color-specific plane data in the one file so that the reference position of each color-specific plane data is shifted by a predetermined number of pixels in both the vertical and horizontal directions with respect to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  • In another aspect, when arranging the plurality of color-specific plane data in the one file, the RAW data reconstructor arranges them so that the reference position of each color-specific plane data is shifted by a predetermined number of pixels in either the vertical direction or the horizontal direction with respect to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  • Correspondingly, the step of generating the reconstructed RAW data may include arranging the plurality of color-specific plane data in the one file so that the reference position of each color-specific plane data is shifted by a predetermined number of pixels in either the vertical direction or the horizontal direction with respect to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  • When irreversible compression with a high compression rate is used and decompression is performed for reproduction, the block noise occurrence positions of the respective colors, which correspond to the boundary positions of the compression processing unit blocks, can overlap in the resulting reconstructed RAW data for one frame, causing a characteristic image quality degradation in which the block noise is emphasized.
  • When each color-specific plane data is shifted relative to the periodic boundary positions of the compression processing unit blocks as in the aspects described above, the block noise occurrence positions of the respective colors no longer coincide, so this peculiar image quality degradation can be suppressed and the image quality improved.
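A hedged sketch of this shifted placement, reusing the 2 x 2 tiling idea above: each plane's reference position is displaced by a different amount relative to the codec's block grid so that, after decompression, the block-boundary (block-noise) positions of the four colors no longer coincide. The per-plane step of block/4 pixels matches the 1-pixel (4 x 4 block) and 2-pixel (8 x 8 block) shifts described later in the embodiment; the zero-based starting offset and the helper name are assumptions:

```python
import numpy as np

def place_planes_with_offsets(planes, area_h, area_w, block=4, fill_value=128):
    """Tile four color planes into a 2x2 layout of arrangement areas, but
    shift plane k by k*(block//4) pixels down and to the right so that each
    color meets the block grid of the compression processor at a different
    phase (block=4 for an H.264-style 4x4 block, block=8 for JPEG)."""
    recon = np.full((2 * area_h, 2 * area_w), fill_value, dtype=planes[0].dtype)
    anchors = [(0, 0), (0, area_w), (area_h, 0), (area_h, area_w)]
    step = block // 4
    for k, (plane, (top, left)) in enumerate(zip(planes, anchors)):
        dy = dx = k * step            # the displacement vectors differ per plane
        ph, pw = plane.shape          # areas must leave room for the largest shift
        recon[top + dy:top + dy + ph, left + dx:left + dx + pw] = plane
    return recon
```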
  • In a further aspect, the RAW data reconstructor is configured to arrange fixed data in the non-image area between the arrangement areas of the plurality of color-specific plane data when generating the reconstructed RAW data.
  • Correspondingly, in the image processing method, the step of generating the reconstructed RAW data may arrange fixed data in the non-image area between the arrangement areas of the plurality of color-specific plane data.
  • the fixed data is pixel data having no luminance change.
  • There is also an aspect in which the compression processor has a plurality of component input terminals and simultaneously compresses the reconstructed RAW data input from one of the component input terminals and fixed data input from the other component input terminals.
  • Correspondingly, there is an aspect in which the compression processing step receives the reconstructed RAW data and the fixed data through separate systems and compresses them simultaneously.
  • There is likewise an aspect in which the compression processor has a plurality of component input terminals and simultaneously compresses the reconstructed RAW data input from one of the component input terminals and another image data input from the other component input terminals.
  • Correspondingly, there is an aspect in which the compression processing step receives the reconstructed RAW data and the other image data through separate systems and compresses them simultaneously.
  • By simultaneously compressing the reconstructed RAW data and another image, such as a reduced-size image for display, the compression efficiency can be increased further, with the other image included. In this case as well, the resources of a compression processor of the common type that has component input terminals for a luminance signal (Y) and color difference signals (Cr / Cb), such as a JPEG codec, can be used effectively.
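The component-input idea can be mimicked in ordinary software: treat the reconstructed RAW frame as the luminance (Y) component and feed constant fixed data, or a small display image, to the chrominance inputs. The sketch below uses Pillow purely as a stand-in for a camera's JPEG engine and assumes the RAW samples have already been reduced to 8 bits, which a real RAW recorder would not do; it illustrates the data flow only, not the patent's implementation:

```python
import numpy as np
from PIL import Image

def compress_as_luma(recon8: np.ndarray, path: str, quality: int = 95) -> None:
    """Encode an 8-bit reconstructed RAW frame as the Y component of a JPEG,
    with flat fixed data on both chroma components (illustrative only)."""
    flat = np.full_like(recon8, 128)      # fixed data: constant, carries no image content
    y, cb, cr = (Image.fromarray(ch) for ch in (recon8, flat, flat))
    Image.merge("YCbCr", (y, cb, cr)).save(path, format="JPEG", quality=quality)
```

Replacing one of the flat chroma planes with a reduced-size display image would correspond to the "another image data" aspect described above.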
  • the compression processor may be configured to irreversibly compress the reconstructed RAW data.
  • Similarly, the compression processing step may irreversibly compress the reconstructed RAW data.
  • Lossy compression such as JPEG, MPEG (Moving Picture Experts Group), or H.264 achieves a higher compression rate than lossless compression, so compressed RAW data with a smaller file size can be obtained.
  • the image processing method of the present invention may be configured as a single application software, or may be incorporated as a part of an application such as image processing software or file management software.
  • the image processing program according to the image processing method of the present invention is not limited to being applied to a computer system such as a personal computer, and the operation of a central processing unit (CPU) incorporated in an information device such as a digital camera or a cellular phone. It can also be applied as a program.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an image processing apparatus that generates reconstructed RAW data from RAW data acquired by an imaging unit having a color filter with a Bayer array and compresses the reconstructed RAW data.
  • reference numeral 1 denotes an image pickup unit (image sensor) having a color filter in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule.
  • the color filter has four types of color components B (blue component), G (first green component), g (second green component), and R (red component) having a Bayer array.
  • Reference numeral 2 denotes RAW data in a data format based on a Bayer array having four color components B, G, g, and R acquired by the imaging unit 1.
  • Reference numeral 3 denotes a RAW data reconstructor that decomposes the RAW data 2 into the color components B, G, g, and R, reassembles them two-dimensionally to generate four color-specific plane data 41, 42, 43, 44, and further arranges the four color-specific plane data 41, 42, 43, and 44 in four independent arrangement areas a1, a2, a3, and a4 divided for each color component to generate reconstructed RAW data 4 collected as one file that is the compression processing unit.
  • Reference numeral 5 denotes a compression processor that inputs the reconstructed RAW data 4 of the compression processing unit generated by the RAW data reconstructor 3 as luminance data and performs irreversible compression processing.
  • The RAW data 2 output from the imaging unit 1 and input to the RAW data reconstructor 3 is image data immediately after A/D conversion of the imaging analog signal in the imaging unit 1, and has not been subjected to signal processing such as the synchronized color interpolation process, gamma correction, or white balance adjustment.
  • the RAW data 2 is mosaic image data in which only one color information different for each pixel corresponding to the color filter array pattern is held.
  • The RAW data reconstructor 3 receives the RAW data 2, which has not been subjected to signal processing, decomposes it for each color component, and reassembles it to generate the four color-specific plane data 41, 42, 43, and 44. The RAW data 2 is obtained by repeatedly arranging the first to fourth color components on the pixel array in accordance with a certain rule.
  • The RAW data 2 is decomposed into the first color component (B), the second color component (G), the third color component (g), and the fourth color component (R), and first color-specific plane data 41 collecting only the first color component (B), second color-specific plane data 42 collecting only the second color component (G), third color-specific plane data 43 collecting only the third color component (g), and fourth color-specific plane data 44 collecting only the fourth color component (R) are generated.
  • The four color components in the Bayer array are the first color component (B), the second color component (G), the third color component (g), and the fourth color component (R); both the second color component (G) and the third color component (g) are green, but these two are treated as independent color-specific plane data.
  • the RAW data reconstructor 3 further arranges the four color-specific plane data 41, 42, 43, and 44 in a two-dimensional manner and collects them as one file that is a compression processing unit.
  • one file as a compression processing unit has four arrangement areas a1, a2, a3, and a4 divided for each color component.
  • The RAW data reconstructor 3 distributes and arranges the four color-specific plane data 41, 42, 43, and 44 in the four arrangement areas a1, a2, a3, and a4 divided for each color component to generate the reconstructed RAW data 4.
  • That is, the first color-specific plane data 41 is arranged in the first arrangement area a1 corresponding to the color component B (blue), the second color-specific plane data 42 in the second arrangement area a2 corresponding to the color component G (first green), the third color-specific plane data 43 in the third arrangement area a3 corresponding to the color component g (second green), and the fourth color-specific plane data 44 in the fourth arrangement area a4 corresponding to the color component R (red), thereby generating reconstructed RAW data 4 collected as one file that is the compression processing unit.
  • The generated reconstructed RAW data 4 is then transferred to the compression processor 5.
  • the compression processor 5 receives the reconstructed RAW data 4 of the compression processing unit generated by the RAW data reconstructor 3 and performs compression processing.
  • the compression process may be either lossy compression (lossy encoding) or lossless compression (lossless encoding).
  • The relative positional relationship between the first to fourth arrangement regions a1 to a4 is arbitrary. Here they are arranged two-dimensionally along both the horizontal and vertical directions, but as described later they may instead be arranged one-dimensionally along the horizontal direction or one-dimensionally along the vertical direction (see FIG. 9).
  • the second color component (G) is adjacent to the right side of the first color component (B), and the first color component (B) is adjacent to the right side of the second color component (G).
  • Similarly, the fourth color component (R) is adjacent to the right side of the third color component (g), and the third color component (g) is adjacent to the right side of the fourth color component (R).
  • the fourth color component (R) is adjacent to the lower side of the second color component (G), and the second color component (G) is adjacent to the lower side of the fourth color component (R).
  • the pixel values of these adjacent pixels are not so highly correlated. Therefore, when the RAW data 2 is compressed as it is, the compression efficiency is low (equivalent to the prior art).
  • In contrast, the compression processing target of the compression processor 5 is the reconstructed RAW data 4, in which the four color-specific plane data 41, 42, 43, 44 are arranged and collected as one file that is the compression processing unit.
  • the color-specific plane data has a high correlation between adjacent pixels, and the compression process can obtain a higher compression efficiency than the case where the raw data 2 is compressed as it is.
  • the reconstructed RAW data 4 configured by arranging four color-specific plane data 41, 42, 43, and 44 having different color components on one file as a compression processing unit is compressed. Since it is an object of processing, higher compression efficiency can be obtained. Specifically, since the compression processing of the color-specific plane data 41, 42, 43, 44 of all the color components is realized by one compression processing in one file (reconstructed RAW data 4) that is a compression processing unit. Compared with the case where the four color-specific plane data 41, 42, 43, and 44 are individually switched sequentially and repeatedly compressed as in the case of the prior art, the compression efficiency is greatly improved.
  • The compression processor 5 can also make effective use of standard compression control software assets, which are typically mounted on a digital camera and created so that the compression processing of one frame of image data is completed in a single pass. Alternatively, existing JPEG hardware processing can be used. That is, it is not necessary to use four compression processors operating in parallel to complete, in one process, the compression of the one-frame image data containing the four color-specific plane data 41, 42, 43, 44; only one compression processor 5 is required, so there is no need to increase the circuit scale or to particularly increase the processing capacity of the CPU.
  • FIG. 2A illustrates RAW data 2 to be compressed.
  • FIG. 2B four color-specific plane data 41, 42, 43, and 44 are adjacently arranged in a two-dimensional direction.
  • FIG. 2C shows four color-specific plane data 41, 42, 43, 44 arranged at appropriate intervals.
  • In this embodiment, the arrangement of FIG. 2C is adopted instead of that of FIG. 2B: fixed data of an invariant luminance level (indicated in gray) is arranged between the color-specific plane data and around their peripheries to form a non-image area.
  • Here, m, n, x, and y are arbitrary natural numbers (two or more).
  • The size of the RAW data 2 to be compressed is 2m pixels in the horizontal direction and 2n pixels in the vertical direction.
  • The size of the reconstructed RAW data 4, which is one compressed file, is larger than that of the RAW data 2: it is 2(m + x) pixels in the horizontal direction and 2(n + y) pixels in the vertical direction.
  • the four arrangement regions a1, a2, a3, and a4 divided for each color component have the same size, and are (m + x) pixels in the horizontal direction and (n + y) pixels in the vertical direction.
  • the sizes of the four color-specific plane data 41, 42, 43, and 44 are common and are m pixels in the horizontal direction and n pixels in the vertical direction.
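A worked example of this size bookkeeping, using illustrative numbers (a 4000 x 3000 sensor and an 8-pixel margin) that are not taken from the patent:

```python
# Size bookkeeping for the reconstructed RAW frame (illustrative values only).
m, n = 2000, 1500   # one color plane is m x n pixels, so the RAW data 2 is 2m x 2n
x, y = 8, 8         # assumed per-area margin, leaving room for gaps and shifts

plane_size = (m, n)                      # 2000 x 1500  (each of planes 41-44)
area_size  = (m + x, n + y)              # 2008 x 1508  (each of areas a1-a4)
raw_size   = (2 * m, 2 * n)              # 4000 x 3000  (original Bayer RAW data 2)
recon_size = (2 * (m + x), 2 * (n + y))  # 4016 x 3016  (reconstructed RAW data 4)
```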
  • FIG. 3 shows the reconstructed RAW data 4 obtained by the RAW data reconstructor 3 of FIG. 1 by two-dimensionally arranging the four color-specific plane data 41, 42, 43, and 44 of the Bayer array and reconstructing them as image data for one frame.
  • Reference numeral 41 denotes the first color-specific plane data, which collects only the first color component (B) and is arranged at the upper left part of the image; 42 denotes the second color-specific plane data, which collects only the second color component (G) and is arranged at the upper right part; 43 denotes the third color-specific plane data, which collects only the third color component (g) and is arranged at the lower left part; and 44 denotes the fourth color-specific plane data, which collects only the fourth color component (R) and is arranged at the lower right part of the image.
  • reference numeral 47 shown in gray is fixed data for filling between adjacent boundary portions of the four color-specific plane data 41, 42, 43, 44.
  • The fixed data 47 is pixel data consisting only of a luminance signal component, and that luminance signal component is constant (it has no change).
  • The size of the compression processing unit block is taken as 4 × 4 pixels (equivalent to H.264) here for simplification, but in the case of JPEG it is 8 × 8 pixels.
  • the four color-specific plane data 41, 42, 43, and 44 are two-dimensionally arranged at a predetermined interval on the reconstructed RAW data 4 that is one compressed file.
  • the reason why the plane data for each color is arranged with a space therebetween is as follows.
  • The compression process is performed repeatedly in block units of 4 × 4 pixels or 8 × 8 pixels. If the four color-specific plane data were arranged directly adjacent to each other, then depending on the data size of the color-specific plane data, some pixel data of the last block of the first color-specific plane data 41 in a certain block row and some pixel data of the first block of the second color-specific plane data 42 could fall within the same compression processing unit block; the same applies to the third color-specific plane data 43 and the fourth color-specific plane data 44.
  • Therefore, the color-specific plane data are arranged with an interval between them, and an appropriate amount of fixed data 47, whose luminance signal component does not change, is placed between adjacent color-specific plane data. This prevents deterioration of the data after compression.
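One simple way to choose that interval, sketched under the assumption that the gap just rounds each plane dimension up to the next block boundary (the text does not prescribe a specific rule):

```python
def gap_to_block_multiple(plane_dim: int, block: int = 8) -> int:
    """Number of fixed-data pixels to append after a plane so that it ends
    exactly on a compression-block boundary (block=8 for JPEG, 4 for an
    H.264-style transform block)."""
    return (-plane_dim) % block

# e.g. a 2001-pixel-wide plane with 8x8 JPEG blocks needs 7 columns of fixed data
print(gap_to_block_multiple(2001, 8))  # -> 7
```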
  • Block noise is likely to occur at the boundary positions 45 and 46 when the compressed RAW data is decompressed for photo retouch processing or the like.
  • Block noise is likely to occur with DCT-based compression such as JPEG; it tends to occur more as the compression rate increases, and it stands out in areas where the density value changes relatively little. This is shown in FIG. 4, which is an explanatory diagram of the block noise occurrence positions in the image data when the compressed RAW data of FIG. 3 is decompressed.
  • In FIG. 3, the relative positional relations at which the four color-specific plane data are placed within their arrangement areas are equivalent to each other.
  • The reference position of each arrangement area is its upper left corner, and the reference position of each color-specific plane data is likewise its upper left corner.
  • The displacement vectors V1 to V4 of the color-specific plane data with respect to these reference positions are equivalent to each other, so the plane data can be exactly overlapped by parallel movement along the horizontal or vertical direction.
  • Here, the size of the basic processing block is 4 × 4 pixels.
  • Within each of the four color-specific plane data 41, 42, 43, 44, the horizontal-period boundary positions 45 extending in the vertical direction and the vertical-period boundary positions 46 extending in the horizontal direction of the compression processing unit blocks appear repeatedly.
  • In the first color-specific plane data 41, the B (blue) pixel data facing the boundary position 45 from the left side are in the 2nd, 6th, 10th columns and so on, and those facing it from the right side are in the 3rd, 7th, 11th columns and so on; when these pixel data are expanded back into the original large 2m × 2n image data as shown in FIG. 4, they fall on the 3rd, 11th, 19th columns and the 5th, 13th, 21st columns, respectively.
  • The B (blue) pixel data facing the boundary position 46 from the upper side in the first color-specific plane data 41 are in the 2nd, 6th, 10th rows and so on, and those facing it from the lower side are in the 3rd, 7th, 11th rows and so on; expanded in FIG. 4, they fall on the 3rd, 11th, 19th rows and the 5th, 13th, 21st rows, respectively.
  • Similarly, in the second color-specific plane data 42, the G (first green) pixel data facing the boundary position 45 from the left side are in the 2nd, 6th, 10th columns and so on, and those facing it from the right side are in the 3rd, 7th, 11th columns and so on.
  • The G (first green) pixel data facing the boundary position 46 from the upper side in the second color-specific plane data 42 are in the 2nd, 6th, 10th rows and so on, and those facing it from the lower side are in the 3rd, 7th, 11th rows and so on; expanded in FIG. 4, they fall on the 3rd, 11th, 19th rows and the 5th, 13th, 21st rows, respectively.
  • The relationship between the g (second green) pixel data and the boundary positions 45 and 46 in the third color-specific plane data 43 is the same; expanded in FIG. 4, they fall on the 3rd, 11th, 19th columns, the 5th, 13th, 21st columns, the 4th, 12th, 20th rows, and the 6th, 14th, 22nd rows.
  • the relationship of the R (red) pixel data with respect to the boundary positions 45 and 46 in the fourth color-specific plane data 44 is the same.
  • Consequently, the boundary positions at which block noise is likely to occur are equivalent across the four color-specific plane data 41, 42, 43, and 44 and become continuous in both the vertical and horizontal directions in the decompressed image data. That is, in the original large-sized image data after the decompression process, pixels corresponding to the boundary positions appear regularly every fourth column and every fourth row. As a result, block noise tends to appear prominently, and the image quality may deteriorate (see FIG. 7B).
  • The block noise problem described above arises because, in order to increase the compression efficiency, the target of compression processing is reconstructed RAW data in which a plurality of types of color-specific plane data with different color components are arranged in order on one file as the compression processing unit, with displacement vectors V1 to V4 that are equivalent to each other with respect to their reference positions, and because the compression processing for one frame of image data is completed in a single process.
  • the data compression here is irreversible compression with a high compression rate.
  • In FIG. 5, by contrast, the relative positional relationships at which the four color-specific plane data are arranged within their arrangement areas are not equivalent to each other; the four relative positions are shifted from one another, each shift being one pixel to the right and one pixel downward (when the compression processing unit block is 4 × 4 pixels).
  • That is, the displacement vectors V1 to V4 with respect to the reference positions are not equivalent to each other; rather, V2 = V1 × 2, V3 = V1 × 3, and V4 = V1 × 4.
  • In other words, the second color-specific plane data 42 is shifted by one pixel in the lower-right direction with respect to the first color-specific plane data 41, and the third color-specific plane data 43 is shifted by one pixel in the lower-right direction with respect to the second color-specific plane data 42.
  • the fourth color-specific plane data 44 is shifted by one pixel in the lower right direction with respect to the third color-specific plane data 43.
  • FIG. 6 shows the original large-size image data after the decompression process, corresponding to FIG. 5.
  • In the first color-specific plane data 41, the B (blue) pixel data facing the boundary position 45 from the left side are in the 3rd, 7th, 11th columns and so on, and those facing it from the right side are in the 4th, 8th, 12th columns and so on; expanded in FIG. 6, they fall on the 5th, 13th, 21st columns and the 7th, 15th, 23rd columns, respectively.
  • The B (blue) pixel data facing the boundary position 46 from the upper side in the first color-specific plane data 41 are in the 3rd, 7th, 11th rows and so on, and those facing it from the lower side are in the 4th, 8th, 12th rows and so on; expanded in FIG. 6, they fall on the 5th, 13th, 21st rows and the 7th, 15th, 23rd rows, respectively.
  • In the second color-specific plane data 42, the G (first green) pixel data facing the boundary position 45 from the left side are in the 2nd, 6th, 10th columns and so on, and those facing it from the right side are in the 3rd, 7th, 11th columns and so on; expanded in FIG. 6, they fall on the 4th, 12th, 20th columns and the 6th, 14th, 22nd columns, respectively.
  • The G (first green) pixel data facing the boundary position 46 from the upper side in the second color-specific plane data 42 are in the 2nd, 6th, 10th rows and so on, and those facing it from the lower side are in the 3rd, 7th, 11th rows and so on; expanded in FIG. 6, they fall on the 3rd, 11th, 19th rows and the 5th, 13th, 21st rows, respectively.
  • In the third color-specific plane data 43, the g (second green) pixel data facing the boundary position 45 from the left side are in the 1st, 5th, 9th columns and so on, and those facing it from the right side are in the 2nd, 6th, 10th columns and so on; expanded in FIG. 6, they fall on the 1st, 9th, 17th columns and the 3rd, 11th, 19th columns, respectively.
  • The g (second green) pixel data facing the boundary position 46 from the upper side in the third color-specific plane data 43 are in the 1st, 5th, 9th rows and so on, and those facing it from the lower side are in the 2nd, 6th, 10th rows and so on; expanded in FIG. 6, they fall on the 2nd, 10th, 18th rows and the 4th, 12th, 20th rows, respectively.
  • In the fourth color-specific plane data 44, the R (red) pixel data facing the boundary position 45 from the left side are in the 4th, 8th, 12th columns and so on, and those facing it from the right side are in the 1st, 5th, 9th columns and so on; expanded in FIG. 6, they fall on the 8th, 16th, 24th columns and the 2nd, 10th, 18th columns, respectively.
  • The R (red) pixel data facing the boundary position 46 from the upper side in the fourth color-specific plane data 44 are in the 4th, 8th, 12th rows and so on, and those facing it from the lower side are in the 1st, 5th, 9th rows and so on; expanded in FIG. 6, they fall on the 8th, 16th, 24th rows and the 2nd, 10th, 18th rows, respectively.
  • In the reconstructed RAW data for one frame after decoding (decompression), block noise may occur as image quality degradation in which the image becomes discontinuous at the periodic boundary positions of the compression processing unit blocks.
  • The 8 pixel × 8 pixel pattern described above appears repeatedly along the two-dimensional direction.
  • In FIG. 4, the pixel groups facing the boundary positions are aligned in straight lines in both the horizontal and vertical directions, whereas in FIG. 6 the pixel groups facing the boundary positions are dispersed in both the horizontal and vertical directions. That is, in the development of FIG. 4 the positions where block noise is likely to occur are in a concentrated arrangement state, whereas in the development of FIG. 6 they are in a distributed arrangement state.
  • FIG. 8B applies the block noise occurrence positions of FIG. 4, which correspond to FIG. 7B, to FIG. 7B, and FIG. 8A applies the block noise occurrence positions of FIG. 6, which correspond to FIG. 7A, to FIG. 7A; FIG. 8A and FIG. 8B are shown for reference.
  • the background is black in order to make the block noise occurrence position (white portion) clear.
  • When the block noise occurrence positions are concentrated along straight lines and the degree of concentration is extremely high, the image quality degradation in which the image becomes discontinuous at the periodic boundary positions of the compression processing unit blocks becomes noticeable.
  • the four color-specific plane data 41, 42, 43, 44 are sequentially shifted by one pixel in the horizontal direction and one pixel in the vertical direction.
  • When the compression processing unit block size is 8 × 8 pixels as in JPEG, the four color-specific plane data 41, 42, 43, and 44 may be sequentially shifted by 2 pixels in the horizontal direction and 2 pixels in the vertical direction.
  • In the example above, the size of the compression processing unit block is 4 × 4 pixels, and the four color-specific plane data 41, 42, 43, 44 are sequentially shifted by 1 pixel in the horizontal direction and 1 pixel in the vertical direction.
  • When the size of the compression processing unit block is 8 × 8 pixels as in JPEG, the boundary positions 45 of the horizontal period extending in the vertical direction appear every 8 columns, and the boundary positions 46 of the vertical period extending in the horizontal direction appear every 8 rows; in that case, the four color-specific plane data 41, 42, 43, and 44 may be sequentially shifted by two pixels in the horizontal direction and two pixels in the vertical direction.
  • More generally, when the compression processing unit block is a × a pixels, the four color-specific plane data may be sequentially shifted by a/4 pixels in the horizontal direction and a/4 pixels in the vertical direction.
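A minimal expression of that rule, assuming the compression processing unit block is a x a pixels and the planes are numbered 0 to 3 (the function name is illustrative):

```python
def plane_shift(a: int, plane_index: int) -> tuple[int, int]:
    """Per-plane displacement (down, right) in pixels so that the four color
    planes meet the a x a block grid at evenly spaced phases."""
    step = a // 4
    return plane_index * step, plane_index * step

# a = 4 -> shifts (0,0), (1,1), (2,2), (3,3)   (1-pixel steps)
# a = 8 -> shifts (0,0), (2,2), (4,4), (6,6)   (2-pixel steps, the JPEG case)
```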
  • the arrangement relationship of the four color-specific plane data 41, 42, 43, 44 in the image data for one frame is a two-dimensional array arranged in both the horizontal direction and the vertical direction as described above.
  • a vertical one-dimensional array arranged in one column in the vertical direction or a horizontal one-dimensional array arranged in one column in the horizontal direction may be used.
  • The order in which the four color-specific plane data 41, 42, 43, and 44 are arranged is not particularly limited; by recording the arrangement information as additional data attached after data compression, the planes can be placed in an arbitrary order.
  • For example, instead of the order [B, G, g, R] shown in FIG. 1, the order [G, B, R, g] may be used, or [R, G, g, B].
  • FIG. 10 is a block diagram showing a schematic configuration of an imaging apparatus equipped with an image processing apparatus that compresses reconstructed RAW data generated from RAW data according to an embodiment of the present invention.
  • The image pickup apparatus 50 uses an image sensor of a type that performs color separation when capturing images, and is configured so that it can record images in the JPEG format and can also record RAW data immediately after A/D conversion.
  • 50 is an imaging device
  • 60 is a single-plate imaging unit
  • 70 is an image processing device.
  • the imaging device 50 includes an imaging unit 60 and an image processing device 70.
  • the imaging unit 60 includes an optical lens 61, an optical low-pass filter 62, a color filter 63, an imaging element 64, and an analog front end unit 65.
  • the image processing apparatus 70 includes a CPU (Central Processing Unit) 71, a ROM (Read Only Memory) 72, a RAM (Random Access Memory) 73, a preprocessing unit 74, a memory control unit 75, an image memory 76, an image signal processing unit 77, A compression / decompression processing unit (encoder / decoder) 78, a recording media interface unit 79, a display processing unit 80, and a monitor interface unit 81 are included.
  • 91 is an operation panel
  • 92 is a recording medium.
  • the image sensor 64 is an image sensor such as a CCD type or a CMOS type, and a large number of photodiodes (photosensitive pixels) are two-dimensionally arranged on the light receiving surface thereof.
  • the photodiode photoelectrically converts subject information that has passed through the optical lens 61 and the optical low-pass filter 62.
  • The optical low-pass filter 62 removes frequency components higher than the sampling frequency determined by the pixel pitch of the image sensor 64, and thereby prevents aliasing in the final image after image reproduction (signal processing).
  • Aliasing means that when a waveform containing frequency components exceeding 1/2 of the sampling frequency is forcibly sampled, frequency components that should not originally exist appear; in an image with clear contrast, such as black characters or lines drawn on a white background, the characters and lines are crushed and a pattern different from the original image appears.
  • The color filter 63 has a predetermined color arrangement in which one of the colors R, G, and B is placed at the position corresponding to each pixel of the image sensor 64, and performs color selection of the light entering the photodiodes that are the light receiving elements.
  • FIGS. 11A to 11C show examples of primary color filter arrangements.
  • In the Bayer arrangement of FIG. 11A, the light receiving elements are arranged in a square matrix at a constant pitch in both the row direction and the column direction.
  • In the honeycomb arrangement of FIG. 11B, the centers of the geometric shapes of the light receiving elements are shifted by 1/2 pitch in the row direction and the column direction.
  • the structure of the pixel array is periodically repeated in the horizontal direction and the vertical direction.
  • FIG. 11C shows frequency characteristics in the case of the Bayer array.
  • A subject image that passes through the optical lens 61, the optical low-pass filter 62, and the color filter 63 and is formed on the light receiving surface of the image sensor 64 is converted by each photodiode into an amount of signal charge corresponding to the amount of incident light, and is sequentially read out as a voltage signal (image signal) corresponding to the signal charge, based on pulses supplied from a driver circuit (not shown).
  • the image sensor 64 has an electronic shutter function that controls the charge accumulation time (shutter speed) of each photodiode according to the timing of a shutter gate pulse (not shown). The operation (exposure, reading, etc.) of the image sensor 64 is controlled by the CPU 71.
  • the analog front end unit 65 performs processing such as analog gain adjustment and CDS (correlated double sampling) on the image signal output from the image sensor 64, and then converts the image signal into a digital signal by the built-in A / D conversion unit. It has a function to convert.
  • the analog front end unit 65 outputs the RAW data immediately after A / D conversion to the preprocessing unit 74 of the image processing apparatus 70.
  • the pre-processing unit 74 in the image processing apparatus 70 includes an auto calculation unit that performs calculations necessary for AF and AE control.
  • the preprocessing unit 74 receives the RAW data from the analog front end unit 65, performs the focus evaluation value calculation, the AE calculation, and the like in the auto calculation unit, and transmits the calculation result to the CPU 71.
  • The preprocessing unit 74 also receives the Bayer-array RAW data having the four color components B, G, g, and R digitized by the A/D conversion of the analog front end unit 65, and adjusts the black DC level that serves as the reference of the data.
  • a memory control unit 75 in the image processing apparatus 70 includes a preprocessing unit 74, an image signal processing unit 77, a compression / decompression processing unit 78, a recording media interface unit 79, a display processing unit 80, and an image memory 76. It is configured to relay the exchange of signals.
  • the memory control unit 75 incorporates a RAW data reconstructor.
  • the RAW data reconstructor generates four pieces of color-specific plane data 41, 42, 43, and 44, rearranges them into image data for one frame, and writes them into the memory space of the image memory 76. It is configured.
  • When generating the reconstructed RAW data, the RAW data reconstructor in the memory control unit 75 is configured to shift the reference positions of the four color-specific plane data 41, 42, 43, and 44 relative to the periodic boundary positions of the compression processing unit blocks in the compression / decompression processing unit 78.
  • The CPU 71 in the image processing device 70 is a control unit that performs overall control of the imaging device 50 in accordance with a predetermined program; in cooperation with the ROM 72 and the RAM 73, it controls the preprocessing unit 74, the image signal processing unit 77, the recording media interface unit 79, the operation panel 91, and so on.
  • the ROM 72 stores programs executed by the CPU 71 and various data necessary for control.
  • the RAM 73 is used as a work area for the CPU 71.
  • the CPU 71 controls the operation of each circuit in the imaging device 50 based on an instruction signal from the operation panel 91.
  • the CPU 71 controls the image pickup unit 60 such as the image pickup device 64 according to various shooting conditions (exposure conditions, presence / absence of strobe light emission, shooting mode, etc.) in accordance with an instruction signal input from the operation panel 91, and automatic exposure (AE). ) Control, automatic focus adjustment (AF) control, auto white balance (AWB) control, lens drive control, image processing control, read / write control of the recording medium 92, and the like.
  • the CPU 71 performs automatic focus adjustment (AF) control when it detects half-pressing of the release switch on the operation panel 91, and starts exposure and reading control for capturing an image for recording when full-pressing of the release switch is detected. Further, the CPU 71 controls the recording medium interface unit 79 so as to record the captured image data on the recording medium 92 according to the recording mode. Further, the CPU 71 sends a command to a strobe control circuit (not shown) as necessary to control light emission of a flash light emitting tube (light emitting unit) such as a xenon tube.
  • The image signal processing unit 77 uses the image memory 76 as a work memory via the memory control unit 75, and performs image processing such as synchronized color interpolation, white balance adjustment, gamma correction, luminance / color difference signal generation, contour enhancement, and processing for the electronic zoom function.
  • The compression / decompression processing unit 78 reads image data from the image memory 76 via the memory control unit 75, performs compression processing according to the compression encoding algorithm of the designated compression format, and stores the resulting compressed RAW data in the image memory 76 via the memory control unit 75.
  • The compression / decompression processing unit 78 also performs decompression processing on compressed RAW data read from the recording medium 92, and stores the RAW data restored by the decompression in the image memory 76 via the memory control unit 75.
  • As algorithms used for the compression / decompression processing, there are MPEG and the like in addition to JPEG.
  • the recording media interface unit 79 is configured to read image data from the image memory 76 via the memory control unit 75 and transfer it to the recording medium 92 for recording.
  • the image data is read from the recording medium 92 and transferred to the memory control unit 75 for decoding.
  • the operation panel 91 is configured to allow a user to input various instructions to the imaging apparatus 50.
  • The operation panel 91 includes various operation devices such as a mode selection switch for selecting the operation mode of the imaging apparatus 50, a cross key for menu item selection (cursor movement) and for instructions such as frame advance / rewind of playback images, an execution key for confirming (registering) selected items and executing operations, a cancel key for deleting selected items and canceling instructions, a power switch, a zoom switch, and a release switch.
  • as the recording medium 92 for storing image data, various recording media such as a magnetic disk, an optical disk, and a magneto-optical disk can be used in addition to a semiconductor memory represented by a memory card. The recording medium is not limited to a removable medium; a recording medium (internal memory) built into the imaging device 50 may also be used.
  • the subject image that has passed through the optical lens 61, the optical low-pass filter 62, and the color filter 63 in the imaging unit 60 and is formed on the light receiving surface of the imaging device 64 is converted into signal charges corresponding to the amount of incident light by each photodiode. It is then sequentially read out as a voltage signal (image signal) corresponding to the signal charge, based on pulses supplied from a driver circuit (not shown), and the resulting image analog signal having a Bayer arrangement of the four color components B, G, g, and R is sent to the analog front end unit 65.
  • the image analog signal from the image sensor 64 is digitized by the A / D conversion unit in the analog front end unit 65, and the Bayer array image data is sent to the preprocessing unit 74.
  • the Bayer array image data input to the preprocessing unit 74 is sent to the memory control unit 75 as RAW data after the black DC level, which serves as the data reference, has been adjusted.
  • the memory control unit 75 generates the four color-specific plane data 41, 42, 43, and 44 through the data-rearrangement write control of its built-in RAW data reconstructor, and arranges and writes them in the memory space of the image memory 76 as image data for one frame. This is the reconstructed RAW data that serves as the unit of compression processing.
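To make the rearrangement concrete, the following sketch splits a Bayer-pattern RAW frame into the four color-specific planes. It is a minimal illustration only: the NumPy representation, the function name, and the assumed B/G/g/R 2x2 phase order are not taken from the patent.

```python
import numpy as np

def split_bayer_planes(raw):
    """Split a Bayer RAW frame (H x W, both even) into four color-specific
    planes of size H/2 x W/2, assuming an illustrative 2x2 phase order:
        B G
        g R
    """
    b = raw[0::2, 0::2]   # first color component (B)
    G = raw[0::2, 1::2]   # second color component (G, first green)
    g = raw[1::2, 0::2]   # third color component (g, second green)
    r = raw[1::2, 1::2]   # fourth color component (R)
    return b, G, g, r

# Example: a 12-bit RAW frame (values 0..4095) of 120 x 160 pixels.
raw = np.random.randint(0, 4096, size=(120, 160), dtype=np.uint16)
planes = split_bayer_planes(raw)
print([p.shape for p in planes])   # four 60 x 80 planes
```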
  • the reference positions of the four color-specific plane data 41, 42, 43, and 44 are shifted relative to the periodic boundary positions of the compression processing unit blocks in the compression / decompression processing unit 78. This prevents the block noise occurrence positions of the respective colors from overlapping when the compressed RAW data is read from the recording medium 92, decompressed, and displayed on the monitor.
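A sketch of this arrangement step is shown below: the four planes are tiled into one frame that can be handed to the compressor as a single file, with each plane's top-left corner offset from the periodic block grid and the unused gaps filled with a constant value. The block size of 16, the 4-pixel shift, and the mid-gray fill level are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def build_reconstructed_raw(planes, block=16, shift=4, fill=2048):
    """Tile four equally sized planes in a 2x2 layout on one canvas, with each
    plane's reference position shifted by `shift` pixels in both directions
    relative to the `block`-sized compression grid; the gaps hold a constant
    value so they carry no luminance variation."""
    h, w = planes[0].shape
    qh = ((h + shift + block - 1) // block) * block   # quadrant height, block-aligned
    qw = ((w + shift + block - 1) // block) * block   # quadrant width, block-aligned
    canvas = np.full((2 * qh, 2 * qw), fill, dtype=planes[0].dtype)
    origins = [(0, 0), (0, qw), (qh, 0), (qh, qw)]
    for plane, (oy, ox) in zip(planes, origins):
        canvas[oy + shift:oy + shift + h, ox + shift:ox + shift + w] = plane
    return canvas

# Four illustrative 60 x 80 planes (e.g. from a 120 x 160 Bayer frame).
planes = [np.random.randint(0, 4096, size=(60, 80), dtype=np.uint16) for _ in range(4)]
frame = build_reconstructed_raw(planes)
print(frame.shape)   # (128, 192)
```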
  • the reconstructed RAW data written in the image memory 76 is input as one component data to the compression / decompression processing unit 78 via the memory control unit 75.
  • fixed data with no luminance variation is input to the other component data input terminals of the compression / decompression processing unit 78, and the compression processing is performed.
  • the processed compressed RAW data is written into the image memory 76 via the memory control unit 75 again.
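One way to picture this compression step is the sketch below, which feeds the reconstructed RAW frame into the luminance (Y) input of an ordinary JPEG encoder while holding the chroma inputs at a constant value. Pillow is used purely as a stand-in encoder, and the data is scaled to 8 bits because baseline JPEG takes 8-bit samples; an implementation closer to the text could use the higher-bit-depth codecs mentioned later (JPEG 2000, JPEG XR) or a dedicated compression circuit.

```python
import numpy as np
from PIL import Image

def compress_reconstructed_raw(frame_u16, path, quality=95):
    """Compress a reconstructed RAW frame as the Y component of a JPEG file.
    The 12-bit samples are scaled to 8 bits for this illustration, and the
    Cb/Cr components are filled with fixed data (no luminance variation)."""
    y = (frame_u16 >> 4).astype(np.uint8)   # 12-bit -> 8-bit, illustration only
    cb = np.full_like(y, 128)               # fixed data
    cr = np.full_like(y, 128)               # fixed data
    ycbcr = np.dstack([y, cb, cr])
    Image.fromarray(ycbcr, mode="YCbCr").save(path, quality=quality, subsampling=0)

frame = np.random.randint(0, 4096, size=(128, 192), dtype=np.uint16)
compress_reconstructed_raw(frame, "compressed_raw.jpg")
```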
  • the CPU 71 adds a header to the data written in the image memory 76 as one piece of compressed RAW data so that it forms an image file in the JPEG file format, acquires information useful for searching and identifying the image, such as the format information of the compressed RAW data and the shooting time and conditions, adds it to the image file header, and records the file on the recording medium 92 via the memory control unit 75 and the recording media interface unit 79.
  • for decompression, the compressed file is read from the recording medium 92. That is, when a reproduction instruction is given on the operation panel 91, the CPU 71 controls the compression / decompression processing unit 78 and the recording media interface unit 79.
  • the compressed RAW data read from the recording medium 92 is written into the image memory 76 via the recording media interface unit 79 and the memory control unit 75.
  • the compressed RAW data written in the image memory 76 is then transferred to the compression / decompression processing unit 78 via the memory control unit 75.
  • the decompression processing unit in the compression / decompression processing unit 78 decompresses the compressed RAW data.
  • the target of the compression processing is the reconstructed RAW data configured by arranging the four color-specific plane data 41, 42, 43, and 44 in one file that is the unit of compression processing. The four color-specific plane data 41, 42, 43, and 44 can therefore be compressed all at once, and the compression efficiency is greatly improved compared with the prior-art case in which the compression processing is repeated while the four color-specific plane data are switched sequentially.
  • this significant improvement in compression efficiency is advantageous when the imaging device 50 has a continuous shooting function: it enables high-speed continuous recording in the compressed RAW data recording mode even with the recent increase in image sensor pixel counts, and clears the prior-art problem of continuous shooting being hindered (pauses during the continuous shooting operation).
  • only a single compression processor is required, so the circuit scale does not increase, and there is no need to specially increase the processing capacity of the CPU.
  • as shown in FIG. 12A, a general compression processor having input terminals for a luminance signal (Y) and color difference signals (Cr/Cb) as the component data input terminals can be used. The reconstructed RAW data generated by the RAW data reconstructor (memory control unit 75) is input to the luminance signal (Y) input terminal, and fixed data with no luminance variation is input to the color difference signal (Cr, Cb) input terminals.
  • because the compressed RAW data recorded on the recording medium 92 is not affected by the signal processing of the preprocessing unit 74 or the image signal processing unit 77, the image quality can be kept high.
  • the generated compressed RAW data has a small file size, so the utilization efficiency of the recording medium 92 is improved. Combined with the greatly improved compression efficiency obtained by compressing the reconstructed RAW data in which the color-specific plane data are arranged in one file as the compression processing unit, this makes high-speed continuous shooting even more advantageous.
  • when the RAW data reconstructor in the memory control unit 75 arranges the four color-specific plane data 41, 42, 43, and 44 to generate the reconstructed RAW data in compression processing units, the reference position of each color-specific plane data is shifted relative to the periodic boundary positions of the compression processing unit blocks used by the compression / decompression processing unit 78. Therefore, when the compressed RAW data is read from the recording medium 92, decompressed by the compression / decompression processing unit 78, and displayed on the monitor via the display processing unit 80 and the monitor interface unit 81, the overlap of the block noise occurrence positions of the respective colors can be greatly reduced. This suppresses the occurrence of block noise and reduces the peculiar image quality degradation in which the block noise of each color occurs at the same compression-unit-block boundary positions.
  • in the above, the RAW data rearranged and written as one frame of image data in the image memory 76 is input as one component data to the compression / decompression processing unit 78 via the memory control unit 75, and fixed data with no luminance variation is input as the other component data and compressed together. However, it is also possible to input other image data to the other component inputs and execute the processing.
  • examples of such other image data include small YCrCb data and small RGB data that have been reduced and resized for display.
  • in that case, the reconstructed RAW data 4 for one frame is input to the luminance signal (Y) input terminal of the compression processor 5, and small YCrCb data 6 (an image composed of Y, Cr, and Cb, reduced and resized for display) or small RGB data 7 (an image composed of R, G, and B) is input to the color difference signal (Cr/Cb) input terminals.
  • the compression processor 5 simultaneously compresses the reconstructed RAW data 4 together with the small YCrCb data 6, the small RGB data 7, or the like.
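A rough sketch of this variant follows: a reduced preview is written into one of the spare chroma planes instead of fixed data, so the RAW frame and the preview are encoded in a single pass. The packing shown here (grayscale preview in Cb, fixed Cr, Pillow as the encoder, 8-bit scaling) is a deliberate simplification and an assumption; the text only says that reduced display data may be fed to the other component inputs.

```python
import numpy as np
from PIL import Image

def compress_raw_with_preview(frame_u16, preview, path, quality=95):
    """Encode a reconstructed RAW frame as the Y channel of a JPEG while
    carrying a small grayscale preview in the Cb channel (Cr stays fixed)."""
    y = (frame_u16 >> 4).astype(np.uint8)
    h, w = y.shape
    cb = np.full((h, w), 128, dtype=np.uint8)
    cr = np.full((h, w), 128, dtype=np.uint8)
    small = np.array(preview.convert("L"))   # reduced/resized preview, grayscale
    ph, pw = small.shape
    cb[:ph, :pw] = small                     # tuck the preview into a corner of Cb
    Image.fromarray(np.dstack([y, cb, cr]), mode="YCbCr").save(
        path, quality=quality, subsampling=0)

frame = np.random.randint(0, 4096, size=(256, 384), dtype=np.uint16)
preview = Image.new("RGB", (96, 64), (80, 120, 200))   # stand-in display preview
compress_raw_with_preview(frame, preview, "raw_with_preview.jpg")
```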
  • at playback, the compressed file is read from the recording medium 92, and the small YCrCb data 6 or small RGB data 7 obtained by the compression / decompression processing unit 78 can be transferred directly to the display processing unit 80 to perform a playback preview.
  • JPEG has been described as an example of a still image compression encoding algorithm.
  • alternatively, JPEG 2000 or JPEG XR, which can handle input data with a bit depth of more than 8 bits (up to 12 bits), may be used.
  • with these codecs, image quality deterioration due to compression is small compared with JPEG and the like, and it is particularly small at high compression rates. Lossless compression and lossy compression are possible with the same algorithm, and the code-stream truncation process (post-quantization) applied to the encoded data has the advantage that the compression rate can be adjusted without recompression.
  • in the above, the reconstructed RAW data is read from the image memory 76 and compressed; if the reconstructed RAW data has not been created in advance, the reconstructed RAW data for one frame may instead be created during the memory-control read process. Either aspect is encompassed by the present invention.
  • the process of creating the reconstructed RAW data for one frame has been described as a rearrangement performed when the RAW data is written to the image memory 76, but the RAW data may instead be written to the image memory in the original Bayer arrangement and rearranged at the time it is read out via the memory control unit 75 during the compression process. Any of these modes is included in the present invention.
  • because the RAW data is divided into the four color-specific plane data and rearranged into one frame of reconstructed RAW data for compression, either lossless compression or lossy compression can be used. With lossless compression, the RAW data, which is unaffected by the signal processing inside the imaging apparatus, can be efficiently compressed and recorded in a single process, and the original RAW data can be completely reproduced by decoding and decompressing the recorded encoded data with the compression / decompression processing unit 78 or an external decoder.
  • an image sensor having a color separation filter has been described as an example, but the present invention can naturally also be applied to an image sensor of a type that performs similar color separation by means other than a color filter.
  • the compressed RAW data recorded by the imaging device 50 can be reproduced (developed) not only by the imaging device itself but also by a dedicated image processing device or a personal computer. Specifically, using the arrangement information of each color region added to the compressed RAW data, the RAW data that has been rearranged into the plural color regions may, after decompression, be rearranged back into the original RAW data array, or it may be read out in the original arrangement via the memory control unit 75 when the reproduction (development) process is performed.
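The inverse of the rearrangement sketched earlier might look like the following: after decompression, the four color regions are cut back out of the reconstructed frame and re-interleaved into the original Bayer mosaic. The 2x2 layout, the 4-pixel shift, and the B/G/g/R phase order match the assumptions of the earlier sketches and are not values prescribed by the patent.

```python
import numpy as np

def restore_bayer(frame, plane_shape, shift=4):
    """Extract the four color planes from a decompressed reconstructed frame
    (2x2 quadrant layout, each plane offset by `shift`) and re-interleave them
    into a Bayer mosaic with the B G / g R phase order used earlier."""
    h, w = plane_shape
    qh, qw = frame.shape[0] // 2, frame.shape[1] // 2
    origins = [(0, 0), (0, qw), (qh, 0), (qh, qw)]
    planes = [frame[oy + shift:oy + shift + h, ox + shift:ox + shift + w]
              for oy, ox in origins]
    raw = np.empty((2 * h, 2 * w), dtype=frame.dtype)
    raw[0::2, 0::2] = planes[0]   # B
    raw[0::2, 1::2] = planes[1]   # G
    raw[1::2, 0::2] = planes[2]   # g
    raw[1::2, 1::2] = planes[3]   # R
    return raw

# Shape-only example: a 128 x 192 canvas built from 60 x 80 planes.
frame = np.zeros((128, 192), dtype=np.uint16)
print(restore_bayer(frame, plane_shape=(60, 80)).shape)   # (120, 160)
```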
  • the arrangement structure of the color filter 63 is not limited to the example shown in FIG. 11, and various arrangement structures such as RGB stripes are possible.
  • although a primary color filter is used in the example, the present invention is not limited to primary color filters; a complementary color filter composed of yellow (Y), magenta (M), cyan (C), and green (G), or any combination of primary colors, complementary colors, and white (W), may be used.
  • as a means of realizing high-speed readout, there is also a configuration in which a noise processing unit and an A/D conversion unit are mounted in the image pickup device 64 itself so that the signal is output directly from the image pickup device as a digital signal.
  • the present invention relates to image pickup apparatuses equipped with an image sensor of a type that performs color separation, such as digital still cameras, digital video cameras, stand-alone image scanners, and image scanners incorporated in copying machines. Because reconstructed RAW data configured by arranging a plurality of types of color-specific plane data in a single file as the compression processing unit is targeted for compression, and the compressed reconstructed RAW data is obtained by a single compression process, the invention is useful as a technique for greatly improving compression efficiency without causing an increase in circuit scale or CPU processing capacity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Television Signal Processing For Recording (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image processing device includes a raw data reconstructor and a compression processor. In the raw data reconstructor, raw data on which signal processing is not performed is received after an A/D conversion and is decomposed on a per-color-component basis and reassembled to generate a plurality of plane data sets arranged color by color, which are further arranged and collected into one file which is a unit of compression processing. The one file, which is a unit of compression processing, has a plurality of arrangement regions partitioned on a per-color-component basis. The raw data reconstructor sorts the plane data sets arranged color by color to the arrangement regions and rearranges the result to generate reconstructed raw data, which is transferred to the compression processor. The compression processor performs compression processing on the reconstructed raw data.

Description

Image processing apparatus, image processing method, and imaging apparatus
The present invention relates to an image processing apparatus and an image processing method for compressing RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array in accordance with a certain rule, and in particular to a technique for improving the compression efficiency of a plurality of color-specific plane data. The present invention also relates to an imaging apparatus in which such an image processing apparatus is mounted together with an image sensor of a type that captures images with color separation. Examples of the target image processing apparatus and imaging apparatus include digital still cameras, digital video cameras, stand-alone image scanners, and image scanners incorporated in copying machines. "Plane data" refers to data having an array form that is developed two-dimensionally when the data is expanded on an image memory.
A CCD-type or MOS-type image sensor for obtaining a color image has a color filter array (color separation filter) in which color filters of a plurality of types of color components are repeatedly arranged according to a certain rule, corresponding to the two-dimensional pixel array of the imaging element. For example, in a Bayer-array color separation filter in which RGB primary color filters are arranged in a checkered pattern corresponding to the pixels of the imaging element, the filters of each of the RGB color components are arranged every other pixel along both the horizontal and vertical directions (BGgR). The information obtained from each pixel is information of only one color component. Therefore, in order to enhance the expressive power, each pixel is interpolated using the color information of the surrounding pixels so that information on a plurality of color components is obtained for every pixel; that is, information on all color components is obtained for all pixels. This is called synchronized color interpolation processing. In this way, color image data of each color component having the same number of pixels as the image sensor is obtained.
Furthermore, signal processing such as white balance (WB) adjustment, gamma correction, and enhancement processing for edge emphasis is performed, the data is converted into component signals consisting of a luminance signal (Y) and two types of color difference signals (Cr, Cb), compressed into a small file size with a compression encoding algorithm such as JPEG (Joint Photographic Experts Group), and recorded on a recording medium.
In recent years, as a function of single-lens digital cameras, a mode has become widespread in which raw (RAW) digital data (RAW data), obtained immediately after A/D conversion of the imaging analog signal and with no image processing applied, is losslessly compressed and recorded on a recording medium for the high-quality development and retouching performed after shooting. By carrying out the image reproduction (development) processing on an external device such as a personal computer, high-quality prints and image editing matched to the user's purpose are realized. RAW data without image processing is used because, if image data to which image processing had been applied were recorded, decoding it for development or retouching and applying further image processing would degrade the image quality.
Patent Document 1 describes a method in which, for two modes, a mode for recording after performing the synchronized color interpolation processing and a mode for recording in the RAW data state without performing the synchronized color interpolation processing, each is compressed independently in a common compression processing step (see FIGS. 13A and 13B).
Patent Document 2 describes a method of compressing component data separated from RAW data for each color component, component by component, without going through synchronized color interpolation processing (see FIGS. 14 and 15).
In both of these conventional techniques, the RAW data is divided into color-specific plane data having high correlation between adjacent pixels and that plane data is compressed, so higher compression efficiency is said to be obtained than when RAW data not separated by color is compressed as it is.
Patent Document 1: Japanese Patent No. 3864748; Patent Document 2: Japanese Patent No. 3956360
In the conventional techniques, the plural pieces of color-specific plane data obtained by dividing the RAW data are compressed individually. That is, similar compression processing is executed repeatedly in four separate passes, and switching the plane data at each repetition takes time. Specifically, there are as many as three switchovers: from the compression of the first plane data to that of the second, from the second to the third, and from the third to the fourth. Compressing the plural pieces of color-specific plane data individually and sequentially in this way takes a great deal of time in total, so the compression efficiency is low.
Patent Document 2 also mentions using a plurality of compression processing steps that can operate in parallel (see paragraph [0032]). In that case, however, a CPU (Central Processing Unit) with extremely high processing capacity is required, and a hardware implementation incurs a significant increase in circuit scale. Low compression efficiency is disadvantageous for high-speed processing in the compressed-RAW-data recording mode, given the recent increase in the pixel counts of image sensors.
In an imaging apparatus having a continuous shooting function, this may also hinder high-speed continuous shooting in the compressed-RAW-data recording mode, and the likelihood of block noise also increases.
The present invention was created in view of these circumstances, and its object is to increase the compression efficiency of a plurality of color-specific plane data without causing an increase in circuit scale or requiring greater CPU processing capacity. A further object is to suppress the occurrence of block noise.
The present invention solves the above problems by the following means.
An image processing apparatus according to the present invention is an apparatus that compresses RAW data (image data immediately after A/D conversion, to which no signal processing has been applied) in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, and comprises a RAW data reconstructor and a compression processor configured as follows.
The RAW data reconstructor receives the RAW data, to which no signal processing has been applied after A/D conversion of the analog image signal, and processes it as follows. The input RAW data is decomposed for each of the color components and reassembled to generate a plurality of color-specific plane data. For example, suppose the RAW data consists of first to fourth color components repeatedly arranged on the pixel array according to a certain rule. This RAW data is decomposed into the first, second, third, and fourth color components, and first color-specific plane data collecting only the first color component, second color-specific plane data collecting only the second color component, third color-specific plane data collecting only the third color component, and fourth color-specific plane data collecting only the fourth color component are generated. Plane data is data having an array form that is developed two-dimensionally when expanded on an image memory. The case where two of the color-specific plane data are of the same color, as in the Bayer array (BGgR), is also included.
The RAW data reconstructor arranges the plurality of color-specific plane data and collects them into one file that is the unit of compression processing. This one file, the unit of compression processing, has a plurality of arrangement regions partitioned for each color component. The RAW data reconstructor distributes the plurality of color-specific plane data to these arrangement regions to generate reconstructed RAW data. Following the example above, the first color-specific plane data is placed in a first arrangement region, the second in a second arrangement region, the third in a third arrangement region, and the fourth in a fourth arrangement region, thereby generating reconstructed RAW data collected into one file as the unit of compression processing. The generated reconstructed RAW data is then passed to the compression processor.
The compression processor receives the reconstructed RAW data of the compression processing unit generated by the RAW data reconstructor and performs compression processing. The relative positional relationship of the first to fourth arrangement regions is arbitrary.
(1) In summary, the image processing apparatus of the present invention is an apparatus that compresses RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, and comprises:
a RAW data reconstructor that receives the RAW data, decomposes and reassembles the RAW data for each color component to generate a plurality of color-specific plane data, and further arranges the plurality of color-specific plane data in a plurality of arrangement regions partitioned for each color component to generate reconstructed RAW data collected into one file as the unit of compression processing; and
a compression processor that receives and compresses the reconstructed RAW data of the compression processing unit generated by the RAW data reconstructor.
The image processing method of the present invention is a method of compressing RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, and comprises:
a step of receiving the RAW data, decomposing and reassembling the RAW data for each color component to generate a plurality of color-specific plane data, and further arranging the plurality of color-specific plane data in a plurality of arrangement regions partitioned for each color component to generate reconstructed RAW data collected into one file as the unit of compression processing; and
a step of receiving and compressing the reconstructed RAW data of the compression processing unit generated by the step of generating the reconstructed RAW data.
According to the image processing apparatus and image processing method of the present invention configured as described above, the following effects are obtained.
By providing the RAW data reconstructor, the target of the compression processing performed by the compression processor is reconstructed RAW data in which a plurality of color-specific plane data are arranged and collected into one file as the unit of compression processing. Color-specific plane data has high correlation between adjacent pixels, and compressing it yields higher compression efficiency than compressing, as it is, RAW data in which different color components are adjacent; as noted above, this much is also recognized in the prior art. In the present invention, in addition, the reconstructed RAW data configured by arranging a plurality of types of color-specific plane data in one file as the compression processing unit is the target of compression, so the plural color-specific plane data in the reconstructed RAW data can be compressed together at once, and still higher compression efficiency is obtained. Specifically, because the compression of the color-specific plane data of all color components is realized by a single compression process on one file (the reconstructed RAW data) that is the compression processing unit, the compression efficiency is greatly improved compared with the prior-art case of repeatedly compressing the plural color-specific plane data while switching them individually and sequentially.
Moreover, to increase the compression efficiency it is not necessary to use a plurality of compression processors operating in parallel; a single compression processor suffices, so no increase in circuit scale is incurred, nor is it necessary to specially increase the processing capacity of the CPU.
An imaging apparatus according to the present invention comprises an imaging unit that converts an optical image, input through an image sensor of a type that captures images with color separation, into an analog electric signal and further into digital RAW data, and the image processing apparatus described above.
According to the present invention, a RAW data reconstructor is provided, reconstructed RAW data configured by arranging a plurality of types of color-specific plane data of different color components in one file as the compression processing unit is the target of compression, and the compression of the color-specific plane data of all color components is realized by a single compression process on that one file; therefore the compression efficiency can be greatly improved compared with the case of repeatedly compressing a plurality of color-specific plane data while switching them individually and sequentially.
In addition, to increase the compression efficiency there is no need to use a plurality of compression processors operating in parallel; a single compression processor suffices, so no increase in circuit scale is incurred and there is no need to specially increase the processing capacity of the CPU.
The drawings are briefly described as follows.
A block diagram showing the schematic configuration of an image processing apparatus that compresses reconstructed RAW data generated from RAW data, according to an embodiment of the present invention.
An illustrative diagram of the RAW data to be compressed, according to the embodiment.
An illustrative diagram in which the four color-specific plane data are arranged adjacently in two dimensions, according to the embodiment.
An illustrative diagram in which the four color-specific plane data are arranged with appropriate spacing between them, according to the embodiment.
An explanatory diagram (part 1) showing reconstructed RAW data in which the four color-specific plane data of a Bayer array are arranged two-dimensionally and reconstructed as image data for one frame, according to the embodiment.
An explanatory diagram of the block noise occurrence positions in the image data when the compressed RAW data of FIG. 3 is decompressed, according to the embodiment.
An explanatory diagram (part 2) showing reconstructed RAW data in which the four color-specific plane data of a Bayer array are arranged two-dimensionally and reconstructed as image data for one frame, according to the embodiment.
An explanatory diagram of the block noise occurrence positions in the image data when the compressed RAW data of FIG. 5 is decompressed, according to the embodiment.
A diagram showing the appearance of the block noise corresponding to FIGS. 5 and 6, according to the embodiment.
A diagram showing the appearance of the block noise corresponding to FIGS. 3 and 4, according to the embodiment.
An explanatory diagram showing the block noise occurrence positions of FIG. 6 corresponding to FIG. 7A, according to the embodiment.
An explanatory diagram showing the block noise occurrence positions of FIG. 4 corresponding to FIG. 7B, according to the embodiment.
An explanatory diagram showing other forms of the four independent arrangement regions corresponding to the four color-specific plane data, according to the embodiment.
A block diagram showing the schematic configuration of an imaging apparatus equipped with an image processing apparatus that compresses reconstructed RAW data generated from RAW data, according to an example of the present invention.
An explanatory diagram showing an example of a primary-color-type color filter arrangement (Bayer arrangement), according to the example.
An explanatory diagram showing an example of a primary-color-type color filter arrangement (honeycomb arrangement), according to the example.
An explanatory diagram showing the frequency characteristics of the primary-color-type color filter arrangement example (Bayer arrangement), according to the example.
An explanatory diagram (part 1) of effectively using the resources of a general compression processor having input terminals for a luminance signal (Y) and color difference signals (Cr/Cb) as component data input terminals, according to the example.
An explanatory diagram (part 2) of effectively using the resources of a general compression processor having input terminals for a luminance signal (Y) and color difference signals (Cr/Cb) as component data input terminals, according to the example.
An explanatory diagram (part 1) showing features of the image processing apparatus disclosed in Patent Document 1 of the prior art.
An explanatory diagram (part 2) showing features of the image processing apparatus disclosed in Patent Document 1 of the prior art.
An explanatory diagram showing features of the image processing method disclosed in Patent Document 2 of the prior art.
An explanatory diagram showing features of the image processing steps disclosed in Patent Document 2 of the prior art.
The image processing apparatus and image processing method of the present invention having the configuration of (1) above can be developed further advantageously in the following embodiments.
(2) In the image processing apparatus having the configuration of (1) above, there is an aspect in which the RAW data reconstructor is configured so that, when arranging the plurality of color-specific plane data in the one file, the reference position of each color-specific plane data is shifted by a predetermined number of pixels in both the vertical and horizontal directions relative to the periodic boundary positions of the compression processing unit blocks in the compression processor. In the corresponding image processing method having the configuration of (1) above, there is an aspect in which the step of generating the reconstructed RAW data arranges the plurality of color-specific plane data in the one file with the reference position of each color-specific plane data shifted by a predetermined number of pixels in both the vertical and horizontal directions relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
(3) In the image processing apparatus having the configuration of (1) above, there is also an aspect in which the RAW data reconstructor is configured so that, when arranging the plurality of color-specific plane data in the one file, the reference position of each color-specific plane data is shifted by a predetermined number of pixels in either the vertical or the horizontal direction relative to the periodic boundary positions of the compression processing unit blocks in the compression processor. In the corresponding image processing method, there is an aspect in which the step of generating the reconstructed RAW data arranges the plurality of color-specific plane data in the one file with the reference position of each color-specific plane data shifted by a predetermined number of pixels in either the vertical or the horizontal direction relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
When arranging the plurality of color-specific plane data, if the reference position of each color-specific plane data were placed at relatively the same position with respect to the periodic boundary positions of the compression processing unit blocks in the compression processor, and the compression is lossy compression with a high compression rate, then when decompression is performed for reproduction, the block noise occurrence positions of the respective colors, which correspond to the boundary positions of the compression processing unit blocks, would coincide in the obtained one frame of reconstructed RAW data, producing a peculiar image quality degradation and possibly intensifying the block noise.
In contrast, with the configurations of (2) and (3) above, in which each color-specific plane data is shifted relative to the periodic boundary positions of the compression processing unit blocks, the peculiar image quality degradation in which the block noise occurrence positions of the respective colors coincide is suppressed, and the image quality can be improved.
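As a small numerical illustration of why the shift helps, the snippet below lists the columns at which compression-block boundaries fall for each plane, once with all planes sharing the same reference position and once with each plane shifted by a different amount. The 8-pixel block and the particular shifts are illustrative assumptions.

```python
def boundary_columns(offset, width=64, block=8):
    # Columns at which a block boundary falls for a plane whose reference
    # position is shifted horizontally by `offset` pixels.
    return {offset + c for c in range(0, width, block)}

aligned = [boundary_columns(0) for _ in range(4)]       # same reference position
shifted = [boundary_columns(o) for o in (0, 2, 4, 6)]   # per-color shifts

print(len(set.intersection(*aligned)))   # 8: every boundary is shared by all colors
print(len(set.intersection(*shifted)))   # 0: no column is a boundary for every color
```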
(4) In the image processing apparatus having the configuration of (1) above, there is an aspect in which the RAW data reconstructor is configured to generate the reconstructed RAW data by placing fixed data in the non-image regions between the arrangement regions of the plurality of color-specific plane data. In the corresponding image processing method, there is an aspect in which the step of generating the reconstructed RAW data places fixed data in the non-image regions between the arrangement regions of the plurality of color-specific plane data. The fixed data is pixel data with no luminance variation.
With this configuration, when the plurality of color-specific plane data are arranged, non-image regions consisting of fixed data can be formed between the arrangement regions of the color-specific plane data. If these non-image regions did not exist and the color-specific plane data were arranged in contact with one another, then, when the compression processing is repeated for each compression processing unit block, the color component at the end of one color-specific plane data in the horizontal or vertical direction and the color component at the head of the adjacent color-specific plane data could be mixed into the same compression processing unit block. The presence of the non-image regions avoids this mixing when the compression is repeated block by block. As a result, in the compression of the reconstructed RAW data, the plural color-specific plane data can be compressed without being confused with one another. Moreover, because the interposed data is fixed data with no luminance variation, the differences between adjacent samples there are zero, which also works in favor of higher compression efficiency.
(5) In the image processing apparatus having the configuration of (1) above, there is an aspect in which the compression processor has a plurality of component input terminals and is configured to simultaneously compress the reconstructed RAW data input from one of the component input terminals and fixed data input from the other component input terminals. In the corresponding image processing method, there is an aspect in which the compression step receives the reconstructed RAW data and fixed data through separate paths and compresses them simultaneously.
With this configuration, compressing the reconstructed RAW data and the fixed data simultaneously makes it possible to further increase the compression efficiency. In JPEG and the like, a compression processor having input terminals for a luminance signal (Y) and color difference signals (Cr/Cb) as its component input terminals is used, and the resources of a compression processor of this general form can be used effectively.
(6) In the image processing apparatus having the configuration of (1) above, there is an aspect in which the compression processor has a plurality of component input terminals and is configured to simultaneously compress the reconstructed RAW data input from one of the component input terminals and other image data input from the other component input terminals. In the corresponding image processing method, there is an aspect in which the compression step receives the reconstructed RAW data and other image data through separate paths and compresses them simultaneously.
With this configuration, compressing the reconstructed RAW data together with other image data (for example, small YCrCb data or small RGB data reduced and resized for display) makes it possible to further increase the compression efficiency while also covering another image such as a reduced display image. In this case as well, the resources of a general-form compression processor having component input terminals for a luminance signal (Y) and color difference signals (Cr/Cb), as in JPEG, can be used effectively.
(7) In the image processing apparatus having the configuration of (1) above, there is an aspect in which the compression processor is configured to lossily compress the reconstructed RAW data. In the corresponding image processing method, there is an aspect in which the compression step lossily compresses the reconstructed RAW data. Lossy compression such as JPEG, MPEG (Moving Picture Experts Group), or H.264 achieves a higher compression rate than lossless compression, so compressed RAW data with a smaller file size can be obtained.
Any plurality of the items (2) to (7) above may be combined arbitrarily with the image processing apparatus or image processing method having the configuration of (1) above, provided they do not contradict one another.
The image processing method of the present invention may be configured as stand-alone application software, or may be incorporated as part of an application such as image editing software or file management software.
An image processing program based on the image processing method of the present invention is not limited to application to a computer system such as a personal computer; it can also be applied as an operation program of a central processing unit (CPU) incorporated in information equipment such as a digital camera or a mobile phone.
The outline of the present invention has been described above.
Next, the case of a Bayer array is taken as an example and described in more detail.
FIG. 1 is a block diagram showing the schematic configuration of an image processing apparatus that generates reconstructed RAW data from RAW data acquired by an imaging unit having a Bayer-array color filter and compresses the reconstructed RAW data.
 図1において、1はカラー画像を構成する複数種類の色成分が一定の規則に従って画素配列上に繰り返し配列されるカラーフィルタをもつ撮像部(イメージセンサ)である。ここでは、カラーフィルタは、4種類の色成分であるB(青成分),G(第1の緑成分),g(第2の緑成分),R(赤成分)がベイヤー配列をもつものであるとする。2は撮像部1で取得された4つの色成分B,G,g,Rをもつベイヤー配列によるデータ形式のRAWデータである。3はRAWデータ2を入力し、RAWデータ2を色成分B,G,g,R毎に分解し2次元的に再集合して4つの色別プレーンデータ41,42,43,44を生成し、さらに色成分毎に区画された4つの独立の配置領域a1,a2,a3,a4に前記4つの色別プレーンデータ41,42,43,44を配置して圧縮処理単位である1つのファイルとしてまとめた再構成RAWデータ4を生成するRAWデータ再構成器である。5はRAWデータ再構成器3によって生成された圧縮処理単位の再構成RAWデータ4を輝度データとして入力し非可逆圧縮処理する圧縮処理器である。 In FIG. 1, reference numeral 1 denotes an image pickup unit (image sensor) having a color filter in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule. Here, the color filter has four types of color components B (blue component), G (first green component), g (second green component), and R (red component) having a Bayer array. Suppose there is. Reference numeral 2 denotes RAW data in a data format based on a Bayer array having four color components B, G, g, and R acquired by the imaging unit 1. 3 inputs RAW data 2, RAW data 2 is decomposed into color components B, G, g, and R and reassembled two-dimensionally to generate four color- specific plane data 41, 42, 43, 44. Further, the four color- specific plane data 41, 42, 43, and 44 are arranged in four independent arrangement areas a1, a2, a3, and a4 divided for each color component to form one file as a compression processing unit. It is a RAW data reconstructor that generates combined reconstructed RAW data 4. Reference numeral 5 denotes a compression processor that inputs the reconstructed RAW data 4 of the compression processing unit generated by the RAW data reconstructor 3 as luminance data and performs irreversible compression processing.
 撮像部1から出力されRAWデータ再構成器3に入力されるRAWデータ2は、撮像部1における撮像アナログ信号のA/D変換直後の画像データであって、同時化色補間処理、ガンマ補正処理、ホワイトバランス調整などの信号処理が行われていない画像データである。RAWデータ2は、カラーフィルタの配列パターンに対応して画素毎に異なる色情報を1つだけ保持しているモザイク状の画像データである。 RAW data 2 output from the imaging unit 1 and input to the RAW data reconstructor 3 is image data immediately after A / D conversion of the imaging analog signal in the imaging unit 1, and is a synchronized color interpolation process and a gamma correction process. The image data is not subjected to signal processing such as white balance adjustment. The RAW data 2 is mosaic image data in which only one color information different for each pixel corresponding to the color filter array pattern is held.
 RAWデータ再構成器3は、信号処理が加えられていないRAWデータ2を入力し、入力したRAWデータ2を前記の色成分毎に分解し、再集合して4つの色別プレーンデータ41,42,43,44を生成する。すなわち、RAWデータ2は第1ないし第4の色成分を一定の規則に従って画素配列上に繰り返し配列したものであるが、このRAWデータ2を第1色成分(B)、第2色成分(G)、第3色成分(g)、第4の色成分(R)に分解し、第1色成分(B)だけを集めた第1の色別プレーンデータ41と、第2色成分(G)だけを集めた第2の色別プレーンデータ42と、第3色成分(g)だけを集めた第3の色別プレーンデータ43と、第4色成分(R)だけを集めた第4の色別プレーンデータ44とを生成する。なお、ベイヤー配列における4つの色成分は、第1色成分(B)、第2色成分(G)、第3色成分(g)、第4の色成分(R)であり、第2色成分(G)と第3色成分(g)とはともに緑であるが、これら2つは互いに独立した色別プレーンデータとして扱うものとする。 The RAW data reconstructor 3 receives the RAW data 2 that has not been subjected to signal processing, decomposes the input RAW data 2 for each of the color components, and reassembles the four pieces of plane data 41 and 42 for each color. , 43, 44 are generated. That is, the RAW data 2 is obtained by repeatedly arranging the first to fourth color components on the pixel array in accordance with a certain rule. The RAW data 2 is composed of the first color component (B) and the second color component (G ), First color-specific plane data 41 that is decomposed into the third color component (g) and the fourth color component (R) and collects only the first color component (B), and the second color component (G) Second color plane data 42 that collects only the third color plane data 43 that collects only the third color component (g), and a fourth color that collects only the fourth color component (R) Another plane data 44 is generated. The four color components in the Bayer array are the first color component (B), the second color component (G), the third color component (g), and the fourth color component (R), and the second color component. Both (G) and the third color component (g) are green, but these two are treated as independent plane data by color.
The RAW data reconstructor 3 further arranges the four color-specific plane data 41, 42, 43, and 44 two-dimensionally and combines them into one file serving as a compression processing unit. Here, the one file serving as the compression processing unit has four arrangement areas a1, a2, a3, and a4 partitioned for each color component. The RAW data reconstructor 3 distributes the four color-specific plane data 41, 42, 43, and 44 to the four arrangement areas a1, a2, a3, and a4 partitioned for each color component to generate the reconstructed RAW data 4. That is, the first color-specific plane data 41 is placed in the first arrangement area a1 corresponding to the color component B (blue), the second color-specific plane data 42 is placed in the second arrangement area a2 corresponding to the color component G (first green), the third color-specific plane data 43 is placed in the third arrangement area a3 corresponding to the color component g (second green), and the fourth color-specific plane data 44 is placed in the fourth arrangement area a4 corresponding to the color component R (red), thereby generating the reconstructed RAW data 4 combined into one file serving as the compression processing unit. The generated reconstructed RAW data 4 is then passed to the compression processor 5.
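One possible realization of this placement is sketched below: each plane is copied into its own arrangement area of a single canvas, with the unused pixels filled by constant data. The area padding and the mid-gray fill value are assumptions chosen for illustration; the embodiment only requires that the areas be partitioned per color component.

```python
import numpy as np

def assemble_reconstructed_raw(planes, pad_rows: int, pad_cols: int,
                               fill_value: int = 128) -> np.ndarray:
    """Place four equally sized planes into one canvas divided into four
    arrangement areas (top-left, top-right, bottom-left, bottom-right),
    padding the remainder of each area with constant 'fixed data'."""
    rows, cols = planes[0].shape
    area_h, area_w = rows + pad_rows, cols + pad_cols
    canvas = np.full((2 * area_h, 2 * area_w), fill_value,
                     dtype=planes[0].dtype)
    origins = [(0, 0), (0, area_w), (area_h, 0), (area_h, area_w)]
    for plane, (oy, ox) in zip(planes, origins):
        canvas[oy:oy + rows, ox:ox + cols] = plane
    return canvas
```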
The compression processor 5 receives the reconstructed RAW data 4 of the compression processing unit generated by the RAW data reconstructor 3 and performs compression processing. The compression processing may be either lossy compression (lossy encoding) or lossless compression (lossless encoding).
The relative positional relationship among the first to fourth arrangement areas a1 to a4 is arbitrary. Here they are arranged two-dimensionally along the horizontal and vertical directions, but as described later, they may also be arranged one-dimensionally along the horizontal direction or one-dimensionally along the vertical direction (see FIG. 9).
The image processing apparatus of the present embodiment configured as described above provides the following effects. In the RAW data 2, the second color component (G) is adjacent to the right of the first color component (B), the first color component (B) is adjacent to the right of the second color component (G), the fourth color component (R) is adjacent to the right of the third color component (g), and the third color component (g) is adjacent to the right of the fourth color component (R). Likewise, the third color component (g) is adjacent below the first color component (B), the first color component (B) is adjacent below the third color component (g), the fourth color component (R) is adjacent below the second color component (G), and the second color component (G) is adjacent below the fourth color component (R). However, the pixel values of these mutually adjacent pixels are not highly correlated. Therefore, if the RAW data 2 is compressed as it is, the compression efficiency is low (corresponding to the prior art).
In the embodiment of the present invention, by providing the RAW data reconstructor 3, the target of the compression processing by the compression processor 5 is the reconstructed RAW data 4, in which the four color-specific plane data 41, 42, 43, and 44 are arranged and combined into one file serving as the compression processing unit. As described above, color-specific plane data has high correlation between adjacent pixels, so its compression achieves a higher compression efficiency than compressing the RAW data 2 as it is.
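The correlation claim can be checked numerically. The sketch below, continuing in Python/NumPy, measures the correlation between horizontally adjacent pixel values; applied to the interleaved Bayer mosaic it mixes different color filters and typically yields a lower value than when applied to a single extracted color plane. The function name and the example variables are illustrative.

```python
import numpy as np

def horizontal_neighbor_correlation(img: np.ndarray) -> float:
    """Pearson correlation between each pixel and its right-hand neighbor."""
    left = img[:, :-1].ravel().astype(np.float64)
    right = img[:, 1:].ravel().astype(np.float64)
    return float(np.corrcoef(left, right)[0, 1])

# Typical usage (with 'raw' and the planes from the earlier sketches):
#   horizontal_neighbor_correlation(raw)      # neighbors differ in color -> lower
#   horizontal_neighbor_correlation(plane_b)  # neighbors share one color -> higher
```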
Furthermore, in the present embodiment, the reconstructed RAW data 4, in which the four color-specific plane data 41, 42, 43, and 44 of different color components are arranged on one file serving as the compression processing unit, is the target of compression, so an even higher compression efficiency is obtained. Specifically, the compression of the color-specific plane data 41, 42, 43, and 44 of all color components is accomplished by a single compression pass over one file (the reconstructed RAW data 4) serving as the compression processing unit, so the compression efficiency is greatly improved compared with repeatedly compressing the four color-specific plane data 41, 42, 43, and 44 individually while switching among them sequentially, as in the prior art.
Moreover, as the compression processor 5, it is possible to make effective use of conventional compression control software assets, standard in digital cameras, that are written to complete the compression of one frame of image data in a single pass. Alternatively, existing JPEG hardware processing can be used. That is, to complete the compression of one frame of image data containing the four color-specific plane data 41, 42, 43, and 44 in a single pass, there is no need to use four compression processors operating in parallel; a single compression processor 5 suffices, so no increase in circuit scale is incurred, and there is no need to specially increase the processing capacity of the CPU.
Next, the generation of block noise will be described. FIG. 2A illustrates the RAW data 2 to be compressed. FIG. 2B shows the four color-specific plane data 41, 42, 43, and 44 arranged adjacent to one another in two dimensions. FIG. 2C shows the four color-specific plane data 41, 42, 43, and 44 arranged with appropriate spacing between them. The embodiment of the present invention corresponds to FIG. 2C, not FIG. 2B. In the case of FIG. 2C, fixed data with an unchanging luminance level (shown in gray) is placed in the gaps and in the peripheral portion, forming a non-image area.
Let m, n, x, and y be arbitrary natural numbers (of 2 or more). The size of the RAW data 2 to be compressed is 2m pixels in the horizontal direction and 2n pixels in the vertical direction. The reconstructed RAW data 4, which is the single file to be compressed, is larger than the RAW data 2: it is 2(m+x) pixels in the horizontal direction and 2(n+y) pixels in the vertical direction. The four arrangement areas a1, a2, a3, and a4 partitioned for each color component are of equal size, each (m+x) pixels in the horizontal direction and (n+y) pixels in the vertical direction. The four color-specific plane data 41, 42, 43, and 44 have a common size of m pixels in the horizontal direction and n pixels in the vertical direction.
FIG. 3 shows the reconstructed RAW data 4 obtained when the RAW data reconstructor 3 of FIG. 1 arranges the four color-specific plane data 41, 42, 43, and 44 of the Bayer array two-dimensionally and reconstructs them as image data for one frame. In FIG. 3, 41 is the first color-specific plane data, placed in the upper left part of the image and collecting only the first color component (B); 42 is the second color-specific plane data, placed in the upper right part and collecting only the second color component (G); 43 is the third color-specific plane data, placed in the lower left part and collecting only the third color component (g); and 44 is the fourth color-specific plane data, placed in the lower right part and collecting only the fourth color component (R).
Reference numeral 45 denotes the periodic horizontal boundary positions (extending in the vertical direction) of the compression processing unit blocks used when the compression processor 5 performs its processing, and 46 denotes the periodic vertical boundary positions (extending in the horizontal direction) of the compression processing unit blocks. In the drawing, 47, shown in gray, is fixed data filling the gaps between the adjacent edges of the four color-specific plane data 41, 42, 43, and 44. The fixed data 47 is pixel data that has no chrominance signal component, has only a luminance signal component, and whose luminance signal component is constant. The size of the compression processing unit block is taken here as 4 pixels × 4 pixels (corresponding to H.264) for simplicity; in the case of JPEG it is 8 pixels × 8 pixels.
On the reconstructed RAW data 4, which is the single file to be compressed, the four color-specific plane data 41, 42, 43, and 44 are arranged two-dimensionally at fixed intervals from one another. The reason for spacing the color-specific plane data apart is as follows. Compression is performed repeatedly on block units of 4 × 4 or 8 × 8 pixels. If the four color-specific plane data were placed in contact with one another, then depending on their data size, in some block row a few pixels of the last block of the first color-specific plane data 41 and a few pixels of the first block of the second color-specific plane data 42 could fall into the same compression processing unit block, and the same could happen between the third color-specific plane data 43 and the fourth color-specific plane data 44; if compression were performed in that state, pixel data of different color components would be mixed within the same compression processing unit block and the compressed data would be degraded. Likewise in the vertical direction, in some block column a few pixels of the last block of the first color-specific plane data 41 and a few pixels of the first block of the third color-specific plane data 43 could fall into the same compression processing unit block, and the same applies between the second color-specific plane data 42 and the fourth color-specific plane data 44; compressing in that state would again mix pixel data of different color components in the same compression processing unit block and degrade the compressed data.
To avoid this problem, it suffices to ensure that only pixel data of the same color component ever enters a single compression processing unit block. For this purpose, in the present embodiment the color-specific plane data are spaced apart from one another, and an appropriate number of pixels of the fixed data 47, whose luminance signal component is constant, are placed between adjacent color-specific plane data. This prevents degradation of the compressed data.
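For the aligned arrangement of FIG. 3, one simple way to guarantee this isolation is to round each arrangement area up to a whole number of compression blocks, so that every plane starts on a block boundary and no block straddles two planes. The sketch below is an assumption-level helper written for illustration, not a procedure stated in the embodiment.

```python
def padded_area_size(plane_w: int, plane_h: int, block: int = 8):
    """Round the arrangement-area dimensions up to a multiple of the
    compression block size (8 for JPEG, 4 for the simplified example),
    so that block-aligned planes never share a block with a neighbor;
    the surplus pixels are filled with the constant 'fixed data'."""
    area_w = -(-plane_w // block) * block  # ceiling to a multiple of block
    area_h = -(-plane_h // block) * block
    return area_w, area_h
```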
In the compression of the compression processing unit blocks, block noise tends to appear at the boundary positions 45 and 46 when the compressed RAW data is decompressed for photo-retouching or similar purposes. Block noise arises readily in the DCT used in JPEG, appears more readily as the compression ratio is raised, and is conspicuous in regions where the density values change relatively little. This is shown in FIG. 4, which illustrates the block noise occurrence positions in the image data obtained by decompressing the compressed RAW data of FIG. 3.
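To make the mechanism concrete, the following sketch applies an 8 × 8 DCT, coarse quantization, and the inverse DCT to each block of a grayscale image, which is roughly what JPEG-style lossy coding does; discontinuities then appear at the 8-pixel block boundaries. The quantization step value is an arbitrary illustration, not a parameter of the embodiment.

```python
import numpy as np
from scipy.fft import dctn, idctn

def blocky_codec(img: np.ndarray, block: int = 8, q_step: float = 40.0) -> np.ndarray:
    """Crude stand-in for JPEG-style lossy coding: per-block DCT,
    uniform quantization, inverse DCT. Strong quantization makes the
    reconstruction discontinuous at block boundaries (block noise)."""
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block].astype(np.float64)
            coeff = dctn(tile, norm="ortho")
            coeff = np.round(coeff / q_step) * q_step   # lossy step
            out[y:y + block, x:x + block] = idctn(coeff, norm="ortho")
    return out
```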
In the case of FIG. 3, the relative positional relationship with which the first color-specific plane data 41 is placed in the first arrangement area a1, the relative positional relationship with which the second color-specific plane data 42 is placed in the second arrangement area a2, the relative positional relationship with which the third color-specific plane data 43 is placed in the third arrangement area a3, and the relative positional relationship with which the fourth color-specific plane data 44 is placed in the fourth arrangement area a4 are all equivalent to one another. Let the reference position of each arrangement area be its upper left corner, and let the reference position of each color-specific plane data be its upper left corner. Then the displacement vector V1 of the reference position of the first color-specific plane data 41 with respect to the reference position of the first arrangement area a1, the displacement vector V2 of the reference position of the second color-specific plane data 42 with respect to the reference position of the second arrangement area a2, the displacement vector V3 of the reference position of the third color-specific plane data 43 with respect to the reference position of the third arrangement area a3, and the displacement vector V4 of the reference position of the fourth color-specific plane data 44 with respect to the reference position of the fourth arrangement area a4 are equivalent to one another, so that the planes coincide exactly with one another under translation along the horizontal or vertical direction.
However, it has been found that such an equivalent arrangement tends to cause block noise. The reason is explained next. For simplicity of explanation, the size of the basic processing block is taken as 4 pixels × 4 pixels.
Inside each of the four color-specific plane data 41, 42, 43, and 44, the vertically extending boundary positions 45 of the horizontal block period and the horizontally extending boundary positions 46 of the vertical block period of the compression processing unit blocks appear repeatedly.
In FIG. 3, the B (blue) pixel data of the first color-specific plane data 41 that faces a boundary position 45 from the left side lies in the 2nd, 6th, 10th, and subsequent columns, and the B (blue) pixel data that faces a boundary position 45 from the right side lies in the 3rd, 7th, 11th, and subsequent columns. When these pixel data are decompressed and expanded back into the original large 2m × 2n image data, as shown in FIG. 4, they fall in the 3rd, 11th, 19th, and subsequent columns and in the 5th, 13th, 21st, and subsequent columns, respectively.
In FIG. 3, the B (blue) pixel data of the first color-specific plane data 41 that faces a boundary position 46 from above lies in the 2nd, 6th, 10th, and subsequent rows, and the B (blue) pixel data that faces a boundary position 46 from below lies in the 3rd, 7th, 11th, and subsequent rows; when expanded in FIG. 4, these become the 3rd, 11th, 19th, and subsequent rows and the 5th, 13th, 21st, and subsequent rows.
In FIG. 3, the G (first green) pixel data of the second color-specific plane data 42 that faces a boundary position 45 from the left side lies, as above, in the 2nd, 6th, 10th, and subsequent columns, and the G (first green) pixel data that faces a boundary position 45 from the right side lies in the 3rd, 7th, 11th, and subsequent columns. When expanded in FIG. 4, these become the 4th, 12th, 20th, and subsequent columns and the 6th, 14th, 22nd, and subsequent columns.
In FIG. 3, the G (first green) pixel data of the second color-specific plane data 42 that faces a boundary position 46 from above lies, as above, in the 2nd, 6th, 10th, and subsequent rows, and the G (first green) pixel data that faces a boundary position 46 from below lies in the 3rd, 7th, 11th, and subsequent rows; when expanded in FIG. 4, these become, as above, the 3rd, 11th, 19th, and subsequent rows and the 5th, 13th, 21st, and subsequent rows.
Further, in FIG. 3, the relationship of the g (second green) pixel data of the third color-specific plane data 43 to the boundary positions 45 and 46 is similar; in FIG. 4 these become, as above, the 3rd, 11th, 19th, and subsequent columns and the 5th, 13th, 21st, and subsequent columns, and the 4th, 12th, 20th, and subsequent rows and the 6th, 14th, 22nd, and subsequent rows.
Further, in FIG. 3, the relationship of the R (red) pixel data of the fourth color-specific plane data 44 to the boundary positions 45 and 46 is likewise similar; in FIG. 4 these become the 4th, 12th, 20th, and subsequent columns and the 6th, 14th, 22nd, and subsequent columns, and the 4th, 12th, 20th, and subsequent rows and the 6th, 14th, 22nd, and subsequent rows.
As described above, the boundary positions at which block noise tends to occur are equivalent across the four color-specific plane data 41, 42, 43, and 44, and they become continuous in the vertical and horizontal directions in the decompressed image data. That is, in the original large-size image data after decompression, pixels corresponding to boundary positions appear regularly every fourth column and every fourth row. As a result, block noise tends to appear conspicuously, and image quality may be degraded (see FIG. 7B).
The block noise problem described above stems from the fact that, in order to increase compression efficiency, a plurality of types of color-specific plane data of different color components are arranged in an orderly manner (with their displacement vectors V1 to V4 relative to the respective reference positions equivalent to one another) on one file serving as the compression processing unit, and the compression of one frame of image data is completed in a single pass.
Therefore, in the present embodiment, the following countermeasure is taken as a more preferable mode. The data compression here is lossy compression with a high compression ratio. As shown in FIG. 5, the relative positional relationship with which the first color-specific plane data 41 is placed in the first arrangement area a1, the relative positional relationship with which the second color-specific plane data 42 is placed in the second arrangement area a2, the relative positional relationship with which the third color-specific plane data 43 is placed in the third arrangement area a3, and the relative positional relationship with which the fourth color-specific plane data 44 is placed in the fourth arrangement area a4 are made non-equivalent to one another. That is, these four relative positional relationships are mutually offset. The offset is one pixel to the right and one pixel downward (when the compression processing unit block is 4 pixels × 4 pixels).
The displacement vector V1 of the reference position of the first color-specific plane data 41 with respect to the reference position of the first arrangement area a1, the displacement vector V2 of the reference position of the second color-specific plane data 42 with respect to the reference position of the second arrangement area a2, the displacement vector V3 of the reference position of the third color-specific plane data 43 with respect to the reference position of the third arrangement area a3, and the displacement vector V4 of the reference position of the fourth color-specific plane data 44 with respect to the reference position of the fourth arrangement area a4 are non-equivalent to one another, with V2 = V1 × 2, V3 = V1 × 3, and V4 = V1 × 4.
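A sketch of this staggered placement follows, as a variant of the earlier assemble_reconstructed_raw: plane k (k = 1 to 4) is offset by k × shift pixels to the right and downward inside its arrangement area, giving V2 = 2V1, V3 = 3V1, and V4 = 4V1. The function name and fill value are assumptions, and the arrangement areas must be large enough to absorb the largest offset.

```python
import numpy as np

def assemble_staggered_raw(planes, area_h: int, area_w: int,
                           shift: int = 1, fill_value: int = 128) -> np.ndarray:
    """Place four planes into four arrangement areas, offsetting plane k
    by k*shift pixels right and down so that the compression-block
    boundary positions fall differently inside each color plane.
    Requires area_h >= plane rows + 4*shift and area_w >= plane cols + 4*shift."""
    canvas = np.full((2 * area_h, 2 * area_w), fill_value,
                     dtype=planes[0].dtype)
    area_origins = [(0, 0), (0, area_w), (area_h, 0), (area_h, area_w)]
    for k, (plane, (ay, ax)) in enumerate(zip(planes, area_origins), start=1):
        rows, cols = plane.shape
        oy, ox = ay + k * shift, ax + k * shift
        canvas[oy:oy + rows, ox:ox + cols] = plane
    return canvas
```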
The second color-specific plane data 42 is shifted one pixel toward the lower right with respect to the first color-specific plane data 41, the third color-specific plane data 43 is shifted one pixel toward the lower right with respect to the second color-specific plane data 42, and the fourth color-specific plane data 44 is shifted one pixel toward the lower right with respect to the third color-specific plane data 43.
FIG. 6 shows, corresponding to FIG. 5, the original large-size image data after decompression. The relationship of FIG. 6 to FIG. 5 corresponds to the relationship of FIG. 4 to FIG. 3.
In FIG. 5, the B (blue) pixel data of the first color-specific plane data 41 that faces a boundary position 45 from the left side lies in the 3rd, 7th, 11th, and subsequent columns, and the B (blue) pixel data that faces a boundary position 45 from the right side lies in the 4th, 8th, 12th, and subsequent columns; when expanded in FIG. 6, these become the 5th, 13th, 21st, and subsequent columns and the 7th, 15th, 23rd, and subsequent columns.
In FIG. 5, the B (blue) pixel data of the first color-specific plane data 41 that faces a boundary position 46 from above lies in the 3rd, 7th, 11th, and subsequent rows, and the B (blue) pixel data that faces a boundary position 46 from below lies in the 4th, 8th, 12th, and subsequent rows; when expanded in FIG. 6, these become the 5th, 13th, 21st, and subsequent rows and the 7th, 15th, 23rd, and subsequent rows.
In FIG. 5, the G (first green) pixel data of the second color-specific plane data 42 that faces a boundary position 45 from the left side lies in the 2nd, 6th, 10th, and subsequent columns, and the G (first green) pixel data that faces a boundary position 45 from the right side lies in the 3rd, 7th, 11th, and subsequent columns; when expanded in FIG. 6, these become the 4th, 12th, 20th, and subsequent columns and the 6th, 14th, 22nd, and subsequent columns.
In FIG. 5, the G (first green) pixel data of the second color-specific plane data 42 that faces a boundary position 46 from above lies in the 2nd, 6th, 10th, and subsequent rows, and the G (first green) pixel data that faces a boundary position 46 from below lies in the 3rd, 7th, 11th, and subsequent rows; when expanded in FIG. 6, these become the 3rd, 11th, 19th, and subsequent rows and the 5th, 13th, 21st, and subsequent rows.
In FIG. 5, the g (second green) pixel data of the third color-specific plane data 43 that faces a boundary position 45 from the left side lies in the 1st, 5th, 9th, and subsequent columns, and the g (second green) pixel data that faces a boundary position 45 from the right side lies in the 2nd, 6th, 10th, and subsequent columns; when expanded in FIG. 6, these become the 1st, 9th, 17th, and subsequent columns and the 3rd, 11th, 19th, and subsequent columns.
In FIG. 5, the g (second green) pixel data of the third color-specific plane data 43 that faces a boundary position 46 from above lies in the 1st, 5th, 9th, and subsequent rows, and the g (second green) pixel data that faces a boundary position 46 from below lies in the 2nd, 6th, 10th, and subsequent rows; when expanded in FIG. 6, these become the 2nd, 10th, 18th, and subsequent rows and the 4th, 12th, 20th, and subsequent rows.
In FIG. 5, the R (red) pixel data of the fourth color-specific plane data 44 that faces a boundary position 45 from the left side lies in the 4th, 8th, 12th, and subsequent columns, and the R (red) pixel data that faces a boundary position 45 from the right side lies in the 1st, 5th, 9th, and subsequent columns; when expanded in FIG. 6, these become the 8th, 16th, 24th, and subsequent columns and the 2nd, 10th, 18th, and subsequent columns.
In FIG. 5, the R (red) pixel data of the fourth color-specific plane data 44 that faces a boundary position 46 from above lies in the 4th, 8th, 12th, and subsequent rows, and the R (red) pixel data that faces a boundary position 46 from below lies in the 1st, 5th, 9th, and subsequent rows; when expanded in FIG. 6, these become the 8th, 16th, 24th, and subsequent rows and the 2nd, 10th, 18th, and subsequent rows.
Because of image-quality degradation in which the image becomes discontinuous at the periodic boundary positions of the compression processing unit blocks, block noise may occur in the decoded one-frame reconstructed RAW data obtained by decompression.
In FIG. 6, the 8-pixel × 8-pixel pattern described above appears repeatedly along both two-dimensional directions. Comparing FIG. 4 and FIG. 6: in FIG. 4 the pixel groups facing the boundary positions are aligned in straight lines in both the horizontal and vertical directions, whereas in FIG. 6 the pixels facing the boundary positions are in a dispersed arrangement in both the horizontal and vertical directions. That is, in the expansion of FIG. 4 the positions where block noise tends to occur are concentrated, whereas in the expansion of FIG. 6 the positions where block noise tends to occur are dispersed.
In the scheme of FIGS. 3 and 4, where the block noise occurrence positions are continuous in the horizontal and vertical directions, block noise is conspicuous when the compressed RAW data is decompressed, as shown in FIG. 7B. In contrast, in the scheme of FIGS. 5 and 6, where the block noise occurrence positions are dispersed in the horizontal and vertical directions, block noise is much less noticeable when the compressed RAW data is decompressed, as shown in FIG. 7A.
FIG. 8B applies the block noise occurrence positions of FIG. 4, which corresponds to FIG. 7B, to FIG. 7B, and FIG. 8A applies the block noise occurrence positions of FIG. 6, which corresponds to FIG. 7A, to FIG. 7A. FIGS. 8A and 8B are shown for reference to make the difference between FIG. 7A and FIG. 7B clear; in them the background is blackened so that the block noise occurrence positions (white portions) stand out. In the case of FIG. 8B, the block noise occurrence positions are concentrated along straight lines and the degree of concentration is extremely high. As a result, as shown in FIG. 7B, image-quality degradation occurs in which the image becomes discontinuous at the periodic boundary positions of the compression processing unit blocks, and the degradation is conspicuous. In contrast, in the case of FIG. 8A, the block noise occurrence positions are widely dispersed in two dimensions and their degree of concentration is extremely low. As a result, as shown in FIG. 7A, although image-quality degradation does occur, it is not very noticeable.
In the above case, where the compression processing unit block is 4 pixels × 4 pixels, the four color-specific plane data 41, 42, 43, and 44 were successively shifted by one pixel horizontally and one pixel vertically; in the case of JPEG, where the compression processing unit block is 8 pixels × 8 pixels, the four color-specific plane data 41, 42, 43, and 44 should be successively shifted by two pixels horizontally and two pixels vertically.
FIG. 5 shows the case where the compression processing unit block is 4 pixels × 4 pixels and the four color-specific plane data 41, 42, 43, and 44 are successively shifted by one pixel horizontally and one pixel vertically. If the compression processing unit block is the 8-pixel × 8-pixel block of JPEG, the vertically extending boundary positions 45 of the horizontal period appear every 8 columns and the horizontally extending boundary positions 46 of the vertical period appear every 8 rows. To stagger the positions where block noise tends to occur, the four color-specific plane data 41, 42, 43, and 44 should then be successively shifted by two pixels horizontally and two pixels vertically. In this way, in the original large-size image data (Bayer array) after decompression, the block noise occurrence positions of the respective colors are prevented from overlapping and the positions where block noise tends to occur are widely dispersed, so that visible degradation of image quality can be suppressed.
In general, when the compression processing unit block is a pixels × a pixels, the four color-specific plane data should be successively shifted by a/4 pixels horizontally and a/4 pixels vertically, where a is a multiple of 4 (a = 4, 8, 12, 16, ...).
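Expressed as code, the rule reduces to a quarter-block stagger per plane; the helper below is only a transcription of this sentence, with an illustrative name.

```python
def stagger_for_block(block_size: int) -> int:
    """Per-plane shift, in pixels, applied both horizontally and vertically
    between successive color planes for an a x a compression block,
    where a is a multiple of 4 (4, 8, 12, 16, ...)."""
    if block_size % 4 != 0:
        raise ValueError("block size is assumed to be a multiple of 4")
    return block_size // 4

# stagger_for_block(4) == 1  (4 x 4 blocks, as in the simplified example)
# stagger_for_block(8) == 2  (8 x 8 blocks, as in JPEG)
```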
Regarding the RAW data reconstructor 3, the arrangement of the four color-specific plane data 41, 42, 43, and 44 within one frame of image data may be not only the two-dimensional arrangement along both the horizontal and vertical directions described above, but also, as shown in FIG. 9, a vertical one-dimensional arrangement in a single column along the vertical direction or a horizontal one-dimensional arrangement in a single row along the horizontal direction.
The order in which the four color-specific plane data 41, 42, 43, and 44 are arranged need not be specified in particular; by recording the arrangement information as additional information attached to the data after compression, the plane data can be placed at arbitrary positions. In the order [upper left, upper right, lower left, lower right], the arrangement may be [B, G, g, R] as in FIG. 1, or alternatively [G, B, R, g] or [R, G, g, B].
In the case of the vertical one-dimensional arrangement in a single column along the vertical direction, the overlap of block noise occurrence positions can be reduced by inserting the fixed data 47 between vertically adjacent color-specific plane data 41, 42, 43, and 44 and shifting each color-specific plane data vertically relative to the periodic boundary positions of the compression processing unit blocks. Although this results in a two-dimensional shift, it is even better to shift the pixels in the horizontal direction as well. Even with such a two-dimensional shift, the arrangement areas a1, a2, a3, and a4 remain lined up one-dimensionally along the vertical direction.
Similarly, in the case of the horizontal one-dimensional arrangement in a single row along the horizontal direction, the overlap of block noise occurrence positions can be reduced by inserting the fixed data 47 between horizontally adjacent color-specific plane data 41, 42, 43, and 44 and shifting each color-specific plane data horizontally relative to the periodic boundary positions of the compression processing unit blocks. Although this results in a two-dimensional shift, it is even better to shift the pixels in the vertical direction as well. Even with such a two-dimensional shift, the arrangement areas a1, a2, a3, and a4 remain lined up one-dimensionally along the horizontal direction.
[Example]
Hereinafter, a preferred example of the image processing device and the image processing method according to the present invention will be described in detail.
FIG. 10 is a block diagram showing the schematic configuration of an image capturing device equipped with an image processing device that compresses reconstructed RAW data generated from RAW data according to an example of the present invention. This image capturing device 50 uses an image sensor of the type that performs color separation and captures images; it can record images in JPEG format and can also record RAW data obtained immediately after A/D conversion.
In FIG. 10, 50 is the image capturing device, 60 is a single-chip imaging unit, and 70 is an image processing device. The image capturing device 50 includes the imaging unit 60 and the image processing device 70. The imaging unit 60 includes an optical lens 61, an optical low-pass filter 62, a color filter 63, an image sensor 64, and an analog front-end unit 65. The image processing device 70 includes a CPU (Central Processing Unit) 71, a ROM (Read Only Memory) 72, a RAM (Random Access Memory) 73, a preprocessing unit 74, a memory control unit 75, an image memory 76, an image signal processing unit 77, a compression/decompression processing unit (encoder/decoder) 78, a recording media interface unit 79, a display processing unit 80, and a monitor interface unit 81. Reference numeral 91 denotes an operation panel and 92 denotes a recording medium. The configuration of each unit is described below.
The image sensor 64 is a CCD-type or CMOS-type image sensor on whose light-receiving surface a large number of photodiodes (photosensitive pixels) are arranged two-dimensionally. The photodiodes photoelectrically convert the subject information that has passed through the optical lens 61 and the optical low-pass filter 62. The optical low-pass filter 62 removes high-frequency components at or above the sampling frequency determined by the pixel pitch of the image sensor 64 and related factors, thereby preventing aliasing in the final image after image reproduction (signal processing). Aliasing is the phenomenon in which, when one attempts to sample a waveform containing frequency components exceeding half the sampling frequency, frequency components that should not exist appear; in a high-contrast image such as black characters or lines drawn on a white background, the characters and lines collapse and a pattern different from the original image appears. The color filter 63 has a predetermined color arrangement in which one of the colors R, G, and B is present at the position corresponding to each pixel of the image sensor 64, and performs color selection of the light incident on the photodiodes serving as the light-receiving elements.
FIGS. 11A to 11C show examples of primary-color filter arrangements. In the Bayer arrangement shown in FIG. 11A, the light-receiving elements are arranged in a square matrix at a constant pitch in both the row and column directions. In the honeycomb arrangement shown in FIG. 11B, the centers of the geometric shapes of the light-receiving elements (photodiodes) are shifted by half a pitch in the row and column directions. On the actual imaging surface of the image sensor 64, the structure of the pixel arrangement repeats periodically in the horizontal and vertical directions. FIG. 11C shows the frequency characteristics in the case of the Bayer arrangement.
The subject image that passes through the optical lens 61, the optical low-pass filter 62, and the color filter 63 and is formed on the light-receiving surface of the image sensor 64 is converted by each photodiode into an amount of signal charge corresponding to the amount of incident light, and is sequentially read out as a voltage signal (image signal) corresponding to the signal charge on the basis of pulses supplied from a driver circuit (not shown). The image sensor 64 has an electronic shutter function that controls the charge accumulation time (shutter speed) of each photodiode by the timing of a shutter gate pulse (not shown). The operation of the image sensor 64 (exposure, readout, and so on) is controlled by the CPU 71.
The analog front-end unit 65 performs processing such as analog gain adjustment and CDS (correlated double sampling) on the image signal output from the image sensor 64, and then converts it into a digital signal with its built-in A/D converter. The analog front-end unit 65 outputs the RAW data obtained immediately after A/D conversion to the preprocessing unit 74 of the image processing device 70.
The preprocessing unit 74 in the image processing device 70 includes an auto-calculation unit that performs the calculations required for AF and AE control. The preprocessing unit 74 receives the RAW data from the analog front-end unit 65, performs focus evaluation value calculation, AE calculation, and so on in the auto-calculation unit, and reports the results to the CPU 71. In the mode for recording compressed RAW data, the preprocessing unit 74 adjusts the black DC level, which serves as the data reference, of the Bayer-array RAW data having the four color components B, G, g, and R digitized by the A/D conversion of the analog front-end unit 65.
The memory control unit 75 in the image processing device 70 relays and controls the exchange of data and signals between the image memory 76 and the preprocessing unit 74, the image signal processing unit 77, the compression/decompression processing unit 78, the recording media interface unit 79, and the display processing unit 80. A RAW data reconstructor is built into the memory control unit 75. This RAW data reconstructor generates the four color-specific plane data 41, 42, 43, and 44, rearranges them into one frame of image data, and writes them into the memory space of the image memory 76. When generating the reconstructed RAW data, the RAW data reconstructor in the memory control unit 75 shifts the reference positions of the four color-specific plane data 41, 42, 43, and 44 relative to the periodic boundary positions of the compression processing unit blocks used in the compression/decompression processing unit 78.
The CPU 71 in the image processing device 70 is a control unit that performs overall control of the image capturing device 50 according to predetermined programs and, in cooperation with the ROM 72 and the RAM 73, controls the preprocessing unit 74, the image signal processing unit 77, the recording media interface unit 79, and the operation panel 91. The ROM 72 stores the programs executed by the CPU 71 and the various data required for control, and the RAM 73 is used as a work area for the CPU 71. The CPU 71 controls the operation of each circuit in the image capturing device 50 on the basis of instruction signals from the operation panel 91. In accordance with the instruction signals input from the operation panel 91 and the various shooting conditions (exposure conditions, presence or absence of strobe emission, shooting mode, and so on), the CPU 71 controls the imaging unit 60 including the image sensor 64, and performs automatic exposure (AE) control, automatic focus (AF) control, auto white balance (AWB) control, lens drive control, image processing control, read/write control of the recording medium 92, and so on.
When the CPU 71 detects a half-press of the release switch on the operation panel 91, it performs automatic focus (AF) control, and when it detects a full press of the release switch, it starts exposure and readout control for capturing an image for recording. The CPU 71 also controls the recording media interface unit 79 so that the captured image data is recorded on the recording medium 92 according to the recording mode. Further, the CPU 71 sends commands to a strobe control circuit (not shown) as necessary to control the light emission of a flash tube (light-emitting unit) such as a xenon tube.
The image signal processing unit 77 is configured as a unit that, while using the image memory 76 as a work memory via the memory control unit 75, performs various kinds of processing such as synchronized color interpolation, white balance adjustment, gamma correction, luminance/color-difference signal generation, contour enhancement, scaling (enlargement/reduction) by an electronic zoom function, and pixel-count conversion (resizing), and it processes the image signal in accordance with commands from the CPU 71.
The compression/decompression processing unit 78 reads image data from the image memory 76 via the memory control unit 75, performs compression according to the compression-encoding algorithm corresponding to the specified compression format, and stores the compressed RAW data in the image memory 76 via the memory control unit 75. The compression/decompression processing unit 78 also decompresses compressed RAW data read from the recording medium 92 and stores the restored RAW data in the image memory 76 via the memory control unit 75. Algorithms usable for the compression/decompression processing include MPEG and others in addition to JPEG.
The recording media interface unit 79 reads image data from the image memory 76 via the memory control unit 75 and transfers it to the recording medium 92 for recording. It also reads image data from the recording medium 92 and transfers it to the memory control unit 75 for decoding.
The operation panel 91 is the means by which the user inputs various instructions to the image capturing device 50, and includes various operating members such as a mode selection switch for selecting the operation mode of the image capturing device 50, a cross key for inputting instructions such as menu item selection (cursor movement) and frame-by-frame advance/reverse of reproduced images, an execution key for confirming (registering) a selected item or instructing the execution of an operation, a cancel key for erasing a desired object such as a selected item or canceling an instruction, a power switch, a zoom switch, and a release switch.
As the recording medium 92 for storing image data, various recording media can be used, such as magnetic disks, optical disks, and magneto-optical disks, in addition to semiconductor memory represented by memory cards. The medium is not limited to removable media; it may also be a recording medium (internal memory) built into the image capturing device 50.
Next, the operation of the image capturing device 50 of this example configured as described above will be explained. The subject image that passes through the optical lens 61, the optical low-pass filter 62, and the color filter 63 of the imaging unit 60 and is formed on the light-receiving surface of the image sensor 64 is converted by each photodiode into an amount of signal charge corresponding to the incident light, read out sequentially as a voltage signal (image signal) corresponding to the signal charge on the basis of pulses supplied from a driver circuit (not shown), and sent to the analog front-end unit 65 as an analog image signal in the Bayer arrangement of the four color components B, G, g, and R. The analog image signal from the image sensor 64 is digitized by the A/D converter in the analog front-end unit 65, and the Bayer-array image data is sent to the preprocessing unit 74.
In the mode for recording compressed RAW data, the Bayer-array image data input to the preprocessing unit 74 has its black DC level, which serves as the data reference, adjusted and is then sent to the memory control unit 75 as RAW data. In the memory control unit 75, the data-rearranging write control of the built-in RAW data reconstructor generates the four color-specific plane data 41, 42, 43, and 44 and arranges them into one frame of image data written into the memory space of the image memory 76. This is the reconstructed RAW data serving as the compression processing unit. In generating this reconstructed RAW data, the reference positions of the four color-specific plane data 41, 42, 43, and 44 are shifted relative to the periodic boundary positions of the compression processing unit blocks used in the compression/decompression processing unit 78. This is to prevent, in advance, the block noise occurrence positions of the respective colors from overlapping when the compressed RAW data is later read from the recording medium 92, decompressed, and displayed on a monitor.
 Next, the reconstructed RAW data written in the image memory 76 is input to the compression/decompression processing unit 78 via the memory control unit 75 as one component data. In parallel, fixed data with no luminance variation is input to the other component data input terminals of the compression/decompression processing unit 78, and compression is performed. The resulting compressed RAW data is written back into the image memory 76 via the memory control unit 75. One file, which is the unit of compression processing, contains the four color-specific plane data 41, 42, 43, and 44, and they are compressed together in a single pass.
 Next, for the data written into the image memory 76 as one set of compressed RAW data, the CPU 71 adds a header to form an image file in the JPEG file format, then acquires information useful for image search and recognition, such as the format information of the compressed RAW data and the shooting conditions, adds it to the image file header, and records the file on the recording medium 92 via the memory control unit 75 and the recording media interface unit 79.
 When an image that has been captured and recorded on the recording medium 92 is played back for confirmation on the imaging apparatus 50, the compressed file is read from the recording medium 92. That is, when a playback instruction is given on the operation panel 91, the CPU 71 controls the compression/decompression processing unit 78 and the recording media interface unit 79. The compressed RAW data read from the recording medium 92 is written into the image memory 73 via the recording media interface unit 79 and the memory control unit 75. The compressed RAW data written in the image memory 73 is then transferred to the compression/decompression processing unit 78 via the memory control unit 75, and the decompression section of the compression/decompression processing unit 78 decompresses it.
 According to this embodiment, the reconstructed RAW data, formed by arranging the four color-specific plane data 41, 42, 43, and 44 in one file serving as the compression processing unit, is the target of compression. The four color-specific plane data 41, 42, 43, and 44 can therefore be compressed together in a single pass, and compression efficiency is greatly improved compared with the conventional technique of repeatedly compressing the four color-specific plane data 41, 42, 43, and 44 one at a time while switching between them in sequence.
 This large improvement in compression efficiency is particularly advantageous when the imaging apparatus 50 has a continuous-shooting function, and it resolves the prior-art problem that, as image sensors have gained more pixels in recent years, high-speed continuous shooting in the compressed-RAW-data recording mode is hindered (continuous recording pauses during operation).
 Furthermore, the compression processor 5 (compression/decompression processing unit 78) does not need to be a set of compression processors operating in parallel; a single compression processor suffices, so the circuit scale does not increase and there is no need to specially boost the processing capacity of the CPU. As shown in FIG. 2A, a general-purpose compression processor having input terminals for a luminance signal (Y) and color-difference signals (Cr/Cb) as its component data inputs can be used, so existing resources can be used effectively.
 A note on the effective use of resources: as shown in FIG. 12A, a general JPEG compression processor 5 (compression processor or compression/decompression processor) has, as its component data inputs, an input terminal for the luminance signal (Y) and input terminals for the color-difference signals (Cr, Cb). If the reconstructed RAW data generated by the RAW data reconstructor 4 (memory control unit 75) is fed to the luminance signal (Y) input terminal and fixed data with no luminance variation is fed to the color-difference (Cr, Cb) input terminals, a general JPEG compression processor can be reused as is.
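 As a minimal sketch of this reuse of a general YCbCr JPEG codec, the snippet below (continuing from the `reconstruct_raw` sketch above) feeds the packed frame to the luminance component and constant mid-grey data to both chroma components. Using Pillow here is purely illustrative; baseline JPEG is 8-bit, so this toy example truncates the assumed 12-bit samples, whereas the embodiment (or JPEG 2000 / JPEG XR, mentioned later) would keep the full bit depth.

```python
import numpy as np
from PIL import Image

bayer  = np.random.randint(0, 4096, (2048, 3072), dtype=np.uint16)  # stand-in 12-bit sensor frame
packed = reconstruct_raw(bayer)                                      # from the earlier sketch

y  = (packed >> 4).astype(np.uint8)   # crude 12-bit -> 8-bit scaling for this toy example only
cb = np.full_like(y, 128)             # fixed, luminance-neutral data on the Cb input
cr = np.full_like(y, 128)             # fixed, luminance-neutral data on the Cr input

img = Image.merge("YCbCr", [Image.fromarray(p) for p in (y, cb, cr)])
img.save("compressed_raw.jpg", quality=95, subsampling=0)  # 4:4:4, so the flat chroma costs little
```

 Because the chroma planes are constant, nearly all of the code stream is spent on the luminance component that carries the packed RAW samples, which is the point of the arrangement.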
 According to this embodiment, what is recorded on the recording medium 92 is compressed RAW data, so the image is unaffected by the signal processing of the preprocessing unit 74 and the image signal processing unit 77, and image quality can be kept high. In addition, because efficient lossy compression by JPEG is used, the generated compressed RAW data has a small file size and the recording medium 92 is used more efficiently. This high utilization efficiency of the recording medium 92, combined with the greatly improved compression efficiency obtained by compressing reconstructed RAW data in which the color-specific plane data are arranged in one file serving as the compression processing unit, makes high-speed continuous shooting even more advantageous.
 When the RAW data reconstructor in the memory control unit 75 arranges the four color-specific plane data 41, 42, 43, and 44 to generate the reconstructed RAW data of the compression processing unit, it shifts the reference position of each color-specific plane data relative to the periodic boundary positions of the compression processing unit blocks in the compression/decompression processing unit 78. Consequently, when the compressed RAW data is read from the recording medium 92, decompressed by the compression/decompression processing unit 78, and displayed on a monitor via the display processing unit 80 and the monitor interface unit 81, the overlap of the block-noise positions of the respective colors is greatly reduced. The occurrence of block noise can thus be suppressed, and the characteristic image-quality degradation caused by the block-noise positions of the colors coinciding at the boundaries of the compression processing unit blocks can be reduced.
 In the description above, the RAW data rearranged and written into the image memory 76 as image data for one frame is input to the compression/decompression processing unit 78 via the memory control unit 75 as one component data, and the other component input data is fixed data with no luminance variation. Alternatively, in order to make effective use of a single compression pass and generate a new compressed image file, other image data may be supplied to the other component inputs and processed. Examples of such other image data include small YCrCb data or small RGB data reduced and resized for display.
 For example, as shown in FIG. 12B, one frame of reconstructed RAW data 4 is input to the luminance signal (Y) input terminal of the compression processor 5, while small YCrCb data 6 of an image composed of Y, Cr, and Cb reduced and resized for display, or small RGB data 7 of an image composed of R, G, and B, is input to the color-difference signal (Cr/Cb) input terminals. The compression processor 5 then compresses the reconstructed RAW data 4 and the small YCrCb data 6 or small RGB data 7 simultaneously.
 When a captured image is played back for confirmation on the imaging apparatus 50, the compressed file is read from the recording medium 92. In this case, the small YCrCb data 6 or the small RGB data 7 obtained by the compression/decompression processing unit 78 can be transferred directly to the display processing unit 80, without image signal processing, for a playback preview.
 In the above embodiment, JPEG was described as an example of the still-image compression coding algorithm, but JPEG 2000 or JPEG XR, which can handle input data of more than 8 bits, up to 12 bits, may also be used. Compared with JPEG, these compression coding algorithms cause less image-quality degradation due to compression, particularly at high compression ratios. They also allow both lossless and lossy compression with the same algorithm, and they have the advantage that, unlike JPEG, the compression ratio can be adjusted without recompression by deleting part of the code stream of the encoded data (post-quantization).
 If reconstructed RAW data for one frame, reconstructed from the four color-specific plane data, has already been created and stored in the image memory 76, the reconstructed RAW data is read from the image memory 76 and input to the compression/decompression processing unit 78; if the reconstructed RAW data has not been created in advance, one frame of reconstructed RAW data is created during the read-out stage of memory control. Either form is encompassed by the present invention. In detail, the creation of one frame of reconstructed RAW data was described above as a rearrangement performed when the RAW data is written into the image memory 76, but the data may instead be stored in the image memory 73 in its original Bayer arrangement and rearranged at the time it is read out via the memory control unit 75 for compression; either form is encompassed by the present invention.
 As described above, the RAW data is divided into four color-specific plane data, rearranged into one frame of reconstructed RAW data, and compressed. Whether the compression is lossless or lossy, RAW data unaffected by the signal processing inside the imaging apparatus can therefore be efficiently compressed and recorded in a single pass. In the case of lossless compression, the original RAW data can be reproduced completely by decoding and decompressing the recorded coded data with the compression/decompression processing unit 78 or an external decoder.
 In the above embodiment, an image sensor having a color separation filter was described as an example, but the present invention can of course also be applied when an image sensor of a type that performs similar color separation by means other than a color filter is used.
 The compressed RAW data recorded by the imaging apparatus 50 can be reproduced (developed) not only by the imaging apparatus itself but also by a dedicated image processing apparatus or a personal computer. Specifically, using the placement information of each color template added to the compressed RAW data, the RAW data rearranged into the plural color regions after decompression may be rearranged back into the original RAW data array, or the data may be read out in the original arrangement via the memory control unit 75 during the reproduction (development) process.
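 A sketch of the corresponding development-side rearrangement, assuming the same illustrative quadrant layout, block size, and shifts as the `reconstruct_raw` sketch above (in practice these parameters would be taken from the placement information recorded in the file header):

```python
import numpy as np

def restore_bayer(packed, h, w, block=16):
    """Inverse of reconstruct_raw: extract the four color-specific planes
    from the decompressed frame and re-interleave them into the original
    Bayer array, using the same quadrant origins and shifts."""
    ph, pw = h // 2, w // 2
    quads  = [(0, 0), (0, pw), (ph, 0), (ph, pw)]
    shifts = [(0, 0), (0, block // 2), (block // 2, 0), (block // 2, block // 2)]
    phases = [(0, 0), (0, 1), (1, 0), (1, 1)]          # B, G, g, R sample positions

    bayer = np.zeros((h, w), dtype=packed.dtype)
    for (qy, qx), (sy, sx), (py, px) in zip(quads, shifts, phases):
        bayer[py::2, px::2] = packed[qy + sy: qy + sy + ph, qx + sx: qx + sx + pw]
    return bayer
```

 With a lossless codec in between, `restore_bayer(reconstruct_raw(bayer), *bayer.shape)` reproduces the original Bayer array exactly, matching the point made above about lossless compression; with lossy JPEG the recovered values differ from the original only by the compression error.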
 The processing procedure for carrying out the present invention and the functions of the means involved can also be realized by a program on a computer such as a personal computer or a microcomputer. The image processing need not implement part or all of the processing in dedicated hardware (signal processing circuits); part of it may be realized by a program. A program for that purpose and the various recording (storage) media on which it is recorded are also encompassed by the present invention, as, of course, is a processing method according to such a procedure.
 The arrangement of the color filter 63 is not limited to the example shown in FIG. 11; various arrangements such as RGB stripes are possible. Although primary-color filters are used in this example, the present invention is not limited to primary-color filters; a complementary-color filter composed of yellow (Y), magenta (M), cyan (C), and green (G), any combination of primary and complementary colors, or white (W) may also be used.
 In an image sensor 64 typified by the CMOS type, a noise processing unit and an A/D conversion unit may be mounted inside the image sensor 64 as a means of realizing high-speed readout, so that a digital signal is output directly from the image sensor.
 The present invention is useful as a technique for greatly improving compression efficiency, without increasing circuit scale or CPU processing capacity, in imaging apparatuses equipped with an image sensor of the type that captures images with color separation, such as digital still cameras, digital video cameras, stand-alone image scanners, and image scanners incorporated in copying machines. When RAW data that users can readily process (for example by retouching) is to be acquired in compressed form, reconstructed RAW data formed by arranging plural types of color-specific plane data in one file serving as the compression processing unit is made the target of compression, and the compressed RAW data is obtained in a single compression pass.
 1: Imaging unit (image sensor) having a Bayer-array color filter
 2: Bayer-array RAW data
 3: RAW data reconstructor
 4: Reconstructed RAW data for one frame, composed of four color-specific plane data
 5: Compression processor
 6: Small YCrCb data
 7: Small RGB data
 a1: First arrangement area, corresponding to color component B (blue)
 a2: Second arrangement area, corresponding to color component G (first green)
 a3: Third arrangement area, corresponding to color component g (second green)
 a4: Fourth arrangement area, corresponding to color component R (red)
 41: First color-specific plane data, placed in the first arrangement area corresponding to color component B (blue)
 42: Second color-specific plane data, placed in the second arrangement area corresponding to color component G (first green)
 43: Third color-specific plane data, placed in the third arrangement area corresponding to color component g (second green)
 44: Fourth color-specific plane data, placed in the fourth arrangement area corresponding to color component R (red)
 45: Periodic boundary positions of the compression processing unit blocks in the horizontal direction
 46: Periodic boundary positions of the compression processing unit blocks in the vertical direction
 47: Fixed data with no luminance variation
 50: Imaging apparatus
 61: Optical lens
 62: Optical low-pass filter
 63: Color filter
 64: Image sensor
 65: Analog front-end unit
 70: Image processing apparatus
 71: CPU
 72: ROM
 73: RAM
 74: Preprocessing unit
 75: Memory control unit (RAW data reconstructor)
 76: Image memory
 77: Image signal processing unit
 78: Compression/decompression processing unit (compression processor)
 79: Recording media interface unit
 80: Display processing unit
 81: Monitor interface unit
 91: Operation panel
 92: Recording medium

Claims (15)

  1.  An image processing apparatus that compresses RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, the apparatus comprising:
     a RAW data reconstructor that receives the RAW data, decomposes the RAW data by color component and reassembles it to generate a plurality of color-specific plane data, and arranges the plurality of color-specific plane data in a plurality of arrangement areas partitioned by color component to generate reconstructed RAW data collected into one file serving as a compression processing unit; and
     a compression processor that receives and compresses the reconstructed RAW data of the compression processing unit generated by the RAW data reconstructor.
  2.  The image processing apparatus according to claim 1, wherein, when arranging the plurality of color-specific plane data in the one file, the RAW data reconstructor shifts the reference position of each color-specific plane data by a predetermined number of pixels in both the vertical and horizontal directions relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  3.  The image processing apparatus according to claim 1, wherein, when arranging the plurality of color-specific plane data in the one file, the RAW data reconstructor shifts the reference position of each color-specific plane data by a predetermined number of pixels in either the vertical direction or the horizontal direction relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  4.  The image processing apparatus according to claim 1, wherein the RAW data reconstructor generates the reconstructed RAW data by placing fixed data in the non-image areas between the arrangement areas of the plurality of color-specific plane data.
  5.  The image processing apparatus according to claim 1, wherein the compression processor has a plurality of component input terminals and simultaneously compresses the reconstructed RAW data input from one of the component input terminals and fixed data input from the other component input terminals.
  6.  The image processing apparatus according to claim 1, wherein the compression processor has a plurality of component input terminals and simultaneously compresses the reconstructed RAW data input from one of the component input terminals and other image data input from the other component input terminals.
  7.  The image processing apparatus according to claim 1, wherein the compression processor lossily compresses the reconstructed RAW data.
  8.  An image processing method for compressing RAW data in a data format in which a plurality of types of color components constituting a color image are repeatedly arranged on a pixel array according to a certain rule, the method comprising:
     receiving the RAW data, decomposing the RAW data by color component and reassembling it to generate a plurality of color-specific plane data, and arranging the plurality of color-specific plane data in a plurality of arrangement areas partitioned by color component to generate reconstructed RAW data collected into one file serving as a compression processing unit; and
     receiving and compressing the reconstructed RAW data of the compression processing unit generated in the step of generating the reconstructed RAW data.
  9.  The image processing method according to claim 8, wherein, in the step of generating the reconstructed RAW data, when the plurality of color-specific plane data are arranged in the one file, the reference position of each color-specific plane data is shifted by a predetermined number of pixels in both the vertical and horizontal directions relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  10.  The image processing method according to claim 8, wherein, in the step of generating the reconstructed RAW data, when the plurality of color-specific plane data are arranged in the one file, the reference position of each color-specific plane data is shifted by a predetermined number of pixels in either the vertical direction or the horizontal direction relative to the periodic boundary positions of the compression processing unit blocks in the compression processor.
  11.  The image processing method according to claim 8, wherein the step of generating the reconstructed RAW data generates the reconstructed RAW data by placing fixed data in the non-image areas between the arrangement areas of the plurality of color-specific plane data.
  12.  The image processing method according to claim 8, wherein the compressing step receives the reconstructed RAW data and fixed data through mutually separate paths and compresses the reconstructed RAW data and the fixed data simultaneously.
  13.  The image processing method according to claim 8, wherein the compressing step receives the reconstructed RAW data and other image data through mutually separate paths and compresses the reconstructed RAW data and the other image data simultaneously.
  14.  The image processing method according to claim 8, wherein the compressing step lossily compresses the reconstructed RAW data.
  15.  An imaging apparatus comprising:
     an imaging unit that converts an optical image captured by an image sensor of a type that performs color separation into an analog electrical signal and further converts it into digital RAW data; and
     the image processing apparatus according to claim 1.
PCT/JP2010/004388 2009-07-23 2010-07-05 Image processing device, image processing method, and image capturing device WO2011010431A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009171953A JP2011029809A (en) 2009-07-23 2009-07-23 Image processing device, image processing method, and image capturing device
JP2009-171953 2009-07-23

Publications (1)

Publication Number Publication Date
WO2011010431A1 true WO2011010431A1 (en) 2011-01-27

Family

ID=43498913

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/004388 WO2011010431A1 (en) 2009-07-23 2010-07-05 Image processing device, image processing method, and image capturing device

Country Status (2)

Country Link
JP (1) JP2011029809A (en)
WO (1) WO2011010431A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6142833B2 (en) 2014-03-31 2017-06-07 ブラザー工業株式会社 Electronics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005286415A (en) * 2004-03-26 2005-10-13 Olympus Corp Image compression method and image compression device
JP2006173931A (en) * 2004-12-14 2006-06-29 Canon Inc Image processing apparatus, control method thereof, computer program, and computer readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015064402A1 (en) * 2013-11-01 2015-05-07 ソニー株式会社 Image processing device and method
WO2015064403A1 (en) * 2013-11-01 2015-05-07 ソニー株式会社 Image processing device and method
JPWO2015064402A1 (en) * 2013-11-01 2017-03-09 ソニー株式会社 Image processing apparatus and method
JPWO2015064403A1 (en) * 2013-11-01 2017-03-09 ソニー株式会社 Image processing apparatus and method
US10356442B2 (en) 2013-11-01 2019-07-16 Sony Corporation Image processing apparatus and method
US10397614B2 (en) 2013-11-01 2019-08-27 Sony Corporation Image processing apparatus and method
CN113705553A (en) * 2020-05-20 2021-11-26 深圳清华大学研究院 Visual task execution method and device, electronic equipment, storage medium and system
CN113705553B (en) * 2020-05-20 2024-01-26 深圳清华大学研究院 Visual task execution method, device, electronic equipment, storage medium and system

Also Published As

Publication number Publication date
JP2011029809A (en) 2011-02-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10802053

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10802053

Country of ref document: EP

Kind code of ref document: A1