WO2013179560A1 - Image processing apparatus and image processing method
- Publication number
- WO2013179560A1 (PCT/JP2013/002737)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- area
- folded
- unnecessary
- region
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/3872—Repositioning or masking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
Definitions
- the present invention relates to an image processing apparatus and an image processing technique for removing an image of a designated unnecessary area in a still image and a moving image.
- the present invention is widely used in the field of video and image information processing apparatuses such as digital video cameras, digital cameras, and DVD recorders.
- in this conventional technique, the user designates an unnecessary area in advance, the area of the designated unnecessary region is calculated, and the processing contents are changed according to the size of that area.
- specifically, if the area of the unnecessary region is less than a threshold, the isotropic diffusion equation and the anisotropic diffusion equation are used, and if it is equal to or greater than the threshold, the advection equation and the Navier-Stokes (NS) equation are used so that the texture is not inadvertently blurred; the unnecessary area is thus complemented from its periphery (see, for example, Patent Document 1).
- in Patent Document 1, however, there is a problem that a large amount of calculation is required for the image processing, so the image processing takes a long time; performing the processing in a short time requires an apparatus with a large processing capacity, which is difficult. There is also a problem that when the area of the unnecessary region is large, the processed portion often looks unnatural.
- the present invention solves the above-described conventional problems, and provides an image processing apparatus that requires only a small amount of calculation and can realize a more natural unnecessary-area complementing process.
- in order to solve the above problems, an image processing apparatus according to an aspect of the present invention includes an image acquisition unit that acquires an input image, and an acquisition unit that acquires area information, which is information indicating an unnecessary area in the input image.
- the first folded image and the second folded image are generated by folding pixels back at the boundary of the unnecessary area. For this reason, the unnecessary area can be complemented at high speed with a natural image including components equivalent in frequency to the texture around the unnecessary area.
- according to the image processing apparatus and the image processing method of the present invention, it is possible to perform a more natural unnecessary-region complementing process with a small amount of calculation.
- FIG. 1 is a configuration diagram of an image processing apparatus according to the first embodiment.
- FIG. 2 is a diagram showing a flowchart of processing in the first embodiment.
- FIG. 3 is a diagram for describing a method for generating a first folded image, a second folded image, and a composite image according to the first embodiment.
- FIG. 4 is a diagram showing a graph of weighting factors.
- FIG. 5 is a processing procedure diagram for generating a folded image in the first embodiment.
- FIG. 6 is a diagram illustrating a flow from setting an unnecessary area to an input image until a composite image is displayed.
- FIG. 7 is a diagram illustrating an example of a UI display.
- FIG. 8 is a diagram illustrating an example of a UI display.
- FIG. 9 is a diagram for explaining a processing method when there is an image different from the background image at a position close to the unnecessary area.
- FIG. 10 is a diagram for explaining a processing method when there is an image different from the background image at a position close to the unnecessary area.
- FIG. 11 is a configuration diagram of the image processing apparatus according to the second embodiment.
- FIG. 12 is a diagram showing a flowchart of processing in the second embodiment.
- FIG. 13 is a diagram showing an example of a boundary peripheral region of a mask image when texture analysis is performed in the second embodiment.
- FIG. 14 is a diagram illustrating a histogram representing the frequency in the edge direction.
- FIG. 15 is a diagram for describing a method of generating the first folded image and the second folded image in the second embodiment.
- FIG. 16 is a configuration diagram of the image processing apparatus according to the third embodiment.
- FIG. 17 is a diagram showing a flowchart of processing in the third embodiment.
- FIG. 18 is a diagram illustrating an example of a region division result in the third embodiment.
- FIG. 19 is a diagram showing the processing procedure of a conventional unnecessary object removing apparatus.
- FIG. 19 shows a processing flow in image processing of the conventional unnecessary object removing apparatus described in Patent Document 1.
- an unnecessary area designated by a UI such as a touch panel is converted into a mask image and acquired in step S101.
- in step S102, the area of the unnecessary region indicated by the mask image is calculated.
- in step S103, it is determined whether or not the area calculated in step S102 is greater than or equal to a threshold value. If the area of the unnecessary area is less than the threshold (S103: No), the anisotropic diffusion equation and the isotropic diffusion equation are used to perform an unnecessary-area complementing process that diffuses and propagates pixels from the periphery of the unnecessary area into the unnecessary area (S104). If the area is equal to or greater than the threshold (S103: Yes), the advection equation and the NS equation are used in combination for the diffusion and propagation, in order to prevent the complementing result from blurring (S105).
- the conventional configuration has a problem that the calculation amount is large because the calculation is repeatedly performed using a diffusion equation, a fluid transfer equation, and the like, and is propagated in time. Further, when the area of the unnecessary area is larger than a certain value on the basis of the NS equation, there is a problem that the result of complementary processing of the unnecessary area differs from the texture of the surrounding area, resulting in a blurred result.
- in order to solve such problems, an image processing apparatus according to an aspect of the present invention includes an image acquisition unit that acquires an input image, and an acquisition unit that acquires area information, which is information indicating an unnecessary area in the input image.
- with this configuration, the unnecessary region can be complemented with a natural image containing components equivalent in frequency to the texture around the unnecessary area.
- when the image generation unit determines that the number of pixels continuously arranged in the first direction in the first region is smaller than the number corresponding to the first width, the image generation unit may perform, as the first process, a process of acquiring the pixel group continuously arranged in the first direction and arranging that pixel group by repeatedly folding it back over the first width in the direction opposite to the first direction, and may generate the first folded image by repeatedly performing this process in a direction orthogonal to the first direction.
- similarly, when the number of pixels continuously arranged in the second direction in the second region is smaller than the number corresponding to the second width, the image generation unit may perform, as the second process, a process of acquiring the pixel group continuously arranged in the second direction of the second region and arranging that pixel group by repeatedly folding it back over the second width in the direction opposite to the second direction, and may generate the second folded image by repeatedly performing this process in a direction orthogonal to the second direction.
- in this way, the unnecessary area can be complemented at high speed with a natural image including components equivalent in frequency to the texture around the unnecessary area.
- the image processing apparatus may further include an area setting unit for setting a prohibited area, and the image generation unit may (i) when the prohibited area is closer to the first direction than the unnecessary area, generate the first folded image by repeatedly performing the first process, in a direction orthogonal to the first direction, in the area obtained by excluding the prohibited area from the first region, and (ii) when the prohibited area is closer to the second direction than the unnecessary area, generate the second folded image by repeatedly performing the second process, in a direction orthogonal to the second direction, in the area obtained by excluding the prohibited area from the second region.
- the first folded image or the second folded image is generated by acquiring the first pixel or the second pixel except for the prohibited area set by the area setting unit. For this reason, it is possible to prevent the first folded image or the second folded image from being generated using a texture that is clearly different from the texture around the unnecessary area.
- thereby, the unnecessary region can be complemented with a natural image containing components equivalent in frequency to the texture around the unnecessary area.
- the image processing apparatus may further include an area setting unit for setting a search area, and the image generation unit may (i) generate the first folded image by repeatedly performing the first process in the direction orthogonal to the first direction within the search area, and (ii) generate the second folded image by repeatedly performing the second process in the direction orthogonal to the second direction within the search area.
- with this configuration, the first folded image or the second folded image is generated by acquiring the first pixel or the second pixel from the search region set by the region setting unit. For this reason, if the search area is set appropriately, it is possible to prevent the folded images from being generated using a texture that is clearly different from the texture around the unnecessary area.
- the first direction may be a left direction from the boundary of the unnecessary area in the horizontal direction
- the second direction may be a right direction from the boundary of the unnecessary area in the horizontal direction.
- the image generation unit may further generate a third folded image by repeatedly performing, in a direction orthogonal to a third direction (for example, vertically upward from the unnecessary region), a third process in which a third pixel group, in which pixels corresponding to a third width of the unnecessary region in the third direction are continuously arranged in the third direction within a third region adjacent to the unnecessary region in the third direction, is folded back in the direction opposite to the third direction with respect to the boundary of the unnecessary area, and the image composition unit may generate the composite image by also combining this third folded image.
- the image composition unit may perform the synthesis by multiplying each of the plurality of pixel values of the first folded image by a first weight set for that pixel, multiplying each of the plurality of pixel values of the second folded image by a second weight set for that pixel, and then adding the weighted pixel values together.
- the image composition unit may set the first weight and the second weight such that the weight is larger as the pixel is closer to the boundary of the unnecessary region.
- the image composition unit may set the first weight and the second weight so that, for each of the plurality of pixels constituting the unnecessary area, the first weight, which is multiplied by the pixel value of the first folded image at the position corresponding to the pixel, and the second weight, which is multiplied by the pixel value of the second folded image at the position corresponding to the pixel, add up to 1.
- a more natural image can be generated when a folded image is synthesized by folding pixels from a plurality of directions.
- a texture analysis unit that determines the first direction and the second direction by analyzing the texture in a peripheral area of the unnecessary area may also be provided. Specifically, the texture analysis unit may detect the most frequent straight line from the edge strengths and edge angles obtained from the edge image of the input image, and determine the first direction and the second direction as the two directions along the most frequent straight line. Alternatively, the texture analysis unit may detect a high-frequency angle from the edge angles obtained for each pixel or region from the edge image of the input image, and determine the first direction and the second direction as directions perpendicular to the detected high-frequency angle.
- in general, an image taken by a user is rarely captured perfectly horizontally and is often slightly tilted. Therefore, horizontal components in the background area are often displayed tilted.
- with a texture analysis unit configured as above, the inclination angle of the image can be detected and reflected in the folding direction, so that a more natural composite image can be generated.
- the image processing apparatus may further include an area dividing unit that divides the area around the unnecessary area in the input image into a plurality of areas, each being a set of pixels having the same or similar characteristics, and, for each of the plurality of areas, the image generation unit may generate a divided folded image by repeatedly performing, in a direction orthogonal to a predetermined direction, a process of folding back and arranging, in the direction opposite to the predetermined direction with respect to the boundary of the unnecessary region, a pixel group in which pixels corresponding to the width of the unnecessary region in the predetermined direction are continuously arranged in the predetermined direction.
- an image display unit for displaying the result of the image composition unit may be provided.
- the present invention may also be realized as an image processing apparatus including a display unit capable of touch input by a user; a first detection unit that detects an area specified by touch input in an input image displayed on the display unit as an unnecessary area in the input image; a second detection unit that detects touch input again and, when the position detected by that touch input has moved by a predetermined amount or more, detects the direction of the movement as a first direction; an image generation unit that generates a first folded image by repeatedly performing, in a direction orthogonal to the first direction, a first process in which a first pixel group, in which pixels corresponding to the width of the unnecessary region in the first direction are continuously arranged in the first direction within a first region adjacent to the unnecessary region in the first direction, is folded back and arranged in the direction opposite to the first direction with respect to the boundary of the unnecessary area; an image combining unit that generates a combined image by combining the input image and the first folded image; and the display unit displaying the combined image.
- FIG. 1 is a configuration diagram of an image processing apparatus according to Embodiment 1 of the present invention.
- the image processing apparatus 100 of the present invention includes an image acquisition unit 101, a mask image acquisition unit 102, an image generation unit 110, and an image composition unit 105.
- the image generation unit 110 includes a first image generation unit 103 and a second image generation unit 104.
- the image processing apparatus 100 according to Embodiment 1 of the present invention may include an image display unit 106 as shown in FIG.
- as shown in FIG. 1, the image processing apparatus includes an image acquisition unit that acquires an input image; an acquisition unit that acquires region information, which is information indicating an unnecessary region in the input image; an image generation unit that (i) generates a first folded image by repeatedly performing, in a direction orthogonal to a first direction, a first process of folding back and arranging, in the direction opposite to the first direction with respect to the boundary of the unnecessary region, a first pixel group in which pixels corresponding to a first width of the unnecessary region in the first direction are continuously arranged in the first direction within a first region adjacent to the unnecessary region in the first direction, and (ii) generates a second folded image by repeatedly performing, in a direction orthogonal to a second direction, a second process of folding back and arranging, in the direction opposite to the second direction with respect to the boundary of the unnecessary region, a second pixel group in which pixels corresponding to a second width of the unnecessary region in the second direction are continuously arranged in the second direction within a second region adjacent to the unnecessary region in the second direction; and an image compositing unit that generates a composite image by combining the first folded image and the second folded image generated by the image generation unit.
- the image acquisition unit 101 acquires input images such as still images and moving images taken by a digital camera or a digital video camera.
- the image acquisition unit 101 sends the acquired input image to the mask image acquisition unit 102.
- the mask image acquisition unit 102 is an acquisition unit that acquires an input image sent from the image acquisition unit 101 and a mask image that represents an unnecessary area specified by a user interface such as a touch panel from the user.
- the mask image acquisition unit 102 sends the acquired input image and mask image to the first image generation unit 103 and the second image generation unit 104.
- the mask image is, for example, an image (for example, a binarized image) in which a luminance value of 255 is assigned to pixels in the unnecessary region and 0 to all other pixels.
- the mask image acquisition unit 102 is not limited to acquiring a mask image as long as it acquires region information indicating an unnecessary region. That is, the mask image is included in the area information.
- the method for identifying the unnecessary area is not limited.
- the unnecessary area may be specified by touching the user so as to trace the boundary of a certain area on the screen.
- the user may touch only one point on an image divided in advance into a plurality of areas, and an area including the touched point may be set as an unnecessary area.
- an algorithm that automatically detects an unnecessary area based on a predetermined criterion may also be provided instead of user specification. For example, face detection and upper-body detection may be performed by image processing or image recognition, and unnecessary-area candidates may be detected in advance for each face or person.
- motion detection may be performed by image processing or the like, and a moving object may be detected as a candidate for an unnecessary region, and a block unit having a close color may be used as a candidate for an unnecessary region by region division such as the MeanShift method.
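- as a rough illustration of the region information described above, the following is a minimal sketch, assuming a NumPy-based implementation and a rectangular designation; `mask_from_rect` and all other names are hypothetical and not part of the disclosure:

```python
import numpy as np

def mask_from_rect(height, width, top, left, bottom, right):
    # Binarized mask image: luminance 255 inside the unnecessary region,
    # 0 everywhere else (the representation described above).
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[top:bottom, left:right] = 255
    return mask

# Example: a 12x20 image whose unnecessary region covers rows 3-8, columns 5-15.
mask = mask_from_rect(12, 20, 3, 5, 9, 16)
```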
- the first image generation unit 103 acquires an input image and a mask image from the mask image acquisition unit 102, searches for a pixel in a first direction from a boundary of an unnecessary area indicated by the mask image (hereinafter referred to as “folding boundary”), A first folded image is generated by folding and arranging pixels searched for at the folding boundary.
- the first image generation unit 103 generates a first folded image that complements the pixels in the unnecessary area, and then sends the first folded image to the image composition unit 105.
- the folding boundary (hereinafter referred to as “first folding boundary”) in the first image generation unit 103 is a boundary that can be obtained by searching an unnecessary area in the first direction.
- the first image generation unit 103 generates the first folded image by repeatedly performing, in a direction orthogonal to the first direction, a first process in which a first pixel group, in which pixels corresponding to the first width of the unnecessary region in the first direction (hereinafter referred to as "first search width") are continuously arranged in the first direction within the first region adjacent to the unnecessary region in the first direction (hereinafter referred to as "first search region"), is folded back in the direction opposite to the first direction with the first folding boundary as a reference. More specifically, the first image generation unit 103 acquires a first pixel group lined up in a row in the first search region, and folds it back so that pixels that were closer to the first folding boundary are arranged closer to the boundary inside the unnecessary area.
- the second image generation unit 104 acquires the input image and the mask image from the mask image acquisition unit 102, searches for pixels in a second direction different from the first direction from the folding boundary of the unnecessary area indicated by the mask image, and generates a second folded image by folding back and arranging the searched pixels at the folding boundary.
- the second image generation unit 104 generates a second folded image that complements the pixels in the unnecessary area, and then sends it to the image composition unit 105.
- the folding boundary (hereinafter referred to as “second folding boundary”) in the second image generation unit 104 is a boundary that can be obtained by searching an unnecessary area in the second direction.
- the second image generation unit 104 generates the second folded image by repeatedly performing, in a direction orthogonal to the second direction, a second process in which a second pixel group, in which pixels corresponding to the second width of the unnecessary region in the second direction (hereinafter referred to as "second search width") are continuously arranged in the second direction within the second region adjacent to the unnecessary region in the second direction (hereinafter referred to as "second search region"), is folded back in the direction opposite to the second direction with the second folding boundary as a reference. More specifically, the second image generation unit 104 acquires a second pixel group lined up in a row in the second search region, and folds it back so that pixels that were closer to the second folding boundary are arranged closer to the boundary inside the unnecessary area.
- the image composition unit 105 acquires the first folded image, the second folded image, the input image, and the mask image from the first image generating unit 103 and the second image generating unit 104.
- the image composition unit 105 performs composition by weighting and superimposing pixels of the first folded image and the second folded image according to the distance from the folding boundary that is a reference for folding the first folded image and the second folded image. Generate an image. Thereby, the image composition unit 105 sends the composite image in which the unnecessary area of the acquired input image is complemented to the image display unit 106.
- the image display unit 106 displays an image composition result after complementing the unnecessary area sent from the image composition unit 105 on a liquid crystal display or the like.
- the image display unit 106 may be included in the image processing apparatus according to the present embodiment; in the case of an image processing apparatus that does not include a display unit, the result may be output to an external display unit.
- in step S11, the image acquisition unit 101 acquires an input image.
- in step S12, the mask image acquisition unit 102 acquires a mask image.
- the method for acquiring the mask image is as described above.
- in step S13a and step S13b, the first image generation unit 103 and the second image generation unit 104 generate a first folded image in the first direction and a second folded image in the second direction, respectively.
- the first direction and the second direction in the present embodiment are preset; in the following description, the first direction is leftward in the horizontal direction from the boundary of the unnecessary area, and the second direction is rightward in the horizontal direction from the boundary of the unnecessary area.
- the first direction and the second direction are not limited to this.
- in step S13a and step S13b, based on the first direction and the second direction, a folding boundary for folding pixels into the unnecessary area indicated by the mask image is set, and the target pixels are folded back with respect to the set folding boundary.
- strictly speaking, what is folded back for each pixel is its luminance value or RGB value.
- that is, the processing performed on the pixels is actually performed on the luminance values and RGB values (that is, the pixel values) of each pixel.
- in step S14a and step S14b, the image composition unit 105 multiplies each folded pixel by a weight.
- in the present embodiment, the target pixels are folded back into the unnecessary area from a plurality of directions instead of a single direction, and the pixels folded from the plurality of directions, that is, a plurality of pixel values from a plurality of directions, are combined to generate a composite image.
- in step S15, the image composition unit 105 synthesizes the first folded image and the second folded image that were multiplied by the weights in step S14a and step S14b, and generates a composite image.
- FIG. 3 shows an input image 210, a first folding boundary 211 in the first direction, a first folded image 212, a first direction 213, a weighting coefficient 220 for the first folded image, an image 230 obtained by multiplying the first folded image by the weighting coefficients, the corresponding elements for the second direction, and a composite image 270 obtained by combining the weighted first folded image and the weighted second folded image.
- the specific processing for generating the folded images in steps S13a and S13b will now be described.
- first, based on the first direction 213 (leftward in the horizontal direction), the first image generation unit 103 searches for the first folding boundary 211 that serves as a reference boundary for folding each pixel. That is, the first image generation unit 103 finds the first folding boundary 211 by specifying the boundary between the unnecessary area and the first search area. Then, the first image generation unit 103 searches for pixels in the input image 210 leftward, in the first direction 213, from the first folding boundary 211, and folds the pixels into the unnecessary area with the first folding boundary 211 as the boundary, thereby generating a first folded image 212.
- similarly, based on the second direction 243 (rightward in the horizontal direction), the second image generation unit 104 searches for the second folding boundary 241 that serves as a reference boundary for folding each pixel. That is, the second image generation unit 104 finds the second folding boundary 241 by specifying the boundary between the unnecessary region and the second region on the second direction side of the unnecessary region (hereinafter referred to as "second search region"). Then, the second image generation unit 104 searches for pixels in the input image 210 rightward, in the second direction 243, from the second folding boundary 241, and folds the pixels into the unnecessary area with the second folding boundary 241 as the boundary, thereby generating a second folded image 242.
- next, the process of multiplying the folded images 212 and 242 by weights in steps S14a and S14b will be described.
- the image composition unit 105 calculates a weighting coefficient 220 for each of the plurality of pixels of the first folded image 212 so that the weight becomes larger as the pixel is closer to the first folding boundary 211, and generates an image 230 by multiplying each pixel of the first folded image 212 by the calculated weighting coefficient 220. Similarly, the image composition unit 105 calculates a weighting coefficient 250 for each of the plurality of pixels of the second folded image 242 so that the weight becomes larger as the pixel is closer to the second folding boundary 241, and generates an image 260 by multiplying each pixel of the second folded image 242 by the calculated weighting coefficient 250.
- the weighting factor at this time is set by an arbitrary method.
- the first weighting factor is set to 0.84 and the second weighting factor is set to 0.16.
- that is, the first weighting coefficient, which is multiplied by the pixel of the first folded image, and the second weighting coefficient, which is multiplied by the pixel of the second folded image, may be set so that they add up to 1.
- the weight may be calculated using a table such as that in FIG. 10A, which shows a table for calculating the weighting coefficient in the first direction; according to the distance from the folding boundary, the weighting coefficient of the pixel in the unnecessary area nearest the boundary is set to 1.0 and that of the farthest pixel in the unnecessary area is set to 0.
- FIG. 10B is a diagram showing the result of assigning the weighting coefficients calculated in the first direction to each pixel using the table of FIG. 10A, when the number of pixels in the unnecessary area is 11.
- FIG. 10C and FIG. 10D are diagrams showing the weight calculation table in the second direction and the weighting coefficients assigned to each pixel when the number of pixels in the unnecessary area is 11.
- since the weight is set larger as the pixel is closer to the folding boundary and the weights from the plurality of directions are set to add up to 1, the pixels from the plurality of directions can be accurately reflected in the folded image.
- the second table may be omitted; in that case, the weight in the second direction may be calculated from the weighting coefficient given by the first table so that the two weights add up to 1.
- a table represented by a quadratic function (curve) or the like may also be used. That is, the shape of the function is not limited as long as the table is a monotonically decreasing function in which the weighting coefficient of the pixel closest to the folding boundary is set to 1.0 and the weighting coefficient of the farthest pixel in the unnecessary region is set to 0.
- the weight calculated by the above method is multiplied by each pixel value in each folded image.
- in step S15, the plurality of weighted images are added together.
- the image 230 and the image 260 are added together to generate a composite image 270 that complements the unnecessary area. It can be seen that a certain pixel 201 is obtained by adding a calculated value 0.84 from the first image and a calculated value 0.64 from the second image to obtain a combined value of 1.48.
- each weighting factor for pixels other than unnecessary areas of the first folded image and the second folded image is set to 0.5.
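- the weighting of steps S14a/S14b and the addition of step S15 could be sketched as follows for a single grayscale row, assuming linear weights that are 1.0 at the pixel nearest each folding boundary and normalized so the two weights add up to 1 per pixel; this is a minimal illustration under those assumptions, not the claimed implementation, and all names are hypothetical:

```python
import numpy as np

def composite_row(first_fold, second_fold, left, right):
    # Blend one grayscale row of the two folded images inside the
    # unnecessary area spanning columns left..right inclusive.
    n = right - left + 1
    w1 = np.linspace(1.0, 0.0, n)   # weight for the first (left) fold
    w2 = 1.0 - w1                   # weight for the second fold; w1 + w2 == 1
    return first_fold[left:right + 1] * w1 + second_fold[left:right + 1] * w2
```

- for color rows, the same weights would simply be broadcast over the channels.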
- the processing procedure of the folded-image generation processing of the image generation unit 110 in S13a and S13b will be described in more detail with reference to FIG. 5.
- the folded image generation processing can be decomposed into processing in units of rows.
- in step S21, the image generation unit 110 sets the row of the input image to be calculated to the first row, and moves to step S22.
- in step S22, the image generation unit 110 determines whether there is an unprocessed row. If there is an unprocessed row (S22: Yes), the process moves to step S23; if there is no unprocessed row (S22: No), the folded-image generation processing is complete, and the generation processing ends.
- in step S23, the image generation unit 110 determines whether an unnecessary area is included in the calculation-target row. If no unnecessary area is included (S23: No), the process proceeds to step S27; if an unnecessary area is included (S23: Yes), the process moves to step S24.
- in step S24, the image generation unit 110 specifies the folding boundary of the unnecessary area corresponding to the folding direction (that is, the first direction or the second direction), and moves to step S25.
- in step S25, the image generation unit 110 calculates the search width of the unnecessary area (that is, the width of the unnecessary area in the row), which serves as the reference for determining the folding width from the folding boundary specified in step S24, and then proceeds to step S26.
- the "search width of the unnecessary area" here is the first search width in the case of the process of generating the first folded image, and the second search width in the case of the process of generating the second folded image.
- in step S26, the image generation unit 110 searches for pixels in the specified folding direction over the search width of the unnecessary area from the folding boundary, folds back the pixels corresponding to the searched width from the folding boundary for one row, and moves to step S27.
- in step S27, the calculation-target row is incremented by 1, and the process moves to step S22. That is, in step S27, the row following the row processed up to step S26 becomes the calculation target. If there is no next row, the generation process may be terminated. A row-level sketch of this procedure, covering steps S21 to S27, follows.
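- the row-level procedure of steps S21 to S27 could be sketched as follows, assuming a horizontal folding direction, one contiguous unnecessary run per row, and a NumPy mask as above; this is a minimal sketch with hypothetical names, not the claimed implementation:

```python
import numpy as np

def generate_folded_image(image, mask, direction="left"):
    # image: (H, W) or (H, W, 3) array; mask: (H, W), 255 in the unnecessary area.
    folded = image.copy()
    for row in range(image.shape[0]):            # S21/S27: visit each row in turn
        cols = np.where(mask[row] == 255)[0]
        if cols.size == 0:                       # S23: no unnecessary area in this row
            continue
        left, right = cols[0], cols[-1]          # S24: folding boundary of this row
        width = right - left + 1                 # S25: search width of the unnecessary area
        for i in range(width):                   # S26: mirror pixels across the boundary
            if direction == "left":
                src, dst = left - 1 - i, left + i      # search leftward from the boundary
            else:
                src, dst = right + 1 + i, right - i    # search rightward from the boundary
            if 0 <= src < image.shape[1]:
                folded[row, dst] = image[row, src]
    return folded
```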
- FIG. 6 is a diagram showing an example of the UI display employed when an imaging apparatus with the image display unit 106, a portable terminal, or the like serves as the image processing apparatus 100 according to the present invention.
- the user designates an unnecessary area on the input image 301 by touching or tracing the screen.
- a mask image 302 is generated and the determined unnecessary area is displayed.
- a composite image 303 is displayed when an input indicating determination or the like is performed or when a predetermined time has elapsed.
- the mask image 302 does not necessarily have to be as shown in FIG. 6 and may be an image in which unnecessary regions are displayed in an easily understandable manner on the original image.
- the outline of the unnecessary area may be surrounded by a frame, or may be displayed by filling all of a plurality of pixels constituting the unnecessary area with the same pixel (for example, black) indicating the unnecessary area.
- what is shown in FIG. 6 is merely an example, and a UI display that guides the user through the operation procedure for removing unnecessary areas may be adopted.
- as described above, the image processing apparatus 100 generates folded images by folding pixels back at the folding boundary of the unnecessary area from each of the first direction and the second direction in the horizontal direction, and synthesizes them.
- since the image processing apparatus 100 does not complement from a single direction only, but generates a plurality of folded images that complement the unnecessary area from a plurality of directions, a natural complementing process that can deal with various scenes is possible. Also, when generating the composite image, the pixels from the first direction and the second direction are multiplied by weighting coefficients before being added together, so a more natural unnecessary-area complementing process can be realized without the blurring that occurs with the conventional method.
- in the present embodiment, the first direction and the second direction are the left and right horizontal directions, respectively, but other directions may be used. That is, for example, the first direction may be upward in the vertical direction and the second direction downward in the vertical direction. Further, the first direction and the second direction need not be aligned on the same straight line.
- the image processing apparatus according to the present embodiment may newly include a direction determining unit.
- a step may be newly added to the processing flow in which the direction determining unit determines a direction and sets a boundary for returning according to the determined direction.
- although the first direction and the second direction are assigned to the horizontal direction here, the first direction and the second direction may, for example, be assigned by detecting the tilt of the camera using a digital compass or the like.
- alternatively, a folding direction may be assigned by a flick operation or the like, in which the user quickly moves a finger in a desired direction while looking at the image; the direction specified by the flick operation may be set as the first direction, and the direction opposite to the first direction by 180 degrees may be set as the second direction, as sketched below.
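- deriving the folding directions from a flick gesture could look like the following minimal sketch, assuming the touch start and end coordinates are available; the threshold and all names are hypothetical:

```python
import math

def directions_from_flick(x0, y0, x1, y1, min_move=10.0):
    # Returns (first_direction, second_direction) in degrees, or None if the
    # finger moved less than min_move pixels (i.e. not treated as a flick).
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_move:
        return None
    first = math.degrees(math.atan2(dy, dx)) % 360.0   # direction of the flick
    second = (first + 180.0) % 360.0                   # opposite by 180 degrees
    return first, second
```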
- that is, the image processing apparatus may include a display unit capable of touch input by a user; a first detection unit that detects an area specified by touch input in the input image displayed on the display unit as an unnecessary area in the input image; a second detection unit that detects touch input again and, when the position detected by that touch input has moved by a predetermined amount or more, detects the direction of the movement as the first direction; an image generation unit that generates a first folded image by repeatedly performing, in a direction orthogonal to the first direction, the first process of folding back and arranging, in the direction opposite to the first direction with respect to the boundary of the unnecessary area, a first pixel group in which pixels corresponding to the width of the unnecessary region in the first direction are continuously arranged in the first direction within the first region adjacent to the unnecessary region in the first direction; an image combining unit that generates a combined image by combining the input image and the first folded image; and the display unit, which displays the combined image.
- the image processing apparatus may further include a third detection unit that detects, as a prohibited area in the input image, a region specified by touch input in the input image displayed on the display unit at a timing different from the detections by the first detection unit and the second detection unit; in that case, when the prohibited area is closer to the first direction than the unnecessary area, the image generation unit may generate the first folded image by repeatedly performing the first process, in a direction orthogonal to the first direction, in the region obtained by excluding the prohibited area from the first area. Alternatively, a region specified by touch input at a timing different from the detections by the first detection unit and the second detection unit may be detected as a search area in the input image, and the image generation unit may generate the first folded image by repeatedly performing the first process, in a direction orthogonal to the first direction, in the region within the search area.
- FIG. 7 shows a UI display in the operation at this time.
- a search area for the pixels used in the folded image may also be specified. For example, a first rectangle specifying the unnecessary area and a second rectangle representing the search area for generating the folded image are displayed on the input screen, and the user may specify the unnecessary area and the search area by adjusting the four corners of each rectangle.
- FIG. 8 shows a UI display in the operation at this time.
- the image processing apparatus 100 may determine the first direction and the second direction for folding using the direction specified by the user by the flick operation.
- the unnecessary rectangular area or the search area may be determined as a rectangular area having, as two diagonal corners, two points specified by the user on the touch panel.
- the unnecessary area or the search area may be determined by adjusting the size or the position of the center of gravity by a pinch operation or the like.
- the folded image is generated by performing the process of placing the pixels acquired from the search area by the amount corresponding to the search width of the unnecessary area from the folding boundary, but this is not limitative.
- when the search width of the unnecessary area is large, the search area for generating the folded image becomes wide; although the area should be complemented with the background image around the unnecessary area, a foreground image such as a person may then be included in the fold, so an unnatural folding result may occur.
- a threshold value may be provided for the search width, and when the search width is equal to or larger than the threshold value, it may be folded twice or folded three times.
- the threshold value may be a predetermined fixed value, or may be the width of the first search area or the width of the second search area. In the latter case, the first search width is compared with the width of the first search region in the first direction, and the second search width is compared with the width of the second search region in the second direction.
- that is, when the image generation unit determines that the number of pixels continuously arranged in the first direction in the first search region is smaller than the number corresponding to the first search width, the image generation unit may generate the first folded image by repeatedly performing, in a direction orthogonal to the first direction, a process of acquiring the pixel group continuously arranged in the first direction of the first region and arranging that pixel group by repeatedly folding it back over the first search width in the direction opposite to the first direction.
- similarly, when the image generation unit determines that the number of pixels continuously arranged in the second direction in the second search region is smaller than the number corresponding to the second width, the image generation unit may generate the second folded image by repeatedly performing, as the second process in a direction orthogonal to the second direction, a process of acquiring the pixel group continuously arranged in the second direction of the second region and arranging that pixel group by repeatedly folding it back over the second width in the direction opposite to the second direction.
- the “process of repeatedly folding back and arranging” here means, for example, that a pixel group in which a plurality of pixels are continuously arranged in the first direction is first folded back and arranged in the direction opposite to the first direction with the first folding boundary as a reference, and is then folded back again in the opposite direction with the end pixel of the already folded pixel group as a reference. By performing this repeated folding and arranging a plurality of times until the first search width is filled, a first folded image of the first search width can be generated. As described above, the same repeated folding and arranging is performed in the second direction as well; a sketch of this reflected indexing follows.
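- this repeated folding can be expressed as a reflected index that bounces back and forth within the available search region, as in the following minimal sketch (assuming the search region offers `avail` pixels in the folding direction; names hypothetical):

```python
def reflected_index(i, avail):
    # Map offset i (0, 1, 2, ...) into [0, avail) by folding back and forth.
    # With avail = 3 the source offsets become 0, 1, 2, 2, 1, 0, 0, 1, 2, ...
    period = 2 * avail
    j = i % period
    return j if j < avail else period - 1 - j
```

- a folded pixel at distance i from the folding boundary would then be taken from the search-region pixel at offset reflected_index(i, avail), so a narrow search region is reused as many times as needed to fill the search width.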
- in addition, the color or luminance difference from the folding boundary may be taken into account. In this method, the likelihood that a pixel belongs to the background image around the unnecessary area is evaluated from the distance from the folding boundary, the color and luminance difference, and the edge strength, and foreground pixels, which cause unnatural folding, are excluded from the folding. For example, as shown in FIG. 9A, when a person A, who is an area other than the background, exists near a person B, who is the unnecessary area, and a folded image is generated without considering the person A, some pixels of the person A enter the complemented area as they are, so the intended composite image cannot be obtained. Therefore, as shown in FIG. 9B, a threshold (that is, a region of a predetermined width) may be provided for the search region, the region within the threshold may be set as the search region, and the pixels in that search region may be folded a plurality of times (for example, two or three times). That is, a process may be performed in which pixels in the search region, which is the region within the threshold, are acquired and repeatedly folded back and arranged. Further, as shown in FIG. 9C, the region used for folding may be set using the likelihood of the pixels around the unnecessary area.
- the image processing apparatus may further include an area setting unit that sets a prohibited area.
- the image generation unit when the prohibited area is closer to the first direction than the unnecessary area, the image generation unit generates the first folded image by acquiring the first pixel from the area excluding the prohibited area from the first area. To do.
- the image generation unit when the prohibited area is on the second direction side from the unnecessary area, the image generation unit generates the second folded image by acquiring the second pixel from the area excluding the prohibited area from the second area.
- the area setting unit may set the prohibited area based on an input by the user, or may set it based on the result of image processing or image recognition using, for example, the likelihood of the background image. In other words, the prohibited area may be set by using the above-described methods of detecting an unnecessary area by image processing or image recognition.
- when the unnecessary area exists near the edge of the image, it may be complemented from the first direction only.
- alternatively, a mirrored image may be virtually created to extend the search area used for the folding. That is, a folded image may be generated by virtually arranging an inverted copy of the input image in the second direction (rightward) of the input image, thereby enlarging the search area used for the folding.
- the two processing units of the first image generation unit and the second image generation unit are used, but the present invention is not limited to this.
- for example, a third image generation unit that generates a third folded image by folding with reference to a third direction, upward from the boundary of the unnecessary region, and a fourth image generation unit that generates a fourth folded image by folding with reference to a fourth direction, downward from the boundary of the unnecessary region, may be provided, and the image composition unit may synthesize the first folded image, the second folded image, the third folded image, and the fourth folded image.
- further, a fifth image generation unit and a sixth image generation unit may be provided that set the two directions on a straight line inclined 45 degrees toward the upper right and fold from the boundary of the unnecessary region toward the upper right (fifth direction) and the lower left (sixth direction), and a seventh image generation unit and an eighth image generation unit may be provided that set the two directions on a straight line inclined 45 degrees toward the upper left and fold from the boundary of the unnecessary region toward the upper left (seventh direction) and the lower right (eighth direction).
- the image composition unit may compose the first to eighth folded images.
- in general, n directions may be set, image generation units based on those directions may be provided, and the folded images generated by those image generation units may be used in the synthesis of the composite image.
- the weighting factor for the first folded image may be set to be smaller than the weighting factor for the other second to nth folded images.
- in some of the search areas, an image other than the assumed background may occupy a large part of the search area; in that case, a result different from the tendency in the other search areas is obtained. Therefore, after calculating the tendency in the search area in each direction, the ratio of image synthesis given to folded images complemented from search areas with similar tendencies may be increased, with the aim of minimizing unnecessary-area complementation from unintended images.
- in addition, when the user designates an unnecessary area, a part of the area that the user originally wants to erase may fail to be designated as the unnecessary area and may remain outside the boundary of the designated area. If a folded image is generated in this state, the remaining part is included in the folded image, resulting in the problem that a composite image is generated in which the leftover part is emphasized with twice the area.
- to deal with this, the boundary of the unnecessary area is expanded by an arbitrary number of pixels. That is, since an area larger than the area designated as the unnecessary area is treated as the unnecessary area, the process can cope with cases where part of the area to be erased lies slightly outside the designation.
- it is also possible to apply image processing to the mask image and the input image and expand the mask image so that the boundary (edge) of the unnecessary area of the mask image matches an edge of the input image.
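- expanding the unnecessary area by a few pixels amounts to a standard morphological dilation of the mask; the following is a sketch under the assumption of a SciPy environment, not a statement of the actual implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_mask(mask, pixels=3):
    # Grow the unnecessary region (mask == 255) outward by `pixels` pixels.
    grown = binary_dilation(mask == 255, iterations=pixels)
    return (grown * 255).astype(np.uint8)
```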
- the first folded image and the second folded image are generated as images having the same size as the input image. That is, the first folded image and the second folded image are images in which an image for complementing the unnecessary area is generated while maintaining the pixel values of the pixels in the area other than the unnecessary area of the input image.
- the present invention is not limited to the above, and the first folded image and the second folded image may be generated as an image having the same size as the unnecessary area. That is, it may be generated as an image for replacing an unnecessary area.
- the image synthesizing unit generates a complementary image for replacing the unnecessary area by synthesizing the first folded image and the second folded image.
- the image composition unit composes the composite image by replacing the unnecessary area of the input image with the complementary image.
- FIG. 11 is a configuration diagram of the image processing apparatus according to the second embodiment of the present invention.
- in FIG. 11, the same components as those in FIG. 1 are given the same reference numerals, and their description is omitted.
- the image processing apparatus 400 includes an image acquisition unit 101, a mask image acquisition unit 102, a texture analysis unit 401, an image generation unit 110, and an image composition unit 105.
- the image generation unit 110 includes a first image generation unit 103 and a second image generation unit 104.
- the image processing apparatus 400 according to Embodiment 2 of the present invention may include an image display unit 106 as shown in FIG.
- the image processing apparatus 400 includes a texture analysis unit 401 that determines a first direction and a second direction by analyzing a texture in a peripheral region of an unnecessary region.
- FIG. 12 is a flow for generating a composite image in the image processing apparatus 400 according to the present embodiment.
- step S11, step S12, step S13a, step S13b, step S14a, step S14b, and step S15 are the same as the composite-image creation processing in the image processing apparatus of the first embodiment, and thus their description is omitted.
- in step S31, the texture analysis unit 401 determines the first direction and the second direction.
- FIG. 13 is a diagram illustrating an example of a peripheral region of the unnecessary region 501 used when determining the first direction and the second direction by texture analysis. That is, the texture analysis unit 401 determines the first direction and the second direction by analyzing the texture in the peripheral area of the unnecessary area.
- the texture analysis unit 401 calculates a circumscribed circle 502 circumscribing the unnecessary area 501, calculates a circle corresponding to three times the radius of the circumscribed circle 502 as a peripheral area circle 503, and sets it as a target range for texture analysis.
- the texture analysis unit 401 uses the Sobel filter or the like to generate an edge image that is the first derivative of the acquired input image within the peripheral region circle 503 that is the target range of the texture analysis.
- the texture analysis unit 401 votes into a histogram representing the frequency of edge directions, obtained by quantizing the edge directions from the edge image into several bins, weighting each vote by the edge strength, and determines the angles orthogonal to the high-frequency angle as the first direction and the second direction. That is, the texture analysis unit 401 detects a high-frequency angle from the edge angles of each pixel or region obtained from the edge image of the input image, and sets the first direction and the second direction to the directions perpendicular to the detected high-frequency angle.
- for example, in a background containing horizontal stripes, the edge directions include many vertical components, so the high-frequency angle obtained from the edge image corresponds to the vertical direction. If the horizontal direction orthogonal to that vertical direction is set as the folding direction, the unnecessary area can be complemented so that the horizontal stripes continue cleanly.
- FIG. 14 shows an example of a histogram representing the frequency of edge directions.
- In this example, the most frequent angle is 135 degrees, so the first direction and the second direction are determined as the 225-degree and 45-degree directions, which are orthogonal to that angle.
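- A minimal sketch of this direction estimation, assuming OpenCV and NumPy (the bin count, the use of the whole image rather than only the peripheral region circle 503, and all names are illustrative choices, not values taken from the patent):

```python
import cv2
import numpy as np

def fold_directions(img, n_bins=16):
    """Estimate the dominant edge angle with a strength-weighted
    histogram vote and return the two folding directions orthogonal
    to it, as angles in degrees."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)          # first derivative in x
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)          # first derivative in y
    strength = np.hypot(gx, gy)                     # edge strength
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # edge direction
    # vote each pixel's direction bin, weighted by its edge strength
    hist, edges = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                               weights=strength)
    k = int(np.argmax(hist))
    peak = 0.5 * (edges[k] + edges[k + 1])          # most frequent angle
    # fold along the two directions perpendicular to the peak angle
    first = (peak + 90.0) % 360.0
    return first, (first + 180.0) % 360.0
```

- With a peak at 135 degrees this returns 225 and 45 degrees, matching the example of FIG. 14; for a horizontal-stripe background the peak lands near 90 degrees and the returned folding directions are horizontal.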
- In step S32, the texture analysis unit 401 determines the folding boundaries using the determined folding directions, that is, the first direction and the second direction.
- FIG. 15 is a diagram illustrating an example in which a folded image is generated by the first image generation unit 103 and the second image generation unit 104.
- When the first direction 601 and the second direction 611 are determined to be 225 degrees and 45 degrees, respectively, the first folding boundary 603 and the second folding boundary 604 are set accordingly, and the corresponding pixels are folded back across them.
- As a result, a first folded image 602 and a second folded image 612 are generated as illustrated.
- With this configuration, the image processing apparatus 400 can generate folded images that match the texture in the peripheral region of the unnecessary region.
- In practice, an image captured by a user is rarely perfectly level and tends to be slightly tilted, so horizontal components in the background region are often displayed at an angle.
- Since the tilt angle is detected and reflected in the folding direction, a more natural composite image can be generated.
- Note that the texture analysis unit 401 may instead detect the most frequent straight line from the edge strengths and edge angles obtained from the edge image of the input image, and set the first direction and the second direction as the two opposite directions along the detected line.
- Alternatively, the texture analysis unit 401 may set the most frequent angle as the first direction and the direction 180 degrees opposite to it as the second direction.
- Further, the texture analysis unit 401 may set a direction orthogonal to the second most frequent angle as a third direction, and the direction 180 degrees opposite to the third direction as a fourth direction.
- A composite image may then be synthesized using folded images generated for these additional directions as well.
- In that case, the weighting coefficients used at the time of image composition may be adjusted according to not only the distance from the folding boundary but also the frequency of each angle.
- Alternatively, the texture analysis unit may detect the inclination of straight lines over the entire screen or around the unnecessary region by image processing such as the Hough transform, and set the detected line inclination as the first direction and the second direction.
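- A sketch of such a line-based variant using OpenCV's standard Hough transform (the Canny thresholds, the vote threshold, and all names are assumptions made for illustration; the patent does not prescribe them):

```python
import cv2
import numpy as np

def dominant_line_directions(img):
    """Detect the most prominent straight line with the Hough transform
    and return the two opposite folding directions along it, as angles
    in degrees, or None when no line is found."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, 100)
    if lines is None:
        return None
    _, theta = lines[0][0]  # strongest line comes first
    # theta is the angle of the line's normal; the line itself runs
    # perpendicular to that normal
    line_angle = (np.rad2deg(theta) + 90.0) % 180.0
    return line_angle, (line_angle + 180.0) % 360.0
```

- Restricting the edge map to the area around the unnecessary region, instead of the entire screen, would give the per-region variant mentioned above.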
- In the above description, the texture analysis unit 401 calculates the edge-direction frequency with the boundary peripheral region treated as a single region, but the region may instead be divided into a plurality of regions, with the edge-direction frequency calculated for each of them.
- For example, the texture analysis unit may divide the peripheral region circle 503 of FIG. 13 into four sectors of 90 degrees each and determine the most frequent edge direction in each sector as the first direction, second direction, third direction, and fourth direction, respectively (a sketch of this per-sector split is given below). Four folded images may then be generated using the determined first to fourth directions as references, and the unnecessary region may be complemented by combining the four folded images with equal weights.
- In this case, the image composition unit may determine the weighting coefficients based not only on the distance from the folding boundary but also on the distance from the center of gravity of each divided boundary peripheral region.
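- The per-sector split of the peripheral region circle could be computed as in the following sketch (Python/NumPy; the sector boundaries at 0, 90, 180, and 270 degrees and all names are illustrative assumptions):

```python
import numpy as np

def sector_masks(shape, center, radius):
    """Split the disc of the given center and radius into four
    90-degree sectors and return one boolean mask per sector."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = xx - center[0], yy - center[1]
    inside = dx * dx + dy * dy <= radius * radius
    ang = np.rad2deg(np.arctan2(dy, dx)) % 360.0
    return [inside & (ang >= q * 90.0) & (ang < (q + 1) * 90.0)
            for q in range(4)]
```

- The strength-weighted histogram vote shown earlier can then be evaluated once per sector mask, yielding the first to fourth directions.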
- The first image generation unit 103 calculates the first folding boundary from the first direction acquired from the texture analysis unit 401, the mask image, and the input image, and generates the first folded image.
- Similarly, the second image generation unit 104 calculates the second folding boundary from the second direction acquired from the texture analysis unit 401, the mask image, and the input image, and generates the second folded image.
- FIG. 16 is a configuration diagram of the image processing apparatus according to Embodiment 3 of the present invention. In FIG. 16, the same components as those in FIGS. 1 and 11 are denoted by the same reference numerals, and description thereof is omitted.
- an image processing apparatus 700 includes an image acquisition unit 101, a mask image acquisition unit 102, an area division unit 701, an image generation unit 110, an image synthesis unit 105, and an image display unit 106.
- the image generation unit 110 includes a first image generation unit 103 and a second image generation unit 104.
- The image processing apparatus 700 according to Embodiment 3 of the present invention may include an image display unit 106, as shown in FIG. 16.
- The image processing apparatus 700 further includes a region dividing unit 701 that divides the region around the unnecessary region in the input image into a plurality of regions, each of which is a set of pixels having identical or similar characteristics.
- FIG. 17 is a flowchart of composite image generation in the image processing apparatus according to the present embodiment.
- Step S11, step S12, step S31, step S32, step S13a, step S13b, step S14a, step S14b, and step S15 are the same as the corresponding steps for creating a composite image in the image processing apparatuses 100 and 400 of Embodiments 1 and 2, and thus description thereof is omitted.
- In step S41, the region dividing unit 701 divides the image in the boundary peripheral region.
- For each of the divided regions, the image generation unit 110 takes a pixel group in which pixels, as many as the width of the unnecessary region in the predetermined direction set for that region, are lined up consecutively in the predetermined direction, folds the pixel group back in the direction opposite to the predetermined direction with the boundary of the unnecessary region as the reference, and repeats this process in the direction orthogonal to the predetermined direction, thereby generating a divided folded image.
- Specifically, the region dividing unit 701 acquires the input image and the mask image from the mask image acquisition unit 102 and divides the boundary peripheral region of the mask image.
- As the region division method, for example, region division by the MeanShift method, region division by the K-means method, cluster classification used in statistics, or another region division method used in image processing can be used.
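- As one illustration, a sketch of dividing the boundary peripheral region with k-means clustering on color-plus-position features (the ring width, cluster count, and position scaling are arbitrary choices made for this example, not values specified by the patent):

```python
import cv2
import numpy as np

def divide_peripheral_region(img, mask, k=4):
    """Cluster the pixels surrounding the unnecessary region (mask != 0)
    into k regions of similar color and position, returning the pixel
    coordinates and a cluster label for each peripheral pixel."""
    ring = cv2.dilate(mask.astype(np.uint8), np.ones((31, 31), np.uint8))
    ring = (ring > 0) & (mask == 0)   # boundary peripheral region only
    ys, xs = np.nonzero(ring)
    color = img[ys, xs].reshape(-1, 3).astype(np.float32)
    pos = 0.5 * np.stack([xs, ys], axis=1).astype(np.float32)
    feats = np.hstack([color, pos])   # color keeps clusters homogeneous,
                                      # position keeps them contiguous
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(feats, k, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    return ys, xs, labels.ravel()
```

- Each resulting label set plays the role of one boundary peripheral region, for which a folding direction can then be determined as in the example of FIG. 18.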
- FIG. 18 is a diagram illustrating an example of the region division result.
- Around the unnecessary region 801 there are boundary peripheral regions 811 to 814, obtained by dividing the surroundings into four regions; the texture analysis unit 401 calculates a folding direction for each of the boundary peripheral regions 811 to 814, and the image synthesis unit 105 synthesizes the four folded images, thereby complementing the unnecessary region.
- The boundary peripheral regions 811 to 814 obtained by the division in step S41 are sent to the texture analysis unit 401, texture analysis is performed for each of them, and a folding direction and a folding boundary are determined for each of the boundary peripheral regions 811 to 814.
- In this way, the surroundings are divided according to the characteristics of the peripheral region and the folding direction is determined for each divided region, so a more natural composite image that matches the texture of the background can be generated.
- Note that the weighting coefficient applied to each folded image may take into account not only the distance from the folding boundary but also the distance from the center or the center of gravity of the corresponding divided region.
- Each of the above apparatuses is, specifically, a computer system including a microprocessor, a ROM, a RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- A computer program is stored in the RAM or the hard disk unit.
- Each apparatus achieves its functions by the microprocessor operating according to the computer program.
- Here, the computer program is configured by combining a plurality of instruction codes, each indicating an instruction to the computer, in order to achieve predetermined functions.
- A part or all of the constituent elements of each of the above apparatuses may be configured as a single system LSI (Large Scale Integration).
- A system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on a single chip and is, specifically, a computer system including a microprocessor, a ROM, a RAM, and the like.
- a computer program is stored in the RAM.
- the system LSI achieves its functions by the microprocessor operating according to the computer program.
- A part or all of the constituent elements of each of the above apparatuses may be configured as an IC card or a single module that can be attached to and detached from the apparatus.
- the IC card or the module is a computer system including a microprocessor, a ROM, a RAM, and the like.
- the IC card or the module may include the super multifunctional LSI described above.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program. This IC card or this module may have tamper resistance.
- The present invention may be the methods described above. The present invention may also be a computer program that realizes these methods by a computer, or a digital signal composed of the computer program.
- The present invention may also be a computer-readable recording medium on which the computer program or the digital signal is recorded, such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray Disc: registered trademark), or a semiconductor memory.
- The present invention may also be the digital signal recorded on such a recording medium.
- The computer program or the digital signal may be transmitted via an electric telecommunication line, a wireless or wired communication line, a network typified by the Internet, data broadcasting, or the like.
- the present invention may be a computer system including a microprocessor and a memory, the memory storing the computer program, and the microprocessor operating according to the computer program.
- The program or the digital signal may be recorded on the recording medium and transferred, or transferred via the network or the like, so as to be executed by another independent computer system.
- each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- Here, the software that realizes the image processing apparatus of each of the above embodiments is the following program.
- That is, this program causes a computer to execute an image processing method including: an image acquisition step of acquiring an input image; an acquisition step of acquiring region information, which is information indicating an unnecessary region in the input image; an image generation step of (i) generating a first folded image by repeatedly performing, in a direction orthogonal to a first direction, a first process of folding back, in a first region adjacent to the unnecessary region in the first direction indicated by the region information, a first pixel group, in which as many pixels as a first width of the unnecessary region in the first direction are lined up consecutively in the first direction, and arranging it in the direction opposite to the first direction with the boundary of the unnecessary region as a reference, and (ii) generating a second folded image by repeatedly performing, in a direction orthogonal to a second direction, a second process of folding back, in a second region adjacent to the unnecessary region in the second direction, a second pixel group, in which as many pixels as a second width of the unnecessary region in the second direction are lined up consecutively in the second direction, and arranging it in the direction opposite to the second direction with the boundary of the unnecessary region as a reference; and an image synthesis step of generating a composite image by synthesizing the first folded image and the second folded image generated in the image generation step.
- Although the present invention has been described based on the above embodiments, the present invention is of course not limited to these embodiments. Forms obtained by applying various modifications conceivable by those skilled in the art to the present embodiments, and forms constructed by combining constituent elements in different embodiments, may also be included within the scope of one or more aspects of the present invention, without departing from the gist of the present invention.
- An image processing apparatus according to the present invention requires only a small amount of calculation and can realize more natural complementation of unnecessary regions, and is useful, for example, as an unnecessary-object removal application for removing unnecessary regions from an image.
- 100 Image processing apparatus
- 101 Image acquisition unit
- 102 Mask image acquisition unit
- 103 First image generation unit
- 104 Second image generation unit
- 105 Image composition unit
- 106 Image display unit
- 110 Image generation unit
- 201 Pixel
- 210 Input image
- 211 First folding boundary
- 212 First folded image
- 213 First direction
- 220 Weight coefficient
- 230 Image obtained by multiplying first folded image 312 by weight coefficient 320
- 241 Second folding boundary
- 242 Second folded image
- 243 Second direction
- 250 Weight coefficient
- 260 Image obtained by multiplying second folded image 342 by weight coefficient 350
- 270 Composite image
- 401 Texture analysis unit
- 501 Unnecessary region
- 502 Circumscribed circle
- 503 Peripheral region circle
- 601 First direction
- 602 First folded image
- 603 First folding boundary
- 604 Second folding boundary
- 611 Second direction
- 612 Second folded image
- 701 Region division unit
- 801 Unnecessary region
- 811 to 814 Boundary peripheral regions
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Editing Of Facsimile Originals (AREA)
Abstract
Description
FIG. 19 shows the processing flow of the image processing in the conventional unnecessary-object removal apparatus described in Patent Document 1.
FIG. 1 is a configuration diagram of the image processing apparatus according to Embodiment 1 of the present invention. In FIG. 1, the image processing apparatus 100 of the present invention includes an image acquisition unit 101, a mask image acquisition unit 102, an image generation unit 110, and an image composition unit 105. The image generation unit 110 includes a first image generation unit 103 and a second image generation unit 104. The image processing apparatus 100 according to Embodiment 1 of the present invention may also include an image display unit 106, as shown in FIG. 1.
FIG. 11 is a configuration diagram of the image processing apparatus according to Embodiment 2 of the present invention. In FIG. 11, the same components as those in FIG. 1 are denoted by the same reference numerals, and description thereof is omitted.
FIG. 16 is a configuration diagram of the image processing apparatus according to Embodiment 3 of the present invention. In FIG. 16, the same components as those in FIGS. 1 and 11 are denoted by the same reference numerals, and description thereof is omitted.
Although the present invention has been described based on the above embodiments, the present invention is of course not limited to the above embodiments. The following cases are also included in the present invention.
Claims (18)
- 1. An image processing apparatus comprising: an image acquisition unit that acquires an input image; an acquisition unit that acquires region information, which is information indicating an unnecessary region in the input image; an image generation unit that (i) generates a first folded image by repeatedly performing, in a direction orthogonal to a first direction, a first process of folding back, in a first region adjacent to the unnecessary region in the first direction indicated by the region information, a first pixel group, in which as many pixels as a first width of the unnecessary region in the first direction are lined up consecutively in the first direction, and arranging it in the direction opposite to the first direction with the boundary of the unnecessary region as a reference, and (ii) generates a second folded image by repeatedly performing, in a direction orthogonal to a second direction, a second process of folding back, in a second region adjacent to the unnecessary region in the second direction, a second pixel group, in which as many pixels as a second width of the unnecessary region in the second direction are lined up consecutively in the second direction, and arranging it in the direction opposite to the second direction with the boundary of the unnecessary region as a reference; and an image composition unit that generates a composite image by synthesizing the first folded image and the second folded image generated by the image generation unit.
- 2. The image processing apparatus according to Claim 1, wherein the image generation unit: when determining that the number of pixels lined up consecutively in the first direction in the first region is smaller than the first width, acquires the pixel group lined up consecutively in the first direction in the first region and generates the first folded image by repeatedly performing, in the direction orthogonal to the first direction and as the first process, a process of repeatedly folding back and arranging that pixel group over the first width in the direction opposite to the first direction; and, when determining that the number of pixels lined up consecutively in the second direction in the second region is smaller than the second width, acquires the pixel group lined up consecutively in the second direction in the second region and generates the second folded image by repeatedly performing, in the direction orthogonal to the second direction and as the second process, a process of repeatedly folding back and arranging that pixel group over the second width in the direction opposite to the second direction.
- 3. The image processing apparatus according to Claim 1 or 2, further comprising a region setting unit that sets a prohibited region, wherein the image generation unit (i) generates the first folded image by repeatedly performing the first process, in the direction orthogonal to the first direction, in the region obtained by excluding the prohibited region from the first region when the prohibited region is on the first-direction side of the unnecessary region, and (ii) generates the second folded image by repeatedly performing the second process, in the direction orthogonal to the second direction, in the region obtained by excluding the prohibited region from the second region when the prohibited region is on the second-direction side of the unnecessary region.
- 4. The image processing apparatus according to Claim 1 or 2, further comprising a region setting unit that sets a search region, wherein the image generation unit (i) generates the first folded image by repeatedly performing the first process in the direction orthogonal to the first direction within the search region, and (ii) generates the second folded image by repeatedly performing the second process in the direction orthogonal to the second direction within the region excluding the search region.
- 5. The image processing apparatus according to any one of Claims 1 to 4, wherein the first direction is the direction to the left of the boundary of the unnecessary region along the horizontal direction, and the second direction is the direction to the right of the boundary of the unnecessary region along the horizontal direction.
- 6. The image processing apparatus according to Claim 5, wherein the image generation unit further (i) generates a third folded image by repeatedly performing, in a direction orthogonal to a third direction, which is the upward direction from the unnecessary region along the vertical direction, a third process of folding back, in a third region adjacent to the unnecessary region in the third direction, a third pixel group, in which as many pixels as a third width of the unnecessary region in the third direction are lined up consecutively in the third direction, and arranging it in the direction opposite to the third direction with the boundary of the unnecessary region as a reference, and (ii) generates a fourth folded image by repeatedly performing, in a direction orthogonal to a fourth direction, which is the downward direction from the unnecessary region along the vertical direction, a fourth process of folding back, in a fourth region adjacent to the unnecessary region in the fourth direction, a fourth pixel group, in which as many pixels as a fourth width of the unnecessary region in the fourth direction are lined up consecutively in the fourth direction, and arranging it in the direction opposite to the fourth direction with the boundary of the unnecessary region as a reference; and the image composition unit generates the composite image by synthesizing the first folded image, the second folded image, the third folded image, and the fourth folded image generated by the image generation unit.
- 7. The image processing apparatus according to any one of Claims 1 to 5, wherein the image composition unit multiplies the pixel value of each of a plurality of pixels in the first folded image by a first weight set for that pixel value, multiplies the pixel value of each of a plurality of pixels in the second folded image by a second weight set for that pixel value, and performs the synthesis by adding each of the pixel values of the first folded image after multiplication by the first weight to the corresponding pixel value of the second folded image after multiplication by the second weight.
- 8. The image processing apparatus according to Claim 7, wherein the image composition unit sets the first weight and the second weight such that the weight is larger for a pixel closer to the boundary of the unnecessary region.
- 9. The image processing apparatus according to Claim 7 or 8, wherein the image composition unit sets the first weight and the second weight such that, for each of the plurality of pixels constituting the unnecessary region, the sum of the first weight, by which the pixel value of the first folded image at the position corresponding to that pixel is multiplied, and the second weight, by which the pixel value of the second folded image at the position corresponding to that pixel is multiplied, is 1.
- 10. The image processing apparatus according to any one of Claims 1 to 9, further comprising a texture analysis unit that determines the first direction and the second direction by analyzing the texture of the peripheral region of the unnecessary region.
- 11. The image processing apparatus according to Claim 10, wherein the texture analysis unit detects the most frequent straight line from the edge strengths and edge angles obtained from an edge image of the input image, and determines the first direction and the second direction as the two opposite directions along the detected most frequent straight line.
- 12. The image processing apparatus according to Claim 10 or 11, wherein the texture analysis unit detects a high-frequency angle from the per-pixel or per-region edge angles obtained from an edge image of the input image, and determines the first direction and the second direction as directions perpendicular to the detected high-frequency angle.
- 13. The image processing apparatus according to any one of Claims 1 to 12, further comprising a region dividing unit that divides the region around the unnecessary region in the input image into a plurality of regions, each of which is a set of pixels having identical or similar characteristics, wherein, for each of the plurality of regions, the image generation unit generates a divided folded image by repeatedly performing, in a direction orthogonal to a predetermined direction set for that region, a process of folding back a pixel group, in which as many pixels as the width of the unnecessary region in the predetermined direction are lined up consecutively in the predetermined direction, and arranging it in the direction opposite to the predetermined direction with the boundary of the unnecessary region as a reference.
- 14. The image processing apparatus according to any one of Claims 1 to 13, further comprising an image display unit that displays the result of the image composition unit.
- 15. An image processing apparatus including a display unit that accepts touch input from a user, the image processing apparatus comprising: a first detection unit that detects, as an unnecessary region in an input image, a region specified by touch input within the input image displayed on the display unit; a second detection unit that detects touch input again after the unnecessary region has been detected by the first detection unit and, when the detected position of the detected touch input moves by a predetermined value or more, detects the direction of the movement as a first direction; an image generation unit that generates a first folded image by repeatedly performing, in a direction orthogonal to the first direction, a first process of folding back, in a first region adjacent to the unnecessary region in the first direction, a first pixel group, in which as many pixels as the width of the unnecessary region in the first direction are lined up consecutively in the first direction, and arranging it in the direction opposite to the first direction with the boundary of the unnecessary region as a reference; an image composition unit that generates a composite image by synthesizing the input image and the first folded image; and the display unit, which displays the composite image.
- 16. An image processing method comprising: an image acquisition step of acquiring an input image; an acquisition step of acquiring region information, which is information indicating an unnecessary region in the input image; an image generation step of (i) generating a first folded image by repeatedly performing, in a direction orthogonal to a first direction, a first process of folding back, in a first region adjacent to the unnecessary region in the first direction indicated by the region information, a first pixel group, in which as many pixels as a first width of the unnecessary region in the first direction are lined up consecutively in the first direction, and arranging it in the direction opposite to the first direction with the boundary of the unnecessary region as a reference, and (ii) generating a second folded image by repeatedly performing, in a direction orthogonal to a second direction, a second process of folding back, in a second region adjacent to the unnecessary region in the second direction, a second pixel group, in which as many pixels as a second width of the unnecessary region in the second direction are lined up consecutively in the second direction, and arranging it in the direction opposite to the second direction with the boundary of the unnecessary region as a reference; and an image synthesis step of generating a composite image by synthesizing the first folded image and the second folded image generated in the image generation step.
- 17. A program for causing a computer to execute the image processing method according to Claim 16.
- 18. An integrated circuit comprising: an image acquisition unit that acquires an input image; an acquisition unit that acquires region information, which is information indicating an unnecessary region in the input image; an image generation unit that (i) generates a first folded image by repeatedly performing, in a direction orthogonal to a first direction, a first process of folding back, in a first region adjacent to the unnecessary region in the first direction indicated by the region information, a first pixel group, in which as many pixels as a first width of the unnecessary region in the first direction are lined up consecutively in the first direction, and arranging it in the direction opposite to the first direction with the boundary of the unnecessary region as a reference, and (ii) generates a second folded image by repeatedly performing, in a direction orthogonal to a second direction, a second process of folding back, in a second region adjacent to the unnecessary region in the second direction, a second pixel group, in which as many pixels as a second width of the unnecessary region in the second direction are lined up consecutively in the second direction, and arranging it in the direction opposite to the second direction with the boundary of the unnecessary region as a reference; and an image composition unit that generates a composite image by synthesizing the first folded image and the second folded image generated by the image generation unit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013537975A JPWO2013179560A1 (ja) | 2012-05-30 | 2013-04-23 | Image processing apparatus and image processing method |
US14/116,473 US20140079341A1 (en) | 2012-05-30 | 2013-04-23 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012123599 | 2012-05-30 | ||
JP2012-123599 | 2012-05-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013179560A1 true WO2013179560A1 (ja) | 2013-12-05 |
Family
ID=49672794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/002737 WO2013179560A1 (ja) | Image processing apparatus and image processing method | 2012-05-30 | 2013-04-23 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20140079341A1 (ja) |
JP (1) | JPWO2013179560A1 (ja) |
WO (1) | WO2013179560A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101999140B1 (ko) * | 2013-01-03 | 2019-07-11 | Samsung Electronics Co., Ltd. | Camera device, and image photographing apparatus and method for a portable terminal equipped with a camera |
JP2016092712A (ja) * | 2014-11-10 | 2016-05-23 | Seiko Epson Corporation | Image processing apparatus, image processing method, and program |
US11227007B2 (en) * | 2019-07-23 | 2022-01-18 | Obayashi Corporation | System, method, and computer-readable medium for managing image |
CN116883993B (zh) * | 2023-09-06 | 2023-12-01 | Linyi University | Vision-based method for sorting dried rose tea flowers |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003032542A (ja) * | 2001-07-19 | 2003-01-31 | Mitsubishi Electric Corp | Imaging device |
JP4307301B2 (ja) * | 2003-07-31 | 2009-08-05 | Canon Inc | Image processing apparatus and method |
JP2007233871A (ja) * | 2006-03-02 | 2007-09-13 | Fuji Xerox Co Ltd | Image processing apparatus, computer control method, and program |
JP4883783B2 (ja) * | 2006-12-22 | 2012-02-22 | Canon Inc | Image processing apparatus and method |
US8126288B2 (en) * | 2007-01-31 | 2012-02-28 | A School Juridical Person Fujita Educational Institution | Image processing apparatus |
JP4289414B2 (ja) * | 2007-03-27 | 2009-07-01 | Seiko Epson Corporation | Image processing for image deformation |
JP4957463B2 (ja) * | 2007-08-30 | 2012-06-20 | Seiko Epson Corporation | Image processing device |
JP5071162B2 (ja) * | 2008-03-05 | 2012-11-14 | Seiko Epson Corporation | Image processing apparatus, image processing method, and computer program for image processing |
JP5287778B2 (ja) * | 2010-03-26 | 2013-09-11 | Shimadzu Corporation | Image processing method and radiographic apparatus using the same |
JP2013045316A (ja) * | 2011-08-25 | 2013-03-04 | Sanyo Electric Co Ltd | Image processing apparatus and image processing method |
- 2013-04-23 WO PCT/JP2013/002737 patent/WO2013179560A1/ja active Application Filing
- 2013-04-23 JP JP2013537975A patent/JPWO2013179560A1/ja active Pending
- 2013-04-23 US US14/116,473 patent/US20140079341A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005094212A (ja) * | 2003-09-16 | 2005-04-07 | Canon Inc | Image processing apparatus and method, and computer program and computer-readable storage medium |
JP2011170840A (ja) * | 2010-01-20 | 2011-09-01 | Sanyo Electric Co Ltd | Image processing apparatus and electronic device |
JP2011249926A (ja) * | 2010-05-24 | 2011-12-08 | Nikon Corp | Image processing program and image processing apparatus |
JP2012000324A (ja) * | 2010-06-18 | 2012-01-05 | Panasonic Electric Works Co Ltd | Mirror therapy system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017058929A (ja) * | 2015-09-16 | 2017-03-23 | Nippon Telegraph and Telephone Corporation | Image information acquisition method, image evaluation method, image information acquisition device, image evaluation device, and image processing program |
WO2018180578A1 (ja) * | 2017-03-31 | 2018-10-04 | Sony Semiconductor Solutions Corporation | Image processing device, imaging device, image processing method, and program |
JPWO2018180578A1 (ja) * | 2017-03-31 | 2020-02-06 | Sony Semiconductor Solutions Corporation | Image processing device, imaging device, image processing method, and program |
US11170511B2 (en) | 2017-03-31 | 2021-11-09 | Sony Semiconductor Solutions Corporation | Image processing device, imaging device, and image processing method for replacing selected image area based on distance |
JP7098601B2 (ja) | 2017-03-31 | 2022-07-11 | Sony Semiconductor Solutions Corporation | Image processing device, imaging device, image processing method, and program |
JP2019053732A (ja) * | 2017-09-15 | 2019-04-04 | Sony Corporation | Dynamic generation of an image of a scene based on removal of unnecessary objects present in the scene |
JP2020048117A (ja) * | 2018-09-20 | 2020-03-26 | Dai Nippon Printing Co., Ltd. | Image providing system |
Also Published As
Publication number | Publication date |
---|---|
JPWO2013179560A1 (ja) | 2016-01-18 |
US20140079341A1 (en) | 2014-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013179560A1 (ja) | Image processing apparatus and image processing method | |
JP4696635B2 (ja) | Method, apparatus, and program for generating a highly condensed summary image of an image region | |
CN107123084B (zh) | Optimizing image cropping | |
JP6167703B2 (ja) | Display control device, program, and recording medium | |
JP4715267B2 (ja) | Method, apparatus, and program for determining highly important regions in an image | |
US10809898B2 (en) | Color picker | |
JP2010160790A (ja) | Automatic image cropping | |
WO2018198703A1 (ja) | Display device | |
CN103702032A (zh) | Image processing method, apparatus, and terminal device | |
KR20150106330A (ko) | Image display apparatus and image display method | |
US8726185B1 (en) | Method and apparatus for rendering overlapped objects | |
US8824778B2 (en) | Systems and methods for depth map generation | |
KR20160006965A (ko) | Display apparatus and highlight display method thereof | |
KR20140008041A (ko) | Direction-adaptive image interpolation method and electronic device therefor | |
US9143754B2 (en) | Systems and methods for modifying stereoscopic images | |
US20130236117A1 (en) | Apparatus and method for providing blurred image | |
CN106131628B (zh) | Video image processing method and apparatus | |
JP6137464B2 (ja) | Image processing apparatus and image processing program | |
RU2509377C2 (ru) | Method and system for viewing an image on a display device | |
US9530183B1 (en) | Elastic navigation for fixed layout content | |
JP2023522370A (ja) | Image display method, apparatus, device, and storage medium | |
JP6443505B2 (ja) | Program, display control device, and display control method | |
JP7533619B2 (ja) | Information processing device, information processing method, and computer program | |
KR102507669B1 (ko) | Electronic apparatus and control method thereof | |
JP5281720B1 (ja) | Stereoscopic video processing apparatus and stereoscopic video processing method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2013537975 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 14116473 Country of ref document: US |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13798207 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13798207 Country of ref document: EP Kind code of ref document: A1 |