WO2006068289A1 - Learning device, learning method, and learning program - Google Patents
Learning device, learning method, and learning program
- Publication number
- WO2006068289A1 (application PCT/JP2005/023997, JP2005023997W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- motion
- student
- student image
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 42
- 239000000284 extract Substances 0.000 claims abstract description 23
- 238000000605 extraction Methods 0.000 claims description 38
- 230000008569 process Effects 0.000 claims description 29
- 230000000694 effects Effects 0.000 claims description 19
- 230000000750 progressive effect Effects 0.000 claims description 7
- 239000013598 vector Substances 0.000 abstract description 34
- 238000010586 diagram Methods 0.000 description 24
- 239000011159 matrix material Substances 0.000 description 21
- 238000001514 detection method Methods 0.000 description 20
- 230000002093 peripheral effect Effects 0.000 description 20
- 230000008859 change Effects 0.000 description 6
- 230000014509 gene expression Effects 0.000 description 5
- 238000003384 imaging method Methods 0.000 description 5
- 238000006073 displacement reaction Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 3
- 230000015556 catabolic process Effects 0.000 description 2
- 238000006731 degradation reaction Methods 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/142—Edging; Contouring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
Definitions
- The present invention relates to a learning device, a learning method, and a learning program. Specifically, a student image is generated by adding motion blur to a teacher image on the basis of a set motion amount and motion direction, and the class of a target pixel is determined on the basis of the pixel values of the pixels in the student image that correspond to the target pixel in the teacher image. In addition, in order to extract the main term that mainly contains the component of the target pixel of the moving object in which motion blur occurs in the student image, at least the pixel values of pixels in the student image whose spatial positions are substantially the same as the spatial position of the target pixel in the teacher image are extracted.
- A processing coefficient for predicting the target pixel in the teacher image on the basis of the extracted pixel values is generated for each detected class.
- a student image is generated by changing at least one of the movement amount and the movement direction at a specific ratio.
- Data acquired using such a sensor is information obtained by projecting real-world information (e.g., light) onto a space-time of lower dimension than the real world.
- the information obtained by projection has distortion caused by projection.
- When an object moving in front of a stationary background is imaged with a video camera and converted into an image signal, the real-world information is sampled and converted into data, so the image displayed on the basis of the image signal exhibits motion blur, in which the moving object appears blurred, as a distortion caused by the projection.
- In a conventional approach, the image object corresponding to the foreground object is coarsely extracted by detecting the outline of the image object corresponding to the foreground object included in the input image.
- The motion vector of the coarsely extracted image object corresponding to the foreground object is then detected, and motion blur is reduced using the detected motion vector and its positional information. Disclosure of the invention
- The learning device according to the present invention includes: a motion amount setting unit that sets a motion amount; a motion direction setting unit that sets a motion direction; a student image generation unit that generates a student image by adding motion blur to a teacher image on the basis of the motion amount and the motion direction; a prediction tap extraction unit that extracts at least the pixel values of pixels in the student image whose spatial positions are substantially the same as the spatial position of the target pixel in the teacher image; and a coefficient generation unit that, at least for each motion direction, generates a processing coefficient for predicting the target pixel in the teacher image from the pixel values of the pixels extracted by the prediction tap extraction unit, on the basis of the relationship between the plurality of pixel values extracted by the prediction tap extraction unit and the target pixel in the teacher image. The student image generation unit generates, at a specific ratio, a student image to which motion blur is not added.
- The learning method according to the present invention includes: a motion amount setting step of setting a motion amount; a motion direction setting step of setting a motion direction; a student image generation step of generating a student image by adding motion blur to a teacher image on the basis of the motion amount and the motion direction; a prediction tap extraction step of extracting, for a target pixel of the moving object in which motion blur occurs in the student image, at least the pixel values of pixels in the student image whose spatial positions are substantially the same as the spatial position of the target pixel in the teacher image; and a coefficient generation step of generating, at least for each motion direction, a processing coefficient for predicting the target pixel in the teacher image from the pixel values of the pixels extracted in the prediction tap extraction step, on the basis of the relationship between the extracted pixel values and the target pixel in the teacher image. In the student image generation step, a student image to which motion blur is not added is generated at a specific ratio.
- The learning program according to the present invention causes a computer to execute: a motion amount setting step of setting a motion amount; a motion direction setting step of setting a motion direction; a student image generation step of generating a student image by adding motion blur to a teacher image on the basis of the motion amount and the motion direction, and generating, at a specific ratio, a student image to which motion blur is not added; a prediction tap extraction step of extracting, for a target pixel of the moving object in which motion blur occurs in the student image, at least the pixel values of pixels in the student image whose spatial positions are substantially the same as the spatial position of the target pixel in the teacher image; and a coefficient generation step of generating, at least for each motion direction, a processing coefficient for predicting the target pixel in the teacher image from the pixel values of the pixels extracted in the prediction tap extraction step.
- a student image is generated by adding motion blur to the teacher image based on the set motion amount and motion direction.
- At a specific ratio, the motion amount is set to "0" so that a student image without motion blur is generated, or at least one of the set motion amount and motion direction is changed before the motion blur is added.
- Student images with motion blur added to the teacher image and student images with noise added are generated.
- For the target pixel of the moving object in which motion blur occurs in the student image, at least the pixel values of the pixels in the student image whose spatial positions are substantially the same as the spatial position of the target pixel in the teacher image are extracted.
- When the image signal is in the interlace format, the pixel values of a first plurality of pixels in the student image are extracted, and when it is in the progressive format, the pixel values of a second plurality of pixels in the student image are extracted.
- a processing coefficient for predicting the target pixel in the teacher image is generated based on the pixel value of the extracted pixel.
- Student images to which motion blur is not added are generated at a specific ratio. For this reason, even when motion blur is removed from an image that includes still portions, breakdown of the still image can be prevented. Further, when generating prediction coefficients corresponding to motion blur in a first motion direction, student images having motion blur in a second motion direction close to the first motion direction are used, so that motion blur can be removed satisfactorily even if the motion vector cannot be detected with high accuracy. Furthermore, since processing coefficients are also generated with noise added to the student images, the influence of noise can be reduced by performing learning with noise of the kind generated by the image sensor added. It is also possible to change the amount of blur by changing the amount of noise, or to create a new blur characteristic by adjusting the proportion of student images with different amounts of blur.
- When the image signal is in the interlace format, the pixel values of a first plurality of pixels in the student image are extracted, and when it is in the progressive format, the pixel values of a second plurality of pixels in the student image are extracted. Therefore, motion blur can be removed from an image based on an image signal in either the interlace format or the progressive format. Furthermore, the class of the target pixel is determined according to the activity of the pixel values of the pixels in the student image corresponding to the target pixel in the teacher image, and a processing coefficient is generated for each class, so that motion blur removal can be performed according to the image conditions.
- FIG. 2 is a view for explaining an image taken by the image sensor.
- 3A and 3B are diagrams for explaining the captured image.
- FIG. 4 is a diagram illustrating a time direction dividing operation of pixel values.
- FIGS. 5A and 5B are diagrams for explaining the operation of calculating the pixel value of the target pixel.
- FIG. 6 shows the processing area
- FIG. 7A and FIG. 7B are diagrams showing examples of setting processing areas.
- Fig. 8 is a diagram for explaining temporal mixing of real world variables in the processing area.
- Fig. 9 is a diagram showing the positions of main terms in the spatial direction.
- FIG. 10 is a diagram showing the positions of main terms in the time direction.
- Figures 11A and 11B are diagrams for explaining the relationship between the displacement of the motion vector and the displacement of the main term when the main term in the spatial direction is used.
- Figure 12 is a diagram for explaining the relationship between the displacement of the motion vector and the displacement of the main term when the main term in the time direction is used.
- FIG. 13 is a functional block diagram of the image processing apparatus.
- FIG. 14 is a diagram showing a configuration of an image processing apparatus when software is used.
- FIGS. 15A and 15B are diagrams showing prediction taps.
- FIG. 16 is a flowchart showing image processing.
- Fig. 17 is a functional block diagram of the image processing apparatus (when class determination is performed).
- FIGS. 18A and 18B are diagrams showing class taps.
- FIG. 19 is a diagram for explaining the calculation of activity.
- FIG. 20 is a flowchart showing image processing (when class determination is performed).
- FIG. 21 is a diagram showing a configuration when motion blur removal processing is performed by obtaining processing coefficients by learning.
- FIG. 22 is a functional block diagram of the learning device.
- FIG. 23 is a flowchart showing the learning process.
- Figure 24 shows the functional block diagram of the learning device (when class is determined).
- Fig. 25 is a flowchart showing the learning process (when class is determined)
- FIG. 1 is a block diagram showing a configuration of a system to which the present invention is applied.
- The image sensor 10 generates an image signal DVa obtained by imaging the real world and supplies it to the image processing device 20.
- The image processing device 20 extracts information embedded in the supplied image signal DVa of the input image, and generates and outputs an image signal from which the embedded information has been extracted. Note that the image processing device 20 can also extract the information buried in the image signal DVa by using various information ET supplied from the outside.
- The image sensor 10 is composed of a video camera equipped with a CCD (Charge-Coupled Device) area sensor or a MOS area sensor, which are solid-state imaging devices, and images the real world. For example, as shown in FIG. 2, when the moving object OBf corresponding to the foreground moves in the direction of arrow A between the image sensor 10 and the object OBb corresponding to the background, the image sensor 10 images the object OBf corresponding to the foreground together with the object OBb corresponding to the background.
- the detection element of the image sensor 10 converts the input light into electric charges for a period corresponding to the exposure time, and accumulates the photoelectrically converted electric charges.
- the amount of charge is roughly proportional to the intensity of the input light and the time that the light is input.
- the detection element adds the charge converted from the input light to the already accumulated charge. That is, the detection element integrates the input light for a period corresponding to the exposure time, and accumulates an amount of charge corresponding to the integrated light. It can be said that the detection element has an integral effect with respect to time. In this way, photoelectric conversion is performed by the image sensor, and the input light is converted into electric charges in units of pixels and accumulated in units of exposure time.
- a pixel signal is generated according to the accumulated charge amount, and a desired frame is generated using the pixel signal.
- the exposure time of the image sensor is a period in which the light input by the image sensor is converted into electric charge and the electric charge is accumulated in the detection element as described above.
- When the shutter is not operated, the exposure time is equal to the image time interval (one frame period).
- When the shutter is operated, the exposure time is equal to the shutter open time.
- FIG. 3A and FIG. 3B are diagrams for explaining the captured image indicated by the image signal.
- FIG. 3A shows an image obtained by imaging a moving object OB f corresponding to a moving foreground and an object OB b corresponding to a stationary background. It is assumed that the moving object OB f corresponding to the foreground has moved horizontally in the direction of arrow A.
- Fig. 3B shows the relationship between the image and time at the position of line L (shown by a broken line) extending in the direction of arrow A as shown in Fig. 3A.
- When the length of the moving object OBf in the movement direction is, for example, 9 pixels and the object moves 5 pixels during one exposure period, the front end that was at pixel position P21 and the rear end that was at pixel position P13 at the start of the frame period end the exposure period at pixel positions P25 and P17, respectively.
- When the shutter is not operated, the exposure period in one frame is equal to one frame period, so that at the start of the next frame period the front end is at pixel position P26 and the rear end is at pixel position P18.
- the background area includes only the background component.
- Pixel positions P17 to P21 are foreground regions having only foreground components.
- the pixel positions P13 to P16 and the pixel positions P22 to P25 are mixed regions in which the background component and the foreground component are mixed.
- the mixed area is classified into a covered background area where the background component is covered with the foreground as time passes, and an uncovered background area where the background component appears as time elapses.
- The mixed region located on the front end side of the foreground object OBf in the direction of travel is the covered background region, and the mixed region located on the rear end side is the uncovered background region.
- In this way, the image signal contains an image that includes a foreground region, a background region, and a covered background region or an uncovered background region.
- Here, assuming that the image time interval is short and that the moving object OBf corresponding to the foreground is a rigid body moving at a constant speed, the pixel values in line L are divided in the time direction.
- In this time-direction division operation, shown in Fig. 4, the pixel values are expanded in the time direction and divided into equal time intervals according to the number of virtual divisions.
- the vertical direction corresponds to time, indicating that time passes from top to bottom in the figure.
- The number of virtual divisions is set according to the motion amount v, within the image time interval, of the moving object corresponding to the foreground. For example, when the motion amount v in one frame period is 5 pixels as described above, the number of virtual divisions is set to "5" corresponding to the motion amount v, and one frame period is divided into 5 equal time intervals.
- The pixel value obtained during one frame period at pixel position Px when only the object OBb corresponding to the background is imaged is denoted Bx, and the pixel values obtained for each pixel when the moving object OBf corresponding to the foreground, whose length in line L is 9 pixels, is imaged while stationary are denoted F09 (front end side) to F01 (rear end side).
- the pixel value DP14 at the pixel position P14 is expressed by Expression (1).
- At pixel position P14, the background component includes three virtual division times (frame period/v) and the foreground component includes two virtual division times, so the mixture ratio α of the background component to the pixel value is (3/5). Similarly, at pixel position P22, the background component includes one virtual division time and the foreground component includes four virtual division times, so the mixture ratio α is (1/5). Because different foreground components are thus added together within one exposure time, the foreground region corresponding to the moving object includes motion blur. For this reason, the image processing device 20 extracts the significant information buried in the image signal DVa and generates an image signal DVout from which the motion blur of the moving object OBf corresponding to the foreground has been removed.
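As an illustration of this time-division mixing model, the following is a minimal sketch (not taken from the patent) that simulates one line of blurred pixel values from background values Bx and foreground components F01 to F09; the function name, the placement parameter `start`, and the assumption that the object advances exactly one pixel per virtual division time are hypothetical choices made for the example.

```python
import numpy as np

def blur_line(B, F, v, start):
    """Observed pixel values DPx for one line under the time-division model.

    B     : background values Bx for each pixel position (1-D array)
    F     : foreground components F01..F0L, rear end first (length L)
    v     : motion amount per frame = number of virtual divisions
    start : pixel index of the object's rear end at the frame start
    """
    n, L = len(B), len(F)
    DP = np.zeros(n)
    for x in range(n):
        total = 0.0
        for t in range(v):                 # one virtual division time each
            rear = start + t               # object advances 1 pixel per division
            if rear <= x < rear + L:
                total += F[x - rear]       # foreground component
            else:
                total += B[x]              # background component Bx
        DP[x] = total / v                  # each division contributes 1/v
    return DP

# Example: a 9-pixel object moving 5 pixels over a uniform background.
# A pixel covered by the object for only part of the exposure becomes a mixed
# pixel; e.g. 3 background and 2 foreground divisions give mixture ratio 3/5.
B = np.full(40, 10.0)
F = np.linspace(20.0, 28.0, 9)             # F01 (rear) .. F09 (front)
DP = blur_line(B, F, v=5, start=13)
```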
- the calculation operation of the pixel value of the target pixel on the image will be described with reference to FIG.
- The pixel at pixel position P47, which contains the target pixel component F29/v, is set as the target pixel to be processed.
- The pixel value F29 of the target pixel can then be calculated using the pixel values DP44 and DP45 at pixel positions P44 and P45 together with the pixel value F24, or using the pixel values DP49 and DP50 at pixel positions P49 and P50 together with the pixel value F34.
- In the same way, the pixel value F29 can be calculated using the pixel values DP39, DP40, DP44, and DP45 at pixel positions P39, P40, P44, and P45 together with the pixel value F19.
- The pixel value F34 can be obtained in the same way.
- In this way, the positions of the pixels for which differences are taken appear repeatedly at intervals of the motion amount v. That is, the pixel value F29 of the target pixel can be calculated using the pixel values at the pixel positions ..., P39, P40, P44, P45, P49, P50, ... used for the difference calculation as described above.
- Next, the case where the pixel value of the target pixel is calculated from a model formula will be described with reference to FIG. 6.
- a processing area of (2N + 1) pixels is set in the movement direction around the target pixel Pna.
- Fig. 7 A and Fig. 7 B show examples of processing area settings.
- For example, when the direction of the motion vector is horizontal, as shown by arrow A, for a pixel of the moving object OBf from which motion blur is to be removed, the processing area WA is set in the horizontal direction as shown in Fig. 7A.
- When the direction of the motion vector is diagonal, the processing area WA is set in the corresponding angular direction, as shown in Fig. 7B.
- the pixel value corresponding to the pixel position of the processing area is obtained by interpolation or the like.
- The real-world variables (Y-8, ..., Y0, ..., Y8) are mixed in the time direction (here N = 6, where N is the number of pixels of the processing width on either side of the target pixel).
- The pixel values of the pixels constituting the processing area are denoted X-N, X-N+1, ..., X0, ..., XN.
- The pixel value Xt indicates the pixel value at pixel position Pt.
- The constant h is the integer part of the motion amount v multiplied by 1/2 (that is, the value rounded down after the decimal point).
- Equation (7) shows the case where the processing area is set as shown in Fig. 8, and is derived from Equation (5).
- The superscript T indicates a transposed matrix.
- In matrix notation, this can be written AY = X + e, where e is the error term.
- Equation (10), which minimizes the sum of squared errors, can then be obtained.
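The least-squares step can be sketched as follows. This is a minimal illustration under an assumed layout of the mixing matrix A (each observed pixel averaging v consecutive real-world variables); the function name and argument conventions are hypothetical, not the patent's formulation.

```python
import numpy as np

def estimate_real_world(X, v, N):
    """Least-squares estimate of the real-world variables Y from the observed
    (blurred) processing-area pixel values X, minimizing |AY - X|^2."""
    h = v // 2                         # integer part of v/2
    n_obs = 2 * N + 1                  # observed pixels X-N..XN
    n_var = 2 * (N + h) + 1            # real-world variables Y-(N+h)..Y(N+h)
    A = np.zeros((n_obs, n_var))
    for i in range(n_obs):
        A[i, i:i + v] = 1.0 / v        # each pixel mixes v consecutive variables
    # least-squares solution, equivalent to Y = (A^T A)^{-1} A^T X
    Y, *_ = np.linalg.lstsq(A, np.asarray(X, dtype=float), rcond=None)
    return Y
```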
- the processing coefficient used for each pixel has a shape as shown in FIG. 5B. That is, the absolute value of the processing coefficient for the pixel at the pixel position used for the difference calculation is larger than the coefficients for the other pixels.
- the pixel at the pixel position used for the difference calculation is the main term.
- As shown in Fig. 9, the positions of the main terms in the spatial direction, with the target pixel Pna as the reference, are the pixel positions separated from it by the motion amount in the motion direction.
- the main term MCal indicates the main term closest to the target pixel in the direction of motion
- The main term MCbl indicates the main term closest to the target pixel in the direction opposite to the motion direction.
- As shown in Fig. 10, the positions of the main terms in the time direction overlap at the same pixel position across multiple images. Focusing on the main terms MCal and MCbl mentioned above, the position of the main term MCal on the (t-1) frame image coincides with the position of the main term MCbl on the (t) frame image. Therefore, the pixel position that is substantially the same as that of the main terms MCal and MCbl, at the phase in the middle between the (t-1) frame and the (t) frame, corresponds to the target pixel Pna. To be precise, the main term MCal corresponds to the pixels having pixel values X-3 and X-2, and the main term MCbl corresponds to the pixels having pixel values X2 and X3, so the spatial position of the target pixel Pna lies in the middle between the pixel having pixel value X-2 (or X-3) and the pixel having pixel value X2 (or X3).
- When, as in Fig. 11A, only the main terms in the spatial direction are used and the pixel position midway between the main term MCal and the main term MCbl existing in the spatial direction is set as the output position of the target pixel Pna after motion blur removal, then if the motion vector of the target pixel Pna is not detected accurately, the positions of the main terms MCal, MCbl, and so on fluctuate significantly, as indicated by the broken lines, and the motion blur of the target pixel Pna cannot be removed accurately.
- On the other hand, the position of the main term MCbl is only slightly affected by the motion vector detection accuracy, but the positions of the remaining main terms such as MCal are greatly affected, as indicated by the broken line; therefore, if the motion vector of the target pixel Pna is not detected accurately, the motion blur of the target pixel Pna cannot be removed accurately.
- Also, when the pixel position midway between the main term MCal and the main term MCbl is set as the output position of the target pixel Pna after motion blur removal, as in Fig. 11A, the positions of the main terms in the time direction fluctuate, and motion blur cannot be removed accurately. Therefore, as shown in Fig. 12, the main terms in the time direction are used, and the pixel value of the target pixel Pna after motion blur removal is generated with its output phase set in the middle between frames.
- Fig. 13 shows a functional block diagram of an image processing device that performs motion blur removal in this way, using the main terms in the space-time directions, with the output phase of the target pixel Pna after motion blur removal set in the middle between frames. Each function of the image processing device may be realized either by hardware or by software; that is, the functional blocks shown in Fig. 13 can be realized by hardware or by software.
- When the functions are realized by software, as shown in Fig. 14, the CPU (Central Processing Unit) 201 executes various processes in accordance with programs stored in the ROM (Read Only Memory) 202 or in the storage unit 208, in which programs for realizing each function of the image processing device are stored. The RAM (Random Access Memory) 203 stores programs executed by the CPU 201 and data as appropriate.
- the CPU 201, ROM 202, and RAM 203 are connected to each other by a bus 204.
- an input / output interface 205 is connected to the CPU 201 via a bus 204.
- the input / output interface 205 is connected to an input unit 206 composed of a keyboard, a mouse, a microphone, and the like, and an output unit 207 composed of a display, a speaker, and the like.
- the CPU 201 executes various processes in response to commands input from the input unit 206. Then, the CPU 201 outputs an image, sound, or the like obtained as a result of the processing to the output unit 207.
- the storage unit 208 connected to the input / output interface 205 is composed of, for example, a hard disk, and stores programs executed by the CPU 201 and various data.
- a communication unit 209 communicates with an external device via the Internet or other networks.
- the communication unit 209 functions as an acquisition unit that captures the output of the sensor.
- the program may be acquired via the communication unit 209 and stored in the storage unit 208.
- When a recording medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is loaded, the drive 210 connected to the input/output interface 205 acquires the programs and data recorded on the recording medium. The acquired programs and data are transferred to and stored in the storage unit 208 as necessary.
- The image signal DVa supplied to the image processing device 20 is supplied to the direction detection processing unit 321 of the motion direction detection unit 32 and to the first image holding unit 331 and the second image holding unit 332 of the peripheral image holding unit 33.
- the target pixel setting unit 31 sets the target pixel Pna in the target image to be predicted.
- As described above, at the phase between the (t-1) frame and the (t) frame, the pixel value after motion blur removal can be obtained for the target pixel, which is located at substantially the same pixel position as the main term MCal on the (t-1) frame image and the main term MCbl on the (t) frame image. Therefore, when the image signal DVa is a progressive-format signal, the target pixel setting unit 31 sets the target image to be predicted as the image located in the middle between the (t-1) frame and the (t) frame.
- The direction detection processing unit 321 detects a motion vector for each pixel based on the image signal DVa, and supplies motion direction information vda, indicating the motion direction of the target pixel set by the target pixel setting unit 31, to the motion direction selection unit 322.
- This direction detection processing unit 321 can detect a motion vector for each pixel by using a method such as a block matching method or a gradient method.
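As an illustration of the block matching method mentioned here, the following is a minimal exhaustive-search sketch using the sum of absolute differences; the block size, search range, and function name are assumptions made for the example and are not taken from the patent.

```python
import numpy as np

def block_match(prev, cur, y, x, block=8, search=4):
    """Motion vector at (y, x) by exhaustive block matching between the
    previous and current frames, using the sum of absolute differences."""
    h, w = cur.shape
    y0, y1 = max(y - block // 2, 0), min(y + block // 2, h)
    x0, x1 = max(x - block // 2, 0), min(x + block // 2, w)
    ref = cur[y0:y1, x0:x1].astype(float)
    best_sad, best_vec = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy0, yy1, xx0, xx1 = y0 + dy, y1 + dy, x0 + dx, x1 + dx
            if yy0 < 0 or xx0 < 0 or yy1 > h or xx1 > w:
                continue                      # candidate block out of bounds
            cand = prev[yy0:yy1, xx0:xx1].astype(float)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    # (dy, dx) locates the best match in the previous frame; the motion from
    # the previous frame to the current frame is the opposite displacement.
    return -best_vec[0], -best_vec[1]
```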
- The motion direction selection unit 322 can also receive motion direction information vdb indicating a motion direction as information ET from the outside; it selects either the motion direction information vda supplied from the direction detection processing unit 321 or the externally input motion direction information vdb, and supplies the selected motion direction information vd to the pixel value generation unit 38a.
- The first image holding unit 331 and the second image holding unit 332 of the peripheral image holding unit 33 are configured using memory; the first image holding unit 331 holds the (t-1) frame image, which is a peripheral image, and the second image holding unit 332 holds the (t) frame image, which is a peripheral image.
- The pixel value extraction unit 36 extracts the main terms that mainly contain the component of the target pixel. To do so, it extracts, from the peripheral images held in the first image holding unit 331 and the second image holding unit 332, at least the pixels at positions substantially the same as the spatial position of the target pixel Pna, and supplies them to the pixel value generation unit 38a as the prediction tap Ca.
- Figures 15A and 15B show the prediction tap C a.
- When the image signal DVa is in the progressive format, 21 pixels are extracted as the prediction tap from each of the (t-1) frame and (t) frame images, which are peripheral images, with reference to the pixel at the same position as the spatial position of the target pixel Pna, as shown in Fig. 15A.
- When the image signal DVa is in the interlace format, as shown in Fig. 15B, 21 pixels are extracted as the prediction tap from the (t) field image, which is a peripheral image, with reference to the pixel at the same position as the spatial position of the target pixel Pna, and 17 pixels are extracted as the prediction tap from the (t-1) field image, which is a peripheral image, with reference to the same spatial position.
- The processing coefficient setting unit 37a stores in advance the processing coefficients used for blur removal processing, and supplies to the pixel value generation unit 38a the set of processing coefficients da corresponding to the motion direction selected by the motion direction selection unit 322. Further, when adjustment information BS that enables adjustment of motion blur is supplied as information ET from the outside, the processing coefficient setting unit 37a switches the processing coefficients da supplied to the pixel value generation unit 38a based on the adjustment information BS.
- By switching the supplied processing coefficients da in this way, the motion blur removal effect is adjusted. For example, even if motion blur is not removed optimally with the processing coefficients supplied first, it can be removed optimally by switching the processing coefficients. It is also possible to leave motion blur intentionally by switching the processing coefficients.
- Based on the motion direction information vd selected by the motion direction selection unit 322, the pixel value generation unit 38a calculates, from the pixel values of the pixels extracted from the first image holding unit 331 by the pixel value extraction unit 36, the pixel values in the motion direction corresponding to the processing coefficients da supplied from the processing coefficient setting unit 37a, and performs a product-sum operation on the calculated pixel values and the processing coefficients da to generate a pixel value. Likewise, using the pixel values of the pixels extracted from the second image holding unit 332, it calculates the pixel values in the motion direction corresponding to the processing coefficients da supplied from the processing coefficient setting unit 37a, and performs a product-sum operation with the processing coefficients da to generate a pixel value. By combining these two pixel values, the pixel value of the target pixel is generated and output as the image signal DVout.
- FIG. 16 shows a flowchart when image processing is performed by software.
- In step ST1, the CPU 201 sets a target pixel from which motion blur is to be removed, and proceeds to step ST2.
- In step ST2, the CPU 201 detects the motion direction of the target pixel and proceeds to step ST3.
- In step ST3, the CPU 201 performs pixel value extraction and extracts the pixel values of the prediction taps set in the peripheral images. That is, in order to extract the main terms that mainly contain the component of the target pixel of the moving object, the CPU 201 extracts pixel values using at least the pixels in the peripheral images whose spatial positions are substantially the same as that of the target pixel as the prediction tap.
- In step ST4, the CPU 201 sets a processing coefficient corresponding to the motion direction detected in step ST2, and proceeds to step ST5.
- In step ST5, the CPU 201 performs blur removal processing on each frame. That is, the CPU 201 performs an arithmetic operation on the pixel values of the prediction tap extracted in step ST3 and the processing coefficient set in step ST4, calculates the pixel value from which blur has been removed, and proceeds to step ST6.
- In step ST6, the CPU 201 determines whether or not the blur removal processing has been completed for the entire screen. If there are pixels for which blur removal has not yet been performed, the process returns to step ST1; when blur removal has been completed for the entire screen, the process ends.
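The per-pixel processing just described can be sketched as follows. This is a simplified illustration, not the patent's implementation: the coefficient lookup is keyed only by a quantized motion direction, the taps are fixed spatial offsets, and the two per-frame results are combined by simple averaging, all of which are assumptions.

```python
import numpy as np

def remove_blur(prev_frame, cur_frame, motion_dirs, coeff_table, tap_offsets):
    """Per-pixel motion blur removal: a product-sum of prediction-tap pixel
    values and processing coefficients selected by motion direction."""
    h, w = cur_frame.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = coeff_table[motion_dirs[y, x]]      # coefficients for this direction
            vals = []
            for frame in (prev_frame, cur_frame):   # (t-1) and (t) peripheral images
                acc = 0.0
                for (dy, dx), dv in zip(tap_offsets, d):
                    yy = min(max(y + dy, 0), h - 1)  # clamp taps at image borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += dv * frame[yy, xx]        # product-sum operation
                vals.append(acc)
            out[y, x] = 0.5 * (vals[0] + vals[1])    # combine the two frame results
    return out
```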
- the processing coefficient is set based on the motion direction selected by the motion direction selection unit 322, but the class determination is performed using not only the motion direction but also the signal level of the image. If the processing coefficient is selected according to the determined class and supplied to the pixel value generation unit, the motion blur removal process can be performed with higher accuracy.
- FIG. 17 shows a functional block diagram of an image processing apparatus that performs class determination. Note that portions corresponding to those in FIG. 13 are denoted by the same reference numerals, and detailed description thereof is omitted.
- The class tap extraction unit 351 of the class determination unit 35 extracts class taps from the peripheral images held in the first image holding unit 331 and the second image holding unit 332, with reference to the spatial position corresponding to the target pixel, and supplies the extracted class tap TPa to the class classification unit 352.
- Figures 18A and 18B show class taps.
- When the image signal is in the progressive format, the class tap extraction unit 351 extracts class taps from the (t-1) frame and (t) frame images, which are peripheral images, with reference to the spatial position corresponding to the target pixel Pna, as shown in Fig. 18A.
- When the image signal is in the interlace format, the class tap extraction unit 351 extracts, as shown in Fig. 18B, 9 pixels as a class tap from the (t) field image, which is a peripheral image, including the pixel at the spatial position corresponding to the target pixel Pna and the pixels adjacent to it, and extracts 12 pixels as a class tap from the (t-1) field image, which is a peripheral image, including the pixel that overlaps the spatial position corresponding to the target pixel Pna and the pixels adjacent to it.
- the class classification unit 352 performs class classification based on the motion direction information vd supplied from the motion direction detection unit 32 and the class tap TPa extracted by the class tap extraction unit 351, determines a class code KA, and processes coefficients. Supply to setting unit 37b.
- When class classification is performed using the class tap TPa extracted by the class tap extraction unit 351, it is performed based on the activity calculated from the class tap TPa.
- Activity is the sum of the difference values between adjacent pixels and indicates spatial correlation.
- For example, the sum of the differences between adjacent pixels within the 9 pixels (3 pixels × 3 pixels in total) is the activity.
- When the class tap is selected as in the (t-1) field of Fig. 18B, the sum of the differences between adjacent pixels within the 12 pixels (4 pixels × 3 pixels in total) is the activity.
- For example, the activity AC can be calculated on the basis of Equation (13).
- the spatial correlation is high, the value of activity AC is small, and when the spatial correlation is low, the value of activity AC is large.
- The activity class AL is then determined as AL = (ACt / (ACt-1 + ACt)) × 100.
- class code KA is determined based on movement direction information vd and activity class AL.
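A minimal sketch of this activity-based class determination follows; the sum-of-absolute-differences form of the activity, the number of activity classes, and the way the direction index and activity class are combined into a single code are assumptions made for illustration.

```python
import numpy as np

def activity(tap):
    """Sum of absolute differences between horizontally and vertically
    adjacent pixels of a class tap (e.g. a 3x3 or 4x3 block)."""
    tap = np.asarray(tap, dtype=float)
    return (np.abs(np.diff(tap, axis=1)).sum() +
            np.abs(np.diff(tap, axis=0)).sum())

def class_code(vd_index, tap_prev, tap_cur, n_activity_classes=4):
    """Combine a quantized motion-direction index and the activity class AL
    into a single class code KA."""
    ac_prev, ac_cur = activity(tap_prev), activity(tap_cur)
    al = 100.0 * ac_cur / (ac_prev + ac_cur + 1e-8)   # AL = ACt/(ACt-1 + ACt) * 100
    al_class = min(int(al * n_activity_classes / 100.0), n_activity_classes - 1)
    return vd_index * n_activity_classes + al_class
```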
- the pixel value extraction unit 36 extracts at least a pixel at a position substantially the same as the spatial position of the target pixel Pna from the peripheral images held in the first image holding unit 331 and the second image holding unit 332. Then, it supplies the pixel value generator 38b as the prediction tap Ca.
- the processing coefficient setting unit 37 b stores in advance processing coefficients used for blur removal processing for each class code, and selects a processing coefficient db corresponding to the class code KA supplied from the class classification unit 352 to generate a pixel value. Supply to part 38b.
- When the adjustment information BS is supplied, the processing coefficient setting unit 37b adjusts the motion blur removal effect by switching the processing coefficient to be selected based on the adjustment information BS. For example, even if motion blur is not removed optimally with the processing coefficient db corresponding to the class code KA, it can be removed optimally by switching the processing coefficient. It is also possible to leave motion blur intentionally by switching the processing coefficient.
- The pixel value generation unit 38b performs arithmetic processing on the prediction tap Ca supplied from the pixel value extraction unit 36 and the processing coefficient db supplied from the processing coefficient setting unit 37b, and generates the pixel value of the target pixel in the target image.
- the pixel value is generated by performing a product-sum operation on the prediction tap extracted from the surrounding image held in the first image holding unit 331 and the processing coefficient.
- the pixel value is generated by performing a product-sum operation on the prediction tap extracted from the peripheral image held in the second image holding unit 332 and the processing coefficient. By combining these two pixel values, the pixel value of the target pixel is generated and output as the image signal DVout.
- FIG. 20 shows a flowchart when image processing is performed by software.
- In step ST11, the CPU 201 sets a target pixel from which motion blur is to be removed, and proceeds to step ST12.
- In step ST12, the CPU 201 detects the motion direction of the target pixel and proceeds to step ST13.
- In step ST13, the CPU 201 determines the class of the target pixel. In this class determination, the class code is determined by performing class classification based on the motion direction of the target pixel and on the pixel values of the class tap set in the peripheral images with reference to the spatial position of the target pixel.
- In step ST14, the CPU 201 performs pixel value extraction and extracts the pixel values of the prediction taps set in the peripheral images. That is, in order to extract the main terms that mainly contain the component of the target pixel of the moving object, the CPU 201 extracts pixel values using at least the pixels in the peripheral images whose spatial positions are substantially the same as that of the target pixel as the prediction tap.
- In step ST15, the CPU 201 sets a processing coefficient corresponding to the motion direction detected in step ST12 and the class determined in step ST13, and proceeds to step ST16.
- In step ST16, the CPU 201 performs blur removal processing on each frame. That is, the CPU 201 performs an arithmetic operation on the pixel values of the prediction tap extracted in step ST14 and the processing coefficient set in step ST15, calculates the pixel value from which blur has been removed, and proceeds to step ST17.
- In step ST17, the CPU 201 determines whether or not the blur removal processing has been completed for the entire screen. If there are pixels for which blur removal has not yet been performed, the process returns to step ST11; when blur removal has been completed for the entire screen, the process ends.
- Fig. 21 shows the configuration in the case where the processing coefficient is obtained by learning and the blur removal process is performed.
- the learning device 60 performs learning processing using the teacher image and the student image obtained by adding motion blur to the teacher image, and stores the processing coefficient obtained by the learning in the processing coefficient setting unit of the image processing device 20.
- the image processing apparatus 20 selects a prediction tap from an image including motion blur that becomes an input image so as to include at least a pixel at a position substantially the same as the spatial position of the pixel of interest, and the pixel value of the prediction tap and the processing coefficient Using the processing coefficient stored in the setting unit, the pixel value of the target pixel that has been subjected to arithmetic processing and subjected to blur removal is obtained.
- As the teacher image, an image captured using a high-speed imaging camera or a still image is used.
- When an image taken with the high-speed imaging camera is used, the student image is generated by time integration of the captured image, treated as a still image.
- When a still image is used, the motion direction and motion amount are set, and the student image is generated by adding motion blur and noise to the still image according to the set motion direction and motion amount.
- the motion setting unit 61 sets a motion direction and a motion amount, and supplies motion information MH indicating the set motion direction and motion amount to the student image generation unit 62.
- a plurality of movement directions are set with a predetermined angle difference.
- a plurality of different motion amounts may be set for each motion direction.
- The motion blur adding unit 621 of the student image generation unit 62 adds motion blur to the teacher image according to the motion direction and motion amount indicated by the motion information MH, and supplies the result to the full-screen moving unit 622.
- The full-screen moving unit 622 generates a student image by moving the entire teacher image to which the motion blur has been added, in the motion direction by the motion amount, based on the motion information MH.
- Here, the student image generation unit 62 generates, at a specific ratio, student images to which motion blur is not added.
- Processing coefficients are generated while switching the generation ratio of student images having no motion blur and the generation ratio of student images having motion blur in the second motion direction.
- Among the plurality of student images for the motion direction indicated by the motion information MH, student images generated by adding motion blur and performing full-screen movement in a second motion direction close to the first motion direction indicated by the motion information MH are included at a specific ratio.
- Likewise, among the plurality of student images having the motion direction and motion amount indicated by the motion information MH, student images generated by adding motion blur or performing full-screen movement according to a motion direction or motion amount different from that indicated by the motion information MH are included at a specific ratio.
- If processing coefficients are generated while switching the ratio at which motion directions and motion amounts different from those indicated by the motion information MH are included, and the user can select a desired processing coefficient, then motion blur removal processing can be performed according to the user's preference.
- For example, by selecting processing coefficients generated with an increased proportion of student images whose motion amount is "0", motion blur removal for still image portions can be performed with higher accuracy. In this way, motion blur can be removed with high accuracy.
- When student images are generated by the student image generation unit 62, they are generated so that the phase in the middle of the two student images corresponds to the teacher image.
- That is, the teacher image to which motion blur has been added is moved by 1/2 of the motion amount in the direction opposite to the motion direction indicated by the motion information MH to generate a first student image corresponding to, for example, the (t-1) frame image, and the teacher image to which motion blur has been added is moved by 1/2 of the motion amount in the motion direction indicated by the motion information MH to generate a second student image corresponding to, for example, the (t) frame image.
- In this way, the teacher image corresponds to the image of interest, and the student images correspond to the peripheral images.
- The first student image generated by the full-screen moving unit 622 is stored in the first image holding unit 623, and the second student image generated by the full-screen moving unit 622 is stored in the second image holding unit 624.
- The noise component adding units 625 and 626 superimpose in advance, on the first and second student images, noise NZ of the kind superimposed on the image signal DVa, so that the processing coefficients are learned in a way that allows motion blur removal to be performed without being affected by noise even when noise is superimposed on the image signal DVa. By performing learning with the noise component adding units 625 and 626 in this way, the influence of noise is smaller than when learning is performed without them, and motion blur removal processing can be performed with high accuracy. In addition, the amount of blur can be changed by adjusting the amount of noise.
- As the noise, for example, a subject of uniform brightness is photographed repeatedly with a digital camera or video camera, a reference image is generated by accumulating the captured images, and the noise obtained by subtracting this reference image from each captured image is used. If this kind of noise is used, motion blur can be removed more effectively for images actually captured by such cameras.
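A minimal sketch of this student-image generation pipeline (blur, full-screen shift by about half the motion amount in each direction, optional zero-motion students, noise superposition) follows. It works on a single 1-D line for simplicity; the box-filter blur, the shift amounts, the `p_still` ratio, and the noise sampling are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def make_student_pair(teacher, v, rng, p_still=0.1, noise_bank=None):
    """Generate a (t-1)/(t) student image pair from one teacher image line so
    that the teacher lies at the middle phase of the two students."""
    if rng.random() < p_still:
        v = 0                                    # no-blur student at a specific ratio
    if v > 0:
        kernel = np.ones(v) / v                  # motion blur: box filter of length v
        blurred = np.convolve(teacher, kernel, mode="same")
    else:
        blurred = np.asarray(teacher, dtype=float).copy()
    s1 = np.roll(blurred, -(v // 2))             # shifted ~v/2 against the motion
    s2 = np.roll(blurred, v - v // 2)            # shifted ~v/2 along the motion
    if noise_bank is not None:                   # superimpose camera-like noise
        s1 = s1 + noise_bank[rng.integers(len(noise_bank))]
        s2 = s2 + noise_bank[rng.integers(len(noise_bank))]
    return s1, s2

rng = np.random.default_rng(0)
teacher = rng.random(64)
s_prev, s_cur = make_student_pair(teacher, v=5, rng=rng)
```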
- The prediction tap extraction unit 64 extracts the prediction tap Ca from the first and second student images generated by the student image generation unit 62, in the same manner as the pixel value extraction unit 36 described above, and supplies the pixel values of the prediction tap to the normal equation generation unit 651.
- The normal equation generation unit 651 of the processing coefficient generation unit 65 generates a normal equation for each motion direction from the pixel values of the prediction tap Ca extracted by the prediction tap extraction unit 64 and the pixel values of the teacher image, and supplies it to the coefficient determination unit 652.
- The coefficient determination unit 652 calculates processing coefficients for the student images based on the normal equations supplied from the normal equation generation unit 651, and stores the processing coefficients obtained for each motion direction in the processing coefficient setting unit 37a.
- the normal equation generation unit 651 and the coefficient determination unit 652 will be further described.
- The pixel value generation unit 38a described above performs, for example, the linear combination shown in Equation (15) using the pixel values of the prediction tap extracted by the pixel value extraction unit 36 and the processing coefficients supplied from the processing coefficient setting unit 37a, and generates pixel values after blur removal processing for each peripheral image.
- In Equation (15), q′ represents the pixel value of the pixel after blur removal.
- ci (where i is an integer from 1 to n indexing each pixel in the processing range) represents the pixel values of the processing area.
- di represents a processing coefficient.
- Each processing coefficient di is an undetermined coefficient before learning.
- Processing coefficient learning is performed by inputting the pixels of multiple teacher images (still images).
- When there are m pixels of the teacher image and the pixel data of those m pixels is written as qk (k is an integer from 1 to m), the following Equation (16) is set up from Equation (15).
- By performing the calculation of its right-hand side, Equation (16) yields a pixel value qk′ after blur removal that is substantially, but not exactly, equal to the actual pixel value qk without motion blur. This is because the pixel value after blur removal, which is the calculation result of the right-hand side, does not exactly match the pixel value of the target pixel in an actual image without motion blur and contains a certain error.
- Accordingly, a processing coefficient di that minimizes this error is considered the optimum coefficient for bringing the pixel value qk′ after blur removal close to the pixel value without motion blur. Therefore, the optimal processing coefficient di is determined, for example, by the method of least squares using the m pixel values qk collected by learning (where m is an integer greater than n).
- The normal equation obtained when the processing coefficient di on the right-hand side of Equation (16) is determined by the method of least squares can be expressed as Equation (17).
- The processing coefficient di can be determined by solving the normal equation shown in Equation (17). Specifically, if the matrices of the normal equation shown in Equation (17) are defined as in the following Equations (18) to (20), the normal equation is expressed as the following Equation (21).
- Each component of the matrix DMAT is the processing coefficient di to be obtained. Therefore, once the left-side matrix CMAT and the right-side matrix QMAT are determined in Equation (21), the matrix DMAT (that is, the processing coefficients) can be calculated by matrix solution. Specifically, as shown in Equation (18), each component of the matrix CMAT can be calculated if the prediction taps cik are known. Since the prediction tap extraction unit 64 extracts the prediction taps cik, the normal equation generation unit 651 can calculate each component of the matrix CMAT using the prediction taps supplied from the prediction tap extraction unit 64.
- Similarly, as shown in Equation (20), each component of the matrix QMAT can be calculated if the prediction taps cik and the still-image pixel values qk are known; the prediction taps cik are the same as those contained in each component of the matrix CMAT.
- The coefficient determination unit 652 calculates the processing coefficients di, which are the components of the matrix DMAT of Equation (19). Specifically, the normal equation of Equation (21) can be transformed into the following Equation (22).
- In Equation (22), each component of the matrix DMAT on the left side is the processing coefficient di to be obtained.
- Each component of the matrices CMAT and QMAT is supplied from the normal equation generation unit 651. Therefore, when these components are supplied from the normal equation generation unit 651, the coefficient determination unit 652 calculates the matrix DMAT by performing the matrix operation on the right-hand side of Equation (22), and stores the calculation result (the processing coefficients di) in the processing coefficient setting unit 37a. If the above learning is performed while switching the motion direction set by the motion setting unit 61, processing coefficients for predicting the target pixel in the teacher image from the pixel values of a plurality of sets of prediction taps, based on the relationship between the pixel values of the prediction taps and the target pixel in the teacher image, can be stored in the processing coefficient setting unit at least for each motion direction.
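The normal-equation solution can be sketched with NumPy as follows; this is a minimal illustration in which C_MAT = CᵀC and Q_MAT = Cᵀq are formed directly from the collected samples, and the function and variable names are assumptions made for the example.

```python
import numpy as np

def learn_coefficients(taps, targets):
    """Solve the normal equation C_MAT d = Q_MAT for the processing coefficients d.

    taps    : (m, n) array; row k holds the prediction-tap pixel values c_ik
              extracted from the student images for teacher pixel k
    targets : (m,) array of blur-free teacher pixel values q_k
    """
    C = np.asarray(taps, dtype=float)
    q = np.asarray(targets, dtype=float)
    C_MAT = C.T @ C                      # sum over k of c_ik * c_jk
    Q_MAT = C.T @ q                      # sum over k of c_ik * q_k
    d = np.linalg.solve(C_MAT, Q_MAT)    # D_MAT = C_MAT^{-1} Q_MAT
    return d

# In practice, sample pairs would be collected and solved separately for each
# motion direction (and, in the class-determination variant, for each class code).
```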
- FIG. 23 is a flowchart showing the learning process executed by the learning device. In step ST21, the motion amount and motion direction for the processing coefficients to be generated are set.
- In step ST22, motion blur addition is performed, and motion blur is added to the teacher image according to the motion amount set in step ST21.
- In step ST23, full-screen movement is performed; the teacher image to which motion blur was added in step ST22 is moved across the whole screen based on the motion amount and motion direction set in step ST21, and a student image corresponding to a peripheral image is generated.
- In this student image generation, at a specific ratio, at least one of the set motion amount and motion direction is changed, and motion blur addition and full-screen movement are performed on the teacher image based on the changed motion amount and motion direction to generate student images. In addition, student images with the motion amount set to "0" are generated at a specific ratio.
- In step ST24, noise addition processing is performed, and noise is added to the student images.
- In step ST25, prediction taps are extracted from the student images to which noise has been added.
- In step ST26, a normal equation is generated at least for each motion direction using the teacher image and the extracted prediction taps.
- In step ST27, the normal equation is solved to generate processing coefficients.
- In step ST28, it is determined whether or not processing has been performed for the entire screen. If processing has not been performed for the entire screen, the processing from step ST21 is repeated for new pixels; when all pixels have been processed, the learning process ends.
- By setting the motion amount to "0" and including student images with no motion blur in the learning data in this way, robustness against breakdown of still images can be improved, and even if the detected motion vector contains an error, breakdown of the still image can be prevented when motion blur is removed.
- In addition, by adjusting the amount of noise added when generating the processing coefficients, the amount of blur can be changed, or the proportion of student images with different amounts of blur can be adjusted to create a new blur characteristic.
- FIG. 24 A functional block diagram of 60 is shown in FIG. In FIG. 24, parts corresponding to those in FIG. 22 are given the same reference numerals, and detailed descriptions thereof are omitted.
- the class determination unit 63 determines the class code KB for the pixel of interest in the same manner as the class determination unit 35 described above, and supplies it to the normal equation generation unit 65 1 of the processing coefficient generation unit 65.
- the prediction tap extraction unit 6 4 extracts the prediction tap Ca from the first 'and second student images generated by the student image generation unit 62 in the same manner as the pixel value extraction unit 36 described above. Supply the pixel value of the prediction tap to the normal equation generator 6 5 1 d
- the normal equation generation unit 6 5 1 of the processing coefficient generation unit 65 generates a normal equation for each class code from the pixel value of the prediction tap Ca extracted by the prediction tap extraction unit 64 and the pixel value of the teacher image.
- the coefficient determination unit 6 5 2 is supplied.
- the coefficient determination unit 65 2 calculates the processing coefficient based on the normal equation supplied from the normal equation generation unit 65 1, and stores the obtained plurality of sets of processing coefficients in the processing coefficient setting unit 3 7 b. Also, by generating the processing coefficients by switching the movement direction, the processing coefficient setting unit 37 b stores a plurality of sets of processing coefficients according to the movement direction and the class. If the processing coefficient is generated by switching the amount of motion, a processing coefficient with higher accuracy can be obtained.
- Processing coefficients may also be classified according to the noise added by the student image generation unit and stored in the processing coefficient setting unit 37b. If the processing coefficients are classified according to noise in this way, the processing coefficients to be selected can be switched by changing the class based on the adjustment information BS as described above.
- FIG. 25 is a flowchart showing the learning process (when class determination is performed) performed by the learning device.
- In step ST31, the motion amount and motion direction of the processing coefficients to be generated are set.
- In step ST32, motion blur addition is performed, and motion blur is added to the teacher image according to the motion amount set in step ST31.
- In step ST33, full-screen movement is performed: the teacher image to which motion blur was added in step ST32 is moved across the whole screen based on the motion amount and motion direction set in step ST31, and a student image corresponding to the peripheral image is generated.
- In this student image generation, at least one of the set motion amount and motion direction is changed at a specific rate, and motion blur addition and full-screen movement are applied to the teacher image based on the changed motion amount and motion direction to generate student images.
- In addition, student images are generated at a specific rate with the motion amount set to “0”.
- In step ST34, noise addition processing is performed to add noise to the student image.
- In step ST35, class determination processing is performed, and a class code is determined for each pixel using the student image to which noise has been added.
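This passage does not spell out the classification rule itself. A common choice in class-classification adaptive processing is 1-bit ADRC over the class tap; the sketch below shows it purely as a representative way of turning student-image pixel values into a class code, not as the rule mandated here.

```python
import numpy as np

def determine_class_code(class_tap):
    """Derive a class code from the class-tap pixel values with 1-bit ADRC:
    each pixel is quantised against the mid level of the tap's dynamic range
    and the resulting bits are packed into an integer."""
    lo, hi = float(class_tap.min()), float(class_tap.max())
    if hi == lo:
        return 0                      # flat tap: treat as a single class
    mid = (lo + hi) / 2.0
    code = 0
    for value in class_tap:
        code = (code << 1) | int(value >= mid)
    return code
```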
- In step ST36, a prediction tap is extracted from the student image to which noise has been added.
- In step ST37, a normal equation is generated for each class using the teacher image and the extracted prediction taps.
- In step ST38, processing coefficients are generated by solving the normal equations.
- In step ST39, it is determined whether or not processing has been performed on the entire screen. If it has not, the processing from step ST31 is repeated for new pixels. When all pixels have been processed, the learning process ends.
- As described above, the image processing device, the learning device, and the methods according to the present invention are useful for extracting information buried in an image signal obtained by imaging the real world with an image sensor, and are suitable for obtaining an image from which motion blur has been removed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Picture Signal Circuits (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006549084A JP4872672B2 (ja) | 2004-12-21 | 2005-12-21 | 学習装置と学習方法および学習プログラム |
US11/722,141 US7940993B2 (en) | 2004-12-21 | 2005-12-21 | Learning device, learning method, and learning program |
EP05822508A EP1830562A4 (en) | 2004-12-21 | 2005-12-21 | APPARATUS, METHOD AND PROGRAM FOR LEARNING |
CN2005800439785A CN101088281B (zh) | 2004-12-21 | 2005-12-21 | 学习装置和学习方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-369268 | 2004-12-21 | ||
JP2004369268 | 2004-12-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006068289A1 true WO2006068289A1 (ja) | 2006-06-29 |
Family
ID=36601875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/023997 WO2006068289A1 (ja) | 2004-12-21 | 2005-12-21 | 学習装置と学習方法および学習プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US7940993B2 (ja) |
EP (1) | EP1830562A4 (ja) |
JP (1) | JP4872672B2 (ja) |
KR (1) | KR101211074B1 (ja) |
CN (1) | CN101088281B (ja) |
WO (1) | WO2006068289A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100061642A1 (en) * | 2006-09-28 | 2010-03-11 | Sony Corporation | Prediction coefficient operation device and method, image data operation device and method, program, and recording medium |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101098087B1 (ko) * | 2007-03-28 | 2011-12-26 | 후지쯔 가부시끼가이샤 | 화상 처리 장치, 화상 처리 방법, 화상 처리 프로그램을 기록한 컴퓨터 판독 가능한 매체 |
JP5061882B2 (ja) * | 2007-12-21 | 2012-10-31 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム、並びに学習装置 |
JP4548520B2 (ja) * | 2008-07-02 | 2010-09-22 | ソニー株式会社 | 係数生成装置および方法、画像生成装置および方法、並びにプログラム |
JP5526942B2 (ja) * | 2010-03-31 | 2014-06-18 | ソニー株式会社 | ロボット装置、ロボット装置の制御方法およびプログラム |
KR102163573B1 (ko) * | 2018-11-23 | 2020-10-12 | 연세대학교 산학협력단 | 실시간 객체 탐지 시스템 학습을 위한 합성 데이터 생성 장치 및 방법 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001250119A (ja) * | 1999-12-28 | 2001-09-14 | Sony Corp | 信号処理装置および方法、並びに記録媒体 |
JP2002373336A (ja) * | 2001-06-15 | 2002-12-26 | Sony Corp | 画像処理装置および方法、記録媒体、並びにプログラム |
JP2003233817A (ja) * | 2002-02-08 | 2003-08-22 | Sony Corp | 画像処理装置および方法、記録媒体、並びにプログラム |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08147493A (ja) * | 1994-11-15 | 1996-06-07 | Matsushita Electric Ind Co Ltd | アニメーション画像生成方法 |
US6987530B2 (en) * | 2001-05-29 | 2006-01-17 | Hewlett-Packard Development Company, L.P. | Method for reducing motion blur in a digital image |
JP4596216B2 (ja) * | 2001-06-20 | 2010-12-08 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4596224B2 (ja) * | 2001-06-27 | 2010-12-08 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4596227B2 (ja) * | 2001-06-27 | 2010-12-08 | ソニー株式会社 | 通信装置および方法、通信システム、記録媒体、並びにプログラム |
JP4441846B2 (ja) * | 2003-02-28 | 2010-03-31 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
- 2005
- 2005-12-21 KR KR1020077012991A patent/KR101211074B1/ko not_active IP Right Cessation
- 2005-12-21 CN CN2005800439785A patent/CN101088281B/zh not_active Expired - Fee Related
- 2005-12-21 EP EP05822508A patent/EP1830562A4/en not_active Withdrawn
- 2005-12-21 US US11/722,141 patent/US7940993B2/en not_active Expired - Fee Related
- 2005-12-21 WO PCT/JP2005/023997 patent/WO2006068289A1/ja active Application Filing
- 2005-12-21 JP JP2006549084A patent/JP4872672B2/ja not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
KR20070094740A (ko) | 2007-09-21 |
KR101211074B1 (ko) | 2012-12-12 |
EP1830562A1 (en) | 2007-09-05 |
US20080075362A1 (en) | 2008-03-27 |
CN101088281A (zh) | 2007-12-12 |
CN101088281B (zh) | 2011-09-07 |
JP4872672B2 (ja) | 2012-02-08 |
JPWO2006068289A1 (ja) | 2008-06-12 |
EP1830562A4 (en) | 2012-08-01 |
US7940993B2 (en) | 2011-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8452122B2 (en) | Device, method, and computer-readable medium for image restoration | |
JP4646146B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
EP2164040B1 (en) | System and method for high quality image and video upscaling | |
US7710461B2 (en) | Image processing device, image processing method, and image processing program | |
US20120274855A1 (en) | Image processing apparatus and control method for the same | |
CN102819825A (zh) | 图像处理设备和方法、程序以及记录介质 | |
WO2007119680A1 (ja) | 撮像装置 | |
US7751642B1 (en) | Methods and devices for image processing, image capturing and image downscaling | |
US20060192857A1 (en) | Image processing device, image processing method, and program | |
WO2006068292A1 (ja) | 画像処理装置と画像処理方法および画像処理プログラム | |
WO2006068289A1 (ja) | 学習装置と学習方法および学習プログラム | |
JP4867659B2 (ja) | 画像処理装置と学習装置と係数生成装置および方法 | |
US8269888B2 (en) | Image processing apparatus, image processing method, and computer readable medium having image processing program | |
US8213496B2 (en) | Image processing device, image processing method, and image processing program | |
JP7183015B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
JPWO2007026452A1 (ja) | 画像処理装置、及び画像処理方法 | |
JP3209785B2 (ja) | 動きベクトル検出装置 | |
JP2004120603A (ja) | 画像処理方法および装置並びにプログラム | |
JP4250807B2 (ja) | フィールド周波数変換装置および変換方法 | |
JP2011142400A (ja) | 動きベクトル検出装置および方法、映像表示装置、映像記録装置、映像再生装置、プログラムおよび記録媒体 | |
JP2009093676A (ja) | 画像処理装置、画像処理方法、およびプログラム | |
JP3221052B2 (ja) | 画像の手振れ検出装置 | |
WO2023174546A1 (en) | Method and image processor unit for processing image data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020077012991 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006549084 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11722141 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005822508 Country of ref document: EP Ref document number: 200580043978.5 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005822508 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11722141 Country of ref document: US |