WO2022168667A1 - Image processing device and image processing method - Google Patents
Image processing device and image processing method
- Publication number
- WO2022168667A1 (PCT/JP2022/002594)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- recognition
- unit
- resolution
- processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present disclosure relates to an image processing device and an image processing method.
- An image sensor with a built-in DNN (Deep Neural Network) engine is known.
- an application processor outside the image sensor performs object recognition processing.
- an object recognition process is performed by a DNN engine inside the image sensor, and based on the result, an application processor outside the image sensor instructs the DNN engine inside the image sensor of the extraction range of the object region for the captured image. Therefore, a large frame delay occurs until a series of processes including object position detection, object region extraction, and object recognition processing are completed.
- the present disclosure provides an image processing device and an image processing method that enable faster execution of recognition processing.
- An image processing apparatus according to the present disclosure includes: a detection unit that detects the position, within an input image, of an object included in the input image; a generation unit that generates, based on the detected position, a recognition image of a predetermined resolution that includes the object; and a recognition unit that performs recognition processing for recognizing the object on the recognition image generated by the generation unit.
- FIG. 1 is a schematic diagram for explaining a first image processing method according to an existing technique
- FIG. 2 is a schematic diagram for explaining a second image processing method according to an existing technique
- FIG. 3 is a functional block diagram of an example for explaining functions of an image sensor for executing the second image processing method according to existing technology
- FIG. 4 is a sequence diagram of an example for explaining the second image processing method according to existing technology
- FIG. 5 is a sequence diagram of an example for explaining a third image processing method according to existing technology
- FIGS. 6A, 6B, and 6C are diagrams schematically showing the state inside the image sensor in the processing of each frame in the third image processing method according to the existing technology
- FIG. 7 is a schematic diagram for explaining motion prediction by an existing technique
- FIG. 8 is a diagram showing a configuration of an example of an imaging system applicable to each embodiment of the present disclosure
- FIG. 9 is a block diagram showing a configuration of an example of an imaging device applicable to each embodiment
- FIG. 10 is a block diagram showing an example configuration of an image sensor applicable to each embodiment of the present disclosure
- FIG. 11 is a perspective view showing an outline of an external configuration example of an image sensor according to each embodiment
- FIG. 12 is a functional block diagram of an example for explaining functions of the image sensor according to the first embodiment
- FIG. 13 is a functional block diagram of an example for explaining functions of a detection unit according to the first embodiment
- FIG. 14 is a diagram schematically showing an example of a position detection image according to the first embodiment
- FIG. 15 is a sequence diagram of an example for explaining processing according to the first embodiment
- FIG. 16 is a functional block diagram of an example for explaining functions of an image sensor according to the second embodiment
- FIG. 17 is a functional block diagram of an example for explaining functions of a prediction/detection unit according to the second embodiment
- FIG. 18 is a sequence diagram of an example for explaining processing according to the second embodiment
- FIG. 19 is a schematic diagram for explaining motion estimation according to the second embodiment
- FIG. 20 is a schematic diagram for explaining pipeline processing applicable to the second embodiment
- FIG. 21 is a functional block diagram of an example for explaining functions of an image sensor according to a third embodiment
- FIG. 22 is a functional block diagram of an example for explaining functions of an image sensor according to a fourth embodiment
- the present disclosure relates to an image sensor that captures a subject and acquires a captured image.
- the image sensor according to the present disclosure includes an imaging unit that captures an image, and a recognition unit that recognizes an object based on the captured image captured by the imaging unit.
- the position of the object to be recognized by the recognition unit on the captured image is detected based on the captured image captured by the imaging unit. Based on the detected position, an image including a region corresponding to the object is cut out from the captured image at a resolution that can be handled by the recognition unit, and is output to the recognition unit as an image for recognition.
- the present disclosure can shorten the delay time (latency) from when an image is captured and a captured image is acquired until a recognition result based on the captured image is obtained. Further, the position of the object to be recognized on the image is determined based on the detection image obtained by converting the captured image into an image having a resolution lower than that of the captured image. As a result, the load of object position detection processing can be reduced, and the delay time can be further shortened.
- FIG. 1 is a schematic diagram for explaining a first image processing method according to existing technology.
- an image sensor 1000 includes an imaging unit (not shown) and a recognition unit 1010 that uses a captured image 1100 captured by the imaging unit as an original image and recognizes an object included in the captured image 1100 .
- the recognition unit 1010 uses a DNN (Deep Neural Network) to recognize an object included in the captured image.
- the resolution (size) of an image that can be handled by the recognizer is generally limited to a given resolution (for example, 224 pixels × 224 pixels) from the viewpoint of cost and the like. Therefore, when an image to be recognized has a high resolution (for example, 4000 pixels × 3000 pixels), it is necessary to generate, based on that image, an image with a resolution that the recognizer can handle.
- the image sensor 1000 simply reduces the entire captured image 1100 to a resolution that the recognition unit 1010 can handle, and generates an input image 1101 for input to the recognition unit 1010 .
- each object included in the captured image 1100 is a low-resolution image, resulting in a low recognition rate for each object.
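As a rough sketch of this first method, the snippet below simply shrinks the whole frame to the recognizer's input size with nearest-neighbour sampling. The frame dimensions, the 224×224 target, and the function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def downscale_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W[, C]) image to (out_h, out_w)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

# Whole 4096x3072 frame shrunk to a 224x224 recognizer input: a small object
# ends up covered by only a handful of pixels, which lowers the recognition rate.
frame = np.zeros((3072, 4096, 3), dtype=np.uint8)       # placeholder capture
recognizer_input = downscale_nearest(frame, 224, 224)   # shape (224, 224, 3)
```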
- FIG. 2 is a schematic diagram for explaining the second image processing method according to the existing technology.
- an image sensor 1000 operates as a slave of an application processor (AP) 1001, and cuts out an input image from a captured image 1100 to be input to a recognition unit 1010 according to an instruction from the AP 1001.
- the image sensor 1000 passes the captured image 1100 captured by an imaging unit (not shown) to the AP 1001 (step S1).
- the AP 1001 detects an object included in the captured image 1100 received from the image sensor 1000, and returns information indicating the position of the detected object to the image sensor 1000 (step S2).
- the AP 1001 detects an object 1150 from the captured image 1100 and returns information indicating the position of this object 1150 within the captured image 1100 to the image sensor 1000 .
- the image sensor 1000 cuts out the object 1150 from the captured image 1100 based on the position information passed from the AP 1001 and inputs the cut out image of the object 1150 to the recognition unit 1010 .
- the recognition unit 1010 performs recognition processing on the image of the object 1150 cut out from the captured image 1100 .
- the recognition unit 1010 outputs the recognition result for the object 1150 to, for example, the AP 1001 (step S3).
- the image cut out from the captured image 1100 retains detailed information in the captured image 1100 . Since the recognition unit 1010 executes recognition processing on the image in which this detailed information is held, the recognition result 1151 can be output at a higher recognition rate.
- the delay time increases.
- FIG. 3 is a functional block diagram of an example for explaining functions of the image sensor 1000 for executing the second image processing method according to the existing technology.
- the image sensor 1000 includes a clipping section 1011 and a recognition section 1010 . Note that in the example of FIG. 3, an imaging unit that captures the captured image 1100N is omitted.
- a captured image 1100N of the Nth frame is input to the clipping unit 1011.
- the captured image 1100N is a 4k×3k image with a width of 4096 pixels and a height of 3072 pixels.
- the clipping unit 1011 clips an area including the object 1300 (a dog in this example) from the captured image 1100N according to the position information passed from the AP 1001 .
- the AP 1001 detects the object 1300 using the background image 1200 stored in the frame memory 1002 and the captured image 1100(N-3) of the (N-3)th frame. More specifically, the AP 1001 stores in the frame memory 1002 the captured image 1100(N-3) of the (N-3)th frame, three frames before the Nth frame, obtains the difference between this captured image 1100(N-3) and the background image 1200 stored in advance in the frame memory 1002, and detects the object 1300 based on this difference.
- the AP 1001 passes to the image sensor 1000 the positional information indicating the position of the object 1300 thus detected from the captured image 1100 (N-3) of the (N-3)th frame.
- the image sensor 1000 passes the position information passed from the AP 1001 to the clipping unit 1011 .
- the clipping unit 1011 cuts out a recognition image 1104 for recognition processing by the recognition unit 1010 from the captured image 1100N based on the position information detected from the captured image 1100(N-3) of the (N-3)th frame. That is, the recognition unit 1010 executes the recognition processing on the captured image 1100N of the Nth frame using information from the captured image 1100(N-3) of the (N-3)th frame, three frames earlier.
- FIG. 4 is an example sequence diagram for explaining the second image processing method according to the existing technology.
- the horizontal direction indicates the passage of time frame by frame.
- the upper side shows the processing in the image sensor 1000
- the lower side shows the processing in the AP 1001, respectively.
- a captured image 1100 (N-3) including the object 1300 is captured in the (N-3)th frame.
- the captured image 1100(N-3) is output from the image sensor 1000 (step S11) through image processing (step S10) in the clipping unit 1011, and passed to the AP 1001, for example.
- the AP 1001 executes object position detection processing on the captured image 1100 (N-3) passed from the image sensor 1000 (step S12).
- the AP 1001 stores the captured image 1100(N-3) in the frame memory 1002, obtains the difference from the background image 1200 stored in advance in the frame memory 1002, and executes background cancellation processing that removes the components of the background image 1200 from the captured image 1100(N-3) (step S13).
- the AP 1001 performs object position detection processing on the image from which the background image 1200 has been removed by this background cancellation processing.
- the AP 1001 passes the position information indicating the position of the detected object (for example, the object 1300) to the image sensor 1000 (step S14).
- the AP 1001 uses the captured image 1100(N-3) with a resolution of 4k×3k as it is to perform background cancellation processing and object position detection processing. These processes take a long time because the number of pixels in the target image is very large.
- the timing at which the object position detection process ends and the position information is output in step S14 is near the end of the (N-2)th frame.
- the image sensor 1000 calculates register setting values for the clipping unit 1011 to clip an image of an area including the object 1300 from the captured image 1100, based on the position information passed from the AP 1001 (step S15).
- since the position information is supplied from the AP 1001 in step S14 near the end of the (N-2)th frame, the calculation of the register setting value in step S15 is executed during the next, (N-1)th, frame.
- the image sensor 1000 acquires the captured image 1100N of the Nth frame in the next Nth frame.
- the register setting value calculated in the (N-1)th frame is reflected in the clipping unit 1011 in this Nth frame.
- the clipping unit 1011 performs clipping processing on the captured image 1100N of the N-th frame according to this register setting value, and clips the recognition image 1104 (step S16).
- the recognition unit 1010 performs recognition processing on the recognition image 1104 cut out from the captured image 1100N of the Nth frame (step S17), and outputs the recognition result to, for example, the AP 1001 (step S18).
- in this way, the captured image 1100(N-3) of the (N-3)th frame is delivered to the AP 1001 as it is, and the AP 1001 uses the delivered captured image 1100(N-3) at full resolution to perform background cancellation processing and object position detection processing. Therefore, these processes take a long time, and a significant delay occurs before the object position detection result is applied to the captured image 1100.
- a third image processing method also cuts out an image corresponding to an area including an object to be recognized from the captured image 1100 to generate an input image to be input to the recognition unit 1010.
- the image is clipped based on the recognition result of the recognition unit 1010 in the image sensor 1000 without using the AP 1001 .
- FIG. 5 is an example sequence diagram for explaining the third image processing method according to the existing technology. Note that the meaning of each part in FIG. 5 is the same as in FIG. 4. FIGS. 6A, 6B, and 6C are diagrams schematically showing states within the image sensor 1000 during processing of each frame in the sequence diagram of FIG. 5.
- the captured image 1100 (N-2) including the object 1300 is captured in the (N-2)th frame.
- the captured image 1100(N-2) is transferred to the recognition unit 1010 through image processing (step S30) in the clipping unit 1011, for example.
- the recognition unit 1010 performs recognition processing on the captured image 1100 (N-2) of the (N-2)th frame (step S31).
- the recognition unit 1010 recognizes and detects an area including the object 1300 through this recognition processing, and outputs information indicating this area as a recognition result 1151 (step S32).
- This recognition result 1151 is stored in the memory 1012 of the image sensor 1000, for example.
- the image sensor 1000 obtains, based on the recognition result 1151 stored in the memory 1012 (step S33), the object position in the captured image 1100(N-2), for example, and based on the position information indicating the obtained object position, calculates a register setting value for the clipping unit 1011 to clip an image of an area including the object 1300 from the captured image 1100 (step S34).
- the image sensor 1000 acquires a captured image 1100N of the Nth frame in the next Nth frame.
- the register setting value calculated in the (N-1)th frame is reflected in the clipping unit 1011 in this Nth frame.
- the clipping unit 1011 performs clipping processing on the captured image 1100N of the N-th frame according to this register setting value, and clips the recognition image 1104 (step S35).
- the recognition unit 1010 performs recognition processing on the recognition image 1104 cut out from the captured image 1100N of the Nth frame (step S36), and outputs the recognition result to, for example, the AP 1001 (step S37).
- in the third image processing method, the recognition image 1104 is cut out from the captured image 1100N of the Nth frame based on the result of recognition processing on the captured image 1100(N-2) of the (N-2)th frame, so a delay of two frames occurs. Furthermore, in the third image processing method, object position detection and object recognition are repeated alternately in this manner, so the throughput is halved. Even so, the delay time can be shortened compared to the second image processing method described above.
- in the second and third image processing methods, the clipping area is determined based on the captured image 1100(N-2) of the (N-2)th frame or the captured image 1100(N-3) of the (N-3)th frame. Therefore, when the object 1300 moves at high speed, the position of the object 1300 in the captured image 1100N of the Nth frame, which is temporally later than the (N-2)th or (N-3)th frame, may be significantly different from its position at the time the clipping area was determined. It is therefore preferable to be able to predict the motion of the object 1300 using information from frames that temporally precede the Nth frame, and to predict the position of the object 1300 in the captured image 1100N of the Nth frame.
- FIG. 7 is a schematic diagram for explaining motion prediction by existing technology.
- the example of FIG. 7 schematically shows how the captured images 1100(N-3) to 1100N of the (N-3)th frame to the Nth frame are superimposed.
- as indicated by the trajectory 1401 in the figure, the object 1300 starts from the lower left corner of the captured images 1100(N-3) to 1100N, moves along a strongly curved path from the (N-3)th frame to the Nth frame, and reaches the lower right corner.
- in the (N-1)th frame, the register setting values to be set in the clipping unit 1011 are calculated. Therefore, the captured image 1100(N-1) of the (N-1)th frame immediately preceding the Nth frame is not used for motion prediction of the object 1300. The movement of the object 1300 is therefore predicted based on, for example, the captured images 1100(N-3) and 1100(N-2) of the (N-3)th and (N-2)th frames, which temporally precede the Nth frame.
- in this case, as shown by the trajectory 1400 in FIG. 7, there is a possibility that a trajectory significantly different from the actual trajectory 1401 is predicted.
- the object 1300 is predicted to be positioned near the upper right of the captured image 1100N of the Nth frame at the time of the Nth frame, which is significantly different from the actual position (lower right corner).
- the recognition unit 1010 cannot recognize the object 1300 correctly.
- FIG. 8 is a diagram showing an example configuration of an imaging system applicable to each embodiment of the present disclosure.
- an imaging system 1 includes an imaging device 10 and an information processing device 11 that are communicably connected to each other via a network 2 .
- in FIG. 8, the imaging system 1 is shown as including one imaging device 10, but the imaging system 1 can include a plurality of imaging devices 10 each communicably connected to the information processing device 11 via the network 2.
- the imaging device 10 executes imaging and recognition processing according to the present disclosure, and transmits recognition results based on the captured image to the information processing device 11 via the network 2 together with the captured image.
- the information processing device 11 is, for example, a server, receives captured images and recognition results transmitted from the imaging device 10, and stores and displays the received captured images and recognition results.
- the imaging system 1 configured in this way can be applied to, for example, a surveillance system.
- in this case, the imaging device 10 is installed at a predetermined position with a fixed imaging range. However, the imaging system 1 is not limited to this example; it can be applied to other uses, and the imaging device 10 can also be used alone.
- FIG. 9 is a block diagram showing an example configuration of the imaging device 10 applicable to each embodiment.
- the imaging device 10 includes an image sensor 100, an AP (application processor) 101, a CPU (Central Processing Unit) 102, a ROM (Read Only Memory) 103, a RAM (Random Access Memory) 104, a storage device 105, and a communication I/F 106 , and these units are communicably connected to each other via a bus 110 .
- the storage device 105 is a nonvolatile storage medium such as a hard disk drive or flash memory, and stores programs and various data.
- the CPU 102 operates according to programs stored in the ROM 103 and the storage device 105 using the RAM 104 as a work memory, and controls the overall operation of the imaging apparatus 10 .
- the communication I/F 106 is an interface for communicating with the outside. Communication I/F 106 performs communication via network 2, for example. Not limited to this, the communication I/F 106 may be directly connected to an external device via a USB (Universal Serial Bus) or the like. Communication by communication I/F 106 may be either wired communication or wireless communication.
- the image sensor 100 is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor configured as one chip; it receives incident light from an optical section, performs photoelectric conversion, and outputs a captured image corresponding to the incident light.
- the image sensor 100 executes recognition processing for recognizing objects included in the captured image.
- AP 101 executes applications for image sensor 100 .
- AP 101 may be integrated with CPU 102 .
- FIG. 10 is a block diagram showing an example configuration of the image sensor 100 applicable to each embodiment of the present disclosure.
- image sensor 100 has imaging block 20 and signal processing block 30 .
- the imaging block 20 and the signal processing block 30 are electrically connected by connection lines (internal buses) CL1, CL2 and CL3.
- the imaging block 20 has an imaging unit 21, an imaging processing unit 22, an output control unit 23, an output I/F 24, and an imaging control unit 25, and captures images.
- the imaging unit 21 is configured by arranging a plurality of pixels two-dimensionally.
- the imaging unit 21 is driven by the imaging processing unit 22 to capture an image. That is, the light from the optical section is incident on the imaging section 21 .
- the imaging unit 21 receives incident light from the optical unit in each pixel, performs photoelectric conversion, and outputs an analog image signal corresponding to the incident light.
- the size (resolution) of the image (signal) output by the imaging unit 21 is, for example, 4096 pixels wide × 3072 pixels high. This image of width 4096 pixels × height 3072 pixels is referred to as a 4k×3k image as appropriate.
- the size of the captured image output by the imaging unit 21 is not limited to 4096 pixels wide × 3072 pixels high.
- under the control of the imaging control unit 25, the imaging processing unit 22 performs imaging processing related to the imaging of images in the imaging unit 21, such as driving the imaging unit 21, AD (Analog to Digital) conversion of the analog image signals output by the imaging unit 21, and imaging signal processing.
- the imaging processing unit 22 outputs a digital image signal obtained by AD conversion or the like of the analog image signal output by the imaging unit 21 as a captured image.
- examples of the imaging signal processing include processing that obtains the brightness of each predetermined small area of the image output by the imaging unit 21, for example by calculating the average of the pixel values in each small area, processing that converts the image output by the imaging unit 21 into an HDR (High Dynamic Range) image, defect correction, development, and the like.
- the captured image output by the imaging processing unit 22 is supplied to the output control unit 23 and also supplied to the image compression unit 35 of the signal processing block 30 via the connection line CL2.
- the output control unit 23 is supplied with the captured image from the imaging processing unit 22, and is also supplied with the signal processing result of signal processing using the captured image and the like from the signal processing block 30 via the connection line CL3.
- the output control unit 23 performs output control to selectively output the captured image from the imaging processing unit 22 and the signal processing result from the signal processing block 30 to the outside from (one) output I/F 24 . That is, the output control unit 23 selects the captured image from the imaging processing unit 22 or the signal processing result from the signal processing block 30 and supplies it to the output I/F 24 .
- the output I/F 24 is an I/F that outputs the captured image supplied from the output control unit 23 and the signal processing result to the outside.
- as the output I/F 24, a relatively high-speed parallel I/F such as MIPI (Mobile Industry Processor Interface) can be adopted, for example.
- the output I/F 24 outputs the captured image from the imaging processing unit 22 or the signal processing result from the signal processing block 30 to the outside according to the output control of the output control unit 23. Therefore, when, for example, only the signal processing result from the signal processing block 30 is required externally and the captured image itself is not required, only the signal processing result can be output from the output I/F 24, which reduces the amount of data output to the outside.
- also, by performing in the image sensor 100 the signal processing that yields the signal processing result required externally and outputting that result from the output I/F 24, signal processing outside the sensor becomes unnecessary, and the load on external blocks can be reduced.
- the imaging control unit 25 has a communication I/F 26 and a register group 27.
- the communication I/F 26 is a first communication I/F, for example a serial communication I/F such as I2C (Inter-Integrated Circuit), and exchanges necessary information with the outside.
- the register group 27 has a plurality of registers, and stores imaging information related to imaging by the imaging unit 21 and various other information.
- the register group 27 stores imaging information received from the outside through the communication I/F 26 and the result of imaging signal processing in the imaging processing unit 22 (for example, the brightness of each small area of the captured image).
- the imaging control unit 25 controls the imaging processing unit 22 according to the imaging information stored in the register group 27 , thereby controlling the imaging of the image by the imaging unit 21 .
- the imaging information stored in the register group 27 includes, for example, information representing the ISO sensitivity (the analog gain at the time of AD conversion in the imaging processing unit 22), the exposure time (shutter speed), the frame rate, the focus, the shooting mode, the cropping range, and the like.
- Shooting modes include, for example, a manual mode in which the exposure time, frame rate, etc. are manually set, and an automatic mode in which they are automatically set according to the scene.
- the automatic mode includes modes corresponding to various shooting scenes such as night scenes and people's faces.
- the clipping range represents the range to be clipped from the image output by the imaging unit 21 when the imaging processing unit 22 clips a part of the image output by the imaging unit 21 and outputs it as a captured image.
- by specifying the cutout range, it is possible, for example, to cut out only the range in which a person appears from the image output by the imaging unit 21.
- as image clipping methods, there are a method of clipping the image after it has been output by the imaging unit 21, and a method of reading out only the image (signal) in the clipping range from the imaging unit 21.
- the register group 27 can store imaging information, the results of imaging signal processing in the imaging processing unit 22, and output control information related to output control in the output control unit 23.
- the output control unit 23 can perform output control for selectively outputting the captured image and the signal processing result according to the output control information stored in the register group 27 .
- the imaging control unit 25 and the CPU 31 of the signal processing block 30 are connected via the connection line CL1, and the CPU 31 can read and write information in the register group 27. That is, in the image sensor 100, reading and writing of information in the register group 27 can be performed not only from the communication I/F 26 but also from the CPU 31.
- the signal processing block 30 has a CPU (Central Processing Unit) 31, a DSP (Digital Signal Processor) 32, a memory 33, a communication I/F 34, an image compression section 35, and an input I/F 36, and performs predetermined signal processing using the captured image and the like.
- the CPU 31 through the input I/F 36 that constitute the signal processing block 30 are mutually connected via a bus, and can exchange information as necessary.
- the CPU 31 executes a program stored in the memory 33 to control the signal processing block 30, read/write information from/to the register group 27 of the imaging control unit 25 via the connection line CL1, and perform various other processes.
- by executing a program, the CPU 31 functions as an imaging information calculation unit that calculates new imaging information using the signal processing results obtained by the signal processing in the DSP 32, and feeds the new imaging information back to the register group 27 of the imaging control section 25 via the connection line CL1 to be stored there. As a result, the CPU 31 can control the imaging by the imaging section 21 and the imaging signal processing by the imaging processing section 22 according to the signal processing result of the captured image.
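The feedback loop described here (DSP result → CPU 31 → register group 27 → next frame's capture) can be pictured roughly as below. The register name, the target brightness, and the proportional update rule are purely illustrative assumptions, not the patent's actual control law.

```python
# Hypothetical sketch of the imaging-information feedback loop: the CPU uses a
# brightness statistic from the DSP's signal processing to nudge the exposure
# value that the imaging control unit will apply to a later frame.
TARGET_BRIGHTNESS = 118.0                  # assumed mid-grey target on an 8-bit scale

registers = {"exposure_time_us": 10_000}   # stands in for part of register group 27

def update_exposure(mean_brightness: float) -> None:
    """Proportional correction of exposure based on the measured frame brightness."""
    gain = TARGET_BRIGHTNESS / max(mean_brightness, 1.0)
    gain = min(max(gain, 0.5), 2.0)        # limit the per-frame correction
    registers["exposure_time_us"] = int(registers["exposure_time_us"] * gain)

update_exposure(mean_brightness=60.0)      # under-exposed frame -> longer exposure
```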
- the imaging information stored in the register group 27 by the CPU 31 can be provided (output) to the outside from the communication I/F 26 .
- focus information in the imaging information stored in the register group 27 can be provided from the communication I/F 26 to a focus driver (not shown) that controls focus.
- by executing a program stored in the memory 33, the DSP 32 functions as a signal processing unit that performs signal processing using the captured image supplied from the imaging processing unit 22 to the signal processing block 30 via the connection line CL2 and information received from the outside by the input I/F 36.
- the memory 33 is composed of SRAM (Static Random Access Memory), DRAM (Dynamic RAM), etc., and stores data necessary for the processing of the signal processing block 30.
- the memory 33 stores the program received from the outside via the communication I/F 34, the image compressed by the image compression unit 35, the captured image used in the signal processing in the DSP 32, the signal processing result of the signal processing performed in the DSP 32, information received by the input I/F 36, and the like.
- the communication I/F 34 is a second communication I/F, for example a serial communication I/F such as SPI (Serial Peripheral Interface), and exchanges necessary information, such as the programs executed by the CPU 31 and the DSP 32, with the outside.
- the communication I/F 34 downloads a program executed by the CPU 31 or DSP 32 from the outside, supplies the program to the memory 33, and stores the program. Therefore, various processes can be executed by the CPU 31 and the DSP 32 according to the programs downloaded by the communication I/F 34 .
- the communication I/F 34 can exchange arbitrary data in addition to programs with the outside.
- the communication I/F 34 can output a signal processing result obtained by signal processing in the DSP 32 to the outside.
- Communication I/F 34 outputs information according to instructions from CPU 31 to an external device, thereby controlling the external device according to instructions from CPU 31 .
- the signal processing result obtained by the signal processing in the DSP 32 can be output to the outside from the communication I/F 34, and can also be written to the register group 27 of the imaging control section 25 by the CPU 31.
- the signal processing results written in the register group 27 can be output to the outside from the communication I/F 26 .
- a captured image is supplied to the image compression unit 35 from the imaging processing unit 22 via the connection line CL2.
- the image compression unit 35 performs compression processing for compressing the captured image as necessary, and generates a compressed image having a smaller amount of data than the captured image.
- the compressed image generated by the image compression unit 35 is supplied to the memory 33 via the bus and stored therein.
- the image compression unit 35 can also output the supplied captured image without compression.
- the signal processing in the DSP 32 can be performed using the captured image itself, or can be performed using the compressed image generated from the captured image by the image compression unit 35 . Since the compressed image has a smaller amount of data than the captured image, it is possible to reduce the signal processing load on the DSP 32 and save the storage capacity of the memory 33 for storing the compressed image.
- as the compression processing in the image compression unit 35, for example, when the signal processing in the DSP 32 is performed on luminance and the captured image is an RGB image, a YUV conversion that converts the RGB image into a YUV image can be performed.
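A rough illustration of such a conversion is given below: an RGB frame is turned into Y, U, and V planes with BT.601-style coefficients, and the luminance plane alone could then feed luminance-only signal processing. The coefficient set and the function name are assumptions; the patent does not specify which YUV variant is used.

```python
import numpy as np

def rgb_to_yuv_bt601(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image to YUV using approximate BT.601 weights."""
    rgb = rgb.astype(np.float32)
    m = np.array([[ 0.299,  0.587,  0.114],    # Y
                  [-0.169, -0.331,  0.500],    # U (Cb)
                  [ 0.500, -0.419, -0.081]],   # V (Cr)
                 dtype=np.float32)
    yuv = rgb @ m.T
    yuv[..., 1:] += 128.0                       # centre chroma for 8-bit storage
    return yuv

frame_rgb = np.zeros((3072, 4096, 3), dtype=np.uint8)    # placeholder capture
y_plane = rgb_to_yuv_bt601(frame_rgb)[..., 0]            # luminance plane for the DSP
```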
- the image compression unit 35 can be realized by software, or can be realized by dedicated hardware.
- the input I/F 36 is an I/F that receives information from the outside.
- the input I/F 36 receives, for example, the output of the external sensor (external sensor output) from an external sensor, supplies it to the memory 33 via the bus, and stores it.
- as the input I/F 36, a parallel I/F such as MIPI (Mobile Industry Processor Interface) can be adopted, like the output I/F 24.
- as the external sensor, for example, a distance sensor that senses information about distance can be adopted. Further, as the external sensor, a sensor that senses light and outputs an image corresponding to the light, that is, an image sensor other than the image sensor 100, can be employed.
- in the DSP 32, signal processing can be performed using the external sensor output that is received by the input I/F 36 from the above-described external sensor and stored in the memory 33.
- as described above, in the image sensor 100, the DSP 32 performs signal processing using the captured image obtained by imaging in the imaging unit 21, and the signal processing result of that signal processing and the captured image are selectively output from the output I/F 24. Therefore, an imaging device that outputs the information required by the user can be configured in a small size.
- when the signal processing of the DSP 32 is not performed in the image sensor 100, the image sensor 100 does not output a signal processing result but outputs the captured image.
- that is, if the image sensor 100 is configured as an image sensor that merely outputs a captured image, the image sensor 100 can be configured only with the imaging block 20, without the output control section 23.
- FIG. 11 is a perspective view showing an outline of an external configuration example of the image sensor 100 according to each embodiment.
- the image sensor 100 can be configured as a one-chip semiconductor device having a laminated structure in which a plurality of dies are laminated.
- the image sensor 100 is configured by stacking two dies 51 and 52 .
- the imaging unit 21 is mounted on the upper die 51, and the imaging processing unit 22, output control unit 23, output I/F 24, imaging control unit 25, CPU 31, DSP 32, memory 33, communication I/F 34, image compression unit 35, and input I/F 36 are mounted on the lower die 52.
- the upper die 51 and the lower die 52 are electrically connected, for example, by forming a through hole that penetrates the die 51 and reaches the die 52, or by Cu-Cu bonding that directly connects Cu wiring exposed on the lower surface side of the die 51 with Cu wiring exposed on the upper surface side of the die 52.
- a column-parallel AD method or an area AD method can be adopted as a method for performing AD conversion of the image signal output by the imaging unit 21.
- in the column-parallel AD method, an ADC (AD Converter) is provided for each column of pixels, and the ADC of each column is in charge of AD conversion of the pixel signals of the pixels in that column, so that AD conversion of the image signals of the pixels in each column of one row is performed in parallel.
- part of the imaging processing unit 22 that performs AD conversion in the column-parallel AD method may be mounted on the upper die 51 .
- in the area AD method, the pixels forming the imaging unit 21 are divided into a plurality of blocks, and an ADC is provided for each block. The ADC of each block is in charge of AD conversion of the pixel signals of the pixels of that block, so AD conversion of the image signals of the pixels of the plurality of blocks is performed in parallel.
- in the area AD method, readout and AD conversion of the image signal can be performed on only the necessary pixels among the pixels forming the imaging unit 21, using a block as the minimum unit.
- the image sensor 100 can be composed of a single die if the area of the image sensor 100 is allowed to be large.
- in the example described above, the two dies 51 and 52 are stacked to form the one-chip image sensor 100, but the one-chip image sensor 100 can also be formed by stacking three or more dies.
- the memory 33 of FIG. 11 can be mounted on another die.
- FIG. 12 is an example functional block diagram for explaining the functions of the image sensor 100 according to the first embodiment.
- the image sensor 100 includes a clipping unit 200, a detection unit 201, a background memory 202, and a recognition unit 204.
- the clipping unit 200, the detection unit 201, the background memory 202, and the recognition unit 204 are realized by, for example, the DSP 32 in the signal processing block 30 shown in FIG. 10.
- imaging is performed in the imaging block 20 (see FIG. 10), which is not shown in FIG. 12, and the imaging block 20 outputs the captured image 1100N of the Nth frame.
- the captured image 1100N is a 4k×3k image of width 4096 pixels × height 3072 pixels.
- a captured image 1100N output from the imaging block 20 is supplied to the clipping unit 200 and the detection unit 201.
- the detection unit 201 detects the position of the object 1300 included in the captured image 1100N, and passes position information indicating the detected position to the clipping unit 200. More specifically, the detection unit 201 generates, from the captured image 1100N, a detection image obtained by lowering the resolution of the captured image 1100N, and performs position detection of the object 1300 on this detection image (details will be described later).
- the background memory 202 stores in advance a detection background image obtained by changing the background image corresponding to the captured image 1100N to an image with the same resolution as the detection image.
- the detection unit 201 obtains the difference between the image obtained by lowering the resolution of the captured image 1100N and the background image for detection, and uses this difference as the image for detection.
- as the background image, for example, an image can be captured in a default state in which there is no person in the imaging range, and the captured image obtained in that state can be applied.
- a background image can also be captured according to the user's operation on the imaging device 10 .
- the clipping unit 200 clips an image including the object 1300 from the captured image 1100N in a predetermined size that can be handled by the recognition unit 204 based on the position information passed from the detection unit 201, and generates a recognition image 1104a. That is, the clipping unit 200 functions as a generation unit that generates a recognition image of a predetermined resolution including the object 1300 from the input image based on the position detected by the detection unit 201 .
- the predetermined size that can be handled by the recognition unit 204 is assumed to be 224 pixels wide by 224 pixels high.
- the recognition image 1104a is generated by cutting out an image including the object 1300 from the captured image 1100N in a size of 224 pixels in width × 224 pixels in height. That is, the recognition image 1104a is an image having a resolution of 224 pixels wide by 224 pixels high.
- when the object 1300 does not fit within the predetermined size, the clipping unit 200 can clip an image including the object 1300 from the captured image 1100N and reduce it to a size of 224 pixels wide by 224 pixels high to generate the recognition image 1104a.
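A minimal sketch of this clipping step is shown below: it centres a 224×224 window on the detected position, clamps it to the frame, and falls back to cropping a larger window and shrinking it when the object region exceeds 224×224. The function name and the clamping/reduction strategy are assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def clip_recognition_image(frame: np.ndarray, cx: int, cy: int,
                           obj_w: int, obj_h: int, size: int = 224) -> np.ndarray:
    """Cut a size x size recognition image centred on (cx, cy) from frame (H, W[, C])."""
    h, w = frame.shape[:2]
    crop = max(size, obj_w, obj_h)              # enlarge the window if the object is bigger
    x0 = min(max(cx - crop // 2, 0), w - crop)  # clamp the window inside the frame
    y0 = min(max(cy - crop // 2, 0), h - crop)
    patch = frame[y0:y0 + crop, x0:x0 + crop]
    if crop == size:
        return patch
    rows = np.arange(size) * crop // size       # nearest-neighbour reduction to size x size
    cols = np.arange(size) * crop // size
    return patch[rows[:, None], cols]

frame = np.zeros((3072, 4096, 3), dtype=np.uint8)          # placeholder 4k x 3k capture
rec_img = clip_recognition_image(frame, cx=1800, cy=900, obj_w=300, obj_h=260)
print(rec_img.shape)                                        # (224, 224, 3)
```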
- the clipping unit 200 may reduce the entire captured image 1100N to the predetermined size without clipping from the captured image 1100N to generate the recognition image 1104b. In this case, the clipping unit 200 can add the position information passed from the detection unit 201 to the recognition image 1104b.
- the clipping unit 200 outputs the recognition image 1104a out of the recognition images 1104a and 1104b.
- the recognition image 1104 a cut out from the captured image 1100 N by the cut-out unit 200 is passed to the recognition unit 204 .
- the clipping unit 200 can pass the position information passed from the detection unit 201 to the recognition unit 204 together with the recognition image 1104a.
- the recognition unit 204 executes recognition processing for recognizing the object 1300 included in the recognition image 1104 based on a model learned by machine learning, for example.
- the recognition unit 204 can apply, for example, a DNN (Deep Neural Network) as a learning model for machine learning.
- the recognition result of the object 1300 by the recognition unit 204 is passed to the AP 101, for example.
- the recognition result can include information indicating the type of the object 1300 and the degree of recognition of the object 1300, for example.
- the clipping unit 200 can pass the position information passed from the detection unit 201 together with the recognition image 1104a.
- the recognition unit 204 can acquire a more highly accurate recognition result by executing recognition processing based on this position information.
- FIG. 13 is an example functional block diagram for explaining the functions of the detection unit 201 according to the first embodiment.
- the detection unit 201 includes a position detection image generation unit 2010 , a subtractor 2012 and an object position detection unit 2013 .
- the position detection image generation unit 2010 generates a low resolution image 300 by lowering the resolution of the captured image 1100N supplied from the imaging block 20 .
- the low-resolution image 300 generated by the position detection image generation unit 2010 has a resolution (size) of 16 pixels wide × 16 pixels high.
- for example, the position detection image generation unit 2010 divides the captured image 1100N into 16 × 16 = 256 blocks, obtains a representative value (for example, the average of the pixel values in the block) for each block, and generates the low-resolution image 300 having a resolution (size) of 16 pixels wide by 16 pixels high by using the representative values obtained for each of the 256 blocks as pixel values.
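Under the assumption that the representative value of each block is its mean, this downsampling can be sketched as below; a 4096×3072 frame divides evenly into a 16×16 grid of 256×192-pixel blocks. The function name and the use of the mean are illustrative assumptions.

```python
import numpy as np

def block_average(gray: np.ndarray, out_h: int = 16, out_w: int = 16) -> np.ndarray:
    """Average an (H, W) image over an out_h x out_w grid of equal blocks."""
    h, w = gray.shape
    bh, bw = h // out_h, w // out_w             # block size, e.g. 192 x 256 for 3072 x 4096
    blocks = gray[:out_h * bh, :out_w * bw].reshape(out_h, bh, out_w, bw)
    return blocks.mean(axis=(1, 3))             # one representative value per block

frame_gray = np.zeros((3072, 4096), dtype=np.uint8)   # placeholder luminance frame
low_res_300 = block_average(frame_gray)                # 16 x 16 position-detection input
```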
- background cancellation processing is performed on the low-resolution image 300 generated by the position detection image generation unit 2010, using the subtractor 2012 and the low-resolution background image 301 stored in the background memory 202.
- the low-resolution image 300 is input to the minuend input terminal of the subtractor 2012.
- the low-resolution background image 301 stored in the background memory 202 is input to the subtrahend input terminal of the subtractor 2012.
- the subtractor 2012 generates, as the position detection image 302, the absolute value of the difference between the low-resolution image 300 input to the minuend input terminal and the low-resolution background image 301 input to the subtrahend input terminal.
- FIG. 14 is a diagram schematically showing an example of the position detection image 302 according to the first embodiment.
- section (a) shows an example of a position detection image 302 as an image.
- Section (b) shows the image of section (a) using the pixel value of each pixel. Also, in the example of section (b) of FIG. 14, the pixel values are shown assuming that the pixel bit depth is 8 bits.
- in the position detection image 302, the background area of the low-resolution image 300 (the area excluding the low-resolution object area 303 corresponding to the object 1300) and the corresponding area of the low-resolution background image 301 have matching pixel values, so, as shown in section (b) of FIG. 14, the pixel values of the background area are [0], while the pixel values in the low-resolution object area 303 take values different from [0].
- the position detection image 302 is input to the object position detection unit 2013 .
- the object position detection unit 2013 detects the position of the low-resolution object area 303 within the position detection image 302 based on the luminance value of each pixel of the position detection image 302 .
- for example, the object position detection unit 2013 performs threshold determination on each pixel of the position detection image 302, determines an area of pixels with a pixel value of [1] or more to be the low-resolution object area 303, and obtains its position.
- the object position detection unit 2013 can determine the position of the object 1300 in the captured image 1100N by converting the position of each pixel included in the low-resolution object region 303 into the position of the corresponding block obtained by dividing the captured image 1100N (for example, the position of the representative pixel of the block). Also, the object position detection unit 2013 can obtain a plurality of object positions based on the luminance value of each pixel of the position detection image 302.
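A compact sketch of the subtractor 2012 and the object position detection unit 2013 could look like this: the absolute difference with the stored low-resolution background yields the position detection image, a threshold of [1] marks object pixels, and each marked low-resolution pixel is mapped back to its block in the full-resolution frame. The bounding-box output, the threshold handling, and the names are illustrative assumptions.

```python
import numpy as np

def detect_object_position(low_res: np.ndarray, low_res_bg: np.ndarray,
                           full_h: int = 3072, full_w: int = 4096):
    """Return a bounding box (x0, y0, x1, y1) in full-resolution pixel coordinates,
    or None when no pixel of the position detection image exceeds the threshold."""
    # Position detection image 302: absolute difference with the background.
    diff = np.abs(low_res.astype(np.int16) - low_res_bg.astype(np.int16))
    ys, xs = np.nonzero(diff >= 1)              # pixels with value [1] or more
    if ys.size == 0:
        return None
    bh, bw = full_h // low_res.shape[0], full_w // low_res.shape[1]
    # Convert low-resolution pixel positions back to full-resolution block positions.
    return (xs.min() * bw, ys.min() * bh, (xs.max() + 1) * bw, (ys.max() + 1) * bh)

low_res = np.zeros((16, 16), dtype=np.uint8)
low_res[5:8, 9:12] = 40                          # toy low-resolution object region 303
bbox = detect_object_position(low_res, np.zeros((16, 16), dtype=np.uint8))
print(bbox)                                      # (2304, 960, 3072, 1536)
```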
- the position information indicating the position of the object 1300 in the captured image 1100N detected by the object position detection unit 2013 is passed to the clipping unit 200.
- FIG. 15 is an example sequence diagram for explaining processing according to the first embodiment. Note that the meaning of each part in FIG. 15 is the same as in FIG. 4.
- a captured image 1100 (N-1) including the object 1300 is captured in the (N-1)th frame.
- the captured image 1100(N-1) is passed to the detection unit 201 through image processing (step S100) in the clipping unit 200, for example, and the position of the object 1300 in the captured image 1100(N-1) is detected (step S101).
- in the position detection in step S101, as described above, the difference between the low-resolution image 300 and the low-resolution background image 301, each having a size of 16 pixels × 16 pixels, is obtained by the background cancellation processing 320, and the position detection is performed on this difference.
- the image sensor 100 calculates a register setting value for cutting out an image of an area including the object 1300 from the captured image 1100, based on the position information indicating the position of the object 1300 in the captured image 1100(N-1) detected by the object position detection processing in step S101 (step S102).
- the object position detection processing in step S101 uses only a small number of pixels, so the processing is relatively light and can be completed quickly.
- the register setting value calculated in step S102 is reflected in the clipping unit 200 in the next, Nth, frame (step S103).
- the clipping unit 200 performs clipping processing on the captured image 1100N (not shown) of the Nth frame according to the register setting value (step S104) to generate the recognition image 1104a.
- This recognition image 1104 a is passed to the recognition unit 204 .
- the recognition unit 204 performs recognition processing on the object 1300 based on the passed recognition image 1104a (step S105), and outputs the recognition result to, for example, the AP 101 (step S106).
- as described above, in the first embodiment, the recognition image 1104a used for recognition processing by the recognition unit 204 is cut out and generated based on the position of the object 1300 detected using the low-resolution image 300 with a small pixel count of 16 × 16 pixels. Therefore, the processing up to the calculation of the register setting value in step S102 can be completed within the period of the (N-1)th frame, and the latency until the clipping position is reflected in the captured image 1100N of the Nth frame can be reduced to one frame, which is shorter than in the existing technology. In addition, since the object position detection processing and the recognition processing can be executed in separate pipeline stages, the processing can be performed without lowering the throughput relative to the existing technology.
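Putting the first embodiment's per-frame flow together, a minimal control-loop sketch might look like the following; it reuses the block-average, detection, and clipping sketches above, and the helper names and loop structure are assumptions rather than the patent's implementation. The position detected on the low-resolution copy of frame N-1 sets the crop applied to frame N.

```python
# Hypothetical per-frame loop for the first embodiment: detection on the 16x16
# low-resolution image of frame k determines the 224x224 crop applied to frame k+1,
# so position detection and recognition can run as separate pipeline stages.
pending_bbox = None                      # register setting carried to the next frame

def process_frame(frame_gray, frame_rgb, low_res_bg, recognizer):
    global pending_bbox
    if pending_bbox is not None:         # reflect the previous frame's register setting
        x0, y0, x1, y1 = pending_bbox
        rec_img = clip_recognition_image(frame_rgb, (x0 + x1) // 2, (y0 + y1) // 2,
                                         x1 - x0, y1 - y0)
        recognizer(rec_img)              # recognition on the current frame's crop
    low_res = block_average(frame_gray)                          # detection image
    pending_bbox = detect_object_position(low_res, low_res_bg)   # used on the next frame
```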
- the second embodiment is an example in which low-resolution images based on a plurality of captured images, such as the captured images 1100(N-2) and 1100(N-1) of the (N-2)th and (N-1)th frames, are used to predict the position of the object 1300 in the captured image 1100N of the Nth frame.
- FIG. 16 is an example functional block diagram for explaining the functions of the image sensor according to the second embodiment.
- the image sensor 100 shown in FIG. 16 has a prediction/detection unit 210 instead of the detection unit 201, and has a memory 211 capable of holding at least two frames of past position information.
- memory 211 can also hold information other than past position information (for example, past low-resolution images).
- memory 211 includes position information memory 2110 for holding position information and background memory 2111 for holding background image 311 .
- imaging is performed in the imaging block 20 (see FIG. 10), which is not shown in FIG. 16, and the imaging block 20 outputs the captured image 1100(N-1) of the (N-1)th frame, which is a 4k×3k image.
- the captured image 1100(N-1) output from the imaging block 20 is supplied to the clipping section 200 and the prediction/detection section 210.
- FIG. 17 is an example functional block diagram for explaining the function of the prediction/detection unit 210 according to the second embodiment.
- prediction/detection section 210 includes position detection image generation section 2010 , object position detection section 2013 , position information memory 2110 , background memory 2111 , and prediction section 2100 .
- the position detection image generation unit 2010 and the object position detection unit 2013 are the same as the position detection image generation unit 2010 and the object position detection unit 2013 described with reference to FIG. 13, so their description is omitted here.
- the prediction/detection unit 210 detects the low-resolution object region 303 corresponding to the object 1300, using the background image stored in the background memory 2111 and the low-resolution image that the position detection image generation unit 2010 generates from the captured image 1100(N-1).
- the position information (N-2) is position information indicating the position of the object 1300, generated from the captured image 1100(N-2) of the (N-2)th frame as described in the first embodiment.
- the position information (N-1) is position information indicating the position of the object 1300, generated from the captured image 1100(N-1) of the (N-1)th frame.
- the position information memory 2110 included in the memory 211 can store at least two frames of position information indicating the position of the object 1300 in the past.
- the position detection image generation unit 2010 generates a low-resolution image 310 by lowering the resolution of the captured image 1100(N-1), including the object 1300 (not shown), supplied from the imaging block 20, and passes it to the object position detection unit 2013.
- An object position detection unit 2013 detects a position corresponding to the object 1300 .
- position information memory 2110 holds position information (N ⁇ 1) passed from object position detection section 2013 .
- the prediction unit 2100 is passed the position information (N-1) of the (N-1)th frame and the position information (N-2) of the preceding (N-2)th frame.
- the prediction unit 2100 predicts the position of the object 1300 in the captured image 1100N of the future, Nth, frame based on the position information (N-1) passed from the object position detection unit 2013 and the position information (N-2) stored in the memory 211.
- the prediction unit 2100 can predict the position of the object 1300 in the captured image 1100N of the Nth frame, for example, by linear calculation based on two pieces of position information (N-1) and position information (N-2).
- The memory 211 may further store low-resolution images of past frames, and three or more pieces of position information may be used to predict the position. Furthermore, those low-resolution images can also be used to determine whether the object 1300 detected in each frame is the same object. The prediction unit 2100 is not limited to this, and can also predict the position using a model learned by machine learning.
- The prediction unit 2100 outputs position information (N), which indicates the predicted position of the object 1300 in the captured image 1100N of the Nth frame, to the clipping unit 200, for example.
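- The linear prediction step can be sketched as follows, assuming the position information is held as (x, y) coordinates for the two most recent frames; the two-entry history buffer and the constant-velocity extrapolation are illustrative assumptions, and a model learned by machine learning could equally be substituted, as noted above.

```python
from collections import deque

class LinearPositionPredictor:
    """Predict the object position in the next frame from the two most
    recent detections, assuming (x, y) positions and constant velocity."""

    def __init__(self):
        # Holds position information for at least two frames, playing the
        # role of the position information memory described above.
        self.history = deque(maxlen=2)

    def update(self, position):
        """Store the position information detected for the latest frame."""
        self.history.append(position)

    def predict_next(self):
        """Return the predicted position for the next frame, or None if
        fewer than two past positions are available."""
        if len(self.history) < 2:
            return None
        (x2, y2), (x1, y1) = self.history   # (N-2) position, then (N-1)
        # Constant-velocity (linear) extrapolation to frame N.
        return 2 * x1 - x2, 2 * y1 - y2

predictor = LinearPositionPredictor()
predictor.update((100.0, 200.0))   # position information (N-2)
predictor.update((110.0, 205.0))   # position information (N-1)
print(predictor.predict_next())    # -> (120.0, 210.0)
```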
- Based on the predicted position information passed from the prediction/detection unit 210, the clipping unit 200 cuts out, from the captured image 1100(N-1), the image at the position where the object 1300 is predicted to be included in the captured image 1100N of the Nth frame, in a predetermined size (for example, 224 pixels wide × 224 pixels high) that can be handled by the recognition unit 204, and generates a recognition image 1104c.
- When the size of the object 1300 does not fit within the predetermined size, the clipping unit 200 reduces the image including the object 1300 cropped from the captured image 1100(N-1) to a size of 224 pixels wide by 224 pixels high to generate the recognition image 1104c.
- Note that the clipping unit 200 may instead reduce the entire captured image 1100(N-1) to the predetermined size, without clipping, to generate a recognition image 1104d.
- the clipping unit 200 can add the position information passed from the prediction/detection unit 210 to the recognition image 1104d.
- the clipping unit 200 outputs the recognition image 1104c out of the recognition images 1104c and 1104d.
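- The clipping behaviour described above can be sketched as follows; the centre-based window placement, the nearest-neighbour reduction used for the whole-image fallback, and the function names are illustrative assumptions rather than the disclosed register-based implementation.

```python
import numpy as np

CROP = 224  # resolution that the recognition unit can handle

def crop_recognition_image(frame, center_xy):
    """Cut a CROP x CROP window around the (predicted) object position,
    clamped so that the window stays inside the frame."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    left = int(np.clip(cx - CROP // 2, 0, w - CROP))
    top = int(np.clip(cy - CROP // 2, 0, h - CROP))
    return frame[top:top + CROP, left:left + CROP]

def reduce_recognition_image(frame):
    """Fallback: shrink the whole frame to CROP x CROP by simple pixel
    thinning (nearest-neighbour sampling) instead of cropping."""
    h, w = frame.shape[:2]
    ys = np.arange(CROP) * h // CROP
    xs = np.arange(CROP) * w // CROP
    return frame[np.ix_(ys, xs)]

frame = np.zeros((3072, 4096), dtype=np.uint8)              # 4k x 3k frame
print(crop_recognition_image(frame, (2000, 1500)).shape)    # (224, 224)
print(reduce_recognition_image(frame).shape)                # (224, 224)
```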
- The recognition image 1104c cut out from the captured image 1100(N-1) by the clipping unit 200 is passed to the recognition unit 204.
- the recognition unit 204 executes recognition processing for recognizing the object 1300 included in the recognition image 1104c using, for example, DNN.
- the recognition result of the object 1300 by the recognition unit 204 is passed to the AP 101, for example.
- the recognition result can include information indicating the type of the object 1300 and the degree of recognition of the object 1300, for example.
- FIG. 17 is an example functional block diagram for explaining the function of the prediction/detection unit 210 according to the second embodiment.
- The prediction/detection unit 210 includes a position detection image generation unit 2010, an object position detection unit 2013, a background memory 2111, a position information memory 2110, and a prediction unit 2100.
- The position detection image generation unit 2010 and the object position detection unit 2013 are the same as the position detection image generation unit 2010 and the object position detection unit 2013 described above for the first embodiment, so their description is omitted here.
- the position information memory 2110 can store at least two frames of position information indicating the position of the object 1300 in the past.
- The position detection image generation unit 2010 generates a low-resolution image 310 by lowering the resolution of the captured image 1100(N-1), which includes the object 1300 (not shown) and is supplied from the imaging block 20, and passes it to the object position detection unit 2013.
- From the low-resolution image 310, the object position detection unit 2013 detects the position corresponding to the object 1300.
- Information indicating the detected position is passed to the position information memory 2110 as the position information (N-1) of the (N-1)th frame.
- The position information (N-1) indicating the position of the object 1300 is moved to the (N-2) area of the memory 211 at the timing of the next frame, and is then used as the position information (N-2) of the (N-2)th frame.
- The prediction unit 2100 is passed the position information (N-1) of the (N-1)th frame and the position information (N-2) of the preceding (N-2)th frame.
- Based on the position information (N-1) passed from the object position detection unit 2013 and the position information (N-2) stored in the (N-2) area of the memory 211, the prediction unit 2100 predicts the position of the object 1300 in the captured image 1100N of the Nth frame, which is a future frame.
- the prediction unit 2100 can linearly predict the position of the object 1300 in the captured image 1100N of the N-th frame based on, for example, two pieces of position information (N-1) and position information (N-2).
- The memory 211 may further store low-resolution images of past frames, and two or more pieces of position information may be used to predict the position. Furthermore, those low-resolution images can also be used to determine whether the object 1300 detected in each frame is the same object. Note that the prediction unit 2100 can also predict the position using a model learned by machine learning.
- The prediction unit 2100 outputs position information (N), which indicates the predicted position of the object 1300 in the captured image 1100N of the Nth frame, to the clipping unit 200, for example.
- FIG. 18 is an example sequence diagram for explaining the processing according to the second embodiment. The meaning of each part in FIG. 18 is the same as in FIG. 4 and other figures described above, so description thereof is omitted here.
- a captured image 1100 (N-1) including the object 1300 is captured in the (N-1)th frame.
- After predetermined image processing is performed (step S130), the prediction/detection unit 210 performs the above-described motion prediction processing 330 based on the two pieces of position information (N-1) and (N-2), predicts the position of the object 1300 in the captured image 1100N of the Nth frame, and generates position information (N) indicating the predicted position (step S131).
- Based on the position information (N) indicating the position of the object 1300 in the future captured image 1100N, predicted by the object position detection processing in step S131, the image sensor 100 calculates a register setting value for cutting out an image of the area including the object 1300 from the captured image 1100N (step S132).
- The object position detection processing in step S131 uses only a small number of pixels, so the processing is relatively light and can be completed within the period of the (N-1)th frame.
- The register setting value calculated in step S132 is reflected in the clipping unit 200 in the next Nth frame (step S133).
- The clipping unit 200 performs the clipping processing on the captured image 1100N (not shown) of the Nth frame according to the register setting value (step S134) to generate a recognition image 1104c.
- This recognition image 1104 c is passed to the recognition unit 204 .
- The recognition unit 204 performs the recognition processing for the object 1300 based on the passed recognition image 1104c (step S135), and outputs the recognition result to, for example, the AP 101 (step S136).
- FIG. 19 is a schematic diagram for explaining motion estimation according to the second embodiment.
- the meaning of each part is the same as in FIG. 7 described above, so description thereof will be omitted here.
- the position of the object 1300 in the captured image 1100N of the Nth frame is predicted using the information of the (N-1)th frame immediately before the Nth frame. Therefore, as indicated by a trajectory 1402 in FIG. 19, it is possible to predict a trajectory that is close to the actual trajectory 1401 .
- FIG. 20 is a schematic diagram for explaining pipeline processing applicable to the second embodiment. It should be noted that the description of the parts common to FIG. 18 described above will be omitted here.
- The image sensor 100 executes the object position prediction processing described above (step S131) based on the captured image 1100N.
- the image sensor 100 executes a register set value calculation process (step S132) based on the position information (N) indicating the predicted position.
- the register set value calculated here is reflected in the clipping process (step S134) in the next (N+1)-th frame (step S133).
- Meanwhile, the image sensor 100 uses the register setting value calculated in the immediately preceding (N-1)th frame (step S133) and executes the clipping processing in the clipping unit 200 (step S134) to generate a recognition image 1104c.
- the recognition unit 204 executes recognition processing for the object 1300 based on the generated recognition image 1104c (step S135).
- The object position prediction processing (step S131), the register setting value calculation processing (step S132), the clipping processing (step S134), and the recognition processing (step S135) are independent processes. Therefore, the pipeline formed by the object position prediction processing (step S131) and the register setting value calculation processing (step S132) and the pipeline formed by the clipping processing (step S134) and the recognition processing (step S135) can be executed in parallel, and the processing can be performed without lowering the throughput compared to the existing technology. Note that this pipeline processing can be similarly applied to the processing according to the first embodiment described above.
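- The data dependence underlying this pipelining can be illustrated with the toy loop below, in which the register value computed for one frame is only consumed by the clipping of the next frame; the single-threaded loop and the stand-in functions are assumptions for illustration and do not model the parallel hardware pipelines themselves.

```python
def run_pipeline(frames, detect, compute_register, crop, recognize):
    """Toy two-stage pipeline: detection/prediction on frame k produces a
    register value that the crop + recognition of frame k+1 consumes."""
    pending_register = None     # register value computed in the previous frame
    results = []
    for frame in frames:
        # Stage B: clipping + recognition, using last frame's register value.
        if pending_register is not None:
            results.append(recognize(crop(frame, pending_register)))
        # Stage A: position detection/prediction on the current frame,
        # producing the register value reflected at the next frame.
        pending_register = compute_register(detect(frame))
    return results

# Minimal stand-ins just to show the data flow.
frames = ["frame0", "frame1", "frame2", "frame3"]
detect = lambda f: f + ":pos"
compute_register = lambda pos: pos + ":reg"
crop = lambda f, reg: f + ":cropped_with(" + reg + ")"
recognize = lambda img: "recognized " + img
for result in run_pipeline(frames, detect, compute_register, crop, recognize):
    print(result)
```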
- The third embodiment is an example in which a recognition image from which the background image has been removed is passed to the recognition unit 204.
- By removing the background image other than the object from the recognition image, the recognition unit 204 can recognize the object with higher accuracy.
- FIG. 21 is an example functional block diagram for explaining the functions of the image sensor according to the third embodiment.
- The image sensor 100 shown in FIG. 21 has a clipping unit 200, a background cancellation unit 221, a background memory 222, and a recognition unit 204.
- Imaging is performed in the imaging block 20 (see FIG. 10), not shown in FIG. 21, and the imaging block 20 outputs a captured image 1100N of the Nth frame, which is a 4k × 3k image.
- a captured image 1100N output from the imaging block 20 is supplied to the clipping section 200 .
- the clipping unit 200 reduces the captured image 1100N to a resolution that can be handled by the recognition unit 204, for example, 224 pixels wide ⁇ 224 pixels high, and generates a recognition image 1104e. Note that the clipping unit 200 may generate the reduced recognition image 1104e by simply thinning out pixels, or may generate it using linear interpolation or the like.
- the recognition image 1104 e is input to the background cancellation unit 221 .
- As the background image 340, for example, when the imaging device 10 equipped with the image sensor 100 is used as a surveillance camera with a fixed imaging range, imaging can be performed in a default state in which no person is present in the imaging range, and the captured image obtained there can be applied. Not limited to this, the background image can also be captured according to a user operation on the imaging device 10.
- the background image 340 stored in the background memory 222 is not limited to a size of 224 pixels wide by 224 pixels high.
- a background image 341 having the same size of 4k ⁇ 3k as the captured image 1100N may be stored in the background memory 222 .
- the background memory 222 can store a background image of any size from 224 pixels wide ⁇ 224 pixels high to 4k ⁇ 3k.
- In that case, the background canceling unit 221 converts the background image into an image that corresponds to the recognition image 1104e and has a size of 224 pixels wide by 224 pixels high.
- The background canceling unit 221 uses, for example, the background image 340, which has a size of 224 pixels wide by 224 pixels high, the same as the recognition image 1104e, and obtains the absolute value of the difference between the recognition image 1104e input from the clipping unit 200 and the background image 340.
- The background cancellation unit 221 performs threshold determination on the obtained absolute difference value for each pixel of the recognition image 1104e. In accordance with the result of this threshold determination, the background canceling unit 221 determines, for example, an area of pixels whose absolute difference value is [1] or more to be the object area and an area of pixels whose absolute difference value is [0] to be the background part, replaces the pixel values of the pixels in the background part with a predetermined pixel value (for example, a pixel value indicating white), and passes the resulting background-cancelled image to the recognition unit 204 as the recognition image.
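- A minimal sketch of this background cancellation, assuming 8-bit grayscale images of the same 224 × 224 size and white as the fill value for the background part; the function name and the default threshold are illustrative assumptions.

```python
import numpy as np

def cancel_background(recognition_image, background_image,
                      threshold=1.0, fill_value=255):
    """Replace pixels judged to belong to the background with a fixed value
    (e.g. white), keeping only the object region for recognition."""
    diff = np.abs(recognition_image.astype(np.int16) -
                  background_image.astype(np.int16))
    is_object = diff >= threshold        # per-pixel threshold decision
    out = recognition_image.copy()
    out[~is_object] = fill_value         # paint over the background part
    return out

background = np.full((224, 224), 20, dtype=np.uint8)
image = background.copy()
image[80:140, 90:160] = 180              # object in front of the background
cancelled = cancel_background(image, background)
print(np.unique(cancelled))              # background -> 255, object kept
```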
- a recognition result by the recognition unit 204 is output to the AP 101, for example.
- the fourth embodiment is a combination of the configurations according to the first to third embodiments described above.
- FIG. 22 is an example functional block diagram for explaining the functions of the image sensor according to the fourth embodiment.
- the image sensor 100 includes a clipping unit 200, a prediction/detection unit 210, a background memory 222, a memory 211 including a position information memory 2110 and a background memory 2111, a background cancellation unit 221, and a recognition unit 204.
- Since the functions of these units are the same as those described in the first to third embodiments, detailed descriptions thereof are omitted here.
- Imaging is performed in the imaging block 20 (see FIG. 10), not shown in FIG. 22, and the imaging block 20 outputs the captured image 1100(N-1) of the (N-1)th frame, which is a 4k × 3k image.
- The captured image 1100(N-1) output from the imaging block 20 is supplied to the clipping unit 200 and the prediction/detection unit 210.
- In the same manner as the position detection image generation unit 2010 described above, the prediction/detection unit 210 generates, from the supplied captured image 1100(N-1), a low-resolution image 300 of, for example, 16 pixels wide by 16 pixels high.
- the prediction/detection unit 210 also obtains the difference between the generated low-resolution image 300 and the low-resolution background image 311 stored in the background memory 2111 to obtain the position information (N ⁇ 1) of the object 1300 .
- The prediction/detection unit 210 treats the position information already stored in the position information memory 2110 of the memory 211 as the position information (N-2) of the (N-2)th frame, and stores the newly obtained position information (N-1) in the position information memory 2110 of the memory 211.
- Using the position information (N-1) and (N-2), the prediction/detection unit 210 executes the motion prediction processing 330 described above and predicts the position of the object 1300 in the captured image 1100N of the Nth frame, which is a future frame.
- the prediction/detection unit 210 generates a low-resolution image 312 including position information (N) indicating the predicted position in this way, and passes it to the clipping unit 200 .
- Based on the position information (N) included in the low-resolution image 312 passed from the prediction/detection unit 210, the clipping unit 200 clips, from the captured image 1100(N-1), the image at the position where the object 1300 is predicted to be included in the captured image 1100N of the Nth frame, in a predetermined size (for example, 224 pixels wide × 224 pixels high) that can be handled by the recognition unit 204, and generates a recognition image 1104g.
- When the size of the object 1300 does not fit within the predetermined size, the clipping unit 200 can reduce the clipped image including the object 1300 from the captured image 1100N to a size of 224 pixels wide by 224 pixels high to generate a recognition image 1104a.
- the clipping unit 200 may reduce the entire captured image 1100N to the predetermined size without clipping from the captured image 1100N to generate the recognition image 1104h.
- the clipping unit 200 can add the position information (N) passed from the prediction/detection unit 210 to the recognition image 1104h.
- the recognition image 1104 g output from the clipping unit 200 is input to the background cancellation unit 221 .
- a background image 340 corresponding in size to the recognition image 1104 g stored in the background memory 222 is also input to the background cancellation unit 221 .
- the background canceling unit 221 obtains the difference between the recognition image 1104g and the background image 340, and performs threshold determination of the absolute value of the difference for each pixel of the difference image.
- In accordance with the result of this threshold determination, a region of pixels whose absolute difference value is, for example, [1] or more is determined to be the object region, a region of pixels whose absolute difference value is [0] is determined to be the background portion, and the pixel values of the pixels in the background portion are replaced with a predetermined pixel value (for example, a pixel value indicating white).
- An image in which the pixel values of pixels in the background portion are replaced with predetermined pixel values is passed to the recognition unit 204 as a recognition image 1104i in which the background has been canceled.
- The background canceling unit 221 can also convert the background image into an image whose size corresponds to that of the recognition image. For example, when the recognition image 1104h obtained by reducing the captured image 1100(N-1) is input to the background canceling unit 221, the background canceling unit 221 reduces the background image 341, which has the same size as the captured image 1100(N-1), and obtains the difference between the reduced background image 341 and the recognition image 1104h. The background canceling unit 221 then performs threshold determination on each pixel of the difference image, and a region of pixels whose absolute difference value is, for example, [0] is determined to be the background portion.
- the background cancellation unit 221 replaces the pixel values of the pixels included in the area determined as the background portion with a predetermined pixel value (for example, a pixel value indicating white).
- An image in which the pixel values of the pixels in the area determined to be the background portion are replaced with predetermined pixel values is passed to the recognition unit 204 as a recognition image 1104j in which the background has been canceled.
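- Where the whole reduced frame is used as the recognition image, the stored full-size background can be reduced in the same way before taking the difference; the sketch below assumes the same nearest-neighbour reduction as in the earlier examples and is illustrative only.

```python
import numpy as np

def reduce_to(image, size=224):
    """Nearest-neighbour reduction to size x size (illustrative)."""
    h, w = image.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return image[np.ix_(ys, xs)]

def cancel_background_fullsize(reduced_frame, fullsize_background,
                               threshold=1, fill_value=255):
    """Reduce the stored full-size background to match the reduced
    recognition image, then blank out pixels judged to be background."""
    reduced_bg = reduce_to(fullsize_background, reduced_frame.shape[0])
    diff = np.abs(reduced_frame.astype(np.int16) - reduced_bg.astype(np.int16))
    out = reduced_frame.copy()
    out[diff < threshold] = fill_value
    return out

full_bg = np.full((3072, 4096), 20, dtype=np.uint8)
frame = full_bg.copy()
frame[600:900, 1000:1400] = 200
print(cancel_background_fullsize(reduce_to(frame), full_bg).shape)  # (224, 224)
```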
- the recognition unit 204 performs recognition processing of the object 1300 on the background-cancelled recognition image 1104i or 1104j passed from the background cancellation unit 221 .
- a result of the recognition processing is output to the AP 101, for example.
- The clipping unit 200 clips the recognition image 1104g from the captured image 1100N based on the predicted position. Then, the recognition image 1104i, obtained by canceling the background portion of the recognition image 1104g in the background canceling unit 221, is input to the recognition unit 204.
- The position prediction of the object 1300 in the captured image 1100N of the Nth frame is performed using an image of, for example, 16 pixels wide by 16 pixels high, obtained by reducing the 4k × 3k image, so that the processing can be sped up and the latency can be reduced.
- The present technology can also take the following configurations.
- (1) An image processing device comprising: a detection unit that detects a position, in an input image, of an object included in the input image; a generation unit that generates, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected by the detection unit; and a recognition unit that performs recognition processing for recognizing the object on the recognition image generated by the generation unit.
- (2) The image processing device according to (1) above, wherein the detection unit converts the input image of a first resolution into a detection image of a second resolution lower than the first resolution, and detects the position in the input image based on the detection image.
- (3) The image processing device according to (2) above, wherein the predetermined resolution is lower than the first resolution, and the second resolution is lower than the predetermined resolution.
- (4) The image processing device according to (2) or (3) above, wherein the detection unit uses, as the detection image, a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and an image of the second resolution obtained by converting the input image including the object.
- (5) The image processing device according to (2) above, wherein the detection unit predicts the position in a future input image relative to the input image, based on the position detected from the input image and the positions detected from one or more past input images relative to the input image.
- (6) The image processing device according to (5) above, wherein the detection unit has a memory capable of storing at least two frames of position information indicating the position of the object, and predicts the position in an input image one frame in the future relative to the input image, based on the position information detected from a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and a detection image obtained by converting the input image into an image of the second resolution, and the position information of the frame one frame before the frame in which that position information was detected.
- (7) The image processing device according to any one of (1) to (6) above, wherein the generation unit cuts out a region corresponding to the object from the input image based on the position detected by the detection unit to generate the recognition image.
- (8) The image processing device according to (7) above, wherein, when the size of the object in the input image is large relative to the predetermined resolution, the generation unit reduces the image of the region to generate the recognition image of the predetermined resolution including the entire object.
- (9) The image processing device according to any one of (1) to (5) above, wherein the generation unit reduces the input image to an image of the predetermined resolution to generate the recognition image, and passes the position detected by the detection unit to the recognition unit together with the recognition image.
- (10) The image processing device according to any one of (1) to (9) above, further comprising a background removal unit that removes a background portion of the recognition image and outputs the result to the recognition unit, wherein the background removal unit performs determination processing of the background portion, based on a threshold, on an image generated by subtracting, as an image of the background portion, an image of the predetermined resolution of the region corresponding to the object based on the position, in an image corresponding to a case where the input image does not include the object, from the image of the predetermined resolution including the object generated from the input image by the generation unit based on the position detected by the detection unit, and outputs, to the recognition unit, an image in which the pixel values of pixels included in the pixel region of the background portion are replaced with a predetermined pixel value, as the recognition image from which the background portion has been removed.
- (11) The image processing device according to (10) above, wherein the background removal unit has a background image memory that stores the image of the background portion.
- (12) The image processing device according to any one of (1) to (11) above, wherein the recognition unit recognizes the object based on a model learned by machine learning.
- (13) The image processing device according to (12) above, wherein the recognition unit recognizes the object using a DNN (Deep Neural Network).
- (14) An image processing method executed by a processor, the method comprising: a detection step of detecting a position, in an input image, of an object included in the input image; a generation step of generating, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected in the detection step; and a recognition step of performing recognition processing for recognizing the object on the recognition image generated in the generation step.
- 10 Imaging device
- 100 Image sensor
- 101 Application processor
- 200 Clipping unit
- 201 Detection unit
- 202, 222, 2111 Background memory
- 204 Recognition unit
- 210 Prediction/detection unit
- 211 Memory
- 221 Background cancellation unit
- 222, 2111 Background memory
- 1100, 1100N, 1100(N-1), 1100(N-2), 1100(N-3) Captured image
- 1300 Object
- 1104, 1104a, 1104b, 1104c, 1104d, 1104e, 1104f, 1104g, 1104h, 1104i, 1104j Recognition image
- 2110 Position information memory
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Abstract
Description
1. Overview of the present disclosure
2. Existing technology
2-1. First image processing method according to the existing technology
2-2. Second image processing method according to the existing technology
2-3. Third image processing method according to the existing technology
2-4. Motion prediction according to the existing technology
3. Configurations applicable to each embodiment of the present disclosure
4. First embodiment according to the present disclosure
4-1. Configuration example according to the first embodiment
4-2. Processing example according to the first embodiment
5. Second embodiment according to the present disclosure
5-1. Configuration example according to the second embodiment
5-2. Processing example according to the second embodiment
5-3. Pipeline processing applicable to the second embodiment
6. Third embodiment according to the present disclosure
7. Fourth embodiment according to the present disclosure
The present disclosure relates to an image sensor that captures a subject and acquires a captured image. The image sensor according to the present disclosure includes an imaging unit that performs imaging and a recognition unit that performs object recognition based on the captured image captured by the imaging unit. In the present disclosure, the position, on the captured image, of an object to be recognized by the recognition unit is detected based on the captured image captured by the imaging unit. Based on the detected position, an image including the region corresponding to the object is cut out from the captured image at a resolution that the recognition unit can handle, and is output to the recognition unit as a recognition image.
Prior to the description of each embodiment of the present disclosure, existing technology related to the technology of the present disclosure will be outlined to facilitate understanding.
First, a first image processing method according to the existing technology will be described. FIG. 1 is a schematic diagram for explaining the first image processing method according to the existing technology. In FIG. 1, an image sensor 1000 includes an imaging unit (not shown) and a recognition unit 1010 that uses a captured image 1100 captured by the imaging unit as a source image and recognizes an object included in the captured image 1100. The recognition unit 1010 recognizes the object included in the captured image using a DNN (Deep Neural Network).
Next, a second image processing method according to the existing technology will be described. In this second image processing method and a third image processing method described later, in order to suppress the decrease in the recognition rate for individual objects in the first image processing method described above, an image corresponding to a region including the object to be recognized is cut out from the captured image 1100 to generate an input image to be input to the recognition unit 1010.
Next, a third image processing method according to the existing technology will be described. As described above, this third image processing method cuts out an image corresponding to a region including the object to be recognized from the captured image 1100 to generate an input image to be input to the recognition unit 1010. At this time, in the third image processing method, the image is cut out based on the recognition result of the recognition unit 1010 inside the image sensor 1000, without using the AP 1001.
Next, motion prediction of a fast-moving object 1300 when the second or third image processing method described above is used, that is, prediction of the future position of the object 1300, will be described.
Next, configurations applicable to each embodiment of the present disclosure will be described.
Next, a first embodiment according to the present disclosure will be described.
FIG. 12 is an example functional block diagram for explaining the functions of the image sensor 100 according to the first embodiment. In FIG. 12, the image sensor 100 includes a clipping unit 200, a detection unit 201, a background memory 202, and a recognition unit 204. The clipping unit 200, the detection unit 201, the background memory 202, and the recognition unit 204 are implemented by, for example, the DSP 32 in the signal processing block 30 shown in FIG. 10.
FIG. 15 is an example sequence diagram for explaining the processing according to the first embodiment. The meaning of each part in FIG. 15 is the same as in FIG. 4 and other figures described above, so description thereof is omitted here.
Next, a second embodiment according to the present disclosure will be described. The second embodiment is an example in which the position of the object 1300 in the captured image 1100N of the Nth frame is predicted using low-resolution images based on a plurality of captured images 1100(N-2) and 1100(N-1), such as those of the (N-2)th and (N-1)th frames.
FIG. 16 is an example functional block diagram for explaining the functions of the image sensor according to the second embodiment. Compared with the image sensor 100 according to the first embodiment described with reference to FIG. 12, the image sensor 100 shown in FIG. 16 has a prediction/detection unit 210 instead of the detection unit 201, and has a memory 211 capable of holding at least two pieces of position information.
FIG. 18 is an example sequence diagram for explaining the processing according to the second embodiment. The meaning of each part in FIG. 18 is the same as in FIG. 4 and other figures described above, so description thereof is omitted here.
In the processing described with reference to FIG. 18, the object position prediction processing and the recognition processing can be executed as separate pipeline processes, so the processing can be performed without lowering the throughput compared to the existing technology.
Next, a third embodiment according to the present disclosure will be described. The third embodiment is an example in which a recognition image from which the background image has been removed is passed to the recognition unit 204. By removing the background image other than the object from the recognition image, the recognition unit 204 can recognize the object with higher accuracy.
Next, a fourth embodiment according to the present disclosure will be described. The fourth embodiment is a combination of the configurations according to the first to third embodiments described above.
(1) An image processing device comprising: a detection unit that detects a position, in an input image, of an object included in the input image; a generation unit that generates, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected by the detection unit; and a recognition unit that performs recognition processing for recognizing the object on the recognition image generated by the generation unit.
(2) The image processing device according to (1) above, wherein the detection unit converts the input image of a first resolution into a detection image of a second resolution lower than the first resolution, and detects the position in the input image based on the detection image.
(3) The image processing device according to (2) above, wherein the predetermined resolution is lower than the first resolution, and the second resolution is lower than the predetermined resolution.
(4) The image processing device according to (2) or (3) above, wherein the detection unit uses, as the detection image, a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and an image of the second resolution obtained by converting the input image including the object.
(5) The image processing device according to (2) above, wherein the detection unit predicts the position in a future input image relative to the input image, based on the position detected from the input image and the positions detected from one or more past input images relative to the input image.
(6) The image processing device according to (5) above, wherein the detection unit has a memory capable of storing at least two frames of position information indicating the position of the object, and predicts the position in an input image one frame in the future relative to the input image, based on the position information detected from a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and a detection image obtained by converting the input image into an image of the second resolution, and the position information of the frame one frame before the frame in which that position information was detected.
(7) The image processing device according to any one of (1) to (6) above, wherein the generation unit cuts out a region corresponding to the object from the input image based on the position detected by the detection unit to generate the recognition image.
(8) The image processing device according to (7) above, wherein, when the size of the object in the input image is large relative to the predetermined resolution, the generation unit reduces the image of the region to generate the recognition image of the predetermined resolution including the entire object.
(9) The image processing device according to any one of (1) to (5) above, wherein the generation unit reduces the input image to an image of the predetermined resolution to generate the recognition image, and passes the position detected by the detection unit to the recognition unit together with the recognition image.
(10) The image processing device according to any one of (1) to (9) above, further comprising a background removal unit that removes a background portion of the recognition image and outputs the result to the recognition unit, wherein the background removal unit performs determination processing of the background portion, based on a threshold, on an image generated by subtracting, as an image of the background portion, an image of the predetermined resolution of the region corresponding to the object based on the position, in an image corresponding to a case where the input image does not include the object, from the image of the predetermined resolution including the object generated from the input image by the generation unit based on the position detected by the detection unit, and outputs, to the recognition unit, an image in which the pixel values of pixels included in the pixel region of the background portion are replaced with a predetermined pixel value, as the recognition image from which the background portion has been removed.
(11) The image processing device according to (10) above, wherein the background removal unit has a background image memory that stores the image of the background portion.
(12) The image processing device according to any one of (1) to (11) above, wherein the recognition unit recognizes the object based on a model learned by machine learning.
(13) The image processing device according to (12) above, wherein the recognition unit recognizes the object using a DNN (Deep Neural Network).
(14) An image processing method executed by a processor, the method comprising: a detection step of detecting a position, in an input image, of an object included in the input image; a generation step of generating, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected in the detection step; and a recognition step of performing recognition processing for recognizing the object on the recognition image generated in the generation step.
100 Image sensor
101 Application processor
200 Clipping unit
201 Detection unit
202, 222, 2111 Background memory
204 Recognition unit
210 Prediction/detection unit
211 Memory
221 Background cancellation unit
222, 2111 Background memory
1100, 1100N, 1100(N-1), 1100(N-2), 1100(N-3) Captured image
1300 Object
1104, 1104a, 1104b, 1104c, 1104d, 1104e, 1104f, 1104g, 1104h, 1104i, 1104j Recognition image
2110 Position information memory
Claims (14)
1. An image processing device comprising: a detection unit that detects a position, in an input image, of an object included in the input image; a generation unit that generates, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected by the detection unit; and a recognition unit that performs recognition processing for recognizing the object on the recognition image generated by the generation unit.
2. The image processing device according to claim 1, wherein the detection unit converts the input image of a first resolution into a detection image of a second resolution lower than the first resolution, and detects the position in the input image based on the detection image.
3. The image processing device according to claim 2, wherein the predetermined resolution is lower than the first resolution, and the second resolution is lower than the predetermined resolution.
4. The image processing device according to claim 2, wherein the detection unit uses, as the detection image, a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and an image of the second resolution obtained by converting the input image including the object.
5. The image processing device according to claim 2, wherein the detection unit predicts the position in a future input image relative to the input image, based on the position detected from the input image and the positions detected from one or more past input images relative to the input image.
6. The image processing device according to claim 5, wherein the detection unit has a memory capable of storing at least two frames of position information indicating the position of the object, and predicts the position in an input image one frame in the future relative to the input image, based on the position information detected from a difference between an image of the second resolution obtained by converting an image corresponding to a case where the input image does not include the object and a detection image obtained by converting the input image into an image of the second resolution, and the position information of the frame one frame before the frame in which that position information was detected.
7. The image processing device according to claim 1, wherein the generation unit cuts out a region corresponding to the object from the input image based on the position detected by the detection unit to generate the recognition image.
8. The image processing device according to claim 7, wherein, when the size of the object in the input image is large relative to the predetermined resolution, the generation unit reduces the image of the region to generate the recognition image of the predetermined resolution including the entire object.
9. The image processing device according to claim 1, wherein the generation unit reduces the input image to an image of the predetermined resolution to generate the recognition image, and passes the position detected by the detection unit to the recognition unit together with the recognition image.
10. The image processing device according to claim 1, further comprising a background removal unit that removes a background portion of the recognition image and outputs the result to the recognition unit, wherein the background removal unit performs determination processing of the background portion, based on a threshold, on an image generated by subtracting, as an image of the background portion, an image of the predetermined resolution of the region corresponding to the object based on the position, in an image corresponding to a case where the input image does not include the object, from the image of the predetermined resolution including the object generated from the input image by the generation unit based on the position detected by the detection unit, and outputs, to the recognition unit, an image in which the pixel values of pixels included in the pixel region of the background portion are replaced with a predetermined pixel value, as the recognition image from which the background portion has been removed.
11. The image processing device according to claim 10, wherein the background removal unit has a background image memory that stores the image of the background portion.
12. The image processing device according to claim 1, wherein the recognition unit recognizes the object based on a model learned by machine learning.
13. The image processing device according to claim 12, wherein the recognition unit recognizes the object using a DNN (Deep Neural Network).
14. An image processing method executed by a processor, the method comprising: a detection step of detecting a position, in an input image, of an object included in the input image; a generation step of generating, from the input image, a recognition image of a predetermined resolution including the object, based on the position detected in the detection step; and a recognition step of performing recognition processing for recognizing the object on the recognition image generated in the generation step.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280012150.7A CN116830153A (zh) | 2021-02-03 | 2022-01-25 | 图像处理装置和图像处理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-015918 | 2021-02-03 | ||
JP2021015918A JP7460561B2 (ja) | 2021-02-03 | 2021-02-03 | 撮像装置および画像処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022168667A1 true WO2022168667A1 (ja) | 2022-08-11 |
Family
ID=82741274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/002594 WO2022168667A1 (ja) | 2021-02-03 | 2022-01-25 | 画像処理装置および画像処理方法 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7460561B2 (ja) |
CN (1) | CN116830153A (ja) |
WO (1) | WO2022168667A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004228770A (ja) * | 2003-01-21 | 2004-08-12 | Minolta Co Ltd | 画像処理システム |
JP2008109552A (ja) * | 2006-10-27 | 2008-05-08 | Sanyo Electric Co Ltd | 追跡機能付き撮像装置 |
JP2009117956A (ja) * | 2007-11-02 | 2009-05-28 | Hitachi Kokusai Electric Inc | 映像処理装置 |
JP2013009050A (ja) * | 2011-06-22 | 2013-01-10 | Canon Inc | 画像処理装置、画像処理方法 |
US20190213420A1 (en) * | 2018-01-09 | 2019-07-11 | Qualcomm Incorporated | Adaptive object detection and recognition |
-
2021
- 2021-02-03 JP JP2021015918A patent/JP7460561B2/ja active Active
-
2022
- 2022-01-25 WO PCT/JP2022/002594 patent/WO2022168667A1/ja active Application Filing
- 2022-01-25 CN CN202280012150.7A patent/CN116830153A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004228770A (ja) * | 2003-01-21 | 2004-08-12 | Minolta Co Ltd | 画像処理システム |
JP2008109552A (ja) * | 2006-10-27 | 2008-05-08 | Sanyo Electric Co Ltd | 追跡機能付き撮像装置 |
JP2009117956A (ja) * | 2007-11-02 | 2009-05-28 | Hitachi Kokusai Electric Inc | 映像処理装置 |
JP2013009050A (ja) * | 2011-06-22 | 2013-01-10 | Canon Inc | 画像処理装置、画像処理方法 |
US20190213420A1 (en) * | 2018-01-09 | 2019-07-11 | Qualcomm Incorporated | Adaptive object detection and recognition |
Also Published As
Publication number | Publication date |
---|---|
CN116830153A (zh) | 2023-09-29 |
JP2022119005A (ja) | 2022-08-16 |
JP7460561B2 (ja) | 2024-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4656216B2 (ja) | 撮像装置、画像処理装置、画像処理方法、プログラム及び記録媒体 | |
EP2330812A1 (en) | Apparatus for generating a panoramic image, method for generating a panoramic image, and computer-readable medium | |
US10359498B2 (en) | Image pickup apparatus having function of generating simulation image,control method therefor, and storage medium | |
US11539896B2 (en) | Method and apparatus for dynamic image capturing based on motion information in image | |
US20220360711A1 (en) | Automated camera mode selection | |
JP2008109336A (ja) | 画像処理装置および撮像装置 | |
US8798369B2 (en) | Apparatus and method for estimating the number of objects included in an image | |
WO2016011872A1 (zh) | 图像拍摄方法、装置和计算机存储介质 | |
JP2017188760A (ja) | 画像処理装置、画像処理方法、コンピュータプログラム及び電子機器 | |
JP2007281555A (ja) | 撮像装置 | |
US7428012B2 (en) | Image processing method including generating face region information and a digital camera including face region identification unit | |
CN107211098B (zh) | 对场景进行成像的方法的装置 | |
JP2021196643A (ja) | 推論装置、撮像装置、学習装置、推論方法、学習方法、及びプログラム | |
US9986163B2 (en) | Digital photographing apparatus and digital photographing method | |
US8243154B2 (en) | Image processing apparatus, digital camera, and recording medium | |
WO2022168667A1 (ja) | 画像処理装置および画像処理方法 | |
JP2000023024A (ja) | 画像入力装置 | |
JP2007251532A (ja) | 撮像装置及び顔領域抽出方法 | |
JP5016999B2 (ja) | 撮像装置、撮像方法、及びプログラム | |
EP3844945B1 (en) | Method and apparatus for dynamic image capturing based on motion information in image | |
JP2014225763A (ja) | 撮像装置及びその制御方法、プログラム、並びに記憶媒体 | |
JPH1175106A (ja) | 静止画カメラ | |
CN113347490B (zh) | 视频处理方法、终端及存储介质 | |
US10785420B2 (en) | Imaging device and camera for minimizing a digital gain value to be multiplied and suppressing noise | |
JP5610818B2 (ja) | 撮像装置及びその制御方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22749542 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18262701 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280012150.7 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22749542 Country of ref document: EP Kind code of ref document: A1 |