US20180278824A1 - Systems and methods for regionally controlling exposure time in high dynamic range imaging - Google Patents
Systems and methods for regionally controlling exposure time in high dynamic range imaging
- Publication number
- US20180278824A1 (application number US 15/469,309)
- Authority
- US
- United States
- Prior art keywords
- image
- sensors
- images
- duration
- bounding box
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N5/2353
- G06T5/90—Dynamic range modification of images or parts thereof; G06T5/94—Dynamic range modification based on local image properties, e.g. for local contrast enhancement
- G06K9/6212
- G06K9/6267
- G06T11/00—2D [Two Dimensional] image generation; G06T11/60—Editing figures and text; Combining figures or text
- G06T5/009
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/13—Edge detection
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N23/743—Bracketing, i.e. taking a series of images with varying exposure conditions
- H04N25/531—Control of the integration time by controlling rolling shutters in CMOS SSIS
- H04N5/2256
- H04N5/2356
- H04N5/3765
- G06K9/00791
- G06T2207/10004—Still image; Photographic image
- G06T2207/10016—Video; Image sequence
- G06T2207/20208—High dynamic range [HDR] image processing
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2210/12—Bounding box
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Definitions
- This disclosure relates to object detection in images and, more particularly, selectively controlling the exposure time of individual sensors based on the location of the object in the image.
- High dynamic range (HDR) imaging is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques.
- HDR photographs are generally achieved by capturing multiple standard exposure images with different exposure times and merging the captured images to form a single HDR image.
- Digital images are often encoded in a camera's raw image format because standard image encoding does not offer a great enough range of values to allow fine transitions, and it introduces undesirable effects due to lossy compression.
- the degree of exposure to light applied to the image sensor can be altered by increasing/decreasing the time of each exposure.
- the final image is constructed by combining the multiple frames captured at different exposures, wherein different parts of the final image include different combinations of different exposure frames.
- HDR imaging is a critical requirement for several scientific applications where a scene may contain bright, direct sunlight and extreme shade.
- cameras, or digital imagers, used on automotive applications may be subject to scenes that can include regions of both significant brightness (e.g., sun, oncoming headlights) and darkness (e.g., under a bridge, parking garage).
- the typical image sensor is activated by rows of sensors (or pixels) in an asynchronous manner. In other words, the rows of the image sensor are activated in succession, which may result in a “rolling shutter” effect in a captured image.
- the rolling shutter effect may distort features of a scene that are rapidly moving or changing, causing the features to look distorted, partially captured, or not captured at all.
- Light emitting diodes (LEDs) in vehicle taillights and traffic signals are pulsed on and off at intervals the human eye does not detect, but a camera with a fast exposure time may miss the LED pulse. As a result, an advanced driving assistance system (ADAS) or a self-driving car system may not recognize that the vehicle in front of it is braking, or that a traffic light has turned red.
- the apparatus may include a digital imager.
- the digital imager may include a sensor array comprising a plurality of sensors, each sensor configured to generate a signal responsive to an amount of radiation incident on the sensor, the sensor array further configured to generate a plurality of images, wherein each of the plurality of images are generated under different exposure conditions, and an image signal processor configured to control exposure conditions for each sensor of the plurality of sensors.
- the apparatus may also include a processor coupled to the digital imager and configured to determine one or more weight values for each image in the plurality of images, combine the plurality of images into a single image based on a ratio of the one or more weight values for each image, determine a number of edges in the single image, the number of edges representing boundaries of objects, identify an object in the single image using the number of edges and an object database, determine a region of sensors of the plurality of sensors corresponding to the identified object, and transmit a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- a processor coupled to the digital imager and configured to determine one or more weight values for each image in the plurality of images, combine the plurality of images into a single image based on a ratio of the one or more weight values for each image, determine a number of edges in the single image, the number of edges representing boundaries of objects, identify an object in the single image using the number of edges and an object database, determine a region of sensors
- the processor is further configured to generate a bounding box based on the identified object in the single image, and wherein the bounding box comprises at least a portion of the identified object.
- the bounding box corresponds to the region of sensors of the plurality of sensors.
- Another innovation is a method for regionally controlling exposure time on an image sensor, including generating a plurality of images, via the image sensor, the image sensor comprising a plurality of sensors, wherein each sensor is configured to generate a signal responsive to an amount of radiation incident on the sensor, and wherein each of the plurality of images are generated under different exposure conditions, controlling exposure conditions, via an image signal processor, for each sensor of the plurality of sensors, computing, via a processor, one or more weight values for each image in the plurality of images, combining, via the processor, the plurality of images into a single image based on a ratio of the one or more weight values for each image, determining, via the processor, a number of edges in the single image, the number of edges representing boundaries of objects, identifying, via the processor, an object in the single image using the number of edges and an object database, determining, via the processor, a region of sensors of the plurality of sensors corresponding to the identified object, and transmitting, via the processor, a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- Another innovation is an apparatus for regionally controlling exposure time, that includes a means for generating a plurality of images, wherein each of the plurality of images are generated under different exposure conditions, a means for controlling exposure conditions of the means for generating, a means for computing one or more weight values for each image in the plurality of images, a means for combining the plurality of images into a single image based on a ratio of the one or more weight values for each image, a means for determining a number of edges in the single image, the number of edges representing boundaries of objects, a means for identifying an object in the single image using the number of edges and an object database, a means for determining a region of sensors of the plurality of sensors corresponding to the identified object, and a means for transmitting a first message to the means for controlling exposure conditions, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- the means for generating is an image sensor
- the means for controlling is an image signal processor
- the means for computing is a processor
- means for combining is the processor
- means for determining is the processor
- means for identifying is the processor
- the means for transmitting is the processor.
- Another innovation is a non-transitory, computer-readable medium comprising instructions executable by a processor of an apparatus, that causes the apparatus to generate a plurality of images, via a sensor array comprising a plurality of sensors, wherein each sensor is configured to generate a signal responsive to an amount of radiation incident on the sensor, and wherein each of the plurality of images are generated under different exposure conditions, control, via an image signal processor, exposure conditions for each sensor of the plurality of sensors, determine one or more weight values for each image in the plurality of images, combine the plurality of images into a single image based on a ratio of the one or more weight values for each image, determine a number of edges in the single image, the number of edges representing boundaries of objects, identify an object in the single image using the number of edges and an object database, determine a region of sensors of the plurality of sensors corresponding to the identified object, and transmit a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- FIG. 1 illustrates an example implementation of a camera system for identifying light sources.
- FIG. 2 is a block diagram illustrating an example of a camera system integrated with a perception system.
- FIG. 3A illustrates a typical image sensor activated by rows of pixels, where each row is read out in a sequential manner.
- FIG. 3B illustrates an example pulse sequence of an LED light source compared to a digital imager exposure time under a bright light.
- FIG. 4A is a flowchart that illustrates the steps for generating a single HDR image.
- FIG. 4B illustrates an example implementation of an HDR image blender for combining multiple images to generate a single image.
- FIG. 5 is a block diagram illustrating an example implementation of the perception system and the camera system.
- FIG. 6 illustrates varying degrees of generated bounding boxes at different stages of object detection.
- FIG. 7 is a flowchart that illustrates the steps for implementing a camera system and perception system for identification of light sources.
- the examples, systems, and methods described herein are described with respect to techniques for selectively controlling the exposure time of individual sensors of a sensor array or an image sensor, based on the location of an identified object in an image.
- the systems and methods described herein may be implemented on various types of imaging systems that include a camera, or digital imager, and operate in conjunction with various types of object detection systems. These include general purpose or special purpose digital cameras or any camera attached to or integrated with an electronic or analog system.
- Examples of photosensitive devices or cameras that may be suitable for use with the invention include, but are not limited to, semiconductor charge-coupled devices (CCD) or active sensors in CMOS or N-Type metal-oxide-semiconductor (NMOS) technologies, all of which can be germane in a variety of applications including: digital cameras, hand-held or laptop devices, and mobile devices (e.g., phones, smart phones, Personal Data Assistants (PDAs), Ultra Mobile Personal Computers (UMPCs), and Mobile Internet Devices (MIDs)).
- Examples of object detection systems that may be suitable for use with the invention include, but are not limited to, real-time object detection systems based on image processing.
- FIG. 1 illustrates an example of a first vehicle 105 and a second vehicle 110 .
- the second vehicle 110 may also be referred to as another vehicle, or as a plurality of other vehicles.
- the first vehicle 105 is equipped with an HDR camera and an object detection system that may be used in conjunction with an ADAS or self-driving car application.
- the first vehicle 105 equipped with the object detection system may include a camera 115 configured to capture HDR and wide dynamic range (WDR) images, and a perception system 120 .
- the camera 115 may be directed so that the lens is facing in the forward direction of the first vehicle 105 for capturing images or frames of the scene in front of the first vehicle 105. It is noted that FIG. 1 illustrates only one example arrangement.
- the camera may be located inside or outside of the first vehicle 105 , and may be directed such that the lens assembly 255 is facing in any direction.
- the camera 115 may be directed to the rear of the first vehicle 105 for capturing images or frames of the scene behind the first vehicle 105.
- the camera 115 and perception system 120 may be equipped in a vehicle other than a car, such as an air vehicle.
- a three-axis Cartesian coordinate system is illustrated extending from the camera 115 in the direction a lens assembly 255 of the camera 115 is facing, providing an example of the range of focus of the camera 115 .
- the camera 115 may capture a scene that includes a road and the markers and signs around the road, and other vehicles on and around the road.
- the camera 115 may be functionally and physically integrated with the perception system 120 .
- the perception system 120 may include a processor for executing an object detection algorithm for detecting objects in frames captured by the camera 115 .
- the perception system 120 may be integrated with the camera 115 using a wireless or wired bidirectional communication implementation.
- the communication link may include a wired communication link and/or a wireless communication link including Bluetooth or Wi-Fi, or an infra-red (IR) beam communication protocol.
- FIG. 2 is a block diagram 200 illustrating how the camera 115 and perception system 120 may implement techniques in accordance with aspects described in this disclosure. In some examples, the techniques described in this disclosure may be shared among the various components of the camera 115 and the perception system 120 .
- the camera 115 may include a plurality of physical and functional components.
- the components of the camera 115 may include the lens assembly 255, an image sensor 250, an image signal processor (ISP) 235, an on-chip memory 240, and an external memory 245.
- the camera 115 may include more, fewer, or different components.
- the lens assembly 255 captures light from a scene and brings it to a focus on the electrical sensor or film.
- the two main optical parameters of a photographic lens are maximum aperture and focal length.
- the focal length determines the angle of view, and a size of the image relative to that of an object for a given distance to the object (subject-distance).
- the maximum aperture (f-number, or f-stop) limits the brightness of the image and the fastest shutter speed usable for a given setting (focal length/effective aperture), with a smaller number indicating that more light is provided to the focal plane which typically can be thought of as the face of the image sensor in a simple digital camera.
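- As a minimal illustrative sketch (not part of the patent disclosure), the following Python snippet shows the usual relationship between f-number, shutter time, and the light delivered to the focal plane; the helper names and example values are assumptions used only for this example.

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

def relative_light(f_number: float, shutter_s: float) -> float:
    """Light reaching the focal plane is proportional to t / N^2."""
    return shutter_s / f_number ** 2

# A smaller f-number (wider aperture) delivers more light for the same shutter time:
# f/2 at 1/1000 s gathers four times the light of f/4 at 1/1000 s (two stops).
print(round(exposure_value(2.0, 1 / 1000), 1))                        # 12.0
print(relative_light(2.0, 1 / 1000) / relative_light(4.0, 1 / 1000))  # 4.0
```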
- a single focal length is provided.
- the lens may be of manual or auto focus (AF).
- the lens assembly 255 provides a structure for containing and positioning one or more camera lenses.
- the lens assembly 255 may provide a focus control function wherein the lens position is adjusted based on feedback from ISP 235 or a user of the camera.
- the lens assembly 255 may include an actuator or step motor for adjusting the lens position.
- the lens assembly 255 may be functionally and/or physically coupled to an image sensor 250 .
- the image sensor 250 may include a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor.
- the image sensor 250 includes a sensor array of light sensitive pixels or sensors. Each pixel in the array can include at least one photosensitive element for outputting a signal having a magnitude proportional to the intensity of incident light or radiation contacting the photosensitive element. When exposed to incident light reflected or emitted from a scene, each pixel in the array outputs at least one signal having a magnitude corresponding to an intensity of light at one point in the scene.
- the signals output from each photosensitive element may be processed to form an image representing the captured scene. Filters for use with image sensors include materials configured to block out certain wavelengths of radiation.
- a photo sensor may be designed to detect first, second, and third colors (e.g., red, green and blue wavelengths).
- each pixel in the array of pixels may be covered with a single color filter (e.g., a red, green or blue filter) or with a plurality of color filters.
- the color filters may be arranged into a pattern to form a color filter array over the array of pixels such that each individual filter in the color filter array is aligned with one individual pixel in the array. Accordingly, each pixel in the array may detect the color of light corresponding to the filter(s) aligned with it.
- FIG. 2 further illustrates the ISP 235 integrated with the camera 115 and the image sensor 250 , the lens assembly 255 , and the external memory 245 .
- the ISP 235 may be an element of the camera 115 , or may be an element associated with an independent system of which the camera 115 is integrated (e.g., the perception system 120 ).
- the image sensor 250 may function to measure the light intensity provided by a scene and convert that light into an electronic signal made up of the image statistics for each frame.
- the image statistics, or raw image data, provided by the image sensor 250 may supply the ISP 235 with the data necessary to process captured image frames.
- the ISP 235 may control the lens assembly 255 which can adjust the location of the lens in order to focus the scene.
- Scene focusing can be based on the image sensor 250 image statistics alone or in conjunction with an autofocus algorithm.
- distance and directional movement of the lens assembly 255 may be based on direction provided by the autofocus algorithm which may include a contrast detection autofocus.
- the contrast detection autofocus can make use of the image statistics by mapping them to a value that represents a lens position or, alternatively, may position the lens in non-discrete, ad-hoc positions.
- the ISP 235 may be coupled to the lens actuator and may adjust the lens based on calculations made with the image information from the at least one image sensor.
- the ISP 235 may control the image sensor 250 exposure period.
- the ISP 235 may adjust the exposure period of the image sensor 250 based in part on the size of the aperture and the brightness of the scene.
- the ISP 235 may also adjust the exposure period on a per-pixel or per-sensor basis, using data provided by the perception system 120. For example, the processor may allow certain sensors or regions of sensors to collect light for a longer or shorter period of time than other sensors in the image sensor 250.
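- As a minimal sketch of the per-sensor exposure control described above, the following Python example keeps a per-region exposure-time map that an ISP-style controller could apply; the class and method names (ExposureMap, set_region) and the default values are illustrative assumptions, not part of the patent.

```python
import numpy as np

class ExposureMap:
    """Per-sensor exposure times (in microseconds) for a rows x cols sensor array."""

    def __init__(self, rows: int, cols: int, default_us: float = 100.0):
        self.times_us = np.full((rows, cols), default_us, dtype=np.float32)

    def set_region(self, top: int, left: int, height: int, width: int, exposure_us: float):
        """Give one rectangular region of sensors a longer (or shorter) exposure
        time than the rest of the array."""
        self.times_us[top:top + height, left:left + width] = exposure_us

# Example: lengthen the exposure of a region believed to contain a pulsed LED source.
emap = ExposureMap(rows=1080, cols=1920, default_us=100.0)
emap.set_region(top=400, left=900, height=120, width=200, exposure_us=10_000.0)
```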
- the ISP 235 may include an on-chip memory 240 integrated with the processor hardware and directly accessible by the ISP 235.
- the memory 240 may be a random access memory (RAM) chip, a read-only memory, or a flash memory, and may contain instructions for the ISP 235 to interface with the image sensor 250 , the lens assembly 255 , the external memory 245 , and the perception system 120 .
- the external memory 245 may also store information regarding the type of processor, auto focus algorithms, and store captured images.
- the external memory 245 may be a fixed piece of hardware such as a random access memory (RAM) chip, a read-only memory, and a flash memory.
- the external memory 245 may include a removable memory device, for example, a memory card and a USB drive.
- the camera 115 may be integrated with the perception system 120 .
- the perception system 120 may include a plurality of functional components.
- the functional components of the perception system 120 may include an object detection 220 module, a feature extractor 215 module, a perception module 225 , and an object database 230 .
- the feature extractor 215 , object detection 220 , and the perception module 225 may all be executed on a single processor, or may be executed by individual processors functionally and/or physically integrated together.
- the object database 230 may be a memory including a RAM chip, a read-only memory, and a flash memory. In another embodiment, the object database 230 may include a removable memory device, for example, a memory card and a USB drive.
- the camera 115 may be integrated with the first vehicle 105 and configured to capture images of a scene outside of the first vehicle 105 .
- Raw image data captured by the image sensor 250 may be processed by the ISP 235 .
- the raw image data may be communicated to the feature extractor 215 of the perception system 120 .
- the ISP 235 may execute an image combine function to combine a number of captured images into one HDR image 435 , then communicate the HDR image 435 to the feature extractor 215 .
- the raw image data of the number of captured images is combined to form a single HDR image 435 using sequential exposure change, or other techniques such as interpolation.
- the camera 115 may sequentially capture multiple images of the same scene using different exposure times.
- the exposure for each image may be controlled by either varying the f-number of the lens assembly 255 or the exposure time of the image sensor 250 .
- a high exposure image will be saturated in the bright regions of the captured scene, but the image will capture dark regions as well.
- a low exposure image will have less saturation in bright regions but may end up being too dark and noisy in the dark areas.
- FIG. 3A illustrates a functional representation 300 of the image sensor 250 capturing a scene at three different exposure times (T1, T2, T3) to generate the HDR image 435.
- Exposure time may also be referred to as “exposure condition” herein, although exposure conditions may also include other conditions such as lens position, aperture settings, and other camera and other hardware parameters that may affect the exposure conditions of the image sensor.
- Each row of sensors 301a-301t is illustrated offset from the previous row to indicate a time delta caused by the sequential manner in which each row of the image sensor 250 is read out. This sequential read out often causes a “rolling shutter” effect that distorts objects and creates artifacts in the images.
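- The rolling-shutter skew can be made concrete with a short calculation; the sketch below assumes each row starts its exposure one readout period after the previous row, and the readout and exposure values are illustrative assumptions rather than the patent's timing.

```python
def row_exposure_windows_us(num_rows: int, row_readout_us: float, exposure_us: float):
    """Exposure (start, end) for each row, in microseconds; each row begins one
    readout period after the previous one, producing the rolling-shutter skew."""
    return [(r * row_readout_us, r * row_readout_us + exposure_us)
            for r in range(num_rows)]

# With 30 us per row, the last of 1080 rows starts ~32.4 ms after the first, so
# rapidly moving or flickering features can appear distorted or partially captured.
windows = row_exposure_windows_us(1080, row_readout_us=30.0, exposure_us=100.0)
print(windows[0], windows[-1])
```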
- the human eye does not detect the intervals at which an LED light is pulsed on and off, but a camera with a fast exposure time may miss the LED pulse.
- the problem becomes pronounced in brightly illuminated scenes, where a short (T2) or very short (T3) exposure time is used by the HDR camera to construct the combined scene.
- exposure time T1 may be used by certain regions of the image sensor 250 in order to completely capture the LED pulse.
- the exposure condition can be determined by a processor based on the amount of light in the scene, and/or based on a pre-configured set of parameters, including exposure time.
- FIG. 3B illustrates an example LED light source timing sequence 305 and an image sensor 250 exposure time 310 .
- the ON state 315 of the LED pulse is represented by four square waves, or clock cycles.
- there may be a delay between sequential ON states; for example, the delay may be on the order of milliseconds (e.g., 10 ms).
- the exposure time 310 includes two periods of image sensor 250 exposure (325, 330). The duration of each period of exposure time may depend on the light available in the scene. For example, the exposure time may be 100 μs or less for a brightly lit scene, or a scene with brightly lit regions.
- the first exposure time 325 captures a piece of an LED pulse.
- the image may appear distorted, or the light source may appear dim and partially lit.
- the second exposure time 330 falls in between two LED ON states. In such a situation, the light source appears off. As such, the duration of the exposure of the image sensor 250 may preclude capturing an image while the LED light source is in an ON state.
- the method illustrated in FIG. 3A may be used to consecutively capture a number of images over a period that will ensure at least one image of the number of images will capture the LED light source in an ON state.
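- The sketch below illustrates, under the assumption of a strictly periodic LED with a known period and ON duration, why a very short exposure can fall entirely in the OFF gap while a sufficiently long exposure cannot; the function and its parameter values are illustrative, not taken from the patent.

```python
def exposure_sees_led(exposure_start_ms: float, exposure_ms: float,
                      period_ms: float, on_ms: float) -> bool:
    """True if the exposure window overlaps any ON interval of a periodic LED whose
    ON phase occupies [k*period, k*period + on_ms) for integer k."""
    start = exposure_start_ms % period_ms
    end = start + exposure_ms
    return start < on_ms or end > period_ms

# LED with a 10 ms period and a 4 ms ON time: a 0.1 ms exposure starting in the
# OFF gap misses the pulse, while an exposure longer than the 6 ms OFF gap cannot.
print(exposure_sees_led(5.0, 0.1, 10.0, 4.0))   # False -> light source appears off
print(exposure_sees_led(5.0, 6.5, 10.0, 4.0))   # True  -> ON state is captured
```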
- FIG. 4A illustrates an example method for blending a plurality of images 400 taken at different exposure settings to generate the HDR image 435 .
- the ISP 235 may receive a number of images ( 405 ), the number of images captured with different exposure settings. The ISP 235 may then generate one or more weight values for image blending for each image of the plurality of images ( 410 ). For example, a weight value accorded to each image may affect its gradation level, or the degree of impact the image has in the HDR image 435 when combined with other images.
- the ISP 235 may adjust the weights α and β to generate the HDR image 435 according to a perception measure (e.g., brightness, edges of a degree of saliency, smoothness, length, etc.).
- the ISP 235 may generate ( 415 ) a blended image based on the weights calculated for each image being blended.
- For example, the blended image may be formed as a weighted combination of the short- and long-exposure data (e.g., of the form α·SEI+β·LEI, where SEI denotes data of a short-exposure image 425 and LEI denotes data of a long-exposure image 430). α and β may represent weights which are applied to blend images, and by differentiating the weights α and β, a range of representable gradation levels may be adjusted.
- a merging weight value may be computed for each pixel location of each of the plurality of images captured. The weight value of each pixel in an image of the plurality of images may be determined based on the weight computed for the other images in the plurality of images, or the weight value can be determined by applying pre-determined weight values stored in the on-chip memory 240 or external memory 245 .
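- A minimal NumPy sketch of such a per-pixel weighted blend is shown below; the specific weighting function (down-weighting the long exposure near saturation so that α+β equals 1 at every pixel) is an illustrative assumption rather than the patent's formula.

```python
import numpy as np

def blend_hdr(short_img: np.ndarray, long_img: np.ndarray) -> np.ndarray:
    """Blend short- and long-exposure frames (uint8, same shape) with per-pixel
    weights alpha (short) and beta (long), where alpha + beta == 1 per pixel."""
    sei = short_img.astype(np.float32)
    lei = long_img.astype(np.float32)
    # Trust the long exposure less as it approaches saturation at 255.
    beta = np.clip((240.0 - lei) / 240.0, 0.0, 1.0)
    alpha = 1.0 - beta
    blended = alpha * sei + beta * lei
    return np.clip(blended, 0, 255).astype(np.uint8)
```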
- the HDR imaging is performed by the ISP 235 as illustrated in FIGS. 2 and 3B .
- the ISP 235 may be configured to receive image data comprising image data from a set of image frames.
- a set of image frames may correspond to a plurality of images captured of the same scene but having different exposure conditions.
- the ISP 235 may be implemented in hardware as a single-path system with limited memory.
- image data from the set of image frames may be received as a bitstream and processed by the system as it is received.
- the system will only have access to a small portion of each image frame of the set of image frames at any given time, without the ability for the system to refer to other portions of the image frames.
- the ISP 235 analyzes the received image data to produce the HDR image 435 .
- For portions of the image frame with high luminosity (e.g., due to direct sunlight), pixels from an image frame having a lower exposure time may be used, because image frames having higher exposure times may be saturated in those portions of the image frame. For portions of the image frame with low luminosity (e.g., due to being in shadow), pixels from an image frame having a higher exposure time may be used.
- appropriate image data from the set of image frames is selected, such that the HDR image 435 is able to capture both high and low luminosity ranges while avoiding saturation (e.g., due to high exposure times) and unnecessary noise (e.g., due to low exposure time).
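- The selection rule above can be sketched as a single-pass, row-at-a-time combination, reflecting the limited-memory, single-path constraint mentioned earlier; the saturation threshold and function names are assumptions for illustration only.

```python
import numpy as np

def combine_rows(short_row: np.ndarray, long_row: np.ndarray,
                 sat_level: int = 250) -> np.ndarray:
    """Per-row selection: keep long-exposure pixels (better in shadow), but swap in
    short-exposure pixels wherever the long exposure has saturated."""
    out = long_row.copy()
    saturated = long_row >= sat_level
    out[saturated] = short_row[saturated]
    return out

def combine_stream(short_frame: np.ndarray, long_frame: np.ndarray) -> np.ndarray:
    """Emulates a single-path ISP that only holds one row of each frame at a time."""
    return np.vstack([combine_rows(s, l) for s, l in zip(short_frame, long_frame)])
```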
- FIG. 4B illustrates an exemplary method of generating a single image from multiple images taken at different levels of exposure.
- the camera 115 may capture a number of images of a scene, and the ISP 235 may combine the images to create a single HDR image 435 .
- three images are captured at differing levels of exposure: a long-exposure image 430 (T1), a short-exposure image 425 (T2), and a very short-exposure image 420 (T3).
- the ISP 235 receives the three images and blends the images to generate the HDR image 435 .
- the images may be combined based on a ratio of image parameters in each of the images as compared to a reference image.
- the reference image may be the short-exposure image 425 (T2).
- the image parameters may include characteristics of each image, including but not limited to, contrast, saturation, color, brightness, hue, and chrominance and luminance components.
- FIG. 4B illustrates an example and should not be used to limit the disclosed techniques to combining only three images.
- the ISP 235 may receive and blend any number of images.
- the feature extractor 215 of the perception system 120 may blend the number of images and generate the HDR image 435 instead of the ISP 235 .
- the ISP 235 may include an auto-focus algorithm, and may control the lens position using the lens assembly 255 according to the auto-focus algorithm.
- contrast detection algorithms evaluate the image statistics received from the image sensor 250 at a number of lens positions, and determine if there is more or less contrast at each position relative to the other positions. If contrast has increased, the lens is moved in that direction until contrast is maximized. If contrast is decreased, the lens is moved in the opposite direction. This movement of the lens is repeated until contrast is maximized.
- the lens assembly may be activated to focus the lens on a particular scene by employing algorithms of at least one of three specific types of contrast detection: (1) exhaustive autofocus, (2) slope predictive autofocus, and (3) continuous autofocus.
- Contrast detection autofocus makes use of a focus feature that maps an image to a value that represents the degree of focus of the image, and iteratively moves the lens searching for an image with the maximal focus according to the contrast detection algorithm.
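- A simplified hill-climbing version of contrast-detection autofocus is sketched below; the gradient-variance focus measure, the capture_at callback, and the lens-position range are all illustrative assumptions rather than the camera's actual algorithm.

```python
import numpy as np

def contrast_metric(image: np.ndarray) -> float:
    """Focus measure: variance of the intensity gradients (higher means sharper)."""
    gy, gx = np.gradient(image.astype(np.float32))
    return float(np.var(gx) + np.var(gy))

def hill_climb_focus(capture_at, lens_min: int = 0, lens_max: int = 1000, step: int = 50) -> int:
    """Move the lens in the direction that increases contrast until the focus
    measure stops improving; capture_at(pos) returns an image at lens position pos."""
    pos = (lens_min + lens_max) // 2
    best = contrast_metric(capture_at(pos))
    direction = step
    while lens_min <= pos + direction <= lens_max:
        candidate = contrast_metric(capture_at(pos + direction))
        if candidate > best:
            pos, best = pos + direction, candidate
        elif direction == step:
            direction = -step       # no gain: try the opposite direction once
        else:
            break                   # no gain either way: contrast is maximized
    return pos
```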
- the ISP 235 may determine which contrast detection autofocus algorithm may be the most appropriate for a given scene or application, and select it to be used.
- the ISP 235 may determine the appropriate algorithm based on image sensor 250 information, for example, the type of image sensor, the number of light-sensitive surfaces on each sensor, etc.
- the ISP 235 may actuate the lens using the lens assembly to adjust the position of the lens using a digital lookup table with a range of lens positions that correspond to a calculated disparity value.
- the lookup table may be stored in the memory 240 .
- the camera may adjust position of the lens using one or more contrast detection auto focus algorithms.
- FIG. 5 is a block diagram illustrating an example of the perception system 120 that may implement techniques in accordance with aspects described in this disclosure.
- the perception system 120 may be configured to perform some or all of the techniques of this disclosure.
- the techniques described in this disclosure may be shared among the various components of the perception system 120 and the camera 115 .
- a processor (not shown) may be configured to perform some or all of the techniques described in this disclosure.
- the processor may be the ISP included with the image sensor array, but can also be an application processor or other external general-purpose processor.
- the feature extractor 215 includes a plurality of functional components.
- the processes and filters that are executed on the image data by the feature extractor 215 can enable the object detection 220 system to accurately and effectively derive a number of edges and edge-related information from received image data.
- Each edge may represent a boundary of an object with respect to a scene surrounding the object.
- the functional components of the feature extractor 215 include a filter bank 505 and an edge detector 510 .
- the feature extractor may include more, fewer, or different functional components.
- the filter bank 505 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks.
- the filter bank 505 may include a stored set of filters for smoothing the image or filtering the image for noise.
- the filter bank 505 may include an implementation of a Gaussian filter and a Sobel filter. It is contemplated that other smoothing, blurring, or shading filters can be used by the feature extractor 215 .
- the HDR image 435 data is first applied to the filter bank 505 to create a blurred image. Such blurring reduces image noise and reduces details in the raw image data.
- applying the HDR image 435 to the filter bank 505 has the effect of reducing the detection of weak or isolated edges.
- the feature extractor 215 may be utilized to detect and identify edges in the long-exposure image 430, the short-exposure image 425, and the very short-exposure image 420 before combining the images.
- the edge detector 510 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks.
- the feature extractor 215 uses an edge detector 510 to perform an edge detection algorithm on the blurred image to detect edges in the image.
- the edge detector may perform the edge detection algorithm on the HDR image 435 prior to, or without, passing the HDR image 435 through a filtering system.
- an implementation of a Canny edge detection algorithm may be used to detect and single out prominent edges in the image. It is contemplated that other edge detection algorithms can be used by the feature extractor 215 .
- a Canny-Deriche detector algorithm and a differential edge detector algorithm can be used.
- the edge detector 510 may generate data that includes edge data and edge location in the HDR image 435 .
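- As one concrete (and assumed) realization of the smoothing-then-edge-detection pipeline described above, the following OpenCV sketch applies a Gaussian blur followed by Canny edge detection; the kernel size and thresholds are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def extract_edges(hdr_image: np.ndarray) -> np.ndarray:
    """Smooth the HDR image to suppress noise and weak, isolated edges, then run
    Canny edge detection to keep the prominent object boundaries."""
    gray = cv2.cvtColor(hdr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    edges = cv2.Canny(blurred, 50, 150)
    return edges  # binary map: 255 on edge pixels, 0 elsewhere
```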
- the object detection 220 module includes a plurality of functional components including an object matching 515 module, an object database 230 , and an optional filter bank 520 .
- the object detection 220 module may include more, fewer, or different functional components.
- the object matching 515 module may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks.
- the object matching 515 algorithm may include an algorithm for matching an object shape in the object database 230 with the edges in the HDR image 435 determined by the feature extractor 215 received by the object detection 220 module.
- a scene captured by the camera 115 may include the second vehicle 110 .
- the object database 230 may include a plurality of shapes, any number of which may be substantially similar to the calculated edges of the second vehicle 110 .
- An object matching algorithm may determine a shape in the object database 230 that most closely resembles the calculated edges of the second vehicle 110 .
- the object database may include attributes associated with each shape such that each shape can be identified.
- a vehicle such as a car or truck can be identified based on its shape, and be distinguished from a traffic light or a construction sign, both of which are also identified based on their shape.
- Shape identification can alter the way that the images are generated.
- the shape of a car or truck can indicate a moving object that may remain in the scene for longer than a stationary object such as a traffic light or street sign.
- identification of the object can trigger a calculation of gradients of movement of the object based on the identity of the object.
- Identification of the object may also include identifying the presence of an object without determining a specific object.
- the object matching 515 algorithm detects shapes in the image created by the edges based on one or more criteria including the perception measure, length of an edge or its associated curves, a number of overlapped edges, location in the image, depth information, camera location information, or other available information.
- the camera 115 may be calibrated to capture a center of a lane in the middle of an image frame, and the shoulders of the lane in the left and right periphery of the image frame.
- the object matching 515 algorithm may detect shapes based on expected shapes in those areas of the image frame.
- the object matching 515 algorithm detects shapes in the image based on motion detection of the object. For example, the motion of an object may be detected by obtaining a plurality of images of an object over a period of time, identifying the object, and calculating gradients of movement of the object based on the plurality of images.
- the object matching 515 algorithm may generate a first bounding box 605 around each of the objects in the image where the first bounding box 605 has a height (h) and width (w) measured in pixels.
- Each bounding box may represent a region of sensors in the sensor array.
- Each bounding box may be generated based on the geometry of the objects, other types of image descriptors (e.g., SIFT, BRISK, FREAK, etc.), or other parameters.
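- A minimal OpenCV sketch of deriving pixel-space bounding boxes from an edge map is shown below; grouping edges into contours and filtering by a minimum area are illustrative assumptions rather than the patent's exact procedure.

```python
import cv2

def bounding_boxes_from_edges(edge_map, min_area: int = 400):
    """Group edge pixels into contours and return (x, y, w, h) boxes, in pixels,
    around shapes large enough to plausibly be objects of interest."""
    contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```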
- the object detection 220 module may provide any number of bounding boxes to the camera 115 .
- the ISP 235 may communicate a command to the image sensor 250 to modulate the exposure settings of the sensors of the bounding box and the sensors within the bounding box. For example, the ISP 235 may direct the image sensor to increase the exposure time of the sensors within the bounding box to a time greater than the exposure time of other sensors in the image sensor 250 .
- the object detection 220 module may include an optional filter bank 520. Once the edge data is received from the feature extractor 215, the optional filter bank 520 may be used to perform the steps of reducing false positives and verifying the object match.
- the perception module 225 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks.
- the perception module 225 may determine areas of the detected objects that may include an LED light. For example, a traffic light and a vehicle with taillights may be captured in an image frame, and the object detection module may detect the object of the traffic light and the vehicle based on the edge detector 510 data.
- the perception module 225 may determine areas of the detected objects that may include LED lights.
- the perception module may generate a second bounding box 610 around each of the areas that may include an LED light in the image where the second bounding box 610 has a height (h) and width (w) measured in pixels.
- Each bounding box may be generated based on the geometry of the objects, other types of image descriptors (e.g., SIFT, BRISK, FREAK, etc.), or other parameters.
- the perception module 225 may provide any number of bounding boxes to the camera 115 .
- the ISP 235 may communicate a command to the image sensor 250 to modulate the exposure settings of the sensors of each bounding box and the sensors within each bounding box.
- the ISP 235 may direct the image sensor to increase the exposure time of the sensors within the bounding box to a time greater than the exposure time of other sensors in the image sensor 250 . This allows the image sensor 250 to capture image frames that include the light created by the LED, and avoid capturing image frames during the off phase of the LED.
- the perception system 120 may generate a clock cycle that cycles from an ON state to an OFF state in sync with the LED pulse of an outside light source captured by the camera 115 .
- the perception system 120 may determine the ON/OFF state cycles based on the images captured of the light source.
- the perception system 120 may provide the clock cycle to the ISP 235 so that the ISP 235 may expose the image sensor 250 during the ON state of the outside light source.
- the ISP may expose one or more regions of the image sensor 250 at the rate of the clock cycle.
- the camera 115 captures images in sync with the outside light source so that the images are not captured during an OFF cycle of the LED pulse of the outside light.
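- Assuming the perception system has already estimated the LED period and the timestamp of one observed ON edge, the following sketch schedules the next exposure start so it falls just inside an ON interval; the margin parameter and the scheduling rule are illustrative assumptions.

```python
import math

def next_exposure_start_ms(now_ms: float, on_edge_ms: float, period_ms: float,
                           margin_ms: float = 0.5) -> float:
    """Earliest time at or after now_ms that sits just inside an LED ON interval,
    given one observed ON edge (on_edge_ms) and the estimated pulse period."""
    cycles_ahead = max(0, math.ceil((now_ms - on_edge_ms) / period_ms))
    return on_edge_ms + cycles_ahead * period_ms + margin_ms

# With an ON edge observed at 3 ms and a 10 ms period, an exposure requested at
# 25 ms would be deferred to 33.5 ms, inside the next ON interval.
print(next_exposure_start_ms(25.0, 3.0, 10.0))  # 33.5
```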
- FIG. 6 illustrates an example of bounding box operations 600 .
- the second vehicle 110 is captured in an image frame.
- the vehicle is detected through edge detection and object detection algorithms, and a first bounding box 605 is generated around the detected object.
- LED light sources are determined by the perception module 225 and a second bounding box 610 is generated around the LED light sources.
- This embodiment offers the benefit of two sources of bounding boxes for increasing sensor exposure time in the image sensor 250.
- If the perception module 225 cannot determine an LED source, the first bounding box 605 may be used to determine sensor exposure time. It is contemplated that other embodiments may include only one source of bounding box from either the object detection 220 module or the perception module 225.
- FIG. 7 is a flow chart that illustrates an example method 700 of detecting LED light source regions in a captured image.
- the method 700 captures multiple images of a scene using the camera 115 .
- the camera 115 may include an image sensor 250 .
- the multiple images are each captured consecutively at differing exposure times.
- the method 700 generates the HDR image 435 by combining, or blending, the multiple images.
- each image of the multiple images may be accorded a weight value, where the weight value determines the effect each image has on the blended HDR image 435.
- the method 700 executes an edge detection algorithm on the HDR image 435 in order to calculate and define the edges.
- the blended image is well suited for edge detection because blending the multiple images into the HDR image 435 gives the edges greater definition.
- the feature extractor 215 may apply the edge detection algorithm to one or more of the multiple images prior to generating the blended image. In this configuration, the edges can carry over to the blended image.
- the method 700 identifies an object in the image.
- a number of objects may be detected in the image using the detected edges.
- the objects detected are defined by an object database, and are matched using the object matching algorithm.
- the object matching algorithm may identify objects based on the edges in the image and the objects stored in the object database 230 by comparing the shapes formed by the edges.
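- One possible (assumed) stand-in for this matching step is OpenCV's Hu-moment shape comparison, sketched below with a hypothetical object_database mapping shape names to template contours; it is not the patent's own matching algorithm.

```python
import cv2

def best_match(edge_contour, object_database):
    """Compare a contour extracted from the HDR image against each template contour
    in object_database (a dict of name -> contour) and return the closest match;
    lower matchShapes scores indicate more similar shapes."""
    best_name, best_score = None, float("inf")
    for name, template in object_database.items():
        score = cv2.matchShapes(edge_contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score
```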
- the method 700 determines a region of the identified object that contains a light source.
- the method 700 may use an object database to determine regions of the detected objects that may contain a light source.
- the object matching algorithm may identify objects based on the edges in the image, and the identified objects may contain characteristics.
- the characteristics may include regions of the identified objects that may contain light sources.
- the HDR image 435 may contain edges that form a rectangle that includes three circular shapes that are vertically aligned, similar to a traffic light.
- the object database 230 may include the shape, as well as associated characteristics that include light sources at each of the three circular shapes.
- the perception module 225 may increase the exposure time of regions of sensors in the image sensor 250 that correspond to the regions of the three circular shapes that have the light source characteristic. In another implementation, the perception module 225 may increase the exposure time of regions of sensors in the image sensor 250 that correspond to one or more bounding boxes containing the regions of the three circular shapes that have the light source characteristic.
- the method 700 generates a bounding box around one or more regions that may contain a light source.
- the bounding box may correspond to a region of pixels or individual sensors on the image sensor 250 .
- the bounding box may be dynamically resized and/or reshaped with subsequent HDR images that reveal the location of one or more light sources. For example, if the HDR image 435 is captured while the LED light source is in an ON state 315 , the bounding box may be resized or reshaped to fit the actual size of the light source. In this embodiment, the bounding box regions are fitted to be more accurate, so as not to include unnecessarily large regions of the image sensor 250 .
- the method 700 communicates the bounding box data to the camera 115 .
- the bounding box data identifies regions of pixels or individual sensors on the image sensor 250 that will have a different exposure time than the rest of the sensors.
- the ISP 235 may receive the bounding box data, and increase the exposure time of the regions of sensors in the image sensor 250 that are contained by the one or more bounding boxes.
- the method 700 updates the exposure time of the sensors in regions of the image sensor 250 according to the bounding box data.
- the perception module 225 may generate bounding boxes in regions that have fewer edges than other regions of the HDR image 435, or regions where edges commonly terminate. These regions may indicate regions of the HDR image 435 that are over- or under-exposed.
- the perception module 225 may send bounding box data to the ISP 235, where the bounding box data contains the regions that have fewer edges than other regions of the HDR image 435, or regions where edges commonly terminate.
- the ISP 235 may increase or decrease the exposure time of the individual sensors of the image sensor 250 that are contained within the one or more bounding boxes.
- One or more of the components, steps, features and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from novel features disclosed herein.
- the apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described in the figures.
- the novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.
- the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
- a process is terminated when its operations are completed.
- a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
- When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
- determining encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
- pixel or “sensor” may include multiple photosensitive elements, for example a photogate, photoconductor, or other photodetector, overlying a substrate for accumulating photo-generated charge in an underlying portion of the substrate.
- storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine-readable mediums, processor-readable mediums, and/or computer-readable mediums for storing information.
- ROM read-only memory
- RAM random access memory
- magnetic disk storage mediums magnetic disk storage mediums
- optical storage mediums flash memory devices and/or other machine-readable mediums
- processor-readable mediums and/or computer-readable mediums for storing information.
- the terms “machine-readable medium”, “computer-readable medium”, and/or “processor-readable medium” may include, but are not limited to non-transitory mediums such as portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
- various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a “machine-readable medium,” “computer-readable medium,” and/or “processor-readable medium” and executed by one or more processors, machines and/or devices.
- embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof.
- the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s).
- a processor may perform the necessary tasks.
- a code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
- a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
- DSP digital signal processor
- ASIC application specific integrated circuit
- FPGA field programmable gate array
- a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
Abstract
Description
- This disclosure relates to object detection in images and, more particularly, selectively controlling the exposure time of individual sensors based on the location of the object in the image.
- High dynamic range (HDR) imaging is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. HDR photographs are generally achieved by capturing multiple standard exposure images with different exposure times and merging the captured images to form a single HDR image. Digital images are often encoded in a camera's raw image format because standard image encodings do not offer a great enough range of values to allow fine transitions, and because lossy compression introduces undesirable effects. In most imaging devices, the degree of exposure to light applied to the image sensor can be altered by increasing/decreasing the time of each exposure. The final image is constructed by combining the multiple frames captured at different exposures, wherein different parts of the final image include different combinations of different exposure frames.
- The purpose is to present a range of luminance similar to that experienced through the human eye, so that all aspects of an image are clear despite the image having regions of broad luminance value disparity. HDR imaging is a critical requirement for several scientific applications where a scene may contain bright, direct sunlight and extreme shade. For example, cameras, or digital imagers, used on automotive applications may be subject to scenes that can include regions of both significant brightness (e.g., sun, oncoming headlights) and darkness (e.g., under a bridge, parking garage). However, the typical image sensor is activated by rows of sensors (or pixels) in an asynchronous manner. In other words, the rows of the image sensor are activated in succession, which may result in a “rolling shutter” effect in a captured image. The rolling shutter effect may distort features of a scene that are rapidly moving or changing, causing the features to look distorted, partially captured, or not captured at all.
- Light emitting diodes (LEDs) are ubiquitous in driving scenarios, and account for the light sources found in traffic lights, traffic sign boards, and tail and head lights in automobiles. LED light sources are typically pulsed at a high frequency, where the pulse width controls the brightness. For example, an LED may pulse to an ON state every 10 ms and return to an OFF state for the remainder of each 10 ms window. As a result, an HDR camera with a short exposure time may capture images of a car or a traffic light, but the images may be captured at a time when the LED light source is in an OFF state. In such a case, an advanced driving assistance system (ADAS) or a self-driving car system may not recognize that the vehicle in front of it is braking, or that a traffic light has turned red.
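- As a rough illustration of the timing mismatch described above, the short sketch below estimates how often a single short exposure would catch a pulsed LED in its ON state. Only the 10 ms repetition period is taken from the example above; the ON width and the 100 μs exposure are assumed values for illustration.

```python
# Back-of-the-envelope sketch: chance that one short exposure overlaps an LED ON pulse.
# Only the 10 ms period comes from the example above; the other numbers are assumed.
led_period_s = 10e-3    # LED pulses to an ON state every 10 ms
led_on_s = 1e-3         # assumed ON width (10% duty cycle)
exposure_s = 100e-6     # assumed short exposure for a brightly lit scene

# An exposure window starting at a uniformly random phase overlaps the ON pulse
# only if it begins within (led_on_s + exposure_s) of a pulse start.
p_capture = min(1.0, (led_on_s + exposure_s) / led_period_s)
print(f"chance one exposure sees the LED lit: {p_capture:.0%}")  # about 11%
```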
- A summary of sample aspects of the disclosure follows. For convenience, one or more aspects of the disclosure may be referred to herein simply as “some aspects.”
- Methods and apparatuses or devices being disclosed herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, for example, as expressed by the claims which follow, its more prominent features will now be discussed briefly.
- One innovation includes an apparatus for regionally controlling exposure time. The apparatus may include a digital imager. The digital imager may include a sensor array comprising a plurality of sensors, each sensor configured to generate a signal responsive to an amount of radiation incident on the sensor, the sensor array further configured to generate a plurality of images, wherein each of the plurality of images is generated under different exposure conditions, and an image signal processor configured to control exposure conditions for each sensor of the plurality of sensors. The apparatus may also include a processor coupled to the digital imager and configured to determine one or more weight values for each image in the plurality of images, combine the plurality of images into a single image based on a ratio of the one or more weight values for each image, determine a number of edges in the single image, the number of edges representing boundaries of objects, identify an object in the single image using the number of edges and an object database, determine a region of sensors of the plurality of sensors corresponding to the identified object, and transmit a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- For some embodiments, the processor is further configured to generate a bounding box based on the identified object in the single image, and wherein the bounding box comprises at least a portion of the identified object. For some embodiments, the bounding box corresponds to the region of sensors of the plurality of sensors.
- Another innovation is a method for regionally controlling exposure time on an image sensor, including generating a plurality of images, via the image sensor, the image sensor comprising a plurality of sensors, wherein each sensor is configured to generate a signal responsive to an amount of radiation incident on the sensor, and wherein each of the plurality of images is generated under different exposure conditions, controlling exposure conditions, via an image signal processor, for each sensor of the plurality of sensors, computing, via a processor, one or more weight values for each image in the plurality of images, combining, via the processor, the plurality of images into a single image based on a ratio of the one or more weight values for each image, determining, via the processor, a number of edges in the single image, the number of edges representing boundaries of objects, identifying, via the processor, an object in the single image using the number of edges and an object database, determining, via the processor, a region of sensors of the plurality of sensors corresponding to the identified object, and transmitting, via the processor, a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- Another innovation is an apparatus for regionally controlling exposure time, that includes a means for generating a plurality of images, wherein each of the plurality of images is generated under different exposure conditions, a means for controlling exposure conditions of the means for generating, a means for computing one or more weight values for each image in the plurality of images, a means for combining the plurality of images into a single image based on a ratio of the one or more weight values for each image, a means for determining a number of edges in the single image, the number of edges representing boundaries of objects, a means for identifying an object in the single image using the number of edges and an object database, a means for determining a region of sensors of the plurality of sensors corresponding to the identified object, and a means for transmitting a first message to the means for controlling exposure conditions, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- For some embodiments, the means for generating is an image sensor, the means for controlling is an image signal processor, the means for computing is a processor, the means for combining is the processor, the means for determining is the processor, the means for identifying is the processor, and the means for transmitting is the processor.
- Another innovation is a non-transitory, computer-readable medium comprising instructions executable by a processor of an apparatus that cause the apparatus to generate a plurality of images, via a sensor array comprising a plurality of sensors, wherein each sensor is configured to generate a signal responsive to an amount of radiation incident on the sensor, and wherein each of the plurality of images is generated under different exposure conditions, control, via an image signal processor, exposure conditions for each sensor of the plurality of sensors, determine one or more weight values for each image in the plurality of images, combine the plurality of images into a single image based on a ratio of the one or more weight values for each image, determine a number of edges in the single image, the number of edges representing boundaries of objects, identify an object in the single image using the number of edges and an object database, determine a region of sensors of the plurality of sensors corresponding to the identified object, and transmit a first message to the image signal processor, the first message comprising data for adjusting an exposure condition of one or more sensors within the region of sensors.
- Various features, aspects, and advantages will become apparent from the description herein and drawings appended hereto. As a person of ordinary skill in the art will understand, aspects described or illustrated for an embodiment may be included in one or more other described or illustrated embodiments, if not impractical for the implementation or function of such an embodiment, unless otherwise stated.
FIG. 1 illustrates an example implementation of a camera system for identifying light sources. -
FIG. 2 is a block diagram illustrating an example of a camera system integrated with a perception system. -
FIG. 3A illustrates a typical image sensor activated by rows of pixels, where each row is read out in a sequential manner. -
FIG. 3B illustrates an example pulse sequence of an LED light source compared to a digital imager exposure time under a bright light. -
FIG. 4A is a flowchart that illustrates the steps for generating a single HDR image. -
FIG. 4B illustrates an example implementation of an HDR image blender for combining multiple images to generate a single image. -
FIG. 5 is a block diagram illustrating an example implementation of the perception system and the camera system. -
FIG. 6 illustrates varying degrees of generated bounding boxes at different stages of object detection. -
FIG. 7 is a flowchart that illustrates the steps for implementing a camera system and perception system for identification of light sources. - The following detailed description is directed to certain specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways. It should be apparent that the aspects herein may be embodied in a wide variety of forms and that any specific structure, function, or both being disclosed herein is merely representative. Based on the teachings herein one skilled in the art should appreciate that an aspect disclosed herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented or such a method may be practiced using other structure, functionality, or structure and functionality in addition to, or other than one or more of the aspects set forth herein.
- The examples, systems, and methods described herein are described with respect to techniques for selectively controlling the exposure time of individual sensors of a sensor array or an image sensor, based on the location of an identified object in an image. The systems and methods described herein may be implemented on various types of imaging systems that include a camera, or digital imager, and operate in conjunction with various types of object detection systems. These include general purpose or special purpose digital cameras or any camera attached to or integrated with an electronic or analog system. Examples of photosensitive devices or cameras that may be suitable for use with the invention include, but are not limited to, semiconductor charge-coupled devices (CCD) or active sensors in CMOS or N-Type metal-oxide-semiconductor (NMOS) technologies, all of which can be germane in a variety of applications including: digital cameras, hand-held or laptop devices, and mobile devices (e.g., phones, smart phones, Personal Data Assistants (PDAs), Ultra Mobile Personal Computers (UMPCs), and Mobile Internet Devices (MIDs)). Examples of object detection systems that may be suitable for use with the invention include, but are not limited to real-time object detection systems based on image processing.
FIG. 1 illustrates an example of afirst vehicle 105 and asecond vehicle 110. Herein, thesecond vehicle 110 may also be referred to as another vehicle, or as a plurality of other vehicles. Thefirst vehicle 105 is equipped with an HDR camera and an object detection system that may be used in conjunction with an ADAS or self-driving car application. Thefirst vehicle 105 equipped with the object detection system may include acamera 115 configured to capture HDR and wide dynamic range (WDR) images, and aperception system 120. Thecamera 115 may be directed so that the lens is facing in the forward direction of thefirst vehicle 105 for capturing images or frames of the scene in front of thefirst vehicle 105. It is noted thatFIG. 1 is an example representation of one embodiment of the techniques disclosed herein, and should not be read as limiting the placement or direction of the camera, nor application of the camera. For example, the camera may be located inside or outside of thefirst vehicle 105, and may be directed such that thelens assembly 255 is facing in any direction. For example, thecamera 115 may be directed to the rear of thefirst vehicle 105 for capturing image or frames of the scene behind thefirst vehicle 105. In another example, thecamera 115 andperception system 120 may be equipped in a vehicle other than a car, such as an air vehicle. - Still referring to
FIG. 1 , a three-axis Cartesian coordinate system is illustrated extending from thecamera 115 in the direction alens assembly 255 of thecamera 115 is facing, providing an example of the range of focus of thecamera 115. For example, thecamera 115 may capture a scene that includes a road and the markers and signs around the road, and other vehicles on and around the road. Thecamera 115 may be functionally and physically integrated with theperception system 120. For example, theperception system 120 may include a processor for executing an object detection algorithm for detecting objects in frames captured by thecamera 115. Theperception system 120 may be integrated with thecamera 115 using a wireless or wired bidirectional communication implementation. For example, the communication link may include a wired communication link and/or a wireless communication link including Bluetooth or Wi-Fi, or an infra-red (IR) beam communication protocol. -
FIG. 2 is a block diagram 200 illustrating how thecamera 115 andperception system 120 may implement techniques in accordance with aspects described in this disclosure. In some examples, the techniques described in this disclosure may be shared among the various components of thecamera 115 and theperception system 120. - In the example of
FIG. 2 , the camera 115 may include a plurality of physical and functional components. The components of the camera 115 may include the lens assembly 255, an image sensor 250, an image signal processor (ISP) 235, an on-chip memory 240, and an external memory 245. In other examples, the camera 115 may include more, fewer, or different components. - Still referring to
FIG. 2 , thelens assembly 255 captures light from a scene and brings it to a focus on the electrical sensor or film. In general terms, the two main optical parameters of a photographic lens are maximum aperture and focal length. The focal length determines the angle of view, and a size of the image relative to that of an object for a given distance to the object (subject-distance). The maximum aperture (f-number, or f-stop) limits the brightness of the image and the fastest shutter speed usable for a given setting (focal length/effective aperture), with a smaller number indicating that more light is provided to the focal plane which typically can be thought of as the face of the image sensor in a simple digital camera. In one form of typical simple lens (technically a lens having a single element) a single focal length is provided. In focusing a camera using a single focal length lens, the distance between lens and the focal plane is changed which results in altering the focal point where the photographic subject image is directed onto the focal plane. The lens may be of manual or auto focus (AF). Thelens assembly 255 provides a structure for containing and positioning one or more camera lenses. Thelens assembly 255 may provide a focus control function wherein the lens position is adjusted based on feedback fromISP 235 or a user of the camera. Thelens assembly 255 may include an actuator or step motor for adjusting the lens position. Thelens assembly 255 may be functionally and/or physically coupled to animage sensor 250. - Still referring to
FIG. 2 , theimage sensor 250 may include a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor. Theimage sensor 250 includes a sensor array of light sensitive pixels or sensors. Each pixel in the array can include at least one photosensitive element for outputting a signal having a magnitude proportional to the intensity of incident light or radiation contacting the photosensitive element. When exposed to incident light reflected or emitted from a scene, each pixel in the array outputs at least one signal having a magnitude corresponding to an intensity of light at one point in the scene. The signals output from each photosensitive element may be processed to form an image representing the captured scene. Filters for use with image sensors include materials configured to block out certain wavelengths of radiation. To capture color images, photo sensitive elements should be able to separately detect wavelengths of light associated with different colors. For example, a photo sensor may be designed to detect first, second, and third colors (e.g., red, green and blue wavelengths). To accomplish this, each pixel in the array of pixels may be covered with a single color filter (e.g., a red, green or blue filter) or with a plurality of color filters. The color filters may be arranged into a pattern to form a color filter array over the array of pixels such that each individual filter in the color filter array is aligned with one individual pixel in the array. Accordingly, each pixel in the array may detect the color of light corresponding to the filter(s) aligned with it. -
FIG. 2 further illustrates theISP 235 integrated with thecamera 115 and theimage sensor 250, thelens assembly 255, and theexternal memory 245. TheISP 235 may be an element of thecamera 115, or may be an element associated with an independent system of which thecamera 115 is integrated (e.g., the perception system 120). Theimage sensor 250 may function to measure the light intensity provided by a scene and convert that light into an electronic signal made up of the image statistics for each frame. The image statistics, or raw image data, provided by theimage sensor 250 may supply theISP 235 with the data necessary to process captured image frames. TheISP 235 may control thelens assembly 255 which can adjust the location of the lens in order to focus the scene. Scene focusing can be based on theimage sensor 250 image statistics alone or in conjunction with an autofocus algorithm. In an example embodiment, distance and directional movement of thelens assembly 255 may be based on direction provided by the autofocus algorithm which may include a contrast detection autofocus. The contrast detection autofocus can make use of the image statistics by mapping them to a value that represents a lens position or, alternatively, may position the lens in non-discrete, ad-hoc positions. TheISP 235 may be coupled to the lens actuator and may adjust the lens based on calculations made with the image information from the at least one image sensor. TheISP 235 may control theimage sensor 250 exposure period. For example, theISP 235 may adjust the exposure period of theimage sensor 250 based in part on the size of the aperture and the brightness of the scene. TheISP 235 may also adjust the exposure period on a per-pixel or per-sensor basis, using data provided by theperception system 120. For example, the processor may allow certain sensors or regions of sensors to collect light for a longer or shorter period of time than other pixels. - Still referring to
FIG. 2 , the ISP 235 may include an on-chip memory 240 integrated with the processor hardware and directly accessible by the ISP 235. The memory 240 may be a random access memory (RAM) chip, a read-only memory, or a flash memory, and may contain instructions for the ISP 235 to interface with the image sensor 250, the lens assembly 255, the external memory 245, and the perception system 120. The external memory 245 may also store information regarding the type of processor and auto focus algorithms, and store captured images. In one embodiment, the external memory 245 may be a fixed piece of hardware such as a random access memory (RAM) chip, a read-only memory, and a flash memory. In another embodiment, the external memory 245 may include a removable memory device, for example, a memory card and a USB drive. - Still referring to
FIG. 2 , thecamera 115 may be integrated with theperception system 120. Theperception system 120 may include a plurality of functional components. The functional components of theperception system 120 may include anobject detection 220 module, afeature extractor 215 module, aperception module 225, and anobject database 230. Thefeature extractor 215,object detection 220, and theperception module 225 may all be executed on a single processor, or may be executed by individual processors functionally and/or physically integrated together. Theobject database 230 may be a memory including a RAM chip, a read-only memory, and a flash memory. In another embodiment, theobject database 230 may include a removable memory device, for example, a memory card and a USB drive. - The
camera 115 may be integrated with thefirst vehicle 105 and configured to capture images of a scene outside of thefirst vehicle 105. Raw image data captured by theimage sensor 250 may be processed by theISP 235. In one embodiment, the raw image data may be communicated to thefeature extractor 215 of theperception system 120. For example, theISP 235 may execute an image combine function to combine a number of captured images into oneHDR image 435, then communicate theHDR image 435 to thefeature extractor 215. In one example embodiment, the raw image data of the number of captured images is combined to form asingle HDR image 435 using sequential exposure change, or other techniques such as interpolation. In one example embodiment, thecamera 115 may sequentially capture multiple images of the same scene using different exposure times. The exposure for each image may be controlled by either varying the f-number of thelens assembly 255 or the exposure time of theimage sensor 250. A high exposure image will be saturated in the bright regions of the captured scene, but the image will capture dark regions as well. In contrast, a low exposure image will have less saturation in bright regions but may end up being too dark and noisy in the dark areas. -
FIG. 3A illustrates a functional representation 300 of the image sensor 250 capturing a scene at three different exposure times (T1, T2, T3) to generate the HDR image 435. Exposure time may also be referred to as “exposure condition” herein, although exposure conditions may also include other conditions such as lens position, aperture settings, and other camera and hardware parameters that may affect the exposure conditions of the image sensor. Each row of sensors 301 a-301 t is illustrated offset from the previous row to indicate a time delta caused by the sequential manner in which each row of the image sensor 250 is read out. This sequential read out often causes a “rolling shutter” effect that distorts objects and creates artifacts in the images. For example, the human eye does not detect the intervals at which an LED light is pulsed on and off, but a camera with a fast exposure time may miss the LED pulse. The problem becomes pronounced in brightly illuminated scenes, where a short (T2) or very short (T3) exposure time is used by the HDR camera to construct the combined scene. Hence, exposure time T1 may be used by certain regions of the image sensor 250 in order to completely capture the LED pulse. The exposure condition can be determined by a processor based on the amount of light in the scene, and/or based on a pre-configured set of parameters, including exposure time. -
FIG. 3B illustrates an example LED light source timing sequence 305 and an image sensor 250 exposure time 310. The ON state 315 of the LED pulse is represented by four square waves, or clock cycles. In one example of an LED light source, there may be a delay between sequential ON states. For example, the delay may be on the order of milliseconds (e.g., 10 ms). The exposure time 310 includes two periods of image sensor 250 exposure (325, 330). The duration of each period of exposure time may depend on the light available in the scene. For example, the exposure time may be 100 μs or less for a brightly lit scene, or a scene with brightly lit regions. The first exposure time 325 captures a piece of an LED pulse. In such a situation, the image may appear distorted, or the light source may appear dim and partially lit. The second exposure time 330 falls in between two LED ON states. In such a situation, the light source appears off. As such, the duration of the exposure of the image sensor 250 may preclude capturing an image while the LED light source is in an ON state. Hence, the method illustrated in FIG. 3A may be used to consecutively capture a number of images over a period that will ensure at least one image of the number of images will capture the LED light source in an ON state.
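- Following the timing picture above, the sketch below checks whether a single exposure window overlaps an ON pulse, and how many back-to-back short exposures are needed to span one full LED period, which guarantees that at least one frame lands on the ON state. The function name and the 1 ms ON width are assumptions for illustration; only the roughly 10 ms period and the 100 μs exposure figure come from the text.

```python
def overlaps_on_state(t_start_us, t_exp_us, period_us, on_width_us):
    """Return True if an exposure window [t_start, t_start + t_exp) overlaps an ON pulse,
    modeling the LED as ON during [k * period, k * period + on_width). Assumes t_exp <= period."""
    phase = t_start_us % period_us
    return phase < on_width_us or phase + t_exp_us > period_us

# Timing in microseconds: the ~10 ms period and 100 us exposure come from the text,
# the 1 ms ON width is an assumed value.
period_us, on_width_us, short_exp_us = 10_000, 1_000, 100

# Back-to-back short exposures covering one full period guarantee at least one ON capture.
frames_needed = -(-period_us // short_exp_us)   # ceiling division: 100 frames
print(frames_needed)
print(overlaps_on_state(4_000, short_exp_us, period_us, on_width_us))  # False: falls between pulses
```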
- FIG. 4A illustrates an example method for blending a plurality of images 400 taken at different exposure settings to generate the HDR image 435. In one example embodiment, the ISP 235 may receive a number of images (405), the number of images captured with different exposure settings. The ISP 235 may then generate one or more weight values for image blending for each image of the plurality of images (410). For example, a weight value accorded to each image may affect its gradation level, or the degree of impact the image has in the HDR image 435 when combined with other images. In one embodiment, the ISP 235 may adjust the weights α and β to generate the HDR image 435 according to a perception measure (e.g., brightness, edges of a degree of saliency, smoothness, length, etc.). The ISP 235 may generate (415) a blended image based on the weights calculated for each image being blended. For example, where data of a short-exposure image 425 is represented by SEI, data of a long-exposure image 430 is represented by LEI, and characteristic functions corresponding to gradation optimization (histogram optimization and the like) of the respective images are represented by f(·) and g(·) respectively, then blended image data (HDRI) according to an exemplary embodiment may be calculated as follows:
- HDRI = α·f(SEI) + β·g(LEI) (Equation 1)
- where α and β may represent weights which are applied to blend images, and by differentiating the weights α and β, a range of representable gradation levels may be adjusted. In another embodiment, a merging weight value may be computed for each pixel location of each of the plurality of images captured. The weight value of each pixel in an image of the plurality of images may be determined based on the weight computed for the other images in the plurality of images, or the weight value can be determined by applying pre-determined weight values stored in the on-chip memory 240 or external memory 245.
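- A minimal numpy sketch of the blend in Equation 1 is shown below. The gradation functions f and g are stand-ins (simple gamma curves) because the text leaves them open, and the weight values are assumed for illustration.

```python
import numpy as np

def blend_hdr(sei, lei, alpha=0.5, beta=0.5,
              f=lambda x: x ** 0.8, g=lambda x: x ** 1.2):
    """Blend per Equation 1: HDRI = alpha * f(SEI) + beta * g(LEI).
    sei and lei are short- and long-exposure frames normalized to [0, 1];
    f and g stand in for the gradation-optimization functions."""
    return np.clip(alpha * f(sei) + beta * g(lei), 0.0, 1.0)

# Usage with synthetic frames in place of real captures.
sei = np.random.rand(480, 640)   # short-exposure image data (SEI)
lei = np.random.rand(480, 640)   # long-exposure image data (LEI)
hdr = blend_hdr(sei, lei, alpha=0.6, beta=0.4)
```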
ISP 235 as illustrated inFIGS. 2 and 3B . TheISP 235 may be configured to receive image data comprising image data from a set of image frames. As used herein, a set of image frames may correspond to a plurality of images captured of the same scene but having different exposure conditions. - In some embodiments, the
ISP 235 may be implemented in hardware as a single-path system with limited memory. As such, image data from the set of image frames may be received as a bitstream and processed by the system as it is received. The system will only have access to a small portion of each image frame of the set of image frames at any given time, without the ability for the system to refer to other portions of the image frames. - In order to perform HDR imaging, the
ISP 235 analyzes the received image data to produce theHDR image 435. For example, for portions of the image frame with high luminosity (e.g., due to direct sunlight), pixels from an image frame having lower exposure time may be used, because image frames having higher exposure times may be saturated in those portions of the image frame. On the other hand, for portions of the image frame with low luminosity (e.g., due to being in shadow), pixels from an image frame having higher exposure time may be used. As such, for each portion of theHDR image 435, appropriate image data from the set of image frames is selected, such that theHDR image 435 is able to capture both high and low luminosity ranges while avoiding saturation (e.g., due to high exposure times) and unnecessary noise (e.g., due to low exposure time). -
FIG. 4B illustrates an exemplary method of generating a single image from multiple images taken at different levels of exposure. Thecamera 115 may capture a number of images of a scene, and theISP 235 may combine the images to create asingle HDR image 435. For example, three images are captured at differing levels of exposure: a long-exposure image 430 (T1), a short-exposure image 425 (T2), and a very short-exposure image 420 (T3). TheISP 235 receives the three images and blends the images to generate theHDR image 435. The images may be combined based on a ratio of image parameters in each of the images as compared to a reference image. For example, the reference image may be the short-exposure image 425 (T2). The image parameters may include characteristics of each image, including but not limited to, contrast, saturation, color, brightness, hue, and chrominance and luminance components.FIG. 4B illustrates an example and should not be used to limit the disclosed techniques to combining only three images. For example, theISP 235 may receive and blend any number of images. In an alternative embodiment, thefeature extractor 215 of theperception system 120 may blend the number of images and generate theHDR image 435 instead of theISP 235. - The
ISP 235 may include an auto-focus algorithm, and may control the lens position using thelens assembly 255 according to the auto-focus algorithm. Generally, contrast detection algorithms evaluate the image statistics received from theimage sensor 250 at a number of lens positions, and determine if there is more or less contrast at each position relative to the other positions. If contrast has increased, the lens is moved in that direction until contrast is maximized. If contrast is decreased, the lens is moved in the opposite direction. This movement of the lens is repeated until contrast is maximized. In one exemplary embodiment, the lens assembly may be activated to focus the lens on a particular scene by employing algorithms of at least one of three specific types of contrast detection: (1) exhaustive autofocus, (2) slope predictive autofocus, and (3) continuous autofocus. Contrast detection autofocus makes use of a focus feature that maps an image to a value that represents the degree of focus of the image, and iteratively moves the lens searching for an image with the maximal focus according to the contrast detection algorithm. In one example, theISP 235 may determine which contrast detection autofocus algorithm may be the most appropriate for a given scene or application, and select it to be used. Alternatively, theISP 235 may determine the appropriate algorithm based onimage sensor 250 information. For example, the type of image sensor, the number of light sensitive surfaces on each sensor, etc. TheISP 235 may actuate the lens using the lens assembly to adjust the position of the lens using a digital lookup table with a range of lens positions that correspond to a calculated disparity value. The lookup table may be stored in thememory 240. The camera may adjust position of the lens using one or more contrast detection auto focus algorithms. -
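- The hill-climbing behavior described above can be sketched as follows. The capture_at hook (move the lens, return a frame) and the gradient-variance focus measure are hypothetical stand-ins for the lens-actuator interface and the image statistics; they are not part of the disclosed system.

```python
import numpy as np

def focus_measure(frame):
    """Variance of the image gradient, one common contrast metric."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(gx) + np.var(gy))

def contrast_af(capture_at, positions):
    """Walk through lens positions while contrast keeps increasing and stop once it
    drops, returning the best position seen. capture_at(pos) is a hypothetical hook
    that moves the lens to pos and returns a frame; positions are the allowed stops."""
    best_pos = positions[0]
    best_score = focus_measure(capture_at(best_pos))
    for pos in positions[1:]:
        score = focus_measure(capture_at(pos))
        if score < best_score:
            break                      # contrast fell: the peak has been passed
        best_pos, best_score = pos, score
    return best_pos
```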
FIG. 5 is a block diagram illustrating an example of theperception system 120 that may implement techniques in accordance with aspects described in this disclosure. Theperception system 120 may be configured to perform some or all of the techniques of this disclosure. In some examples, the techniques described in this disclosure may be shared among the various components of theperception system 120 and thecamera 115. In some examples, additionally or alternatively, a processor (not shown) may be configured to perform some or all of the techniques described in this disclosure. The processor may include the image sensor array as an ISP, but can also include an application processor or other external general processor. - In the example of
FIG. 5 , thefeature extractor 215 includes a plurality of functional components. The processes and filters that are executed on the image data by thefeature extractor 215 can enable theobject detection 220 system to accurately and effectively derive a number of edges and edge-related information from received image data. Each edge may represent a boundary of an object with respect to a scene surrounding the object. The functional components of thefeature extractor 215 include afilter bank 505 and anedge detector 510. In other examples, the feature extractor may include more, fewer, or different functional components. - Still referring to
FIG. 5 , the filter bank 505 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks. The filter bank 505 may include a stored set of filters for smoothing the image or filtering the image for noise. In one embodiment, the filter bank 505 may include an implementation of a Gaussian filter and a Sobel filter. It is contemplated that other smoothing, blurring, or shading filters can be used by the feature extractor 215. The HDR image 435 data is first applied to the filter bank 505 to create a blurred image. Such blurring reduces image noise and reduces details in the raw image data. Thus, applying the HDR image 435 to the filter bank 505 has the effect of reducing the detection of weak or isolated edges. In another embodiment, the feature extractor 215 may be utilized to detect and identify edges in the long-exposure image 430, the short-exposure image 425, and the very short-exposure image 420 before combining the images. - Still referring to
FIG. 5 , the edge detector 510 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks. In some embodiments, the feature extractor 215 uses an edge detector 510 to perform an edge detection algorithm on the blurred image to detect edges in the image. In another embodiment, the edge detector may perform the edge detection algorithm on the HDR image 435 prior to, or without, passing the HDR image 435 through a filtering system. In one example embodiment, an implementation of a Canny edge detection algorithm may be used to detect and single out prominent edges in the image. It is contemplated that other edge detection algorithms can be used by the feature extractor 215. In a non-limiting example, a Canny-Deriche detector algorithm and a differential edge detector algorithm can be used. The edge detector 510 may generate data that includes edge data and edge location in the HDR image 435.
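- A compact sketch of the blur-then-detect pipeline described above, using OpenCV, is shown below. The kernel size and hysteresis thresholds are illustrative defaults rather than values taken from the application, and the input is assumed to be an 8-bit BGR frame.

```python
import cv2

def extract_edges(frame_bgr, blur_ksize=(5, 5), low_thresh=50, high_thresh=150):
    """Smooth first to suppress weak or isolated edges, then run Canny edge detection.
    frame_bgr is assumed to be an 8-bit BGR image (e.g., the blended HDR frame after
    tone mapping); the returned edge map is a binary image of the same size."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, blur_ksize, 0)
    return cv2.Canny(blurred, low_thresh, high_thresh)
```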
- In the example of FIG. 5 , the object detection 220 module includes a plurality of functional components including an object matching 515 module, an object database 230, and an optional filter bank 520. In other examples, the object detection 220 module may include more, fewer, or different functional components. - Still referring to
FIG. 5 , the object matching 515 module may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks. The object matching 515 algorithm may include an algorithm for matching an object shape in theobject database 230 with the edges in theHDR image 435 determined by thefeature extractor 215 received by theobject detection 220 module. For example, a scene captured by thecamera 115 may include thesecond vehicle 110. Theobject database 230 may include a plurality of shapes, any number of which may be substantially similar to the calculated edges of thesecond vehicle 110. An object matching algorithm may determine a shape in theobject database 230 that most closely resembles the calculated edges of thesecond vehicle 110. The object database may include attributes associated with each shape such that each shape can be identified. For example, a vehicle such as a car or truck can be identified based on its shape, and be distinguished from a traffic light or a construction sign, both of which are also identified based on their shape. Shape identification can alter the way that the images are generated. For example, the shape of a car or truck can indicate a moving object that may remain in the scene for longer than a stationary object such as a traffic light or street sign. As such, identification of the object can trigger a calculation of gradients of movement of the object based on the identity of the object. Identification of the object may also include identifying the presence of an object without determining a specific object. In some embodiments, the object matching 515 algorithm detects shapes in the image created by the edges based on one or more criterion including the perception measure, length of an edge or its associated curves, a number of overlapped edges, location in the image, depth information, camera location information, or other available information. For example, thecamera 115 may be calibrated to capture a center of a lane in the middle of an image frame, and the shoulders of the lane in the left and right periphery of the image frame. In such an example, the object matching 515 algorithm may detect shapes based on expected shapes in those areas of the image frame. In one example embodiment, the object matching 515 algorithm detects shapes in the image based on motion detection of the object. For example, the motion of an object may be detected by obtaining a plurality of images of an object over a period of time, identifying the object, and calculating gradients of movement of the object based on the plurality of images. - In one embodiment, upon determining a shape in the
object database 230 that most closely resembles the calculated edges, the object matching 515 algorithm may generate a first bounding box 605 around each of the objects in the image, where the first bounding box 605 has a height (h) and width (w) measured in pixels. Each bounding box may represent a region of sensors in the sensor array. Each bounding box may be generated based on the geometry of the objects, other types of image descriptors (e.g., SIFT, BRISK, FREAK, etc.), or other parameters. In an optional embodiment, the object detection 220 module may provide any number of bounding boxes to the camera 115. In such an embodiment, the ISP 235 may communicate a command to the image sensor 250 to modulate the exposure settings of the sensors of the bounding box and the sensors within the bounding box. For example, the ISP 235 may direct the image sensor to increase the exposure time of the sensors within the bounding box to a time greater than the exposure time of other sensors in the image sensor 250. The object detection 220 module may include an optional filter bank 520. Once the edge data is received from the feature extractor 215, the object detection 220 module may perform the steps of reducing false positives and verifying the object match.
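- One way to sketch the contour-to-bounding-box step described above is shown below, using OpenCV. The object_db dictionary of template contours and the dissimilarity threshold are hypothetical; the actual matching criteria in the disclosure may differ.

```python
import cv2

def boxes_for_known_objects(edge_map, object_db, max_dissimilarity=0.2):
    """Find contours in a binary edge map, compare each against template contours in a
    hypothetical object database, and return (name, (x, y, w, h)) for every acceptable
    match. cv2.matchShapes returns 0 for identical shapes and larger values for worse fits
    (OpenCV 4 return signature assumed for findContours)."""
    contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        for name, template in object_db.items():
            score = cv2.matchShapes(contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
            if score < max_dissimilarity:
                boxes.append((name, cv2.boundingRect(contour)))
                break
    return boxes
```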
- In the example of FIG. 5 , the perception module 225 may be a portion of physical memory included on a processor, for example, a video processing unit or a general processing unit, and memory, collectively configured to manage communications and execution of tasks. The perception module 225 may determine areas of the detected objects that may include an LED light. For example, a traffic light and a vehicle with taillights may be captured in an image frame, and the object detection module may detect the object of the traffic light and the vehicle based on the edge detector 510 data. The perception module 225 may determine areas of the detected objects that may include LED lights. The perception module may generate a second bounding box 610 around each of the areas that may include an LED light in the image, where the second bounding box 610 has a height (h) and width (w) measured in pixels. Each bounding box may be generated based on the geometry of the objects, other types of image descriptors (e.g., SIFT, BRISK, FREAK, etc.), or other parameters. In an example embodiment, the perception module 225 may provide any number of bounding boxes to the camera 115. In such an embodiment, the ISP 235 may communicate a command to the image sensor 250 to modulate the exposure settings of the sensors of each bounding box and the sensors within each bounding box. For example, the ISP 235 may direct the image sensor to increase the exposure time of the sensors within the bounding box to a time greater than the exposure time of other sensors in the image sensor 250. This allows the image sensor 250 to capture image frames that include the light created by the LED, and avoid capturing image frames during the off phase of the LED. - Still referring to
FIG. 5 , the perception system 120 may generate a clock cycle that cycles from an ON state to an OFF state in sync with the LED pulse of an outside light source captured by the camera 115. For example, once the outside light source is detected, the perception system 120 may determine the ON/OFF state cycles based on the images captured of the light source. The perception system 120 may provide the clock cycle to the ISP 235 so that the ISP 235 may expose the image sensor 250 during the ON state of the outside light source. In another example, the ISP may expose one or more regions of the image sensor 250 at the rate of the clock cycle. In this configuration, the camera 115 captures images in sync with the outside light source so that the images are not captured during an OFF cycle of the LED pulse of the outside light.
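- The clock-cycle idea above depends on estimating the LED period from observation. The sketch below is one simple estimator, assumed for illustration only: it takes the timestamps of frames in which the light source appeared lit, uses the median spacing as the period, and predicts the next ON instant.

```python
import numpy as np

def next_on_time(on_timestamps, now):
    """Estimate the LED cycle from timestamps (in seconds) of frames where the light
    appeared lit, and predict the next ON instant after now. Returns (time, period).
    This is an illustrative estimator, not the synchronization scheme claimed here."""
    times = np.sort(np.asarray(on_timestamps, dtype=float))
    period = float(np.median(np.diff(times)))
    cycles_ahead = max(1, int(np.ceil((now - times[-1]) / period)))
    return times[-1] + cycles_ahead * period, period

# Example: the light was seen lit at roughly 10 ms intervals; predict the next ON time.
predicted, period = next_on_time([0.010, 0.020, 0.0301, 0.040], now=0.0525)
```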
FIG. 6 illustrates an example of boundingbox operations 600. Thesecond vehicle 110 is captured in an image frame. The vehicle is detected through edge detection and object detection algorithms, and afirst bounding box 605 is generated around the detected object. LED light sources are determined by theperception module 225 and asecond bounding box 610 is generated around the LED light sources. This embodiment offers the benefit of two sources of bounding boxes for increasing sensor exposure time inimage sensors 250. In the event that the perception module cannot determine an LED source, thefirst bounding box 605 may be used to determine sensor exposure time. It is contemplated that other embodiments may include only one source of bounding box from either theobject detection 220 module or theperception module 225. -
FIG. 7 is a flow chart that illustrates anexample method 700 of detecting LED light source regions in a captured image. Atblock 705, themethod 700 captures multiple images of a scene using thecamera 115. In some implementations, thecamera 115 may include animage sensor 250. In some implementations, the multiple images are each captured consecutively at differing exposure times. - At
block 710, the method 700 generates the HDR image 435 by combining, or blending, the multiple images. For combining the images, each image of the multiple images may be accorded a weight value, where the weight value determines the effect each image has on the blended HDR image 435. - At
block 715, themethod 700 executes an edge detection algorithm on theHDR image 435 in order to calculate and define the edges. The blended image is ideal for edge detection because by blending the multiple images into theHDR image 435, the edges have greater definition. In some embodiments, thefeature extractor 215 may apply the edge detection algorithm to one or more of the multiple images prior to generating the blended image. In this configuration, the edges can carry over to the blended image. - At
block 720, themethod 700 identifies an object in the image. In some implementations, a number of objects may be detected in the image using the detected edges. In some implementations, the objects detected are defined by an object database, and are matched using the object matching algorithm. In such an implementation, the object matching algorithm may identify objects based on the edges in the image and the objects stored in theobject database 230 by comparing the shapes formed by the edges. - At
block 725, themethod 700 determines a region of the identified object that contains a light source. In some implementations, themethod 700 may use an object database to determine regions of the detected objects that may contain a light source. For example, the object matching algorithm may identify objects based on the edges in the image, and the identified objects may contain characteristics. In this implementation, the characteristics may include regions of the identified objects that may contain light sources. For example, theHDR image 435 may contain edges that form a rectangle that includes three circular shapes that are vertically aligned, similar to a traffic light. Theobject database 230 may include the shape, as well as associated characteristics that include light sources at each of the three circular shapes. In such a configuration, theperception module 225 may increase the exposure time of regions of sensors in theimage sensor 250 that correspond to the regions of the three circular shapes that have the light source characteristic. In another implementation, theperception module 225 may increase the exposure time of regions of sensors in theimage sensor 250 that correspond to one or more bounding boxes containing the regions of the three circular shapes that have the light source characteristic. - At
block 730, themethod 700 generates a bounding box around one or more regions that may contain a light source. In some implementations, the bounding box may correspond to a region of pixels or individual sensors on theimage sensor 250. In some implementations, the bounding box may be dynamically resized and/or reshaped with subsequent HDR images that reveal the location of one or more light sources. For example, if theHDR image 435 is captured while the LED light source is in anON state 315, the bounding box may be resized or reshaped to fit the actual size of the light source. In this embodiment, the bounding box regions are fitted to be more accurate, so as not to include unnecessarily large regions of theimage sensor 250. - At
block 735, themethod 700 communicates the bounding box data to thecamera 115. In some implementations, the bounding box data identifies regions of pixels or individual sensors on theimage sensor 250 that will have a different exposure time than the rest of the sensors. For example, theISP 235 may receive the bounding box data, and increase the exposure time of the regions of sensors in theimage sensor 250 that are contained by the one or more bounding boxes. - At
block 740, the method 700 updates the exposure time of the sensors in regions of the image sensor 250 according to the bounding box data. In some implementations, the perception module 225 may generate bounding boxes in regions that have fewer edges than other regions of the HDR image 435, or regions where edges commonly terminate. These regions may indicate regions of the HDR image 435 that are over- or under-exposed. In such an implementation, the perception module 225 may send bounding box data to the ISP 235, where the bounding box data contains the regions that have fewer edges than other regions of the HDR image 435, or regions where edges commonly terminate. In response, the ISP 235 may increase or decrease the exposure time of the individual sensors of the image sensor 250 that are contained within the one or more bounding boxes.
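- On the ISP side, the bounding box data can be thought of as a per-sensor exposure-time map. The sketch below builds such a map; the base and boosted durations are assumed values, and the (x, y, w, h) box format is an assumption for illustration.

```python
import numpy as np

def build_exposure_map(sensor_shape, boxes, base_exposure_us=100, boosted_exposure_us=10000):
    """Return an array holding one exposure time per sensor: a short base time everywhere,
    and a longer time inside each bounding box given as (x, y, w, h) in pixel coordinates.
    The specific durations are illustrative, not values from the application."""
    exposure = np.full(sensor_shape, base_exposure_us, dtype=np.int32)
    for x, y, w, h in boxes:
        exposure[y:y + h, x:x + w] = boosted_exposure_us
    return exposure

# Example: a 1080x1920 sensor with two regions flagged as likely LED light sources.
emap = build_exposure_map((1080, 1920), [(600, 300, 120, 80), (1500, 700, 60, 60)])
```
- Implementing Systems and Terminology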
- One or more of the components, steps, features and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from novel features disclosed herein. The apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described in the figures. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.
- Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
- The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
- The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.”
- The term “pixel” or “sensor” may include multiple photosensitive elements, for example a photogate, photoconductor, or other photodetector, overlying a substrate for accumulating photo-generated charge in an underlying portion of the substrate.
- Moreover, storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine-readable mediums, processor-readable mediums, and/or computer-readable mediums for storing information. The terms “machine-readable medium”, “computer-readable medium”, and/or “processor-readable medium” may include, but are not limited to non-transitory mediums such as portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instruction(s) and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a “machine-readable medium,” “computer-readable medium,” and/or “processor-readable medium” and executed by one or more processors, machines and/or devices.
- Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
- The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
- A person having ordinary skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
- The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing embodiments are merely examples and are not to be construed as limiting the invention. The description of the embodiments is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Claims (30)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/469,309 US10084967B1 (en) | 2017-03-24 | 2017-03-24 | Systems and methods for regionally controlling exposure time in high dynamic range imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/469,309 US10084967B1 (en) | 2017-03-24 | 2017-03-24 | Systems and methods for regionally controlling exposure time in high dynamic range imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
US10084967B1 US10084967B1 (en) | 2018-09-25 |
US20180278824A1 true US20180278824A1 (en) | 2018-09-27 |
Family
ID=63556837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/469,309 Expired - Fee Related US10084967B1 (en) | 2017-03-24 | 2017-03-24 | Systems and methods for regionally controlling exposure time in high dynamic range imaging |
Country Status (1)
Country | Link |
---|---|
US (1) | US10084967B1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10334141B2 (en) * | 2017-05-25 | 2019-06-25 | Denso International America, Inc. | Vehicle camera system |
US10852731B1 (en) * | 2017-12-28 | 2020-12-01 | Waymo Llc | Method and system for calibrating a plurality of detection systems in a vehicle |
US11681030B2 (en) | 2019-03-05 | 2023-06-20 | Waymo Llc | Range calibration of light detectors |
US11102422B2 (en) * | 2019-06-05 | 2021-08-24 | Omnivision Technologies, Inc. | High-dynamic range image sensor and image-capture method |
US11747453B1 (en) | 2019-11-04 | 2023-09-05 | Waymo Llc | Calibration system for light detection and ranging (lidar) devices |
CN111327800B (en) * | 2020-01-08 | 2022-02-01 | 深圳深知未来智能有限公司 | All-weather vehicle-mounted vision system and method suitable for complex illumination environment |
CN112634183B (en) * | 2020-11-05 | 2024-10-15 | 北京迈格威科技有限公司 | Image processing method and device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
JP4937851B2 (en) | 2007-07-05 | 2012-05-23 | パナソニック株式会社 | Imaging device |
US8717464B2 (en) | 2011-02-09 | 2014-05-06 | Blackberry Limited | Increased low light sensitivity for image sensors by combining quantum dot sensitivity to visible and infrared light |
GB2497571A (en) | 2011-12-15 | 2013-06-19 | St Microelectronics Res & Dev | An imaging array with high dynamic range |
US9179062B1 (en) * | 2014-11-06 | 2015-11-03 | Duelight Llc | Systems and methods for performing operations on pixel data |
US9531961B2 (en) * | 2015-05-01 | 2016-12-27 | Duelight Llc | Systems and methods for generating a digital image using separate color and intensity data |
DE102012217093A1 (en) * | 2012-09-21 | 2014-04-17 | Robert Bosch Gmbh | Camera system, in particular for a vehicle, and method for determining image information of a detection area |
DE102013100804A1 (en) | 2013-01-28 | 2014-07-31 | Conti Temic Microelectronic Gmbh | Method for detecting pulsed light sources |
JP6230239B2 (en) * | 2013-02-14 | 2017-11-15 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, image processing method, image processing program, and storage medium |
US8849064B2 (en) * | 2013-02-14 | 2014-09-30 | Fotonation Limited | Method and apparatus for viewing images |
JP2015231118A (en) * | 2014-06-04 | 2015-12-21 | キヤノン株式会社 | Image composition device, image composition system and image composition method |
- 2017-03-24: US application US15/469,309 filed; granted as patent US10084967B1; status: not active (Expired - Fee Related)
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11575842B2 (en) * | 2018-08-03 | 2023-02-07 | Canon Kabushiki Kaisha | Imaging apparatus |
US11928797B2 (en) | 2018-10-24 | 2024-03-12 | Samsung Electronics Co., Ltd. | Electronic device and method for acquiring a synthesized image |
US11933599B2 (en) | 2018-11-08 | 2024-03-19 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling same |
EP3843379B1 (en) * | 2018-11-08 | 2024-08-07 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling same |
US20200342624A1 (en) * | 2019-04-24 | 2020-10-29 | Sony Interactive Entertainment Inc. | Information processing apparatus and representative coordinate derivation method |
US11663737B2 (en) * | 2019-04-24 | 2023-05-30 | Sony Interactive Entertainment Inc. | Information processing apparatus and representative coordinate derivation method |
EP4050882A1 (en) * | 2021-02-25 | 2022-08-31 | Canon Kabushiki Kaisha | Image capturing apparatus capable of detecting flicker due to periodic change in light amount of object, flicker detecting method, and program |
US11627261B2 (en) | 2021-02-25 | 2023-04-11 | Canon Kabushiki Kaisha | Image capturing apparatus capable of detecting flicker due to periodic change in light amount of object, flicker detecting method, and non-transitory computer-readable storage medium |
WO2022222028A1 (en) | 2021-04-20 | 2022-10-27 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Traffic light detection and classification for autonomous driving vehicles |
Also Published As
Publication number | Publication date |
---|---|
US10084967B1 (en) | 2018-09-25 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US10084967B1 (en) | Systems and methods for regionally controlling exposure time in high dynamic range imaging | |
US11758279B2 (en) | WDR imaging with LED flicker mitigation | |
US9516295B2 (en) | Systems and methods for multi-channel imaging based on multiple exposure settings | |
JP5832855B2 (en) | Image processing apparatus, imaging apparatus, and image processing program | |
CA2896825C (en) | Imaging apparatus with scene adaptive auto exposure compensation | |
TWI722283B (en) | Multiplexed high dynamic range images | |
US9489750B2 (en) | Exposure metering based on background pixels | |
JP5860663B2 (en) | Stereo imaging device | |
KR102581679B1 (en) | An elelctronic device and method for processing an image in the same | |
JP4523629B2 (en) | Imaging device | |
US10462378B2 (en) | Imaging apparatus | |
JP6965132B2 (en) | Image processing equipment, imaging equipment, image processing methods and programs | |
TWI767422B (en) | A low-light imaging system | |
JP7278764B2 (en) | IMAGING DEVICE, ELECTRONIC DEVICE, IMAGING DEVICE CONTROL METHOD AND PROGRAM | |
TW201512701A (en) | Image capturing apparatus and the control method thereof | |
US20200036877A1 (en) | Use of ir pre-flash for rgb camera's automatic algorithms | |
CN114143418B (en) | Dual-sensor imaging system and imaging method thereof | |
JP4304610B2 (en) | Method and apparatus for adjusting screen brightness in camera-type vehicle detector | |
JP2012085093A (en) | Imaging device and acquisition method | |
US20150116500A1 (en) | Image pickup apparatus | |
JP2016220002A (en) | Imaging apparatus, method for controlling the same, program, and storage medium | |
US20240005465A1 (en) | Enhanced vision processing and sensor system for autonomous vehicle | |
JPWO2019142586A1 (en) | Image processing system and light distribution control system | |
US11825205B2 (en) | Computer implemented method and a system for obtaining a camera setting | |
US20240340396A1 (en) | Derivation device, derivation method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: QUALCOMM INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SOMASUNDARAM, KIRAN;BISWAS, MAINAK;SIGNING DATES FROM 20170504 TO 20170615;REEL/FRAME:042761/0818 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
2022-09-25 | FP | Lapsed due to failure to pay maintenance fee | |