
WO2024046727A1 - Multispectral image sensor arrangement, electronic device and method of multispectral imaging - Google Patents

Multispectral image sensor arrangement, electronic device and method of multispectral imaging

Info

Publication number
WO2024046727A1
WO2024046727A1 (PCT/EP2023/072014)
Authority
WO
WIPO (PCT)
Prior art keywords
spectral
image
image sensor
sensor
multispectral
Prior art date
Application number
PCT/EP2023/072014
Other languages
English (en)
Inventor
Alexander Gaiduk
Gunter Siess
Mohsen Mozaffari
Original Assignee
ams Sensors Germany GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ams Sensors Germany GmbH filed Critical ams Sensors Germany GmbH
Publication of WO2024046727A1


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J 3/28 - Investigating the spectrum
    • G01J 3/2823 - Imaging spectrometer
    • G01J 3/02 - Details
    • G01J 3/0205 - Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows
    • G01J 3/0248 - Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows, using a sighting port, e.g. camera or human eye
    • G01J 3/027 - Control of working procedures of a spectrometer; Failure detection; Bandwidth calculation
    • G01J 3/0272 - Handheld
    • G01J 3/0289 - Field-of-view determination; Aiming or pointing of a spectrometer; Adjusting alignment; Encoding angular position; Size of measurement area; Position tracking

Definitions

  • This disclosure relates to a multispectral image sensor arrangement, an electronic device and to a method of multispectral imaging.
  • An object to be achieved is to provide a multispectral image sensor arrangement for electronic devices that overcomes the aforementioned limitations and provides improved color correction or calibration.
  • A further object is to provide an electronic device comprising such an image sensor arrangement and a method of multispectral imaging.
  • The following relates to an improved concept in the field of imaging.
  • One aspect relates to the sensing of lighting conditions from different regions of an image.
  • The proposed concept provides means to control selection of regions-of-interest, ROIs, (including position and size) and provides control over light monitoring regions.
  • A multispectral image sensor arrangement comprising a main image sensor, a multispectral image sensor and a processing unit to perform image processing of images taken by the sensors.
  • The processing unit allows a functional workflow to be implemented that combines information from the main image sensor and the multispectral image sensor and, optionally, other sensors as well as databases or artificial intelligence, AI, based cores to address a task that a user wishes to solve.
  • Such tasks can be light identification, chemical identification, plant identification, skin identification, face identification etc.
  • the improved concept enables a functional workflow that flexibly combines spatial and spectral information from multiple image sensors that have various spatial and spectral resolutions or temporal responses.
  • a multispectral image sensor arrangement comprises a main image sensor, a multispectral image sensor and a processing unit.
  • the image sensor is operable to acquire a spatially resolved first image of a scene.
  • the multispectral sensor is operable to acquire a spectrally resolved second image of the same scene.
  • the processing unit is operable to perform a number of image processing and control steps.
  • One step is to define one or more regions-of-interest, ROIs, in the first image.
  • Another step is to define one or more spectral ROIs in the second image corresponding to the ROIs in the first image.
  • Spectral data is determined from the spectral ROIs of the second image.
  • the determined spectral data is used to adjust a spectral representation of the first image, i.e. image data of the first image is complemented with additional application-specific spectral data from the multispectral sensor.
  • the sequence of procedural steps conducted by the processing unit may vary according to several possible workflows, for example.
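The workflow described above (define ROIs, map them to spectral ROIs, extract spectral data, adjust the representation) can be sketched in code. The sketch below is purely illustrative and not part of the disclosure; function names, the uniform scale factor, and the single-gain adjustment are simplifying assumptions:

```python
import numpy as np

def adjust_spectral_representation(first_image, second_image, rois, scale):
    """Illustrative workflow: for each ROI in the first (spatial) image,
    read the corresponding spectral ROI in the second (spectral) image,
    average its spectrum, and apply it as a per-ROI correction gain."""
    adjusted = first_image.astype(float).copy()
    n_channels = first_image.shape[2]
    for (x, y, w, h) in rois:
        # map ROI coordinates into the lower-resolution spectral image
        sx, sy = int(x * scale), int(y * scale)
        sw, sh = max(1, int(w * scale)), max(1, int(h * scale))
        spectral_roi = second_image[sy:sy + sh, sx:sx + sw, :]
        # spectral data: mean spectrum over the spectral ROI
        spectrum = spectral_roi.mean(axis=(0, 1))
        # toy adjustment: divide by the normalized illuminant estimate so a
        # non-neutral light source is balanced out within the ROI
        gain = spectrum.mean() / np.maximum(spectrum, 1e-9)
        adjusted[y:y + h, x:x + w, :] *= gain[:n_channels]
    return adjusted
```

A real implementation would depend on the actual channel layout and calibration of the sensors; only the order of the procedural steps mirrors the text.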
  • the regions-of-interest, ROIs can be used to identify, and isolate, objects in the scene.
  • the first image may show a bright light source of some color, which is localized only in a small part of the image (spatial distribution) .
  • The color of the bright light source may have a different dominant color (spectral distribution) than other, or even most other, parts of the image.
  • An appropriate ROI can be used to create a subset of image data, in order to adjust, or calibrate, a spectral representation of said ROI of the first image.
  • The subset of image data can be used to adjust, or calibrate, a spectral representation of the entire first image, e.g. complemented with data from a dedicated optical sensor.
  • The calibration involves spectral data which can be retrieved from the second image, which contains spectral data of the corresponding spectral ROI.
  • The ROI and the spectral ROI may not be exactly the same, as typically the main image sensor and multispectral image sensor may have different spatial resolutions.
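Because the two sensors have different spatial resolutions, a ROI on the main sensor has to be re-gridded onto the multispectral sensor. A minimal sketch of such a mapping follows; it is an illustration only, assuming axis-aligned sensors and a purely proportional coordinate relation (no FOV offset or distortion):

```python
import math

def map_roi_to_spectral(roi, main_shape, spectral_shape):
    """Map a ROI given in main-sensor pixel coordinates, roi = (x, y, w, h),
    to the nearest covering ROI on the (typically coarser) multispectral
    sensor grid; shapes are (height, width) in pixels."""
    x, y, w, h = roi
    sx = spectral_shape[1] / main_shape[1]   # horizontal scale factor
    sy = spectral_shape[0] / main_shape[0]   # vertical scale factor
    # floor the near corner and ceil the far corner so the spectral ROI
    # fully covers the main-sensor ROI despite the resolution mismatch
    x0, y0 = math.floor(x * sx), math.floor(y * sy)
    x1, y1 = math.ceil((x + w) * sx), math.ceil((y + h) * sy)
    return (x0, y0, x1 - x0, y1 - y0)
```

The floor/ceil rounding makes the spectral ROI slightly larger than the exact projection, which matches the observation that the two ROIs "may not be exactly the same".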
  • the proposed concept addresses a number of shortcomings of previous solutions suggested in the art .
  • The multispectral image sensor arrangement allows the combination of the high spatial resolution of the main image sensor with the high quality spectral identification of the multispectral image sensor, or high specificity for a specific application.
  • This spectral range is typically not available in a standard monochromatic or RGB color sensor.
  • Correlating the images allows information from image sensors with different spatial and spectral resolutions to be combined, and calibration to be performed on a finer scale, as defined by the regions-of-interest.
  • An accurate spectral identification of a direct or diffuse light source can be achieved in one or more local positions simultaneously, rather than being roughly estimated at a global level or at a single location.
  • the proposed concept can also be applied to different fields, e.g. spectral identification and color matching applications.
  • the range of applications may also include sensing of reflected light in fields such as medical imaging, digital health, wellness, diagnostics, agriculture, food inspection, counterfeit detection, security, sorting, etc.
  • Analysis of multispectral data is dedicated to the application and to the optical system (with potentially non-negligible optical aberrations).
  • The parameters of the analysis are flexible and potentially the spectral settings are tunable.
  • the proposed concept suggests ways to combine two image sensors with different spatial and spectral characteristics, i.e. spectral analysis is performed for objects visible in images obtained by the main image sensor, but depending on the optical and digitalization properties and resolution of multispectral image sensor.
  • the properties of the multispectral image sensor may vary along the field and spectral channels (optics-defined) and are digitally corrected using the processing unit, e.g. by means of firmware and software.
  • region-of-interest refers to samples within a data set, i.e. a sub-image within the first or second (spectral) image.
  • the image sensor comprises an array of pixels, e.g. implemented as a CMOS photodetector or CCD.
  • The multispectral image sensor comprises an array of spectral pixels, e.g. the spectral pixels provide optical channels distributed over the visible, IR and/or UV range. There may be extra channels present, such as Clear, Flicker and NIR channels.
  • The term "spectral" indicates that the image sensor, or the spectral pixels, is arranged to generate an image of spectral data.
  • The spectral image represents a three-dimensional array of data which combines precise spectral information with two-dimensional spatial correlation. Spectral information enables accurate object color measurements, spectral detection and characterization.
  • The term "spatial" indicates that the image sensor, or pixels, is arranged to generate an image of spatial data.
  • The spatial image represents a three-dimensional array of data which combines intensity, or color, and depth of field/focus for each sensor and spectral channel.
  • The term "adjust a spectral representation" refers to a combination of the spatial and spectral information.
  • The image sensor generates an image, which could represent a true color image of a scene.
  • A calibration may be needed.
  • Calibration may be a special case of spectral representation adjustment. Typically, this can be achieved by means of a dedicated optical sensor.
  • the term "spectral representation" may be understood in a broader sense .
  • the image acquired by the main image sensor may also be used for spectral imaging .
  • the image may be adj usted, or calibrated, such that the spectral representation indicates accurate spectral information over the entire image . Examples for calibration include white balance or color balance in digital imaging .
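As an illustration of such a calibration (not part of the disclosure), a gray-world-style white balance driven by an illuminant estimate can be sketched as follows; the reduction of the illuminant to three RGB values is an assumption for the sketch:

```python
import numpy as np

def white_balance(image, illuminant_rgb):
    """Toy white balance: scale each channel so that the estimated
    illuminant (e.g. derived from multispectral data) maps to neutral
    gray; image values are floats in [0, 1]."""
    illuminant = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant.mean() / illuminant
    return np.clip(image * gains, 0.0, 1.0)
```

In the context of the text, the illuminant estimate would come from the spectral ROIs rather than from the image itself, which is what distinguishes this scheme from a plain gray-world balance.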
  • the processing unit can be implemented in different ways.
  • the processing unit comprises an image processor as a type of media processor or specialized digital signal processor (DSP) used for image processing.
  • the processing unit can be implemented as an ASIC or microprocessor, for example.
  • the processing unit can be part of a dedicated sensor module or be part of a larger electronic device such as a digital camera.
  • the processing unit is operable to define the position, size and shape of the ROIs and spectral ROIs.
  • The spectral spatial distribution depends on the relative size of an object, detector pixel size and parameters of an optical system, such as a point-spread-function, PSF, of a camera lens, for example.
  • position, size and shape of the ROIs may only be limited by the design of the image sensors, e.g. number and shape of pixels.
  • the processing unit may also be operable to define more ROIs and corresponding spectral ROIs for the same object, and to adjust a spectral representation based on a comparison of said more ROIs and corresponding spectral ROIs. For example, a smaller ROI selection may produce better spectral reconstruction (or better estimation of the light source type) compared to a larger ROI selection, depending on the size of an object in the scene.
  • The processing unit is operable to initiate image acquisition of the spatially resolved first image by means of pixels of the image sensor. Furthermore, the processing unit is operable to initiate image acquisition of the spectrally resolved second image by means of spectral pixels of the multispectral sensor. The second image is acquired using spectral pixels corresponding to the defined spectral ROIs.
  • In an alternative embodiment, the processing unit is operable to initiate image acquisition of the spatially resolved first image by means of pixels of the image sensor, and to initiate image acquisition of the spectrally resolved second image by means of spectral pixels of the multispectral sensor, wherein the second image is acquired using all spectral pixels corresponding to the defined spectral ROIs.
  • The first image is acquired first and analyzed for ROIs.
  • The second image may be acquired as a whole and the corresponding spectral ROIs are used for further analysis.
  • Alternatively, the multispectral image sensor is used to only acquire data corresponding to the spectral ROIs, rather than taking an entire image.
  • Image processing by means of the processing unit focuses on the ROIs rather than the entire images. This renders processing fast as well as accurate.
  • Different ROIs can give different digitalization levels/sampling/binning for the corresponding ROIs in the multispectral image sensor.
  • The processing unit may be arranged not only to conduct steps of image processing, but may also be involved in controlling operation of the sensors.
  • the processing unit is operable to adjust, or calibrate, a spectral representation of the one or more ROIs of the first image with spectral data determined from the corresponding spectral ROIs.
  • the spectral data determined from the corresponding spectral ROIs can be used to adjust spectral representation only of the corresponding ROI in the first image, leaving the rest of the spectral representation untouched. In this way, only parts of the image can be adjusted with spectral data from dedicated ROIs, while the rest may not be adjusted or may be adjusted with sensor data of another sensor, like an optical sensor.
  • The processing unit is operable to adjust, or calibrate, a spectral representation of the first image without spectral data determined from spectral ROIs, denoted void spectral ROI.
  • Specific areas in the image may be excluded from analysis (set to ROI voids).
  • This can be due to saturation of one or several spectral channels; due to the need to compare the spatial-spectral distribution of detected light from light sources of different color temperatures or different light types; due to the need to analyze spatially distributed reflections; or due to known properties of the optical systems of the multispectral imaging sensor and/or main imaging sensor. This is made possible by defining position, size and shape of void ROIs together with an appropriate definition of valid ROIs.
  • the processing unit is operable to define the ROIs in the first images by user input, by object recognition, and/or by database matching.
  • Object recognition may involve known procedures of image processing, such as edge detection techniques, e.g. Canny edge detection, to find edges, as well as specific multispectral channel detection or specific feature detection.
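For illustration only (not part of the disclosure), a minimal gradient-magnitude edge map, a much-simplified stand-in for a Canny-style detector, could be used to propose object ROIs; the threshold parameter is an assumption:

```python
import numpy as np

def gradient_edges(gray, threshold=0.5):
    """Minimal gradient-magnitude edge map (a simple stand-in for
    Canny-style detection) that can be used to propose object ROIs."""
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # central differences, x
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # central differences, y
    magnitude = np.hypot(gx, gy)
    # keep pixels whose gradient exceeds a fraction of the maximum
    return magnitude > threshold * magnitude.max()
```

A full Canny implementation would add Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of this gradient step.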
  • The main image sensor has a first field-of-view and the multispectral image sensor has a second field-of-view.
  • The main image sensor and the multispectral image sensor are arranged next to each other such that the first and the second fields-of-view are overlapping.
  • The main image sensor and the multispectral image sensor have a shared focal plane.
  • The main image sensor and multispectral image sensor comprise electronic sensors and at least one optical element or multiple optical elements for imaging.
  • The main image sensor has a high spatial resolution and a comparably low spectral resolution.
  • The multispectral image sensor has a high spectral resolution and a comparably low spatial resolution.
  • The main image sensor has a higher spatial resolution than the multispectral image sensor.
  • The multispectral image sensor has a higher spectral resolution than the main image sensor.
  • The main imaging sensor has a high number of pixels and the multispectral sensor has a low number of pixels (e.g. more than 4x, up to 100x, 1000x or even 80000x lower).
  • At least one embodiment further comprises at least one additional optical sensor, wherein the processing unit is operable to adjust, or calibrate, the spectral representation of the first image using the determined spectral data and using data generated by the additional optical sensor.
  • The additional optical sensor can be used to adjust, or calibrate, the spectral representation of the first image in all areas other than the defined ROIs, or be used as an additional source of information to adjust, or calibrate, the spectral representation of the first image using the determined spectral data and the data generated by the additional optical sensor.
  • At least the main image sensor and the multispectral image sensor are integrated into a sensor module.
  • The processing unit is integrated into the sensor module.
  • An electronic device comprises at least one multispectral image sensor arrangement according to one of the aforementioned aspects, and a host system.
  • The host system comprises one of a mobile device, a digital camera, such as a security camera or drone camera, or a, i.e. handheld, spectrometer.
  • A method of multispectral imaging uses a multispectral image sensor arrangement having a main image sensor, a multispectral image sensor and a processing unit.
  • The method involves, using the main image sensor, acquiring a spatially resolved first image of a scene and, using the multispectral sensor, acquiring a spectrally resolved second image of the same scene.
  • Using the processing unit, one or more regions-of-interest, ROIs, are defined in the first image, and one or more spectral ROIs are defined in the second image corresponding to the ROIs in the first image.
  • Spectral data is determined from the spectral ROIs of the second image.
  • A spectral representation of the first image is adjusted using the determined spectral data.
  • Figure 1 shows an example embodiment of a multispectral image sensor arrangement
  • Figure 2 shows an example image acquired by the main image sensor
  • Figure 3 shows an example image acquired by the multi-spectral image sensor
  • Figure 4 shows another example image acquired by the main image sensor.
  • Figure 1 shows an example embodiment of a multispectral image sensor arrangement.
  • the multispectral image sensor arrangement comprises an imaging module IM with a processing unit PU.
  • the imaging module comprises a main image sensor IS and a multispectral image sensor MS.
  • the main image sensor IS is complemented with first optics 01.
  • the main image sensor comprises an array of pixels, e.g. implemented as a CMOS photodetector or charge-coupled device, CCD.
  • The first optics O1 provides a first field-of-view FOV1 characterized by a first solid angle Ω1.
  • The main image sensor IS features high lateral spatial resolution for both input and output, and high axial resolution, with a short depth-of-field, DoF.
  • spectral resolution of the main image sensor can be low, i.e. the main image sensor may be arranged to provide a monochromatic (1 channel) or red, green, blue, or RGB, (3 channels) image as output.
  • The main image sensor can be arranged with, or complemented with, autofocus, distance estimation, segmentation, region-of-interest, ROI, selection, coordinates selection, 3D reconstruction etc. functionality. Said functions may be controlled by the processing unit PU, or dedicated control circuitry.
  • The multispectral image sensor MS is complemented with second optics O2 and comprises an array of spectral pixels, e.g. the spectral pixels provide optical spectral channels distributed over the visible, IR and/or UV range.
  • The term "spectral" indicates that the multispectral image sensor, or spectral pixels, is arranged to generate an image of spectral data.
  • The second optics O2 provides a second field-of-view FOV2 characterized by a second solid angle Ω2.
  • the multispectral image sensor MS features a given lateral spatial resolution for an input and a lower spatial resolution for an output.
  • The lateral spatial resolution of a spectral image sensor could be e.g. in the range of 5 MP (the MS input) or lower, which is low compared to the main image sensor IS that could be in the range of 40 MP, for example.
  • the multispectral image sensor may also feature low axial resolution, with longer DoF.
  • Spectral resolution of the multispectral image sensor can be high, i.e. the multispectral image sensor may have at least six spectral channels (e.g. in the range of 6, 12, 20, 50 or 120 channels, or 4 channels with defined spectral positions and FWHM) or provide a highly defined spectral position in a spectral image as output (specificity).
  • the multispectral image sensor can be sensitive in VIS, VIS/NIR, NIR, SWIR, and VIS/SWIR.
  • the multispectral image sensor can be arranged with, or complemented with, receiving metadata from the main image sensor or other sensors/single point calibrated color sensors or databases or coordinates, associated with the imaging module IM.
  • The multispectral image sensor can be arranged to provide additional color-related and/or spatial information to machine learning based algorithms for object identification, or spectral information with or without spatial and/or spectral averaging. Said functions may be controlled or implemented by the processing unit PU, or dedicated control circuitry.
  • The main image sensor IS and the multispectral image sensor MS are arranged next to each other in the imaging module IM such that the first and second fields-of-view FOV1, FOV2 are overlapping.
  • The optics are further arranged so that the main image sensor and the multispectral image sensor have a shared focal plane FP at a distance d1 away from the imaging module IM.
  • The focal plane may be shared in the sense that the depths of field DOF1, DOF2 corresponding to the respective optics O1, O2 are overlapping.
  • Figure 2 shows an example image acquired by the main image sensor.
  • The image sensor IS is operable to acquire a spatially resolved first image IM1 of a scene. Depicted are different objects of various size and shape. The objects are shown as black and white but generally may also have different colors, or spectral content.
  • The objects can be represented by defining regions-of-interest, ROIs, in the first image IM1, as will be discussed further below.
  • Figure 3 shows an example image acquired by the multi- spectral image sensor.
  • the multispectral image sensor MS is operable to acquire a spectrally resolved second image IM2 of the same scene.
  • The multispectral image sensor can have a different field-of-view FOV2, electronic sensor pixel size, and optics-specific point spread function, PSF for short, compared to the image sensor IS.
  • the resolution of the multispectral image sensor could be limited by its PSF or by its pixel size.
  • the PSF can even vary from one spectral channel to another spectral channel, and can vary for a given spectral channel.
  • the proposed principle suggests a specific method, e.g. to be conducted by means of the processing unit PU, to provide accurate spectral data to objects visible in first images IM1 of the main image sensor IS, e.g. by applying digital processing and knowledge about combined opto-electronic performance of the multispectral image sensor MS.
  • This may involve sampling of all spectral channels, which may have the constraint of being appropriate for all channel-dependent PSF sizes.
  • the sampling should be able to digitalize the spectral channel with the smallest PSF according to the Nyquist criterion.
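The Nyquist condition above amounts to choosing a sampling pitch no larger than half the smallest channel PSF. A toy selection routine follows, purely as an illustration; the candidate pitches and the micrometre units are assumptions of the sketch:

```python
def choose_sampling_pitch(psf_fwhm_um, candidate_pitches_um):
    """Return the coarsest candidate sampling pitch that still digitizes
    the smallest channel PSF at the Nyquist rate (pitch <= FWHM / 2)."""
    limit = min(psf_fwhm_um) / 2.0
    valid = [p for p in candidate_pitches_um if p <= limit]
    if not valid:
        raise ValueError("no candidate pitch satisfies the Nyquist limit")
    return max(valid)
```

Picking the coarsest admissible pitch mirrors the trade-off in the text: fine sampling works for all PSF sizes, but coarser sampling is cheaper as long as the smallest PSF is still resolved.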
  • a second image IM2 is depicted and shows a number of objects represented by respective PSFs of variable size (shown in the upper row of the image) .
  • a given PSF may vary in size over area for a single channel, as indicated in the middle row of the image.
  • the PSF can be variable in size over different spectral channels.
  • A spectral measurement according to the proposed concept may include the constraint that the areas of all involved PSFs overlap for best accuracy (shown in the bottom row of the image). Other areas may have reduced accuracy of spectral measurements and/or reconstruction.
  • Sampling should ensure that even the smallest PSF is adequately digitized across the spectral channels.
  • The drawing below the example image IM2 shows different grids representing different ROIs and sampling (fine, mid, coarse). Furthermore, each grid also depicts PSFs of two different sizes. Coarse sampling (the example on the right side) does not ensure appropriate sampling for both PSFs and would lead to systematic error. Finer sampling works better for different or all PSF sizes and would allow improved channel overlap.
  • Figure 4 shows another example image acquired by the main image sensor.
  • the drawing illustrates that regions-of- interest, ROIs, can be defined in the first image IM1, i.e. the objects shown in the first image IM1 can be selected for multispectral analysis by ROIs of different sizes and shapes.
  • ROIs of different sizes are presented as dashed squares.
  • In the top left of the drawing, objects are smaller than the ROI size; the ROI includes the objects.
  • In the top right, objects are of comparable or the same size as the ROI; the ROI can include an object completely or partially.
  • At the bottom, a ROI is smaller than the depicted objects, i.e. one ROI can lie within an object or partially overlap with an object.
  • the ROIs size can be analyzed and adjusted to fit a size definition based on the sampling accuracy of the multispectral sensor MS (a set of spectral channels that is application dependent, e.g. depends on the actual implementation of the multispectral image sensor MS, such as number of spectral pixels or channels) .
  • the ROIs in the first image IM1 are defined based on the size definition of the multispectral image sensor.
  • the size definition includes a predefined sampling parameter, e.g. to ensure that Nyquist criterion is met for the multispectral image sensor.
  • the parameters of multispectral acquisition can be adjusted according to workflows described below.
  • a method of multispectral imaging includes procedural steps, e.g. conducted or initiated by the processing unit PU using the main image sensor IS and multispectral image sensor MS. Details related to said steps, including a succession of procedural steps, may vary.
  • the method comprises the steps of:
  • Workflows may use the first image IM1 to define initial ROIs.
  • Other workflows use the second image of the multispectral sensor MS to define initial ROIs. Both ROIs or spectral ROIs can be determined as initial ROIs.
  • A spatially resolved first image IM1, e.g. a high resolution RGB/BW picture, of a scene is acquired using the main image sensor IS.
  • ROIs are defined in the first image IM1.
  • Methods for defining the ROIs could involve coordinates/areas definition from the main image sensor, e.g. based on intensity threshold analysis, gradient analysis, artificial intelligence, Al, object recognition, RGB component recognition, etc.
  • ROIs can also be user supervised (e.g. user input, such as touch, audio, eye tracking, etc.) or come from other sources of coordinate definitions.
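One of the coordinate-definition methods mentioned above, intensity threshold analysis, can be sketched as follows. This is an illustration only, not part of the disclosure; it proposes one bounding-box ROI per bright blob using a threshold plus 4-connected component labeling:

```python
import numpy as np
from collections import deque

def threshold_rois(gray, threshold):
    """Propose bounding-box ROIs (x, y, w, h) around bright regions using
    an intensity threshold and 4-connected component labeling."""
    mask = gray > threshold
    seen = np.zeros_like(mask, dtype=bool)
    rois = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # flood-fill this component, tracking its bounding box
                queue = deque([(y, x)])
                seen[y, x] = True
                y0 = y1 = y
                x0 = x1 = x
                while queue:
                    cy, cx = queue.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                rois.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))
    return rois
```

In a device as described, such candidate ROIs would then be resized to meet the sampling requirements of the multispectral sensor before spectral acquisition.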
  • A spectrally resolved second image IM2, e.g. a lower resolution multispectral image, of the same scene is acquired using the multispectral sensor MS.
  • the next step can be summarized as defining one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1.
  • this step may involve a number of additional steps, which are discussed now.
  • One step relates to applying aberration corrections necessary for a specific multispectral image sensor, e.g. for all spectral channels or selected spectral channel(s).
  • "Aberration corrections" for the multispectral image sensor MS could include:
  • these aberrations include defocus, spherical, coma, astigmatism, and/or field curvature.
  • Corrections could be quantified for each individual multispectral image sensor or be averaged for a type of multispectral image sensor in the form of a lookup table, LUT.
  • Specific parameters may be called up in the step of processing by means of the processing unit, when one or more spectral channels are allocated to a spectral ROI by the needs of an application, for example.
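A LUT-based correction of this kind can be sketched as follows. The table contents, the channel/zone keys, and the idea of a simple multiplicative gain are all illustrative assumptions, not values from the disclosure:

```python
# Hypothetical per-channel, per-field-zone gain LUT, e.g. quantified once
# per multispectral image sensor type, as suggested in the text.
ABERRATION_LUT = {
    ("ch0", "center"): 1.00, ("ch0", "edge"): 1.08,
    ("ch1", "center"): 1.02, ("ch1", "edge"): 1.15,
}

def apply_aberration_correction(sample, channel, zone, lut=ABERRATION_LUT):
    """Scale a raw spectral sample by the LUT gain for its channel and
    field zone; unknown combinations pass through uncorrected."""
    return sample * lut.get((channel, zone), 1.0)
```

Real aberration correction (defocus, coma, astigmatism, field curvature) would generally be spatially varying and more than a scalar gain; the LUT lookup per channel and field position is the part that mirrors the text.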
  • a next step may involve overlap, correlation or comparison of the first and second images IM1, IM2, e.g. in view of the requirements of the specific application.
  • the first and second f ields-of-view F0V1, F0V2 as well as scaling can be different.
  • FOV2 can be 45 degrees while FOV1 could be 120 degrees.
  • "Overlap" information from the imaging module IM could be applied for one or multiple regions-of-interest, ROIs. The overlap may not be available for the complete area of one of the sensors without additional motion of the two sensors.
  • An overlap, correlation or comparison procedure to define one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1 could include the following steps:
  • a geometrical relation or overlap is estimated for a definite ROI in the first image IM1 and a spectral ROI in the second image.
  • the overlap could be visualized or highlighted in the first image; this step can be guided by using the "overlap" information discussed above,
  • The estimated parameters of PSF variation across the spectral channels of the MS provide a measure of how accurate the spectral representation in IM1 can be.
  • the spectral ROI selection can be visualized or highlighted as an overlap in the first image IM1.
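The geometric part of the overlap estimation can be illustrated with a small calculation. For the example figures above (FOV1 = 120°, FOV2 = 45°), and assuming aligned optical axes and a shared focal plane (simplifications for this sketch), the far-field extent of each view scales with tan(FOV/2):

```python
import math

def fov_overlap_fraction(fov_wide_deg, fov_narrow_deg):
    """Linear fraction (per axis) of the wide field of view that the
    narrow field of view covers, assuming aligned optical axes and a
    shared focal plane; the far-field extent scales with tan(FOV / 2)."""
    extent = lambda fov_deg: math.tan(math.radians(fov_deg) / 2.0)
    return min(1.0, extent(fov_narrow_deg) / extent(fov_wide_deg))
```

With these example numbers only roughly the central quarter (per axis) of IM1 has spectral coverage, which is why ROIs outside the overlap cannot be assigned spectral ROIs without moving the sensors.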
  • Further steps may include spatial and/or spectral averaging or adjustment of parameters for multispectral data acquisition.
  • An option for averaging could include the following processing steps:
  • a size of the ROI in the first image IM1 in each coordinate could be at least three times larger than the selected PSF size in the corresponding area in the second image IM2 (a) ,
  • spectral channels (depending on the specific application requirement, if high spectral accuracy is not needed or if a particular spectral combination or spectral selectivity is needed).
  • the size of the ROI in each coordinate may be required to be at least three times larger than the selected channel PSF size in the corresponding area (a),
  • the next steps involve determining spectral data from the selected spectral ROIs of the second image IM2 and, as a consequence, using the determined spectral data to adjust a spectral representation of the first image IM1.
  • the adjusted spectral representation may be a color or white balance for the corresponding ROI.
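The averaging option above (ROI at least three times larger than the PSF in each coordinate, then spatial averaging per spectral channel) can be sketched in a few lines. This is only an illustration with hypothetical array shapes and helper names, not the claimed processing:

```python
import numpy as np

def roi_exceeds_psf(roi_shape, psf_size, factor=3):
    # each ROI coordinate must be at least `factor` times the PSF size
    return all(dim >= factor * psf_size for dim in roi_shape)

def average_spectral_roi(cube, roi, psf_size):
    """Spatially average a spectral ROI of a multispectral cube (H x W x C).

    Returns a per-channel spectrum, or None when the ROI is too small
    relative to the PSF for a reliable spatial average.
    """
    r, c, h, w = roi  # (row, col, height, width) in the second image IM2
    if not roi_exceeds_psf((h, w), psf_size):
        return None
    return cube[r:r + h, c:c + w, :].mean(axis=(0, 1))

# usage: a 12-channel cube, 5-pixel PSF, 20x20 ROI (>= 3 * 5 per coordinate)
cube = np.random.rand(64, 64, 12)
spectrum = average_spectral_roi(cube, (10, 10, 20, 20), psf_size=5)
```

A single scalar `psf_size` is assumed here; in practice the PSF can vary per spectral channel, as noted above.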
  • a spatially resolved first image IM1, e.g. a high resolution RGB/BW picture, is acquired using the image sensor IS.
  • a spectrally resolved second image IM2, e.g. a lower resolution multispectral image, of the same scene is acquired using the multispectral sensor MS.
  • aberration corrections are conducted, if necessary for a specific multispectral image sensor MS, e.g. for all spectral channels or selected spectral channel(s). "Aberration corrections" could be similar to the steps discussed above in the context of the first workflow.
  • spectral ROIs in the second image IM2 are defined, which correspond to ROIs in the first image IM1. Spatial ROIs and parameters are selected from the second image IM2. Selection methods could be based on:
  • a further step may relate to an overlap, correlation or comparison procedure to define one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1.
  • a first image IM1, after overlap with results from a second image IM2, will show areas with the strongest features derived from the multispectral analysis definition and specific to an application.
  • the fields-of-view of the two sensors as well as scaling can be different. Thus, overlap may not be available for the complete area of one of the sensors without additional motion of the two sensors.
  • regions-of-interest ROIs
  • spectral ROIs are defined in the second image IM2 corresponding to the ROIs in the first image IM1.
  • the selection allows for customer feedback and customer selection of regions of interest.
  • the previous steps can be repeated.
  • the procedure allows switching to the first workflow depending on the customer's input.
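Defining a spectral ROI in the second image IM2 that corresponds to a ROI in the first image IM1 can be sketched as pure coordinate scaling, assuming for illustration that the two fields-of-view coincide and only the resolutions differ (real alignment with FOV1 ≠ FOV2 needs the "overlap" information discussed above). All names and shapes here are hypothetical:

```python
def map_roi_to_second_image(roi, shape1, shape2):
    """Map a ROI (row, col, height, width) from the IM1 pixel grid to
    the IM2 pixel grid by scaling coordinates with the resolution ratio."""
    sy = shape2[0] / shape1[0]  # row scale factor IM1 -> IM2
    sx = shape2[1] / shape1[1]  # column scale factor IM1 -> IM2
    r, c, h, w = roi
    return (int(r * sy), int(c * sx),
            max(1, round(h * sy)), max(1, round(w * sx)))

# a ROI in a 1080x1920 RGB image mapped into a 48x64 multispectral image
spectral_roi = map_roi_to_second_image((100, 200, 300, 400),
                                       (1080, 1920), (48, 64))
```

The `max(1, ...)` guard keeps the mapped ROI at least one multispectral pixel wide, since IM2 typically has far fewer pixels than IM1.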

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

A multispectral image sensor arrangement comprises a main image sensor (IS), a multispectral image sensor (MS) and a processing unit (PU). The image sensor (IS) serves to acquire a spatially resolved first image (IM1) of a scene. The multispectral sensor (MS) serves to acquire a spectrally resolved second image (IM2) of the same scene. The processing unit (PU) serves to define one or more regions of interest, ROIs, in the first image (IM1), define one or more spectral ROIs in the second image (IM2) corresponding to the ROIs in the first image (IM1), determine spectral data from the spectral ROIs of the second image (IM2), and use the determined spectral data to adjust a spectral representation of the first image (IM1).
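The final processing step named in the abstract, adjusting a spectral representation of the first image from the determined spectral data, could for a white-balance use case look roughly as follows. This is a gray-world style sketch with hypothetical names, not the claimed method:

```python
import numpy as np

def white_balance_gains(roi_spectrum):
    """Derive per-channel gains that equalize the channel means of the
    spectral data determined from a spectral ROI."""
    s = np.asarray(roi_spectrum, dtype=float)
    return s.mean() / s

def apply_gains(image, gains):
    # adjust the spectral representation of the first image IM1 per channel
    return np.clip(image * gains, 0.0, 1.0)

# a reddish ROI spectrum yields a gain < 1 for the red channel
gains = white_balance_gains([2.0, 1.0, 1.0])
balanced = apply_gains(np.full((4, 4, 3), 0.5), gains)
```

Here the ROI spectrum is reduced to three RGB-like values; with a true multispectral ROI the spectrum would first be projected onto the color channels of IM1.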
PCT/EP2023/072014 2022-08-30 2023-08-09 Multispectral image sensor arrangement, electronic device and method for multispectral imaging WO2024046727A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022121896.1 2022-08-30
DE102022121896 2022-08-30

Publications (1)

Publication Number Publication Date
WO2024046727A1 true WO2024046727A1 (fr) 2024-03-07

Family

ID=87747763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/072014 WO2024046727A1 (fr) 2022-08-30 2023-08-09 Multispectral image sensor arrangement, electronic device and method for multispectral imaging

Country Status (1)

Country Link
WO (1) WO2024046727A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070064119A1 (en) * 2004-05-26 2007-03-22 Olympus Corporation Photographing system
US20080297861A1 (en) * 2003-04-25 2008-12-04 Quad/Tech, Inc. Image processing of a portion of multiple patches of a colorbar
US20120062888A1 (en) * 2010-09-10 2012-03-15 Chemimage Corporation Method for operating an optical filter in multiple modes
US20150086117A1 (en) * 2013-09-24 2015-03-26 Corning Incorporated Hyperspectral detector systems and methods using context-image fusion
EP2944930A2 (fr) * 2014-05-16 2015-11-18 Cubert GmbH Spatially and spectrally resolving hyperspectral camera and method
WO2019089531A1 (fr) * 2017-10-30 2019-05-09 University Of Maryland, College Park Brillouin imaging devices, and systems and methods employing such devices
WO2021105398A1 (fr) * 2019-11-27 2021-06-03 ams Sensors Germany GmbH Ambient light source classification

Similar Documents

Publication Publication Date Title
US8908026B2 (en) Imaging method and microscope device
US10247933B2 (en) Image capturing device and method for image capturing
EP2943761B1 (fr) Système et procédés d'imagerie multispectrale plein champ
US7579577B2 (en) Image capturing apparatus having a filter section disposed on periphery of a light passing section of a partial wavelength spectrum diaphragm section
AU2007324081B2 (en) Focus assist system and method
US9638575B2 (en) Measuring apparatus, measuring system, and measuring method
US11614363B2 (en) Digital pathology color calibration and validation
TWI471004B (zh) 成像裝置、成像方法及程式
US20140028839A1 (en) Image processing method, storage medium, image processing apparatus and image pickup apparatus
CN105359024B (zh) 摄像装置和摄像方法
WO2014031611A1 (fr) Système et appareil pour correction de couleur dans des platines porte-objet de microscope à transmission
JP5882789B2 (ja) 画像処理装置、画像処理方法、及びプログラム
CN114241066A (zh) 用于生成hdr图像的显微镜系统和方法
US8749640B2 (en) Blur-calibration system for electro-optical sensors and method using a moving multi-focal multi-target constellation
Nouri et al. Calibration and test of a hyperspectral imaging prototype for intra-operative surgical assistance
JP7237450B2 (ja) 画像処理装置、画像処理方法、プログラム、記憶媒体及び撮像装置
WO2024046727A1 (fr) 2022-08-30 2023-08-09 Multispectral image sensor arrangement, electronic device and method for multispectral imaging
KR102027106B1 (ko) 콘트라스트가 향상된 영상취득장치 및 방법, 이를 위한 컴퓨터 프로그램 및 기록매체
JP7254440B2 (ja) 画像処理装置、撮像装置、画像処理方法、および、プログラム
WO2014208188A1 (fr) Appareil de traitement d'image et procédé de traitement d'image
Gebejes et al. Color and image characterization of a three CCD seven band spectral camera
US9818322B2 (en) Method and system for obtaining color measurement of a display screen
US20230418042A1 (en) Microscopy System and Method for the Color Correction of Microscope Images
Yuasa et al. Color adjustment algorithm adapted to the spectral reflectance estimation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23757532

Country of ref document: EP

Kind code of ref document: A1