WO2022249598A1 - Information processing method, information processing device, and program - Google Patents
- Publication number: WO2022249598A1 (PCT/JP2022/007565)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- unit
- information processing
- value
- Prior art date
Classifications
- G06T5/90 — Dynamic range modification of images or parts thereof (under G06T5/00, Image enhancement or restoration)
- G01N21/64 — Fluorescence; phosphorescence (under G01N21/00, Investigating or analysing materials by the use of optical means)
- G06T7/0012 — Biomedical image inspection (under G06T7/00, Image analysis)
- G06T7/11 — Region-based segmentation
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/10056 — Microscopic image
- G06T2207/10064 — Fluorescence image
- G06T2207/30024 — Cell structures in vitro; tissue sections in vitro
Definitions
- the present disclosure relates to an information processing method, an information processing device, and a program.
- A fluorescence observation device using a line spectroscope has been proposed as a configuration for realizing such a pathological image diagnosis method using fluorescence staining.
- The line spectroscope irradiates a fluorescently stained pathological specimen with linear line illumination, and captures an image by spectrally dispersing the fluorescence excited by that line illumination.
- Fluorescence image data obtained by imaging are output sequentially in the line direction of the line illumination and in the wavelength direction of the spectroscopy, and are thus output continuously without interruption.
- The pathological specimen is imaged while being scanned in the direction perpendicular to the line direction of the line illumination, so that the spectral information of the pathological specimen based on the captured image data can be handled as two-dimensional information.
- the present disclosure provides an information processing method, an information processing apparatus, and a program capable of displaying an image with a more appropriate dynamic range.
- According to the present disclosure, an information processing method includes: a storage step of storing, in association with each other, first image data of unit area images, each unit area image being an area obtained by dividing a fluorescence image into a plurality of areas, and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion step of converting a pixel value of a combined image of selected unit area images based on a representative value selected from the first values associated with the combination of the selected unit area images.
- the combination of the selected unit area images may correspond to the observation range displayed on the display unit, and the range of the combination of the unit area images may be changed according to the observation range.
- a display control step of displaying a range corresponding to the observation range on the display unit may be further provided.
- the observation range may correspond to the observation range of the microscope, and the combination range of the unit area images may be changed according to the magnification of the microscope.
- the first image data may be image data whose dynamic range is adjusted based on a pixel value range obtained according to a predetermined rule in the original image data of the first image data.
- A pixel value of the original image data may be obtained by multiplying the first image data by the first value associated with that first image data.
- The storing step may further store, in association with each other, second image data obtained by redividing the fluorescence image into a plurality of areas of a size different from the areas of the first image data, and a first value indicating a pixel value range for each of the second image data.
- The converting step may convert pixel values for the selected combination of second image data based on a representative value selected from the respective first values associated with the selected combination of second image data.
- the pixel value range may be a range based on statistics in the original image data corresponding to the first image data.
- The statistic may be any of the maximum value, the mode, or the median.
- the pixel value range may be a range between the minimum value in the original image data and the statistic.
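The pixel value range described above can be sketched as follows. This is an illustrative NumPy sketch, not the claimed implementation; the function name and the set of supported statistics simply mirror the options listed in the text (maximum, mode, median), with the range running from the minimum of the original image data up to the chosen statistic.

```python
import numpy as np

def pixel_value_range(original, statistic="max"):
    """Range from the minimum of the original image data up to the
    chosen statistic (maximum, mode, or median)."""
    flat = np.asarray(original, dtype=float).ravel()
    if statistic == "max":
        upper = flat.max()
    elif statistic == "median":
        upper = float(np.median(flat))
    elif statistic == "mode":
        vals, counts = np.unique(flat, return_counts=True)
        upper = float(vals[counts.argmax()])  # most frequent pixel value
    else:
        raise ValueError(f"unknown statistic: {statistic}")
    return float(flat.min()), float(upper)
```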
- The first image data may be data obtained by dividing the pixel values of the original image data corresponding to the unit area image by the first value.
- The transforming step may multiply each of the first image data in the selected unit area images by the first value corresponding to that first image data, and divide the result by the maximum of the first values associated with the combination of the selected unit area images.
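The storage and conversion steps described above can be sketched in Python/NumPy. This is a minimal illustration under assumptions, not the patented implementation: the tile size is arbitrary, and the tile maximum stands in for the first value (the statistic could equally be the mode or median, per the text).

```python
import numpy as np

TILE = 4  # assumed unit-area (tile) size, for illustration only

def store_tiles(fluor_img):
    """Storage step: divide the fluorescence image into unit areas and,
    for each, store normalized first image data together with a first
    value indicating its pixel value range (here, the tile maximum)."""
    tiles, firsts = {}, {}
    h, w = fluor_img.shape
    for i in range(0, h, TILE):
        for j in range(0, w, TILE):
            block = fluor_img[i:i + TILE, j:j + TILE].astype(float)
            first = float(block.max()) or 1.0   # avoid dividing by zero
            tiles[(i, j)] = block / first       # first image data
            firsts[(i, j)] = first              # associated first value
    return tiles, firsts

def combine(tiles, firsts, keys):
    """Conversion step: for a selected combination of unit areas,
    rescale each tile by its own first value divided by the
    representative value (the maximum first value in the selection),
    so the combined image shares one dynamic range."""
    rep = max(firsts[k] for k in keys)
    return {k: tiles[k] * (firsts[k] / rep) for k in keys}
```

Note that multiplying a stored tile by its first value recovers the original pixel values, matching the reconstruction described above.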
- A first input step of inputting a calculation method for the statistic may be further provided.
- An analysis step of calculating the statistic according to the input of the input unit, and a data generation step of generating, based on the analysis in the analysis step, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a pixel value range for each of the first image data, may be further provided.
- the conversion step may select a combination of the first images according to the input of the second input step.
- The display control step may cause the display unit to display forms relating to the first input step and the second input step, and an operation step of indicating the position of any one of the displayed forms may be further provided.
- The first input step and the second input step may input the related information in accordance with instructions given in the operation step.
- the method may further include a data generation step of dividing each of the plurality of fluorescence images into image data and coefficients that are the first values for the image data.
- the analysis step of performing the cell analysis may be performed based on an image range instructed by the operator.
- According to the present disclosure, an information processing device includes: a storage unit that stores, in association with each other, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion unit that converts a pixel value of a combined image of selected first images based on a representative value selected from the first values associated with the combination of the selected first images.
- According to the present disclosure, a program causes an information processing apparatus to execute: a storing step of storing, in association with each other, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion step of converting pixel values of a selected combination of the first images based on a representative value selected from the first values associated with the selected combination of the first images.
- FIG. 1 is a schematic diagram for explaining line spectroscopy applicable to the embodiment.
- FIG. 2 is a flowchart showing an example of line spectroscopy processing.
- FIG. 3 is a schematic block diagram of a fluorescence observation device according to an embodiment of the present technology.
- FIG. 4 is a diagram showing an example of the optical system in the fluorescence observation device.
- FIG. 5 is a schematic diagram of a pathological specimen to be observed.
- FIG. 6 is a schematic diagram showing how line illumination illuminates an observation target.
- FIG. 7 is a diagram for explaining a method of acquiring spectral data when the imaging element in the fluorescence observation device is composed of a single image sensor.
- FIG. 8 is a diagram showing wavelength characteristics of the spectral data acquired in FIG. 7.
- FIG. 9 is a diagram for explaining a method of acquiring spectral data when the imaging device is composed of a plurality of image sensors.
- FIG. 10 is a conceptual diagram for explaining a scanning method of line illumination applied to an observation target.
- FIG. 11 is a conceptual diagram for explaining three-dimensional data (X, Y, λ) acquired by a plurality of line illuminations.
- FIG. 12 is a table showing the relationship between irradiation lines and wavelengths.
- A flowchart showing an example of a procedure of processing executed in the information processing device (processing unit).
- A diagram schematically showing the flow of acquisition processing of spectral data (x, λ) according to the embodiment, and a diagram schematically showing a plurality of unit blocks.
- FIG. 15 is a schematic diagram showing an example of the spectral data (x, λ) shown in section (b) of FIG. 14.
- A schematic diagram showing an example of spectral data (x, λ) in which the order of data arrangement is changed.
- A block diagram showing a configuration example of the gradation processing unit.
- A diagram conceptually explaining a processing example of the gradation processing unit.
- A diagram showing an example of data names corresponding to imaging positions, and a diagram showing an example of the data format of each unit rectangular block.
- A diagram showing an image pyramid structure for explaining a processing example of the image group generation unit.
- A diagram showing an example of regenerating a stitched image (WSI) as an image pyramid structure; an example of a display screen generated by the display control unit; a diagram showing an example in which the display area is changed; and a flowchart showing an example of processing by the information processing apparatus.
- A schematic block diagram of a fluorescence observation device according to a second embodiment, and a diagram schematically showing a processing example of a second analysis unit.
- FIG. 1 is a schematic diagram for explaining line spectroscopy applicable to the embodiment.
- FIG. 2 is a flowchart illustrating an example of line spectroscopic processing.
- a fluorescently stained pathological specimen 1000 is irradiated with linear excitation light, for example laser light, by line illumination (step S1).
- the pathological specimen 1000 is irradiated with the excitation light in a line shape parallel to the x-direction.
- the fluorescent substance obtained by fluorescent staining is excited by irradiation with excitation light, and emits fluorescence in a line (step S2).
- This fluorescence is spectroscopically separated by a spectroscope (step S3) and imaged by a camera.
- The imaging device of the camera has pixels arranged in a two-dimensional lattice pattern, with pixels aligned in the row direction (x direction) and pixels aligned in the column direction (y direction).
- the captured image data 1010 has a structure including position information in the line direction in the x direction and wavelength ⁇ information by spectroscopy in the y direction.
- the pathological specimen 1000 is moved in the y direction by a predetermined distance (step S4), and the next imaging is performed.
- Image data 1010 in the next line in the y direction is acquired by this imaging.
- By repeating this imaging and movement, two-dimensional information of the fluorescence emitted from the pathological specimen 1000 can be obtained for each wavelength λ (step S5).
- Data obtained by stacking the two-dimensional information at each wavelength ⁇ in the direction of the wavelength ⁇ is generated as the spectrum data cube 1020 (step S6).
- data obtained by stacking two-dimensional information at wavelength ⁇ in the direction of wavelength ⁇ is called a spectrum data cube.
- the spectral data cube 1020 has a structure that includes two-dimensional information of the pathological specimen 1000 in the x and y directions and wavelength ⁇ information in the height direction (depth direction).
- By handling the spectral information of the pathological specimen 1000 in such a data configuration, it becomes possible to easily perform two-dimensional analysis of the pathological specimen 1000.
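The scan-and-stack procedure of steps S1 to S6 can be sketched as follows. The capture is simulated with synthetic data (the linear spectrum weights are an assumption standing in for the real dispersed fluorescence), but the stacking of (x, λ) frames into an (x, y, λ) spectrum data cube mirrors the description above.

```python
import numpy as np

def capture_line(specimen, y, n_wavelengths):
    """Simulate one capture (steps S1-S3): the line at scan position y
    is dispersed into wavelength channels, giving image data with x
    position along one axis and wavelength along the other. Real data
    would come from the camera behind the spectroscope."""
    line = specimen[y].astype(float)             # fluorescence along x
    weights = np.linspace(1.0, 0.1, n_wavelengths)  # toy spectrum
    return np.outer(line, weights)               # shape (x, lambda)

def build_spectral_cube(specimen, n_wavelengths=8):
    """Steps S4-S6: move in the y direction, capture each line, and
    stack the (x, lambda) frames into the spectrum data cube
    (x, y, lambda)."""
    frames = [capture_line(specimen, y, n_wavelengths)
              for y in range(specimen.shape[0])]
    return np.stack(frames, axis=1)
```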
- FIG. 3 is a schematic block diagram of a fluorescence observation device according to an embodiment of the present technology.
- FIG. 4 is a diagram showing an example of an optical system in the fluorescence observation device.
- A fluorescence observation apparatus 100 of this embodiment includes an observation unit 1, a processing unit (information processing device) 2, and a display unit 3.
- The observation unit 1 includes an excitation unit 10 that irradiates a pathological specimen (pathological sample) with a plurality of line illuminations of different wavelengths arranged in parallel on different axes, a stage 20 that supports the pathological specimen, and a spectral imaging unit 30 that acquires the fluorescence spectrum (spectral data) of the linearly excited pathological specimen.
- "Arranged in parallel on different axes" means that the multiple line illuminations are on different axes and parallel to one another.
- Different axes mean not coaxial, and the distance between the axes is not particularly limited.
- Parallel is not limited to being parallel in a strict sense, but also includes a state of being substantially parallel. For example, there may be distortion derived from an optical system such as a lens, or deviation from a parallel state due to manufacturing tolerances, and such cases are also regarded as parallel.
- Based on the fluorescence spectrum of the pathological specimen (hereinafter also referred to as sample S) acquired by the observation unit 1, the information processing device 2 typically forms an image of the pathological specimen or outputs the distribution of the fluorescence spectrum.
- the image here refers to the composition ratio of the dyes that compose the spectrum, the autofluorescence derived from the sample, the waveform converted to RGB (red, green, and blue) colors, the luminance distribution of a specific wavelength band, and the like.
- the two-dimensional image information generated based on the fluorescence spectrum may be referred to as a fluorescence image.
- The information processing device 2 corresponds to the information processing device according to the present disclosure.
- the display unit 3 is, for example, a liquid crystal monitor.
- the input unit 4 is, for example, a pointing device, keyboard, touch panel, or other operating device. If the input unit 4 includes a touch panel, the touch panel can be integrated with the display unit 3 .
- the excitation unit 10 and the spectral imaging unit 30 are connected to the stage 20 via an observation optical system 40 such as an objective lens 44 .
- The observation optical system 40 has an autofocus (AF) function that tracks the optimum focus by means of a focus mechanism 60.
- the observation optical system 40 may be connected to a non-fluorescent observation section 70 for dark-field observation, bright-field observation, or the like.
- The fluorescence observation device 100 may be connected to a control unit 80 that controls the excitation unit (LD and shutter control), the XY stage serving as the scanning mechanism, the spectral imaging unit (camera), the focus mechanism (detector and Z stage), the non-fluorescence observation unit (camera), and the like.
- The excitation unit 10 includes a plurality of light sources L1, L2, ... capable of outputting light of a plurality of excitation wavelengths Ex1, Ex2, ....
- a plurality of light sources are typically composed of light-emitting diodes (LEDs), laser diodes (LDs), mercury lamps, and the like, and each light is converted into line illumination to irradiate the sample S on the stage 20 .
- FIG. 5 is a schematic diagram of a pathological specimen to be observed.
- FIG. 6 is a schematic diagram showing how line illumination is applied to an observation target.
- the sample S is typically composed of a slide containing an observation target Sa such as a tissue section as shown in FIG.
- a sample S (observation target Sa) is stained with a plurality of fluorescent dyes.
- the observation unit 1 enlarges the sample S to a desired magnification and observes it.
- The excitation unit has a plurality of line illuminations (two, Ex1 and Ex2, in the illustrated example), and the imaging areas R1 and R2 of the spectral imaging unit 30 are arranged so as to overlap the respective illumination areas.
- the two line illuminations Ex1 and Ex2 are each parallel to the Z-axis direction and arranged a predetermined distance ( ⁇ y) apart in the Y-axis direction.
- the imaging areas R1 and R2 respectively correspond to the slit sections of the observation slit 31 (see FIG. 4) in the spectral imaging section 30.
- The spectral imaging unit 30 has the same number of slits as the number of line illuminations.
- In this example, the illumination line width is wider than the slit width. If the illumination line width is larger than the slit width, the alignment margin of the excitation unit 10 with respect to the spectral imaging unit 30 can be increased.
- the wavelengths forming the first line illumination Ex1 and the wavelengths forming the second line illumination Ex2 are different from each other. Line-shaped fluorescence excited by these line illuminations Ex1 and Ex2 is observed in the spectroscopic imaging section 30 via the observation optical system 40 .
- The spectroscopic imaging unit 30 includes an observation slit 31 having a plurality of slits through which the fluorescence excited by the plurality of line illuminations can pass, and at least one imaging device 32 capable of individually receiving the fluorescence that has passed through the observation slit 31. A two-dimensional imager such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor is employed as the imaging device 32.
- the spectral imaging unit 30 acquires fluorescence spectral data (x, ⁇ ) from the line illuminations Ex1 and Ex2, using the pixel array in one direction (for example, the vertical direction) of the imaging device 32 as a wavelength channel.
- The obtained spectroscopic data (x, λ) are recorded in the information processing device 2 in association with the excitation wavelength that produced them.
- The information processing device 2 can be realized by hardware elements used in a computer, such as a CPU (Central Processing Unit), RAM (Random Access Memory), and ROM (Read Only Memory), together with the necessary software. Instead of or in addition to the CPU, a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), or another ASIC (Application Specific Integrated Circuit) may be used.
- the information processing device 2 has a storage section 21 , a data configuration section 22 , an image forming section 23 and a gradation processing section 24 .
- the information processing device 2 can configure the functions of a data configuration section 22 , an image forming section 23 and a gradation processing section 24 by executing a program stored in the storage section 21 . Note that the data configuration unit 22, the image forming unit 23, and the gradation processing unit 24 may be configured by circuits.
- the information processing device 2 has a storage unit 21 that stores spectral data representing the correlation between the wavelengths of the plurality of line illuminations Ex1 and Ex2 and the fluorescence received by the imaging device 32.
- a storage device such as a non-volatile semiconductor memory or a hard disk drive is used for the storage unit 21, and the standard spectrum of the autofluorescence related to the sample S and the standard spectrum of the single dye that stains the sample S are stored in advance.
- Spectroscopic data (x, ⁇ ) received by the imaging device 32 is acquired, for example, as shown in FIGS. 7 and 8 and stored in the storage unit 21 .
- Here, the storage unit that stores the autofluorescence of the sample S and the standard spectra of the single dyes and the storage unit that stores the spectroscopic data (measured spectra) of the sample S acquired by the imaging element 32 are both realized by the storage unit 21, but the configuration is not limited to this, and separate storage units may be used.
- FIG. 7 is a diagram for explaining a method of acquiring spectral data when the imaging device in the fluorescence observation device 100 is composed of a single image sensor.
- FIG. 8 is a diagram showing the wavelength characteristics of the spectral data acquired in FIG. 7. In this example, the fluorescence spectra Fs1 and Fs2 excited by the line illuminations Ex1 and Ex2 pass through the spectroscopic optical system (described later) and are finally imaged on the light-receiving surface of the imaging device 32 in a state shifted from each other by an amount proportional to Δy (see FIG. 6).
- FIG. 9 is a diagram for explaining a method of acquiring spectral data when the imaging device is composed of a plurality of image sensors.
- FIG. 10A and 10B are conceptual diagrams for explaining a scanning method of line illumination applied to an observation target.
- FIG. 11 is a conceptual diagram for explaining three-dimensional data (X, Y, ⁇ ) acquired by a plurality of line illuminations.
- the fluorescence observation apparatus 100 will be described in more detail below with reference to FIGS. 7 to 11.
- By reading out only the rows (Row_a to Row_b and Row_c to Row_d) on which the fluorescence spectra are formed, the frame rate of the imaging device 32 can be increased by a factor of Row_full/(Row_b − Row_a + Row_d − Row_c) compared with full-frame readout.
- a dichroic mirror 42 and a bandpass filter 45 are inserted in the optical path to prevent the excitation light (Ex1, Ex2) from reaching the imaging element 32.
- As a result, an intermittent portion IF is generated in the fluorescence spectrum Fs1 imaged on the imaging device 32 (see FIGS. 7 and 8). By also excluding such an intermittent portion IF from the readout area, the frame rate can be further improved.
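As a numeric illustration of this partial-readout gain (the sensor size and row numbers below are assumptions for illustration, not values from the disclosure):

```python
def readout_speedup(row_full, row_ranges):
    """Frame-rate gain when only the row bands carrying the fluorescence
    spectra are read out instead of the full frame, i.e.
    Row_full / (Row_b - Row_a + Row_d - Row_c) for bands (a, b), (c, d)."""
    rows_read = sum(b - a for a, b in row_ranges)
    return row_full / rows_read

# Illustrative numbers: a 2048-row sensor reading two 200-row bands
# runs 2048 / 400 = 5.12x faster than full-frame readout.
speedup = readout_speedup(2048, [(100, 300), (900, 1100)])
```

Narrowing the bands further, for example by excluding an intermittent portion, increases the gain in the same way.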
- the imaging device 32 may include a plurality of imaging devices 32a and 32b each capable of receiving fluorescence that has passed through the observation slit 31.
- In this case, the fluorescence spectra Fs1 and Fs2 excited by the line illuminations Ex1 and Ex2 are obtained on the imaging elements 32a and 32b, respectively, as shown in FIG. 9.
- the line illuminations Ex1 and Ex2 are not limited to being configured with a single wavelength, and each may be configured with a plurality of wavelengths. If the line illuminations Ex1, Ex2 each consist of multiple wavelengths, the fluorescence excited by them also contains multiple spectra.
- the spectroscopic imaging unit 30 has a wavelength dispersive element for separating the fluorescence into spectra derived from the excitation wavelengths.
- the wavelength dispersive element is composed of a diffraction grating, a prism, or the like, and is typically arranged on the optical path between the observation slit 31 and the imaging element 32 .
- the observation unit 1 further includes a scanning mechanism 50 that scans the stage 20 with the plurality of line illuminations Ex1 and Ex2 in the Y-axis direction, that is, in the arrangement direction of the line illuminations Ex1 and Ex2.
- The photographing region Rs is divided into a plurality of parts in the X-axis direction, and the operation of scanning the sample S in the Y-axis direction, then moving it in the X-axis direction, and scanning it again in the Y-axis direction is repeated.
- a single scan can capture spectroscopic images from a sample excited by several excitation wavelengths.
- The scanning mechanism 50 typically scans the stage 20 in the Y-axis direction, but the plurality of line illuminations Ex1 and Ex2 may instead be scanned in the Y-axis direction by a galvanometer mirror arranged in the middle of the optical system.
- three-dimensional data of (X, Y, λ) as shown in FIG. 11 are acquired for each of the plurality of line illuminations Ex1 and Ex2. Since the three-dimensional data derived from each of the line illuminations Ex1 and Ex2 are data whose coordinates are shifted by Δy along the Y axis, they are corrected and output based on the value of Δy recorded in advance or the value of Δy calculated from the output of the imaging device 32.
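As a concrete illustration of this Δy correction, the sketch below aligns two scans whose Y coordinates are offset by a known integer number of lines. It is only a sketch under assumed conventions (integer line offset, Ex1 leading Ex2 by Δy); the patent does not specify an implementation.

```python
# Illustrative sketch of the delta-y correction: two scans of the same
# sample are offset by an integer number of lines along Y, and the
# overlapping portions are kept so both share the same Y coordinates.
# Conventions (Ex1 leading Ex2 by dy lines) are assumptions.

def align_dy(data_ex1, data_ex2, dy):
    """Align two line-scan datasets offset by dy lines along Y.

    data_ex1[i] images sample line i + dy; data_ex2[i] images line i.
    Returns the overlapping, aligned portions of both datasets.
    """
    if dy == 0:
        return data_ex1, data_ex2
    # Drop the trailing dy lines of Ex1 and the leading dy lines of Ex2
    # so that corresponding indices refer to the same sample line.
    return data_ex1[:-dy], data_ex2[dy:]

# Example: Ex1 sees sample lines y2..y6, Ex2 sees y0..y4 (dy = 2).
ex1 = ["y2", "y3", "y4", "y5", "y6"]
ex2 = ["y0", "y1", "y2", "y3", "y4"]
a1, a2 = align_dy(ex1, ex2, 2)   # both now cover y2..y4
```

With real data, Δy would come from the pre-recorded value or be estimated from the output of the imaging device 32.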
- in this example, the line illumination serving as the excitation light is composed of two lines, but it is not limited to this, and three, four, five, or more lines may be used.
- Each line illumination may also include multiple excitation wavelengths, selected so as to minimize degradation of color separation performance. Even with only one line illumination, if the excitation light source is composed of multiple excitation wavelengths and each excitation wavelength is recorded in association with the row data obtained by the imaging device, the resolution of parallel off-axis illumination is not obtained, but a polychromatic spectrum is.
- FIG. 12 is a table showing the relationship between irradiation lines and wavelengths. For example, a configuration as shown in FIG. 12 may be adopted.
- Observation unit: Next, details of the observation unit 1 will be described with reference to FIG. Here, an example in which the observation unit 1 is configured as configuration example 2 in FIG. 12 will be described.
- the excitation unit 10 has a plurality (four in this example) of excitation light sources L1, L2, L3, and L4.
- the excitation light sources L1 to L4 are each composed of a laser light source, outputting laser light with wavelengths of 405 nm, 488 nm, 561 nm, and 645 nm, respectively.
- the excitation unit 10 further has a plurality of collimator lenses 11 and laser line filters 12, dichroic mirrors 13a, 13b, and 13c, a homogenizer 14, a condenser lens 15, and an entrance slit 16, arranged so as to correspond to the excitation light sources L1 to L4.
- the laser light emitted from the excitation light source L1 and the laser light emitted from the excitation light source L3 are each collimated by a collimator lens 11, transmitted through a laser line filter 12 that cuts the skirt of the respective wavelength band, and made coaxial by the dichroic mirror 13a.
- the two coaxial laser beams are further beam-shaped by a homogenizer 14 such as a fly-eye lens and a condenser lens 15 to form line illumination Ex1.
- the laser light emitted from the excitation light source L2 and the laser light emitted from the excitation light source L4 are made coaxial by the dichroic mirrors 13b and 13c, and shaped into line illumination Ex2, whose axis differs from that of the line illumination Ex1.
- the line illuminations Ex1 and Ex2 form off-axis line illuminations (primary images) separated by Δy at the entrance slit 16 (slit conjugate), which has a plurality of slit portions through which each line illumination can pass.
- the observation optical system 40 has a condenser lens 41 , dichroic mirrors 42 and 43 , an objective lens 44 , a bandpass filter 45 and a condenser lens 46 .
- the line illuminations Ex1 and Ex2 are collimated by a condenser lens 41 paired with an objective lens 44, reflected by dichroic mirrors 42 and 43, transmitted through the objective lens 44, and irradiated onto the sample S.
- Illumination as shown in FIG. 6 is formed on the surface of the sample S. Fluorescence excited by these illuminations is collected by the objective lens 44, reflected by the dichroic mirror 43, transmitted through the dichroic mirror 42 and the bandpass filter 45 that cuts the excitation light, collected again by the condenser lens 46, and enters the spectral imaging unit 30.
- the spectral imaging unit 30 has an observation slit 31, imaging elements 32 (32a, 32b), a first prism 33, a mirror 34, a diffraction grating 35 (wavelength dispersion element), and a second prism 36.
- the observation slit 31 is arranged at the condensing point of the condenser lens 46 and has the same number of slit parts as the number of excitation lines.
- the fluorescence spectra derived from the two excitation lines that have passed through the observation slit 31 are separated by the first prism 33 and reflected by the grating surface of the diffraction grating 35 via the mirrors 34, so that each is further separated into fluorescence spectra of the respective excitation wavelengths.
- the four fluorescence spectra thus separated are incident on the imaging elements 32a and 32b via the mirror 34 and the second prism 36, and developed into (x, ⁇ ) information as spectral data.
- the pixel size (nm/pixel) of the imaging elements 32a and 32b is not particularly limited, and is set to, for example, 2 nm or more and 20 nm or less. This dispersion value may be realized optically by the pitch of the diffraction grating 35, or by hardware binning of the imaging elements 32a and 32b.
- the stage 20 and the scanning mechanism 50 constitute an XY stage, which moves the sample S in the X-axis direction and the Y-axis direction in order to acquire a fluorescence image of the sample S.
- the operation of scanning the sample S in the Y-axis direction, then moving in the X-axis direction, and then scanning in the Y-axis direction is repeated (see FIG. 10).
- the non-fluorescence observation section 70 is composed of a light source 71, a dichroic mirror 43, an objective lens 44, a condenser lens 72, an imaging device 73, and the like.
- FIG. 4 shows an observation system using dark field illumination.
- the light source 71 is arranged below the stage 20 and irradiates the sample S on the stage 20 with illumination light from the side opposite to the line illuminations Ex1 and Ex2.
- the light source 71 illuminates from outside the NA (numerical aperture) of the objective lens 44, and the light (dark field image) diffracted by the sample S passes through the objective lens 44, the dichroic mirror 43, and the condenser lens 72, and is captured by the imaging device 73.
- with dark field illumination, even seemingly transparent samples such as fluorescently stained samples can be observed with contrast.
- the non-fluorescent observation unit 70 is not limited to an observation system that acquires dark field images, and may consist of an observation system capable of acquiring non-fluorescent images such as bright field images, phase contrast images, phase images, and in-line hologram images. For example, various observation methods such as the Schlieren method, the phase contrast method, the polarized light observation method, and the epi-illumination method can be employed to obtain non-fluorescent images.
- the position of the illumination light source is not limited to below the stage, and may be above the stage or around the objective lens. In addition to the method of performing focus control in real time, other methods such as a pre-focus map method in which focus coordinates (Z coordinates) are recorded in advance may be employed.
- FIG. 13 is a flowchart showing an example of the procedure of processing executed in the information processing device (processing unit) 2.
- Details of the gradation processing unit 24 (see FIG. 3) will be described later.
- the storage unit 21 stores the spectral data (fluorescence spectra Fs1 and Fs2 (see FIGS. 7 and 8)) acquired by the spectral imaging unit 30 (step 101).
- the storage unit 21 stores in advance the standard spectra of the autofluorescence of the sample S and of the dyes alone.
- the storage unit 21 improves the recording frame rate by extracting only the wavelength region of interest from the pixel array of the imaging device 32 in the wavelength direction.
- the wavelength region of interest corresponds to, for example, the visible light range (380 nm to 780 nm) or the wavelength range determined by the emission wavelength of the dye that dyes the sample.
- Wavelength regions other than the wavelength region of interest include, for example, sensor regions containing light of unnecessary wavelengths, sensor regions where there is clearly no signal, and sensor regions corresponding to excitation wavelengths that are cut by the dichroic mirror 42 or the bandpass filter 45 in the optical path.
- the wavelength region of interest on the sensor may be switched depending on the line illumination situation. For example, when fewer excitation wavelengths are used for line illumination, the wavelength range on the sensor is also limited, and the frame rate can be increased accordingly.
- the data calibration unit 22 converts the spectral data stored in the storage unit 21 from pixel data (x, λ) into wavelengths, so that all spectral data are output on a common set of discrete values in wavelength units ([nm], [μm], etc.) (step 102).
- the pixel data (x, λ) are not necessarily aligned neatly with the pixel rows of the imaging device 32, and may be distorted by slight tilt or distortion of the optical system. Therefore, conversion from pixels to wavelength units using a light source with a known wavelength yields a different wavelength (nm value) for each x-coordinate. Since handling the data in this state is complicated, the data are converted into data aligned on integer wavelengths by an interpolation method (for example, linear interpolation or spline interpolation).
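The conversion in step 102 onto a common integer wavelength grid can be sketched with linear interpolation as follows; the calibration values below are invented for illustration.

```python
# Sketch of step 102: intensities measured at calibrated but irregular
# wavelengths are resampled onto a common integer wavelength grid by
# linear interpolation. All numeric values are invented.

def resample_linear(wavelengths, values, grid):
    """Linearly interpolate (wavelengths, values) onto the points of grid."""
    out = []
    for g in grid:
        # Find the bracketing interval [w_i, w_{i+1}] containing g.
        for i in range(len(wavelengths) - 1):
            w0, w1 = wavelengths[i], wavelengths[i + 1]
            if w0 <= g <= w1:
                t = (g - w0) / (w1 - w0)
                out.append(values[i] + t * (values[i + 1] - values[i]))
                break
        else:
            raise ValueError(f"{g} outside calibrated range")
    return out

# One pixel column, calibrated to slightly irregular wavelengths (nm):
measured_nm = [500.3, 502.1, 504.0, 505.8]
intensity   = [10.0, 14.0, 12.0, 16.0]
common_grid = [501, 502, 503, 504, 505]       # common integer grid
resampled = resample_linear(measured_nm, intensity, common_grid)
```

Applying the same routine to every x-coordinate yields spectral data on a shared wavelength axis, which is what makes later per-wavelength processing straightforward.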
- the data calibration unit 22 normalizes the spectral data using an arbitrary light source and its representative spectrum (the average spectrum or spectral radiance of the light source), and outputs the result (step 103). Normalization eliminates instrument differences, and in spectrum waveform analysis it reduces the trouble of measuring individual component spectra each time. Furthermore, an approximate quantification value of the number of fluorescent dyes can be output from the sensitivity-calibrated luminance values.
- the sensitivity of the imaging device 32 corresponding to each wavelength is also corrected.
- the above processing is similarly executed for the illumination range by the line illuminations Ex1 and Ex2 on the sample S scanned in the Y-axis direction. Thereby, spectral data (x, y, ⁇ ) of each fluorescence spectrum are obtained for the entire range of the sample S.
- The obtained spectral data (x, y, λ) are stored in the storage unit 21.
- based on the spectral data stored in the storage unit 21 (or the spectral data calibrated by the data calibration unit 22) and the interval corresponding to the inter-axis distance (Δy) between the excitation lines Ex1 and Ex2, the image forming unit 23 forms a fluorescence image of the sample S (step 104).
- the image forming unit 23 forms, as a fluorescence image, an image in which the detection coordinates of the imaging device 32 are corrected by a value corresponding to the interval ( ⁇ y) between the plurality of line illuminations Ex1 and Ex2.
- the three-dimensional data derived from each of the line illuminations Ex1 and Ex2 are data whose coordinates are shifted by Δy along the Y axis, so they are corrected and output based on the value of Δy recorded in advance or the value of Δy calculated from the output of the imaging device 32.
- the difference in coordinates detected by the imaging device 32 is corrected so that the three-dimensional data derived from the line illuminations Ex1 and Ex2 are data on the same coordinates.
- the image forming unit 23 executes processing (stitching) for connecting the captured images into one large image (WSI) (step 105). Thereby, a pathological image of the multiplexed sample S (observation target Sa) can be acquired.
- the formed fluorescence image is output to the display unit 3 (step 106).
- the image forming unit 23 separately calculates the distributions of the autofluorescence and dye components of the sample S from the captured spectral data (measurement spectrum), based on the standard spectra of the autofluorescence of the sample S and of the dyes alone stored in advance in the storage unit 21.
- as the calculation method, least squares, weighted least squares, or the like can be adopted; coefficients are calculated so that the captured spectral data becomes a linear sum of the above standard spectra.
- the calculated distribution of the coefficients is stored in the storage unit 21 and output to the display unit 3 to be displayed as an image (steps 107 and 108).
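A minimal sketch of this least-squares separation for two standard spectra (autofluorescence and one dye), solved via the 2×2 normal equations; all spectra below are invented, and real implementations would handle many components and possibly weighting.

```python
# Minimal sketch of least-squares color separation: solve for c1, c2 so
# that measured ~= c1*std1 + c2*std2, using the 2x2 normal equations.
# Spectra are invented toy data.

def unmix_two(std1, std2, measured):
    """Ordinary least squares for two standard spectra (no constraints)."""
    a11 = sum(s * s for s in std1)
    a12 = sum(s * t for s, t in zip(std1, std2))
    a22 = sum(t * t for t in std2)
    b1 = sum(s * m for s, m in zip(std1, measured))
    b2 = sum(t * m for t, m in zip(std2, measured))
    det = a11 * a22 - a12 * a12
    c1 = (a22 * b1 - a12 * b2) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

# Standard spectra for autofluorescence and one dye (arbitrary shapes):
auto = [1.0, 2.0, 1.0, 0.5]
dye = [0.2, 1.0, 3.0, 1.0]
# A measured pixel spectrum that is exactly 0.5*auto + 2.0*dye:
meas = [0.5 * a + 2.0 * d for a, d in zip(auto, dye)]
c_auto, c_dye = unmix_two(auto, dye, meas)
```

Repeating this per pixel gives the coefficient distributions that are stored in the storage unit 21 and displayed as images.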
- FIG. 14 is a diagram schematically showing the flow of acquisition processing of spectral data (x, ⁇ ) according to the embodiment.
- in FIG. 14, configuration example 2 in FIG. 12 is applied as the configuration example of the combination of line illumination and excitation light using the two imaging elements 32a and 32b.
- the number of pixels corresponding to one scanning line is 2440 [pix], and the scanning position is moved in the X-axis direction each time 610 lines are scanned in the Y-axis direction.
- Section (a) of FIG. 14 shows an example of spectral data (x, ⁇ ) acquired in the first line of scanning (also described as “1Ln” in the figure).
- a tissue 302 corresponding to the sample S described above is sandwiched and fixed between a slide glass 300 and a cover glass 301 and placed on the sample stage 20 with the slide glass 300 as the bottom surface.
- a region 310 in the drawing indicates an area irradiated with four laser beams (excitation light) from the line illuminations Ex1 and Ex2.
- In each spectral data (x, λ), the horizontal direction (row direction) in the drawing indicates the position in the scanning line, and the vertical direction (column direction) indicates the wavelength; each wavelength is associated with a position in the column direction of the imaging element 32a.
- the wavelength λ does not have to be continuous in the column direction of the imaging element 32a. That is, the wavelength of the spectral data (x, λ) based on the spectral wavelength (1) and the wavelength of the spectral data (x, λ) based on the spectral wavelength (3) need not be continuous, and there may be a blank portion between them.
- each spectral data (x, λ) includes data (luminance values) for each position on the line and each wavelength.
- the data within the wavelength region of each spectral data (x, λ) is selectively read out, and the other regions are not read out.
- spectral data (x, ⁇ ) in the wavelength region of spectral wavelength (1) and spectral data (x, ⁇ ) in the wavelength region of spectral wavelength (3) are acquired.
- the acquired spectral data (x, ⁇ ) of each wavelength region is stored in the storage unit 21 as each spectral data (x, ⁇ ) of the first line.
- Section (b) of FIG. 14 shows an example in which scanning up to the 610th line (also described as “610Ln” in the drawing) is completed at the same scanning position in the X-axis direction as in section (a).
- spectral data (x, ⁇ ) in the wavelength regions of spectral wavelengths (1) to (4) for 610 lines are stored line by line in the storage unit 21.
- next, as shown in section (c) of FIG. 14, the 611th line (also described as "611Ln" in the drawing) is scanned.
- the scanning position in the X-axis direction is moved, and the position in the Y-axis direction is reset, for example.
- FIG. 15 is a diagram schematically showing a plurality of unit blocks 400 and 500.
- the photographing region Rs is divided into a plurality of parts in the X-axis direction, and the operation of scanning the sample S in the Y-axis direction, moving in the X-axis direction, and further scanning in the Y-axis direction is repeated.
- the imaging region Rs is further composed of a plurality of unit blocks 400 and 500 .
- the data for 610 lines shown in section (b) of FIG. 14 is called a unit block and serves as the basic unit.
- FIG. 16 is a schematic diagram showing an example of spectral data (x, ⁇ ) stored in the storage unit 21 when the scanning of the 610th line shown in section (b) of FIG. 14 is completed.
- the spectral data (x, λ) is stored in the storage unit 21 as a frame 40f, a block in which the horizontal direction in the drawing indicates the position on the line and the vertical direction indicates the number of spectral wavelengths.
- a unit block 400 (see FIG. 15) is formed by 610 lines of frames 40f.
- the arrow in the frame 40f indicates the direction of memory access in the storage unit 21 when the C language, or a language conforming to C, is used to access the storage unit 21. In the example of FIG. 16, access proceeds in the horizontal direction of the frame 40f (that is, the line position direction), and this is repeated in the vertical direction of the frame 40f (that is, the direction of the number of spectral wavelengths).
- the number of spectral wavelengths corresponds to the number of channels when the spectral wavelength region is divided into a plurality of channels.
- the information processing apparatus 2 uses the image forming unit 23 to convert the order of the spectral data (x, λ) of each wavelength region, stored line by line, into an order sorted by spectral wavelength (1) to (4).
- FIG. 17 is a schematic diagram showing an example of spectral data (x, ⁇ ) in which the data arrangement order is changed, according to the embodiment.
- the spectral data (x, λ) is stored in the storage unit 21 for each spectral wavelength, in an order in which the horizontal direction in the figure indicates the position on the line and the vertical direction indicates the scanning line.
- with the arrangement order of data in the unit rectangular blocks according to the embodiment, the arrangement of pixels in the frames 400a, 400b, ... corresponds to two-dimensional information; that is, each of the unit rectangular blocks 400a, 400b, ... can be treated as two-dimensional information about the tissue 302 within the unit block 400. Therefore, by applying the information processing apparatus 2 according to the embodiment, image processing, spectral waveform separation processing (color separation processing), and the like on captured image data acquired by the line spectroscope (observation unit 1) can be performed more easily and at higher speed.
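The rearrangement from line-by-line frames into per-wavelength two-dimensional blocks can be sketched as follows, using toy data rather than the actual memory layout of the apparatus:

```python
# Sketch of the rearrangement described above: frames stored line by
# line, each holding one row of pixel data per spectral wavelength, are
# regrouped into one two-dimensional block per spectral wavelength.

def rearrange(frames):
    """frames[line][wavelength] -> blocks[wavelength][line].

    Each frames[line] is a list with one row of pixels per spectral
    wavelength; the result is one 2-D image per wavelength.
    """
    n_wavelengths = len(frames[0])
    return [[frame[w] for frame in frames] for w in range(n_wavelengths)]

# Three scan lines, two spectral wavelengths, four pixels per line:
frames = [
    [[1, 1, 1, 1], [9, 9, 9, 9]],   # line 1: wavelength (1), (2)
    [[2, 2, 2, 2], [8, 8, 8, 8]],   # line 2
    [[3, 3, 3, 3], [7, 7, 7, 7]],   # line 3
]
blocks = rearrange(frames)
# blocks[0] is now the 3x4 two-dimensional image for wavelength (1)
```

After this step each block can be processed as an ordinary 2-D image, which is what enables the simpler, faster image and color-separation processing described above.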
- FIG. 18 is a block diagram showing a configuration example of the gradation processing section 24 according to this embodiment.
- the gradation processing unit 24 includes an image group generation unit 240, a statistic calculation unit 242, an SF (scaling factor) generation unit 244, a first analysis unit 246, a gradation conversion unit 248, and a display control unit 250.
- the two-dimensional information displayed on the display unit 3, or the range of this two-dimensional information, is called an image, and the data used for displaying the image is called image data or simply data.
- the image data according to the present embodiment is a numerical value related to at least one of the luminance value and the output value in units of the number of antibodies.
- FIG. 19 is a diagram conceptually explaining a processing example of the gradation processing unit 24 according to this embodiment.
- FIG. 20 is a diagram showing an example of data names corresponding to imaging positions. As shown in FIG. 20, data names are allocated corresponding to, for example, unit block areas 200 . This makes it possible to allocate data names corresponding to two-dimensional positions in the row direction (block_num) and the column direction (obi_num), for example, to the imaging data of each unit block.
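A hypothetical sketch of this naming rule, assuming the zero-padded "NN_NN.dat" pattern implied by the 01_01.dat and 01_02.dat examples in this description:

```python
# Hypothetical sketch of the data-name allocation in FIG. 20: each unit
# block gets a name built from its two-dimensional position in the row
# direction (block_num) and column direction (obi_num). The exact
# "NN_NN.dat" format is an assumption based on the examples in the text.

def block_name(block_num, obi_num):
    return f"{block_num:02d}_{obi_num:02d}.dat"

name_400 = block_name(1, 1)   # unit block 400
name_500 = block_name(1, 2)   # unit block 500
```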
- As shown in FIG. 19 again, first, all imaging data (see FIGS. 15 and 16) are acquired for each of the unit blocks 400, 500, .... As shown in FIG. 20, a data name is allocated to each of these unit blocks 400, 500, ...; for example, the data corresponding to the unit block 400 is allocated 01_01.dat, and the data corresponding to the unit block 500 is allocated 01_02.dat. Although only the unit blocks 400 and 500 are shown in FIG. 19 to simplify the explanation, all of the unit blocks 400, 500, ..., n are processed.
- the imaging data 01_01.dat is subjected to color separation processing by the image forming unit 23 as described above, and separated into unit rectangular blocks 400a, 400b, ..., 400n (see FIG. 17).
- similarly, the imaging data 01_02.dat is separated into unit rectangular blocks 500a, 500b, ..., 500n by color separation processing by the image forming unit 23.
- the imaging data for all unit blocks are separated into unit rectangular blocks corresponding to dyes by color separation processing.
- a data name is assigned to the data of each unit rectangular block according to the rule shown in FIG. 20.
- the image forming unit 23 performs stitching processing for connecting the unit rectangular blocks 400a, 400b, ... into one image.
- the image group generation unit 240 re-divides each piece of stitched and color-separated data into minimum sections to generate a mipmap (MIPmap).
- Data names are assigned to these minimum sections according to the rule shown in FIG. 20.
- in this example, the minimum sections of the stitched image are the unit rectangular blocks 400sa, 400sb, 500sa, and 500sb, but it is not limited to this.
- for example, as in FIGS. 20 and 21 described later, the data may be re-divided into square regions.
- an image group pre-calculated so as to complement the main texture image is referred to as a mipmap. Details of the mipmap will be described later with reference to FIGS.
- the statistic calculation unit 242 calculates the statistic Stv for the image data (luminance data) in each of the unit rectangular blocks 400sa, 400sb, 500sa, and 500sb.
- the statistic Stv is the maximum value, minimum value, median value, mode value, and the like.
- the image data is, for example, in float32 format, that is, 32 bits.
- the SF generation unit 244 uses the statistic Stv calculated by the statistic calculation unit 242 to calculate a scaling factor (Sf) for each of the unit rectangular blocks 400sa, 400sb, 500sa, 500sb, .... The SF generation unit 244 then stores the scaling factor (Sf) in the storage unit 21.
- the scaling factor Sf is, for example, as shown in formula (1), the difference between the maximum value maxv and the minimum value minv of the image data (luminance data) in each unit rectangular block 400sa, 400sb, 500sa, 500sb divided by the data size dsz: Sf = (maxv − minv) / dsz ... (1)
- the data size of the original image data is 32 bits of float32. Note that in the present embodiment, the image data before being divided by the scaling factor Sf is referred to as original image data.
- the original image data has, for example, a 32-bit data size of float32, as described above. This data size corresponds to the pixel value.
- for example, a region with strong fluorescence is calculated to have a scaling factor Sf of 5, and a region without fluorescence is calculated to have a scaling factor Sf of 0.1.
- the scaling factor Sf corresponds to the dynamic range of the original image data of each unit rectangular block 400sa, 400sb, 500sa, 500sb, ....
- although the minimum value minv is assumed to be 0 below, it is not limited to this. Note that the scaling factor according to this embodiment corresponds to the first value.
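Formula (1) can be sketched as follows. Interpreting dsz as the representable range of the 16-bit target format (65535) is an assumption, since the text only calls it the data size; the block values are invented so that the two cases reproduce the Sf = 5 and Sf = 0.1 examples above.

```python
# Sketch of formula (1): Sf = (maxv - minv) / dsz, with minv = 0 as in
# the text. dsz is interpreted here as the representable range of the
# 16-bit target format (65535) -- an assumption. Block values invented.

DSZ = 65535  # assumed meaning of dsz: ushort16 range

def scaling_factor(block, dsz=DSZ):
    maxv = max(block)
    minv = min(block)
    return (maxv - minv) / dsz

bright = [0.0, 120000.0, 327675.0]   # strongly fluorescent block
dark = [0.0, 1000.0, 6553.5]         # block with little fluorescence
sf_bright = scaling_factor(bright)   # 5: range must be compressed
sf_dark = scaling_factor(dark)       # 0.1: range can be expanded
```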
- the first analysis unit 246 extracts the subject area from the image. Then, the statistic calculator 242 calculates the statistic Stv using the original image data in the subject area, and the SF generator 242 calculates the scaling factor Sf based on the statistic Stv.
- the gradation conversion unit 248 divides the original image data of each unit rectangular block 400sa, 400sb, 500sa, 500sb, ... by the corresponding scaling factor Sf to convert its gradation.
- the first image data processed by the gradation conversion unit 248 is normalized by the pixel value range, which is the difference between the maximum value maxv and the minimum value minv.
- the image data obtained by dividing the pixel values of the original image data by the scaling factor Sf will be referred to as first image data.
- the first image data is, for example, in the ushort16 data format.
- when the scaling factor Sf is greater than 1, the dynamic range of the second image data is compressed, and when the scaling factor Sf is less than 1, the dynamic range of the second image data is expanded.
- the scaling factor Sf is, for example, float32 and has 32 bits.
- the scaling factor Sf is similarly calculated for the unit rectangular blocks 400a, 400b, ..., which are the color separation data, and the gradation conversion unit 248 converts the gradation of the original image data with the scaling factor Sf to generate the first image data.
- FIG. 21 is a diagram showing an example of the data format of each unit rectangular block 400sa, 400sb, 500sa, 500sb, ....
- Each image data is converted from float32 (32 bits) to ushort16 (16 bits) to compress the storage capacity.
- the data of the unit rectangular blocks 400a, 400b, . . . are stored in the storage unit 21 in, for example, Tiff (Tagged Image File Format) format.
- Each original image data is converted from float32 to ushort16 first image data to compress storage capacity. Since the scaling factor Sf is recorded in the footer, it can be read from the storage section 21 without reading the image data.
- the first image data after being divided by the scaling factor Sf and the scaling factor Sf are associated with each other in, for example, the Tiff format and stored in the storage unit 21 .
- the first image data is compressed from 32 bits to 16 bits. Since this first image data has its dynamic range adjusted, the entire image can be visualized when displayed on the display section 3 .
- by multiplying the first image data by the corresponding scaling factor Sf, it is possible to recover the pixel values of the original image data while maintaining the information content.
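A round-trip sketch of the gradation conversion and its inverse, under the assumption that the first image data is a 16-bit quantization of the original data divided by Sf (rounding and clamping are illustrative choices, not specified in the text):

```python
# Round-trip sketch of the gradation conversion: original float values
# are divided by Sf and quantized to 16-bit integers (the first image
# data); multiplying by Sf restores approximate original pixel values.
# Rounding and clamping behavior are assumptions.

def to_first_image(original, sf):
    return [min(65535, int(round(v / sf))) for v in original]

def restore(first_image, sf):
    return [p * sf for p in first_image]

sf = 5.0
original = [0.0, 50000.0, 327675.0]       # float32-style pixel values
first = to_first_image(original, sf)      # fits in ushort16
approx = restore(first, sf)               # ~ original pixel values
```

The storage saving is the 32-bit-to-16-bit halving; the price is quantization error bounded by Sf/2 per pixel.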
- FIG. 22 is a diagram showing an image pyramid structure for explaining a processing example of the image group generation unit 240.
- the image group generation unit 240 generates the image pyramid structure 500 using the stitched image (WSI), for example.
- the image pyramid structure 500 is a group of images generated at a plurality of different resolutions from a stitched image (WSI) obtained by stitching the unit rectangular blocks 400a, 500a, ....
- at the lowest level Ln, the image of the largest size is placed, and at the top level L1, the image of the smallest size is placed.
- the resolution of the largest size image is, for example, 50 ⁇ 50 (Kpixels) or 40 ⁇ 60 (Kpixels).
- the smallest size image is, for example, 256 ⁇ 256 (pixel) or 256 ⁇ 512 (pixel).
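Generating such a multi-resolution image group can be sketched by repeated 2×2 mean pooling; this is an illustrative downsampling choice, not necessarily the compression method used by the apparatus.

```python
# Sketch of generating an image pyramid (mipmap): starting from the
# full-resolution image, repeatedly halve each dimension by 2x2 mean
# pooling until the desired number of levels is reached. The pooling
# choice is an assumption for illustration.

def downsample(img):
    """2x2 average pooling; assumes even dimensions."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] +
              img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def build_pyramid(img, levels):
    """Return [Ln, ..., L1]: the largest image first, as in FIG. 22."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

base = [[0, 2, 4, 6],
        [2, 4, 6, 8],
        [4, 6, 8, 10],
        [6, 8, 10, 12]]
levels = build_pyramid(base, 3)   # 4x4 -> 2x2 -> 1x1
```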
- one tile that is a component area of an image area is called a unit area image. Note that the unit area image may have any size and shape.
- when the display unit 3 displays these images at, for example, 100% (each with the same number of physical dots as the number of pixels of the image), the image Ln of the largest size is displayed largest, and the image L1 of the smallest size is displayed smallest.
- the display range of the display unit 3 is shown as D.
- the entire image group forming the image pyramid structure 500 may be generated by a known compression method, for example, the compression method used when generating thumbnail images.
- FIG. 23 is a diagram showing an example of regenerating the stitched images (WSI) of the wavelength bands of dyes 1 to n in FIG. 19 as image pyramid structures. That is, it shows an example in which the image group generation unit 240 regenerates the stitched image (WSI) for each dye generated by the image forming unit 23 as an image pyramid structure. For ease of explanation, three levels are shown, but the number of levels is not limited to this. In the image pyramid structure of dye 1, each unit area image at the L3 level is associated with, for example, scaling factors Sf3-1 to Sf3-n as Tiff data, and its pixel values are converted from the original image data into the first image data by the gradation conversion unit 248.
- similarly, each small image at the L2 level is associated with, for example, scaling factors Sf2-1 to Sf2-n as Tiff data, and its pixel values are converted from the original image data into the first image data.
- the small image at the L1 level is associated with, for example, a scaling factor Sf1 as Tiff data, and its pixel values are converted from the original image data into the first image data by the gradation conversion unit 248.
- a similar process is performed for the stitched images of the wavelength bands of dyes 2-n.
- These image pyramid structure data are stored in the storage unit 21 as mipmaps in, for example, Tiff (Tagged Image File Format) format.
- FIG. 24 is an example of a display screen generated by the display control unit 250.
- the display area 3000 displays the main observation image whose dynamic range has been adjusted based on the scaling factor Sf.
- a thumbnail image area 3010 displays an entire image of the observation range.
- An area 3020 indicates the range within the entire image (thumbnail image) in which the display area 3000 is displayed. In the thumbnail image area 3010, for example, an image of the non-fluorescent observation part (camera) captured by the imaging device 73 may be displayed.
- the selected wavelength operation area section 3030 is an input section for inputting the wavelength range of the displayed image, for example, the wavelengths corresponding to the dyes 1 to n, according to instructions from the operation section 4.
- Magnification operation area section 3040 is an input section for inputting a value for changing the display magnification according to an instruction from operation section 4 .
- a horizontal operation area section 3060 is an input section for inputting a value for changing the horizontal selection position of the image according to an instruction from the operation section 4 .
- a vertical operation area section 3080 is an input section for inputting a value for changing the vertical selection position of an image according to an instruction from the operation section 4 .
- a display area 3100 displays the scaling factor Sf of the main observation image.
- a display area 3120 is an input section for selecting a scaling factor value according to instructions from the operation section 4 .
- the scaling factor value corresponds to the dynamic range as described above. For example, it corresponds to the maximum pixel value maxv (see formula 1).
- a display area 3140 is an input section for selecting an algorithm for calculating the scaling factor Sf according to instructions from the operation section 4 . Note that the display control unit 250 may further display the file paths of the observation image, the overall image, and the like.
- the display control unit 250 reads the mipmap image of the corresponding dye n from the storage unit 21 according to the input of the selected wavelength operation area unit 3030 .
- the display control unit 250 displays a level L1 image when the instruction input to the magnification operation area section 3040 is less than a first threshold, a level L2 image when the instruction input is greater than or equal to the first threshold, and a level L3 image when the instruction input is greater than or equal to a second threshold.
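The threshold-based level selection can be sketched as follows; the threshold values are arbitrary placeholders, since the text does not specify them.

```python
# Sketch of the threshold-based pyramid level selection: below the
# first threshold the coarsest level L1 is shown; between the two
# thresholds, L2; at or above the second threshold, L3. The threshold
# values are placeholders, not taken from the text.

def select_level(magnification, thr1=2.0, thr2=10.0):
    if magnification < thr1:
        return "L1"
    if magnification < thr2:
        return "L2"
    return "L3"

lvl_low = select_level(1.0)
lvl_mid = select_level(5.0)
lvl_high = select_level(20.0)
```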
- the display control section 250 displays the display area D (see FIG. 22) selected by the horizontal operation area section 3060 and the vertical operation area section 3080 in the display area 3000 as the main observation image.
- the pixel value of the image data of each unit area image is recalculated by the gradation conversion unit 248 using the scaling factor Sf associated with each unit area image included in the display area D.
- FIG. 25 is a diagram showing an example in which the display area D is changed from D10 to D20 by input processing via the horizontal operation area section 3060 and the vertical operation area section 3080.
- the image data of the area D10 is normalized by dividing it by the maximum scaling factor MAX_Sf(1, 2, 5, 6). Thereby, the brightness of the image data of the area D10 is displayed more appropriately.
- the scaling factor Sf is calculated by the above equation (1).
- the value of the image data of each unit area image is normalized between the maximum value and the minimum value of the original image data of each unit area image included in the area D10.
- the dynamic range of the first image data within the region D10 is readjusted using the scaling factors Sf1, Sf2, Sf5, and Sf6, making it possible to visually recognize all of the first image data within the region D10.
- recalculation by the statistic calculation unit 242 becomes unnecessary, and the dynamic range can be adjusted in a shorter time when the display region is changed.
- the display control unit 250 displays the maximum value MAX_Sf (1, 2, 5, 6) in the display area 3100 . This allows the operator to more easily recognize how much the dynamic range has been compressed or expanded.
- the scaling factors Sf1, Sf2, Sf5, Sf6, and Sf7 stored in association with each unit area image are read from the storage unit 21. Then, as shown in equation (3), the first image data of each unit area image is multiplied by the corresponding scaling factor Sf1, Sf2, Sf5, Sf6, or Sf7 and divided by the maximum scaling factor MAX_Sf(1, 2, 5, 6, 7).
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / MAX_Sf(1, 2, 5, 6, 7) … Equation (3)
- multiplying the first image data of each unit area image by the corresponding scaling factor Sf1, Sf2, Sf5, Sf6, or Sf7 converts it into the pixel values of the original image data, and dividing by the maximum scaling factor MAX_Sf(1, 2, 5, 6, 7) normalizes the first image data of the area D20 again. Thereby, the brightness of the image data is displayed more appropriately.
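The rescaling in equations (2) and (3) can be sketched as follows; `rescale_region` is a hypothetical helper name, and the tile values and factors are toy data, not values from the description.

```python
def rescale_region(tiles, factors):
    # tiles: per-unit-area-image lists of normalized (first image data)
    # pixel values; factors: the scaling factor Sf stored with each tile.
    # Multiplying by each Sf restores original pixel values; dividing by
    # the maximum Sf in the region renormalizes them, as in Eq. (2)/(3).
    max_sf = max(factors)
    return [[sf * v / max_sf for v in tile]
            for tile, sf in zip(tiles, factors)]
```

With this convention, the brightest tile in the selected region keeps its full range, and every other tile is scaled consistently against it.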
- the display control section 250 displays the maximum value MAX_Sf(1, 2, 5, 6, 7) in the display area 3100. This allows the operator to more easily recognize how much the dynamic range has been compressed or expanded.
- when manual is selected as the arithmetic algorithm in the display area 3140 (described later), the display control unit 250 recalculates the pixel values using the scaling factor MSf input via the display area 3120, as shown in equation (4).
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / MSf … Equation (4)
- the display control section 250 displays the scaling factor MSf in the display area 3100.
- the original image data after color separation and after stitching is assumed to be output, for example, in float32 in units of the number of antibodies.
- since the scaling factors can be compared with each other when visually recognizing an area that straddles a plurality of basic area images as shown in the figure, the data is converted into ushort16 (0-65535) to adjust the display dynamic range.
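The float32-to-ushort16 mapping mentioned above might look like the following sketch; the clipping behavior and the helper name are assumptions, since the description only states that the data is brought into the 0-65535 range.

```python
def to_display_ushort16(pixels, scale):
    # Divide float32 data by the chosen scaling factor and map the result
    # into the ushort16 display range 0-65535, clipping out-of-range values.
    out = []
    for v in pixels:
        scaled = int(v / scale * 65535.0)
        out.append(max(0, min(65535, scaled)))
    return out
```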
- the stitching image WSI means the level L1 image.
- ROI means a selected area image.
- the maximum value MAX means that the statistic used when calculating the scaling factor Sf is the maximum value.
- the average value Ave means that the statistic used when calculating the scaling factor Sf is the average value.
- the mode value Mode means that the statistic used when calculating the scaling factor Sf is the mode value.
- the tissue region Sf means using the scaling factor Sf calculated from the selected image region and the image subject region extracted by the first analysis unit 246 . In this case, for example, the maximum value is used as the statistic.
- when the maximum value MAX is selected, a mipmap corresponding to the scaling factor Sf generated by the SF generation unit 242 using the maximum value is read from the storage unit 21.
- when the average value Ave is selected, a mipmap corresponding to the scaling factor Sf generated by the SF generation unit 242 using the average value is read from the storage unit 21.
- when the mode value Mode is selected, a mipmap corresponding to the scaling factor Sf generated by the SF generation unit 242 using the mode value is read from the storage unit 21.
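The choice of statistic (MAX / Ave / Mode) for generating the scaling factor Sf can be sketched as below; the function and its string keys are illustrative, not API names from the description.

```python
from statistics import mean, mode

def scaling_factor(pixels, statistic="max"):
    # Compute Sf from the original pixel values of a unit area image,
    # using the statistic selected in the UI (keys are illustrative).
    if statistic == "max":
        return max(pixels)
    if statistic == "ave":
        return mean(pixels)
    if statistic == "mode":
        return mode(pixels)
    raise ValueError(f"unknown statistic: {statistic}")
```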
- the first algorithm reconverts the pixel values of the display image by the scaling factor L1Sf of the level L1 image, as shown in equation (5).
- the maximum value is used for the scaling factor L1Sf.
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / L1Sf … Equation (5)
- when a WSI-related algorithm is selected, the display may be restricted to the level L1 image. In this case, no recalculation is required.
- the second algorithm (Ave(WSI)) reconverts the pixel values of the display image by the average value L1av of the level L1 image, as shown in equation (6).
- when the average value L1av is used, it is possible to observe the information of the entire image while suppressing the information of the fluorescent region, which is a high-luminance region.
- the display may be limited to the level L1 image. In this case, no recalculation is required.
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / L1av … Equation (6)
- the third algorithm reconverts the pixel values of the display image using the mode value L1mod of the level L1 image, as shown in equation (7).
- when the mode value L1mod is used, it is possible to observe information based on the pixels most frequently included in the image while suppressing the information in the fluorescent region, which is a high-luminance region.
- when a WSI-related algorithm is selected in the display area 3100, the display may be limited to the level L1 image. In this case, no recalculation is required.
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / L1mod … Equation (7)
- the fourth algorithm (MAX(ROI)) reconverts the pixel values of the display image by the maximum value ROImax of the scaling factors Sf in the selected basic region images, as shown in equation (8).
- the statistic is the maximum value, as described above.
- Pixel value after rescaling = (each Sf × pixel value before rescaling) / ROImax … Equation (8)
- the fifth algorithm (Ave(ROI)) reconverts the pixel values of the display image by the maximum value ROIAvemax of the scaling factors Sf in the selected basic region images, as shown in equation (9).
- the sixth algorithm (Mode(ROI)) reconverts the pixel values of the display image by the maximum value ROIModemax of the scaling factors Sf in the selected basic region images, as shown in equation (10).
- the seventh algorithm (tissue area Sf) reconverts the pixel values of the display image by the maximum value Sfmax of the scaling factors Sf in the selected basic region images, as shown in equation (11).
- the eighth algorithm (auto) reconverts the pixel values of the display image using the function Sf(λ) of the representative value λ of the wavelength selected by the input of the selected wavelength operation area unit 3030, as shown in equation (12).
- the ninth algorithm (manual) reconverts the pixel values of the display image using the value of the scaling factor MSf input via the display area 3120, as shown in the above equation (4).
- FIG. 26 is a flowchart showing a processing example of the information processing device 2. Here, a case will be described in which the display is restricted to the level L1 image when a WSI-related algorithm is selected in the display area 3100.
- the display control unit 250 acquires the algorithm (see FIG. 24) selected by the operator via the display area 3100 (step S200). Subsequently, the display control unit 250 reads the mipmap corresponding to the selected algorithm from the storage unit 21 (step S202). In this case, if the corresponding mipmap is not stored in the storage unit 21, the display control unit 250 causes the image group generation unit 240 to generate the corresponding mipmap.
- the display control unit 250 determines whether the selected algorithm (see FIG. 24) is WSI-related (step S204). If it is determined to be WSI related (yes in step S204), the display control unit 250 starts processing related to the selected algorithm (step S206).
- the display control unit 250 displays the level L1 image.
- the dynamic range of the main observation image is adjusted according to the statistics based on the original image data (step S208). In this case, since the dynamic range of the first image data of the level L1 image has already been adjusted, no recalculation is necessary.
- the display control unit 250 adjusts the dynamic range of the main observation image based on the statistic calculated within the image data of the tissue region in the image (step S210). In this case, since the dynamic range of the first image data of the level L1 image has already been adjusted, no recalculation is necessary.
- the display control unit 250 uses the scaling factor MSf input via the display area 3120, as shown in the above equation (4), to reconvert the pixel values of the first image data in the level L1 image, which is the display image (step S212).
- the display control unit 250 uses the function Sf(λ) of the representative value λ of the selected wavelength, according to the input of the selected wavelength operation area unit 3030, to reconvert the pixel values of the first image data in the level L1 image, which is the display image (step S214).
- when the display control unit 250 determines that the selected algorithm (see FIG. 24) is not WSI-related (no in step S204), the display control unit 250 acquires the display magnification input from the operation unit 4 via the magnification operation area unit 3040 (step S216).
- the display control unit 250 selects the image levels L1 to Ln used for displaying the main observation image from the mipmap according to the display magnification (step S218).
- the display control section 250 displays the display area selected by the horizontal operation area section 3060 and the vertical operation area section 3080 as a frame 302 (see FIG. 24) in the thumbnail image 301 (step S220).
- the display control unit 250 determines whether the selected algorithm (see FIG. 24) is the seventh algorithm (tissue region Sf) (step S222). If it is determined that the algorithm is not the seventh algorithm (tissue region Sf) (yes in step S222), the display control unit 250 starts processing related to the selected algorithm, such as the fifth algorithm (Ave(ROI)) or the sixth algorithm (Mode(ROI)), recalculates the first image data, and displays the image within the frame 302 (see FIG. 24) with the adjusted dynamic range on the display unit 3 as the main observation image (step S224).
- the display control unit 250 uses the scaling factor MSf input via the display area 3120, as shown in the above equation (4), to reconvert the pixel values of the first image data in each basic region image included in the frame 302 (see FIG. 24), which is the display image, and displays the result on the display unit 3 (step S226).
- when the display control unit 250 determines that the selected algorithm (see FIG. 24) is the seventh algorithm (tissue region Sf) (no in step S222), it adjusts the dynamic range of the main observation image based on the statistics calculated in the image data within the tissue region of the image (step S228).
- the image data to be displayed on the display unit 3 may be displayed after, for example, linear conversion (Linear), log conversion (Logarithm), or biexponential conversion (Biexponential) of the ushort16 luminance values of 0-65535.
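The tone-mapping options named above can be sketched like this; the exact transfer curves are not specified in the description, so the log curve and the arcsinh stand-in for the biexponential conversion are assumptions for illustration.

```python
import math

def tone_convert(value, method="linear", full_scale=65535):
    # Map a ushort16 luminance value (0-65535) to [0, 1] for display.
    if method == "linear":
        return value / full_scale
    if method == "log":
        # log1p keeps 0 mapped to 0; normalized so full scale maps to 1.
        return math.log1p(value) / math.log1p(full_scale)
    if method == "biexponential":
        # arcsinh is used here as a simple stand-in for a biexponential
        # curve: near-linear around 0, logarithmic at high intensities.
        return math.asinh(5.0 * value / full_scale) / math.asinh(5.0)
    raise ValueError(f"unknown method: {method}")
```

The log and arcsinh-style curves compress the bright fluorescent regions while keeping contrast in dim tissue areas, which is why such options are offered alongside the linear mapping.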
- since each basic area image is a small area compared with the stitched image (WSI), the display dynamic range can be adjusted per basic area image. This makes it possible to improve the visibility of the captured image. Furthermore, it becomes easy to compare the scaling factors Sf of adjacent basic area images and to unify them. As a result, rescaling based on the unified scaling factor Sf makes it possible to align the display dynamic ranges of the plurality of basic region images at a higher speed.
- the first image data of the unit area images, which are the areas obtained by dividing the fluorescence image into a plurality of areas, are associated with the scaling factor Sf indicating the pixel value range of each first image data, and stored in the storage unit 21 as a mipmap (MIPMAP).
- the pixel values of the selected combination of unit area images can be converted based on a representative value selected from among the scaling factors Sf associated with each unit area image in the selected area D. Therefore, the dynamic range of the selected unit area images is readjusted using the scaling factor Sf, and all the image data in the area D can be viewed with a predetermined dynamic range.
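A minimal container for one mipmap tile, pairing the normalized first image data with its Sf as described above, could look as follows; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UnitAreaImage:
    level: str    # mipmap level, e.g. "L1"
    data: list    # normalized pixel values (the first image data)
    sf: float     # the first value: scaling factor Sf for this tile

    def original_pixels(self):
        # Multiplying by Sf restores the original image data pixel values.
        return [self.sf * v for v in self.data]
```

Storing the factor next to the tile is what lets the display side renormalize any selected region without recomputing statistics from the original data.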
- the information processing apparatus 2 according to the second embodiment is different from the information processing apparatus 2 according to the first embodiment in that it further includes a second analysis unit that performs cell analysis such as cell counting. Differences from the information processing apparatus 2 according to the first embodiment will be described below.
- FIG. 27 is a schematic block diagram of a fluorescence observation device according to the second embodiment. As shown in FIG. 27 , the information processing device 2 further includes a second analysis section 26 .
- FIG. 28 is a diagram schematically showing a processing example of the second analysis unit 26.
- a stitching process joins the images captured by the image forming unit 23 into one large stitched image (WSI), and the image group generating unit 240 generates a mipmap (MIPMAP).
- the minimum sections of the stitched image are handled as unit blocks (basic region images) 400sa, 400sb, 500sa, and 500sb.
- the display control unit 250 rescales each basic area image within the field of view (display area D) selected by the horizontal operation area unit 3060 and the vertical operation area unit 3080 (see FIG. 24) by the associated scaling factor Sf, and stores the rescaled basic area images 400sa_2, 400sb_2, 500sa_2, 500sb_2, and so on in the storage unit 21.
- the second analysis unit 26 determines the analysis field of view after stitching, performs manual rescaling within the field of view on the multi-dye image, outputs the image, and then performs cell analysis such as cell counting.
- an operator can perform analysis using an image rescaled within an arbitrary field of view. As a result, it is possible to analyze the area in which the intention of the operator is reflected.
- the information processing apparatus 2 according to Modification 1 of the second embodiment differs from the information processing apparatus 2 according to the second embodiment in that the second analysis unit 26 that performs cell analysis such as cell counting automatically performs analysis processing. Differences from the information processing apparatus 2 according to the second embodiment will be described below.
- FIG. 29 is a diagram schematically showing a processing example of the second analysis unit 26 according to Modification 1 of the second embodiment.
- the second analysis unit 26 according to Modification 1 of the second embodiment performs automatic rescaling and, after outputting the image, performs cell analysis such as cell counting.
- the second analysis unit 26 automatically detects the region where the tissue to be observed exists, and can perform analysis using an image automatically rescaled with the scaling factor of that region.
- the information processing apparatus 2 according to Modification 2 of the second embodiment differs from that according to Modification 1 in that the second analysis unit 26, which performs cell analysis such as cell counting, performs automatic analysis processing after automatic rescaling by the eighth algorithm (auto). Differences from the information processing apparatus 2 according to Modification 1 of the second embodiment will be described below.
- FIG. 30 is a diagram schematically showing a processing example of the second analysis unit 26 according to modification 2 of the second embodiment.
- the second analysis unit 26 according to Modification 2 of the second embodiment performs auto rescaling. That is, for the rescaling factor, the function Sf(λ), data on the scaling factors Sf accumulated from past imaging results are collected and stored in the storage unit 21 as a database of scaling factors Sf for dyes and cell analysis.
- past processing data are collected, scaling factors Sf for dyes and cell analysis are accumulated as a database, and after stitching, rescaling is performed directly with the scaling factor Sf from the database and the image is saved. This makes it possible to omit the rescaling processing flow for analysis.
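The scaling-factor database for auto rescaling could be sketched as below; storing per-dye lists in a dictionary and returning the mean of past values are assumptions for the example, and the dye names are invented.

```python
past_sf = {}  # dye name -> scaling factors Sf accumulated from past runs

def record_sf(dye, sf):
    past_sf.setdefault(dye, []).append(sf)

def auto_sf(dye, default=1.0):
    # Return a representative Sf for the dye from the accumulated database;
    # the mean of past values is used here as an illustrative choice.
    values = past_sf.get(dye)
    return sum(values) / len(values) if values else default
```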
- the present technology can also take the following configurations.
- (1) An information processing method comprising: a storage step of storing, in association with each other, first image data of unit area images, which are areas obtained by dividing a fluorescence image into a plurality of areas, and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion step of converting pixel values of a combination image of the selected unit area images based on a representative value selected from among the first values associated with each of the selected combinations of unit area images.
- the selected combination of unit area images corresponds to an observation range to be displayed on a display unit, and the range of the combination of unit area images is changed according to the observation range.
- the first image data is image data whose dynamic range has been adjusted based on a pixel value range obtained according to a predetermined rule in the original image data of the first image data.
- the storing step further stores, in association with each other, second image data whose areas differ in size from the areas of the first image data, the second image data being obtained by re-dividing the fluorescence image into a plurality of areas, and a first value indicating a pixel value range for each of the second image data; the information processing method according to (6).
- the converting step converts pixel values for the selected combination of second image data based on a representative value selected from respective first values associated with the selected combination of second image data.
- the first image data is data obtained by dividing a pixel value of the original image data corresponding to the unit area image by the first value;
- the transforming step multiplies each of the first image data in the selected unit area images by the corresponding first value, and divides by the maximum of the respective first values associated with the combination of the selected unit area images.
- (13) a first input step of inputting a calculation method for the statistic; an analysis step of calculating the statistic according to the input of the input unit; a data generation step of generating first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a pixel value range for each of the first image data, based on the analysis in the analysis step;
- the display control step causes the display unit to display a display form regarding the first input step and the second input step; further comprising an operation step of indicating the position of any one of the display forms,
- the fluorescence image is one of a plurality of fluorescence images generated by an imaging subject for each of a plurality of fluorescence wavelengths;
- a storage unit that associates and stores first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a predetermined pixel value range for each of the first image data; a conversion unit that converts pixel values of a combination image of the selected first images based on a representative value selected from first values associated with each combination of the selected first images; An information processing device.
- 2: information processing device (processing unit), 3: display unit, 21: storage unit, 248: gradation conversion unit, 250: display control unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
Abstract
Description
a conversion step of converting pixel values of a combination image of the selected unit area images based on a representative value selected from among the first values associated with each of the selected combinations of unit area images;
an information processing method comprising the above is provided.
Second image data whose areas differ in size from the areas of the first image data, the second image data being obtained by re-dividing the fluorescence image into a plurality of areas, and
a first value indicating a pixel value range for each of the second image data
may be further stored in association with each other.
The conversion step may convert pixel values for the selected combination of second image data based on a representative value selected from the respective first values associated with the selected combination of second image data.
The conversion step may multiply each of the first image data in the selected unit area images by the corresponding first value and divide by the maximum of the respective first values associated with the combination of the selected unit area images.
An analysis step of calculating the statistic in response to the input of the input unit, and
a data generation step of generating, based on the analysis in the analysis step, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a pixel value range for each of the first image data
may further be provided.
The conversion step may select the combination of the first images in response to the input of the second input step.
An operation step of indicating the position of any one of the display forms may further be provided, and
the first input step and the second input step may input related information in accordance with the instruction in the operation step.
A data generation step of dividing each of the plurality of fluorescence images into image data and a coefficient, which is the first value for the image data, may further be provided.
The analysis step of performing the cell analysis may be performed based on an image range designated by an operator.
A conversion unit that converts pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images;
an information processing apparatus comprising the above is provided.
A conversion step of converting pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images;
a program causing an information processing apparatus to execute the above is provided.
Prior to the description of the embodiments of the present disclosure, to facilitate understanding, line spectroscopy will be schematically described based on FIG. 2 with reference to FIG. 1. FIG. 1 is a schematic diagram for explaining line spectroscopy applicable to the embodiments. FIG. 2 is a flowchart showing a processing example of line spectroscopy. As shown in FIG. 2, a fluorescently stained pathological specimen 1000 is irradiated with line-shaped excitation light, for example laser light, by line illumination (step S1). In the example of FIG. 1, the excitation light is irradiated onto the pathological specimen 1000 in a line shape parallel to the x direction.
The fluorescence observation device 100 of this embodiment includes an observation unit 1, a processing unit (information processing device) 2, and a display unit 3. The observation unit 1 includes an excitation unit 10 that irradiates a pathological specimen (pathological sample) with a plurality of line illuminations of different wavelengths arranged in parallel on different axes, a stage 20 that supports the pathological specimen, and a spectral imaging unit 30 that acquires the fluorescence spectrum (spectral data) of the pathological specimen excited in a line shape.
Next, details of the observation unit 1 will be described with reference to FIG. 4. Here, an example in which the observation unit 1 is configured as in configuration example 2 in FIG. 12 will be described.
Next, techniques applicable to the embodiments of the present disclosure will be described.
FIG. 14 is a diagram schematically showing the flow of acquisition processing of spectral data (x, λ) according to the embodiment. In the following, two imaging elements 32a and 32b are used, and configuration example 2 of FIG. 10 is applied as a configuration example of the combination of line illumination and excitation light. The imaging element 32a acquires spectral data (x, λ) corresponding to excitation wavelengths λ = 405 [nm] and 532 [nm] by line illumination Ex1, and the imaging element 32b acquires spectral data (x, λ) corresponding to excitation wavelengths λ = 488 [nm] and 638 [nm] by line illumination Ex2. The number of pixels corresponding to one scan line is 2440 [pix], and the scan position is moved in the X-axis direction every 610 lines of scanning in the Y-axis direction.
FIG. 15 is a diagram schematically showing a plurality of unit blocks 400 and 500. As described above, the imaging region Rs is divided into a plurality of sections in the X-axis direction, the sample S is scanned in the Y-axis direction, then moved in the X-axis direction, and the scan in the Y-axis direction is repeated. The imaging region Rs is further composed of a plurality of unit blocks 400 and 500. For example, the data for 610 lines shown in section (b) of FIG. 14 is used as the basic unit and is called a unit block.
Pixel value after rescaling = (each Sf × pixel value before rescaling) / MAX_Sf(1, 2, 5, 6) … Equation (2)
Multiplying the first image data of each unit area image by the corresponding scaling factor Sf1, Sf2, Sf5, Sf6, or Sf7 converts it into the pixel values of the original image data, and dividing by the maximum scaling factor MAX_Sf(1, 2, 5, 6, 7) normalizes the first image data of the area D20 again. Thereby, the brightness of the image data is displayed more appropriately. As described above, the display control unit 250 displays the maximum value MAX_Sf(1, 2, 5, 6, 7) in the display area 3100. This allows the operator to more easily recognize how much the dynamic range has been compressed or expanded.
As described above, the display control unit 250 displays the scaling factor MSf in the display area 3100. This allows the operator to more easily recognize, through their own operation, how much the dynamic range has been compressed or expanded.
In the following processing, when a WSI-related algorithm is selected in the display area 3100, the display may be restricted to the level L1 image. In this case, no recalculation is required.
Pixel value after rescaling = (each Sf × pixel value before rescaling) / L1av … Equation (6)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / L1mod … Equation (7)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / ROImax … Equation (8)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / ROIAvemax … Equation (9)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / ROIModemax … Equation (10)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / Sfmax … Equation (11)
Pixel value after rescaling = (each Sf × pixel value before rescaling) / Sf(λ) … Equation (12)
The information processing apparatus 2 according to the second embodiment differs from the information processing apparatus 2 according to the first embodiment in that it further includes a second analysis unit that performs cell analysis such as cell counting. Differences from the information processing apparatus 2 according to the first embodiment will be described below.
The information processing apparatus 2 according to Modification 1 of the second embodiment differs from the information processing apparatus 2 according to the second embodiment in that the second analysis unit 26, which performs cell analysis such as cell counting, performs automatic analysis processing. Differences from the information processing apparatus 2 according to the second embodiment will be described below.
The information processing apparatus 2 according to Modification 2 of the second embodiment differs from the second analysis unit 26 according to Modification 1 of the second embodiment in that the second analysis unit 26, which performs cell analysis such as cell counting, performs automatic analysis processing after automatic rescaling by the eighth algorithm (auto). Differences from the information processing apparatus 2 according to Modification 1 of the second embodiment will be described below.
- (1) A storage step of storing, in association with each other, first image data of unit area images, which are areas obtained by dividing a fluorescence image into a plurality of areas, and a first value indicating a predetermined pixel value range for each of the first image data; and
a conversion step of converting pixel values of a combination image of the selected unit area images based on a representative value selected from among the first values associated with each of the selected combinations of unit area images;
an information processing method comprising the above.
Second image data whose areas differ in size from the areas of the first image data, the second image data being obtained by re-dividing the fluorescence image into a plurality of areas, and
a first value indicating a pixel value range for each of the second image data
are further stored in association with each other; the information processing method according to (6).
The conversion step converts pixel values for the selected combination of second image data based on a representative value selected from the respective first values associated with the selected combination of second image data; the information processing method according to (7).
The conversion step multiplies each of the first image data in the selected unit area images by the corresponding first value and divides by the maximum of the respective first values associated with the combination of the selected unit area images; the information processing method according to (11).
An analysis step of calculating the statistic in response to the input of the input unit, and
a data generation step of generating, based on the analysis in the analysis step, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a pixel value range for each of the first image data
are further provided; the information processing method according to (12).
The conversion step selects the combination of the first images in response to the input of the second input step;
the information processing method according to (13).
An operation step of indicating the position of any one of the display forms is further provided, and
the first input step and the second input step input related information in accordance with the instruction in the operation step; the information processing method according to (14).
A data generation step of dividing each of the plurality of fluorescence images into image data and a coefficient, which is the first value for the image data, is further provided; the information processing method according to (15).
A conversion unit that converts pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images;
an information processing apparatus comprising the above.
A conversion step of converting pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images;
a program causing an information processing apparatus to execute the above.
Claims (19)
- 1. An information processing method comprising: a storage step of storing, in association with each other, first image data of unit area images, which are areas obtained by dividing a fluorescence image into a plurality of areas, and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion step of converting pixel values of a combination image of the selected unit area images based on a representative value selected from among the first values associated with each of the selected combinations of unit area images.
- 2. The information processing method according to claim 1, wherein the selected combination of unit area images corresponds to an observation range to be displayed on a display unit, and the range of the combination of unit area images is changed according to the observation range.
- 3. The information processing method according to claim 2, further comprising a display control step of causing the display unit to display a range corresponding to the observation range.
- 4. The information processing method according to claim 2, wherein the observation range corresponds to an observation range of a microscope, and the range of the combination of unit area images is changed according to the magnification of the microscope.
- 5. The information processing method according to claim 1, wherein the first image data is image data whose dynamic range has been adjusted based on a pixel value range obtained according to a predetermined rule in the original image data of the first image data.
- 6. The information processing method according to claim 5, wherein the pixel values of the original image data are obtained by multiplying the first image data by the associated representative value.
- 7. The information processing method according to claim 6, wherein the storing step further stores, in association with each other, second image data whose areas differ in size from the areas of the first image data with respect to the fluorescence image, the second image data being obtained by re-dividing the fluorescence image into a plurality of areas, and a first value indicating a pixel value range for each of the second image data.
- 8. The information processing method according to claim 7, wherein, when the magnification of the microscope exceeds a predetermined value, a combination of the second image data corresponding to the observation range is selected, and the conversion step converts pixel values for the selected combination of second image data based on a representative value selected from the respective first values associated with the selected combination of second image data.
- 9. The information processing method according to claim 8, wherein the pixel value range is a range based on a statistic in the original image data corresponding to the first image data.
- 10. The information processing method according to claim 9, wherein the statistic is any one of a maximum value, a mode value, and a median value.
- 11. The information processing method according to claim 10, wherein the pixel value range is a range between a minimum value in the original image data and the statistic.
- 12. The information processing method according to claim 11, wherein the first image data is data obtained by dividing the pixel values of the original image data corresponding to the unit area image by the first value, and the conversion step multiplies each of the first image data in the selected unit area images by the corresponding first value and divides by the maximum of the respective first values associated with the combination of the selected unit area images.
- 13. The information processing method according to claim 12, further comprising: a first input step of inputting a calculation method for the statistic; an analysis step of calculating the statistic in response to the input of the input unit; and a data generation step of generating, based on the analysis in the analysis step, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a pixel value range for each of the first image data.
- 14. The information processing method according to claim 13, further comprising a second input step of further inputting information on at least one of the display magnification and the observation range, wherein the conversion step selects the combination of the first images in response to the input of the second input step.
- 15. The information processing method according to claim 14, wherein the display control step causes the display unit to display display forms relating to the first input step and the second input step, the method further comprising an operation step of indicating a position of any one of the display forms, wherein the first input step and the second input step input related information in accordance with the instruction in the operation step.
- 16. The information processing method according to claim 15, wherein the fluorescence image is one of a plurality of fluorescence images generated from an imaging subject for each of a plurality of fluorescence wavelengths, the method further comprising a data generation step of dividing each of the plurality of fluorescence images into image data and a coefficient, which is the first value for the image data.
- 17. The information processing method according to claim 16, further comprising an analysis step of performing cell analysis based on the pixel values converted in the conversion step, wherein the analysis step of performing the cell analysis is performed based on an image range designated by an operator.
- 18. An information processing apparatus comprising: a storage unit that stores, in association with each other, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion unit that converts pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images.
- 19. A program causing an information processing apparatus to execute: a storage step of storing, in association with each other, first image data obtained by dividing a fluorescence image into a plurality of regions and a first value indicating a predetermined pixel value range for each of the first image data; and a conversion step of converting pixel values of a combination image of the selected first images based on a representative value selected from among the first values associated with each of the selected combinations of the first images.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/560,699 US20240354913A1 (en) | 2021-05-27 | 2022-02-24 | Information processing method, information processing device, and program |
CN202280036546.5A CN117396749A (zh) | 2021-05-27 | 2022-02-24 | 信息处理方法、信息处理装置及程序 |
JP2023524000A JPWO2022249598A1 (ja) | 2021-05-27 | 2022-02-24 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021089480 | 2021-05-27 | ||
JP2021-089480 | 2021-05-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022249598A1 true WO2022249598A1 (ja) | 2022-12-01 |
Family
ID=84229790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/007565 WO2022249598A1 (ja) | 2021-05-27 | 2022-02-24 | 情報処理方法、情報処理装置、及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240354913A1 (ja) |
JP (1) | JPWO2022249598A1 (ja) |
CN (1) | CN117396749A (ja) |
WO (1) | WO2022249598A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016157345A1 (ja) * | 2015-03-27 | 2016-10-06 | 株式会社ニコン | 顕微鏡装置、観察方法、及び制御プログラム |
JP2017198609A (ja) * | 2016-04-28 | 2017-11-02 | 凸版印刷株式会社 | 画像処理方法、画像処理装置、プログラム |
WO2019230878A1 (ja) * | 2018-05-30 | 2019-12-05 | ソニー株式会社 | 蛍光観察装置及び蛍光観察方法 |
JP2020173204A (ja) * | 2019-04-12 | 2020-10-22 | コニカミノルタ株式会社 | 画像処理システム、画像処理方法及びプログラム |
- 2022
- 2022-02-24 CN CN202280036546.5A patent/CN117396749A/zh active Pending
- 2022-02-24 US US18/560,699 patent/US20240354913A1/en active Pending
- 2022-02-24 WO PCT/JP2022/007565 patent/WO2022249598A1/ja active Application Filing
- 2022-02-24 JP JP2023524000A patent/JPWO2022249598A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2022249598A1 (ja) | 2022-12-01 |
CN117396749A (zh) | 2024-01-12 |
US20240354913A1 (en) | 2024-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11971355B2 (en) | Fluorescence observation apparatus and fluorescence observation method | |
US9575304B2 (en) | Pathology slide scanners for fluorescence and brightfield imaging and method of operation | |
US11269171B2 (en) | Spectrally-resolved scanning microscope | |
US11106026B2 (en) | Scanning microscope for 3D imaging using MSIA | |
US11143855B2 (en) | Scanning microscope using pulsed illumination and MSIA | |
WO2021177446A1 (en) | Signal acquisition apparatus, signal acquisition system, and signal acquisition method | |
JP2013003386A (ja) | 撮像装置およびバーチャルスライド装置 | |
WO2022249598A1 (ja) | 情報処理方法、情報処理装置、及びプログラム | |
US11994469B2 (en) | Spectroscopic imaging apparatus and fluorescence observation apparatus | |
WO2022138374A1 (ja) | データ生成方法、蛍光観察システムおよび情報処理装置 | |
JP2012189342A (ja) | 顕微分光測定装置 | |
US20220413275A1 (en) | Microscope device, spectroscope, and microscope system | |
WO2022080189A1 (ja) | 生体試料検出システム、顕微鏡システム、蛍光顕微鏡システム、生体試料検出方法及びプログラム | |
WO2023189393A1 (ja) | 生体試料観察システム、情報処理装置及び画像生成方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22810879 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023524000 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18560699 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280036546.5 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22810879 Country of ref document: EP Kind code of ref document: A1 |