
WO2008062528A1 - Fundus image analyzer - Google Patents

Fundus image analyzer

Info

Publication number
WO2008062528A1
WO2008062528A1 (PCT/JP2006/323413)
Authority
WO
WIPO (PCT)
Prior art keywords
fundus image
image
fundus
luminance distribution
distribution information
Prior art date
Application number
PCT/JP2006/323413
Other languages
French (fr)
Japanese (ja)
Inventor
Enrico Grisan
Alfredo Ruggeri
Massimo De Luca
Original Assignee
Nidek Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nidek Co., Ltd. filed Critical Nidek Co., Ltd.
Priority to PCT/JP2006/323413 priority Critical patent/WO2008062528A1/en
Publication of WO2008062528A1 publication Critical patent/WO2008062528A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes

Definitions

  • The present invention relates to a fundus image analysis apparatus that analyzes a fundus image.
  • Conventionally, apparatuses that analyze a fundus image obtained by a fundus camera or the like are known (see, for example, Patent Document 1 and Patent Document 2).
  • Such an analysis device detects an optic disc using image processing technology and analyzes the state of the optic disc.
  • Patent Document 1: JP-A-9-313447
  • Patent Document 2: JP-A-11-151206
  • The present invention is characterized by the following configuration.
  • A fundus image analysis apparatus comprising: storage means for storing a photographed fundus image; mask means for identifying and masking blood vessel portions in the stored fundus image by image processing; luminance distribution acquisition means for obtaining luminance information for all pixels of the fundus image other than those masked by the mask means, thereby acquiring luminance distribution information for the entire fundus image; image analysis means for analyzing the fundus image by comparing the acquired luminance distribution information with reference luminance distribution information obtained in advance; and notification means for reporting the result analyzed by the image analysis means.
  • The image analysis means performs the analysis using a neural network.
  • The image analysis means compares the luminance distribution information with the reference luminance distribution information, divides the luminance distribution information into three zones (a dark zone, a gray zone, and a light zone), and analyzes at least one of the dark zone and the light zone using a neural network.
  • The image analysis means compares the luminance distribution information of the entire fundus image with reference luminance distribution information obtained in advance to analyze the fundus image, then further divides the entire fundus image into predetermined sectors and analyzes the sectors other than the portions masked by the mask means.
  • The fundus image is an image photographed with green illumination light.
  • The image analysis means determines, by the analysis, whether diabetic retinopathy has developed.
  • FIG. 1 is a diagram showing a schematic configuration of a fundus imaging apparatus in the present embodiment.
  • FIG. 2 is a diagram showing an optical system of a fundus imaging apparatus in the present embodiment.
  • FIG. 3 is a diagram showing a configuration of a focus chart.
  • FIG. 4 is a diagram showing an anterior segment image in which an alignment index is formed.
  • FIG. 5A is a diagram for explaining focus adjustment of the fundus imaging apparatus in the present embodiment.
  • FIG. 5B is a diagram for explaining focus adjustment of the fundus imaging apparatus in the present embodiment.
  • FIG. 6 is a flow chart for analyzing the presence or absence of diabetic retinopathy using a neural network.
  • FIG. 7 is a schematic diagram showing a fundus image of a patient's eye developing diabetic retinopathy.
  • FIG. 8 is a diagram showing luminance distribution information of the entire fundus image after removing blood vessels and optic discs.
  • FIG. 9 is a diagram showing a state in which the fundus image is divided into local luminance distribution information.
  • FIG. 1 is a schematic diagram showing a schematic configuration of the fundus imaging apparatus of the present embodiment.
  • The fundus imaging apparatus 1 includes a control unit 2 having a CPU and the like, a monitor 3 for displaying various information, an instruction input unit 4 for making various settings, a storage unit 5 for storing fundus images and the like, an image analysis unit 6 that analyzes the stored fundus images using neural network technology, an output unit 7 for outputting the analysis results, a photographing unit 100 having an optical system for photographing the fundus of the eye E, and a drive unit 101 for driving the photographing unit 100 in the front-back, up-down, and left-right directions (XYZ directions) relative to the eye E.
  • Reference numeral 8 denotes a photographing window; by positioning the eye E at the photographing window 8, the fundus of the eye E is photographed by the photographing unit 100 inside the apparatus 1.
  • The monitor 3, instruction input unit 4, storage unit 5, image analysis unit 6, output unit 7, photographing unit 100 (light sources, light receiving elements, etc.), and drive unit 101 are electrically connected to the control unit 2 and are drive-controlled by command signals from the control unit 2.
  • FIG. 2 is a diagram illustrating a configuration of an optical system included in the photographing unit 100.
  • The optical system for illuminating the examined eye includes a light source 10 that emits infrared light for fundus illumination, a light source 11 that emits visible flash light for photographing the fundus, a dichroic mirror 12 that reflects infrared light and transmits visible light, a collimator lens 13, a focus chart 14, a condenser lens 15, a ring slit 16 having a ring-shaped opening, a mirror 17, a relay lens 18, and a half mirror 19.
  • the light source 10 and the light source 11 have a conjugate relationship with the pupil of the eye E to be examined.
  • Any light source that emits visible light can be used as the light source 11; in the present embodiment, the light source 11 emits green monochromatic light so as to enhance the blood vessel image in the photographed fundus image.
  • In the focus chart 14, a ring-shaped chart 14b having a predetermined size is formed on a filter 14a that transmits visible light and infrared light.
  • This chart 14b is formed by a coating process that transmits visible light but does not transmit infrared light.
  • The focus chart 14 is moved along the optical axis together with a focusing lens 23 (described later) by the driving means 102, and forms a ring image on the fundus of the eye E as an index at the time of focusing.
  • the ring slit 16 is placed at a position conjugate with the pupil of the eye E through the relay lens 18.
  • Infrared light emitted from the light source 10 is reflected by the dichroic mirror 12 and then illuminates the focus chart 14 from behind via the collimator lens 13.
  • the infrared light that has passed through the focus chart 14 illuminates the ring slit 16 through the condenser lens 15.
  • Infrared light that has passed through the ring slit 16 is reflected by the mirror 17, passes through the relay lens 18, is reflected by the half mirror 19, forms an image at the pupil of the eye E, and illuminates the fundus while forming the ring image that serves as the focus index.
  • Visible light emitted from the light source 11 (green monochromatic light in this embodiment) passes through the dichroic mirror 12 and then follows the same optical path as the infrared light from the light source 10 to illuminate the fundus of the eye E. Since the chart 14b formed on the focus chart 14 transmits visible light, the visible light emitted from the light source 11 uniformly illuminates the fundus without forming a ring image on it.
  • The optical system for photographing the fundus of the eye E includes, from the eye E side, the half mirror 19, an objective lens 20, a dichroic mirror 21, an aperture 22, a focusing lens 23, an imaging lens 24, a half mirror 25, and a two-dimensional light receiving element 26.
  • the two-dimensional light receiving element 26 has a conjugate relationship with the fundus of the eye E to be examined.
  • the diaphragm 22 is disposed at a position conjugate with the pupil of the eye E through the objective lens 20.
  • the focusing lens 23 is moved along the optical axis together with the focus chart by the driving means 102.
  • The illumination light from the light source 10 or 11 reflected from the fundus passes through the half mirror 19 and the objective lens 20 to form an image, and is then received by the two-dimensional light receiving element 26 via the dichroic mirror 21, aperture 22, focusing lens 23, imaging lens 24, and half mirror 25.
  • Reference numerals 32a and 32b denote light sources that project, from the front of the eye E, alignment indices for detecting the position of the eye in the up-down, left-right, and front-rear directions, and that also illuminate the anterior segment of the eye E.
  • the light sources 32a and 32b are a pair of rectangular LEDs arranged symmetrically with respect to the photographing optical axis L1, and emit infrared light having a wavelength different from that of the light source 10 described above.
  • The light sources 32a and 32b project finite-distance indices (rectangular indices extending in a direction perpendicular to the examined eye) onto the cornea of the eye E with divergent light beams at a predetermined projection angle, and illuminate the entire anterior segment.
  • The optical system for photographing the anterior segment of the eye E includes the half mirror 19, objective lens 20, and dichroic mirror 21 shared with the fundus photographing optical system, and a field lens 28, a mirror 29, an imaging lens 30, and a two-dimensional light receiving element 31 arranged in the reflection direction of the dichroic mirror 21.
  • the two-dimensional light receiving element 31 has a conjugate relationship with the pupil of the eye E to be examined.
  • The dichroic mirror 21 transmits visible light and the infrared light from the light source 10, and reflects the infrared light emitted from the light sources 32a and 32b.
  • Reference numeral 27 denotes a fixation lamp that emits visible light, placed in the reflection direction of the half mirror 25.
  • To take a photograph, the subject's face is brought close to the apparatus, and the eye E to be photographed is positioned at the photographing window 8 shown in FIG. 1.
  • The control unit 2 turns on one of the nine fixation lamps 27 (here, the central fixation lamp located on the optical axis) to fixate the eye E.
  • The control unit 2 turns on the light sources 32a and 32b, causes the two-dimensional light receiving element 31 to receive an anterior segment image of the eye E, and aligns the apparatus (photographing unit 100) with the eye E based on the light reception result.
  • FIG. 4 is a schematic diagram showing an anterior segment image received by the two-dimensional light receiving element 31.
  • the anterior segment of the eye E is illuminated, and rectangular alignment indexes 33L and 33R as shown are projected onto the cornea of the eye E.
  • The control unit 2 identifies the pupil P by image processing of the anterior segment image received by the two-dimensional light receiving element 31 and obtains the center P of the identified pupil. The control unit 2 also obtains, by image processing of the anterior segment image, the intermediate position M between the alignment indices 33L and 33R.
  • The alignment state of the photographing unit 100 with respect to the eye E in the up-down and left-right directions is detected from the positional relationship between the obtained center P and the intermediate position M, and the alignment state in the front-rear direction is detected from the image interval of the alignment indices 33L and 33R. Since the alignment indices 33L and 33R are projections of finite-distance indices, their image interval changes as the distance between the eye E and the photographing unit 100 changes in the front-rear direction. In the present embodiment, the index interval corresponding to the proper alignment distance in the front-rear direction is obtained in advance as a predetermined value and stored in the storage unit 5.
  • The control unit 2 obtains, from the intermediate position M of the index images formed by the light sources 32a and 32b and the pupil center P obtained from the anterior segment image, the distance by which the photographing unit must move, and drives the drive unit 101 so that the two coincide, moving the entire photographing unit 100 in the vertical and horizontal directions to perform alignment. The control unit 2 also drives the drive unit 101 so that the interval between the alignment indices 33L and 33R becomes the predetermined value, moving the entire photographing unit 100 in the front-rear direction relative to the eye. When each alignment state falls within a predetermined allowable range, the control unit determines that alignment is complete.
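The alignment computation described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the XY error is the vector from the midpoint M of the two corneal index images to the pupil center P, and the front-rear (Z) error is the deviation of the measured index interval from the stored predetermined value.

```python
import numpy as np

def alignment_offsets(pupil_center, index_left, index_right, ref_interval):
    """Return (dx, dy, dz) drive errors; names are illustrative."""
    p = np.asarray(pupil_center, dtype=float)
    l = np.asarray(index_left, dtype=float)
    r = np.asarray(index_right, dtype=float)
    m = (l + r) / 2.0                    # intermediate position M
    dx, dy = p - m                       # up-down / left-right error (P vs. M)
    interval = np.linalg.norm(r - l)     # image interval of indices 33L, 33R
    dz = interval - ref_interval         # front-rear error vs. stored value
    return dx, dy, dz
```

Driving the unit until all three offsets fall within the allowable range corresponds to the completion test in the text.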
  • When alignment is complete, the control unit 2 turns off the light sources 32a and 32b, turns on the fundus illumination light source 10 to irradiate the fundus of the eye E with infrared light, and receives the reflected light with the two-dimensional light receiving element 26 to obtain a fundus image.
  • FIG. 5A is a schematic view showing a fundus image received by the two-dimensional light receiving element 26; reference numeral 200 denotes the index projected onto the fundus by the focus chart 14.
  • The control unit 2 sets a line 210 passing through the index 200 and extracts, from the luminance information on the set line 210, the luminance information corresponding to the index 200.
  • FIG. 5B is a schematic diagram showing luminance information 220 on the set line 210.
  • The vertical axis indicates the luminance value, the horizontal axis indicates the position, and 200' indicates the luminance information corresponding to the index 200.
  • Here, luminance information corresponding to other parts of the fundus, such as blood vessels, is excluded.
  • If the focus is not correct, the image of the index 200 projected onto the fundus is blurred, so the luminance information 200' becomes low and broad, as shown by the dotted line in FIG. 5B.
  • The control unit 2 detects the luminance information 200' corresponding to the index 200 and, based on this luminance information 200', moves the focus chart 14 and the focusing lens 23 in conjunction using the driving means 102 so that the peak height L1 is maximal and the width W1 is narrowest.
  • In this embodiment, the index 200 is projected onto the fundus using the focus chart 14, and the focus is adjusted based on the light receiving state (luminance information) of the index 200.
  • However, focusing is not limited to this; a predetermined site, such as a blood vessel, can also be extracted from the photographed fundus image, and the focus can be adjusted based on the luminance information of that site.
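The focus criterion described above (maximize peak height L1, minimize width W1 of the index profile) can be sketched as a simple metric over the luminance values sampled along line 210. This is an illustrative reconstruction; the width is approximated as the span of samples above half maximum.

```python
import numpy as np

def focus_metric(profile):
    """Return (peak height L1, approximate width W1) of a line profile."""
    profile = np.asarray(profile, dtype=float)
    base = profile.min()
    height = profile.max() - base            # peak height L1
    half = base + height / 2.0
    above = np.nonzero(profile >= half)[0]   # samples at or above half maximum
    width = above[-1] - above[0] + 1         # approximate width W1 in samples
    return height, width
```

A sharply focused index yields a higher, narrower peak than a blurred one, so the drive loop would move the focus chart and lens toward larger `height` and smaller `width`.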
  • When focusing is complete, the control unit 2 turns off the light source 10 and flashes the light source 11 to illuminate the fundus with visible light (green monochromatic light in this embodiment).
  • the reflected light from the fundus passes through the half mirror 19 and the objective lens 20 and forms an image.
  • The light is received by the two-dimensional light receiving element 26 via the dichroic mirror 21, the aperture 22, the focusing lens 23, the imaging lens 24, and the half mirror 25.
  • The control unit 2 stores the obtained fundus image in the storage unit 5 as a fundus image of the eye E, then sequentially turns on the other fixation lamps 27 and obtains a plurality of fundus images of the same eye E by the same method.
  • The ophthalmologic photographing apparatus of this embodiment also serves as a fundus image analyzer that analyzes a photographed fundus image using a neural network and can determine whether diabetic retinopathy (hereinafter simply referred to as DR) has developed.
  • In this embodiment, the fundus images obtained by sequentially lighting the nine fixation lamps are stitched together by existing image processing techniques to form a single fundus image.
  • Here, an example of analysis using a fundus image obtained with the presentation of a single fixation lamp is described.
  • First, the image analysis unit 6 reads out the fundus image 300 shown in FIG. 7 from the storage unit 5.
  • In the fundus image 300, a blood vessel 301 and an optic nerve head 302 (which may not be captured depending on the position of the fixation lamp) appear.
  • When DR has developed, a dark portion 303 caused by bleeding and a bright portion 304 called a cotton wool spot also appear.
  • The image analysis unit 6 uses image processing to identify the portions of the fundus image that would interfere with the subsequent analysis, such as blood vessels and the optic nerve head, removes (masks) the corresponding pixels, counts the luminance of all remaining pixels on a 0 to 255 scale, and obtains the luminance distribution information of the entire fundus image.
  • Because the image is taken with green monochromatic light, red portions such as blood vessels on the fundus appear black, which makes the subsequent extraction of blood vessels and dark portions easier.
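The masking and counting step above amounts to a masked 256-bin luminance histogram. The following sketch assumes an 8-bit grayscale image and a boolean mask marking the vessel and optic-disc pixels (how that mask is produced is left to the image processing stage):

```python
import numpy as np

def global_luminance_distribution(image, mask):
    """Count unmasked pixels into 256 luminance bins (0-255)."""
    values = image[~mask]                          # drop masked pixels
    hist = np.bincount(values.ravel(), minlength=256)
    return hist                                    # hist[v] = pixel count at luminance v
```

The resulting histogram is the "global luminance distribution information" compared against the healthy reference distribution in FIG. 8.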
  • FIG. 8 is a diagram showing luminance distribution information (global luminance distribution information) of the entire fundus image after removing the blood vessel 301 and the optic disc 302.
  • The horizontal axis is the luminance value from 0 to 255, and the vertical axis indicates the number of pixels. A solid line 310 indicates the luminance distribution information based on the fundus image 300, and a dotted line 320 indicates the luminance distribution information of healthy eyes obtained in advance, which serves as the reference luminance distribution information.
  • The healthy-eye luminance distribution information 320 (dotted line) is determined in advance by quantitatively aggregating luminance distribution information obtained from the fundus images of a plurality of healthy subjects.
  • The image analysis unit 6 divides the distribution into three zones (a dark zone, a gray zone, and a light zone) at the boundaries where the luminance distribution information to be analyzed (solid line 310) and the healthy-eye luminance distribution information 320 intersect.
  • The image analysis unit 6 analyzes the luminance information of the portion of the luminance distribution information 310 corresponding to the dark zone using a neural network.
  • The gray zone and the light zone are analyzed with the neural network in the same way, and whether DR is indicated is determined based on these output results.
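The zone boundaries above are the luminance values where the patient curve 310 and the reference curve 320 cross. A minimal sketch, assuming both curves are given as 1-D histograms over the same luminance axis (the crossing-detection details are illustrative, not specified by the patent):

```python
import numpy as np

def zone_boundaries(patient_hist, reference_hist):
    """Luminance indices where the two distributions cross (sign changes)."""
    diff = np.asarray(patient_hist, float) - np.asarray(reference_hist, float)
    signs = np.sign(diff)
    return np.nonzero(signs[:-1] * signs[1:] < 0)[0]   # crossing positions

def split_zones(hist, boundaries):
    """Partition a histogram at the outermost crossings into three zones."""
    lo, hi = boundaries[0], boundaries[-1]
    return hist[: lo + 1], hist[lo + 1 : hi + 1], hist[hi + 1 :]
```

Each of the three slices (dark, gray, light) then supplies feature values for the neural network stage.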
  • The neural network of the present embodiment is a feed-forward network composed of three layers (an input layer, an intermediate layer, and an output layer) and is trained with input/output learning data (teacher signals) using the backpropagation method.
  • The input data are feature values required for DR determination, such as the total number of pixels in each zone, the maximum (minimum) luminance, the difference in pixel count between the gray zone and the dark (light) zone, the number of pixels at the peak value in each zone, and the number of pixels at the bottom value in each zone.
  • The weighting parameters of the network are systematically adjusted until an acceptable response is achieved.
  • The network is trained on all of the prepared training data sets and then tested on an independent test data set; it is refined until it correctly classifies both the training and test sets.
  • The tolerance for both is set to a low value so that higher accuracy can be achieved.
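The three-layer feed-forward network with backpropagation described above can be sketched as follows. Layer sizes, the learning rate, and the squared-error loss are illustrative assumptions; the patent does not specify them, and the feature vectors would come from the zone statistics listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    """Input / hidden / output feed-forward net trained by backpropagation."""
    def __init__(self, n_in, n_hidden):
        self.w1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0, 0.5, (n_hidden, 1))

    def forward(self, x):
        self.h = sigmoid(x @ self.w1)        # intermediate-layer activations
        self.y = sigmoid(self.h @ self.w2)   # DR / non-DR score in (0, 1)
        return self.y

    def train_step(self, x, t, lr=0.5):
        y = self.forward(x)
        # backpropagation of squared error through the sigmoid layers
        dy = (y - t) * y * (1 - y)
        dw2 = self.h.T @ dy
        dh = dy @ self.w2.T * self.h * (1 - self.h)
        dw1 = x.T @ dh
        self.w2 -= lr * dw2
        self.w1 -= lr * dw1
        return float(np.mean((y - t) ** 2))
```

Training would iterate `train_step` over the prepared feature/teacher-signal pairs until the error on both the training and test sets falls below the chosen tolerance.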
  • FIG. 9 shows a diagram in which the fundus image is divided into local luminance distribution information.
  • Next, the image analysis unit 6 masks the pixels corresponding to blood vessels and the optic nerve head on the fundus image and divides the fundus image into small regions 400 (sectors).
  • For each sector 400 containing no masked pixel, the image analysis unit 6 classifies the pixels 401 in the sector as dark, gray, or light and counts each class. If the number of dark or light pixels exceeds a predetermined number, the sector 400 is likely to indicate DR and is counted as an abnormal sector. All sectors 400 without masked pixels are analyzed in the same way to detect abnormal sectors.
  • The image analysis unit 6 then analyzes the dark-pixel portions and the light-pixel portions of each sector 400 determined to be abnormal, using the neural network in the same manner as described above, and judges from the output whether each abnormal sector 400 indicates DR.
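The sector screening step above can be sketched as follows. The sector size and the dark/light thresholds are illustrative placeholders (the patent only says "predetermined"); sectors containing any masked pixel are skipped, and a sector is flagged when its dark or light pixel count exceeds the threshold.

```python
import numpy as np

def classify_sectors(image, mask, sector=8, dark_thr=60, light_thr=180,
                     abnormal_count=10):
    """Return top-left corners of sectors flagged as abnormal."""
    abnormal = []
    h, w = image.shape
    for r in range(0, h - sector + 1, sector):
        for c in range(0, w - sector + 1, sector):
            m = mask[r:r + sector, c:c + sector]
            if m.any():                       # skip sectors with masked pixels
                continue
            px = image[r:r + sector, c:c + sector]
            n_dark = int((px < dark_thr).sum())
            n_light = int((px > light_thr).sum())
            if n_dark > abnormal_count or n_light > abnormal_count:
                abnormal.append((r, c))
    return abnormal
```

Each flagged sector would then be passed to the neural network for the final per-sector DR judgment.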
  • If the neural network output indicates DR, the control unit 2 displays (notifies) that fact on the monitor 3; if the output does not indicate DR, the control unit 2 likewise displays that fact on the monitor 3.
  • In the above description, the fundus image is analyzed using a neural network and the result is used to determine the presence or absence of the onset of diabetic retinopathy, but the analysis is not limited to this.
  • After removing (masking) the blood vessels, optic nerve head, and the like from the fundus image and obtaining the luminance distribution of the remaining pixels, a neural network can similarly be used to analyze other fundus images, either as a whole or based on local differences within the fundus.
  • In the embodiment described above, the fundus imaging apparatus itself performs the image analysis, but the present invention is not limited to this.
  • The image analysis described above can also be applied to a device that has no fundus photographing function and analyzes fundus images obtained by other devices.


Abstract

A fundus image analyzer enabling efficient analysis of a fundus image with as little manual operation as possible. The analyzer for analyzing a captured fundus image comprises storage means for storing the captured fundus image; mask means for identifying a blood vessel part by image-processing the stored fundus image and masking the identified part; luminance distribution acquiring means for determining luminance information for all pixels of the fundus image except the masked part and acquiring luminance distribution information for the whole fundus image; image analyzing means for comparing the acquired luminance distribution information with predetermined reference luminance distribution information to analyze the fundus image; and indicating means for indicating the results of the analysis by the image analyzing means.

Description

Fundus image analyzer

Technical field

[0001] The present invention relates to a fundus image analysis apparatus that analyzes a fundus image.

Background art

[0002] Conventionally, apparatuses that analyze a fundus image obtained by a fundus camera or the like are known (see, for example, Patent Document 1 and Patent Document 2). Such an analysis apparatus detects the optic disc using image processing technology and analyzes the state of the optic disc.

[0003] Patent Document 1: JP-A-9-313447
Patent Document 2: JP-A-11-151206
Disclosure of the invention

Problems to be solved by the invention

[0004] However, in an apparatus that analyzes fundus images as described above, human work is involved in the analysis, such as manually specifying the optic disc region, so the analysis is time-consuming and inefficient. In addition, since the analysis is basically based on the state of the optic disc, its applications are limited.

[0005] In view of the above problems of the prior art, it is a technical object of the present invention to provide a fundus image analysis apparatus that can analyze a fundus image efficiently with as little human work as possible.

Means for solving the problems
[0006] In order to solve the above problems, the present invention is characterized by the following configuration.

[0007] (1) A fundus image analysis apparatus for analyzing a photographed fundus image, comprising: storage means for storing the photographed fundus image; mask means for identifying and masking blood vessel portions in the stored fundus image by image processing; luminance distribution acquisition means for obtaining luminance information for all pixels of the fundus image other than those masked by the mask means, thereby acquiring luminance distribution information for the entire fundus image; image analysis means for analyzing the fundus image by comparing the acquired luminance distribution information with reference luminance distribution information obtained in advance; and notification means for reporting the result analyzed by the image analysis means.
[0008] (2) In the fundus image analysis apparatus of (1), the image analysis means performs the analysis using a neural network.

[0009] (3) In the fundus image analysis apparatus of (2), the image analysis means compares the luminance distribution information with the reference luminance distribution information, divides the luminance distribution information into three zones (a dark zone, a gray zone, and a light zone), and analyzes at least one of the dark zone and the light zone using a neural network.

[0010] (4) In the fundus image analysis apparatus of (2), the image analysis means compares the luminance distribution information of the entire fundus image with reference luminance distribution information obtained in advance to analyze the fundus image, then further divides the entire fundus image into predetermined sectors and analyzes the sectors other than the portions masked by the mask means.

[0011] (5) In the fundus image analysis apparatus of any of (1) to (4), the fundus image is an image photographed with green illumination light.

[0012] (6) In the fundus image analysis apparatus of any of (1) to (5), the image analysis means determines the presence or absence of the onset of diabetic retinopathy by the analysis.
Effects of the invention

[0013] According to the present invention, a fundus image can be analyzed efficiently with as little human work as possible.
Brief description of the drawings

[0014] FIG. 1 is a diagram showing a schematic configuration of the fundus imaging apparatus in the present embodiment.
FIG. 2 is a diagram showing the optical system of the fundus imaging apparatus in the present embodiment.
FIG. 3 is a diagram showing the configuration of the focus chart.
FIG. 4 is a diagram showing an anterior segment image on which alignment indices are formed.
FIG. 5A is a diagram for explaining focus adjustment of the fundus imaging apparatus in the present embodiment.
FIG. 5B is a diagram for explaining focus adjustment of the fundus imaging apparatus in the present embodiment.
FIG. 6 is a flow chart for analyzing the presence or absence of diabetic retinopathy using a neural network.
FIG. 7 is a schematic diagram showing a fundus image of a patient's eye in which diabetic retinopathy has developed.
FIG. 8 is a diagram showing the luminance distribution information of the entire fundus image after removing blood vessels and the optic disc.
FIG. 9 is a diagram showing a state in which the fundus image is divided into regions for local luminance distribution analysis.
Explanation of symbols

[0015]
1 Fundus imaging apparatus
2 Control unit
6 Image analysis unit
100 Photographing unit
300 Fundus image
301 Blood vessel
302 Optic disc
303 Dark portion
304 Light portion
310 Luminance distribution information
320 Luminance distribution information
BEST MODE FOR CARRYING OUT THE INVENTION
[0016] Embodiments of the present invention will now be described with reference to the drawings. FIG. 1 is a schematic diagram showing the configuration of the fundus imaging apparatus of the present embodiment. The fundus imaging apparatus 1 comprises a control unit 2 consisting of control circuitry that includes a CPU and the like; a monitor 3 for displaying various information; an instruction input unit 4 for making various settings; a storage unit 5 for storing fundus images and the like; an image analysis unit 6 that analyzes the stored fundus images using neural network techniques; an output unit 7 for outputting the analysis results; an imaging unit 100 having an optical system for photographing the fundus of the subject eye E; and a drive unit 101 for driving the imaging unit 100 in the front-back, up-down, and left-right directions (the XYZ directions) relative to the subject eye E. Reference numeral 8 denotes an imaging window; by positioning the subject eye E at this window, the fundus of eye E is photographed by the imaging unit 100 inside the apparatus 1. The monitor 3, instruction input unit 4, storage unit 5, image analysis unit 6, output unit 7, imaging unit 100 (light sources, light-receiving elements, etc.), and drive unit 101 are electrically connected to the control unit 2 and are driven and controlled by command signals from it.
[0017] FIG. 2 is a diagram showing the configuration of the optical system of the imaging unit 100.
[0018] The optical system for illuminating the subject eye consists of a light source 10 that emits infrared light for fundus illumination; a light source 11 that emits visible flash light for photographing the fundus; a dichroic mirror 12 that reflects infrared light and transmits visible light; a collimator lens 13; a focus chart 14; a condenser lens 15; a ring slit 16 having a ring-shaped aperture; a mirror 17; a relay lens 18; and a half mirror 19. The light sources 10 and 11 are conjugate with the pupil of the subject eye E. Any light source that emits visible light may be used as the light source 11; in the present embodiment, a light source emitting green monochromatic light is used in order to enhance the blood-vessel image in the photographed fundus image.
[0019] As shown in FIG. 3, the focus chart 14 has a ring-shaped chart 14b of a predetermined size formed on a filter 14a that transmits both visible and infrared light. The chart 14b is formed by a coating that transmits visible light but does not transmit infrared light. To distinguish the index from blood vessels and the like, its shape is preferably made sufficiently larger than the thickness of a blood vessel. The focus chart 14 is moved along the optical axis by drive means 102 together with a focusing lens 23 described later, and forms on the fundus of eye E a ring image that serves as an index during focusing. The ring slit 16 is placed at a position conjugate with the pupil of eye E via the relay lens 18.
[0020] Infrared light emitted from the light source 10 is reflected by the dichroic mirror 12 and then illuminates the focus chart 14 from behind via the collimator lens 13. The infrared light that has passed through the focus chart 14 illuminates the ring slit 16 via the condenser lens 15. The infrared light transmitted through the ring slit 16 is reflected by the mirror 17, passes through the relay lens 18, is reflected by the half mirror 19, forms an image at the pupil of eye E, and illuminates the fundus while forming on it a ring image that serves as the focusing index. Visible light emitted from the light source 11 (green monochromatic light in the present embodiment) passes through the dichroic mirror 12 and then follows the same optical path as the infrared light from the light source 10 to illuminate the fundus of eye E. Since the chart 14b formed on the focus chart 14 transmits visible light, the visible light emitted from the light source 11 illuminates the fundus uniformly without forming a ring image on it.
[0021] The optical system for photographing the fundus of eye E consists, from the eye E side, of the half mirror 19, an objective lens 20, a dichroic mirror 21, an aperture stop 22, the focusing lens 23, an imaging lens 24, a half mirror 25, and a two-dimensional light-receiving element 26. The two-dimensional light-receiving element 26 is conjugate with the fundus of eye E. The aperture stop 22 is placed at a position conjugate with the pupil of eye E via the objective lens 20. The focusing lens 23 is moved along the optical axis by the drive means 102 together with the focus chart. Illumination light from the light source 10 or 11 reflected by the fundus passes through the half mirror 19 and the objective lens 20, is brought to an intermediate image, and is then received by the two-dimensional light-receiving element 26 via the dichroic mirror 21, aperture stop 22, focusing lens 23, imaging lens 24, and half mirror 25.
[0022] Reference numerals 32a and 32b denote light sources that project, from in front of eye E, alignment indices for detecting the up-down, left-right, and front-back positions, and that also illuminate the anterior segment of eye E. The light sources 32a and 32b are a pair of rectangular LEDs arranged symmetrically about the photographing optical axis L1, and they emit infrared light of a wavelength different from that of the light source 10 described above. The light sources 32a and 32b project finite-distance indices (rectangular indices extending in the direction perpendicular to the subject eye) onto the cornea of eye E as divergent beams at a predetermined projection angle, while illuminating the entire anterior segment.
[0023] The optical system for photographing the anterior segment of eye E shares the half mirror 19, objective lens 20, and dichroic mirror 21 with the fundus-photographing optical system, and further consists of a field lens 28, a mirror 29, an imaging lens 30, and a two-dimensional light-receiving element 31 arranged in the direction of reflection by the dichroic mirror 21. The anterior segment image of eye E illuminated by the light sources 32a and 32b, together with the alignment indices formed on the cornea, is received by the two-dimensional light-receiving element 31 via the half mirror 19, objective lens 20, dichroic mirror 21, field lens 28, mirror 29, and imaging lens 30. The two-dimensional light-receiving element 31 is conjugate with the pupil of eye E. The dichroic mirror 21 transmits visible light and the infrared light from the light source 10, and reflects the infrared light emitted from the light sources 32a and 32b.
[0024] Reference numeral 27 denotes a fixation lamp unit emitting visible light, placed in the direction of reflection of the half mirror 25; in the present embodiment it has nine fixation lamps in a 3 x 3 arrangement, each emitting light of the same color as the light source 11 (green in the present embodiment). By selectively lighting one of the fixation lamps 27 to fixate eye E, fundus images of different regions can be obtained.
[0025] The operation of the apparatus configured as described above will now be explained.
[0026] First, the subject's face is brought close to the apparatus so that the eye to be photographed (subject eye E) is positioned at the imaging window 8 shown in FIG. 1 and looks into the apparatus. The control unit 2 lights one of the nine fixation lamps 27 (here, the central lamp located on the optical axis) to fixate the eye. The control unit 2 further lights the light sources 32a and 32b, causes the two-dimensional light-receiving element 31 to receive the anterior segment image of eye E, and performs alignment of the apparatus (imaging unit 100) with eye E based on the received image.
[0027] FIG. 4 is a schematic diagram showing the anterior segment image received by the two-dimensional light-receiving element 31. Lighting the light sources 32a and 32b illuminates the anterior segment of eye E and projects the rectangular alignment indices 33L and 33R shown in the figure onto the cornea of eye E. By image processing of the anterior segment image received by the two-dimensional light-receiving element 31, the control unit 2 identifies the pupil P and obtains the center P0 of the identified pupil P. The control unit 2 also obtains, by image processing, the centers of the alignment indices 33L and 33R appearing in the anterior segment image, and the midpoint M of the line segment connecting them. The up-down and left-right alignment state of the imaging unit 100 with respect to eye E is detected from the positional relationship between the obtained center P0 and the midpoint M, and the front-back alignment state is detected by comparing the spacing between the images of the alignment indices 33L and 33R. Since the alignment indices 33L and 33R are projections of finite-distance indices, the spacing between their images changes as the front-back distance between eye E and the imaging unit 100 changes. In the present embodiment, the index spacing corresponding to the proper front-back alignment distance between eye E and the imaging unit 100 is determined in advance as a predetermined value and stored in the storage unit 5.
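The error computation of paragraph [0027] — XY error from the pupil center versus the index midpoint, Z error from the index spacing — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, coordinate conventions, and the sign of the distance error are assumptions.

```python
import numpy as np

def alignment_offsets(pupil_center, index_l, index_r, target_spacing):
    """Estimate XY and Z alignment errors from the anterior-segment image.

    pupil_center    : (x, y) of the detected pupil center P0
    index_l/index_r : (x, y) centers of alignment indices 33L and 33R
    target_spacing  : index spacing (pixels) at the proper working distance
    """
    index_l = np.asarray(index_l, dtype=float)
    index_r = np.asarray(index_r, dtype=float)
    mid = (index_l + index_r) / 2.0                 # midpoint M of the two indices
    dxdy = np.asarray(pupil_center, dtype=float) - mid   # up-down / left-right error
    spacing = np.linalg.norm(index_r - index_l)     # current image spacing of 33L, 33R
    dz = spacing - target_spacing                   # front-back error (sign convention assumed)
    return dxdy, dz
```

The drive unit would then be commanded until both `dxdy` and `dz` fall within their allowable ranges, as the paragraph describes.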
[0028] In this way, the control unit 2 obtains distance information for moving the imaging unit from the midpoint M derived from the index images (alignment indices) formed by the light sources 32a and 32b and from the pupil center P0 derived from the anterior segment image, and drives the drive unit 101 so that the two coincide, moving the entire imaging unit 100 up, down, left, and right to perform the alignment. The control unit 2 also drives the drive unit 101 so that the spacing between the alignment indices 33L and 33R reaches the predetermined spacing (predetermined value), moving the entire imaging unit 100 back and forth relative to the subject eye. When each alignment state falls within its predetermined allowable range, the control unit judges that alignment is complete.
[0029] Next, the control unit 2 turns off the light sources 32a and 32b, lights the fundus-illumination light source 10, irradiates the fundus of eye E with infrared light, and receives the reflected light with the two-dimensional light-receiving element 26 to obtain a fundus image. FIG. 5A is a schematic diagram showing the fundus image received by the two-dimensional light-receiving element 26. Reference numeral 200 denotes the index projected onto the fundus by the focus chart 14.
[0030] In the fundus image received by the two-dimensional light-receiving element 26, the control unit 2 sets a line 210 passing through the index 200 and detects, from the luminance information along the set line 210, the luminance information corresponding to the index 200. FIG. 5B is a schematic diagram showing the luminance information 220 along the set line 210. In the figure, the vertical axis represents luminance value, the horizontal axis represents position, and 200' denotes the luminance information corresponding to the index 200. In FIG. 5B, for simplicity, luminance information corresponding to other features of the fundus, such as blood vessels, is omitted. When the focus state of the imaging unit 100 with respect to the fundus of eye E is not appropriate, the image of the index 200 projected onto the fundus becomes blurred, so that, as shown by the dotted line in FIG. 5B, the peak height L2 of its luminance information 200' decreases and its width W2 at a predetermined threshold T increases. The control unit 2 detects the luminance information 200' corresponding to the index 200 and, based on this luminance information, moves the focus chart 14 and the focusing lens 23 together using the drive unit 102 so as to obtain the highest peak height L1 and the narrowest width W1, and then judges that focusing is complete. In the present embodiment the focus chart 14 is used to project the index 200 onto the fundus and the focus is adjusted based on the light-receiving state (luminance information) of the index 200; however, the invention is not limited to this, and a specific feature such as a blood vessel may be extracted from the photographed fundus image and the focus adjusted based on the luminance information of that feature.
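The peak-height and width-at-threshold focus criterion of paragraph [0030] can be expressed as a simple metric on the 1-D luminance profile along line 210. The sketch below is illustrative; the sampling, width definition, and function names are assumptions.

```python
import numpy as np

def focus_metrics(profile, threshold):
    """Peak height and width-at-threshold of the ring-index profile 200'.

    profile   : 1-D luminance values sampled along line 210
    threshold : luminance threshold T for the width measurement
    """
    profile = np.asarray(profile, dtype=float)
    peak = profile.max()                         # height L (drops when defocused)
    above = np.nonzero(profile > threshold)[0]   # samples exceeding T
    width = 0 if above.size == 0 else int(above[-1] - above[0] + 1)  # width W (grows when defocused)
    return peak, width

def in_focus(profile_a, profile_b, threshold):
    """True if profile_a is the better-focused of the two (higher, narrower peak)."""
    pa, wa = focus_metrics(profile_a, threshold)
    pb, wb = focus_metrics(profile_b, threshold)
    return pa >= pb and wa <= wb
```

The drive unit 102 would step the focus chart and focusing lens together, evaluating these metrics at each step until the peak is highest and the width narrowest.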
[0031] When alignment and focusing are complete, the control unit 2 turns off the light source 10 and flashes the light source 11 to illuminate the fundus with visible light (green monochromatic light in the present embodiment). The reflected light from the fundus passes through the half mirror 19 and the objective lens 20, is brought to an intermediate image, and is then received by the two-dimensional light-receiving element 26 via the dichroic mirror 21, aperture stop 22, focusing lens 23, imaging lens 24, and half mirror 25. The control unit 2 stores the obtained image in the storage unit 5 as a fundus image of subject E, then lights the other fixation lamps 27 in turn and obtains a plurality of fundus images from the same eye E by the same procedure.
[0032] Next, the analysis of the fundus image by the image analysis unit 6 will be explained based on the flowchart of FIG. 6. The ophthalmic photographing apparatus of the present embodiment analyzes the photographed fundus image using the neural network held by the image analysis unit 6, and thus also serves as a fundus image analysis apparatus capable of determining the presence or absence of diabetic retinopathy (hereinafter abbreviated DR). In the ophthalmic photographing apparatus of the present embodiment, the fundus images obtained by sequentially lighting the nine fixation lamps are stitched together into a single fundus image by existing image-processing techniques and that image is analyzed; here, however, to simplify the explanation, an example is given below in which the fundus image obtained by presenting a single fixation lamp is analyzed.
[0033] First, the image analysis unit 6 retrieves from the storage unit 5 a fundus image 300 as shown in FIG. 7. Such a fundus image 300 contains blood vessels 301 and the optic disc 302 (which may not be captured, depending on the lighting position of the fixation lamp). In a case of diabetic retinopathy (DR), dark regions 303 caused by hemorrhage and light regions 304 called cotton wool spots are also captured. Using image-processing techniques, the image analysis unit 6 extracts the parts of the fundus image that would interfere with the subsequent analysis, such as the blood vessels and optic disc, removes (masks) the corresponding pixels, counts every pixel of the remaining fundus image as luminance information on a 0-to-255 scale, and obtains the luminance distribution information of the entire fundus image. Since in the present embodiment the fundus is photographed with green monochromatic light, red features on the fundus such as blood vessels are captured as black, which makes the subsequent extraction of blood vessels and dark regions easier.
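The global luminance distribution of paragraph [0033] — every unmasked pixel counted into 256 luminance bins — can be sketched as follows; the function name is an assumption.

```python
import numpy as np

def masked_histogram(fundus, mask):
    """256-bin luminance histogram of the fundus image, excluding masked pixels.

    fundus : 2-D uint8 array (the green-channel fundus image)
    mask   : 2-D bool array, True where blood vessels / optic disc were detected
    """
    values = fundus[~mask]                                # keep only unmasked pixels
    hist = np.bincount(values.ravel(), minlength=256)[:256]
    return hist
```

The resulting 256-element array is the "luminance distribution information" plotted as the solid line 310 in FIG. 8.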
[0034] FIG. 8 shows the luminance distribution information of the entire fundus image (the global luminance distribution information) after the blood vessels 301 and optic disc 302 have been removed. In the figure, the horizontal axis represents luminance values from 0 to 255 and the vertical axis the number of pixels. The solid line 310 is the luminance distribution information based on the fundus image 300, and the dotted line 320 is the luminance distribution information of healthy subjects obtained in advance, which serves as the reference luminance distribution information. This healthy-subject luminance distribution information (dotted line) 320 is determined in advance by quantitatively deriving the luminance distribution information obtained from the fundus images of a plurality of healthy subjects. The image analysis unit 6 divides the luminance axis into three zones, a dark zone, a gray zone, and a light zone, with boundaries near where the luminance distribution information to be analyzed (solid line) 310 and the healthy-subject luminance distribution information 320 intersect. The image analysis unit 6 analyzes the luminance information of the part of the luminance distribution information 310 falling in the dark zone using a neural network. It likewise analyzes the gray zone and the light zone using the neural network, and judges whether DR is indicated based on these outputs.
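A minimal sketch of the zone split of paragraph [0034]: the boundaries are taken at crossings of the patient and reference histograms. The patent only says the boundaries lie near where the curves intersect, so the exact rule below (first and last sign change of the difference) is an assumption, as are the function names.

```python
import numpy as np

def split_into_zones(patient_hist, reference_hist):
    """Split the luminance axis into dark / gray / light zones at the points
    where the patient histogram crosses the healthy-reference one.

    Returns the two boundary indices (dark|gray and gray|light).
    """
    diff = np.asarray(patient_hist, float) - np.asarray(reference_hist, float)
    sign_change = np.nonzero(np.diff(np.sign(diff)) != 0)[0]
    if sign_change.size < 2:
        raise ValueError("histograms do not cross twice")
    lo, hi = int(sign_change[0]), int(sign_change[-1])   # first and last crossing
    return lo, hi

def zone_features(hist, lo, hi):
    """Per-zone pixel totals - one of the feature inputs fed to the network."""
    hist = np.asarray(hist)
    return hist[:lo + 1].sum(), hist[lo + 1:hi + 1].sum(), hist[hi + 1:].sum()
```

The per-zone totals (together with the other features listed in paragraph [0035]) would then be passed to the neural network.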
[0035] The neural network of the present embodiment is a feedforward network consisting of three layers, an input layer, an intermediate layer, and an output layer, and is trained with input-output data for learning (teacher signals) using the backpropagation method. The input data are the feature data required for the DR judgment, for example the total pixel count of each zone, the maximum (minimum) luminance information, the difference in pixel count between the gray zone and the dark (light) zone, the pixel count at the peak value in each zone, and the pixel count at the bottom value in each zone. Throughout training, if the output error is larger than the desired tolerance, the weighting parameters inside the network matrix of neurons are systematically adjusted until an acceptable response is obtained. Furthermore, after the network has been trained on all of the prepared training map data sets, it is simultaneously tested on an independent test map data set different from them. The network is refined so that it can correctly classify both the training and test sets, and the tolerances for both are set to low values so that higher accuracy can be achieved.
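A three-layer feedforward network trained by backpropagation, as paragraph [0035] describes, can be sketched as follows. The layer sizes, learning rate, and the sigmoid/squared-error choices are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """3-layer feedforward net (input -> hidden -> output), sigmoid units,
    trained by plain backpropagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.w1)        # hidden-layer activations
        self.y = self._sig(self.h @ self.w2)   # output-layer activations
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)               # output delta (squared error)
        d_hid = (d_out @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= self.lr * np.outer(self.h, d_out)     # adjust weights...
        self.w1 -= self.lr * np.outer(x, d_hid)          # ...by gradient descent
        return float(((y - target) ** 2).sum())          # error before the update
```

As in the patent, `train_step` would be repeated over the training set until the output error falls below the desired tolerance.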
[0036] When the luminance distribution information of the entire fundus image has been analyzed by the neural network provided in the image analysis unit 6 and DR is indicated, the control unit 2 displays that result on the monitor 3. If DR is not indicated (the image is not judged abnormal), the image analysis unit 6 next performs an analysis based on local luminance distribution information, again using a neural network. The neural network used here has the same configuration and training as described above.

[0037] FIG. 9 shows the fundus image divided into regions for local luminance distribution information. As before, the image analysis unit 6 masks the pixels corresponding to blood vessels and the optic disc on the fundus image, and divides the fundus image into small regions 400 (sectors). For each sector 400 containing no masked pixels, the image analysis unit 6 classifies the pixels 401 within the sector into dark, gray, and light pixels and counts each. If the number of dark or light pixels exceeds a predetermined count, the sector 400 is counted as an abnormal sector 400 on the grounds that it is likely to indicate DR. All sectors 400 containing no masked pixels are analyzed in the same way, and abnormal sectors 400 are detected.
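The sector scan of paragraph [0037] — divide the image into sectors, skip any sector containing a masked pixel, count dark and light pixels, and flag sectors whose count exceeds a preset number — can be sketched as follows. The sector size, the luminance cut-offs `lo`/`hi`, and the limit are illustrative values, not taken from the patent.

```python
import numpy as np

def abnormal_sectors(fundus, mask, lo, hi, sector=8, limit=10):
    """Return the top-left corners of sectors likely to indicate DR.

    fundus : 2-D uint8 fundus image
    mask   : 2-D bool array, True on vessel / optic-disc pixels
    lo, hi : dark|gray and gray|light luminance boundaries
    """
    flagged = []
    h, w = fundus.shape
    for r in range(0, h - sector + 1, sector):
        for c in range(0, w - sector + 1, sector):
            if mask[r:r + sector, c:c + sector].any():
                continue                              # skip sectors touching a masked pixel
            block = fundus[r:r + sector, c:c + sector]
            dark = int((block <= lo).sum())           # dark-pixel count in this sector
            light = int((block > hi).sum())           # light-pixel count in this sector
            if dark > limit or light > limit:
                flagged.append((r, c))                # counted as an abnormal sector
    return flagged
```

Each flagged sector would then be passed to the neural network for the per-sector judgment described in paragraph [0038].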
[0038] The image analysis unit 6 analyzes the dark-pixel part and the light-pixel part of each sector 400 judged abnormal with the neural network in the same way as above, and, based on the outputs, judges for each abnormal sector 400 whether it indicates DR. When the fundus image has been analyzed sector by sector by the neural network of the image analysis unit 6 and DR is indicated, the control unit 2 displays (reports) that result on the monitor 3. If the neural network output does not indicate DR, the control unit 2 displays that result on the monitor 3. Furthermore, if the neural network cannot judge whether DR is indicated, it outputs an undefined result, and based on this output the control unit 2 displays on the monitor 3 a result indicating "other", on the grounds that some other factor may be involved. These displayed results are printed (output) from the output unit 7 by operating the instruction input unit 4.
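The three-way reporting of paragraph [0038] (DR / no DR / undefined, the last displayed as "other") can be sketched as a simple mapping of the network outputs. The 0.5 threshold and the unanimity rule below are assumptions; the patent does not state how the individual outputs are combined.

```python
def classify(nn_outputs, thresh=0.5):
    """Map network outputs (one per analyzed part) to a reported result."""
    dr_votes = [o > thresh for o in nn_outputs]
    if all(dr_votes):
        return "DR"          # displayed on the monitor as indicating DR
    if not any(dr_votes):
        return "no DR"       # displayed as not indicating DR
    return "undefined"       # displayed as "other" (another factor may be involved)
```

The returned string corresponds to what the control unit 2 would display on the monitor 3 and print via the output unit 7.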
[0039] In the present embodiment the fundus image is analyzed with a neural network and the presence or absence of the onset of diabetic retinopathy is judged from the analysis result, but the invention is not limited to this. By removing (masking) the blood vessels, optic disc, and the like from the fundus image, obtaining the distribution tendency of the luminance of each pixel, and using a neural network in this way, a similar analysis can be performed for other eye diseases as well, based on the entire fundus image or on local differences in the fundus.
[0040] In the present embodiment the fundus imaging apparatus is given a configuration for performing image analysis, but the invention is not limited to this. The image analysis described above can also be applied to an apparatus that has no fundus-photographing function and instead analyzes fundus images obtained by other apparatuses.

Claims

[1] A fundus image analysis apparatus for analyzing a photographed fundus image, comprising:
storage means for storing the photographed fundus image;
mask means for identifying, by image processing, and masking blood-vessel portions of the stored fundus image;
luminance distribution acquisition means for obtaining luminance information of all pixels of the fundus image other than those masked by the mask means, thereby obtaining luminance distribution information of the entire fundus image;
image analysis means for analyzing the fundus image by comparing the luminance distribution information acquired by the luminance distribution acquisition means with reference luminance distribution information obtained in advance; and
notification means for reporting the result of the analysis by the image analysis means.
[2] The fundus image analysis apparatus according to claim 1, wherein the image analysis means performs the analysis using a neural network.
[3] The fundus image analysis apparatus according to claim 2, wherein the image analysis means compares the luminance distribution information with the reference luminance distribution information, divides the luminance distribution information into three zones, a dark zone, a gray zone, and a light zone, and analyzes at least one of the dark zone and the light zone using the neural network.
[4] The fundus image analysis apparatus according to claim 2, wherein, after analyzing the fundus image by comparing the luminance distribution information of the entire fundus image with the reference luminance distribution information obtained in advance, the image analysis means further divides the entire fundus image into predetermined sectors and performs the analysis on the sectors other than the portions masked by the mask means.
[5] 請求項 1〜4の眼底画像解析装置において、前記眼底画像は緑色の照明光により撮 影された画像であることを特徴とする眼底画像解析装置。  5. The fundus image analyzing apparatus according to claim 1, wherein the fundus image is an image taken with green illumination light.
[6] 請求項 1〜5の眼底画像解析装置において、前記画像解析手段は前記解析により 糖尿病性網膜症の発症の有無を求めることを特徴とする眼底画像解析装置。 6. The fundus image analysis apparatus according to claim 1, wherein the image analysis means obtains the presence or absence of the onset of diabetic retinopathy by the analysis.
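The pipeline the claims describe — gather the luminance of every pixel outside the masked region, compare against a reference luminance distribution obtained in advance, and split the result into dark, gray, and light zones — can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation: the function name, the use of quantile cut-offs as zone boundaries, and the default thresholds are all assumptions.

```python
import numpy as np

def zone_split(green_channel, mask, ref_pixels, dark_q=0.05, light_q=0.95):
    """Split unmasked fundus pixels into dark / gray / light zones.

    green_channel: 2-D luminance array (claim 5 suggests the green channel);
    mask: boolean array, True where pixels are excluded (e.g. optic disc);
    ref_pixels: reference luminance sample prepared in advance.
    """
    # Luminance of all pixels other than the masked region (claim 1)
    lum = green_channel[~mask]
    # Zone boundaries derived from the reference distribution; the quantile
    # choice is an assumption standing in for the patent's comparison step.
    dark_thr, light_thr = np.quantile(ref_pixels, [dark_q, light_q])
    # Three-way split into dark, gray, and light zones (claim 3)
    dark = lum[lum < dark_thr]
    light = lum[lum > light_thr]
    gray = lum[(lum >= dark_thr) & (lum <= light_thr)]
    return dark, gray, light
```

In the claimed apparatus the dark and/or light zones would then be passed to a neural-network classifier (claims 2 and 3), and the same comparison could be repeated per sector of the image (claim 4); those stages are omitted here.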
PCT/JP2006/323413 2006-11-24 2006-11-24 Fundus image analyzer WO2008062528A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2006/323413 WO2008062528A1 (en) 2006-11-24 2006-11-24 Fundus image analyzer

Publications (1)

Publication Number Publication Date
WO2008062528A1 true WO2008062528A1 (en) 2008-05-29

Family

ID=39429470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/323413 WO2008062528A1 (en) 2006-11-24 2006-11-24 Fundus image analyzer

Country Status (1)

Country Link
WO (1) WO2008062528A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005508215A (en) * 2001-08-30 2005-03-31 フィラデルフィア オフサルミック イメージング システムズ System and method for screening patients with diabetic retinopathy
JP2005253796A (en) * 2004-03-12 2005-09-22 Yokohama Tlo Co Ltd Ophthalmoscope
JP2005261789A (en) * 2004-03-22 2005-09-29 Kowa Co Fundus image processing method and fundus image processor
JP2006280682A (en) * 2005-04-01 2006-10-19 Hitachi Omron Terminal Solutions Corp Method of supporting diagnostic image provided with noise detection function

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018027273A (en) * 2016-08-19 2018-02-22 学校法人自治医科大学 Staging determination support system of diabetic retinopathy and method of supporting determination of staging of diabetic retinopathy
WO2019073962A1 (en) * 2017-10-10 2019-04-18 国立大学法人 東京大学 Image processing device and program
JPWO2019073962A1 (en) * 2017-10-10 2019-11-14 国立大学法人 東京大学 Image processing apparatus and program
JP2021154159A (en) * 2017-12-28 2021-10-07 株式会社トプコン Machine learning guided imaging system
JP2019177032A (en) * 2018-03-30 2019-10-17 株式会社ニデック Ophthalmologic image processing device and ophthalmologic image processing program
CN110432860A (en) * 2019-07-01 2019-11-12 中山大学中山眼科中心 Become the method and system of ceasma based on lattice in deep learning identification wide area eyeground figure

Similar Documents

Publication Publication Date Title
JP5117396B2 (en) Fundus photographing device
US7810928B2 (en) Evaluating pupillary responses to light stimuli
JP4113005B2 (en) Eye examination equipment
KR100738491B1 (en) Ophthalmic apparatus
RU2612500C2 (en) System and method for remote measurement of optical focus
US7874675B2 (en) Pupillary reflex imaging
US20070171363A1 (en) Adaptive photoscreening system
US6616277B1 (en) Sequential eye screening method and apparatus
US6663242B1 (en) Simultaneous, wavelength multiplexed vision screener
JP5850292B2 (en) Ophthalmic equipment
US20220338733A1 (en) External alignment indication/guidance system for retinal camera
WO2008062528A1 (en) Fundus image analyzer
JP3950876B2 (en) Fundus examination device
US8996097B2 (en) Ophthalmic measuring method and apparatus
JP2010233978A (en) Visual performance inspection device
JP2005102948A (en) Perimeter
JP4542350B2 (en) Anterior eye measurement device
US20230233078A1 (en) Vision Screening Device Including Color Imaging
US7404641B2 (en) Method for examining the ocular fundus
WO2000021432A1 (en) Methods and apparatus for digital ocular imaging
US20210307604A1 (en) Ophthalmic photographing apparatus
JP6325856B2 (en) Ophthalmic apparatus and control method
WO2021085020A1 (en) Ophthalmic device and method for controlling same
JP2005102947A (en) Ophthalmological device
EP4133992A1 (en) Determining color vision ability using a vision screening device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 06833217; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 06833217; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)