
WO2019207800A1 - Ophthalmic image processing device and ophthalmic image processing program - Google Patents

Ophthalmic image processing device and ophthalmic image processing program Download PDF

Info

Publication number
WO2019207800A1
WO2019207800A1 (application PCT/JP2018/017327)
Authority
WO
WIPO (PCT)
Prior art keywords
ophthalmic
image
ophthalmic image
reference information
image processing
Prior art date
Application number
PCT/JP2018/017327
Other languages
French (fr)
Japanese (ja)
Inventor
友洋 宮城
幸弘 樋口
徹哉 加納
壮平 宮崎
涼介 柴
Original Assignee
株式会社ニデック (NIDEK CO., LTD.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニデック (NIDEK CO., LTD.)
Priority to JP2020515457A (granted as patent JP7196908B2)
Priority to PCT/JP2018/017327
Publication of WO2019207800A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14: Arrangements specially adapted for eye photography

Definitions

  • The present disclosure relates to an ophthalmic image processing apparatus that processes ophthalmic images (images of an eye to be examined) and to an ophthalmic image processing program executed by that apparatus.
  • Ophthalmic images are captured by an ophthalmologic photographing apparatus, for example an optical coherence tomography (OCT) device, a fundus camera, or a scanning laser ophthalmoscope (SLO).
  • The inventors considered having an ophthalmic image processing apparatus automatically output diagnostic results for each of a plurality of diseases based on an ophthalmic image.
  • However, when many automatic diagnosis results are presented at once, a user (for example, a doctor) cannot easily determine whether or not to perform a detailed diagnosis of the eye to be examined based on the plurality of automatic diagnosis results.
  • The present disclosure provides an ophthalmic image processing apparatus and an ophthalmic image processing program capable of solving at least one of these problems and generating useful information based on an ophthalmic image.
  • A first aspect of the ophthalmic image processing apparatus provided by an exemplary embodiment of the present disclosure is an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined.
  • The control unit of the apparatus acquires the ophthalmic image captured by an ophthalmic image capturing unit, inputs the ophthalmic image into a mathematical model trained by a machine learning algorithm to obtain an automatic diagnosis result for each of a plurality of diseases in the eye to be examined, and generates reference information, which indicates in stages the degree to which at least one of the diseases is present in the eye to be examined, based on the plurality of automatic diagnosis results for the ophthalmic image.
  • A second aspect of the ophthalmic image processing apparatus provided by an exemplary embodiment of the present disclosure is an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined, comprising: image acquisition means for acquiring the ophthalmic image captured by an ophthalmic image capturing unit; automatic diagnosis result acquisition means for acquiring an automatic diagnosis result for at least one disease in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and image extraction means for, when the range of the ophthalmic image acquired by the image acquisition means is wider than the target range to be input to the mathematical model, extracting the image of the target range as the ophthalmic image to be input to the mathematical model.
  • A first aspect of the ophthalmic image processing program provided by an exemplary embodiment of the present disclosure is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined. When executed by the control unit of the apparatus, the program causes the apparatus to execute: an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit; an automatic diagnosis result acquisition step of acquiring an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and a reference information generation step of generating reference information, which indicates in stages the degree to which at least one of the plurality of diseases is present in the eye to be examined, based on the plurality of automatic diagnosis results for the ophthalmic image.
  • A second aspect of the ophthalmic image processing program provided by an exemplary embodiment of the present disclosure is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined. When executed by the control unit of the apparatus, the program causes the apparatus to execute: an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit; an automatic diagnosis result acquisition step of acquiring an automatic diagnosis result for at least one disease in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and an image extraction step of, when the range of the ophthalmic image acquired in the image acquisition step is wider than the target range to be input to the mathematical model, extracting the image of the target range as the ophthalmic image to be input to the mathematical model.
  • With these configurations, useful information based on the ophthalmic image is generated.
  • FIG. 1 is a block diagram showing a schematic configuration of an ophthalmic image processing system 100.
  • FIG. 2 is a diagram showing the imaging positions of the two-dimensional tomographic images used in automatic diagnosis on a front image of the fundus.
  • The control unit of the ophthalmic image processing apparatus exemplified in the present disclosure acquires an ophthalmic image captured by the ophthalmic image capturing unit.
  • The control unit obtains an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm.
  • The control unit then generates reference information, which indicates in a stepwise manner the degree to which at least one of the plurality of diseases is present in the eye to be examined, based on the plurality of automatic diagnosis results for the ophthalmic image.
  • The determination as to whether or not to make a detailed diagnosis of the eye to be examined is thus easily made based on the reference information. Therefore, according to the ophthalmic image processing apparatus of the present disclosure, useful information based on the ophthalmic image is appropriately generated.
  • The number of stages of the generated reference information is preferably 5 or less, and more preferably 3 or less. With 5 stages or fewer, the user can more easily determine whether or not to make a detailed diagnosis. However, the number of stages may also be 6 or more.
  • For example, the control unit may generate the reference information in 101 steps, from 0% to 100% in 1% increments.
  • The mathematical model trained by the machine learning algorithm may be configured to output, as the automatic diagnosis result, the probability that each disease is present.
  • The control unit may then generate the reference information based on these presence probabilities. In this case, the degree to which at least one of the plurality of diseases is present is reflected in the reference information more accurately.
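The mapping from per-disease presence probabilities to staged reference information described above can be sketched as follows. The disease names and the stage thresholds are illustrative assumptions, not values specified in this disclosure.

```python
# Sketch: converting per-disease presence probabilities output by the
# mathematical model into staged "reference information".
# Thresholds (0.3, 0.7) give a 3-stage scale and are illustrative.

def reference_stage(probabilities, thresholds=(0.3, 0.7)):
    """Return a stage (1..len(thresholds)+1) from the highest disease
    probability; a higher stage means the disease is more likely present."""
    p = max(probabilities.values())
    stage = 1
    for t in thresholds:
        if p >= t:
            stage += 1
    return stage

diag = {"glaucoma": 0.12, "AMD": 0.81, "diabetic_retinopathy": 0.05}
print(reference_stage(diag))  # stage 3: the highest probability is >= 0.7
```

Keeping the number of stages small (here 3) matches the preference stated above, since a coarse scale makes the "detailed diagnosis or not" decision easier for the user.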
  • The ophthalmic image input to the mathematical model (that is, the ophthalmic image used for automatic diagnosis) may be an ophthalmic image acquired by an OCT apparatus.
  • An ophthalmic image acquired by the OCT apparatus (for example, a two-dimensional tomographic image or a three-dimensional tomographic image) contains information on deep tissue in addition to information on the tissue surface. Using such an image for automatic diagnosis therefore further improves the accuracy of the automatic diagnosis.
  • An ophthalmic image captured by an apparatus other than the OCT apparatus (for example, at least one of a fundus camera, a scanning laser ophthalmoscope (SLO), or the like) may also be used.
  • The control unit may accept an input instruction from the user selecting at least one of the plurality of stages of the reference information.
  • The control unit may then extract, from patient data including ophthalmic images of a plurality of subjects or eyes, the patient data corresponding to the reference information at the selected stage. In this case, a plurality of patient data are appropriately managed according to the stage of the generated reference information.
  • The control unit may display a list of the patient data extracted according to the reference information on the display unit.
  • The user can appropriately manage a plurality of patient data by looking at the list displayed on the display unit.
  • The control unit may also rearrange a plurality of patient data according to the reference information (that is, by stage of the reference information) and display the rearranged list on the display unit.
  • The user can then manage the plurality of patient data according to the degree to which the disease is present.
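The extraction and rearrangement of patient data by reference-information stage described above can be sketched as follows. The record layout (a dict with `patient` and `stage` keys) is an illustrative assumption.

```python
# Sketch: extracting patient data at a user-selected stage, and listing
# patient data sorted by stage of the reference information.
records = [
    {"patient": "P001", "stage": 1},
    {"patient": "P002", "stage": 3},
    {"patient": "P003", "stage": 2},
]

def extract_by_stage(records, stage):
    """Patient data whose reference information is at the selected stage."""
    return [r for r in records if r["stage"] == stage]

# List rearranged so the highest degree of disease presence comes first.
listed = sorted(records, key=lambda r: r["stage"], reverse=True)
print([r["patient"] for r in listed])  # ['P002', 'P003', 'P001']
```

Displaying such a sorted list lets the user triage the cases most likely to need a detailed diagnosis first, as the passage above suggests.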
  • The control unit may transmit patient data corresponding to reference information at a specific stage, among patient data including ophthalmic images, to another device via a network.
  • In this case, patient data is appropriately processed according to the stage of the generated reference information.
  • For example, an ophthalmic image processing apparatus may be installed in a health examination facility while the doctor who performs detailed diagnoses uses a device in another facility.
  • The ophthalmic image processing apparatus may then transmit, via the network, the patient data corresponding to reference information at a specific stage (for example, the stage at which the degree of disease presence is highest) to the device used by the doctor.
  • In this case, patient data for which a detailed diagnosis is highly necessary is appropriately transmitted to the device used by the doctor.
  • The control unit may also process patient data by a method other than transmitting it to another device.
  • For example, the control unit may create a report for the patient data corresponding to reference information at a specific stage, among one or more patient data.
  • The report may include, for example, at least one of various patient data (patient name and the like), ophthalmic images, ophthalmic image analysis results, reference information, and so on.
  • The report may be created as data in a specific format (for example, PDF data) or by printing on paper. In this case, a report is appropriately created according to the stage of the generated reference information.
  • The "specific stage" may be set according to an instruction (for example, an operation instruction) input to the ophthalmic image processing apparatus by the user, or may be set in advance.
  • The control unit may transmit the corresponding patient data to another device each time reference information at the specific stage is generated. Alternatively, the control unit may extract the patient data corresponding to reference information at the specific stage from a plurality of patient data and transmit the extracted one or more patient data to another device.
  • The "specific stage" is not limited to one stage and may be a plurality of stages. The control unit may also cause the display unit to display details (for example, past history) of the patient data corresponding to reference information at the specific stage.
  • When only the reference information for one of the left and right eyes of the same subject is at the specific stage, the control unit may also output the patient data of the other eye (for example, the eye with the lower degree of disease) by at least one of transmission to another device, report creation, display on the display unit, and the like.
  • In this case, diagnosis is performed more appropriately.
  • Depending on the type of disease, the control unit may determine whether to output only the patient data of the eye corresponding to the reference information at the specific stage or to output the patient data of both eyes. The control unit may also determine whether to output the patient data of both eyes according to an instruction input by the user.
  • When only the reference information for one of the left and right eyes of the same subject is at the specific stage, the control unit may issue a notification prompting the user to pay attention to the other eye, that is, the eye that does not correspond to the reference information at the specific stage. This notification may be performed according to the type of disease, and whether to notify may be determined according to a setting made by the user.
  • The control unit may use different storage methods for data corresponding to reference information at a specific stage (at least one of patient data and individual ophthalmic image data) and data corresponding to reference information at other stages. For example, for data corresponding to reference information at other stages, the control unit may omit the process of saving the data to the storage device, delete the data from the storage device, or execute a data amount reduction process (for example, conversion to a report file with a small data amount). In this case, the capacity of the storage device is prevented from being consumed unnecessarily, and data is appropriately managed.
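The stage-dependent storage policy described above can be sketched as follows. The record layout and the particular reduction (dropping the bulky image bytes) are illustrative assumptions standing in for the "conversion to a report file with a small data amount" mentioned in the text.

```python
# Sketch: keep full data only for records at the specific stage;
# for other stages, store a reduced record to save storage capacity.

def store(record, specific_stage, storage):
    if record["stage"] == specific_stage:
        storage.append(record)                      # keep full data
    else:
        reduced = {k: record[k] for k in ("patient", "stage")}
        storage.append(reduced)                     # drop the bulky image data

storage = []
store({"patient": "P001", "stage": 3, "image": b"\x00" * 1_000_000},
      specific_stage=3, storage=storage)
store({"patient": "P002", "stage": 1, "image": b"\x00" * 1_000_000},
      specific_stage=3, storage=storage)
print([sorted(r.keys()) for r in storage])
# [['image', 'patient', 'stage'], ['patient', 'stage']]
```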
  • Patient data including the data of one or more ophthalmic images may be stored in the storage means.
  • When one patient data includes a plurality of ophthalmic images, the control unit may generate reference information for each of the plurality of ophthalmic images.
  • When one patient data includes a plurality of ophthalmic images, the control unit may associate with the patient data either the reference information generated for the most recently captured ophthalmic image (hereinafter, "latest reference information") or the reference information at the stage where the degree of disease is highest (hereinafter, "high-stage reference information"). In this case, reference information appropriate to the situation is associated with each patient data.
  • When the latest reference information is associated with the patient data, the patient data is appropriately managed based on the latest state of the eye to be examined.
  • When the high-stage reference information is associated with the patient data, the patient data is managed in consideration of the degree of disease presence in past ophthalmic images.
  • The control unit may set which of the latest reference information and the high-stage reference information is associated with the patient data, according to a selection instruction input by the user.
  • The user can thus associate the appropriate reference information with the patient data in accordance with his or her diagnosis policy.
  • Which of the latest reference information and the high-stage reference information is associated with the patient data may also be set automatically according to various conditions.
  • When extracting, from a plurality of patient data, the patient data corresponding to the reference information at the stage selected by the input instruction, the control unit may refer to the latest reference information or the high-stage reference information associated with each patient data. In addition, when the latest reference information or high-stage reference information associated with patient data is at the specific stage, the control unit may transmit the patient data to another device, create a report, and so on.
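The two association policies described above (latest reference information vs. high-stage reference information) can be sketched as follows. The per-image record layout with `captured` date strings is an illustrative assumption.

```python
# Sketch: choosing which reference information to associate with patient
# data that contains several ophthalmic images captured at different times.
images = [
    {"captured": "2018-01-10", "stage": 2},
    {"captured": "2018-06-03", "stage": 1},
]

# "Latest": the stage of the most recently captured image.
latest_reference = max(images, key=lambda i: i["captured"])["stage"]
# "High-stage": the highest stage across all images, past or present.
high_stage_reference = max(i["stage"] for i in images)

print(latest_reference)      # 1: the eye improved in the newest image
print(high_stage_reference)  # 2: the worst stage ever observed
```

As the passage notes, the two policies serve different purposes: the latest value tracks the current state of the eye, while the high-stage value keeps past disease history visible.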
  • When displaying patient data on the display unit, the control unit may display the reference information corresponding to each of the plurality of ophthalmic images included in the patient data.
  • The user can thus appropriately grasp the reference information generated for each ophthalmic image. For example, when one patient data includes a plurality of ophthalmic images captured at different times, the user can appropriately perform follow-up observation of the eye to be examined in consideration of each piece of reference information.
  • The control unit may accept an instruction from the user selecting whether or not to display the reference information on the display unit.
  • The control unit may switch the reference information between displayed and hidden according to the input instruction. For example, the user may not want to show the reference information to the subject. An experienced user may not want to show the reference information to other users, in order to train inexperienced users. A user may also want to make a diagnosis without referring to the reference information. By switching the reference information between displayed and hidden at will, the user can therefore perform diagnosis and other work more appropriately.
  • The control unit may extract ophthalmic images for which the degree of disease presence indicated by the generated reference information differs from the presence or absence of disease determined by actual diagnosis. The control unit may likewise extract ophthalmic images for which the automatic diagnosis result obtained by the mathematical model differs from the result of the diagnosis actually performed.
  • The extracted ophthalmic images may be provided (for example, transmitted) to the manufacturer of the ophthalmic image processing apparatus. The manufacturer can then improve the accuracy of subsequent automatic diagnosis results or reference information by training the mathematical model with the received ophthalmic images as training data.
  • The control unit may edit the reference information generated for an ophthalmic image according to an instruction input by the user.
  • The user can thus manage the patient data appropriately even when the generated reference information differs from the actual diagnosis result.
  • The control unit may indicate, by highlighting or a comment, that reference information has been edited.
  • The users who can input reference information editing instructions may be restricted by a setting.
  • The control unit may likewise edit the automatic diagnosis result obtained by the mathematical model according to an instruction input by the user.
  • When the range of the acquired ophthalmic image (that is, the ophthalmic image captured by the ophthalmic image capturing unit) is wider than the target range of the ophthalmic image to be input to the mathematical model, the control unit may extract the image of the target range from the acquired ophthalmic image and acquire the automatic diagnosis result by inputting the extracted image into the mathematical model. In this case, a highly accurate automatic diagnosis result is appropriately acquired regardless of the range of the ophthalmic image captured by the ophthalmic image capturing unit.
  • The specific method for extracting the target range from the acquired ophthalmic image can be selected as appropriate.
  • For example, a predetermined range of a two-dimensional tomographic image (B-scan image) captured by the OCT apparatus may be the target range, while the range of the two-dimensional tomographic image actually captured by the OCT apparatus is wider than the target range.
  • In this case, the control unit may extract the predetermined range from the captured two-dimensional tomographic image.
  • Alternatively, a predetermined range of a two-dimensional tomographic image captured by the OCT apparatus may be the target range, while the image actually captured by the OCT apparatus is a map image (three-dimensional OCT data) obtained by scanning the measurement light along each of a plurality of different scan lines.
  • In this case, the control unit may extract the two-dimensional tomographic image to be automatically diagnosed from the captured map image.
  • The target range may also be a map image of a predetermined range captured by the OCT apparatus, while the map image actually captured by the OCT apparatus is wider than the predetermined range.
  • In this case, the control unit may extract the target range from the wide-range captured map image.
  • The target range may also be a predetermined range of a front image of the tissue (for example, the fundus) of the eye to be examined, while the range of the actually captured front image is wider than the target range.
  • In this case, the control unit may extract the target range from the captured front image.
  • The control unit may automatically extract the target range from the acquired ophthalmic image.
  • For example, the control unit may perform image processing on the acquired ophthalmic image and extract the target range based on the result of the image processing.
  • More specifically, the control unit may perform image processing on the acquired ophthalmic image, detect a reference position (for example, the position of the macula on the fundus), and extract the target range based on the reference position.
  • The control unit may also specify the position irradiated with the measurement light in the tissue of the eye to be examined and extract the target range based on the specified position.
  • Alternatively, the control unit may extract the target range based on an instruction input by the user. That is, the user may specify the target range manually.
  • The control unit may execute at least the processes of extracting the image of the target range from the acquired ophthalmic image and inputting the extracted image into the mathematical model, while omitting at least one of the other processes exemplified in the present disclosure (for example, the process of generating reference information).
  • When the automatic diagnosis result is acquired by extracting the image of the target range, the automatic diagnosis result may be a result for each of a plurality of diseases in the eye to be examined, or a result for any single disease in the eye to be examined.
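The target-range extraction around a detected reference position described above can be sketched as follows. The array shape, the reference column, and the crop width in pixels are illustrative assumptions (the disclosure only says the extraction is centered on a reference position such as the macula).

```python
# Sketch: cropping the target range for automatic diagnosis out of a wider
# B-scan, centered on a detected reference position (e.g. the macula).
import numpy as np

def extract_target_range(image, center, half_width_px):
    """Crop a horizontal band of the B-scan around the reference column,
    clamped so the crop stays inside the image."""
    h, w = image.shape
    left = max(0, min(center - half_width_px, w - 2 * half_width_px))
    return image[:, left:left + 2 * half_width_px]

bscan = np.zeros((512, 1024))          # full-width two-dimensional tomogram
cropped = extract_target_range(bscan, center=600, half_width_px=300)
print(cropped.shape)  # (512, 600): only the target range goes to the model
```

Feeding the model a fixed-size crop, rather than whatever range the device happened to scan, is what makes the automatic diagnosis result consistent regardless of the captured range, as the passage above states.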
  • In the present embodiment, a personal computer (hereinafter, "PC") 1 acquires the data of an ophthalmic image of an eye to be examined (hereinafter simply, "ophthalmic image") from the ophthalmic image capturing apparatus 11 and processes the acquired data.
  • The PC 1 functions as the ophthalmic image processing apparatus.
  • However, the device that functions as the ophthalmic image processing apparatus is not limited to the PC 1.
  • For example, the ophthalmic image capturing apparatus 11 may function as the ophthalmic image processing apparatus.
  • A portable terminal such as a tablet or a smartphone may also function as the ophthalmic image processing apparatus.
  • The control units of a plurality of devices (for example, the CPU 3 of the PC 1 and the CPU 13 of the ophthalmic image capturing apparatus 11) may also cooperate to function as the ophthalmic image processing apparatus.
  • The ophthalmic image processing system 100 exemplified in this embodiment includes a plurality of PCs used at different sites (for example, a health check facility and a hospital).
  • FIG. 1 illustrates the PC 1 used at site A, a health check facility, and the PC 21 used at site B, a hospital.
  • The PC 1 is used by users at site A (for example, a laboratory technician and a doctor).
  • The PC 1 includes a control unit 2 that performs various control processes and a communication I/F 5.
  • The control unit 2 includes a CPU 3 serving as a controller and a storage device 4 (memory) capable of storing programs, data, and the like.
  • The storage device 4 stores an ophthalmic image processing program for executing the ophthalmic image processing described later.
  • The communication I/F 5 connects the PC 1 to other devices (for example, the PC 21) via the network 9 (for example, the Internet).
  • The operation unit 7 and the monitor 8 are connected to the PC 1.
  • The operation unit 7 is operated by the user to input various instructions to the PC 1.
  • As the operation unit 7, for example, at least one of a keyboard, a mouse, and a touch panel can be used.
  • A microphone for inputting instructions by voice may be used together with or instead of the operation unit 7.
  • The monitor 8 is an example of a display unit that can display various images.
  • The PC 1 can exchange various data (for example, ophthalmic image data) with the ophthalmic image capturing apparatus 11.
  • The method by which the PC 1 exchanges data with the ophthalmic image capturing apparatus 11 can be selected as appropriate.
  • For example, the PC 1 may exchange data with the ophthalmic image capturing apparatus 11 through at least one of wired communication, wireless communication, a removable storage medium (for example, a USB memory), and the like.
  • As the ophthalmic image capturing apparatus 11, various devices that capture images of the eye to be examined can be used.
  • The ophthalmic image capturing apparatus 11 used in the present embodiment is an OCT apparatus capable of capturing tomographic images of the tissue of the eye to be examined (in the present embodiment, the fundus). Since the automatic diagnosis described later is executed based on an OCT image containing information on deep tissue, the accuracy of the automatic diagnosis is improved.
  • An apparatus other than the OCT apparatus (for example, at least one of a fundus camera, a scanning laser ophthalmoscope (SLO), or the like) may also be used.
  • Images of tissue other than the fundus of the eye to be examined (for example, the anterior segment) may also be captured.
  • The ophthalmic image capturing apparatus 11 includes a control unit 12 that performs various control processes and an ophthalmic image capturing unit 16.
  • The control unit 12 includes a CPU 13 serving as a controller and a storage device 14 (memory) capable of storing programs, data, and the like.
  • The ophthalmic image capturing unit 16 includes the various components necessary for capturing an ophthalmic image of the eye to be examined.
  • Specifically, the ophthalmic image capturing unit 16 includes an OCT light source, a scanning unit for scanning the OCT measurement light, an optical system for irradiating the eye to be examined with the OCT light, and a light-receiving element for receiving the light reflected by the tissue of the eye to be examined.
  • The ophthalmic image capturing unit 16 of the present embodiment also includes a front observation optical system that captures a front image of the tissue of the eye to be examined (in the present embodiment, the fundus).
  • The front image is a two-dimensional image of the tissue viewed from the direction along the optical axis of the OCT measurement light (the front direction).
  • As the front observation optical system, for example, at least one of configurations such as an SLO and a fundus camera can be adopted.
  • The ophthalmic image capturing apparatus 11 may instead acquire three-dimensional OCT data of the tissue and obtain an image of the tissue viewed from the front direction (a so-called en-face image) based on the three-dimensional OCT data. In that case, the front observation optical system may be omitted.
  • The PC 21 is used by users at site B.
  • The PC 21 includes a control unit 22 that performs various control processes and a communication I/F 25.
  • The control unit 22 includes a CPU 23 serving as a controller and a storage device 24 (memory) capable of storing programs, data, and the like.
  • The communication I/F 25 connects the PC 21 to other devices (for example, the PC 1) via the network 9.
  • An operation unit 27 and a monitor 28 are connected to the PC 21.
  • FIG. 2 is a diagram showing the relationship between the front image 30 of the fundus of the eye to be examined and the imaging positions 35H and 35V (that is, the scanning positions of the OCT measurement light) of the two two-dimensional tomographic images used in automatic diagnosis.
  • The front image 30 is captured by the front observation optical system provided in the ophthalmic image capturing unit 16 of the ophthalmic image capturing apparatus 11.
  • The front image 30 shown in FIG. 2 shows fundus tissues such as the optic disc 31, the macula 32, and the fundus blood vessels 33.
  • In the present embodiment, the two-dimensional tomographic images captured at the imaging positions 35H and 35V are used for automatic diagnosis.
  • Each of the imaging positions 35H and 35V is 6 mm in length.
  • The reference position, the length and angle of the imaging positions, the number of images used for automatic diagnosis, and so on can be changed as appropriate.
  • Ophthalmic images other than two-dimensional tomographic images (for example, a front image of the fundus or a three-dimensional tomographic image) may also be used for automatic diagnosis.
  • A plurality of types of ophthalmic images of the same eye to be examined, captured by a plurality of modalities, may also be used for automatic diagnosis.
  • When a plurality of types of ophthalmic images are used for automatic diagnosis, a plurality of automatic diagnosis results may be output by a different algorithm for each of the plurality of ophthalmic images, or a single automatic diagnosis result may be output from the plurality of types of ophthalmic images by one algorithm.
  • the CPU 13 of the ophthalmologic image capturing apparatus 11 performs guide display of two-dimensional tomographic image capturing positions 35H and 35V suitable for automatic diagnosis on the fundus front image 30 before tomographic image capturing. .
  • the CPU 13 detects the reference position, and shows a guide display of the optimum photographing positions 35H and 35V centered on the reference position on the front image 30 displayed on the monitor (not shown).
  • the reference position may be detected by performing image processing or the like on the front image 30, or a position designated by the user may be detected as the reference position.
  • the ophthalmologic image capturing apparatus 11 may automatically capture a two-dimensional tomographic image suitable for automatic diagnosis.
  • the CPU 13 may detect the reference position (the macula 32 in the present embodiment) and capture an ophthalmic image by scanning the OCT measurement light at the optimal imaging positions 35H and 35V centered on the reference position.
  • the ophthalmic image capturing apparatus 11 may capture an ophthalmic image in a wider range than the image range used in the automatic diagnosis described later. In this case, in the present embodiment, an image of the target range for automatic diagnosis is extracted from the wide-range ophthalmic image before the automatic diagnosis is performed.
  • In the present embodiment, an automatic diagnosis result for at least one of a plurality of diseases in the eye to be examined is acquired by using a mathematical model trained by a machine learning algorithm. Further, reference information is generated based on the plurality of acquired automatic diagnosis results. The reference information indicates in a stepwise manner the degree to which at least one of the plurality of diseases is present in the eye to be examined. Furthermore, in this embodiment, various processes such as a process of displaying the reference information on the monitor 8 and a process of extracting patient data according to the reference information are executed. The processing described below is executed by the CPU 3 in accordance with the ophthalmic image processing program stored in the storage device 4.
  • the reference information generation process will be described with reference to FIGS. 3 and 4.
  • reference information is generated for the ophthalmic image.
  • the CPU 3 acquires an ophthalmic image of the eye to be examined (S1).
  • the CPU 3 acquires the ophthalmic image captured by the ophthalmic image capturing unit 16 of the ophthalmic image capturing apparatus 11 from the ophthalmic image capturing apparatus 11.
  • the ophthalmologic image acquisition method can be changed as appropriate.
  • the CPU 13 of the ophthalmic image capturing device 11 may acquire an ophthalmic image stored in the storage device 14.
  • the CPU 3 determines whether or not the range of the ophthalmic image acquired in S1 is wider than the target range of the ophthalmic image input to a mathematical model (details will be described later) used in automatic diagnosis (S2). If the range of the ophthalmic image acquired in S1 is the same as the target range for automatic diagnosis (S2: NO), the process proceeds directly to S4. If the range of the ophthalmic image acquired in S1 is wider than the target range for automatic diagnosis (S2: YES), the CPU 3 extracts an image of the target range for automatic diagnosis from the ophthalmic image acquired in S1 (S3).
  • a predetermined range of two two-dimensional tomographic images (B-scan images) captured by the OCT apparatus is a target range for automatic diagnosis.
  • the CPU 3 extracts a predetermined target range from the two-dimensional tomographic image acquired in S1.
  • When the image acquired in S1 is a map image (three-dimensional OCT data) taken by scanning each of a plurality of different lines with the OCT measurement light, the CPU 3 extracts the two two-dimensional tomographic images that are the target of automatic diagnosis from the map image.
  • The CPU 3 of this embodiment automatically extracts the image of the target range in S3. Specifically, the CPU 3 performs image processing on the acquired ophthalmic image and extracts the image of the target range by detecting a reference position (the position of the macula in the present embodiment). Alternatively, the CPU 3 may acquire information on the position where the measurement light was irradiated in the tissue of the eye to be examined and extract the image of the target range based on the acquired position information. Note that the CPU 3 may also extract the image of the target range based on an instruction input by the user.
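The target-range extraction in S3 can be sketched as a crop centered on the detected reference position. This is a minimal illustration of the idea, not the patented implementation; the function name and the clamping behavior at the scan edges are assumptions:

```python
def extract_target_range(scan_line, center, target_len):
    # Crop `target_len` samples centered on the detected reference position
    # (e.g., the macula), clamping the window so it stays inside the scan.
    start = max(0, min(center - target_len // 2, len(scan_line) - target_len))
    return scan_line[start:start + target_len]

# A 10-sample scan line with the reference position detected at index 5:
crop = extract_target_range(list(range(10)), center=5, target_len=4)
```

For three-dimensional OCT data, the same idea would apply per axis: the two B-scans passing through the reference position are selected from the map image.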
  • the CPU 3 acquires an automatic diagnosis result for the ophthalmologic image (S4).
  • the CPU 3 acquires an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting an ophthalmologic image into a mathematical model trained by a machine learning algorithm (see FIG. 4).
  • the method for acquiring the automatic diagnosis result in this embodiment will be described in detail.
  • As machine learning algorithms, for example, neural networks, random forests, boosting, and support vector machines (SVMs) are generally known.
  • Neural network is a technique that mimics the behavior of biological nerve cell networks.
  • Neural networks include, for example, feedforward (forward propagation) neural networks, RBF (radial basis function) networks, spiking neural networks, convolutional neural networks, recursive neural networks (recurrent neural networks, feedback neural networks, etc.), and probabilistic neural networks (such as Boltzmann machines and Bayesian networks).
  • Random forest is a method of generating a large number of decision trees by learning based on randomly sampled training data.
  • When a random forest is used, the branches of a plurality of decision trees learned in advance as classifiers are traced, and the average (or majority vote) of the results obtained from each decision tree is taken.
  • Boosting is a technique for generating a strong classifier by combining a plurality of weak classifiers.
  • a strong classifier is constructed by sequentially learning simple and weak classifiers.
  • SVM is a method of configuring a two-class pattern classifier using linear input elements. For example, the SVM learns the parameters of the linear input element based on a criterion (hyperplane separation theorem) for obtaining a margin maximizing hyperplane that maximizes the distance to each data point from the training data.
  • a multilayer neural network is used as the machine learning algorithm.
  • the neural network includes an input layer for inputting data, an output layer for generating data to be predicted, and one or more hidden layers between the input layer and the output layer.
  • Each layer includes a plurality of nodes (also referred to as units).
  • a convolutional neural network (CNN), which is a kind of multilayer neural network, is used.
  • A mathematical model refers to, for example, a data structure for predicting the relationship between input data and output data.
  • the mathematical model is constructed by being trained using a training data set.
  • the training data set is a set of training data for input and training data for output.
  • As the training data for input, ophthalmic images of eyes examined in the past (in this embodiment, two two-dimensional tomographic images captured by scanning the OCT measurement light along imaging positions extending in the horizontal and vertical directions around the macula) are used.
  • It is desirable that the target range of the ophthalmic image used for automatic diagnosis match the image range of the input training data as closely as possible.
  • As the training data for output, diagnosis result data such as a disease name and a disease position are used.
  • the mathematical model is trained so that when certain input training data is input, output training data corresponding to the input training data is output. For example, the correlation data (for example, weight) of each input and output is updated by training.
  • When displaying an image (in the present embodiment, for example, at least one of the front image of the fundus and the two-dimensional tomographic image), the CPU 3 executes at least one of displaying the position of the disease, highlighting the diseased part, enlarging the diseased part, and the like. Therefore, the user can easily confirm the position where it is determined that the disease may exist.
  • the CPU 3 changes the display form of various images on the monitor 8 according to the type of disease that is highly likely to exist. For example, in the case of macular diseases such as age-related macular degeneration, central serous chorioretinopathy, and retinal detachment, an abnormality may be seen in the thickness of the retina. Therefore, when it is determined that the possibility of a macular disease is high, the CPU 3 displays on the monitor 8 at least one of a thickness map indicating the retinal thickness distribution when the fundus is viewed from the front, a retinal thickness analysis chart, a comparison image with the retinal thickness of a normal eye, and the like. In the case of diabetic retinopathy, abnormalities may be seen in the fundus blood vessels. Therefore, when it is determined that the possibility of diabetic retinopathy is high, the CPU 3 displays OCT angiography images on the monitor 8.
  • Next, based on the plurality of automatic diagnosis results acquired for each of the plurality of diseases, the CPU 3 generates reference information that indicates in a stepwise manner the degree to which at least one of the diseases exists (S5).
  • the CPU 3 stores the generated reference information in the storage device 4 in association with the ophthalmologic image targeted for automatic diagnosis (S6).
  • reference information is generated in three stages of “warning”, “caution”, and “normal” in descending order of the degree of presence of at least one of the diseases.
  • the number of stages can be six or more.
  • The CPU 3 of the present embodiment generates "warning" reference information when there is even one disease whose existence probability is determined to be equal to or higher than a first threshold (for example, 60%). If the highest existence probability is equal to or higher than a second threshold (for example, 20%) and lower than the first threshold, the CPU 3 generates "caution" reference information. If the existence probabilities of all the diseases are less than the second threshold, the CPU 3 generates "normal" reference information. However, the CPU 3 may generate the reference information by other methods.
  • the CPU 3 may generate the reference information in consideration of the number of diseases whose existence probability exceeds a predetermined threshold. That is, the CPU 3 may generate reference information indicating that the degree of presence of a disease is higher when the number of diseases whose existence probability exceeds a predetermined threshold is greater than when the number of diseases is small.
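The three-stage mapping described above can be sketched as follows. This is a hedged illustration assuming the example thresholds of 60% and 20% given in the text; the function name and the disease labels are hypothetical:

```python
WARNING_THRESHOLD = 0.60  # example value from the text
CAUTION_THRESHOLD = 0.20  # example value from the text

def generate_reference_info(existence_probabilities):
    # Map per-disease existence probabilities (0.0 to 1.0) to a staged label:
    # "warning" > "caution" > "normal", driven by the highest probability.
    top = max(existence_probabilities.values())
    if top >= WARNING_THRESHOLD:
        return "warning"
    if top >= CAUTION_THRESHOLD:
        return "caution"
    return "normal"

stage = generate_reference_info({"AMD": 0.7, "retinal detachment": 0.1})
```

A variant along the lines of the previous paragraph could additionally count how many diseases exceed a threshold and raise the stage when that count is large.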
  • the timing for executing the reference information generation process can be selected as appropriate.
  • the PC 1 can automatically execute a reference information generation process for the acquired ophthalmic image.
  • a reference information generation start button 54 (see FIG. 8) is provided on at least a part of the display screen displayed on the monitor 8 by the PC 1.
  • the user operates the reference information generation start button 54 via the operation unit 7 (for example, a mouse) and inputs an instruction to start generation of reference information, thereby causing generation of reference information for a desired ophthalmic image to start. Therefore, the user can generate reference information for past ophthalmic images and the like for which reference information has not yet been generated.
  • the CPU 13 may execute the reference information generation process in parallel with the imaging of the ophthalmic image. In this case, the CPU 13 may cause the display unit to display the generated reference information together with the ophthalmologic image being captured.
  • the time and accuracy of automatic diagnosis processing using a mathematical model are in a trade-off relationship with each other.
  • high-accuracy automatic diagnosis processing often takes a long time.
  • both an algorithm that can be processed at high speed and an algorithm that can execute high-precision automatic diagnosis are employed.
  • When acquiring an ophthalmic image (S1), the CPU 3 of the present embodiment first performs automatic diagnosis using a high-speed algorithm (S4) and generates reference information (S5). If the generated reference information indicates that the degree of presence of the disease is low (for example, "normal"), the reference information generation processing for the ophthalmic image is terminated.
  • If the generated reference information indicates that the degree of presence of a disease is high (for example, "caution" or "warning"), automatic diagnosis is performed again using a high-precision algorithm (S4), and reference information is generated again (S5).
  • the CPU 3 may perform automatic diagnosis using a high-precision algorithm in the background while performing automatic diagnosis using a high-speed algorithm. In this case, after the reference information is generated in a short time, finally, highly accurate reference information is generated.
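The fast-then-accurate strategy can be sketched as an escalation wrapper. The two model callables and the staging helper below are placeholders for the high-speed and high-precision algorithms, not actual APIs:

```python
def stage_of(probabilities, warn=0.60, caution=0.20):
    # Same staging rule as in the text, with assumed example thresholds.
    top = max(probabilities.values())
    return "warning" if top >= warn else "caution" if top >= caution else "normal"

def diagnose_with_escalation(image, fast_model, accurate_model):
    # Run the high-speed algorithm first; re-run with the high-precision
    # algorithm only when the quick result suggests a disease may be present.
    first = stage_of(fast_model(image))
    if first == "normal":
        return first          # low disease presence: stop here
    return stage_of(accurate_model(image))

result = diagnose_with_escalation(
    None,                                   # placeholder for an ophthalmic image
    fast_model=lambda img: {"amd": 0.25},   # quick screen: borderline result
    accurate_model=lambda img: {"amd": 0.65},
)
```

Running both algorithms concurrently, as the text also suggests, would replace the sequential call with a background task whose result overwrites the quick stage when it arrives.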
  • Patient data is created for each subject or eye to be examined and stored in the storage device 4. Details of the patient data will be described later with reference to FIGS. 6 and 7.
  • One patient data includes one or more ophthalmic images.
  • the PC 1 of the present embodiment can generate reference information for each of the plurality of ophthalmic images.
  • When reference information is generated for each of a plurality of ophthalmic images included in one patient data, the user can set the type of reference information associated with the patient data. Specifically, as shown in FIG. 5, the user can set which of the reference information generated for the ophthalmic image with the latest shooting timing (hereinafter referred to as "latest reference information") and the reference information at the stage where the degree of presence of the disease is highest (hereinafter referred to as "high-level reference information") is associated with the patient data. By associating the latest reference information with the patient data, the user can manage the patient data based on the latest state of the eye to be examined.
  • By associating the high-level reference information with the patient data, the user can manage the patient data in consideration of the degree of past disease.
  • the user opens the corresponding reference information setting screen 38, selects either the latest reference information or the high-level reference information, and operates the "OK" button, whereby the type of reference information associated with the patient data can be set.
  • the CPU 3 associates the selected type of reference information with each patient data.
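The two association policies can be sketched as a selection over the per-image reference information. The record layout, mode names, and severity ordering below are assumptions for illustration:

```python
SEVERITY = {"normal": 0, "caution": 1, "warning": 2}

def patient_reference_info(images, mode):
    # images: list of (capture_date_iso, reference_info) pairs; images whose
    # reference information has not been generated yet are skipped.
    rated = [(date, info) for date, info in images if info in SEVERITY]
    if not rated:
        return None
    if mode == "latest":    # latest reference information
        return max(rated, key=lambda e: e[0])[1]
    if mode == "highest":   # high-level reference information
        return max(rated, key=lambda e: SEVERITY[e[1]])[1]
    raise ValueError(f"unknown mode: {mode}")

images = [("2017-06-20", "warning"), ("2017-09-05", "normal")]
```

With this example history, the "latest" policy reports the most recent state while the "highest" policy preserves the past warning, matching the trade-off described above.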
  • FIG. 6 shows an example of a patient data display screen 40A when the latest reference information is associated with patient data.
  • FIG. 7 shows an example of a patient data display screen 40B in the case where high-level reference information is associated with patient data.
  • FIG. 8 shows an example of the two-dimensional tomographic images 51 and 52 displayed when the thumbnail 47 of June 20, 2017 in FIG. 6 is selected.
  • The patient data display screen is provided with a patient ID display field 41, a patient name display field 42, a gender display field 43, a left and right eye display field 44, and a corresponding reference information display field 45.
  • The left and right eye display field 44 is provided with check boxes for selecting the left eye and the right eye. By checking one of the check boxes, the user can display, among one or more ophthalmic images included in the selected (displayed) patient data, the images of either the left eye or the right eye.
  • In the corresponding reference information display field 45, the type of reference information selected on the corresponding reference information setting screen 38 (see FIG. 5) described above is displayed.
  • For each ophthalmic image (in this embodiment, one set (two) of two-dimensional tomographic images that are the target of reference information generation), a thumbnail 47 of the corresponding front image is displayed.
  • Each thumbnail 47 is provided with a reference information display field 48 and a date display field 49.
  • In the reference information display field 48, the reference information generated for each ophthalmic image is displayed. That is, when displaying patient data on the monitor 8, the CPU 3 can display the reference information corresponding to each of the plurality of ophthalmic images included in the patient data. As a result, follow-up observation of the eye to be examined is facilitated. In the examples shown in FIGS. 6 and 7, "/" in the reference information display field 48 indicates that the reference information has not yet been generated.
  • the date display field 49 displays the date when the ophthalmologic image was taken.
  • When the latest reference information is associated with the patient data, a display is performed to indicate which is the latest ophthalmic image, that is, the ophthalmic image that is the basis of the reference information associated with the entire patient data.
  • the frame of the thumbnail 47M of the latest ophthalmic image is displayed in a different manner from the frame of the other thumbnails 47, thereby indicating which is the latest ophthalmic image.
  • the display method for showing the ophthalmic image on which the reference information is based can be changed as appropriate.
  • For example, the ophthalmic image that is the basis of the reference information may be indicated by changing the display method of the reference information attached to the thumbnail 47M instead of the frame of the thumbnail 47M.
  • a plurality of thumbnails 47 are displayed side by side in the order in which the ophthalmic images were taken. Therefore, the user can easily grasp the progress of the eye to be examined by looking at the patient data display screen 40A.
  • When the high-level reference information is associated with the patient data, a display is performed to indicate which ophthalmic image has the highest degree of disease presence (that is, which ophthalmic image is the basis of the reference information associated with the entire patient data).
  • the frame of the thumbnail 47N of the ophthalmic image with the highest degree of disease presence is displayed in a manner different from the frames of the other thumbnails 47.
  • a plurality of thumbnails 47 are displayed in order of the degree of presence of the disease. Therefore, the user can grasp a plurality of ophthalmologic images according to the degree of presence of the disease.
  • When a thumbnail 47 is selected, the CPU 3 displays on the monitor 8 an ophthalmic image display screen 50 including the ophthalmic image corresponding to the selected thumbnail 47 (in this embodiment, the two-dimensional tomographic images on which the reference information is based).
  • On the ophthalmic image display screen 50, the two two-dimensional tomographic images 51 and 52 that are the basis for generating the reference information are displayed together with the selected thumbnail 47, the reference information 48, and the date 49.
  • the display mode of the ophthalmologic image display screen 50 can be changed. For example, a graph or the like indicating the thickness of a specific layer in the fundus may be mainly displayed as the two-dimensional tomographic images 51 and 52.
  • When the reference information generated for an ophthalmic image differs from the actual diagnosis result, the user can input an instruction to edit the reference information via the operation unit 7 or the like.
  • the CPU 3 edits the selected reference information in accordance with an instruction input by the user. As a result, patient data is managed more appropriately.
  • the CPU 3 sets the display mode of the edited reference information to a display mode different from the unedited reference information. Therefore, the user can easily grasp whether or not the reference information has been edited.
  • the user can limit the users who can edit the reference information to specific users by operating the operation unit 7.
  • the list display screen 60 includes a list display unit 61 and a search condition input unit 62.
  • the list display unit 61 displays a list of various types of information regarding each patient data created for each subject or each eye to be examined. Specifically, in the list display unit 61 of the present embodiment, reference information associated with patient data is displayed in addition to ID, name, age, and gender information included in the patient data. Therefore, the user can manage patient data appropriately based on the reference information.
  • the user can set which of the latest reference information and the high-level reference information is associated with the patient data.
  • the CPU 3 causes the list display unit 61 to display the reference information selected by the user among the latest reference information and the high-level reference information in association with each patient data.
  • the list display screen 60 may include other information (for example, photographing date and time of an ophthalmic image).
  • Various search conditions are input to the search condition input unit 62 when the user wants to search the entire list for desired patient data.
  • the search conditions in this embodiment include reference information in addition to the patient's name, age, gender, and the like.
  • When searching for patient data based on the reference information, the user operates the operation unit 7 to select at least one of "warning", "caution", "normal", and "non-generated" from among the plurality of stages of reference information.
  • the CPU 3 extracts patient data corresponding to the reference information (in the present embodiment, the latest reference information or the high-level reference information) at the stage selected by the input instruction from the patient data.
  • the CPU 3 causes the monitor 8 to display a list of patient data extracted according to the reference information at the selected stage.
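The stage-based search can be sketched as a filter over patient records. The dictionary layout and the convention that a missing entry means "non-generated" are assumptions for illustration:

```python
def filter_patients(patients, selected_stages):
    # Keep records whose associated reference information is at one of the
    # selected stages; records without reference information match only
    # when "non-generated" is among the selected stages.
    def matches(record):
        info = record.get("reference_info")
        if info is None:
            return "non-generated" in selected_stages
        return info in selected_stages
    return [record for record in patients if matches(record)]

patients = [
    {"id": 1, "reference_info": "warning"},
    {"id": 2, "reference_info": "normal"},
    {"id": 3},  # reference information not generated yet
]
```

Sorting the filtered result by shooting date, or grouping it by stage, would then reproduce the two list orderings the text describes.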
  • FIG. 10 shows the list display screen 60 when a search is performed with the reference information "warning" from the state shown in FIG. 9. As described above, according to this embodiment, patient data is appropriately managed based on the reference information.
  • a list of a plurality of patient data is displayed side by side in order of photographing date and time of the ophthalmic image.
  • the CPU 3 may display a list of a plurality of patient data side by side according to the reference information (that is, for each stage of the reference information).
  • the user can input an instruction to select whether to display reference information on the monitor 8 to the PC 1.
  • the selection instruction may be input by operating the operation unit 7 by the user, for example.
  • the CPU 3 switches between displaying and hiding the reference information illustrated in FIGS. 6 to 10 on the monitor 8 in accordance with an instruction input from the user. For example, when an instruction to hide the reference information is input, the CPU 3 hides the reference information displayed in FIGS. 6 to 10. Therefore, the user can perform a task such as diagnosis more appropriately.
  • The patient data output process executed by the CPU 3 will now be described.
  • the CPU 3 determines whether or not an instruction to select a reference information stage has been input (S11).
  • the selection instruction may be input by operating the operation unit 7 by the user, for example.
  • When a selection instruction is input (S11: YES), the CPU 3 extracts, from the patient data, the patient data corresponding to the reference information (in this embodiment, the latest reference information or the high-level reference information) at the stage selected by the input instruction (S12).
  • When only the reference information of one of the left-eye and right-eye patient data of the same subject is at a specific stage (for example, the stage where the degree of disease presence is highest), the CPU 3 may extract the patient data of the other eye of the same subject together with the patient data of the eye corresponding to the reference information at that stage.
  • the user can diagnose the other eye based on the diagnosis result of one eye.
  • the CPU 3 may notify the user that attention should be paid to the other eye.
  • The CPU 3 may determine whether to extract only the patient data of the eye corresponding to the reference information at the specific stage or to extract the patient data of both eyes according to the type of the disease that is highly likely to exist, or according to an instruction from the user.
  • the CPU 3 determines whether or not a patient data transmission instruction has been input (S14).
  • When a transmission instruction is input (S14: YES), the CPU 3 transmits the patient data to another device (for example, the PC 21 at the base B shown in FIG. 1) via the network 9 (S15). If a reference information stage has been selected in S11, the patient data extracted in S12 (that is, the patient data corresponding to the reference information at the selected stage) is transmitted in S15.
  • the report mentioned later may be transmitted to another device instead of patient data or together with patient data.
  • the CPU 3 determines whether or not a report output instruction has been input (S17).
  • When an output instruction is input (S17: YES), the CPU 3 outputs a report of the patient data (S18). If a reference information stage has been selected in S11, a report of the patient data extracted in S12 (that is, the patient data corresponding to the reference information at the selected stage) is output in S18.
  • the report output method may be a method of printing on paper or a method of outputting data in a specific format (for example, PDF data). Needless to say, the information included in the report can be selected as appropriate.
  • the CPU 3 determines whether or not an end instruction has been input (S19). If an end instruction has not been input (S19: NO), the process returns to the determination in S11. When the end instruction is input (S19: YES), the patient data output process ends.
  • The patient data transmitted to the other device in S15 may be the entire patient data including one or more ophthalmic images, or may be a part of the patient data (for example, the latest ophthalmic image). The same applies to the output of the report.
  • the process of outputting patient data (that is, the process of transmitting patient data and the process of outputting reports) is performed in response to the input of an instruction from the user.
  • the timing for outputting patient data can be changed as appropriate.
  • the CPU 3 sequentially generates reference information for each of a plurality of ophthalmic images, and outputs patient data each time reference information at a specific stage (for example, reference information of “warning” and “caution”) is generated. Processing may be performed.
  • the ophthalmologic image capturing apparatus 11 can execute a reference information generation process (see FIG. 3) and a patient data output process (see FIG. 11).
  • the CPU 16 of the ophthalmic image capturing apparatus 11 may generate reference information for the captured ophthalmic image every time an ophthalmic image of the eye to be examined is captured.
  • the CPU 16 may output patient data each time reference information at a specific stage is generated.
  • The CPU 3 may store data corresponding to reference information at a specific stage (at least one of patient data, ophthalmic images included in the patient data, analysis result data, etc.) and data corresponding to reference information at other stages in the storage device 4 in different ways. For example, when data corresponds to reference information other than a specific stage (for example, "warning" and "caution"), the CPU 3 may omit the process of storing at least a part of the data in the storage device 4, or may delete at least a part of the data from the storage device 4. Further, the CPU 3 may execute a process for reducing the amount of data stored in the storage device 4.
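The stage-dependent storage behavior can be sketched as a small policy function. The action names and the choice of which stages keep full data are assumptions for illustration only:

```python
def storage_action(reference_info, full_stages=("warning", "caution")):
    # Keep full data for stages suggesting a disease may be present;
    # otherwise store a reduced record to save space in the storage device.
    if reference_info in full_stages:
        return "store_full"
    return "store_reduced"

action = storage_action("normal")
```

A "store_reduced" action could, for example, drop the raw ophthalmic images while keeping the analysis results, in line with the data-reduction option described above.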
  • the user can input the result of actually diagnosing an ophthalmologic image to the PC 1.
  • the CPU 3 can extract ophthalmic images in which the degree of the disease indicated by the reference information generated for the ophthalmic image is different from the presence or absence of the disease determined by the user by the actual diagnosis. Further, the CPU 3 can extract ophthalmic images in which the automatic diagnosis result obtained by the mathematical model is different from the result of the diagnosis actually performed by the user.
  • the CPU 3 provides (for example, transmits via the network 9) the extracted ophthalmic image to the manufacturer of the ophthalmic image processing apparatus and the ophthalmic image processing program. The manufacturer can improve the accuracy of subsequent automatic diagnosis results or reference information by training a mathematical model using the provided ophthalmic images as training data.
  • the ophthalmologic photographing apparatus 11 can execute the reference information generation process.
  • the CPU 16 of the ophthalmologic photographing apparatus 11 may change the photographing method of the eye to be examined according to at least one of the automatic diagnosis result and the reference information generation result.
  • the CPU 16 first takes ophthalmic images (two two-dimensional tomographic images in the above embodiment) necessary for automatic diagnosis and generation of reference information.
  • Depending on at least one of the automatic diagnosis result and the reference information generation result, the CPU 16 may capture an additional ophthalmic image (for example, by map imaging for capturing a three-dimensional tomographic image of the tissue of the eye to be examined). In this case, a more appropriate image is captured as necessary.
  • the process of acquiring an ophthalmic image in S1 of FIG. 3 is an example of an “image acquisition step”.
  • the process of extracting the image of the target range in S2 and S3 in FIG. 3 is an example of “image extraction step”.
  • the process of acquiring the automatic diagnosis result in S4 of FIG. 3 is an example of “automatic diagnosis result acquisition step”.
  • the process of generating reference information in S5 of FIG. 3 is an example of a “reference information generation step”.
  • the CPU 3 that executes the process of acquiring an ophthalmic image in S1 of FIG. 3 is an example of an "image acquisition means".
  • the CPU 3 that executes the process of extracting the image of the target range in S2 and S3 of FIG. 3 is an example of an "image extraction means".
  • the CPU 3 that executes the process of acquiring the automatic diagnosis result in S4 of FIG. 3 is an example of an "automatic diagnosis result acquisition means".

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An ophthalmic image processing device according to the present invention processes an ophthalmic image of a subject eye. A control unit of the ophthalmic image processing device acquires an ophthalmic image captured by an ophthalmic image imaging unit. The control unit acquires an automatic diagnosis result for each of a plurality of diseases in the subject eye by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm. On the basis of the plurality of automatic diagnosis results for the ophthalmic image, the control unit generates reference information indicating, in stages, the degree to which at least one of the plurality of diseases is present in the subject eye.

Description

Ophthalmic image processing apparatus and ophthalmic image processing program
The present disclosure relates to an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of an eye to be examined, and to an ophthalmic image processing program executed by the ophthalmic image processing apparatus.
Conventionally, various diagnoses of an eye to be examined have been performed based on ophthalmic images obtained by an ophthalmologic photographing apparatus (for example, an optical coherence tomography (OCT) apparatus, a fundus camera, or a scanning laser ophthalmoscope (SLO)).
JP 2015-104581 A
One aspect of the technology in the present disclosure will be described. The inventor of the present application has attempted to have an ophthalmic image processing apparatus automatically output a diagnosis result for each of a plurality of diseases based on an ophthalmic image. Here, because the progression of disease and the like differ from one examined eye to another, it is difficult to output every automatic diagnosis result for each disease accurately. Therefore, when a plurality of automatic diagnosis results for a plurality of diseases are output, a user (for example, a doctor) may not easily be able to judge, based on those results, whether a detailed diagnosis of the eye to be examined should be performed.
Another aspect will be described. When an automatic diagnosis result for an eye to be examined is obtained by inputting an ophthalmic image into a mathematical model trained by a machine learning algorithm, a mathematical model trained with a training data set (input training data and output training data) is used. Here, unless an appropriate ophthalmic image is input to the mathematical model, the accuracy of the automatic diagnosis is difficult to improve, and thus useful information is difficult to generate.
An object of the present disclosure is to provide an ophthalmic image processing apparatus and an ophthalmic image processing program that solve at least one of the above aspects and can generate useful information based on an ophthalmic image.
A first aspect of the ophthalmic image processing apparatus provided by a typical embodiment of the present disclosure is an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined. A control unit of the ophthalmic image processing apparatus acquires the ophthalmic image captured by an ophthalmic image capturing unit, acquires an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm, and generates, based on the plurality of automatic diagnosis results for the ophthalmic image, reference information that indicates in stages the degree to which at least one of the plurality of diseases is present in the eye to be examined.
A second aspect of the ophthalmic image processing apparatus provided by a typical embodiment of the present disclosure is an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined, comprising: image acquisition means for acquiring the ophthalmic image captured by an ophthalmic image capturing unit; automatic diagnosis result acquisition means for acquiring an automatic diagnosis result for at least one disease in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and image extraction means for, when the range of the ophthalmic image acquired by the image acquisition means is wider than the target range of the ophthalmic image to be input to the mathematical model, extracting the image of the target range from the acquired ophthalmic image as the ophthalmic image to be input to the mathematical model.
A first aspect of the ophthalmic image processing program provided by a typical embodiment of the present disclosure is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined. When executed by a control unit of the ophthalmic image processing apparatus, the program causes the apparatus to perform: an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit; an automatic diagnosis result acquisition step of acquiring an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and a reference information generation step of generating, based on the plurality of automatic diagnosis results for the ophthalmic image, reference information that indicates in stages the degree to which at least one of the plurality of diseases is present in the eye to be examined.
A second aspect of the ophthalmic image processing program provided by a typical embodiment of the present disclosure is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined. When executed by a control unit of the ophthalmic image processing apparatus, the program causes the apparatus to perform: an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit; an automatic diagnosis result acquisition step of acquiring an automatic diagnosis result for at least one disease in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and an image extraction step of, when the range of the ophthalmic image acquired in the image acquisition step is wider than the target range of the ophthalmic image to be input to the mathematical model, extracting the image of the target range from the acquired ophthalmic image as the ophthalmic image to be input to the mathematical model.
According to the ophthalmic image processing apparatus and the ophthalmic image processing program of the present disclosure, useful information based on an ophthalmic image is generated.
FIG. 1 is a block diagram showing a schematic configuration of an ophthalmic image processing system 100.
FIG. 2 is a diagram showing a front image 30 of the fundus of an eye to be examined and imaging positions 35H and 35V of two-dimensional tomographic images.
FIG. 3 is a flowchart of the reference information generation process.
FIG. 4 is a diagram showing an example of automatic diagnosis results in the present embodiment.
FIG. 5 is a diagram showing an example of a corresponding reference information setting screen 38.
FIG. 6 is a diagram showing an example of a patient data display screen 40A in a case where the latest reference information is associated with patient data.
FIG. 7 is a diagram showing an example of a patient data display screen 40B in a case where high-stage reference information is associated with patient data.
FIG. 8 is a diagram showing an example of a display screen 50 of a two-dimensional tomographic image.
FIG. 9 is a diagram showing an example of a list display screen 60 displaying the entire list of patient data.
FIG. 10 is a diagram showing an example of a list display screen 60 displaying a list of patient data corresponding to reference information of a selected stage.
FIG. 11 is a flowchart of the patient data output process.
<Overview>
The control unit of the ophthalmic image processing apparatus exemplified in the present disclosure acquires an ophthalmic image captured by an ophthalmic image capturing unit. The control unit acquires an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm. The control unit generates reference information, which indicates in stages the degree to which at least one of the plurality of diseases is present in the eye to be examined, based on the plurality of automatic diagnosis results for the ophthalmic image.
In this case, even without grasping each of the plurality of automatic diagnosis results individually, the user can easily judge, based on the reference information, whether a detailed diagnosis of the eye to be examined should be performed. Therefore, according to the ophthalmic image processing apparatus of the present disclosure, useful information based on an ophthalmic image is appropriately generated.
Note that the number of stages of the reference information to be generated is preferably five or fewer, and more preferably three or fewer. With five or fewer (or three or fewer) stages, the user can more easily judge whether a detailed diagnosis should be performed. However, six or more stages are also possible; for example, the control unit may generate the reference information in 101 stages from 0% to 100%.
The mathematical model trained by the machine learning algorithm (hereinafter sometimes simply referred to as the "mathematical model") may be configured to output the existence probability of each disease as the automatic diagnosis result. The control unit may generate the reference information based on the existence probability of each disease. In this case, the degree to which at least one of the plurality of diseases is present is reflected in the reference information more accurately.
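To illustrate, the mapping from per-disease existence probabilities to staged reference information could be sketched as follows. The thresholds, the three-stage scheme, and the disease names are illustrative assumptions, not values taken from this disclosure:

```python
def generate_reference_info(probabilities, thresholds=(0.3, 0.7)):
    """Reduce per-disease existence probabilities to a staged indicator.

    probabilities: dict mapping disease name -> existence probability
    (0.0-1.0), as output by the trained mathematical model for one
    ophthalmic image. Returns one of three stages based on the highest
    probability among all diseases.
    """
    low, high = thresholds
    p_max = max(probabilities.values())
    if p_max >= high:
        return 3  # high likelihood that at least one disease is present
    if p_max >= low:
        return 2  # intermediate: detailed diagnosis may be warranted
    return 1      # low likelihood for all diseases


result = generate_reference_info(
    {"glaucoma": 0.82, "AMD": 0.10, "diabetic retinopathy": 0.05}
)
print(result)  # 3: the glaucoma probability exceeds the high threshold
```

Because only the maximum probability is used here, a single strongly indicated disease is enough to raise the stage, which matches the idea of indicating the degree to which "at least one" disease is present.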
The ophthalmic image input to the mathematical model (that is, the ophthalmic image used for automatic diagnosis) may be an ophthalmic image acquired by an OCT apparatus. An ophthalmic image acquired by an OCT apparatus (for example, a two-dimensional tomographic image or a three-dimensional tomographic image) contains information on deep portions of the tissue in addition to information on its surface. Therefore, using an ophthalmic image acquired by an OCT apparatus for automatic diagnosis further improves the accuracy of the automatic diagnosis. However, an ophthalmic image captured by an apparatus other than an OCT apparatus (for example, at least one of a fundus camera, a scanning laser ophthalmoscope (SLO), and the like) may also be used for automatic diagnosis.
The control unit may receive an instruction from the user selecting at least one of the plurality of stages of the reference information. The control unit may extract, from patient data including ophthalmic images of a plurality of subjects or examined eyes, the patient data corresponding to the reference information of the stage selected by the input instruction. In this case, a plurality of patient data are appropriately managed according to the stage of the generated reference information.
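A minimal sketch of this extraction, assuming hypothetical record fields (the disclosure does not specify a data layout):

```python
def extract_by_stage(patient_records, selected_stages):
    """Return the patient records whose associated reference information
    is at any of the stages selected by the user."""
    return [r for r in patient_records if r["reference_stage"] in selected_stages]


records = [
    {"patient": "A", "reference_stage": 1},
    {"patient": "B", "reference_stage": 3},
    {"patient": "C", "reference_stage": 2},
]
matched = extract_by_stage(records, {2, 3})
print([r["patient"] for r in matched])  # ['B', 'C']
```

The same predicate can drive the list display: sorting `records` by `reference_stage` before display gives the stage-ordered list described below.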
The control unit may cause a display unit to display a list of the patient data extracted according to the reference information. In this case, the user can appropriately manage a plurality of patient data by viewing the list displayed on the display unit.
Note that the method of displaying the list of patient data can also be changed. For example, the control unit may sort a plurality of patient data according to the reference information (that is, by stage of the reference information) and cause the display unit to display the sorted list. In this case, the user can manage the patient data according to the degree to which a disease is present.
The control unit may transmit, via a network to another device, the patient data corresponding to reference information of a specific stage among patient data including ophthalmic images. In this case, the patient data are appropriately processed according to the stage of the generated reference information. For example, the ophthalmic image processing apparatus according to the present disclosure may be installed in a health examination facility while a doctor who performs detailed diagnoses uses a device at another facility. In this case, the ophthalmic image processing apparatus may transmit the patient data corresponding to reference information of a specific stage (for example, the stage at which the degree of presence of a disease is highest) via the network to the device used by the doctor. As a result, patient data for which a detailed diagnosis is highly necessary are appropriately transmitted to the device used by the doctor.
The control unit may also process patient data by a method other than transmitting them to another device. For example, the control unit may create a report of the patient data corresponding to reference information of a specific stage among one or more patient data. The report may include, for example, at least one of various patient data (such as the patient name), ophthalmic images, analysis results of the ophthalmic images, and the reference information. The report may be created as data in a specific format (for example, PDF data) or by printing on paper. In this case, a report is appropriately created according to the stage of the generated reference information.
Note that the "specific stage" may be set, for example, according to an instruction (such as an operation instruction) input to the ophthalmic image processing apparatus by the user, or may be set in advance. The control unit may transmit the corresponding patient data to another device each time reference information of the specific stage is generated. Alternatively, the control unit may extract, from a plurality of patient data, the patient data corresponding to reference information of the specific stage and transmit the extracted one or more patient data to another device. Needless to say, the "specific stage" is not limited to one stage and may be a plurality of stages. The control unit may also cause the display unit to display details of the patient data (for example, the medical history) corresponding to reference information of the specific stage.
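The dispatch of patient data at the specific stage (transmission to another device, report creation) could be sketched as follows. The field names and the choice of stage 3 as the specific stage are illustrative assumptions; `send` and `make_report` stand in for the network transmission and report creation described above:

```python
SPECIFIC_STAGE = 3  # illustrative: e.g. the stage with the highest degree of disease


def process_patient_data(record, send, make_report):
    """Dispatch one patient record according to its reference stage.

    Records at the specific stage are transmitted (e.g. via the network
    to the doctor's device) and summarized in a report; other records
    are left untouched. Returns True when the record was dispatched.
    """
    if record["reference_stage"] == SPECIFIC_STAGE:
        send(record)         # e.g. transmission to another facility's device
        make_report(record)  # e.g. creation of a PDF report
        return True
    return False


sent, reports = [], []
process_patient_data({"patient": "B", "reference_stage": 3}, sent.append, reports.append)
print(len(sent), len(reports))  # 1 1
```

Passing the two actions as callables keeps the stage check separate from the transport and report formats, which the disclosure deliberately leaves open.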
In addition, when only one of the reference information for the left-eye and right-eye patient data of the same subject is at the specific stage, the control unit may also output the patient data of the other eye (for example, at least one of transmission to another device, report creation, and display on the display unit). For example, glaucoma that develops in one eye tends to develop in the other eye as well. Therefore, when the reference information for one eye is at the specific stage, also outputting the patient data of the other eye (for example, the eye with the lower degree of disease) allows the user to diagnose the other eye in light of the diagnosis result for the first eye. Diagnosis is thus performed more appropriately.
In addition, when only one of the reference information for the left-eye and right-eye patient data of the same subject is at the specific stage, the control unit may decide, according to the type of disease, whether to output only the patient data of the eye corresponding to the specific-stage reference information or the patient data of both eyes. The control unit may also decide whether to output the patient data of both eyes according to an instruction input by the user.
In addition, when only one of the reference information for the left-eye and right-eye patient data of the same subject is at the specific stage, the control unit may notify the user to pay attention (for example, to perform follow-up observation) to the eye other than the eye corresponding to the specific-stage reference information. This notification may be performed according to the type of disease, or whether to perform the notification may be determined according to a setting made by the user.
The control unit may also change how data corresponding to reference information of the specific stage (at least one of the patient data, the data of each ophthalmic image, and the like) and data corresponding to reference information of other stages are saved in a storage device. For example, when the data correspond to reference information of a stage other than the specific stage, the control unit may omit the process of saving the data in the storage device, may delete the data from the storage device, or may execute a process for reducing the amount of the data stored in the storage device (for example, conversion to a report file with a small data size). In this case, pressure on the capacity of the storage device is suppressed, and the data are appropriately managed.
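One possible sketch of such a storage policy, under assumed names: "skip", "delete", and "reduce" correspond to omitting the save, deleting the data, and reducing the data amount (e.g. conversion to a compact report file), respectively:

```python
def storage_action(reference_stage, specific_stages=(3,), policy="reduce"):
    """Decide how to store a data item according to its reference stage.

    Items at a specific stage keep their full data; for all other items
    one of the capacity-saving measures is applied: "skip" (omit saving),
    "delete" (remove from the storage device), or "reduce" (e.g. convert
    to a report file with a small data size).
    """
    if reference_stage in specific_stages:
        return "keep_full"
    if policy in ("skip", "delete", "reduce"):
        return policy
    raise ValueError(f"unknown policy: {policy}")


print(storage_action(3))  # keep_full
print(storage_action(1))  # reduce
```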
Patient data including the data of one or more ophthalmic images may be stored in storage means for each subject or each examined eye. When one set of patient data includes a plurality of ophthalmic images, the control unit may generate reference information for each of the plurality of ophthalmic images. In that case, the control unit may associate with each set of patient data either the reference information generated for the most recently captured ophthalmic image (hereinafter, "latest reference information") or the reference information of the stage at which the degree of presence of disease is highest (hereinafter, "high-stage reference information"). In this case, reference information appropriate to the situation is associated with each set of patient data. For example, when the latest reference information is associated with the patient data, the patient data are appropriately managed based on the latest state of the examined eye. When the high-stage reference information is associated with the patient data, the patient data are managed in consideration of the degree of presence of disease in past ophthalmic images as well.
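The choice between the two associations could be sketched as follows; the tuple layout is an assumption, and ISO-format date strings are used so that lexicographic comparison matches chronological order:

```python
def representative_reference(images, mode="latest"):
    """Choose the reference information to associate with one patient
    record that contains several ophthalmic images.

    images: list of (capture_time, reference_stage) tuples, one per image.
    mode "latest"  -> stage generated for the most recently captured image,
    mode "highest" -> highest stage among all images of the record.
    """
    if mode == "latest":
        return max(images, key=lambda t: t[0])[1]
    if mode == "highest":
        return max(stage for _, stage in images)
    raise ValueError(f"unknown mode: {mode}")


imgs = [("2018-01-10", 3), ("2018-04-20", 1)]
print(representative_reference(imgs, "latest"))   # 1
print(representative_reference(imgs, "highest"))  # 3
```

The example shows why the choice matters: for the same record, "latest" reflects the current state of the eye (stage 1), while "highest" preserves the stage-3 finding from the older image.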
Note that the control unit may set which of the latest reference information and the high-stage reference information is associated with the patient data according to a selection instruction input by the user. In this case, the user can associate appropriate reference information with the patient data in accordance with, for example, his or her own diagnostic policy. Which of the two is associated with the patient data may also be set automatically according to various conditions.
In addition, when extracting from a plurality of patient data the patient data corresponding to the reference information of the stage selected by the input instruction, the control unit may perform the extraction by referring to the latest reference information or the high-stage reference information associated with each set of patient data. The control unit may also transmit the patient data to another device, create a report, or the like when the latest reference information or the high-stage reference information associated with the patient data is at the specific stage.
When displaying patient data on the display unit, the control unit may cause the display unit to display the reference information corresponding to each of the plurality of ophthalmic images included in the patient data. In this case, the user can appropriately grasp the reference information generated for each ophthalmic image. For example, when a plurality of ophthalmic images captured at different times are included in one set of patient data, the user can appropriately perform follow-up observation of the examined eye in consideration of each piece of reference information.
The control unit may receive an instruction from the user selecting whether to display the reference information on the display unit, and may switch the reference information between displayed and hidden according to the input instruction. For example, the user may not want to show the reference information to the subject. An experienced user may not want to show the reference information to other users in order to train inexperienced users. The user may also want to perform diagnosis without referring to the reference information. Therefore, by switching the reference information between displayed and hidden at will, the user can perform various tasks such as diagnosis more appropriately.
The control unit may extract ophthalmic images for which the degree of presence of disease indicated by the generated reference information differs from the presence or absence of disease determined by actual diagnosis. The control unit may also extract ophthalmic images for which the automatic diagnosis result obtained by the mathematical model differs from the actually performed diagnosis. The extracted ophthalmic images may be provided (for example, transmitted) to the manufacturer of the ophthalmic image processing apparatus. In this case, the manufacturer can improve the accuracy of subsequent automatic diagnosis results or reference information by training the mathematical model with the received ophthalmic images as training data.
The control unit may edit the reference information generated for an ophthalmic image according to an instruction input by the user. In this case, the user can appropriately manage the patient data even when the content of the generated reference information differs from the actual diagnosis result. Note that, when displaying edited reference information on the display unit, the control unit may indicate that it has been edited, for example by highlighting or a comment. The users who can input an instruction to edit the reference information may be restricted by a setting. The control unit may also edit the automatic diagnosis result obtained by the mathematical model according to an instruction input by the user.
When the range of the acquired ophthalmic image (that is, the ophthalmic image captured by the ophthalmic image capturing unit) is wider than the target range of the ophthalmic image to be input to the mathematical model, the control unit may extract the image of the target range from the acquired ophthalmic image and acquire the automatic diagnosis result by inputting the extracted image into the mathematical model. In this case, a highly accurate automatic diagnosis result is appropriately acquired regardless of the range of the ophthalmic image captured by the ophthalmic image capturing unit.
Note that the specific method for extracting the target range from the acquired ophthalmic image can be selected as appropriate. For example, the target range may be a predetermined range of a two-dimensional tomographic image (B-scan image) captured by an OCT apparatus, and the range of the two-dimensional tomographic image actually captured may be wider than the target range. In this case, the control unit may extract the predetermined range from the captured two-dimensional tomographic image. Alternatively, the target range may be a predetermined range of a two-dimensional tomographic image captured by an OCT apparatus, while the image actually captured is a map image (three-dimensional OCT data) captured by scanning measurement light along each of a plurality of different scan lines. In this case, the control unit may extract the two-dimensional tomographic image to be automatically diagnosed from the captured map image. The target range may also be a map image of a predetermined range captured by an OCT apparatus, and the map image actually captured may be wider than the predetermined range. In this case, the control unit may extract the target range from the wide captured map image. Further, the target range may be a predetermined range of a front image of the tissue of the eye to be examined (for example, the fundus), and the range of the actually captured front image may be wider than the target range. In this case, the control unit may extract the target range from the captured front image.
 The control unit may automatically extract the target range from the acquired ophthalmic image. In this case, the control unit may perform image processing on the acquired ophthalmic image and extract the target range based on the result of the image processing. For example, the control unit may perform image processing on the acquired ophthalmic image, detect a reference position (for example, the position of the macula on the fundus), and extract the target range based on the reference position. When the ophthalmic image has been captured by an OCT apparatus, the control unit may identify the position on the tissue of the eye to be examined that was irradiated with the measurement light, and extract the target range based on the identified position. The control unit may also extract the target range based on an instruction input by the user; that is, the user may specify the target range manually.
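As a concrete illustration of extracting a target range around a detected reference position, the following sketch crops a fixed-width window of A-scan columns from a B-scan represented as a NumPy array, clamping the window at the image borders. The function name, array sizes, and clamping behavior are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def extract_target_range(bscan: np.ndarray, center_col: int, width: int) -> np.ndarray:
    """Crop a fixed-width window of columns (A-scans) centered on a detected
    reference column (e.g. the macula), clamped so it stays inside the image."""
    half = width // 2
    start = max(0, min(center_col - half, bscan.shape[1] - width))
    return bscan[:, start:start + width]

# Hypothetical 496 x 1024 B-scan; the model is assumed to expect 512 columns.
bscan = np.zeros((496, 1024))
cropped = extract_target_range(bscan, center_col=700, width=512)  # shape (496, 512)
```

Clamping keeps the extracted range valid even when the detected reference position lies near the edge of the captured image.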
 When the control unit executes the process of extracting an image of the target range from the acquired ophthalmic image and inputting the extracted image into the mathematical model, it may omit at least one of the other processes exemplified in the present disclosure (for example, the process of generating reference information). When an automatic diagnosis result is acquired by extracting an image of the target range, the automatic diagnosis result may be an automatic diagnosis result for each of a plurality of diseases in the eye to be examined, or an automatic diagnosis result for any one of the diseases in the eye to be examined.
<Embodiment>
(System configuration)
 Hereinafter, one typical embodiment of the present disclosure will be described with reference to the drawings. As an example, in this embodiment a personal computer (hereinafter, "PC") 1 acquires data of an ophthalmic image of an eye to be examined (hereinafter simply, "ophthalmic image") from an ophthalmic image capturing apparatus 11 and performs various processes on the acquired ophthalmic image. That is, in this embodiment the PC 1 functions as an ophthalmic image processing apparatus. However, the device that functions as the ophthalmic image processing apparatus is not limited to the PC 1. For example, the ophthalmic image capturing apparatus 11 may function as the ophthalmic image processing apparatus, and a portable terminal such as a tablet or a smartphone may also function as the ophthalmic image processing apparatus. The control units of a plurality of devices (for example, the CPU 3 of the PC 1 and the CPU 13 of the ophthalmic image capturing apparatus 11) may also cooperate to perform the various image processes.
 As shown in FIG. 1, the ophthalmic image processing system 100 exemplified in this embodiment includes a plurality of PCs used at respective sites (for example, health checkup facilities, hospitals, and the like). FIG. 1 illustrates a PC 1 used at site A, a health checkup facility, and a PC 21 used at site B, a hospital.
 The PC 1 is used by users at site A (for example, laboratory technicians and doctors). The PC 1 includes a control unit 2 that performs various control processes and a communication I/F 5. The control unit 2 includes a CPU 3, which is a controller in charge of control, and a storage device 4 capable of storing programs, data, and the like. The storage device 4 stores an ophthalmic image processing program for executing the ophthalmic image processing described later. The communication I/F 5 connects the PC 1 to other devices (for example, the PC 21) via a network 9 (for example, the Internet).
 The PC 1 is connected to an operation unit 7 and a monitor 8. The operation unit 7 is operated by the user to input various instructions to the PC 1. For the operation unit 7, at least one of a keyboard, a mouse, a touch panel, and the like can be used, for example. A microphone or the like for inputting various instructions may be used together with, or instead of, the operation unit 7. The monitor 8 is an example of a display unit capable of displaying various images.
 The PC 1 can exchange various data (for example, ophthalmic image data) with the ophthalmic image capturing apparatus 11. The method by which the PC 1 exchanges data with the ophthalmic image capturing apparatus 11 can be selected as appropriate. For example, the PC 1 may exchange data with the ophthalmic image capturing apparatus 11 through at least one of wired communication, wireless communication, a removable storage medium (for example, a USB memory), and the like.
 Various apparatuses that capture an image of the eye to be examined can be used as the ophthalmic image capturing apparatus 11. As an example, the ophthalmic image capturing apparatus 11 used in this embodiment is an OCT apparatus capable of capturing tomographic images and the like of the tissue of the eye to be examined (the fundus in this embodiment). Accordingly, the automatic diagnosis described later is executed based on OCT images that include information on the deep parts of the tissue, which improves the accuracy of the automatic diagnosis. However, an apparatus other than an OCT apparatus (for example, at least one of a fundus camera, a scanning laser ophthalmoscope (SLO), and the like) may be used. An image of tissue other than the fundus of the eye to be examined (for example, the anterior segment) may also be captured by the ophthalmic image capturing apparatus 11.
 The ophthalmic image capturing apparatus 11 includes a control unit 12 that performs various control processes and an ophthalmic image capturing unit 16. The control unit 12 includes a CPU 13, which is a controller in charge of control, and a storage device 14 capable of storing programs, data, and the like.
 The ophthalmic image capturing unit 16 includes the various components necessary for capturing an ophthalmic image of the eye to be examined. For example, when an OCT apparatus is used as the ophthalmic image capturing apparatus 11, the ophthalmic image capturing unit 16 includes an OCT light source, a scanning unit for scanning the OCT light, an optical system for irradiating the eye to be examined with the OCT light, a light receiving element that receives the light reflected by the tissue of the eye, and the like. The ophthalmic image capturing unit 16 of this embodiment also includes a front observation optical system that captures a front image of the tissue of the eye to be examined (the fundus in this embodiment). The front image is a two-dimensional image of the tissue viewed from the direction along the optical axis of the OCT measurement light (the front direction). For the front observation optical system, at least one of the configurations of an SLO, a fundus camera, and the like can be adopted, for example. Alternatively, the ophthalmic image capturing apparatus 11 may acquire three-dimensional OCT data of the tissue and obtain an image of the tissue viewed from the front direction (a so-called "en-face image") based on the three-dimensional OCT data. In this case, the front observation optical system may be omitted.
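One common way to obtain the en-face image mentioned above is to project the three-dimensional OCT volume along the depth axis. The sketch below uses a simple mean projection; the axis ordering and volume dimensions are assumptions for illustration.

```python
import numpy as np

def enface_projection(volume: np.ndarray) -> np.ndarray:
    """Average a 3D OCT volume (scan lines x depth x width) along the
    depth axis to obtain a front-view ("en-face") image."""
    return volume.mean(axis=1)

volume = np.random.rand(128, 496, 512)  # hypothetical three-dimensional OCT data
enface = enface_projection(volume)      # 2D front-view image, shape (128, 512)
```

Other projections (maximum intensity, or averaging only over a selected retinal layer) are also common; the mean over the full depth is just the simplest variant.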
 The PC 21 is used by users at site B. The PC 21 includes a control unit 22 that performs various control processes and a communication I/F 25. The control unit 22 includes a CPU 23, which is a controller in charge of control, and a storage device 24 capable of storing programs, data, and the like. The communication I/F 25 connects the PC 21 to other devices (for example, the PC 1) via the network 9. An operation unit 27 and a monitor 28 are connected to the PC 21.
(Ophthalmological imaging)
 With reference to FIG. 2, an example of a method for capturing the ophthalmic images used in the automatic diagnosis of this embodiment (described in detail later) will be described. FIG. 2 shows the relationship between a front image 30 of the fundus of the eye to be examined and the capture positions (that is, the scanning positions of the OCT measurement light) 35H and 35V of the two two-dimensional tomographic images used in the automatic diagnosis. In this embodiment, the front image 30 is captured by the front observation optical system provided in the ophthalmic image capturing unit 16 of the ophthalmic image capturing apparatus 11. The front image 30 shown in FIG. 2 shows fundus tissues such as the optic disc 31, the macula 32, and fundus blood vessels 33.
 As an example, in this embodiment a two-dimensional tomographic image captured at a capture position 35H extending in the horizontal direction through the reference position (the macula 32 in this embodiment) and a two-dimensional tomographic image captured at a capture position 35V extending in the vertical direction through the reference position are used for the automatic diagnosis. Each of the capture positions 35H and 35V is 6 mm long. However, the reference position, the length and angle of the capture positions, the number of images used for automatic diagnosis, and the like can be changed as appropriate. Ophthalmic images other than two-dimensional tomographic images (for example, a front image of the fundus, a three-dimensional tomographic image, and the like) may also be used for automatic diagnosis, and a plurality of types of ophthalmic images of the same eye, captured by each of a plurality of modalities, may be used. When a plurality of types of ophthalmic images are used for automatic diagnosis, a plurality of automatic diagnosis results may be output by a different algorithm for each of the images, or a single algorithm may output one automatic diagnosis result from the plurality of types of ophthalmic images.
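Geometrically, the two capture positions are a horizontal and a vertical segment of the stated length centered on the reference position. The sketch below computes their endpoints in millimeter coordinates with the macula at the origin; the coordinate convention and function name are assumptions for illustration.

```python
def scan_lines(center, length_mm=6.0):
    """Endpoints of a horizontal (35H-like) and a vertical (35V-like)
    scan line of the given length, centered on the reference position."""
    cx, cy = center
    half = length_mm / 2.0
    horizontal = ((cx - half, cy), (cx + half, cy))
    vertical = ((cx, cy - half), (cx, cy + half))
    return horizontal, vertical

h_line, v_line = scan_lines((0.0, 0.0))  # macula placed at the origin
```

Changing `length_mm` or the center point corresponds to the adjustable parameters (length, reference position) described above.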
 Before a tomographic image is captured, the CPU 13 of the ophthalmic image capturing apparatus 11 of this embodiment displays guides for the two-dimensional tomographic image capture positions 35H and 35V suitable for automatic diagnosis on the front image 30 of the fundus. Specifically, the CPU 13 detects the reference position and shows guide displays of the optimal capture positions 35H and 35V, centered on the reference position, on the front image 30 displayed on a monitor (not shown). As a result, ophthalmic images suitable for automatic diagnosis are easily captured. The reference position may be detected by performing image processing or the like on the front image 30, or a position designated by the user may be used as the reference position.
 The ophthalmic image capturing apparatus 11 may also automatically capture two-dimensional tomographic images suitable for automatic diagnosis. In this case, the CPU 13 may detect the reference position (the macula 32 in this embodiment) and capture the ophthalmic images by scanning the OCT measurement light along the optimal capture positions 35H and 35V centered on the reference position.
 The ophthalmic image capturing apparatus 11 may capture ophthalmic images over a range wider than the image range used in the automatic diagnosis described later. In this case, in this embodiment, an image of the target range for automatic diagnosis is extracted from the wide-range ophthalmic image before the automatic diagnosis is performed.
(Ophthalmological image processing)
 Hereinafter, the ophthalmic image processing of this embodiment will be described in detail. In the ophthalmic image processing of this embodiment, a mathematical model trained by a machine learning algorithm is used to acquire automatic diagnosis results for at least one of a plurality of diseases in the eye to be examined (for each of the plurality of diseases in this embodiment). Reference information is then generated based on the plurality of acquired automatic diagnosis results. The reference information is information that indicates, in stages, the degree to which at least one of the plurality of diseases is present in the eye to be examined. Furthermore, in this embodiment, various processes are executed, such as a process of displaying the reference information on the monitor 8 and a process of extracting patient data according to the reference information. The processes described below are executed by the CPU 3 in accordance with the ophthalmic image processing program stored in the storage device 4.
(Reference information generation process)
 The reference information generation process will be described with reference to FIGS. 3 and 4. In the reference information generation process, reference information is generated for an ophthalmic image. First, the CPU 3 acquires an ophthalmic image of the eye to be examined (S1). In this embodiment, the CPU 3 acquires, from the ophthalmic image capturing apparatus 11, an ophthalmic image captured by the ophthalmic image capturing unit 16 of the ophthalmic image capturing apparatus 11. However, the method of acquiring the ophthalmic image can be changed as appropriate. For example, when the ophthalmic image capturing apparatus 11 executes the reference information generation process, the CPU 13 of the ophthalmic image capturing apparatus 11 may acquire an ophthalmic image stored in the storage device 14.
 Next, the CPU 3 determines whether the range of the ophthalmic image acquired in S1 is wider than the target range of the ophthalmic image to be input to the mathematical model (described in detail later) used in the automatic diagnosis (S2). If the range of the ophthalmic image acquired in S1 is the same as the target range of the automatic diagnosis (S2: NO), the process proceeds directly to S4. If the range of the ophthalmic image acquired in S1 is wider than the target range of the automatic diagnosis (S2: YES), the CPU 3 extracts an image of the target range of the automatic diagnosis from the ophthalmic image acquired in S1 (S3). As described in detail later, in this embodiment, predetermined ranges of two two-dimensional tomographic images (B-scan images) captured by the OCT apparatus serve as the target range of the automatic diagnosis. For example, if the range of the two-dimensional tomographic image acquired in S1 is wider than the target range, the CPU 3 extracts the predetermined target range from the two-dimensional tomographic image acquired in S1. If the image acquired in S1 is a map image (three-dimensional OCT data) captured by scanning the OCT measurement light along each of a plurality of different lines, the CPU 3 extracts the two two-dimensional tomographic images to be used for the automatic diagnosis from the map image.
 In S3, the CPU 3 of this embodiment extracts the image of the target range automatically. Specifically, the CPU 3 performs image processing on the acquired ophthalmic image and extracts the image of the target range by detecting the reference position (the position of the macula in this embodiment). The CPU 3 may also acquire information on the position on the tissue of the eye to be examined that was irradiated with the measurement light, and extract the image of the target range based on the acquired position information. Alternatively, the CPU 3 may extract the image of the target range based on an instruction input by the user.
 Next, the CPU 3 acquires an automatic diagnosis result for the ophthalmic image (S4). In this embodiment, the CPU 3 acquires an automatic diagnosis result for each of the plurality of diseases in the eye to be examined by inputting the ophthalmic image into the mathematical model trained by the machine learning algorithm (see FIG. 4).
 The method of acquiring automatic diagnosis results in this embodiment will be described in detail. Commonly known machine learning algorithms include, for example, neural networks, random forests, boosting, and support vector machines (SVM).
 A neural network is a technique that imitates the behavior of biological neural networks. Neural networks include, for example, feedforward neural networks, RBF (radial basis function) networks, spiking neural networks, convolutional neural networks, recurrent neural networks (recurrent neural nets, feedback neural nets, and the like), and stochastic neural networks (Boltzmann machines, Bayesian networks, and the like).
 A random forest is a method that learns from randomly sampled training data to generate a large number of decision trees. When a random forest is used, the branches of the plurality of decision trees trained in advance as classifiers are traversed, and the average (or majority vote) of the results obtained from the decision trees is taken.
 Boosting is a technique that generates a strong classifier by combining a plurality of weak classifiers. A strong classifier is constructed by sequentially training simple, weak classifiers.
 An SVM is a technique that constructs a two-class pattern classifier using linear input elements. The SVM learns the parameters of the linear input elements from the training data using, for example, the criterion of finding the margin-maximizing hyperplane that maximizes the distance to each data point (the hyperplane separation theorem).
 In this embodiment, a multilayer neural network is used as the machine learning algorithm. A neural network includes an input layer for inputting data, an output layer for generating the data to be predicted, and one or more hidden layers between the input layer and the output layer. A plurality of nodes (also called units) are arranged in each layer. Specifically, this embodiment uses a convolutional neural network (CNN), which is a type of multilayer neural network.
 A mathematical model refers, for example, to a data structure for predicting the relationship between input data and output data. The mathematical model is constructed by being trained on a training data set. A training data set is a set of input training data and output training data. The input training data consists of ophthalmic images of examined eyes captured in the past (in this embodiment, two two-dimensional tomographic images captured by scanning the OCT measurement light along capture positions extending horizontally and vertically through the macula). To improve the accuracy of the automatic diagnosis, it is desirable that the target range of the ophthalmic images used for automatic diagnosis match the image range of the input training data as closely as possible. The output training data consists of diagnosis result data such as disease names and disease positions. The mathematical model is trained so that when a given piece of input training data is input, the corresponding output training data is output. For example, training updates the correlation data (for example, weights) between each input and output.
 The CPU 3 inputs the ophthalmic image into the mathematical model. As a result, as shown in FIG. 4, an automatic diagnosis result is output for each of the plurality of diseases (disease A to disease I in the example of FIG. 4). In this embodiment, the probability that each disease is present and the position of the disease (specifically, the position judged to possibly contain the disease) are output. When an image of the target range has been extracted in S3, the CPU 3 inputs the ophthalmic image extracted in S3 into the mathematical model. As a result, the accuracy of the automatic diagnosis is improved.
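Since each of diseases A to I receives its own presence probability, the model's output behaves like a multi-label classifier: one independent probability per disease rather than a single exclusive class. The schematic below shows only such an output stage, with a hypothetical feature vector and random weights standing in for a trained CNN; none of the names or sizes come from the disclosure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_disease_probabilities(features, weights, bias):
    """Schematic final layer of a multi-label classifier: one independent
    sigmoid output per disease, so the probabilities need not sum to 1."""
    return sigmoid(features @ weights + bias)

rng = np.random.default_rng(0)
features = rng.standard_normal(64)      # hypothetical CNN feature vector
weights = rng.standard_normal((64, 9))  # 9 outputs, one per disease A..I
bias = np.zeros(9)
probs = predict_disease_probabilities(features, weights, bias)  # shape (9,)
```

Using independent sigmoids (rather than a softmax) matches the described behavior, where several diseases can each have a high presence probability at the same time.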
 When the position of a disease is detected by the automatic diagnosis, the CPU 3, when displaying an image (for example, at least one of the front image of the fundus and the two-dimensional tomographic image in this embodiment), performs at least one of displaying the position of the disease, highlighting the diseased part, magnifying the diseased part, and the like. The user can therefore easily confirm the position judged to possibly contain the disease.
 When the probability that a specific disease is present is determined to be equal to or greater than a threshold, the CPU 3 changes the display form of the various images on the monitor 8 according to the type of disease that is likely to be present. For example, in the case of macular diseases such as age-related macular degeneration, central serous chorioretinopathy, and retinal detachment, abnormalities may be seen in the thickness of the retina. Therefore, when it is determined that a macular disease is likely to be present, the CPU 3 displays on the monitor 8 at least one of a thickness map showing the distribution of retinal thickness when the fundus is viewed from the front, a retinal thickness analysis chart, a comparison image against the retinal thickness of a normal eye, and the like. In the case of diabetic retinopathy, abnormalities may be seen in the fundus blood vessels. Therefore, when it is determined that diabetic retinopathy is likely, the CPU 3 displays OCT angiography on the monitor 8.
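This disease-dependent choice of views can be organized as a simple dispatch from the likely disease to a list of displays to show. All disease and view identifiers below are hypothetical names chosen for illustration, not identifiers from the disclosure.

```python
# Diseases for which retinal-thickness views are shown (hypothetical names).
MACULAR_DISEASES = {
    "age_related_macular_degeneration",
    "central_serous_chorioretinopathy",
    "retinal_detachment",
}

def views_for(disease: str, probability: float, threshold: float = 0.6) -> list:
    """Pick extra views to show on the monitor when a disease is likely."""
    if probability < threshold:
        return []  # below threshold: no extra views
    if disease in MACULAR_DISEASES:
        return ["retinal_thickness_map", "thickness_analysis_chart",
                "normal_eye_comparison"]
    if disease == "diabetic_retinopathy":
        return ["oct_angiography"]
    return []
```

A table-driven dispatch like this keeps the mapping from disease type to display form in one place, so new diseases or views can be added without changing the display logic.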
 Returning to the description of FIG. 3, the CPU 3 next generates, based on the plurality of automatic diagnosis results obtained for the plurality of diseases, reference information that indicates in stages the degree to which at least one of the diseases is present (S5). The CPU 3 stores the generated reference information in the storage device 4 in association with the ophthalmic image that was the subject of the automatic diagnosis (S6).
 As an example, in this embodiment, the reference information is generated in three levels, "warning", "caution", and "normal", in descending order of the degree to which at least one of the diseases is present. By keeping the number of levels to five or fewer (more desirably three or fewer), the user can more easily judge whether a detailed diagnosis should be performed. However, six or more levels are also possible.
 The specific method of generating the reference information based on the plurality of automatic diagnosis results can be selected as appropriate. As an example, as shown in FIG. 4, the CPU 3 of this embodiment generates the "warning" reference information when even one disease is determined to have a presence probability equal to or greater than a threshold α (for example, 60%). If the highest presence probability is equal to or greater than a threshold β (for example, 20%) but less than the threshold α, the CPU 3 generates the "caution" reference information. If the presence probabilities of all the diseases are less than the threshold β, the CPU 3 generates the "normal" reference information. However, the CPU 3 may generate the reference information by other methods. For example, the CPU 3 may generate the reference information taking into account the number of diseases whose presence probability exceeds a predetermined threshold; that is, when the number of diseases whose presence probability exceeds the predetermined threshold is large, the CPU 3 may generate reference information indicating a higher degree of disease presence than when that number is small.
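The three-level rule with thresholds α and β can be sketched directly from the example values given above (α = 60%, β = 20%). The function name is an assumption; the thresholds and labels follow the description.

```python
def reference_grade(probabilities, alpha=0.60, beta=0.20):
    """Map per-disease presence probabilities to three-level reference
    information: "warning" if any probability is at least alpha,
    "caution" if the maximum lies in [beta, alpha), otherwise "normal"."""
    top = max(probabilities)
    if top >= alpha:
        return "warning"
    if top >= beta:
        return "caution"
    return "normal"

grade = reference_grade([0.05, 0.72, 0.10])  # one disease above alpha -> "warning"
```

Only the maximum probability matters under this rule; the variant that also counts how many diseases exceed a threshold would replace `max` with a count-based condition.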
 The timing at which the reference information generation process is executed can also be selected as appropriate. When the PC 1 of this embodiment acquires an ophthalmic image from the ophthalmic image capturing apparatus 11, it can automatically execute the reference information generation process for the acquired image. A reference information generation start button 54 (see FIG. 8) is also provided on at least part of the display screen that the PC 1 displays on the monitor 8. By operating the reference information generation start button 54 via the operation unit 7 (for example, a mouse) and inputting an instruction to start generating reference information, the user can start the generation of reference information for a desired ophthalmic image. The user can therefore also generate reference information for past ophthalmic images and the like for which reference information has not yet been generated. When the ophthalmic image capturing apparatus 11 executes the reference information generation process, the CPU 13 may execute it in parallel with the capturing of ophthalmic images. In this case, the CPU 13 may display the generated reference information on the display unit together with the ophthalmic image being captured.
 The time and accuracy of automatic diagnosis processing using a mathematical model are in a trade-off relationship: highly accurate automatic diagnosis often takes a long time. The present embodiment therefore employs both an algorithm that can be processed at high speed and an algorithm that can execute highly accurate automatic diagnosis. Upon acquiring an ophthalmic image (S1), the CPU 3 of the present embodiment first performs automatic diagnosis using the high-speed algorithm (S4) and generates reference information (S5). If the generated reference information indicates a low degree of disease presence (for example, "normal"), the reference information generation process for that ophthalmic image ends. If, on the other hand, the generated reference information indicates a high degree of disease presence (for example, "caution" or "warning"), automatic diagnosis is performed again using the high-accuracy algorithm (S4), and reference information is generated anew (S5). As a result, reference information is generated efficiently. Note that the CPU 3 may run the high-accuracy automatic diagnosis in the background while performing the high-speed automatic diagnosis. In this case, reference information is first generated in a short time, and highly accurate reference information is ultimately generated.
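The two-tier flow just described — screen every image with the fast algorithm, and re-diagnose with the high-accuracy algorithm only when the fast result suggests disease — can be sketched as follows. The function names and the representation of the two algorithms as callables are assumptions made for illustration.

```python
def staged_diagnosis(image, fast_diagnose, accurate_diagnose, classify):
    """Run a fast automatic diagnosis first; rerun with the
    high-accuracy algorithm only when the fast result suggests disease.

    fast_diagnose / accurate_diagnose: image -> {disease: probability}
    (assumed interfaces for the two diagnosis algorithms).
    classify: probabilities -> one of "normal", "caution", "warning".
    """
    stage = classify(fast_diagnose(image))
    if stage == "normal":
        return stage  # low degree of disease presence: stop here
    # suspicious result: re-diagnose with the slower, accurate algorithm
    return classify(accurate_diagnose(image))
```

Running the accurate pass in the background instead, as the text also permits, would simply start `accurate_diagnose` concurrently and replace the fast result when it completes.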
(Setting of reference information associated with patient data)
 The setting of the reference information associated with patient data will be described with reference to FIG. 5. In the present embodiment, patient data is created for each subject or each eye to be examined and stored in the storage device 4. Details of the patient data will be described later with reference to FIGS. 6 and 7; one set of patient data includes one or more ophthalmic images. Furthermore, when one set of patient data includes a plurality of ophthalmic images, the PC 1 of the present embodiment can generate reference information for each of the plurality of ophthalmic images.
 In the present embodiment, when reference information has been generated for each of a plurality of ophthalmic images included in one set of patient data, the user can set which type of reference information is associated with that patient data. Specifically, as shown in FIG. 5, the user can set whether to associate with the patient data the reference information generated for the most recently captured ophthalmic image (hereinafter, "latest reference information") or the reference information of the stage indicating the highest degree of disease presence (hereinafter, "high-stage reference information"). By associating the latest reference information with the patient data, the user can manage the patient data based on the most recent state of the eye to be examined. On the other hand, by associating the high-stage reference information with the patient data, the user can manage the patient data while also taking the past degree of disease presence into account. In the present embodiment, the user opens the associated reference information setting screen 38, selects either the latest reference information or the high-stage reference information, and operates the "OK" button, thereby setting the type of reference information to be associated with the patient data. The CPU 3 associates the selected type of reference information with each set of patient data.
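The two association policies amount to two different reductions over a patient's per-image reference information: take the stage of the newest image, or take the most severe stage overall. A minimal sketch follows; the record layout (date, stage) and the numeric stage ordering are assumptions for the example.

```python
# Assumed severity ordering for the three stages of reference information.
STAGE_ORDER = {"normal": 0, "caution": 1, "warning": 2}

def associate_reference_info(images, policy):
    """Pick the single reference label to attach to the patient data.

    images: list of (capture_date, stage) tuples, with dates as ISO
    strings, e.g. ("2017-06-20", "warning") -- an assumed layout.
    policy: "latest" -> stage of the most recently captured image;
            "high_stage" -> stage with the highest degree of disease presence.
    """
    if policy == "latest":
        return max(images, key=lambda rec: rec[0])[1]  # ISO dates sort lexically
    if policy == "high_stage":
        return max(images, key=lambda rec: STAGE_ORDER[rec[1]])[1]
    raise ValueError("unknown policy: %s" % policy)
```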
(Display of patient data)
 An example of a method for displaying patient data will be described with reference to FIGS. 6 to 8. FIG. 6 shows an example of a patient data display screen 40A in the case where the latest reference information is associated with the patient data. FIG. 7 shows an example of a patient data display screen 40B in the case where the high-stage reference information is associated with the patient data. FIG. 8 shows an example of two-dimensional tomographic images 51 and 52 displayed when the thumbnail 47 dated June 20, 2017 in FIG. 6 is selected.
 First, the various display fields will be described. As shown in FIGS. 6 and 7, the patient data display screen is provided with a patient ID display field 41, a patient name display field 42, a gender display field 43, a left/right eye display field 44, and an associated reference information display field 45. The left/right eye display field 44 is provided with check boxes for selecting the left eye and the right eye. By checking either the left-eye or right-eye check box, the user can display, of the one or more ophthalmic images included in the currently selected (displayed) patient data, the ophthalmic images of either the left eye or the right eye. The associated reference information display field 45 displays the type of reference information selected on the associated reference information setting screen 38 (see FIG. 5) described above.
 As shown in FIGS. 6 and 7, the patient data display screens 40A and 40B display a thumbnail 47 of the front image corresponding to each ophthalmic image (in the present embodiment, a set of two two-dimensional tomographic images for which reference information is generated). Each thumbnail 47 is provided with a reference information display field 48 and a date display field 49. The reference information display field 48 displays the reference information generated for the corresponding ophthalmic image. That is, when displaying the patient data on the monitor 8, the CPU 3 can display the reference information corresponding to each of the plurality of ophthalmic images included in the patient data. As a result, follow-up observation and the like of the eye to be examined are facilitated. In the examples shown in FIGS. 6 and 7, "警" indicates warning, "注" indicates caution, and "正" indicates normal. For an ophthalmic image for which reference information has not yet been generated, the reference information display field 48 indicates that no reference information has been generated; in the examples shown in FIGS. 6 and 7, this is indicated by "/". The date display field 49 displays the date on which the ophthalmic image was captured.
 As shown in FIG. 6, when the latest reference information is associated with the patient data, a display is made to indicate which ophthalmic image is the latest one on which the reference information associated with the patient data as a whole is based. As one example, in the example shown in FIG. 6, the frame of the thumbnail 47M of the latest ophthalmic image is displayed in a manner different from the frames of the other thumbnails 47, thereby indicating which ophthalmic image is the latest. However, the display method for indicating the ophthalmic image on which the reference information is based can be changed as appropriate. For example, instead of the frame of the thumbnail 47M, the display manner of the reference information attached to the thumbnail 47M may be changed to indicate the ophthalmic image on which the reference information is based.
 When the latest reference information is associated with the patient data, the plurality of thumbnails 47 are displayed side by side in the order in which the ophthalmic images were captured. Accordingly, the user can easily grasp the progress of the eye to be examined by viewing the patient data display screen 40A.
 As shown in FIG. 7, when the high-stage reference information is associated with the patient data, a display is made to indicate which ophthalmic image is the one on which the reference information associated with the patient data as a whole is based (that is, the ophthalmic image with the highest degree of disease presence). As one example, in the example shown in FIG. 7, as in FIG. 6, the frame of the thumbnail 47N of the ophthalmic image with the highest degree of disease presence is displayed in a manner different from the frames of the other thumbnails 47.
 When the high-stage reference information is associated with the patient data, the plurality of thumbnails 47 are displayed side by side in order of the degree of disease presence. Accordingly, the user can grasp the plurality of ophthalmic images according to the degree of disease presence.
 When the user selects one of the thumbnails 47 displayed on the patient data display screens 40A and 40B, the CPU 3 causes the monitor 8 to display an ophthalmic image display screen 50 including the ophthalmic images corresponding to the selected thumbnail 47 (in the present embodiment, the two-dimensional tomographic images on which the reference information is based). As shown in FIG. 8, the ophthalmic image display screen 50 of the present embodiment displays the selected thumbnail 47, the reference information 48, and the date 49, together with the two two-dimensional tomographic images 51 and 52 on which the generation of the reference information is based. The display mode of the ophthalmic image display screen 50 can also be changed; for example, a graph or the like indicating the thickness of a specific layer of the fundus may be displayed together with the two-dimensional tomographic images 51 and 52.
 In the present embodiment, when the reference information generated for an ophthalmic image differs from the actual diagnosis result, the user can input an instruction to edit the reference information via the operation unit 7 or the like. The CPU 3 edits the selected reference information in accordance with the instruction input by the user. As a result, the patient data is managed more appropriately. The CPU 3 displays edited reference information in a display manner different from that of unedited reference information, so that the user can easily grasp whether the reference information has been edited. Furthermore, by operating the operation unit 7, the users permitted to edit the reference information can be limited to specific users.
(List display of patient data)
 An example of a method for displaying a list of patient data will be described with reference to FIGS. 9 and 10. When the user inputs an instruction to display a list of patient data, the CPU 3 causes the monitor 8 to display a patient data list display screen 60, as shown in FIG. 9. The list display screen 60 includes a list display section 61 and a search condition input section 62. The list display section 61 displays a list of various kinds of information on each set of patient data created for each subject or each eye to be examined. Specifically, in the list display section 61 of the present embodiment, the reference information associated with each set of patient data is displayed in addition to the ID, name, age, and gender information included in the patient data. Accordingly, the user can appropriately manage the patient data based on the reference information. As described above, in the present embodiment the user can set which of the latest reference information and the high-stage reference information is associated with the patient data. The CPU 3 associates the reference information selected by the user, of the latest reference information and the high-stage reference information, with each set of patient data and displays it in the list display section 61. Needless to say, the list display screen 60 may include other information (for example, the date and time at which the ophthalmic image was captured).
 Various search conditions are input to the search condition input section 62 when the user wants to search the entire list for desired patient data. The search conditions in the present embodiment include reference information in addition to the patient's name, age, gender, and the like. When searching for patient data based on the reference information, the user operates the operation unit 7 to select at least one of "warning", "caution", "normal", and "not generated", thereby inputting an instruction to select at least one of the plurality of stages of the reference information. The CPU 3 extracts, from the patient data, the patient data corresponding to the reference information of the stage selected by the input instruction (in the present embodiment, the latest reference information or the high-stage reference information). The CPU 3 causes the monitor 8 to display a list of the patient data extracted according to the reference information of the selected stage. FIG. 10 shows the list display screen 60 when a search has been performed with the "warning" reference information from the state shown in FIG. 9. As described above, according to the present embodiment, patient data is appropriately managed based on the reference information.
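The stage-based search described above amounts to filtering the patient records by their associated reference information. The sketch below is an assumed illustration only; the record layout and the use of `None` as the "not generated" marker are not taken from the patent.

```python
def filter_by_stages(patients, selected_stages):
    """Return patient records whose associated reference information
    matches any of the selected stages.

    patients: list of dicts with a "reference_info" key holding one of
    "warning", "caution", "normal", or None (not yet generated) --
    an assumed record layout.
    selected_stages: set of stage names; "not_generated" matches None.
    """
    def matches(rec):
        stage = rec["reference_info"]
        if stage is None:
            return "not_generated" in selected_stages
        return stage in selected_stages
    return [rec for rec in patients if matches(rec)]
```

Sorting the unfiltered list by stage, as the following paragraph permits, could reuse the same stage values as a sort key.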
 In the list display section 61 shown in FIG. 9, the list of the plurality of sets of patient data is displayed in order of the capture date and time of the ophthalmic images. However, the CPU 3 may display the list of the plurality of sets of patient data sorted according to the reference information (that is, by stage of the reference information).
(Switching display of reference information on and off)
 In the present embodiment, the user can input to the PC 1 an instruction for selecting whether the reference information is displayed on the monitor 8. The selection instruction may be input, for example, by the user operating the operation unit 7. The CPU 3 switches the reference information illustrated in FIGS. 6 to 10 between being displayed on and hidden from the monitor 8 in accordance with the instruction input by the user. For example, when an instruction to hide the reference information is input, the CPU 3 hides the reference information displayed in FIGS. 6 to 10. Accordingly, the user can perform tasks such as diagnosis more appropriately.
(Output of patient data)
 The patient data output process executed by the CPU 3 will be described with reference to FIG. 11. In the patient data output process, a process for transmitting patient data to another device and a process for outputting a report of patient data are performed. First, the CPU 3 determines whether an instruction to select a stage of the reference information has been input (S11). The selection instruction may be input, for example, by the user operating the operation unit 7. When the selection instruction is input (S11: YES), the CPU 3 extracts, from the patient data, the patient data corresponding to the reference information of the stage selected by the input instruction (in the present embodiment, the latest reference information or the high-stage reference information) (S12).
 When only one of the left-eye and right-eye patient data of the same subject has reference information of a specific stage (for example, the stage with the highest degree of disease presence), the CPU 3 may extract the patient data of the other eye of the same subject together with the patient data of the eye corresponding to the reference information of the specific stage. In this case, the user can diagnose the other eye while also taking the diagnosis result of the first eye into account. In this case, the CPU 3 may also notify the user that attention should be paid to the other eye. Furthermore, the CPU 3 may determine whether to extract only the patient data of the eye corresponding to the reference information of the specific stage or to extract the patient data of both eyes, either according to the type of disease that is likely to be present or according to an instruction from the user.
 Next, the CPU 3 determines whether an instruction to transmit the patient data has been input (S14). When the transmission instruction is input (S14: YES), the CPU 3 transmits the patient data to another device (for example, the PC 21 at the site B shown in FIG. 1) via the network 9 (S15). Here, when a stage of the reference information has been selected in S11, the patient data extracted in S12 (that is, the patient data corresponding to the reference information of the selected stage) is transmitted in S15. Note that a report, described later, may be transmitted to the other device instead of, or together with, the patient data.
 Next, the CPU 3 determines whether an instruction to output a report has been input (S17). When the output instruction is input (S17: YES), the CPU 3 outputs a report of the patient data (S18). Here, when a stage of the reference information has been selected in S11, a report of the patient data extracted in S12 (that is, the patient data corresponding to the reference information of the selected stage) is output in S18. The report may be output by printing on paper, or by outputting data in a specific format (for example, PDF data). Needless to say, the information included in the report can be selected as appropriate.
 Next, the CPU 3 determines whether an end instruction has been input (S19). If no end instruction has been input (S19: NO), the process returns to the determination in S11. When the end instruction is input (S19: YES), the patient data output process ends.
 The patient data transmitted to the other device in S15 may be the entire set of patient data including one or more ophthalmic images, or may be a part of the patient data (for example, only the latest ophthalmic image). The same applies to the output of the report.
 In the present embodiment, the process of outputting patient data (that is, the process of transmitting patient data and the process of outputting a report) is performed in response to the input of an instruction from the user. However, the timing at which the patient data is output can be changed as appropriate. For example, the CPU 3 may generate reference information for each of a plurality of ophthalmic images in order, and perform the patient data output process each time reference information of a specific stage (for example, "warning" or "caution" reference information) is generated. As another example, the ophthalmic image capturing apparatus 11 may execute the reference information generation process (see FIG. 3) and the patient data output process (see FIG. 11). In this case, the CPU 16 of the ophthalmic image capturing apparatus 11 may generate reference information for the captured ophthalmic image each time an ophthalmic image of the eye to be examined is captured, and may output the patient data each time reference information of a specific stage is generated.
 The CPU 3 may also store data corresponding to reference information of a specific stage (at least one of the patient data, the ophthalmic images included in the patient data, analysis result data, and the like) and data corresponding to reference information of stages other than the specific stage in the storage device 4 by different methods. For example, when data corresponds to reference information of a stage other than the specific stage (for example, other than "warning" and "caution"), the CPU 3 may omit the process of saving at least a part of the data in the storage device 4, may delete at least a part of the data from the storage device 4, or may execute a process for reducing the amount of the data stored in the storage device 4.
(Provision of data to the manufacturer)
 In the present embodiment, the user can input to the PC 1 the results of the user's own actual diagnoses of ophthalmic images. The CPU 3 can extract ophthalmic images for which the degree of disease indicated by the generated reference information differs from the presence or absence of disease determined by the user through actual diagnosis. The CPU 3 can also extract ophthalmic images for which the automatic diagnosis result obtained by the mathematical model differs from the result of the diagnosis actually performed by the user. The CPU 3 provides the extracted ophthalmic images (for example, transmits them via the network 9) to the manufacturer of the ophthalmic image processing apparatus and the ophthalmic image processing program. By training the mathematical model using the provided ophthalmic images as training data, the manufacturer can improve the accuracy of subsequent automatic diagnosis results or reference information.
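Selecting these disagreement cases — images whose generated reference information contradicts the user's actual diagnosis — is a simple filter over the stored records. The record fields below and the rule that "warning" and "caution" count as suggesting disease are assumptions made for illustration.

```python
def extract_disagreements(records):
    """Return the images where the generated reference information and
    the user's actual diagnosis disagree about the presence of disease.

    records: list of dicts with "reference_info" ("warning"/"caution"/
    "normal") and "diagnosed_diseased" (bool entered by the user) --
    an assumed record layout.
    """
    def suggests_disease(stage):
        return stage in ("warning", "caution")
    return [r for r in records
            if suggests_disease(r["reference_info"]) != r["diagnosed_diseased"]]
```

The extracted records would then be the candidates for transmission to the manufacturer as additional training data.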
 The techniques disclosed in the above embodiment are merely examples, and the techniques illustrated in the above embodiment can therefore be modified. For example, as described above, the reference information generation process may be executed by the ophthalmic imaging apparatus 11. In this case, the CPU 16 of the ophthalmic imaging apparatus 11 may change the method of imaging the eye to be examined according to at least one of the automatic diagnosis result and the reference information generation result. For example, the CPU 16 first captures the ophthalmic images necessary for automatic diagnosis and generation of reference information (in the above embodiment, two two-dimensional tomographic images). Next, when the automatic diagnosis result and the reference information generation result are a predetermined result (for example, when the reference information is "warning" or "caution"), the CPU 16 may execute additional ophthalmic imaging (for example, map imaging for capturing a three-dimensional tomographic image of the tissue of the eye to be examined). In this case, a more appropriate image is captured as needed according to at least one of the automatic diagnosis result and the reference information generation result.
 The process of acquiring an ophthalmic image in S1 of FIG. 3 is an example of an "image acquisition step". The process of extracting an image of a target range in S2 and S3 of FIG. 3 is an example of an "image extraction step". The process of acquiring automatic diagnosis results in S4 of FIG. 3 is an example of an "automatic diagnosis result acquisition step". The process of generating reference information in S5 of FIG. 3 is an example of a "reference information generation step". The CPU 3 that executes the process of acquiring an ophthalmic image in S1 of FIG. 3 is an example of an "image acquisition means". The CPU 3 that executes the process of extracting an image of a target range in S2 and S3 of FIG. 3 is an example of an "image extraction means". The CPU 3 that executes the process of acquiring automatic diagnosis results in S4 of FIG. 3 is an example of an "automatic diagnosis result acquisition means".
1  PC
3  CPU
7  Operation unit
8  Monitor
9  Network
11  Ophthalmic image capturing apparatus
13  CPU
16  Ophthalmic image capturing unit
21  PC
30  Front image
38  Associated reference information setting screen
40A, 40B  Patient data display screen
48  Reference information display field
50  Ophthalmic image display screen
51, 52  Two-dimensional tomographic image
60  List display screen
62  Search condition input section
100  Ophthalmic image processing system

Claims (11)

  1.  An ophthalmic image processing apparatus that processes an ophthalmic image of an eye to be examined, wherein
     a control unit of the ophthalmic image processing apparatus:
     acquires the ophthalmic image captured by an ophthalmic image capturing unit;
     acquires an automatic diagnosis result for each of a plurality of diseases in the eye to be examined by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and
     generates reference information, which indicates in stages the degree to which at least one of the plurality of diseases is present in the eye to be examined, based on the plurality of automatic diagnosis results for the ophthalmic image.
  2.  The ophthalmic image processing device according to claim 1, wherein
     the controller:
     receives an instruction selecting at least one of a plurality of stages of the reference information; and
     extracts, from patient data including the ophthalmic images of a plurality of subjects or subject eyes, patient data corresponding to reference information at the stage selected by the input instruction.
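The extraction in claim 2 can be sketched as a simple filter over patient records. The record layout and field names here are hypothetical, used only to illustrate selecting patient data whose reference-information stage matches the stages chosen by the operator:

```python
# Hypothetical patient data: each record carries the staged reference information
# ("grade") generated for that subject or subject eye.
patients = [
    {"id": "P001", "grade": 3},
    {"id": "P002", "grade": 1},
    {"id": "P003", "grade": 3},
]

def filter_by_grades(patient_data, selected_grades):
    """Extract the patient data corresponding to the selected reference-information stages."""
    return [p for p in patient_data if p["grade"] in selected_grades]

# Operator selects the highest stage only.
high_risk = filter_by_grades(patients, {3})
```

The same function supports selecting several stages at once, which also covers the list display of claim 3 (the extracted records are what would be shown on the display unit).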
  3.  The ophthalmic image processing device according to claim 2, wherein
     the controller:
     causes a display unit to display a list of the patient data extracted according to the reference information at the selected stage.
  4.  The ophthalmic image processing device according to any one of claims 1 to 3, wherein
     the controller:
     transmits, via a network to another device, patient data corresponding to the reference information at a specific stage among one or more sets of patient data including the ophthalmic image.
  5.  The ophthalmic image processing device according to any one of claims 1 to 4, wherein
     patient data including data of one or more of the ophthalmic images is stored in a storage means for each subject or subject eye, and
     the controller:
     is capable of generating the reference information for each of a plurality of the ophthalmic images when one set of patient data includes the plurality of ophthalmic images; and
     associates with each set of patient data either the reference information generated for the most recently captured ophthalmic image or the reference information at the stage indicating the highest degree of disease presence.
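The two association policies of claim 5 (newest image versus worst stage) can be sketched as follows. The record structure and the policy names are illustrative only, not taken from the disclosure:

```python
from datetime import date

# Hypothetical per-image records for one set of patient data: capture date
# plus the staged reference information generated for that image.
images = [
    {"taken": date(2018, 1, 10), "grade": 2},
    {"taken": date(2018, 4, 1), "grade": 1},
]

def patient_grade(image_records, policy="newest"):
    """Pick the reference information to associate with the patient data."""
    if policy == "newest":
        # Reference information of the most recently captured image.
        return max(image_records, key=lambda im: im["taken"])["grade"]
    # Reference information at the stage with the highest degree of disease presence.
    return max(im["grade"] for im in image_records)

newest = patient_grade(images, "newest")
worst = patient_grade(images, "worst")
```

Note that the two policies can disagree, as here: the newest image carries a lower stage than an earlier one, so which value is associated with the patient data depends on the chosen policy.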
  6.  The ophthalmic image processing device according to any one of claims 1 to 5, wherein
     patient data including data of one or more of the ophthalmic images is stored in a storage means for each subject or subject eye, and
     the controller:
     is capable of generating the reference information for each of a plurality of the ophthalmic images when one set of patient data includes the plurality of ophthalmic images; and
     when displaying the patient data on a display unit, causes the display unit to display the reference information corresponding to each of the plurality of ophthalmic images included in the patient data.
  7.  The ophthalmic image processing device according to any one of claims 1 to 6, wherein
     the controller:
     receives an instruction selecting whether to display the reference information on a display unit; and
     switches between displaying and hiding the reference information on the display unit according to the input instruction.
  8.  The ophthalmic image processing device according to any one of claims 1 to 7, wherein
     the controller:
     extracts, when the range of the acquired ophthalmic image is wider than the target range of the ophthalmic image to be input to the mathematical model, the target range from the acquired ophthalmic image; and
     obtains the automatic diagnosis result by inputting the extracted ophthalmic image into the mathematical model.
  9.  An ophthalmic image processing device that processes an ophthalmic image of a subject's eye, comprising:
     image acquisition means for acquiring the ophthalmic image captured by an ophthalmic image capturing unit;
     automatic diagnosis result acquisition means for obtaining an automatic diagnosis result for at least one disease of the subject's eye by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and
     image extraction means for extracting, when the range of the ophthalmic image acquired by the image acquisition means is wider than the target range of the ophthalmic image to be input to the mathematical model, the image of the target range from the acquired ophthalmic image as the ophthalmic image to be input to the mathematical model.
  10.  An ophthalmic image processing program executed by an ophthalmic image processing device that processes an ophthalmic image of a subject's eye,
     the ophthalmic image processing program, when executed by a controller of the ophthalmic image processing device, causing the ophthalmic image processing device to execute:
     an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit;
     an automatic diagnosis result acquisition step of obtaining an automatic diagnosis result for each of a plurality of diseases of the subject's eye by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and
     a reference information generation step of generating reference information, which indicates in stages the degree to which at least one of the plurality of diseases is present in the subject's eye, based on the plurality of automatic diagnosis results for the ophthalmic image.
  11.  An ophthalmic image processing program executed by an ophthalmic image processing device that processes an ophthalmic image of a subject's eye,
     the ophthalmic image processing program, when executed by a controller of the ophthalmic image processing device, causing the ophthalmic image processing device to execute:
     an image acquisition step of acquiring the ophthalmic image captured by an ophthalmic image capturing unit;
     an automatic diagnosis result acquisition step of obtaining an automatic diagnosis result for at least one disease of the subject's eye by inputting the ophthalmic image into a mathematical model trained by a machine learning algorithm; and
     an image extraction step of extracting, when the range of the ophthalmic image acquired in the image acquisition step is wider than the target range of the ophthalmic image to be input to the mathematical model, the image of the target range from the acquired ophthalmic image as the ophthalmic image to be input to the mathematical model.

PCT/JP2018/017327 2018-04-27 2018-04-27 Ophthalmic image processing device and ophthalmic image processing program WO2019207800A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020515457A JP7196908B2 (en) 2018-04-27 2018-04-27 Ophthalmic image processing device and ophthalmic image processing program
PCT/JP2018/017327 WO2019207800A1 (en) 2018-04-27 2018-04-27 Ophthalmic image processing device and ophthalmic image processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/017327 WO2019207800A1 (en) 2018-04-27 2018-04-27 Ophthalmic image processing device and ophthalmic image processing program

Publications (1)

Publication Number Publication Date
WO2019207800A1 true WO2019207800A1 (en) 2019-10-31

Family

ID=68295218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/017327 WO2019207800A1 (en) 2018-04-27 2018-04-27 Ophthalmic image processing device and ophthalmic image processing program

Country Status (2)

Country Link
JP (1) JP7196908B2 (en)
WO (1) WO2019207800A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160100806A1 (en) * 2014-10-13 2016-04-14 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for predicting early onset glaucoma
JP2018027273A (en) * 2016-08-19 2018-02-22 学校法人自治医科大学 Staging determination support system of diabetic retinopathy and method of supporting determination of staging of diabetic retinopathy

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AYATSUKA, YUJI ET AL.: "Detecting Ocular Diseases from an Optical Coherence Tomography Image by Machine Learning", IEICE TECHNICAL REPORT, vol. 116, no. 298, 7 November 2016 (2016-11-07), pages 11 - 14, ISSN: 0913-5685 *
KHALIL, T. ET AL.: "An Overview of Automated Glaucoma Detection", IEEE COMPUTING CONFERENCE PROCEEDINGS, vol. 2017, 2017, pages 620 - 632, XP033294580, ISBN: 978-1-5090-5442-8, DOI: 10.1109/SAI.2017.8252161 *
TAKAHASHI, HIDENORI ET AL.: "Applying artificial intelligence to disease staging Deep learning for improved staging of diabetic retinopathy", PLOS ONE, 22 June 2017 (2017-06-22), XP055646119 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021117652A (en) * 2020-01-24 2021-08-10 キヤノンメディカルシステムズ株式会社 Medical examination support system, medical examination support device, and medical examination support program
JP7510761B2 (en) 2020-01-24 2024-07-04 キヤノンメディカルシステムズ株式会社 Medical support system, medical support device, and medical support program
CN111428072A (en) * 2020-03-31 2020-07-17 南方科技大学 Ophthalmologic multimodal image retrieval method, apparatus, server and storage medium
WO2022145129A1 (en) * 2020-12-28 2022-07-07 株式会社トプコン Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program
WO2022208903A1 (en) * 2021-03-31 2022-10-06 株式会社ニデック Oct device, oct data processing method, program, and storage medium

Also Published As

Publication number Publication date
JPWO2019207800A1 (en) 2021-05-13
JP7196908B2 (en) 2022-12-27

Similar Documents

Publication Publication Date Title
JP6907563B2 (en) Image processing device and image processing program
WO2018143180A1 (en) Image processing device and image processing program
EP4023143A1 (en) Information processing device, information processing method, information processing system, and program
US20220151483A1 (en) Ophthalmic apparatus, method for controlling ophthalmic apparatus, and computer-readable medium
JP6878923B2 (en) Image processing equipment, image processing system, and image processing program
US11617505B2 (en) Ophthalmic system, ophthalmic information processing device, and ophthalmic diagnosing method
JP7196908B2 (en) Ophthalmic image processing device and ophthalmic image processing program
WO2020202680A1 (en) Information processing device and information processing method
JP2018147387A (en) System and method for processing ophthalmic examination information
JP7194136B2 (en) OPHTHALMOLOGICAL APPARATUS, OPHTHALMOLOGICAL APPARATUS CONTROL METHOD, AND PROGRAM
JP7332463B2 (en) Control device, optical coherence tomography device, control method for optical coherence tomography device, and program
JP2019208852A (en) Ophthalmologic image processing apparatus and ophthalmologic image processing program
JP7406901B2 (en) Information processing device and information processing method
WO2020116351A1 (en) Diagnostic assistance device and diagnostic assistance program
JP2024045441A (en) Ophthalmologic image processing device and ophthalmologic image processing program
JP7328489B2 (en) Ophthalmic image processing device and ophthalmic photographing device
JP7563384B2 (en) Medical image processing device and medical image processing program
JP6568375B2 (en) Ophthalmic information processing system, image processing apparatus, and image processing method
JP7302184B2 (en) Ophthalmic image processing device and ophthalmic image processing program
JP2018147386A (en) System and method for processing ophthalmic examination information
JP2021074095A (en) Ophthalmologic image processing device and ophthalmologic image processing program
JP7468163B2 (en) Ophthalmic image processing program and ophthalmic image processing device
JP6898969B2 (en) Ophthalmic information processing system and ophthalmic information processing method
US12141969B2 (en) Medical image processing device and medical image processing program
WO2021066039A1 (en) Medical information processing program, and medical information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916596

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020515457

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18916596

Country of ref document: EP

Kind code of ref document: A1