
WO2024072330A1 - System and process of automated ocular inflammatory region assessment

Info

Publication number: WO2024072330A1
Application number: PCT/TH2022/000038
Authority: WIPO (PCT)
Prior art keywords: image data, automated, image, ocular inflammatory, region
Other languages: French (fr)
Inventors: Linda HANSAPINYO, Karn PATANUKHOM
Original assignee: Chiang Mai University

Classifications

    • G06T 7/11 Region-based segmentation (G06T 7/10 Segmentation; Edge detection; G06T 7/00 Image analysis)
    • G06T 7/0012 Biomedical image inspection (G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/10024 Color image (G06T 2207/10 Image acquisition modality)
    • G06T 2207/20084 Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)
    • G06T 2207/30041 Eye; Retina; Ophthalmic (G06T 2207/30004 Biomedical image processing)



Abstract

A system and process of automated ocular inflammatory region assessment consist of near-eye image acquisition through an image acquisition device or image addition by the user, followed by grouping, that is, linking image data of the same eye so that check results can be cross-referenced between images. Regions of interest are then determined, either by automatic region determination, in which a processing unit searches for the target regions and creates them on all image data within the image data set, or by human region determination, in which polygon region data and positions are supplied by the user and then propagated onto all image data within the set, prior to conjunctival hyperemia assessment by optimizing the color space and counting pixels according to pre-specified rules.

Description

SYSTEM AND PROCESS OF AUTOMATED OCULAR INFLAMMATORY REGION ASSESSMENT
Field of the Invention
Engineering, particularly systems and processes for automated ocular inflammatory region assessment
Background of the Invention
The eye is an organ in which several forms of abnormality can be found, such as cataract, glaucoma, macular degeneration, hyalosis, etc. The eye is essential to human living, is used and deteriorates every day, and present-day living is surrounded by things that can easily damage it or cause it to deteriorate, so the rate of ocular deterioration and morbidity is increasing rapidly and may cause problems in daily life in the future. Accurate diagnosis of abnormalities and continuous eye care are therefore essential for present-day living.
Individual eye disease therapies differ based on symptoms. For example, in glaucoma, one of the leading causes of blindness worldwide and increasing every year, ocular hypotensive agents in eye drop dosage form are used as one of the therapeutic drugs. This drug group usually has side effects on the conjunctiva and corneal surface, such as corneal dryness and eye irritation or burning; in severe cases it may affect the patient's vision.
In addition to drug side effects, the preservatives in eye drops are themselves a main cause of damage to the tear film and corneal epithelial cells, and these preservatives are ingredients in almost all types of eye drops, from antibiotics, anti-inflammatory drugs and mydriatics to artificial tears. Patients treated with almost any type of eye drops, particularly on long-term use, are therefore likely to experience side effects on the cornea and conjunctiva, causing conjunctival hyperemia.
Without close monitoring of these side effects at every examination, the eye may be damaged in addition to, or on top of, its underlying illness, possibly beyond control. Monitoring, examination, and diagnosis of these side effects are therefore necessary.
In addition to drug side effects, other ophthalmic conditions, such as infection or allergy, also cause conjunctival hyperemia, whose area and color strength indicate the severity of the disease or symptoms and enable the physician to assess and plan treatment in time.
In general, conjunctival hyperemia assessment is performed qualitatively by ophthalmologists; that is, humans grade its severity, which easily causes examination errors because the same examiner varies in readiness from day to day and different examiners exercise different discretion. One initial mitigation has more than one examiner perform the examination simultaneously and average the assessed values, but this method still rests mainly on examiner discretion and wastes limited human resources.
Inventions for conjunctival hyperemia assessment have therefore been developed. They generally analyze hyperemia content using a digital camera connected to a slit lamp and count pixels in regions of interest (ROI) against a color threshold. Both the ROI and the threshold are specified by individual users and may differ for each patient, depending on eye shape and symptoms.
Results thus change with the user-specified variables on each occasion. Although grading or scoring problems are partially solved, these inventions still rely on examiner discretion, which may change each time, because the main variables used in the calculation must be specified by the user.
Moreover, even when more than one photograph of the patient is taken at each examination, each photographing session introduces errors in the position, size, and alignment of the eye, making it difficult for the examiner to compare one examination with another.
Inventions that can perform quantitative conjunctival hyperemia examination with minimal human work, minimizing errors while maintaining accuracy, are therefore needed.
Patent database searches found some prior inventions relating to ocular examination technology. For example:
A Thai invention patent, application number 1801001471, titled “Image quality and characteristics, image adaptation and characteristic extraction for the classification of ocular blood vessels and faces, and combination of data on ocular blood vessels and faces and/or facial elements for a biological measure system”, concerns the verification of biological measures, namely eyes and faces. It describes a procedure from image input, through determination of points of interest (the eyes and regions around the eyes), to calculation of the skin area around those points.
A Chinese patent, publication number CN111839455A, titled “Eye sign recognition method and device for thyroid-related eye diseases”, describes recognition and examination of thyroid-related eye diseases through eyelid and ocular movement examination: facial recognition first detects the ocular position, the regions determined to be the eye are then checked against pre-specified rules, and blinking is examined by calculating the area between the right eye and the upper eyelid.
A European patent, publication number EP3666177A4, titled “Electronic device and method for determining degree of conjunctival hyperemia by using same”, describes a device that determines the degree of conjunctival hyperemia by segmenting blood vessels into many nodes, calculating blood vessel sizes from binary images, and comparing them to a pre-specified blood vessel size.
Many other prior inventions apply ocular image processing to identification or eye abnormality examination. Nevertheless, all of them require control by the user or pre-assessment by an administrator. In the latter case, that assessment naturally affects processing capability, because even the same user making multiple assessments with the same calculation variables can obtain different results, as explained above.
In contrast, the present system and process of automated ocular inflammatory region assessment acquire more than one image of the same eye from the user to generate an image group and an ROI, where the user can choose either automatic processing or adding the regions manually. The regions are then processed so that corresponding regions match across image data of the same eye, by adjusting the size, orientation, and position of images in the same group for consistency before the ROI is applied to each image.
Each image is then assessed for quantitative conjunctival hyperemia as at least one of: percentage of blood vessel pixels in the ROI, percentage of strong red pixels in the ROI, percentage of weak red pixels in the ROI, and percentage of normal pixels in the ROI. Reflected light is removed, the quality of blood vessel pixels is improved, and the pixels in the ROI are then counted against red color strength rules before the outcome is displayed to the user.
This operation removes the need for the physician to control the invention closely at every examination, because the invention itself can generate the ROI, and comparing pixel colors against pre-specified rules enables standardized assessment of the eye. At the same time, for patients with special eye characteristics, or when calculation accuracy must be verified, the user or physician can opt to determine the ROI manually.
Summary of the Invention
The system and process of automated ocular inflammatory region assessment consist of acquiring at least one set of eye image data for generating an ROI, or acquiring those regions from the user, then harmonizing the corresponding regions to generate the ROI on all images, improving image quality (particularly of blood vessel segments), and counting pixels in the regions against pre-specified rules. The purpose of the invention is to facilitate accurate assessment of conjunctival hyperemia in the patient without the physician or user having to control the entire procedure, minimizing the resources consumed in the therapy and diagnosis of many patients.
Brief Description of the Drawings
Figure 1, demonstrating an embodiment of the process of automated ocular inflammatory region assessment
Figure 2, demonstrating an embodiment of the system of automated ocular inflammatory region assessment
Figure 3, demonstrating an embodiment of automatic ROI determination process
Figure 4, demonstrating an embodiment of human ROI determination process
Figure 5, demonstrating an embodiment of ocular region synchronization process
Figure 6, demonstrating an embodiment of conjunctive hyperemia assessment process
Figure 7, demonstrating blood vessel determination using rule-based method
Figure 8, demonstrating blood vessel determination using deep-learning-based method
Detailed Description of the Invention
The process of automated ocular inflammatory region assessment according to this invention is shown by way of embodiment only. It is an invention for lessening the burden and minimizing the resources used in assessing conjunctival hyperemia regions in patients.
Figure 1 demonstrates an embodiment of the process of automated ocular inflammatory region assessment. Figure 2 demonstrates an embodiment of the system, which is basically composed of a processing unit 1000 connected through a network with an image acquisition device 1200 and a memory 1100 in which the operational units are recorded. The process consists of a step of image data acquisition 100 of at least one image, either from the user or retrieved from the memory 1100, basically arranged in the RGB color space. All of the image data take the form of near-eye images and may differ in size, file extension, resolution, aspect ratio, white balance, or lighting conditions.
The image data are basically acquired from the memory 1100, which can be written either directly by the user through a user interface (the channel this invention uses for acquiring data and displaying processing results to the user) or in conjunction with at least one image acquisition device 1200; different devices 1200 may have different or identical internal configurations and may photograph in different or identical environments. The eye image data are preferably captured without dye staining and may differ in ocular deviation, ocular size, magnification of the image acquisition device, or angle of incidence.
Should images be acquired from different eyes, grouping 110 is performed by a grouping unit 1101, one of the operational units recorded inside the memory 1100 and operating in conjunction with the processing unit 1000. Grouping 110 links at least two images into an image data set so that the other operational units of the invention can recognize that the image data are of the same eye. This can be done by a user who knows the source of the photographs, through the user interface, or by another identity classification system.
The outcome of grouping 110 by the grouping unit 1101 is an ocular image data set in which the eye images are linked by the eye owner's identification, which may be the same or different in each data set. All image data sets are recorded inside the memory 1100 to await further retrieval by the other operational units of the invention.
When the image data set is complete, it is put through a step of determining regions of interest (ROI) 120 by a region determination unit 1102 inside the memory 1100, which is arranged to acquire at least one image data set from the memory 1100, or from other devices connected to the processing unit 1000 via a network, and to link it to ROI data specifying the extent of image processing in the next steps.
Basically, the ROI determination step 120 is divided into automatic ROI determination 20 and human ROI determination 30; the procedures are shown in Figure 3 (an embodiment of the automatic ROI determination process) and Figure 4 (an embodiment of the human ROI determination process).
Automatic ROI determination 20 is operated by the region determination unit 1102 and at least consists of the user selecting at least one image to be checked and at least one required method 21, to serve as the main data for processing the other images. An image data check 22 then determines which of the selected image data have not yet been linked to an ROI. If all images selected by the user are already linked to an ROI, the region determination unit 1102 stops the automatic analysis 20 immediately.
If some selected image data have not been linked to region data, the region determination unit 1102 performs segmentation 23 of those unlinked images, basically by transformation from the RGB color space to HSV or another color space, depending on the region analysis method the user selected in step 21. From the segmentation output, the white of the eye is selected and image registration 24 is performed for the next processing step, basically by configuring the white-of-the-eye region with a morphological operator to eliminate error data from the previous steps and optimize the region size. The morphological operator used in this step is basically composed of at least one of erosion, dilation, opening, and closing.
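Since the patent does not fix the exact color bounds or kernel, the following is only a minimal illustrative sketch of this segmentation and refinement, assuming OpenCV and NumPy; every numeric value is an assumption, not a value from the patent.

```python
# Illustrative sketch of segmentation 23 and region configuration 24:
# RGB-to-HSV transformation, a brightness/saturation rule for candidate
# white-of-the-eye pixels, then morphological opening and closing.
import cv2
import numpy as np

def segment_sclera(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate white-of-the-eye pixels."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Sclera pixels are bright and weakly saturated (illustrative bounds).
    mask = cv2.inRange(hsv, (0, 0, 120), (180, 80, 255))
    # Opening removes speckle, closing fills gaps, mirroring the
    # erosion/dilation/opening/closing operations named in the text.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```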
Region boundary determination and refinement 25 then smooths the region boundary to optimize the region shape for subsequent steps. The white-of-the-eye regions are then linked to the image data, and recording 26 is performed inside the memory 1100 to await further processing in the ocular region synchronization step 40.
In contrast, human ROI determination 30, also operated by the region determination unit 1102, at least consists of the user selecting at least one image to be checked 31 as the main data for processing the other image data. Region data are then acquired and linked 32, basically through the user interface in the form of polygon data: initial regions may be selected from the memory 1100 and then repositioned and resized, or a region drawing may be acquired from the user and linked to the image data. Recording 33 of the linkage is then performed in the memory 1100 to await the next step, ocular region synchronization 40.
Ocular region synchronization 40 is also one of the steps of ROI determination 120 and is shown in Figure 5, which demonstrates an embodiment of the synchronization. It is operated by the region determination unit 1102 and is provided to search for positions that appear both on the main image data (on which the regions to be checked have been determined) and on the other image data within the same image data set, and to generate the regions to be checked on those other images by reference to the regions on the main image data.
The purpose is to determine corresponding regions across all ocular images in the same data set by reference to the image data already linked to an ROI, so that conjunctival hyperemia assessment in each image is calculated over the same regions, minimizing errors that a generic region determination method might introduce.
Ocular region synchronization 40 at least comprises a data acquisition step 41 that fetches an ROI linked to at least one image (the outcome of either automatic ROI determination 20 or human ROI determination 30) together with all image data within the related image data set. Image registration is then performed by extraction of key points 42, basically using any one of the local feature methods, including Harris corner, Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), FAST, Maximally Stable Extremal Regions (MSER), or Local Feature Transformer (LoFTR); descriptor extraction 43 is then performed on all key points, basically using at least one of SIFT, SURF, and BRIEF. The region determination unit 1102 selects the descriptor method to correspond with the method used in key point extraction 42.
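As an illustration of steps 42 and 43, the sketch below uses SIFT (one of the methods named above) through OpenCV, which detects key points and computes their descriptors in a single call; this is one possible choice, not the patent's mandated one.

```python
# Sketch of key point extraction 42 and descriptor extraction 43 with SIFT.
import cv2

def extract_features(gray_image):
    """Detect key points and compute SIFT descriptors for one image."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```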
A check 44 then determines whether any image data within the image data set remain unprocessed, here meaning descriptor extraction 43. If so, the region determination unit 1102 retrieves the unprocessed image data for key point extraction 42 and descriptor extraction 43 until descriptors have been extracted from all image data in the set.
Once all image data have undergone descriptor extraction 43, a key point matching step 45 begins, basically by measuring the Euclidean distance between descriptors across at least two images. The outcome is pairs of key points with similar features in the two compared images; that is, a determination of where the points appearing in the present image lie in the first image.
The coordinates of all matched points are then used to calculate a geometric transformation matrix 46, preferably using random sample consensus (RANSAC). Having obtained the geometric transformation matrix between the two images, the program transforms the coordinates of the present image's ROI into the coordinate system of the first image 47 and stores the outcome in the memory 1100 for later retrieval.
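Steps 45 to 47 could be sketched as follows, again assuming OpenCV: brute-force matching by Euclidean (L2) distance, RANSAC estimation of a homography as the geometric transformation matrix, and projection of the present image's ROI polygon into the first image's coordinate system. Function and parameter names are illustrative.

```python
# Sketch of matching 45, matrix calculation 46 and coordinate transform 47.
import cv2
import numpy as np

def transform_roi_to_first(kp_first, desc_first, kp_present, desc_present,
                           roi_polygon: np.ndarray) -> np.ndarray:
    # Euclidean-distance (L2) brute-force matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_present, desc_first)
    src = np.float32([kp_present[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_first[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched pairs while fitting the transformation matrix.
    matrix, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Map the present image's ROI vertices into the first image's coordinates.
    pts = roi_polygon.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, matrix).reshape(-1, 2)
```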
The check 44 is performed again by the region determination unit 1102 to find any remaining unprocessed images, here meaning the coordinate transformation of step 47. If there are any, the operation repeats from the data acquisition step 41 with one of the unprocessed images as the present image, while the first image is retained.
However, if the check 44 finds no unprocessed images remaining in the image data set of interest, the region determination unit 1102 proceeds to determine the commonly appearing regions 48 on all image data within the set, by reference to the outcomes of the geometric transformations of all the image data. Finally, transformation of region coordinates 49 maps the commonly appearing regions 48 from the first image into the coordinate system of each image within the set, using the geometric transformation matrices obtained in the matrix calculation step, and the resulting region data, linked to all images within the image data set, are recorded in the memory 1100.
Upon completion of the preparation and generation of the ROI, a step of assessing ocular red color strength 130 begins, performed by an inflammation assessment unit 1103, another operational unit in the memory 1100. Its purpose is to determine ocular inflammatory regions through the red color within the ROI of the image data set linked to that ROI.
The obtained outcome basically comprises at least one of: percentage of blood vessel pixels in the ROI, percentage of strong red pixels in the ROI, percentage of weak red pixels in the ROI, and percentage of normal pixels in the ROI.
The assessment of ocular red color strength 130 is shown in Figure 6, which demonstrates an embodiment of the conjunctival hyperemia assessment process. The user first selects a blood vessel segmentation method 131 from two selectable methods, namely a rule-based method 50 and a deep-learning-based method 60. The inflammation assessment unit 1103 then performs a check 132 to find image data within the image data set of interest that have not yet been processed.
If unprocessed image data are found, the inflammation assessment unit 1103 removes glare regions 133 from the ROI of those image data. Basically, pixels whose R, G, and B values in the RGB color space exceed 78.43% of the maximum color value are considered glare segments. Blood vessel segmentation 133 then begins, using the process the user selected in step 131.
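For 8-bit channels the stated glare rule works out to a threshold of about 200 out of 255 (0.7843 × 255 ≈ 200). A minimal sketch, reading the rule as all three channels exceeding the threshold, which is one plausible interpretation:

```python
# Sketch of glare removal: clear ROI pixels whose R, G and B values all
# exceed 78.43% of the maximum channel value.
import numpy as np

def remove_glare(bgr_image: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    threshold = 0.7843 * 255  # about 200 for 8-bit channels
    glare = np.all(bgr_image > threshold, axis=2)
    cleaned = roi_mask.copy()
    cleaned[glare] = 0
    return cleaned
```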
Figure 7 demonstrates blood vessel determination using the rule-based method. If the user selects the rule-based method 50, the inflammation assessment unit 1103 transforms the color space 51 of all image data, basically from RGB to YCbCr. A morphological top-hat transform 52 is then applied to the images in the luminance channel and the red-difference chroma channel according to pre-specified rules.
Blood vessel image extraction 53 is then performed using a pre-specified equation. The result may be passed through the morphological operator for refinement so that the blood vessels connect more completely; the extraction output 53 is then converted into a binary image 54, basically using an adaptive threshold. The result is treated as the initial blood vessel regions, after which small region groups are removed 55. The final output of this step is the blood vessel pixel segments, which are recorded in the memory 1100 for later retrieval.
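The patent's pre-specified equation and rules are not reproduced here, so the following only sketches the shape of the rule-based pipeline with OpenCV: YCbCr conversion, morphological top-hat/black-hat enhancement of the luminance and red-difference channels, adaptive thresholding into a binary image, and removal of small region groups. Kernel sizes, channel weights, and the minimum area are assumptions.

```python
# Illustrative sketch of steps 51-55 of the rule-based vessel segmentation.
import cv2
import numpy as np

def segment_vessels_rule_based(bgr_image: np.ndarray, min_area: int = 30) -> np.ndarray:
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, _ = cv2.split(ycrcb)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # Black-hat on luminance highlights dark vessel lines; top-hat on the
    # red-difference chroma channel highlights locally red structures.
    y_hat = cv2.morphologyEx(y, cv2.MORPH_BLACKHAT, kernel)
    cr_hat = cv2.morphologyEx(cr, cv2.MORPH_TOPHAT, kernel)
    vessels = cv2.addWeighted(y_hat, 0.5, cr_hat, 0.5, 0)  # illustrative mix
    # Adaptive threshold stands in for the pre-specified binarization 54.
    binary = cv2.adaptiveThreshold(vessels, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, -5)
    # Removal of small region group 55 via connected components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    mask = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask
```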
Figure 8 demonstrates blood vessel determination using the deep-learning-based method. If the user selects the deep-learning-based method 60, the inflammation assessment unit 1103 loads the white-of-the-eye segmentation model 61 from the memory 1100, supplies previously processed ocular image data to the model, and performs image allotment 62, allowing the model to select blood vessel segments from the image data readily. Processing with the model 63 then yields the same kind of outcome as the rule-based method 50, namely blood vessel pixel segments, which are recorded in the memory 1100 for later retrieval.
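The patent does not disclose the network architecture, so the sketch below only illustrates the inference shape of this path: a stored segmentation model (the file name vessel_seg.pt is hypothetical) maps an eye image to blood vessel pixel segments, the same form of output as the rule-based method.

```python
# Hypothetical sketch of the deep-learning path 60-63 using a TorchScript model.
import numpy as np
import torch

def segment_vessels_deep(rgb_image: np.ndarray,
                         model_path: str = "vessel_seg.pt") -> np.ndarray:
    model = torch.jit.load(model_path).eval()
    # HWC uint8 image -> NCHW float tensor in [0, 1].
    x = torch.from_numpy(rgb_image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)  # assumed output shape: (1, 1, H, W)
    # Threshold the per-pixel probabilities into a binary vessel mask.
    return (logits.sigmoid()[0, 0].numpy() > 0.5).astype(np.uint8) * 255
```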
The output of blood vessel segmentation 133 may be passed through the morphological operator to improve quality and eliminate errors from the previous processing step. The morphological operator is basically composed of at least one of erosion, dilation, opening, and closing.
The result is then put through an image transformation 134, basically to the HSV color space, where hue channel values segment pixels into strong red pixels, weak red pixels, and normal pixels according to user-pre-specified rules, which can be amended freely.
Upon completion, the inflammation assessment unit 1103 performs counting 135 of pixels under pre-specified conditions, basically counting all pixels in the ROI, blood vessel pixels, strong red pixels, weak red pixels, and normal pixels, by reference to the earlier outputs of the region determination unit 1102 and the inflammation assessment unit 1103. The outcome is basically obtained as the percentage of each pixel type. The inflammation assessment unit 1103 repeats the entire process in a loop until all image data within the data set have been processed.
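Steps 134 and 135 can be sketched together, assuming OpenCV: HSV conversion, classification of ROI pixels into strong red, weak red, and normal by hue plus an assumed saturation split, then counting each class as a percentage of the ROI. The hue bounds and saturation cutoff stand in for the user-pre-specified rules.

```python
# Sketch of image transformation 134 and pixel counting 135.
import cv2
import numpy as np

def assess_redness(bgr_image, roi_mask, vessel_mask):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]
    roi = roi_mask > 0
    # Red hue wraps around 0 on OpenCV's 0-179 hue scale.
    reddish = ((hue < 10) | (hue > 170)) & roi
    strong_red = reddish & (sat >= 120)  # illustrative saturation split
    weak_red = reddish & (sat < 120)
    total = max(int(roi.sum()), 1)
    return {
        "vessel_pct": 100.0 * np.count_nonzero((vessel_mask > 0) & roi) / total,
        "strong_red_pct": 100.0 * int(strong_red.sum()) / total,
        "weak_red_pct": 100.0 * int(weak_red.sum()) / total,
        "normal_pct": 100.0 * np.count_nonzero(roi & ~reddish) / total,
    }
```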
The outcome takes the form of counts linked to each image, or the average of the counts over all image data within the image data set.
Any modifications of this invention will be clearly understood and practicable by experts in this field, and fall within the scope and intent of this invention as presented by the attached claims.
Best Mode of the Invention
As described in Detailed Description of the Invention

Claims

1. Process of automated ocular inflammatory region assessment, performed by a system consisting of a processing unit (1000) connected through a network with an image acquisition device (1200), a user interface device and a memory (1100), in which memory the process of automated ocular inflammatory region assessment is recorded, the process consisting of:
A. Image data acquisition (100) of at least one image;
B. Grouping (110) that links at least two images to form an image data set, where the outcome is obtained in the form of an ocular image data set, of which each ocular image is linked by the eye owner identification that may be the same or different in each data set; additionally, all the said image data sets are recorded in the memory (1100);
C. Determination of regions of interest (ROI) (120), which is the linking of the photograph data set to data on ROI; and
D. Conjunctival hyperemia assessment (130), which is ocular inflammatory region determination through the red color in the ROI
2. The process of automated ocular inflammatory region assessment according to claim 1, wherein the image data acquisition (100) preferably acquires images in the RGB color space
3. The process of automated ocular inflammatory region assessment according to claim 1, wherein image data are in the form of near ocular image data that can contain different or same image sizes, file extensions, resolutions, ratios, white balance or lighting conditions
4. The process of automated ocular inflammatory region assessment according to claim 1, wherein image data preferably are ocular image data without dyeing for checking
5. The process of automated ocular inflammatory region assessment according to claim 1, wherein image data preferably contain different or same ocular deviation, ocular sizes, magnifying powers of the image acquisition device, or incidences
6. The process of automated ocular inflammatory region assessment according to claim 1, wherein the memory (1100) is additionally recorded in with image data from the user through the user interface device directly or operates in conjunction with at least one image acquisition device (1200)
7. The process of automated ocular inflammatory region assessment according to claim 1, wherein the image acquisition device (1200) is provided as at least one distinctive image acquisition device (1200), with different or same internal configurations or taking photographs from different or same environments
8. The process of automated ocular inflammatory region assessment according to claim 1, wherein the grouping (110) may preferably be operated by the user who has been aware of the source of photographs through the user interface device or by other person identity classification systems
9. The process of automated ocular inflammatory region assessment according to claim 1, wherein the determination of ROI (120) is divided into at least one form of automatic ROI determination (20) and human ROI determination (30)
10. The process of automated ocular inflammatory region assessment according to claim 9, wherein the automatic ROI determination (20) consists of:
A. Selection of at least one image required to be checked and one required method (21) by the user;
B. Image data check (22) to see which image data selected by the user have not been linked to ROI. In the event that all image data selected by the user have been linked to ROI, automatic assessment (20) is stopped immediately;
C. Pixel segmentation (23) of the said image data that have not been linked;
D. Selection of the white of the eye and image registration (24) for use in the next processing step;
E. Region boundary determination and refinement (25) to smooth the said region boundary;
F. Linking of the said regions of the white of the eye to image data and recording (26) in the memory (1100); and
G. Ocular region synchronization (40), which is provided for the search of positions commonly appearing on main image data of which regions required to be checked have been determined and on other image data within the same image data set, and for the creation of regions required to be checked onto the said other image data by referring to the regions required to be checked on the main image data
11. The process of automated ocular inflammatory region assessment according to claim 10, wherein the pixel segmentation (23) preferably is transformation of images from the RGB color space to the HSV color space
12. The process of automated ocular inflammatory region assessment according to claim 10, wherein D, the selection of the white of the eye and image registration (24), preferably is region configuration of the white of the eye using a morphological operator
13. The process of automated ocular inflammatory region assessment according to claim 12, wherein the morphological operator preferably is selectable from at least one operation of erosion, dilation, opening and closing
14. The process of automated ocular inflammatory region assessment according to claim 9, wherein the human ROI determination (30) consists of:
A. Selection of at least one image required to be checked (31) from the user;
B. Region data acquisition and linking (32) to image data;
C. Recording (33) of the said linking in the memory (1100); and
D. Ocular region synchronization (40), which is provided for the search of positions commonly appearing on main image data of which regions required to be checked have been determined and on other image data within the same image data set, and for the creation of regions required to be checked onto the said other image data by referring to the regions required to be checked on the main image data
15. The process of automated ocular inflammatory region assessment according to claim 14, wherein the region data acquisition (32) is in the form of polygon data
16. The process of automated ocular inflammatory region assessment according to claim 14, wherein the region data acquisition (32) preferably is at least one channel of either selection of initial region data from inside of the memory (1100) or data acquisition through the user interface device
17. The process of automated ocular inflammatory region assessment according to claim 10 or 14, wherein the ocular region synchronization (40) at least consists of:
A. Data acquisition (41) that acquires data on ROI linked to at least one image and all image data within image data set related to the said linked image data;
B. Image registration (42);
C. Descriptor extraction (43) for all key points using a method corresponding with that used in the step of extraction of key points (42);
D. Check (44) to see whether there are any unprocessed image data within image data set. Should there be, the said unprocessed image data are retrieved for the extraction of key points (42) and descriptor extraction (43) until the descriptor has already been extracted from all image data within the image data set;
E. Matching of key points (45), provided as the measurement of descriptor differences between at least two images;
F. Calculation for geometric transformation matrix (46);
G. Transformation of coordinates in ROI of the present image into the coordinate system of the first image (47), and storage of the outcome obtained from the said step in the memory (1100);
H. Check (44) to see whether there are any unprocessed image data within image data set. Should there be, the operation from the step of data acquisition (41) is repeated with modification of present image to the said unprocessed image data while retaining the first image;
I. Determination of commonly appearing regions (48) of all image data within image data set by referring to the outcome obtained from the step of transformation of coordinates (47) of all image data; and
J. Transformation of coordinates of regions (49) that are the outcome of the step of determination of commonly appearing regions (48) of the first image to the coordinate system of each image within image data set using the geometric transformation matrix, which is the outcome of the step of calculation for geometric transformation matrix (46); and outcome recording in the memory (1100)
18. The process of automated ocular inflammatory region assessment according to claim 17, wherein the data acquisition (41) preferably acquires the outcome of automatic ROI determination (20) or human ROI determination (30)
19. The process of automated ocular inflammatory region assessment according to claim 17, wherein the extraction of key points (42) preferably selects any one of the local feature methods
20. The process of automated ocular inflammatory region assessment according to claim 19, wherein the local feature methods basically are selectable from Harris Corner, Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), FAST, Maximally Stable Extremal Regions (MSER) or Local Feature Transformer (LoFTR)
21. The process of automated ocular inflammatory region assessment according to claim 17, wherein the descriptor extraction (43) is basically selected as at least one method of Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and BRIEF
22. The process of automated ocular inflammatory region assessment according to claim 17, wherein the matching of key points (45) is preferably performed by measuring Euclidean distance
23. The process of automated ocular inflammatory region assessment according to claim 17, wherein the calculation for geometric transformation matrix (46) preferably uses random sample consensus (RANSAC)
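The following condenses claims 17 to 23 into one hedged sketch for two images of the same eye: SIFT key points and descriptors, Euclidean-distance matching, a RANSAC-estimated geometric transformation matrix, and transfer of a polygon ROI from the present image into the coordinate system of the first image. The 0.75 ratio test and the 5.0-pixel reprojection threshold are assumptions, not claimed values.

```python
# Sketch of claims 17-23. SIFT (claims 20-21), L2/Euclidean matching (claim 22)
# and RANSAC (claim 23) are the claimed options chosen here; the ratio test and
# the reprojection threshold are assumptions.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def transfer_roi(first_img: np.ndarray, present_img: np.ndarray,
                 roi_polygon: np.ndarray) -> np.ndarray:
    """Map an ROI polygon (N x 2) from present_img into first_img coordinates."""
    kp1, des1 = sift.detectAndCompute(first_img, None)    # key points + descriptors
    kp2, des2 = sift.detectAndCompute(present_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)                  # Euclidean distance
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # transformation matrix
    roi = roi_polygon.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(roi, H).reshape(-1, 2)
```

Repeating this for every image in the set and intersecting the transferred polygons in the first image's coordinate system would yield the commonly appearing regions of step I.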
24. The process of automated ocular inflammatory region assessment according to claim 1, wherein the conjunctive hyperemia assessment (130) preferably yields the outcome that consists of at least any one of percentage of blood vessel pixels in ROI, percentage of strong red pixels in ROI, percentage of weak red pixels in ROI and percentage of normal pixels in ROI
25. The process of automated ocular inflammatory region assessment according to claim 1, wherein the conjunctive hyperemia assessment (130) is composed of:
A. Selection of a blood vessel segmentation method (131);
B. Check (132) to see whether any image data within the image data set of interest have not been processed; only if unprocessed image data are found is the next step performed;
C. Removal of glare regions (132) in ROI of image data;
D. Blood vessel segmentation (133) according to the process selected by the user in the step of selection of a blood vessel segmentation method (131);
E. Image transformation (134) so that images are in the HSV color space; and
F. Counting (135) of pixels under pre-specified conditions
26. The process of automated ocular inflammatory region assessment according to claim 25, wherein the selection of a blood vessel segmentation method (131) consists of at least one option of rule-based method (50) and deep-learning-based method (60)
27. The process of automated ocular inflammatory region assessment according to claim 25, wherein the removal of glare regions (132) is processed by determining pixels with R, G and B values in the RGB color space over 78.43% of maximum values as glare region segments
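Claim 27 fixes a concrete number: 78.43% of the 8-bit maximum is 0.7843 × 255 ≈ 200, so the rule reduces to the following mask.

```python
# Glare-region rule of claim 27: a pixel is glare when R, G and B all exceed
# 78.43% of the maximum value (0.7843 * 255 = 200 for 8-bit images).
import numpy as np

def glare_mask(image_rgb: np.ndarray) -> np.ndarray:
    return np.all(image_rgb > 0.7843 * 255, axis=-1)
```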
28. The process of automated ocular inflammatory region assessment according to claim 26, wherein the rule-based method (50) is composed of:
A. Color space transformation (51) of all image data;
B. Morphological top-hat transform (52) with images in the luminance channel and red difference chroma channel according to pre-specified rules;
C. Blood vessel image extraction (53) using a pre-specified equation;
D. Generation of binary form (54); and
E. Removal of small region groups (55) and outcome recording in the memory (1100)
29. The process of automated ocular inflammatory region assessment according to claim 28, wherein the color space transformation (51) preferably transforms the RGB color space to the YCbCr color space
30. The process of automated ocular inflammatory region assessment according to claim 28, wherein the blood vessel image extraction (53) further operates by putting the obtained outcome through the morphological operator for the refinement so that blood vessels are connected more completely
31. The process of automated ocular inflammatory region assessment according to claim 28, wherein the generation of binary form (54) is preferably performed using the adaptive threshold method
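Claims 28 to 31 together describe the rule-based pipeline; a hedged sketch follows. The black-hat variant on the luminance channel, the 50/50 combining equation, the kernel and block sizes, and the minimum region area are all assumptions; only the YCbCr transformation, the top-hat family, the adaptive threshold and the small-region removal come from the claims.

```python
# Sketch of the rule-based method (50) of claims 28-31. Vessels are assumed dark
# in luminance (hence black-hat on Y) and reddish in chroma (top-hat on Cr).
import cv2
import numpy as np

def rule_based_vessels(image_bgr: np.ndarray, min_area: int = 30) -> np.ndarray:
    y, cr, _ = cv2.split(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    y_hat = cv2.morphologyEx(y, cv2.MORPH_BLACKHAT, kernel)   # dark, thin structures
    cr_hat = cv2.morphologyEx(cr, cv2.MORPH_TOPHAT, kernel)   # red, thin structures
    vessels = cv2.addWeighted(y_hat, 0.5, cr_hat, 0.5, 0)     # assumed combination
    binary = cv2.adaptiveThreshold(vessels, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, -2) # claim 31
    # Removal of small region groups (55): keep components above an area limit.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    out = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out
```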
32. The process of automated ocular inflammatory region assessment according to claim 26, wherein the deep-learning-based method (60) is composed of:
A. Segmentation of the white of the eye (61) from the memory (1100) for use in processing, in the form of the addition of ocular image data that have previously been processed to the said learning method;
B. Image segmentation (62); and
C. Processing in conjunction with the method (63)
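The claims leave the network of the deep-learning-based method (60) unspecified. Purely as a placeholder, a minimal PyTorch inference sketch is given below; the architecture and every name in it are hypothetical and stand in for whatever trained segmentation model the system holds in the memory (1100).

```python
# Hypothetical stand-in for the deep-learning-based method (60) of claim 32.
# The real model, its training data and its architecture are not specified
# by the claims; this tiny network is illustrative only.
import torch
import torch.nn as nn

class TinyVesselNet(nn.Module):
    """Placeholder fully convolutional net emitting per-pixel vessel probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def segment_vessels(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) float tensor in [0, 1]; returns a binary vessel mask."""
    model.eval()
    with torch.no_grad():
        return (model(image) > 0.5).float()
```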
33. The process of automated ocular inflammatory region assessment according to claim 25, wherein the blood vessel segmentation (133) further operates by putting the outcome through a morphological operator
34. The process of automated ocular inflammatory region assessment according to claim 33, wherein the morphological operator is composed of at least one operation of erosion, dilation, opening and closing
35. The process of automated ocular inflammatory region assessment according to claim 25, wherein the image transformation (134) preferably uses the hue channel values for segmentation into strong red pixels, weak red pixels and normal pixels
36. The process of automated ocular inflammatory region assessment according to claim 25, wherein the counting (135) is composed of total pixel count within ROI, blood vessel pixel count, strong red pixel count, weak red pixel count and normal pixel count
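Claims 35 and 36, together with the percentages of claim 24, amount to classifying ROI pixels by hue and counting them. A sketch follows; the hue cut-offs are assumptions (OpenCV's hue channel spans 0-179), and the categories are treated as disjoint, which the claims do not state.

```python
# Sketch of claims 35-36: hue-channel classification into strong red, weak red
# and normal pixels inside the ROI, then counting, yielding the percentages of
# claim 24. Hue cut-offs are illustrative assumptions.
import cv2
import numpy as np

def redness_percentages(image_bgr: np.ndarray, roi_mask: np.ndarray,
                        vessel_mask: np.ndarray) -> dict:
    """Masks are boolean H x W arrays; roi_mask must contain at least one pixel."""
    hue = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    non_vessel = roi_mask & ~vessel_mask
    strong = non_vessel & ((hue < 5) | (hue > 175))            # assumed strong-red band
    weak = non_vessel & ~strong & ((hue < 12) | (hue > 168))   # assumed weak-red band
    normal = non_vessel & ~strong & ~weak
    total = int(roi_mask.sum())
    return {
        "vessel_pct": 100.0 * int((roi_mask & vessel_mask).sum()) / total,
        "strong_red_pct": 100.0 * int(strong.sum()) / total,
        "weak_red_pct": 100.0 * int(weak.sum()) / total,
        "normal_pct": 100.0 * int(normal.sum()) / total,
    }
```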
37. System of automated ocular inflammatory region assessment, consisting of a processing unit (1000), which is connected through a network with an image acquisition device (1200) and memory (1100), in which at least the following are recorded: a grouping unit (1101), which functions in linking each image into an image data set using the eye owner identification; a region determination unit (1102), which is provided to acquire the photograph data set and link it to ROI data; and an inflammation assessment unit (1103), which is provided to determine ocular inflammatory regions through the red color in the ROI of the said photograph data set linked to the ROI
38. The system of automated ocular inflammatory region assessment according to claim 37, wherein the operation of the region determination unit (1102) is divided into automatic ROI determination (20) and human ROI determination (30)
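Finally, a structural sketch of the three units recorded in the memory (1100) per claim 37; the class and method names are hypothetical and mirror only the claimed responsibilities.

```python
# Hypothetical skeleton of the claim-37 units; names are illustrative only.
from collections import defaultdict

class GroupingUnit:                      # (1101)
    """Links each image into an image data set keyed by eye owner identification."""
    def __init__(self):
        self.sets = defaultdict(list)

    def add(self, owner_id: str, image_path: str) -> None:
        self.sets[owner_id].append(image_path)

class RegionDeterminationUnit:           # (1102)
    """Links a photograph data set to its ROI data, e.g. polygon vertices."""
    def __init__(self):
        self.roi_by_owner = {}

    def link(self, owner_id: str, roi_polygon) -> None:
        self.roi_by_owner[owner_id] = roi_polygon

class InflammationAssessmentUnit:        # (1103)
    """Determines ocular inflammatory regions through the red color in the ROI."""
    def assess(self, image_bgr, roi_mask, vessel_mask) -> dict:
        # Would delegate to a redness-counting routine such as the sketch
        # after claim 36 above; left abstract here.
        raise NotImplementedError
```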
PCT/TH2022/000038 2022-09-30 2022-09-30 System and process of automated ocular inflammatory region assessment WO2024072330A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/TH2022/000038 WO2024072330A1 (en) 2022-09-30 2022-09-30 System and process of automated ocular inflammatory region assessment


Publications (1)

Publication Number Publication Date
WO2024072330A1 true WO2024072330A1 (en) 2024-04-04

Family

ID=90478830

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TH2022/000038 WO2024072330A1 (en) 2022-09-30 2022-09-30 System and process of automated ocular inflammatory region assessment

Country Status (1)

Country Link
WO (1) WO2024072330A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013126568A1 (en) * 2012-02-21 2013-08-29 Massachusetts Eye & Ear Infirmary Calculating conjunctival redness
WO2021071868A1 (en) * 2019-10-10 2021-04-15 Tufts Medical Center, Inc. Systems and methods for determining tissue inflammation levels
WO2021097449A1 (en) * 2019-11-17 2021-05-20 Berkeley Lights, Inc. Systems and methods for analyses of biological samples



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22961138

Country of ref document: EP

Kind code of ref document: A1