
Diagnosis of microbial keratitis using smartphone-captured images; a deep-learning model

Abstract

Background

Microbial keratitis (MK) poses a substantial threat to vision and is the leading cause of corneal blindness. The outcome of MK is heavily reliant on immediate treatment following an accurate diagnosis. The current diagnostics are often hindered by the difficulties faced in low and middle-income countries where there may be a lack of access to ophthalmic units with clinical experts and standardized investigating equipment. Hence, it is crucial to develop new and expeditious diagnostic approaches. This study explores the application of deep learning (DL) in diagnosing and differentiating subtypes of MK using smartphone-captured images.

Materials and methods

The dataset comprised 889 cases of bacterial keratitis (BK), fungal keratitis (FK), and acanthamoeba keratitis (AK) collected from 2020 to 2023. A convolutional neural network-based model was developed and trained for classification.

Results

The study demonstrates the model’s overall classification accuracy of 83.8%, with specific accuracies for AK, BK, and FK at 81.2%, 82.3%, and 86.6%, respectively, with an AUC of 0.92 for the ROC curves.

Conclusion

The model exhibits practicality, especially with the ease of image acquisition using smartphones, making it applicable in diverse settings.

Background

Microbial keratitis (MK) typically manifests with a defect in the corneal epithelium that overlays a corneal infiltrate with a response in the anterior chamber. This is accompanied by severe and advancing pain, which may lead to the loss of vision and/or necessitate surgical intervention [1]. MK is the predominant etiology leading to corneal blindness. MK can arise due to a wide array of microbial organisms, including bacteria, fungi, viruses, and parasites. Formerly, it was regarded as a ‘silent epidemic’ in low-income and middle-income nations, exhibiting an annual incidence of 113–799 cases per 100,000 individuals, in comparison to high-income countries with an incidence rate ranging from 2.5 to 4.3 cases per 100,000 individuals annually. MK typically affects individuals during their productive years, thereby accentuating the financial burden experienced by both affected individuals and the nations they reside in [2,3,4]. In 2010, MK prompted approximately 1 million visits to hospitals or clinical practices in the USA. The cost of treating bacterial keratitis is approximately US$933 per patient [4]. In the UK, hospital admissions for MK have a median cost of £2855, with higher costs for longer admissions and lower socioeconomic status [5]. In Taiwan, outpatient management costs around US$72, while inpatient treatment costs US$1027 [6]. In Australia, severe disease in contact lens wearers costs AU$5500 with an 18-day duration, compared to mild disease costing AU$800 with symptoms lasting 7 days [7]. Disease impact in low-income countries is likely more severe due to limited healthcare access.

Notably, 23–62% of cases experience a decrease in best corrected visual acuity (BCVA) of two or more lines after MK due to corneal scarring, topographical changes, or irregularities due to corneal thinning [8,9,10,11]. It is of utmost importance to acknowledge that the outcome of MK is heavily reliant on immediate treatment following a prompt and accurate diagnosis [12]. The present practice of diagnosing MK by employing slit lamp photography of infectious corneal disease among cornea specialists has been found to yield only 66.0–75.9% accuracy in distinguishing between bacterial and fungal keratitis [13]. The gold standard method for definitively diagnosing MK is corneal scraping and biopsy. Other routine diagnostics include polymerase chain reaction (PCR) and in vivo confocal microscopy. However, these approaches are often hindered by the difficulties faced in low- and lower-middle-income countries, where there may be a lack of access to ophthalmic units with clinical experts and standardized investigating equipment, leading to a dependence on empirical treatment and delayed definitive treatment. Additionally, even when microbiological workups are available, the procedure may take several days before yielding any results [14]. These challenges likely contribute to poorer clinical outcomes and a higher risk of irreversible complications. On the other hand, poorly differentiated clinical features may also contribute to misdiagnosis, which can lead to a disastrous chain of inappropriate treatment, thereby increasing the risk of unidentified clinically essential lesions. This underscores the need for new, more precise, and expeditious diagnostic approaches that can be made easily accessible in rural areas where other diagnostic options may not be available. Timely diagnosis could then be achieved and appropriate management started sooner, ultimately leading to a better prognosis.

Artificial intelligence (AI) has garnered the interest of medical fields that rely on visual analysis for diagnosis. Deep learning (DL), a branch of AI, has shown potential in improving healthcare effectiveness by aiding automated clinical diagnosis through high-dimensional analysis. Additionally, DL’s availability may reduce the need for costly diagnostic equipment and technicians, enabling better care provision. Moreover, DL could help diagnose eye conditions in areas lacking resources, potentially preventing severe vision loss. The recent COVID-19 pandemic has also greatly impacted ophthalmic patients and services, emphasizing the importance of digital health [15, 16]. In ophthalmology, DL has demonstrated diagnostic accuracy comparable to clinical experts in identifying diseases such as macular degeneration, glaucoma, and diabetic retinopathy [17,18,19]. Furthermore, many research groups worldwide, including our team, have assessed the use of DL models in diagnosing various types of MK using corneal photos or confocal images with varying success [20,21,22,23,24,25]. A systematic review showed that DL algorithms have great potential in accurately diagnosing and categorizing infectious keratitis (IK), with performance similar to, or possibly even surpassing, that of expert corneal professionals [26]. In line with our recent efforts [20, 23, 24], we propose a DL-based model in this study that utilizes photos captured by smartphone cameras to differentiate between important subtypes of MK.

Materials and methods

Study design

Participants from Farabi Eye Hospital (Iran), Virginia Eye Consultants (USA), and Nor Hachn Polyclinic (Armenia) were recruited and enrolled in the present study. The inclusion criteria were individuals with a confirmed diagnosis of bacterial keratitis (BK), fungal keratitis (FK), or Acanthamoeba keratitis (AK) between the years 2020 and 2023. The diagnosis was confirmed through microbiological culture, in vivo confocal microscopy (IVCM), or PCR. A variety of diagnostic modalities was accepted in order to include as many confirmed cases as possible and thereby enhance the sample size of the study. This approach mirrors routine clinical practice, in which the diagnosis and proper treatment are based on a single- or multi-modality approach. Patients with mixed or other infections, culture-negative cases, individuals with a history of corneal graft procedures such as penetrating keratoplasty, corneal patch grafts, and amniotic membrane grafts, as well as other notable ocular surface conditions that could potentially interfere with the assessments and analyses, were excluded from the study population. Moreover, images exhibiting poor quality, extreme gazes, or incompletely opened eyelids were also excluded, ensuring that only the most valid and representative images were included in the analysis.

The images required for the study were captured using handheld smartphones (iPhone XS and iPhone 13 Pro). To ensure the comfort and safety of the participants, photography was carried out following the administration of proper anesthesia when necessary, which is standard practice in the field. The participants’ eyelids were deliberately kept open during photography, allowing for a comprehensive and accurate examination of the cornea. Photography was conducted under appropriate room lighting, given the importance of optimal lighting for capturing clear and informative images. A slit lamp adaptor for smartphones was used to help with consistency and ease of imaging. To capture the photos, a distance of 1 to 2 centimeters was maintained between the camera lens and the eyepiece. The flash was deactivated, and the camera was configured to capture images at the highest available resolution. The area of interest was then brought into focus and captured. Deidentified smartphone photos were saved as portable network graphics (size: 12–18 MB). To maintain consistency and facilitate the subsequent analysis, all images were cropped to exclude areas beyond the cornea. This cropping removed irrelevant or distracting elements from the images, enhancing clarity and focusing attention on the corneal region, which is of utmost importance in the context of this study (Fig. 1). Furthermore, to ensure comparability and uniformity, all images were adjusted to a uniform size. This standardization helped eliminate potential bias or confounding arising from variations in image size, thus ensuring the accuracy and reliability of the subsequent analysis.
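A minimal sketch of this crop-and-standardize step, using Pillow; the crop box below is illustrative (the authors cropped manually), and the 300 × 300 target matches the training resolution reported in the model design section:

```python
from PIL import Image

TARGET_SIZE = (300, 300)  # uniform size fed to the network (see Model design)

def preprocess(path_or_file, crop_box):
    """Crop a smartphone photo to the corneal region and resize it.

    crop_box is a manually chosen (left, upper, right, lower) pixel box;
    the coordinates are per-image and illustrative, not the authors' values.
    """
    img = Image.open(path_or_file).convert("RGB")
    img = img.crop(crop_box)       # exclude areas beyond the cornea
    img = img.resize(TARGET_SIZE)  # standardize to a uniform size
    return img
```

In practice the crop box would be chosen per image so that only the cornea remains, after which every image shares the same dimensions for the network.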

Fig. 1

A sample of a captured and analyzed image

As part of the initial data collection phase, a substantial number of external photographs of the eye were captured and reviewed by the research team, ensuring that a comprehensive and representative sample was obtained. In adherence to the study protocol, images that did not meet the predefined criteria were excluded from the analysis. After this exclusion process, a total of 889 high-quality samples, collected from 98 patients, were included in the subsequent analysis. The distribution of these samples across the diagnostic groups was as follows: the AK group contributed 78 images (approximately 8.77% of the total); the BK group contributed the largest proportion, with 479 images (approximately 53.88%); and the FK group comprised 332 photographs (approximately 37.34%). This distribution reflects the diversity of the sample population and ensures that the subsequent analysis encompasses a wide range of cases.

This study adhered to the principles of the Declaration of Helsinki. Ethical clearance was obtained from the Ethics Committee of Farabi Eye Hospital. The requirement for written informed consent was waived by the Ethics Committee. All methods were conducted in compliance with pertinent guidelines and regulations.

Model design and network details

Owing to the computer’s limited processing power, the original images were downscaled to a resolution of 300 × 300. Debugging and running the code were done on a laptop with an AMD Ryzen 9 6000-series CPU, 32 GB of DDR5 RAM, and an Nvidia GeForce RTX 3070 Ti with 8 GB of VRAM. Python 3.10 was used for developing all code, with TensorFlow 2.12 used for implementing the deep learning components on Windows Subsystem for Linux (WSL) version 2.0.

To achieve our study’s goals, we created a training framework using convolutional neural networks (CNNs), which excel at image recognition tasks. The model was developed to differentiate between the MK subtypes (AK, BK, FK) using a dataset of 602 smartphone images. The data were divided into 72% for training, 8% for validation, and 20% for evaluation, and we utilized K-fold cross-validation with K equal to 5. The CNN-based network used in the study is shown in Fig. 2.
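The 72%/8%/20% split combined with five-fold cross-validation can be sketched as follows; the authors' exact splitting code is not published, so the stratification and seeding below are assumptions:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def kfold_splits(labels, n_splits=5, val_frac=0.08, seed=42):
    """Yield (train, val, eval) index arrays per fold.

    Each fold holds out 1/n_splits of the data for evaluation (20%); the
    remainder is split into validation (8% of the full set) and training
    (72%), mirroring the split described in the text. Stratification by
    class label is an assumption, not stated by the authors.
    """
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    rng = np.random.default_rng(seed)
    for rest_idx, eval_idx in skf.split(np.zeros(len(labels)), labels):
        rest_idx = rng.permutation(rest_idx)      # shuffle before carving out val
        n_val = round(val_frac * len(labels))     # 8% of the full dataset
        yield rest_idx[n_val:], rest_idx[:n_val], eval_idx
```

Each image then appears in the evaluation set of exactly one fold, and metrics are averaged over the five folds.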

Fig. 2

The structure of the designed convolutional neural network (CNN), where the input is an image that may or may not show infectious keratitis and the output is a value between zero and one for each keratitis subtype

The simulation was conducted in Python, with TensorFlow used for the deep learning code. All layers except the last used ReLU as the activation function, while softmax was used for the last layer because this is a three-class task. The learning algorithm was Adam with a learning rate of 0.001, and cross-entropy served as the loss function. Training used a batch size of 50 and 200 epochs. Class weighting was not necessary, as the imbalanced classes did not negatively affect the final results.
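The training configuration above can be sketched in TensorFlow/Keras as follows. The convolutional stack itself is given only in Fig. 2, so the layers below are illustrative placeholders; the activations, optimizer, loss, and learning rate follow the text:

```python
import tensorflow as tf

NUM_CLASSES = 3  # AK, BK, FK

def build_model(input_shape=(300, 300, 3)):
    """Sketch of the training setup described in the text; the conv blocks
    here are placeholders, not the authors' exact architecture."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),   # ReLU on hidden layers
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # three-class output
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Adam, lr 0.001
        loss="categorical_crossentropy",                          # cross-entropy
        metrics=["accuracy"],
    )
    return model

# Training, per the text: model.fit(x_train, y_train, batch_size=50, epochs=200,
#                                   validation_data=(x_val, y_val))
```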

Analysis

Sensitivity, specificity, accuracy, precision, F1 score, and R2 score were calculated using Eqs. (1) through (6).

$$\text{Sensitivity (Recall)} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}$$
(1)
$$\text{Specificity} = \frac{\text{True Negatives}}{\text{True Negatives} + \text{False Positives}}$$
(2)
$$\text{Accuracy} = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total Population}}$$
(3)
$$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$$
(4)
$$\text{F1 Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
(5)
$$R^2\ \text{Score} = 1 - \frac{\text{Sum of Squares of Residuals}}{\text{Total Sum of Squares}}$$
(6)
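Given a per-fold confusion matrix, Eqs. (1) through (5) can be computed for each class in a one-vs-rest fashion; a sketch (not the authors' code):

```python
import numpy as np

def per_class_metrics(cm):
    """Compute sensitivity, specificity, accuracy, precision, and F1 per
    class from a confusion matrix `cm` (rows = true class, columns =
    predicted class), treating each class one-vs-rest."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    out = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k].sum() - tp          # class-k samples predicted elsewhere
        fp = cm[:, k].sum() - tp       # other samples predicted as class k
        tn = total - tp - fn - fp
        sens = tp / (tp + fn)          # Eq. (1)
        spec = tn / (tn + fp)          # Eq. (2)
        acc = (tp + tn) / total        # Eq. (3)
        prec = tp / (tp + fp)          # Eq. (4)
        f1 = 2 * prec * sens / (prec + sens)  # Eq. (5)
        out[k] = dict(sensitivity=sens, specificity=spec,
                      accuracy=acc, precision=prec, f1=f1)
    return out
```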

Results

Table 1 shows the values of the different calculated parameters of our model. Fig. 3 depicts the confusion matrix, ROC curve, and precision-recall (PR) curve of the model for distinguishing each MK subtype. The discrimination accuracy is 0.81 for AK, 0.82 for BK, and 0.87 for FK. The areas under the ROC curves (AUC) for AK, BK, and FK are 0.99, 0.89, and 0.88, respectively. The AUC of the PR curve for AK is 0.90, while for BK and FK it is 0.86 and 0.83, respectively.

Table 1 Values of sensitivity, specificity, accuracy, F1 score, and R2 score regarding the model in the evaluation phase
Fig. 3

Results of our model. (A) Confusion matrix of the model. (B) ROC curve of the Acanthamoeba keratitis group. (C) ROC curve of the bacterial keratitis group. (D) ROC curve of the fungal keratitis group. (E) Precision-recall curve of the Acanthamoeba keratitis group. (F) Precision-recall curve of the bacterial keratitis group. (G) Precision-recall curve of the fungal keratitis group. (For the evaluation phase, 12, 56, and 52 images per fold were used from the Acanthamoeba, bacterial, and fungal classes, respectively. On average, roughly 10, 46, and 45 images from these classes were correctly labeled in the evaluation phase. The presented confusion matrix is based on averaging the results of all folds; this estimate is reliable, yet not 100% accurate.)
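The per-class ROC and precision-recall analyses in Fig. 3 can be reproduced one-vs-rest from the softmax outputs; a sketch using scikit-learn (the authors' exact evaluation code is not published):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ovr_curve_aucs(y_true, y_prob, class_names=("AK", "BK", "FK")):
    """One-vs-rest ROC-AUC and PR-AUC per class.

    y_true holds integer class labels; y_prob is the softmax output of
    shape (n_samples, 3), one column per class.
    """
    y_true = np.asarray(y_true)
    aucs = {}
    for k, name in enumerate(class_names):
        y_bin = (y_true == k).astype(int)          # this class vs the rest
        aucs[name] = {
            "roc_auc": roc_auc_score(y_bin, y_prob[:, k]),
            "pr_auc": average_precision_score(y_bin, y_prob[:, k]),
        }
    return aucs
```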

Discussion

Microbial keratitis is an ophthalmic emergency that poses a significant risk to vision and necessitates urgent intervention, and the success of treatment is heavily dependent on timely diagnosis. Regrettably, the current gold-standard diagnostic, corneal scraping and culture, has certain challenges, including a low rate of culture positivity and a protracted duration of culture testing that delays the initiation of treatment. Notably, deep learning (DL) has demonstrated highly promising results in the diagnosis of MK [26]. In the current study, we developed a CNN-based model that utilizes DL to effectively distinguish between subtypes of MK. This model was specifically designed to leverage the ubiquity and advancements of smartphone technology, allowing smartphone cameras to capture the photographs needed for analysis. By utilizing smartphone cameras, this model offers increased portability and cost-effectiveness compared to the traditional slit-lamp imaging setups commonly used for medical image recognition. This is particularly advantageous as it overcomes the limitations associated with slit-lamp imaging and provides easy accessibility to diverse regions, including both developed countries and economically challenged developing regions. Therefore, the current model not only demonstrates a higher level of practicality but also extends the benefits of medical image recognition to regions that may not have access to specialized equipment. Our model showed an overall discrimination accuracy of 0.838, indicating its effectiveness in correctly classifying the subtypes of MK. When analyzing the discrimination accuracy for specific subtypes, the model achieved an accuracy of 0.81 for AK, 0.82 for BK, and 0.87 for FK.
This comprehensive evaluation of the model’s performance provides further evidence of its potential in the field of computer-aided diagnosis and medical image recognition.

In a multicenter study, Redd et al. similarly constructed a CNN-based model utilizing handheld cameras. Their top-performing model was MobileNet, attaining an AUC ranging from 0.83 to 0.86, while their CNN ensemble achieved an AUC of 0.84. Their investigation revealed that CNNs exhibited comparatively higher accuracy in identifying fungal ulcers (81%) than bacterial ulcers (75%) [27]. A similar trend emerged in our results, with the accuracy for FK reaching 88%, slightly surpassing that of BK at 85%; however, it is noteworthy that the AUCs for both FK and BK were identical, at 0.91. The key advantage of the MobileNet model resides in its mobility, making it particularly suitable for integration into telemedicine applications, and our model shares the same potential for versatility in telemedicine settings. To the best of our knowledge, this is the first study introducing a CNN-based model utilizing smartphone photos for the detection of MK. While Redd et al.'s model utilized photos captured with portable handheld cameras, it was confined to distinguishing between BK and FK [27]. In contrast, our model extends this capability by incorporating the identification of AK. Notably, our model exhibited a higher accuracy in identifying AK compared to both BK and FK.

There are two distinct categories of digital cameras: single lens reflex (SLR) cameras and ‘point-and-shoot’ cameras. The selection of either type for use in medical clinics is contingent upon several factors, such as the allocated budget, ease of use, specific photographic requirements, and the proficiency of the user. SLR cameras tend to be heavier, bulkier, and more expensive than ‘point-and-shoot’ cameras. One must also consider the megapixel resolution when making a camera selection; for instance, a 3.2-megapixel camera can adequately fulfill the needs of clinical photography [28]. Nonetheless, smartphones are emerging as a viable alternative to traditional digital cameras. The latest iterations of smartphones boast rear camera resolutions of up to 50 megapixels, accompanied by advanced image sensors, lens correction capabilities, and optical and electronic image stabilization. Notably, the image quality produced by smartphone cameras has been found to be exceptionally high, on par with images captured using a slit lamp camera [29]. At present, smartphones are equipped with built-in cameras that are well-suited for slit lamp imaging. The quality of slit lamp images captured using a smartphone is contingent upon three factors: the resolution of the smartphone camera sensor, the resolution of the slit lamp or microscope, and the focal length of the smartphone camera system.

In a study, Hung et al. employed segmentation models (U2 Net, U-net, and U-net++), with the U2 Net model demonstrating superior performance in cropping slit lamp images [30]. It is plausible that the U2 segmentation model achieved more precise corneal cropping compared to our manual method. Notably, Hung et al. reported that their most effective CNN model achieved an average accuracy of 80.0% in differentiating between FK and BK [30]. Xu et al. employed three classical deep architectures, including VGG16, GoogLeNet-v3, and DenseNet, to develop models for distinguishing subtypes of MK [25]. Their analysis of slit lamp images occurred at three distinct levels: image-level, patch-level, and sequence-level. These levels aimed to capture features from the entire image, the lesion area, and sequences of patch sets, respectively. For patch-level analysis, they utilized manual segmentation, creating patches representing the infectious lesion of the cornea, areas beyond the infectious lesion, conjunctival injection, and anterior chamber exudation. The classification accuracy demonstrated a notable increase, rising from 55.24% at the image-level to 78.73% at the sequence-level. This highlights the improved performance of CNN-based models when attention is focused on the target area of the image, while disregarding irrelevant regions [25]. In our study, we similarly applied manual segmentation to concentrate on the corneas of IK patients, a choice that likely contributed to the high accuracy.

It should be noted that our research included 889 photographs collected from 98 patients, which implies that some patients have multiple images represented. Similarity among these photographs could artificially inflate the evaluation metrics. We therefore took several steps to alleviate these issues as much as possible: the dataset was subjected to multiple shuffles, and we implemented K-fold cross-validation while avoiding any augmentation techniques, especially on the evaluation set. Moreover, we took care to manually exclude samples that exhibited a pronounced similarity to other images from the same patient. In our dataset, the AK group makes up just 8.77%, while the BK group accounts for over half of the total, which is a notable class imbalance. Despite this issue, the model achieved satisfactory accuracy for each class, and our preliminary experiments did not yield improved results when employing class weighting techniques. Consequently, we opted to present our approach without the application of any class weights.
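For reference, one common class-weighting scheme (inverse class frequency; the text does not specify which scheme was tried, so this particular formula is an assumption) can be sketched as:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights: weight for class c is
    n_samples / (n_classes * n_c), so rarer classes get larger weights.
    This mirrors the weighting the authors tested and ultimately omitted."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n = len(labels)
    return {int(c): n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}
```

In Keras, such a dictionary would be passed as the `class_weight` argument to `model.fit`; with our class proportions, AK would receive the largest weight and BK the smallest.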

This study has several limitations. Firstly, the clinical onset of AK typically includes epithelial involvement of the cornea, sometimes manifesting as a pseudodendritic lesion [31]. As the disease advances, it infiltrates the corneal stroma, frequently resulting in a ring-like lesion in the later stages [31]. Consequently, AK can present in various scenarios, and the limited sample size of the AK group may not adequately capture the diverse clinical presentations associated with AK. Secondly, corneal images are recognized for their increased susceptibility to artifacts compared to retinal images. The quality of the photographs can be affected by several factors, such as reflections from the surrounding environment, ambient lighting conditions, and overall image brightness. Despite our efforts to mitigate these artifacts, complete elimination was challenging. Previous studies have shown a significant reduction in the incidence of misclassified data when image brightness was meticulously controlled within a specific range [32]. Also, this study did not include samples from viral keratitis, which is a relatively common presentation in clinical practice. The multi-center design of the study meant that various photographers captured the images, potentially affecting the uniformity of the photographs; to address this concern, multiple images were taken of each patient, and only those meeting acceptable quality standards were selected for inclusion. Additionally, although a slit lamp adapter for smartphones was utilized, meaning that the photographic process may still require professional equipment, the cost of the slit-lamp adapter is significantly lower than that of a camera-mounted slit lamp. Moreover, transfer learning has the potential to facilitate the development of entirely mobile-based applications in the future.

Conclusion

In conclusion, this study leveraged deep learning, specifically a CNN, to develop a model capable of distinguishing between subtypes of MK. By utilizing ubiquitous smartphone cameras, we aimed to enhance portability and cost-effectiveness compared to traditional slit-lamp imaging setups. Our CNN-based model demonstrated practicality in clinical settings, showcasing an overall discrimination accuracy of 0.838. Notably, discrimination accuracy for AK, BK, and FK reached 0.81, 0.82, and 0.87, respectively, with an AUC of 0.92 for the ROC curves. We also described a unique method of manual cropping (to minimize artifacts and allow for appropriate analysis) for smartphone images, which are taken through the slit-lamp ocular and therefore yield a circular image, unlike those from traditional slit-lamp cameras. Moving forward, continued advancements in deep learning and imaging technologies hold the promise of further refining the accuracy and robustness of models in the field of ophthalmology.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

MK:

Microbial keratitis

BCVA:

Best-corrected visual acuity

PCR:

Polymerase chain reaction

AI:

Artificial intelligence

DL:

Deep learning

BK:

Bacterial keratitis

FK:

Fungal keratitis

AK:

Acanthamoeba keratitis

IVCM:

In vivo confocal microscopy

CNN:

Convolutional neural network

ROC:

Receiver operating characteristic

AUC:

Area under the curve

References

  1. Stapleton F (2023) The epidemiology of infectious keratitis. Ocul Surf 28:351–363. https://doi.org/10.1016/j.jtos.2021.08.007

  2. Lee R, Manche EE (2016) Trends and associations in hospitalizations due to corneal ulcers in the United States, 2002–2012. Ophthalmic Epidemiol 23(4):257–263. https://doi.org/10.3109/09286586.2016.1172648

  3. Collier SA, Gronostaj MP, MacGurn AK, Cope JR, Awsumb KL, Yoder JS et al (2014) Estimated burden of keratitis—United States, 2010. MMWR Morb Mortal Wkly Rep 63(45):1027

  4. Ballouz D, Maganti N, Tuohy M, Errickson J, Woodward MA (2019) Medication burden for patients with bacterial keratitis. Cornea 38(8):933–937. https://doi.org/10.1097/ico.0000000000001942

  5. Moussa G, Hodson J, Gooch N, Virdee J, Peñaloza C, Kigozi J et al (2020) Calculating the economic burden of presumed microbial keratitis admissions at a tertiary referral centre in the UK. Eye (Lond) 35(8):2146–2154. https://doi.org/10.1038/s41433-020-01333-9

  6. Koh YY, Sun CC, Hsiao CH (2020) Epidemiology and the estimated burden of microbial keratitis on the health care system in Taiwan: a 14-Year Population-based study. Am J Ophthalmol 220:152–159. https://doi.org/10.1016/j.ajo.2020.07.026

  7. Keay L, Edwards K, Dart J, Stapleton F (2008) Grading contact lens-related microbial keratitis: relevance to disease burden. Optom Vis Sci 85(7):531–537. https://doi.org/10.1097/opx.0b013e31817dba2e

  8. Henry CR, Flynn HW, Miller D, Forster RK, Alfonso EC (2012) Infectious keratitis progressing to endophthalmitis. Ophthalmology 119(12):2443–2449. https://doi.org/10.1016/j.ophtha.2012.06.030

  9. Keay L, Edwards K, Naduvilath T, Taylor HR, Snibson GR, Forde K et al (2006) Microbial keratitis. Ophthalmology 113(1):109–116. https://doi.org/10.1016/j.ophtha.2005.08.013

  10. Wong TY, Ormonde SE, Gamble G, McGhee CNJ (2003) Severe infective keratitis leading to hospital admission in New Zealand. Br J Ophthalmol 87(9):1103–1108. https://doi.org/10.1136/bjo.87.9.1103

  11. Green MD, Apel A, Naduvilath T, Stapleton F (2007) Clinical outcomes of keratitis. Clin Exp Ophthalmol 35(5):421–426. https://doi.org/10.1111/j.1442-9071.2007.01511.x

  12. Walker DH (2014) Principles of diagnosis of infectious diseases. In: Pathobiology of Human Disease, p 222

  13. Dalmon CA, Porco TC, Lietman TM, Prajna NV, Lalitha P, Das MR et al (2012) The clinical differentiation of bacterial and fungal keratitis: a photographic survey. Invest Ophthalmol Vis Sci 53(4):1787. https://doi.org/10.1167/iovs.11-8478

  14. Ting DSJ, Gopal BP, Deshmukh R, Seitzman GD, Said DG, Dua HS (2022) Diagnostic armamentarium of infectious keratitis: a comprehensive review. Ocul Surf 23:27–39. https://doi.org/10.1016/j.jtos.2021.11.003

  15. Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE et al (2021) Digital technology, tele-medicine and artificial intelligence in ophthalmology: a global perspective. Prog Retin Eye Res 82:100900. https://doi.org/10.1016/j.preteyeres.2020.100900

  16. Gunasekeran DV, Tham YC, Ting DS, Tan GS, Wong TY (2021) Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. Lancet Digit Health 3(2):e124–e134

  17. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomašev N, Blackwell S et al (2018) Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 24(9):1342–1350. https://doi.org/10.1038/s41591-018-0107-6

  18. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A et al (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22):2402. https://doi.org/10.1001/jama.2016.17216

  19. Kim SJ, Cho KJ, Oh S (2017) Development of machine learning models for diagnosis of glaucoma. PLoS ONE 12(5):e0177726. https://doi.org/10.1371/journal.pone.0177726

  20. Soleimani M, Esmaili K, Rahdar A, Aminizadeh M, Cheraqpour K, Tabatabaei SA et al. From the diagnosis of infectious keratitis to discriminating fungal subtypes; a deep learning-based study. Sci Rep. 2023;13(1):22200

  21. Ting DSJ, Foo VH, Yang LWY, Sia JT, Ang M, Lin Z et al (2020) Artificial intelligence for anterior segment diseases: emerging applications in ophthalmology. Br J Ophthalmol 105(2):158–168. https://doi.org/10.1136/bjophthalmol-2019-315651

  22. Zhang Z, Wang Y, Zhang H, Samusak A, Rao H, Xiao C et al (2023) Artificial intelligence-assisted diagnosis of ocular surface diseases. Front Cell Dev Biol 11. https://doi.org/10.3389/fcell.2023.1133680

  23. Soleimani M, Cheraqpour K, Sadeghi R, Pezeshgi S, Koganti R, Djalilian AR (2023) Artificial intelligence and infectious keratitis: where are we now? Life 13(11):2117. https://doi.org/10.3390/life13112117

  24. Shareef O, Soleimani M, Tu E, Jacobs D, Ciolino J, Rahdar A et al (2024) A novel artificial intelligence model for diagnosing Acanthamoeba keratitis through confocal microscopy. Ocul Surf (published online 29 July 2024)

  25. Xu Y, Kong M, Xie W, Duan R, Fang Z, Lin Y et al (2021) Deep sequential feature learning in clinical image classification of infectious keratitis. Engineering 7(7):1002–1010

  26. Sarayar R, Lestari YD, Setio AAA, Sitompul R (2023) Accuracy of artificial intelligence model for infectious keratitis classification: a systematic review and meta-analysis. Front Public Health 11. https://doi.org/10.3389/fpubh.2023.1239231

  27. Redd TK, Prajna NV, Srinivasan M, Lalitha P, Krishnan T, Rajaraman R et al (2022) Image-based differentiation of bacterial and fungal keratitis using deep convolutional neural networks. Ophthalmol Sci 2(2):100119. https://doi.org/10.1016/j.xops.2022.100119

  28. Mukherjee B, Nair AG (2012) Principles and practice of external digital photography in ophthalmology. Indian J Ophthalmol 60(2):119. https://doi.org/10.4103/0301-4738.94053

  29. Muth DR, Blaser F, Foa N, Scherm P, Mayer WJ, Barthelmes D et al (2023) Smartphone Slit Lamp Imaging—Usability and Quality Assessment. Diagnostics (Basel) 13(3):423. https://doi.org/10.3390/diagnostics13030423

  30. Hung N, Shih A, Lin C, Kuo M, Hwang YS, Wu W et al (2021) Using slit-lamp images for deep learning-based identification of bacterial and fungal keratitis: model development and validation with different convolutional neural networks. Diagnostics (Basel) 11(7):1246. https://doi.org/10.3390/diagnostics11071246

  31. Azzopardi M, Chong YJ, Ng B, Recchioni A, Logeswaran A, Ting DSJ (2023) Diagnosis of Acanthamoeba keratitis: past, present and future. Diagnostics (Basel) 13(16):2655. https://doi.org/10.3390/diagnostics13162655

  32. Ghosh A, Thammasudjarit R, Jongkhajornpong P, Attia J, Thakkinstian A (2021) Cornea 41(5):616–622. https://doi.org/10.1097/ico.0000000000002830

Acknowledgements

None.

Funding

None.

Author information

Contributions

Conceptualization, MS and AYC; methodology, MS and AYC; formal analysis, MJA, AR, and SY; data curation, AYC, NQC, AK, KE, and MA; writing—original draft preparation, NT, MA and KC; writing—review and editing, KC, MA, KE, IL, and MD; supervision, MS and AYC; project administration, MS. MS and AYC should be considered joint first authors. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Kasra Cheraqpour.

Ethics declarations

Ethics approval and consent to participate

This study adhered to the ethical standards outlined in the Declaration of Helsinki. Ethical clearance for the study, under the code IR.TUMS.FARABIH.REC.1400.064, was granted by the Ethics Committee of Farabi Eye Hospital, affiliated with Tehran University of Medical Sciences, Tehran, Iran.

Consent for publication

Consent for publication was waived by the IRB of Tehran University of Medical Sciences.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Soleimani, M., Cheung, A.Y., Rahdar, A. et al. Diagnosis of microbial keratitis using smartphone-captured images; a deep-learning model. J Ophthal Inflamm Infect 15, 8 (2025). https://doi.org/10.1186/s12348-025-00465-x
