Deep learning based highly accurate transplanted bioengineered corneal equivalent thickness measurement using optical coherence tomography

npj Digital Medicine volume 7, Article number: 308 (2024)

Corneal transplantation is the primary treatment for irreversible corneal diseases, but due to limited donor availability, bioengineered corneal equivalents are being developed as a solution, with biocompatibility, structural integrity, and physical function considered key factors. Since conventional evaluation methods may not fully capture the complex properties of the cornea, there is a need for advanced imaging and assessment techniques. In this study, we proposed a deep learning-based automatic segmentation method for transplanted bioengineered corneal equivalents using optical coherence tomography to achieve a highly accurate evaluation of graft integrity and biocompatibility. Our method provides quantitative individual thickness values, detailed maps, and volume measurements of the bioengineered corneal equivalents, and has been validated through 14 days of monitoring. Based on the results, it is expected to have high clinical utility as a quantitative assessment method for human keratoplasties, including automatic opacity area segmentation and implanted graft part extraction, beyond animal studies.

Globally, bilateral corneal blindness is estimated to affect 4.9 million people, accounting for 12% of the 39 million blind individuals worldwide1, an estimate based on WHO 2010 global blindness data and WHO 2002 sub-region causes (updated with 2010 data)2. Corneal transplantation surgery, the definitive treatment for irreversible corneal opacification unresponsive to medical or surgical interventions, faces significant challenges. The primary approach involves allogeneic corneal transplantation using donated corneas; however, a widespread shortage of donor corneas hampers accessibility for numerous patients. Presently, an estimated 12 million individuals await corneal transplantation globally, with only one transplant recipient for every 70 individuals on the waiting list. Notably, over half of the donated corneas originate from the United States and India3. Moreover, the success rates of corneal transplantation diminish in high-risk patients, including those with immune-mediated graft rejection, corneal endothelial damage due to immune reactions, multiple transplant failures, glaucoma, and corneal neovascularization4,5. To address the scarcity of donor corneas and the challenges associated with allogeneic corneal transplantation, there is a growing imperative to develop synthetic polymer materials that provoke minimal immune responses, leading to the emergence of bioengineered corneal equivalents. In line with these increasing demands, many attempts have been made to develop biocompatible bioengineered corneal equivalents using natural polymers including collagen, chondroitin sulfate, proteoglycans, and hyaluronic acid6,7,8,9.

The most crucial factors determining the performance of a bioengineered corneal equivalent are the structural integrity and biocompatibility of the graft6. Slit-lamp bio-microscopic examination is essential for the general evaluation of the bioengineered corneal equivalent, allowing non-invasive examination of transparency, edema, neovascularization, and epithelial defects10,11,12. However, the quantitative assessment obtainable through slit-lamp examination is limited; factors such as graft thickness and volume cannot be accurately measured with this technique. The ex vivo histological method allows observation of the graft condition, the degree of attachment between the graft and the recipient, their interactions, and the infiltration of inflammatory cells, which are essential for assessing biocompatibility13,14,15. However, because tissue staining techniques, including hematoxylin and eosin and immunohistochemical staining of the implanted graft, must be performed after sacrificing the experimental subject, it is impossible to continuously evaluate the status of the implant.

For the evaluation of the biocompatibility of an implanted bioengineered corneal equivalent, measuring both its thickness and the total thickness of the cornea is crucial. If the biocompatibility of the implant is low, an inflammatory response may induce corneal edema, resulting in an increase in overall corneal thickness. Therefore, monitoring changes in thickness, including both the transplanted graft and the recipient cornea over time, is essential when assessing the biocompatibility of the implanted cornea16,17,18. A variety of instruments measure corneal thickness, such as confocal microscopy19,20, corneal topography21,22, ultrasonic pachymetry23,24, and optical coherence tomography (OCT)25,26. Conventionally, ultrasonic pachymetry is most commonly used to measure corneal thickness; however, it requires direct contact with the cornea and topical anesthesia to measure the whole region. In addition, by dividing the cornea into partial sections and measuring the thickness at representative points for each area, it is possible to discern overall trends, but comprehensive information for every location within the cornea cannot be obtained. To address these limitations, OCT has been introduced and widely utilized in various applications including ophthalmology27,28, dentistry29,30, dermatology31,32, and even industry33,34. OCT image-based evaluation of a bioengineered corneal equivalent through thickness measurement has been conducted before11,35. However, since that research used A-scan profiling to discriminate the bioengineered corneal equivalent, it is difficult to match the same position and angle at every time point. In addition, because A-scan profiling provides depth information at a single point on the cornea, it is not sufficient to quantify the thickness of the entire corneal region.

In this study, we demonstrate deep learning-based automatic three-dimensional (3D) segmentation of bioengineered corneal equivalents in OCT images for precise biocompatibility evaluation through thickness-based whole-graft integrity assessment. For model training and testing, we used OCT images from rabbits after bioengineered cornea transplantation surgery. In addition, three ophthalmologists labeled the images according to their experience and expertise. To achieve high accuracy of bioengineered corneal equivalent segmentation, we compared four representative deep-learning models and an algorithm-based segmentation. We then measured the thickness variation of the implanted cornea over a 14-day monitoring period using the proposed method. In addition, to demonstrate the performance of our method for whole bioengineered corneal equivalent thickness measurement, we compared our results with simulated ultrasonic pachymetry as a conventional method. To the best of our knowledge, this is the first study to measure 3D bioengineered corneal equivalent thickness variation to assist biocompatibility evaluation using deep learning.

After network training, the best-performance checkpoint of each of the four U-Net-based models (i.e., U-Net, Attention U-Net, Nested U-Net, and Res U-Net) was saved, and the models were compared with each other using the test datasets, as shown in Fig. 1, which used the dataset of doctor 1. Among the deep learning architectures, Attention U-Net consistently had the best performance for each date (Fig. 1a). To quantitatively compare the performance of each model, we conducted 10 independent training and testing runs. Based on the obtained results, the averaged values with standard deviation are shown in Fig. 1. The averaged accuracy of Attention U-Net over the 14-day monitoring period is 92.743%; in contrast, the averaged accuracies of the other three models are 88.85% (Nested U-Net), 92.42% (Res U-Net), and 90.535% (U-Net). Moreover, even the minimum accuracy of Attention U-Net (93.54%) is higher than the maximum accuracies of the other models (90.96%, 93.42%, and 91.71%), which verifies that Attention U-Net is the most suitable deep learning model for 3D bioengineered corneal equivalent segmentation. In addition, in terms of the standard error of the mean, the maximum value for Attention U-Net is 0.1610, considerably lower than those of the other models (1.1482, 0.2944, and 1.3253), which supports the robustness of the proposed method. To compare the segmentation results across ophthalmologists, we also trained and tested on three different datasets from the three doctors using Attention U-Net, as shown in Fig. 1b. As a result, despite differences in expertise among the ophthalmologists leading to variations in labeling, the obtained accuracies are all over 90%; therefore, our training process using three different datasets enhances robustness under various conditions. The specific mean, standard deviation, and standard error of the mean for each deep learning model (Fig. 1a) and each ophthalmologist (Fig. 1b) are shown in Tables 1 and 2, respectively.

a Accuracy of the four different U-Net-based models at four different dates, averaged over 10 independent runs with standard deviation, used to select the most suitable deep learning model. b Accuracy of the representative model (Attention U-Net) on each ophthalmologist's dataset at four different dates, averaged over 10 independent runs with standard deviation.

In addition, the superior performance of Attention U-Net over the other three models is also verified by our proposed scoring method (confusion matrix), as shown in Table 3. As shown in Eqs. (1) and (2), the Dice coefficient and Jaccard index are designed to quantitatively evaluate the similarity between the binarized ground truth and the automatically segmented result; both scores express the relative pixel-level overlap of the two masks, with values close to 1 indicating high similarity. In Table 3, the dataset affects the Jaccard index and Dice coefficient within the same network, but both indices were highest for Attention U-Net, followed by Nested U-Net. Therefore, our representative analysis focused on Attention U-Net, and all of the following results were obtained with Attention U-Net.
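
The pixel-wise scores used here follow the standard confusion-matrix definitions; Eqs. (1) and (2) correspond to the Dice coefficient and Jaccard index. A minimal NumPy sketch, assuming binary masks with values 0/1:

```python
import numpy as np

def segmentation_scores(pred, gt, eps=1e-8):
    """Pixel-wise confusion-matrix scores for two binary masks (0/1 arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # true positives
    tn = np.sum(~pred & ~gt)     # true negatives
    fp = np.sum(pred & ~gt)      # false positives
    fn = np.sum(~pred & gt)      # false negatives
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "dice":        2 * tp / (2 * tp + fp + fn + eps),  # Eq. (1): 2|A∩B| / (|A| + |B|)
        "jaccard":     tp / (tp + fp + fn + eps),          # Eq. (2): |A∩B| / |A∪B|
    }
```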

To demonstrate the performance of the proposed deep learning-based automatic bioengineered corneal equivalent segmentation method, we compared the two-dimensional (2D) image outputs of the model with the results of a customized segmentation algorithm, as described in Fig. 2. Among all the labeled volumes, we selected two representative dates, Day 1 (Fig. 2a) and Day 7 (Fig. 2b). The first column of Fig. 2 presents the maximum amplitude projection (MAP) image showing the whole region of the transplanted cornea. Within the graft-transplanted area, we extracted cross-sections from three representative positions, indicated by the green, blue, and red dashed lines in the MAP image. For each position, the original input image, the labeling result of doctor 1, the model output, and the algorithm result are displayed sequentially. Upon qualitative comparison of the images, it is evident that the model outputs resemble the results labeled by the doctor. Although the algorithm yields a similar overall segmented region, it fails to effectively filter out the outliers marked by the yellow arrows, decreasing segmentation accuracy. Even though the shapes of the labeled 2D images vary, the deep learning model segmented the graft accurately, whereas the algorithm-based results contain numerous errors relative to the labels. Therefore, this outcome verifies the effectiveness of the proposed deep learning-based automatic graft segmentation under different input image conditions.

From the MAP image, three different representative positions are selected (green, blue, and red). The comparison includes the original input images (second column), labeled images from Doctor 1 (third column), deep learning model output images (fourth column), and customized algorithm output images (fifth column). The edge color of each image indicates the extracted line from the MAP image, and the yellow arrows in the algorithm images indicate errors. a Results from 1 day after transplantation. b Results from 7 days after transplantation. MAP maximum amplitude projection.

In addition, for quantitative analysis of the model outputs and the algorithm, we introduced four different factors (MSE, SSIM, embedding similarity, and LPIPS distance), whose results are presented in Fig. 3. For MSE, which focuses on pixel-level differences (values close to 0 indicating high similarity), the algorithm output exhibits significant discrepancies compared with the highly similar outputs of the trained models. For SSIM, which compares the structural features of pixels (values close to 1 indicating high similarity), the calculated value for the algorithm is noticeably lower than those of the three model outputs. Moreover, in contrast to simple pixel-based comparison metrics like MSE and SSIM, comparing the measured embedding similarity (values close to 1 indicating higher similarity) and LPIPS (values close to 0 indicating higher similarity) between model outputs and algorithm results makes it evident that the algorithm-based segmentation is less accurate on both factors, as shown in Fig. 3c, d. Consequently, after quantitatively analyzing the four factors, the predictions of the deep learning model were consistently similar to the results of doctors 1, 2, and 3, whereas the algorithm-based results (i.e., the conventional method) showed inferior performance. Thus, the results in Fig. 3 demonstrate the feasibility of the deep learning-based method for segmenting implanted corneal grafts.

Outputs of deep learning models trained with three different doctors (Doctor 1, 2, and 3) and output of the algorithm were compared using data measured at 4 different dates. a MSE comparison. b Calculated SSIM values. c Measured embedding similarity. d Measured LPIPS distance.
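
The exact implementations of the four metrics are not given in the text; the sketch below is one plausible realization, assuming scikit-image for MSE and SSIM, cosine similarity over pre-trained ResNet-50 features for the embedding similarity, the lpips package (AlexNet backbone) for the LPIPS distance, and the ImageNet normalization statistics quoted in the Methods (torchvision ≥ 0.13 weights API):

```python
import torch
import torch.nn.functional as F
import lpips                                   # pip install lpips
from skimage.metrics import mean_squared_error, structural_similarity
from torchvision import models, transforms

def pixel_metrics(a, b):
    """MSE and SSIM on two 2D grayscale arrays scaled to [0, 1]."""
    return {"mse": mean_squared_error(a, b),                      # ~0 = similar
            "ssim": structural_similarity(a, b, data_range=1.0)}  # ~1 = similar

# Grayscale -> 3 channels with ImageNet statistics for the ResNet-50 encoder
imagenet_tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda t: t.repeat(3, 1, 1)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()  # 2048-d features
lpips_fn = lpips.LPIPS(net="alex").eval()       # AlexNet-based LPIPS

def perceptual_metrics(img_a, img_b):           # img_*: PIL grayscale images
    ta, tb = imagenet_tf(img_a).unsqueeze(0), imagenet_tf(img_b).unsqueeze(0)
    # LPIPS expects 3-channel input scaled to [-1, 1] instead
    la = transforms.ToTensor()(img_a).repeat(3, 1, 1).unsqueeze(0) * 2 - 1
    lb = transforms.ToTensor()(img_b).repeat(3, 1, 1).unsqueeze(0) * 2 - 1
    with torch.no_grad():
        sim = F.cosine_similarity(encoder(ta).flatten(1), encoder(tb).flatten(1)).item()
        dist = lpips_fn(la, lb).item()
    return {"embedding_similarity": sim, "lpips": dist}  # ~1 / ~0 = similar
```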

To demonstrate the robustness of our deep neural network model across different cases, we automatically segmented the graft in 3D for each ophthalmologist's dataset over the 14-day monitoring period, as shown in Fig. 4. The first column of Fig. 4a–d shows the MAP image of the original graft-transplanted eye at each date, and the second column shows MAP images of the annotated label (first row) and segmented output (second row). The segmented results varied according to the proficiency of each ophthalmologist, and the model followed these variations, which supports its robustness across different datasets. Upon qualitative comparison of the graft-area MAP images, the model outputs resemble not only the results labeled by each doctor but also each other across the doctor datasets, and the same tendency of the graft area to decrease as the days passed is observed.

a–d show results at 1, 4, 7, and 14 days after bioengineered corneal equivalent transplantation surgery, respectively. MAP maximum amplitude projection.

In addition, for the analysis of graft thickness variation at each date, we employed a thickness map, as shown in Fig. 5. Thickness maps with a color bar were obtained from the Attention U-Net output for each date and dataset. Each color represents the relative thickness according to the color bar, and black areas indicate zero graft thickness in the two-dimensional OCT images. Comparing the thickness maps across dates (Fig. 5a–d), variation in the distribution and area of graft thickness is evident. In addition, at 7 and 14 days after transplantation, the thickness color maps confirm decreased thickness and a reduced graft area. Therefore, our proposed automatic graft segmentation method not only classifies the artificial graft region but also provides the micrometer-scale thickness distribution of the whole graft area, enabling high-precision post-transplantation monitoring.

Each obtained collagen sheet thickness is normalized and mapped according to the color bar. As the color shifts towards red, the thickness increases; as it shifts towards black, it becomes thinner. a–d show results at 1, 4, 7, and 14 days after bioengineered corneal equivalent transplantation surgery, respectively.

Additionally, we quantitatively evaluated the thickness variation of the graft area at each date using color-coded thickness histograms, as shown in Fig. 6. The horizontal axis of each histogram represents the thickness at every point in the graft, and the vertical axis represents the number of pixels corresponding to each thickness. Figure 6a–c presents histograms of each ground truth dataset annotated by the doctors and the representative output of the trained network, drawn as solid and dashed lines, respectively. In terms of the ground truth, although each histogram has a different number of thickness pixels because of the different experience of each ophthalmologist, the thickness variation tendency across dates was identical for all three doctors. In addition, qualitative comparison confirms that the segmented results of our proposed method follow both the thickness variation tendency and the histogram distribution. To quantitatively compare the thickness variation at each date, we obtained the pixel average and standard deviation of graft thickness, as shown in Table 4. In the ground truth datasets, one day after surgery (Day 1), the largest graft area remained, maintaining a circular shape after transplantation, and the average and standard deviation of overall thickness were highest. After 4 days, the average thickness decreased by 46.69 µm (6.67 pixels; 28.78%) compared to Day 1, indicating that graft integration was enhanced over the whole cornea region. This decrease in average thickness is also confirmed in Fig. 6b, where the minimum-thickness (blue) area expanded compared to Day 1. On Day 7, more regions had integrated with the cornea, which reduced the overall area of the remaining graft (Fig. 6c); however, in terms of thickness, the integration of the recipient cornea and the bioengineered corneal equivalent over time widens a region with an indistinct boundary, resulting in a measured increase in thickness. Accordingly, the graft thickness in the histogram increased by an average of 15.54 µm (13.54%; 2.22 pixels). Moreover, based on the number of thickness pixels, the graft area decreased by 18.21% from Day 1 to Day 14. To compare the similarity of the histograms between ground truth and representative output, we computed the correlation between the two histograms for each doctor's dataset. Histogram similarity expresses the similarity numerically using the hue, saturation, and brightness of the pixels in the two images, ranging from -1 for completely different images to a maximum of 1 for identical images. In our results, the histogram similarities for the datasets of doctor 1, doctor 2, and doctor 3 were 0.9445, 0.9456, and 0.9338, respectively. Therefore, based on these results, it was confirmed that the statistically inferred transformation of the graft thickness distribution is reliably reproduced by our proposed method.

The value on the y-axis represents the number of pixels with the collagen sheet thickness value given on the x-axis. a–c show histogram results from the three different doctors. The histograms of the ground truths and outputs are presented as solid and dashed lines, respectively. Black denotes Day 1, red Day 4, green Day 7, and blue Day 14.
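
The histogram construction and its correlation-based similarity can be sketched as follows; the 7 µm bin width assumes one histogram bin per axial pixel (the pixel-to-micrometre conversions reported above imply a 7 µm axial pitch), and the Pearson-correlation form (range -1 to 1) matches the behavior described for the histogram similarity:

```python
import numpy as np

def thickness_histogram(thickness_um, bin_um=7.0):
    """Histogram of per-pixel graft thickness; zero (background) pixels excluded."""
    vals = thickness_um[thickness_um > 0]
    bins = np.arange(0, vals.max() + bin_um, bin_um)
    return np.histogram(vals, bins=bins)

def histogram_similarity(h1, h2, eps=1e-12):
    """Pearson correlation between two histograms: 1 = identical, -1 = opposite."""
    a = np.asarray(h1, float) - np.mean(h1)
    b = np.asarray(h2, float) - np.mean(h2)
    return (a @ b) / (np.sqrt((a @ a) * (b @ b)) + eps)
```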

In addition, to demonstrate the effectiveness of our proposed Attention U-Net-based graft segmentation, the results were compared with a simulated ultrasound corneal pachymetry map, a conventional method widely utilized in corneal thickness measurement. A total of 48 points at equal intervals from the center of the cornea were extracted to simulate ultrasonic pachymetry, and a pachymetry map was created with a color corresponding to each thickness. Figure 7 shows our proposed method-based graft thickness visualization and the ultrasound pachymetry map of the bioengineered corneal equivalent at each date. The pachymetry map can visualize the thickness variation and low-level thickness distribution, but there are differences in detailed corneal shape and thickness distribution compared with the color map. In the red-arrow area in Fig. 7a, the graft thickness registers the minimum value with ultrasound pachymetry, whereas the pixel thickness of the color map consists of discrete values following a distribution with a mean of 167.09 µm (23.87 pixels) and a standard deviation of 17.92 µm (2.56 pixels). Because of the measurement interval of ultrasound pachymetry, over 50,000 pixels of graft thickness information were compressed and interpolated into 48 values, resulting in a loss of graft information. To measure the thickness and transformation of a graft composed of continuous values, our proposed method thus shows superior performance in terms of the amount of detailed thickness data compared with ultrasonic pachymetry. Additionally, the area indicated by the blue arrow in Fig. 7c shows the limitations of interpolation-based ultrasound pachymetry for graft thickness measurement. In our proposed method, the removed graft area between the inferior and temporal regions is presented as zero and colored black. However, corneal ultrasound pachymetry replaced this removed graft area with the interpolated minimum value of the color bar because of the measurement intervals. Similarly, the omission of thickness information due to interpolation occurred for the graft deformation in Fig. 7d, which makes it hard to detect the actual biological changes of the graft with ultrasound pachymetry.

For OCT, as the color shifts towards red, the thickness increases; as it shifts towards black, it becomes thinner. For ultrasound pachymetry, as the color shifts towards red, the thickness increases; as it shifts towards green, it becomes thinner. a–d show results at 1, 4, 7, and 14 days after bioengineered corneal equivalent transplantation surgery. S superior, N nasal, T temporal, I inferior.
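
The text states only that 48 points at equal intervals from the corneal center were sampled; the ring-and-spoke layout below (4 rings × 12 spokes) is a hypothetical arrangement to illustrate how sparse sampling followed by interpolation discards the detail of the dense thickness map:

```python
import numpy as np
from scipy.interpolate import griddata

def simulate_pachymetry(thickness_um, n_rings=4, n_spokes=12):
    """Sample a dense thickness map at 48 polar points (4 rings x 12 spokes, a
    hypothetical layout) and rebuild the map by linear interpolation, mimicking
    the sparse sampling of ultrasonic pachymetry."""
    h, w = thickness_um.shape
    cy, cx = h / 2, w / 2
    radii = np.linspace(0.2, 0.9, n_rings) * min(cy, cx)
    pts, vals = [], []
    for r in radii:
        for a in np.linspace(0, 2 * np.pi, n_spokes, endpoint=False):
            y, x = int(cy + r * np.sin(a)), int(cx + r * np.cos(a))
            pts.append((y, x))
            vals.append(thickness_um[y, x])
    yy, xx = np.mgrid[0:h, 0:w]
    return griddata(pts, vals, (yy, xx), method="linear")  # NaN outside the sampled hull
```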

In this study, we proposed a deep learning-based automatic bioengineered corneal equivalent segmentation and validation method, which has several advantages over conventional corneal thickness measurement devices, as demonstrated on OCT images of corneas implanted with bioengineered corneal equivalents. (1) First, we demonstrated that the thickness map of our proposed method provides denser information than the existing interpolated ultrasound pachymetry map36,37. The graft thickness information mapped to a pachymetry map presents only representative thickness values obtained by ultrasound pachymetry because of measurement intervals; these representative values therefore provide insufficient quantitative information compared with our proposed method. The segmentation results and thickness map of Attention U-Net represent the implanted graft with more than 50,000 thickness points together with the graft shape, whereas ultrasound pachymetry represents only several interpolated thickness points in a circular layout. We also showed that ultrasound pachymetry has limitations for monitoring the progression of a corneal graft. In bioengineered corneal equivalent research, ultrasound pachymetry makes it hard to measure discontinuous thickness changes such as tearing, crumpling, and cracking of the graft. The information obtained is limited to the number of measurement points, meaning that it cannot capture continuous changes or areas that were not measured (i.e., omitted data)38. As a result, the amount of information it can provide is limited compared with OCT. (2) Second, our Attention U-Net-based segmentation demonstrated robustness to the ground truth datasets by training on datasets from three different ophthalmologists. Three specialists annotated the ground truth datasets from bioengineered corneal equivalent OCT images according to their subjective medical perspectives, and each dataset was used to train the Attention U-Net with identical parameters and network configuration. After training, the thickness of the bioengineered corneal equivalent measured from the Attention U-Net test results and the ground truth of the three datasets was visualized through histograms. In conventional OCT pachymetry, it is hard to estimate this thickness distribution tendency because the intensity boundary along the A-line scan cannot be defined with a unified algorithm once adhesion between the graft and the cornea occurs11. Although the number of pixels corresponding to each thickness differs, our results showed a consistent tendency across the three datasets. Our statistical analysis of changes in the graft thickness distribution proved that our method can capture changes in the thickness distribution determined by the degree of graft integration, and this result can be used as an indicator for evaluating the biocompatibility of bioengineered corneal equivalents. In addition, since the bioengineered corneal equivalent used in this study was a regenerative 3D-printed bio corneal graft, unlike a synthetic cornea (e.g., Boston KPro), it is essential to monitor whether it maintains a thickness exceeding a certain level until new collagen is generated. In other words, the artificial graft used in this experiment is clearly distinguished from surrounding tissues by its thickness and boundaries, which indicates that inflammation and cloudiness did not occur. This demonstrates that the biocompatibility of the artificial graft is high. (3) Finally, our proposed deep learning method presents actual values rather than inferred values generated by an interpolation algorithm. Corneal thickness measurement using OCT pachymetry has the convenience of not requiring immersion of the eye in a coupling fluid; therefore, OCT-based corneal pachymetry has replaced ultrasound pachymetry using image processing and interpolation algorithms39. Despite these conveniences, however, our results show that the implanted graft changed unevenly and even developed disconnected areas. In such cases, an interpolation-based pachymetry map provides insufficient or uncertain thickness information compared with our proposed deep learning-based corneal graft thickness measurement.

In U-Net-based deep learning for ophthalmic OCT image segmentation, the retina is a common target because it has multiple inner layers, including the internal limiting membrane (ILM), nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), external limiting membrane (ELM), photoreceptor layer (PR), retinal pigment epithelium (RPE), and Bruch's membrane (BM). In recent years, various applications of U-Net-based deep learning techniques for retinal layer segmentation in OCT images have been widely reported40,41,42,43,44. Alongside retinal layer segmentation, corneal OCT image segmentation studies with U-Net-based deep learning models have been demonstrated as well. Most previous studies focused on the segmentation of corneal layers, including the epithelium, Bowman's layer, and the stroma45,46,47. In contrast, our proposed method is the first approach to apply a U-Net-based deep learning network to implanted corneal graft segmentation in OCT images. Therefore, our study is expected to serve as an initial step toward widely utilizing corneal OCT images in various cases, including corneal layer segmentation in disease, corneal transplantation, and corneal opacities.

In addition, the reason for conducting volumetric analysis of the implanted corneal graft is that, for long-term monitoring, it is difficult to obtain images from the exact same position every time, which hinders quantitative analysis of thickness variation. To address this limitation, we introduced 3D segmentation of the entire corneal graft in this study. By combining the segmented results from 2D images into a 3D volume analysis, we were able to achieve accurate analysis even during long-term monitoring. For the 3D segmentation of the whole implanted corneal graft, we used a basic 2D U-Net architecture-based deep learning model. It is also possible to modify the basic U-Net structure into a 3D U-Net48 to obtain a complete volume of the corneal graft. The reason for using a 2D U-Net in this study is that, in actual clinical settings, the entire volume is not always measured, and images are often obtained at regular intervals; therefore, 2D OCT image segmentation is sometimes necessary. With clinical applicability in mind, this approach is expected to facilitate the integration and utilization of other experimental results and data.

In terms of the dataset used for training, validation, and testing, a total of 15 three-dimensional volume datasets consisting of 3086 cross-sectional images were obtained from day 0 (immediately after transplantation) to day 14 in an in-vivo experiment. To set the train:validation:test ratio at 2:1:1, the volumes were divided 8:3:4. The smaller number of volumes on days 4, 7, and 14 reflects that, over time, some cases showed integration with the existing cornea, while others experienced graft detachment due to external factors such as the rabbit scratching. However, it is noteworthy that despite using a higher proportion of data from immediately after transplantation and the early stages for training, and without validation datasets from the later dates, the test results for days 4, 7, and 14 were highly precise. This demonstrates the effectiveness of the proposed model. Additionally, as the number of datasets used for training increases, the accuracy is expected to improve further. Therefore, based on the results of this study, precise analysis of the thickness of the implanted corneal graft is anticipated to be achievable. Regarding potential bias of the deep learning model, the model was trained using data from Day 0 and Day 1 because validation datasets from Days 4, 7, and 14 were not used. However, the test results for Days 4, 7, and 14 showed a level of accuracy similar to the Day 1 results. This level of accuracy would not have been achieved if the model had been biased towards the Day 0 and Day 1 data. Therefore, this provides evidence that the model in this study did not develop a bias towards specific inputs.

In applying the results of this study to actual clinical settings, one must consider the differences between bioengineered corneal equivalents and DMEK or DSAEK grafts. The primary consideration is the difference in thickness, as DMEK grafts are notably thin, with a thickness of 20 µm. However, this issue can be technically addressed by enhancing the axial resolution of OCT, which is determined by the wavelength bandwidth of the light source used. For example, if an OCT light source with a central wavelength \(\lambda_c\) of 820 nm and a bandwidth \(\Delta\lambda\) of 200 nm is used, the axial resolution, \(0.44 \times \lambda_c^2 / \Delta\lambda\), will be 1.47 µm. Therefore, it is possible to image the thickness of a 550 µm donor graft and a 20 µm DMEK at 374 and 13 pixels, respectively, making segmentation feasible. Furthermore, in the bioengineered corneal equivalent transplantation results, the graft was thinner near the periphery, yet successful segmentation was achieved. Given that OCT can resolve thicknesses of 1.5 µm or greater, comparable segmentation accuracy is expected to be attainable in human cases as well. In terms of model accuracy, a model trained on bioengineered corneal equivalent data may show reduced accuracy when applied directly to DMEK and DSAEK segmentation tasks. Therefore, it is recommended to follow the workflow demonstrated in this study: first obtain DMEK and DSAEK data, then perform labeling before training the model. This approach is anticipated to achieve results comparable to those obtained with the bioengineered corneal equivalent data.
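
The quoted figures follow directly from the Gaussian-source axial resolution formula; a quick check (the exact value is 1.479 µm, so the pixel counts shift slightly from the 374 and 13 quoted in the text, which round the resolution to 1.47 µm):

```python
# Axial resolution of a Gaussian-spectrum OCT source: dz = 0.44 * lc**2 / dl
lc, dl = 820e-9, 200e-9                      # central wavelength and bandwidth (m)
dz = 0.44 * lc**2 / dl                       # 1.479e-6 m, i.e. ~1.47 µm
print(dz * 1e6, 550e-6 / dz, 20e-6 / dz)     # ~1.48 µm; ~372 px (donor), ~13.5 px (DMEK)
```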

Regarding the effect of graft curvature and location on thickness measurement, OCT, the fundamental imaging method of this study, is capable of imaging the entire corneal area, meaning that the curvature of the graft has minimal impact on the overall thickness analysis. In other words, imaging the entire corneal area allows the acquisition of a 3D volumetric image that follows the curvature of the cornea. Therefore, since the implanted bioengineered corneal equivalent lies well within the imaging range of OCT, it is highly unlikely that curvature would interfere with the results. In clinical cases, images taken after DSEK, DMEK, DALK, and PKP surgeries reveal that the transplanted areas are either distinguishable as different layers from the existing cornea or show intensity differences from the surrounding tissues. The deep learning model used in this study, which automatically segments the surgical site based on the acquired images, extracts features from the images through convolution operations. This suggests that a surgical area with different intensity levels or distinguishable layers on OCT images is sufficiently distinguishable in the actual clinical environment. Moreover, the findings of our study can be applied not only to cases with bioengineered corneal equivalents, DALK, and PKP but also to cases with posterior graft locations, such as DMEK and DSEK, without limitation.

In addition, the presented deep learning-based automatic implanted graft segmentation method has room for further improvement. First, a total of 15 volumes from 9 subjects were used for training and testing of the deep learning model. Despite the limited number of subjects, the model was able to learn effectively without bias, resulting in high test accuracy. However, if a larger dataset had been used, even higher accuracy could likely have been achieved. Therefore, by integrating the proposed corneal graft segmentation method with data obtained from various centers and hospitals, the robustness and applicability of the deep learning model are expected to be further enhanced. Next, although we were able to accurately and quantitatively measure the thickness of the implanted cornea through 3D volume analysis, we were unable to analyze differences in visual acuity outcomes over time related to thickness variation because this information was not measured in the original experiment. This limitation is attributed to the fact that the study involved animal experiments using rabbits, in which visual acuity could not be measured. Nevertheless, if this research is applied to clinical settings, along with previous research addressing the relationship between the thickness of transplanted grafts and biocompatibility (i.e., corneal graft survival)49,50, further studies are expected to explore the correlation between various factors and thickness variation.

Therefore, the proposed deep learning-based automatic three-dimensional segmentation analysis of bioengineered corneal equivalents is a high-precision method for evaluating the biocompatibility of the graft in terms of thickness, with quantitative parameters, during continuous monitoring experiments. The results of this study offer a new perspective on models that were previously limited to distinguishing corneal layers and on the application of deep learning to corneal OCT images. The findings suggest that this approach can be used not only for thickness measurement and segmentation of corneal transplant areas (bioengineered corneal equivalent, DMEK, DSEK, and PKP) but also for comprehensive pre- and post-keratoplasty evaluation and analysis, including the diagnosis of inflammation-based thickness variation and the identification of opacity areas before and after surgery. Based on the experimental process flow chart of this study, this method could be a practical tool for immediate application in clinical settings.

For this study, we used OCT volume images of transplanted bioengineered corneal equivalents derived from our previous study examining a collagen sheet for rabbit corneal perforation patch grafts using swept-source OCT35. Specifically, that study developed an in situ photochemical crosslinking (IPC)-assisted collagen compression process and demonstrated the clinical potential of the resulting IPC-compressed collagen construct using an in vivo rabbit corneal perforation model. The IPC construct stably protected the perforation site by maintaining its structure without noticeable biodegradation and inflammation, effectively preventing aqueous humor leakage and maintaining the integrity of the eye globe without causing additional complications. The animal experiment was approved by the Institutional Animal Care and Use Committee of the Daegu-Gyeongbuk Advanced Medical Innovation Foundation (DGMIF) and performed in accordance with protocol DGMIF-17 080 801-00, following the Association for Research in Vision and Ophthalmology's animal use guidelines. New Zealand white rabbits (3–3.5 kg, 1-week acclimatization period) were anesthetized with an intramuscular injection of a mixture of ketamine (15 mg/kg) and Rompun (5 mg/kg). A more detailed description of the animal experiment can be found in our IPC-assisted collagen paper35. Three trained doctors, including the ophthalmologist who performed the corneal transplantation surgery in the animal experiment, annotated the ground truth datasets. To train the deep learning model, the original input image and its label (i.e., ground truth) are required to provide the answer to the model and identify the distinguishable part of the image (i.e., the implanted corneal graft). Three different ground truths were generated, one by each ophthalmologist based on their own experience and criteria, and each dataset (input and ground truth) was used for an independent model training (doctor 1, 2, and 3) to validate the robustness of the network. Regarding the ground truth generation process shown in Fig. 8a, after each specialist annotated the target graft area (area selection), the annotation was segmented and processed by region growing and binarization algorithms. A total of 3086 corneal OCT images in 15 volume datasets were prepared, and all data were preprocessed under the same conditions. The volume data were divided into five types by date: (1) Day 0 after transplantation (5 volumes), (2) Day 1 (4 volumes), (3) Day 4 (2 volumes), (4) Day 7 (2 volumes), and (5) Day 14 (2 volumes), as shown in Table 5. Among the datasets from Days 1, 4, 7, and 14, one dataset from each date was used for the final test of the deep learning model, and the remaining datasets were used for training and validation. Specifically, we used 9 different rabbits for data collection, from which a total of 15 volumes were acquired. For the training, validation, and test distribution, data obtained from the same subject were used exclusively for either training or testing, not both. In other words, the subjects used for training were different from those used for validation and testing. This approach supports the validity of the results obtained in this study. An overall flow chart for generating the deep learning model to predict the transplanted graft region in OCT images is shown in Fig. 9.

a Dataset preparation flow annotated by specialists. b Evaluation method for comparing outputs of U-Net based models. c Architecture of representative U-Net based model (Attention U-Net).
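
The code-level details of the Fig. 8a label-generation step (area selection, then region growing and binarization) are not specified in the text; a plausible scikit-image sketch, in which the seed point and tolerance stand in for the specialist's annotation:

```python
import numpy as np
from skimage.filters import median
from skimage.morphology import disk
from skimage.segmentation import flood   # simple region growing

def grow_label(oct_bscan_u8, seed_yx, tolerance=25):
    """Expand a specialist's seed annotation into a binary label.
    oct_bscan_u8: 8-bit grayscale B-scan; seed_yx: (row, col) inside the graft;
    tolerance: intensity tolerance for growing (value assumed, not from the paper)."""
    smoothed = median(oct_bscan_u8, disk(3))            # suppress speckle before growing
    mask = flood(smoothed, seed_yx, tolerance=tolerance)
    return mask.astype(np.uint8)                        # binarized ground truth (0/1)
```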

In total, 3086 OCT images of the transplanted corneal graft were obtained. All images were manually annotated and used as ground truth. The images were divided into three categories (training data: 1658 images; validation data: 632 images; test data: 83 images). Using the processed dataset, four different U-Net-based deep learning models were trained and predicted the corneal graft region on the test datasets. Then, through evaluation using the Dice coefficient and Jaccard index, the best deep learning model was selected and quantitative analyses were conducted.

For the automatic segmentation of the bioengineered corneal equivalent area, we used four modified U-Net-based deep neural network models: modified U-Net51, Nested U-Net52, Res U-Net53, and Attention U-Net54. To save the best checkpoint of each model after training, model performance was evaluated under a unified binary cross-entropy (BCE) loss. To determine the best model according to the evaluation score, the outputs of each model were evaluated and quantitatively compared using a confusion matrix. As the evaluation method for the segmentation results, we used pixel-wise calculations55, including accuracy (AC), sensitivity (SE), specificity (SP), Dice coefficient (DI), and Jaccard index (JA), as shown in Fig. 8b. Through comparison of the confusion matrix results, the representative model with the best performance was selected, and the subsequent quantitative analysis was conducted with it. Figure 8c visualizes the structure of the Attention U-Net used as the representative model for our quantitative analysis. The networks were implemented in Python 3.7.10 with PyTorch 1.13.1. The workstation consisted of an AMD EPYC 7262 CPU, 256 GB RAM, and an NVIDIA RTX A6000 GPU. Each network was trained for 200 epochs (6 minutes per epoch) with a batch size of 8, using 1658 training images and 632 validation images of 512 × 512 pixels. All networks were optimized with the Adam optimizer starting at a learning rate of 0.001, and a step learning rate scheduler with a decay factor of 0.1 was employed to prevent overfitting. Input images for model training were transformed with normalization, random shifts of 10 percent of the image size, and random flips. In terms of image size, each OCT volume had an original size of 1710 × 500 × 610 pixels (X × Y × Z). The pixel resolution of the system was 7 × 21 × 7 µm3, resulting in a physical volume of 11.9 × 10.5 × 4.2 mm3. To focus on the implanted corneal graft region, the original images were first cropped in the XZ plane to 1024 × 500 × 512 pixels. Next, we selected the corneal region in the XY plane, with each volume containing approximately 200 images, giving 1024 × 200 × 512 pixels. The X-axis size was then reduced to 512 pixels. Consequently, the final volume size was 512 × 200 × 512 pixels, corresponding to physical dimensions of 7.2 × 4.2 × 3.5 mm3.
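
The training configuration above condenses into a short PyTorch sketch. The Attention U-Net and OCT data loader are replaced by a one-layer stand-in and random tensors to keep the sketch self-contained (the authors' actual implementations are in the linked GitHub repository), and the scheduler's step size is an assumption, since only the 0.1 decay factor is stated:

```python
import torch
from torch import nn, optim

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # stand-in for the Attention U-Net
criterion = nn.BCEWithLogitsLoss()                  # unified binary cross-entropy loss
optimizer = optim.Adam(model.parameters(), lr=1e-3) # Adam, initial learning rate 0.001
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)  # step size assumed

for epoch in range(200):                            # 200 epochs
    for _ in range(4):                              # stands in for the 1658-image loader
        images = torch.rand(8, 1, 512, 512)         # batch size 8, 512 x 512 inputs
        labels = (torch.rand(8, 1, 512, 512) > 0.5).float()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                                # decays the learning rate by 0.1
```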

To quantitatively visualize graft thickness variation over the full region, we adopted a thickness map and a thickness histogram. Because of the hemispherical shape of the graft and recipient cornea, the thickness map was optimized to visually represent the overall distribution of graft thickness. To obtain the thickness map, we counted the pixels between the upper and lower boundaries of the segmented corneal image from the network outputs. This process was conducted on the ground truth and the network outputs, respectively, and the results were compared by date. In addition, the histogram was plotted using the stacked thickness values of the graft, and a statistical analysis of the correlation between thickness variation and date was conducted.
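
Counting pixels between the upper and lower boundaries reduces to a column sum along depth when the segmentation mask is solid between its edges; a minimal sketch, using the 7 µm axial pixel pitch from the stated 7 × 21 × 7 µm3 resolution:

```python
import numpy as np

AXIAL_UM_PER_PX = 7.0   # axial pitch implied by the 7 x 21 x 7 µm3 pixel resolution

def thickness_map(seg_volume):
    """Per-(X, Y) graft thickness from a binary segmentation volume shaped (X, Y, Z).
    Summing segmented pixels along depth Z equals the boundary-to-boundary pixel
    count when the mask is solid between its upper and lower edges."""
    px_counts = seg_volume.astype(bool).sum(axis=2)
    return px_counts * AXIAL_UM_PER_PX     # micrometres; 0 where no graft is present
```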

To benchmark our proposed deep learning-based automatic graft volume segmentation method against a conventional graft thickness measurement approach, we customized a graft segmentation algorithm in MATLAB, as shown in Fig. 10. Our customized algorithm is largely divided into two steps: (1) graft edge estimation (Fig. 10b) and (2) graft area extraction (Fig. 10c). To clearly show the results of each stage, we indicated the graft region of each cross-sectional image with a red square, and magnified images are shown concurrently in Fig. 10. The unprocessed input image (Fig. 10a) was initially processed by (erode and dilate) and (open and close) operations, each with a 4 × 4 mask. Then, through median filtering with a 10 × 10 mask, thresholding (reference intensity: 50), and binarization, the processed image for edge estimation was obtained (Fig. 10b). Based on the edge estimation results, the raw upper and lower boundaries were extracted. Since intensity-based boundary extraction is vulnerable to drastic intensity fluctuations of individual pixels, we introduced a moving average (filter size: 20) to smooth the boundaries. After removing outliers, the final segmented image for comparison with the deep learning-based method was obtained, as shown in Fig. 10d. For quantitative comparison of the segmented images between our proposed method and the conventional algorithm, we introduced four different factors: mean squared error (MSE), structural similarity index (SSIM), embedding similarity, and learned perceptual image patch similarity (LPIPS) distance. For measuring embedding similarity and LPIPS, pre-trained ResNet-5056 and AlexNet57 were used, respectively. Before running the models, the input images were converted from grayscale to RGB and normalized (mean: [0.485, 0.456, 0.406]; standard deviation: [0.229, 0.224, 0.225]) to match the pre-trained models.

a Original input image. b Summarized sequence to estimate the edge of the collagen sheet. c Procedure to extract the collagen sheet area from the result of b. d Final filtered image.
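
The original algorithm is implemented in MATLAB; below is a Python/OpenCV transliteration under the stated parameters (4 × 4 morphology masks, 10 × 10 median filter, intensity threshold 50, moving-average window 20), with the final outlier-removal step omitted for brevity:

```python
import numpy as np
import cv2
from scipy.ndimage import median_filter

def algorithm_segment(bscan_u8):
    """Conventional boundary extraction on an 8-bit grayscale OCT cross-section."""
    k = np.ones((4, 4), np.uint8)
    img = cv2.dilate(cv2.erode(bscan_u8, k), k)        # erode and dilate
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, k)     # open
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, k)    # close
    img = median_filter(img, size=10)                  # 10 x 10 median filter
    binary = img > 50                                  # threshold and binarization

    h, w = binary.shape
    upper, lower = np.full(w, np.nan), np.full(w, np.nan)
    for x in range(w):                                 # raw boundaries per A-line
        rows = np.flatnonzero(binary[:, x])
        if rows.size:
            upper[x], lower[x] = rows[0], rows[-1]

    kernel = np.ones(20) / 20                          # moving average to soften boundaries
    valid = ~np.isnan(upper)
    upper[valid] = np.convolve(upper[valid], kernel, mode="same")
    lower[valid] = np.convolve(lower[valid], kernel, mode="same")
    return upper, lower                                # graft lies between the two curves
```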

The OCT images of the transplanted bioengineered corneal equivalents used in this study are available in the OSF repository: https://doi.org/10.17605/OSF.IO/PN5YJ.

Codes for four different deep learning models and statistical analysis are openly available at our GitHub repository (https://github.com/LEMon-0822/Transplanted-bioengineered-corneal-equivalent_U-Net-based-deep-learning-model/).

Oliva, M. S., Schottman, T. & Gulati, M. Turning the tide of corneal blindness. Indian J. Ophthalmol. 60, 423–427 (2012).

Pascolini, D. & Mariotti, S. P. Global estimates of visual impairment: 2010. Br. J. Ophthalmol. 96, 614–618 (2012).

Gain, P. et al. Global survey of corneal transplantation and eye banking. JAMA Ophthalmol. 134, 167–173 (2016).

Coster, D. J. & Williams, K. A. The impact of corneal allograft rejection on the long-term outcome of corneal transplantation. Am. J. Ophthalmol. 140, 1112–1122 (2005).

Ilhan-Sarac, O. & Akpek, E. K. Current concepts and techniques in keratoprosthesis. Curr. Opin. Ophthalmol. 16, 246–250 (2005).

Darougar, S. & Darougar, D. (Google Patents, 2007).

Chirila, T. V. An overview of the development of artificial corneas with porous skirts and the use of PHEMA for such an application. Biomaterials 22, 3311–3317 (2001).

Duan, X., McLaughlin, C., Griffith, M. & Sheardown, H. Biofunctionalization of collagen for improved biological response: scaffolds for corneal tissue engineering. Biomaterials 28, 78–88 (2007).

Griffith, M. et al. Artificial human corneas: scaffolds for transplantation and host regeneration. Cornea 21, S54–S61 (2002).

Koudouna, E. et al. Immune cells on the corneal endothelium of an allogeneic corneal transplantation rabbit model. Investig. Ophthalmol. Vis. Sci. 58, 242–251 (2017).

Park, J. et al. Biocompatibility evaluation of bioprinted decellularized collagen sheet implanted in vivo cornea using swept‐source optical coherence tomography. J. Biophoton. 12, e201900098 (2019).

Zhang, C. et al. Biocompatibility evaluation of bacterial cellulose as a scaffold material for tissue-engineered corneal stroma. Cellulose 27, 2775–2784 (2020).

Cursiefen, C., Chen, L., Dana, M. R. & Streilein, J. W. Corneal lymphangiogenesis: evidence, mechanisms, and implications for corneal transplant immunology. Cornea 22, 273–281 (2003).

Said, D. G. et al. Histologic features of transplanted amniotic membrane: implications for corneal wound healing. Ophthalmology 116, 1287–1295 (2009).

Chan, A. S. et al. Histological features of Cytomegalovirus-related corneal graft infections, its associated features and clinical significance. Br. J. Ophthalmol. 100, 601–606 (2016).

Grewal, D. S., Brar, G. S. & Grewal, S. P. Assessment of central corneal thickness in normal, keratoconus, and post-laser in situ keratomileusis eyes using Scheimpflug imaging, spectral domain optical coherence tomography, and ultrasound pachymetry. J. Cataract Refract. Surg. 36, 954–964 (2010).

Vithana, E. N. et al. Collagen-related genes influence the glaucoma risk factor, central corneal thickness. Hum. Mol. Genet. 20, 649–658 (2011).

Copt, R.-P., Thomas, R. & Mermoud, A. Corneal thickness in ocular hypertension, primary open-angle glaucoma, and normal tension glaucoma. Arch. Ophthalmol. 117, 14–16 (1999).

Patel, S. V., McLaren, J. W., Hodge, D. O. & Bourne, W. M. Normal human keratocyte density and corneal thickness measurement by using confocal microscopy in vivo. Investig. Ophthalmol. Vis. Sci. 42, 333–339 (2001).

McLaren, J. W., Nau, C. B., Erie, J. C. & Bourne, W. M. Corneal thickness measurement by confocal microscopy, ultrasound, and scanning slit methods. Am. J. Ophthalmol. 137, 1011–1020 (2004).

Liu, Z., Huang, A. J. & Pflugfelder, S. C. Evaluation of corneal thickness and topography in normal eyes using the Orbscan corneal topography system. Br. J. Ophthalmol. 83, 774–778 (1999).

Suzuki, S. et al. Corneal thickness measurements: scanning-slit corneal topography and noncontact specular microscopy versus ultrasonic pachymetry. J. Cataract Refract. Surg. 29, 1313–1318 (2003).

Miglior, S. et al. Intraobserver and interobserver reproducibility in the evaluation of ultrasonic pachymetry measurements of central corneal thickness. Br. J. Ophthalmol. 88, 174 (2004).

Tai, L.-Y., Khaw, K.-W., Ng, C.-M. & Subrayan, V. Central corneal thickness measurements with different imaging devices and ultrasound pachymetry. Cornea 32, 766–771 (2013).

Muscat, S., McKay, N., Parks, S., Kemp, E. & Keating, D. Repeatability and reproducibility of corneal thickness measurements by optical coherence tomography. Investig. Ophthalmol. Vis. Sci. 43, 1791–1795 (2002).

Fishman, G. R., Pons, M. E., Seedor, J. A., Liebmann, J. M. & Ritch, R. Assessment of central corneal thickness using optical coherence tomography. J. Cataract Refract. Surg. 31, 707–711 (2005).

Drexler, W. et al. Ultrahigh-resolution ophthalmic optical coherence tomography. Nat. Med. 7, 502–507 (2001).

Seong, D. et al. Dynamic compensation of path length difference in optical coherence tomography by an automatic temperature control system of optical fiber. IEEE Access 8, 77501–77510 (2020).

Hsieh, Y.-S. et al. Dental optical coherence tomography. Sensors 13, 8928–8949 (2013).

Kim, Y. et al. Non-invasive optical coherence tomography data-based quantitative algorithm for the assessment of residual adhesive on bracket-removed dental surface. Sensors 21, 4670 (2021).

Welzel, J. Optical coherence tomography in dermatology: a review. Ski. Res. Technol.: Rev. Artic. 7, 1–9 (2001).

Seong, D. et al. Virtual intraoperative optical coherence tomography angiography integrated surgical microscope for simultaneous imaging of morphological structures and vascular maps in vivo. Opt. Lasers Eng. 151, 106943 (2022).

Su, R. et al. Perspectives of mid-infrared optical coherence tomography for inspection and micrometrology of industrial ceramics. Opt. Express 22, 15804–15819 (2014).

Seong, D. et al. Ultrahigh-speed spectral-domain optical coherence tomography up to 1-mhz a-scan rate using space–time-division multiplexing. IEEE Trans. Instrum. Meas. 70, 1–8 (2021).

Hong, H. et al. Ultra-stiff compressed collagen for corneal perforation patch graft realized by in situ photochemical crosslinking. Biofabrication 12, 045030 (2020).

Hoehn, A. et al. Comparison of ultrasonic pachymetry and Fourier-domain optical coherence tomography for measurement of corneal thickness in dogs with and without corneal disease. Vet. J. 242, 59–66 (2018).

Doğan, M. & Ertan, E. Comparison of central corneal thickness measurements with standard ultrasonic pachymetry and optical devices. Clin. Exp. Optom. 102, 126–130 (2019).

Li, Y., Shekhar, R. & Huang, D. Corneal pachymetry mapping with high-speed optical coherence tomography. Ophthalmology 113, 792–799.e792 (2006).

Dos Santos, V. A. et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed. Opt. Express 10, 622–641 (2019).

Wang, B. et al. Boundary aware U-Net for retinal layers segmentation in optical coherence tomography images. IEEE J. Biomed. Health Inform. 25, 3029–3040 (2021).

Kugelman, J. et al. A comparison of deep learning U-Net architectures for posterior segment OCT retinal layer segmentation. Sci. Rep. 12, 14888 (2022).

Asgari, R. et al. in Ophthalmic Medical Image Analysis: 6th International Workshop, OMIA 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, Proceedings 6. 77–85 (Springer).

Matovinovic, I. Z., Loncaric, S., Lo, J., Heisler, M. & Sarunic, M. in 2019 11th International symposium on image and signal processing and analysis (ISPA). 49–53 (IEEE).

Karn, P. K. & Abdulla, W. H. Advancing Ocular Imaging: A hybrid attention mechanism-based U-Net Model for precise segmentation of sub-retinal layers in OCT images. Bioengineering 11, 240 (2024).

Santos, V. A. D. et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed. Opt. Express 10, 622–641 (2019).

Wang, L. et al. Automated delineation of corneal layers on OCT images using a boundary-guided CNN. Pattern Recognit. 120, 108158 (2021).

Wang, L. et al. EE-Net: An edge-enhanced deep learning network for jointly identifying corneal micro-layers from optical coherence tomography. Biomed. Signal Process. Control 71, 103213 (2022).

Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19. 424–432 (Springer).

Sugar, A. et al. Factors associated with corneal graft survival in the cornea donor study. JAMA Ophthalmol. 133, 246–254 (2015).

Neff, K. D., Biber, J. M. & Holland, E. J. Comparison of central corneal graft thickness to visual acuity outcomes in endothelial keratoplasty. Cornea 30, 388–391 (2011).

Ronneberger, O., Fischer, P. & Brox, T. in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. 234–241 (Springer).

Zhou, Z., Rahman Siddiquee, M. M., Tajbakhsh, N. & Liang, J. in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4. 3–11 (Springer).

Diakogiannis, F. I., Waldner, F., Caccetta, P. & Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 162, 94–114 (2020).

Oktay, O. et al. Attention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 (2018).

Yan, Z., Yang, X. & Cheng, K.-T. Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation. IEEE Trans. Biomed. Eng. 65, 1912–1923 (2018).

He, K., Zhang, X., Ren, S. & Sun, J. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778 (2016).

Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25 (2012).

This study was supported by the National Research Foundation of Korea grant funded by the Korea government (Ministry of Science and ICT, MSIT) (NRF-2021R1A2C2013939). This research was also supported by the MSIT under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2024-RS-2022-00156389) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation). In addition, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C100605312).

These authors contributed equally: Daewoon Seong, Euimin Lee.

School of Electronic and Electrical Engineering, College of IT engineering, Kyungpook National University, Daegu, Republic of Korea

Daewoon Seong, Euimin Lee, Yoonseok Kim, Mansik Jeon & Jeehyun Kim

Bio-Medical Institute, Kyungpook National University Hospital, Daegu, Korea

Che Gyem Yae, JeongMun Choi & Hong Kyun Kim

Department of Ophthalmology, School of Medicine, Kyungpook National University, Daegu, Korea

Che Gyem Yae, JeongMun Choi & Hong Kyun Kim

H.K.K., M.J., and J.K. contributed to the conceptualization and design of the study. D.S. and E.L. handled and mainly analyzed the research data. All authors interpreted the results. D.S. and E.L. constructed deep learning models and conducted statistical analysis. H.K.K., M.J., and J.K. provided critical feedback throughout the research process. D.S. wrote the initial draft of the manuscript. E.L., Y.K., C.G.Y., J.M.C., H.K.K., M.J., and J.K. provided manuscript revision. All authors read and approved the manuscript.

Correspondence to Hong Kyun Kim or Mansik Jeon.

The authors declare no competing interests.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Seong, D., Lee, E., Kim, Y. et al. Deep learning based highly accurate transplanted bioengineered corneal equivalent thickness measurement using optical coherence tomography. npj Digit. Med. 7, 308 (2024). https://doi.org/10.1038/s41746-024-01305-3

Received: 21 May 2024

Accepted: 15 October 2024

Published: 05 November 2024

DOI: https://doi.org/10.1038/s41746-024-01305-3
