
Received 19 March 2025, accepted 13 April 2025, date of publication 18 April 2025, date of current version 6 May 2025.

Digital Object Identifier 10.1109/ACCESS.2025.3562433

Retinal Image Analysis for Heart Disease Risk Prediction: A Deep Learning Approach

N. D. BISNA, P. SONA, AND AJAY JAMES
Department of Computer Science and Engineering, Government Engineering College Thrissur, APJ Abdul Kalam Technological University, Thiruvananthapuram 695016, Kerala
Corresponding author: N. D. Bisna ([email protected])

ABSTRACT Heart disease is one of the leading causes of death worldwide. Predicting heart disease from retinal fundus images is a promising approach to the early detection and monitoring of cardiovascular health. Changes in the retinal microvasculature are indicators of systemic diseases such as cardiovascular disease and hypertension. This study explores the potential of deep learning for the early detection and prediction of cardiovascular risk through retinal images. Because the retinal microvasculature reflects the state of the body's small blood vessels, imaging the retinal vessels provides a noninvasive way to study the cardiovascular system. By leveraging the potential of Convolutional Neural Networks, retinal images are analyzed to identify patterns and anomalies that strongly correlate with cardiovascular conditions. Experimental results show that our method improves the accuracy of heart disease prediction, opening a novel and improved non-invasive approach to predicting cardiovascular disease.

INDEX TERMS Deep learning, EfficientNet-B3, multimodal data fusion, global average pooling (GAP),
Grad-CAM, data augmentation, fundus image, cardiovascular risk assessment.

The associate editor coordinating the review of this manuscript and approving it for publication was Yiqi Liu.
© 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

I. INTRODUCTION
Since cardiovascular disease (CVD) continues to rank among the world's top causes of death, better prevention, early detection, and treatment strategies are necessary. Reducing associated risks and the global health burden depends on early and accurate detection of heart disease. In the past, techniques including electrocardiography (ECG), echocardiography, stress testing, and invasive procedures like angiography have been used to diagnose CVD. Despite their effectiveness, these methods are frequently expensive, require specialized tools and expertise, and may not be available in low-resource settings. Recent developments in artificial intelligence (AI) and medical imaging have created new avenues for non-invasive, economical diagnostic methods, especially when it comes to using retinal imaging to measure cardiovascular risk. An extension of the central nervous system, the human retina provides a special view of the health of the systemic arteries. Retinal imaging is a promising method for assessing cardiovascular risk since the microvasculature of the retina reflects both physiological and pathological alterations in the body's systemic circulation. Retinal imaging offers a unique and promising approach for the early detection of cardiovascular diseases because the retinal microvasculature mirrors the condition of systemic blood vessels. Changes in the retina, such as vessel narrowing or microaneurysms, are often early indicators of systemic conditions like hypertension and atherosclerosis. Unlike invasive diagnostic methods, retinal imaging is safe, cost-effective, and widely accessible. Integrating deep learning, especially Convolutional Neural Networks (CNNs), into retinal image analysis enhances this potential by automating feature extraction and enabling the identification of subtle patterns that may escape traditional clinical evaluation. CNNs are particularly advantageous in medical imaging tasks due to their ability to capture complex spatial hierarchies within the data. As the retina reflects systemic health, leveraging its images through AI-based methods provides a non-invasive pathway for predicting cardiovascular risks with greater precision and scalability. Research has shown that anomalies in the retinal vasculature can be a sign of significant heart disease risk factors such as atherosclerosis, diabetes, and hypertension.
According to this expanding body of research, studying retinal images can reveal important information about cardiovascular health.

Image analysis has been transformed by deep learning, especially Convolutional Neural Networks (CNNs), which make it possible to automatically extract features and recognize patterns. CNNs have demonstrated impressive performance in medical imaging applications, such as the identification of glaucoma and diabetic retinopathy. Significant obstacles still exist even though previous studies have investigated the use of CNNs in predicting cardiovascular risk from retinal images. Without specifically optimizing models for heart disease risk prediction, the majority of previous research has concentrated on identifying broad vascular anomalies. Additionally, the majority of earlier research has depended on small datasets that frequently lack variation in patient demographics and cardiovascular diseases. By resolving a number of significant shortcomings of earlier studies, this work raises the bar for retinal image-based heart disease prediction.

In contrast to earlier research ([3], [5], [18]), which mostly used traditional CNN architectures, we present a more sophisticated deep learning framework that incorporates attention mechanisms to concentrate on important retinal characteristics that are suggestive of cardiovascular risk. Our approach improves feature extraction from retinal images by utilizing attention-based CNNs, which increases interpretability and classification accuracy. The generalizability of many earlier investigations has been limited by their reliance on small or homogeneous datasets. Our method makes use of a more representative and varied dataset that includes people of different ages, ethnicities, and cardiovascular diseases. This guarantees a reliable and objective model that can function well across various populations. Current techniques mostly concentrate on retinal images alone. Our study, on the other hand, investigates the integration of retinal imaging data with other clinical factors (such as blood pressure and cholesterol levels) to provide a comprehensive framework for predicting cardiovascular risk in various populations. Our study uses transfer learning techniques with pre-trained models that have been fine-tuned on retinal images, whereas previous works have used CNNs trained from scratch. This method enhances the model's responsiveness to actual clinical situations and drastically lowers the requirement for large amounts of labeled data.

The specific application of CNNs for retinal-based cardiovascular risk prediction is still in its infancy, despite the ongoing evolution of deep learning applications in medical imaging. By using AI-driven insights from retinal images, this study unites the fields of cardiology and ophthalmology and has the potential to revolutionize the identification of early-stage heart disease. Furthermore, our suggested methodology is in line with the increased focus on AI-powered diagnostics in healthcare settings with limited resources, where access to specialized care is frequently restricted. By enhancing current CNN-based methods for predicting cardiac illness, this study adds to the continuing investigation of AI in medical diagnostics. Our research attempts to create a dependable, interpretable, and scalable model for cardiovascular risk assessment by tackling the issues of data diversity, feature extraction, and multimodal integration. The encouraging results of this study may lead to additional developments in medical diagnostics, which would ultimately improve clinical judgment and patient care globally.

Since heart disease is one of the main causes of death globally, early detection is essential [1]. This study proposes a novel approach for predicting heart disease based on retinal images. Researchers have used machine learning techniques, specifically Random Forest classification, to diagnose the presence or absence of heart disease with promising accuracy using a dataset of retinal scans. To improve clarity, the technique preprocesses the retinal images. Important characteristics are extracted, including blood vessel size and background lighting. After analyzing these characteristics, the system uses machine learning techniques, particularly Support Vector Machines (SVM) and Random Forest Classifiers (RFC), to determine whether the images are suggestive of cardiac disease or not. Research shows that when it came to identifying heart disease in retinal scans, the Random Forest Classifier performed better than alternative techniques. Blood pressure readings, patient histories, and blood tests are the mainstays of traditional cardiovascular disease (CVD) risk assessment. As CVD is one of the primary causes of death worldwide [2], researchers are looking into retinal imaging as a possible substitute for non-invasive CVD risk assessment. It has been shown that retinal imaging, a frequent and painless part of many eye exams, may accurately predict blood pressure, age, gender, smoking status, and other CVD risk factors.

The use of artificial intelligence (AI) for the analysis of retinal fundus photos for the prediction of chronic diseases was examined in a Chinese study [3]. Researchers concentrated on excessive blood pressure, diabetes (hyperglycemia), and dyslipidemia (high cholesterol). More than 600 people's retinal images were examined, and the results were compared to relevant health information. While the study's performance for high blood pressure and high cholesterol was moderate (AUC = 0.766 and 0.703, respectively), it demonstrated remarkable accuracy in diagnosing diabetes (AUC = 0.88).

Using retinal images and DXA scans, research [4] investigates a deep learning-based method for identifying cardiovascular disease (CVD). The study obtained a 75.6% classification accuracy using cropped image sets. Several retrieved characteristics were used to train machine learning models for DXA scans, and XGBoost produced the best accuracy (77.4%). The advantages of integrating several data sources for cardiovascular risk assessment were highlighted when retinal images and DXA scans were combined in a multi-modal deep learning model, increasing the overall accuracy to 78.3%.


The study "Effective Heart Disease Prediction Using Hybrid Machine Learning Techniques" [6] presents a novel approach to increase the precision of machine learning-based heart disease prediction. Using Support Vector Machines (SVM), Neural Networks (NN), and Decision Trees (DT), the authors examine current methods. Their suggested Hybrid Random Forest with Linear Model (HRFLM) approach claims higher accuracy than current approaches when applied to the UCI Machine Learning Repository heart disease dataset, since it uses all dataset attributes without feature selection.

Heart disease prediction is one of the many medical imaging applications for Convolutional Neural Networks (CNNs). In order to predict the likelihood of heart disease, the paper "Heart Disease Prediction Using CNN Algorithm" [10] suggests a CNN-based system that makes use of structured data, including patient age, gender, and cholesterol levels. Data collection, preprocessing to deal with missing values, and splitting into training and test sets are all part of the system. The model was trained to discover connections between patient information and the existence of heart disease. Using a hospital dataset, the authors assessed the system and found that it had an accuracy of 85% to 88%. Although CNN-based methods have demonstrated encouraging outcomes, they have drawbacks such as the requirement for large labeled datasets and computationally costly training procedures. CNNs are better at extracting features than more conventional machine learning models like SVM and RFC, but they need to be heavily optimized in order to generalize well across a range of demographics. By utilizing structured data and image-based features, hybrid models that combine CNNs with other feature-based classifiers, like HRFLM [6], have shown better performance than standalone CNNs.

Retinal image-based heart disease prediction has advanced, but there are still a number of obstacles to overcome. Numerous earlier studies have not examined hybrid or ensemble approaches, instead concentrating on individual machine learning or deep learning models. In the area of predicting cardiac disease, the combination of CNNs with feature-based classifiers such as Random Forest and XGBoost has demonstrated promise but is not yet well studied. Furthermore, rather than using only image-based feature extraction, the majority of CNN-based studies also use structured clinical data in addition to imaging data [10], which restricts their use in situations where clinical parameters are not accessible. By improving model interpretability and generalizability through attention-based CNN architectures and hybrid feature integration, our study expands on earlier CNN-based research. Our methodology seeks to address biases in previous research by integrating a varied dataset. Additionally, we investigate multimodal data fusion, which combines retinal imaging with other health parameters to increase the accuracy of cardiovascular risk assessment beyond what can be accomplished by solo CNN models. With these improvements, our study is positioned as a major advancement in non-invasive cardiac disease prediction techniques.

II. METHODOLOGY
Heart disease is a leading cause of death globally. Early detection and risk prediction can significantly improve patient outcomes. This project proposes a deep learning based system for predicting heart disease risk using retinal images. The system leverages Convolutional Neural Networks (CNNs) to automatically extract features from retinal images that are indicative of heart disease risk.

A. THE PROPOSED MODEL
Using retinal fundus images and structured health data, the suggested approach uses a Convolutional Neural Network (CNN) to predict the risk of heart disease. Because CNNs can automatically extract hierarchical spatial features that correspond with cardiovascular problems, they are especially useful in medical image analysis. The model seeks to improve predictive accuracy and generalizability across a range of groups by combining clinical health records containing cardiovascular risk factors with retinal imaging data from EyePACS. Retinal fundus images from EyePACS are included in the dataset, along with related health information that includes important cardiovascular risk variables like blood pressure, cholesterol, body mass index (BMI), diabetes, and other metabolic markers. Instead of being assigned by hand, the heart disease risk labels were generated from structured clinical data using accepted cardiovascular risk assessment techniques. In order to ensure compliance with verified medical guidelines, the Framingham Risk Score (FRS) and ASCVD (Atherosclerotic Cardiovascular Disease) Risk Score values were used directly for classification if they had already been calculated in the patient's medical records. A separate risk classification framework was used in situations where such structured risk scores were not available. Using predetermined medical thresholds, this framework divided patients into high-risk and low-risk groups: according to cardiovascular risk factor aggregation models, patients with hypertension (systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg), hypercholesterolemia (total cholesterol ≥240 mg/dL), obesity (BMI ≥30), or diabetes were categorized as high-risk. Individuals who had no history of diabetes, normal blood pressure (<120/80 mmHg), healthy cholesterol levels (<200 mg/dL), and a normal body mass index (18.5-24.9) were classified as low-risk. This automated risk labeling technique ensures consistency, eliminates subjective biases, and allows for independent validation using external datasets. Furthermore, this method is consistent with commonly used clinical risk assessment models, which allows the system to be adapted for use in actual healthcare environments. Before training, the dataset was thoroughly preprocessed to guarantee consistency and improve model resilience. To improve numerical stability during training, pixel values were normalized using min-max scaling.
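The fallback thresholding described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and dictionary keys are assumptions, and it simplifies the scheme to a binary rule (any record not matching a high-risk criterion is labeled low-risk, whereas the paper's low-risk definition is stricter).

```python
def label_risk(patient):
    """Label a patient record high-risk (1) or low-risk (0) using the
    threshold rules from the text. Dict keys are illustrative."""
    return int(
        patient["systolic_bp"] >= 140           # hypertension (systolic)
        or patient["diastolic_bp"] >= 90        # hypertension (diastolic)
        or patient["total_cholesterol"] >= 240  # hypercholesterolemia
        or patient["bmi"] >= 30                 # obesity
        or patient["diabetes"]                  # diagnosed diabetes
    )

# A hypertensive patient is labeled high-risk despite normal cholesterol/BMI.
example = {"systolic_bp": 150, "diastolic_bp": 85,
           "total_cholesterol": 190, "bmi": 24.0, "diabetes": False}
print(label_risk(example))
```

Because any single exceeded threshold triggers the high-risk label, the rule matches the "or"-combined criteria stated above.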


Min-max scaling rescales pixel intensities to the range [0, 1]:

X_normalized = (X − X_min) / (X_max − X_min)   (1)

where the original pixel intensity is denoted by X, and the minimum and maximum pixel values are denoted by X_min and X_max, respectively. This change makes sure that the model isn't unduly impacted by changes in image brightness. To further stabilize the training process, mean subtraction was also employed to center pixel intensities around zero.

A number of data augmentation methods were used to improve generalization and lessen overfitting. To accommodate inherent variances in the angles at which images were captured, random rotation was applied within a range of ±15°. To replicate various orientations frequently seen in retinal imaging, flipping was done both horizontally and vertically. Furthermore, to improve visibility and feature extraction, adaptive histogram equalization (AHE) was applied to amplify tiny details in retinal structures and improve vascular contrast. Gamma correction, which simulates various lighting situations and makes sure the model learns to recognize pertinent patterns under varied illumination, was added to further improve resilience. By artificially increasing dataset diversity, this augmentation strategy aids the model in learning more resilient feature representations.

FIGURE 1. Proposed model architecture.
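The min-max scaling of Eq. (1) and two of the augmentations above (flips and gamma correction) can be sketched in NumPy. This is a minimal illustration, not the authors' pipeline; the flip probabilities and gamma range are assumed values.

```python
import numpy as np

def min_max_scale(img):
    """Eq. (1): rescale pixel intensities to the range [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

def augment(img, rng):
    """Random horizontal/vertical flips plus gamma correction.
    Probabilities and the gamma range are assumptions, not from the paper."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]          # vertical flip
    gamma = rng.uniform(0.8, 1.2)   # simulate varied illumination
    return np.clip(img, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
img = min_max_scale(np.array([[0.0, 64.0], [128.0, 255.0]]))
aug = augment(img, rng)
print(img.min(), img.max())
```

Since the scaled values stay in [0, 1], raising them to a power near 1 only darkens or brightens them without leaving that range.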
Since health data is typically partial, missing values were addressed in a systematic manner to protect data integrity and reduce bias. Because median imputation is less susceptible to outliers than mean imputation, it was used to impute missing values for continuous variables including blood pressure, cholesterol, and BMI. Missing values for categorical variables, such as smoking history and diabetes status, were filled in using the most common category (mode imputation). This method makes a more dependable and consistent predictive model possible by guaranteeing that all data points are genuine without adding unnatural biases.

1) MODEL TRAINING AND OPTIMIZATION
The suggested approach uses retinal fundus images to forecast the risk of heart disease using EfficientNet, a cutting-edge convolutional neural network (CNN) architecture. The effective scaling strategy of EfficientNet, which strikes a balance between depth, width, and resolution to get excellent performance with minimal computational resources, makes it very beneficial. This makes it the perfect option for medical image analysis, where precise predictions depend on capturing fine-grained details. The model was initialized using pre-trained weights from ImageNet and then fine-tuned on the labeled dataset, because creating a deep CNN from scratch necessitates a sizable dataset and significant processing capacity. By adjusting to the unique properties of retinal images linked to cardiovascular risk, fine-tuning enables the model to take advantage of the previously learned feature representations. The deeper layers were trained to discover domain-specific patterns linked to heart disease risk factors, while the earlier layers, which identify low-level elements like edges and textures, were frozen for fine-tuning. The binary cross-entropy loss function was selected as the optimization criterion since the classification problem entails determining whether a patient is at high or low risk for heart disease. Because it calculates the difference between predicted probabilities and actual labels, this loss function works well for binary classification tasks. When confidence is high, inaccurate classifications are penalized more severely. The Adam optimizer (Adaptive Moment Estimation) was used to effectively minimize the loss function and improve model convergence. Faster convergence and more stable training are achieved by Adam's dynamic adjustment of the learning rate for each parameter based on the first- and second-order moments of previous gradients. A learning rate scheduler, which decays the learning rate if validation loss does not improve over subsequent epochs, was used to gradually lower the learning rate, which was initially set at 0.001. This method efficiently fine-tunes the model and keeps the optimizer from overshooting ideal minima. To avoid overfitting, the model was trained over a number of epochs, and its performance was tracked using validation data. To make sure the model performs effectively when applied to unknown data, a number of regularization strategies were used: after a set number of epochs, training was automatically ended if the validation loss stopped getting better. This keeps overfitting and pointless calculations at bay. To randomly deactivate a portion of neurons during training, dropout layers were added to the model's fully connected layers.
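The binary cross-entropy criterion described above can be written out directly. This NumPy sketch (with an assumed clipping epsilon to avoid log(0)) shows how a confident wrong prediction is penalized far more heavily than a mildly wrong one.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between 0/1 labels and predicted
    probabilities. `eps` guards against log(0)."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0])
mild = binary_cross_entropy(y_true, np.array([0.9, 0.1, 0.8]))
# Second prediction is confidently wrong (0.9 for a true 0): loss jumps.
confident_wrong = binary_cross_entropy(y_true, np.array([0.9, 0.9, 0.8]))
print(mild, confident_wrong)
```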


FIGURE 2. Retinal fundus images.

As a result, the model is forced to learn more generalized patterns and is less dependent on particular neurons. The dropout rate was set at 0.3. To standardize activations, increase stability, and speed up convergence, batch normalization was used after the convolutional layers to efficiently adjust the model.

B. MATERIALS AND METHODS
1) DATASET
Retinal fundus images from the EyePACS collection are used in this investigation, together with organized clinical health records that include important cardiovascular risk variables. The clinical data improves predictive accuracy by including important indications like blood pressure, cholesterol, body mass index (BMI), diabetes status, and other metabolic markers, while the retinal images are the main input for the deep learning network.

The dataset was carefully preprocessed to guarantee consistency and dependability. To standardize input size and increase computational performance, the retinal images were resized to a uniform dimension of 224×224 pixels because they were taken at varying resolutions. To enhance feature extraction and model generalization, a number of data augmentation methods were used, such as adaptive histogram equalization (AHE), gamma correction, horizontal and vertical flipping, and random rotation (±15°). Missing values in the clinical data were handled methodically. Because median imputation offers robustness against outliers, it was applied to continuous variables including blood pressure, cholesterol, and BMI. To maintain data integrity for categorical variables such as diabetes status and smoking history, mode imputation was used. By minimizing bias and maintaining the dataset's robustness, this preprocessing technique improves the deep learning model's ability to predict the risk of heart disease.

2) CONVOLUTIONAL NEURAL NETWORK (CNN)
The proposed deep learning model integrates convolutional neural networks (CNNs) with structured clinical data to predict heart disease risk. The core of the image processing component is EfficientNet-B3, a state-of-the-art CNN architecture known for its balance between accuracy and computational efficiency. This section outlines the role of EfficientNet-B3 in feature extraction, its integration with clinical data, and the classification process. Here, the model combines fully connected layers for structured clinical data processing with convolutional neural networks (CNNs) for analyzing retinal fundus images. The model's components are explained layer by layer below.

3) INPUT LAYERS
Two distinct input routes are used by the model: one for structured clinical data and another for retinal images. 224 × 224 × 3 RGB fundus images are processed via the retinal image pathway, which normalizes pixel values to a 0-1 range for reliable feature analysis. Rotation, flipping, and brightness modifications are examples of data augmentation techniques that could be employed to increase the model's generalizability. A vector of numerical health data, such as blood pressure, BMI, and cholesterol, is sent to the structured clinical data pipeline. To provide consistency across various feature scales, these values are standardized using standardization or min-max scaling.

4) CNN FEATURE EXTRACTION (EFFICIENTNET-B3 BACKBONE)
EfficientNet-B3 is used as the feature extraction backbone due to its efficiency and higher accuracy compared to traditional CNNs. The key layers in EfficientNet-B3 are:
1) Initial Convolutional Layer: The model begins with a convolutional layer of 32 3 × 3 filters. The output size is maintained by setting padding to "same", and the spatial dimensions of the feature maps are reduced by a stride of 2. Swish was selected as the activation function because it improves gradient flow and lowers the likelihood of dormant neurons. Batch Normalization is applied after the convolution to stabilize the activations during training.
2) MBConv Layers: For effective feature extraction, the EfficientNet-B3 design makes use of Mobile Inverted Bottleneck Convolution (MBConv) layers. There are three main operations in these tiers. In order to extract spatial characteristics independently inside each channel, depthwise convolution is first applied. These collected characteristics are then combined and integrated across all channels using pointwise (1 × 1) convolution. Lastly, a Squeeze-and-Excitation (SE) block is added, which allows the network to highlight the most essential features by dynamically modifying the feature maps' relevance. A stride of 2 is used in the network's early layers to effectively downsample the input image and minimize its spatial dimensions. On the other hand, later layers use a stride of 1 to preserve finer features and spatial information. Squeeze-and-Excitation (SE) blocks are incorporated to further improve feature representation. This allows the model to focus on salient features more effectively, which improves performance by dynamically modifying the significance of the feature maps.

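The Swish activation used in the initial convolutional layer is simple to state; a minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    """Swish: x * sigmoid(x). Smooth and non-monotonic; unlike ReLU it keeps
    a small gradient for negative inputs, reducing 'dormant' neurons."""
    return x * sigmoid(x)

out = swish(np.array([-2.0, 0.0, 2.0]))
print(out)
```

Note how negative inputs map to small negative values instead of being zeroed out, which is the gradient-flow benefit mentioned above.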

TABLE 1. MBConv blocks with their specifications.
TABLE 2. Dense layers for feature transformation.
TABLE 3. Dense layers for feature transformation.

3) Batch Normalization Layers: After every convolutional block, batch normalization (BN) is used to enhance generalization and stabilize training. It facilitates the training of deeper networks by reducing internal covariate shift.
4) Global Average Pooling (GAP) Layer: The model uses Global Average Pooling (GAP) to process feature maps rather than fully connected layers. By calculating each feature map's average value, this method efficiently summarizes the information it contains. By drastically lowering the model's parameter count, this method helps to lower the possibility of overfitting. Additionally, GAP makes sure that the most notable characteristics that the convolutional neural network extracts are retained and included in the output.
5) Dropout Layer (Regularization): A dropout rate of 0.3 is used to reduce overfitting. This method randomly deactivates 30% of each layer's neurons during training. Because it cannot depend on any one neuron to reliably contribute to the output, this stochastic deactivation encourages the network to learn more resilient and generalizable characteristics.
6) Flatten Layer: Converts the 2D feature maps into a 1D feature vector to be fused with structured clinical data.

5) FULLY CONNECTED LAYERS
To transform its features, the structured clinical data is processed through a number of deep, fully connected layers. 64 neurons make up the first dense layer, which uses ReLU activation to add non-linearity. Batch Normalization is then used to stabilize the activations and speed up training. Then, a second dense layer with 32 neurons is added, which likewise uses ReLU activation. This second dense layer is followed by a dropout layer with a 0.3 dropout rate to improve generalization and avoid overfitting. ReLU, Batch Normalization, and dropout work together to give the model the ability to robustly learn intricate correlations in the clinical data.

6) FEATURE FUSION LAYER
A feature fusion layer is used in the model to produce a thorough representation for prediction. The feature vector that represents the structured clinical data is merged with the flattened feature vector obtained from the convolutional neural network (CNN), which contains the image-based information. Concatenation creates this combination, which yields a single, cohesive feature vector. By combining these two different data streams, the quantitative insights from the clinical parameters and the rich visual information from the retinal images, the model is able to use a more comprehensive and informative representation. Thanks to this integrated method, the model can identify intricate linkages and dependencies that could be overlooked if the data sources were handled separately, eventually producing more accurate and reliable predictions.

7) FULLY CONNECTED LAYERS WITH DROPOUT
After the feature fusion phase, the model refines the merged features and generates the final prediction using a sequence of classification layers. These layers are made up of dense layers that are successively smaller; this design decision makes feature refinement more effective. 128 neurons make up the first classification layer, which uses ReLU activation. To improve stability and avoid overfitting, Batch Normalization and a dropout rate of 0.4 are included. A second dense layer with 64 neurons that likewise uses Batch Normalization and ReLU activation comes next. To further reduce overfitting, a final dense layer with 32 neurons is added, this time using ReLU activation and a dropout rate of 0.3. Higher dropout rates in critical regions and the deliberate use of increasingly narrower layers guarantee the model's stability while successfully avoiding overfitting, enabling better generalization and prediction accuracy.

8) FINAL PREDICTION LAYER
A single neuron with a sigmoid activation function is the model's final output. This function converts the input to the neuron into a probability score between 0 and 1 that represents the likelihood of heart disease risk. Because EfficientNet-B3 has better accuracy and parameter efficiency than more conventional designs like ResNet and VGG, it is used as the CNN backbone in the suggested multi-modal model.

VOLUME 13, 2025 76393


N. D. Bisna et al.: Retinal Image Analysis for Heart Disease Risk Prediction: A Deep Learning Approach

multi-modal model. Retinal fundus images are processed by EfficientNet-B3 through a sequence of convolutional layers designed to extract hierarchical features. Depthwise separable convolutions minimize computational complexity without sacrificing accuracy.

TABLE 4. Performance metrics of the proposed model.

The model's feature fusion step concatenates the derived image features with the structured clinical data, letting both visual and numerical health characteristics contribute to a more thorough risk assessment and improving prediction performance.
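As an illustration of this fusion step (a minimal NumPy sketch, not the authors' code; the 1536-dimensional image vector and the particular clinical values are assumptions for illustration), concatenating the pooled image features with the structured clinical features yields a single joint vector:

```python
import numpy as np

# Hypothetical feature vectors: 1536 values from the EfficientNet-B3
# image branch after global average pooling (assumed size), and a
# handful of structured clinical features (illustrative values only,
# e.g., age, BMI, systolic blood pressure, a lab ratio).
image_features = np.random.rand(1536)
clinical_features = np.array([54.0, 27.3, 138.0, 0.82])

# Feature fusion: concatenate both modalities into one joint vector
# that the downstream classification layers operate on.
fused = np.concatenate([image_features, clinical_features])

print(fused.shape)  # one vector combining both modalities
```

The same concatenation is what a `Concatenate` layer performs in a multi-input deep learning framework; the key design point is that fusion happens after each modality has been reduced to a fixed-length vector.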
Global average pooling (GAP), a crucial part of the design, reduces the number of parameters, which minimizes overfitting, while retaining the essential information in the feature maps. The fully connected layers' progressive reduction scheme (128→64→32) refines the fused features while preserving computational efficiency. Activation functions are chosen deliberately: ReLU is used in the hidden layers to improve gradient flow and learning, and a sigmoid activation in the final layer yields a probability-based binary classification for assessing heart disease risk. To further improve generalization and stability, the model uses batch normalization to control activations and dropout layers to avoid overfitting, which ensures stable performance and faster convergence during training.
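The progressive 128→64→32 head and the sigmoid output described above can be sketched as a plain forward pass (NumPy, with randomly initialized weights standing in for trained ones; batch normalization and dropout are training-time mechanisms and are omitted from this inference-only sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dense(x, n_out):
    # Randomly initialized dense layer, a stand-in for trained weights.
    w = rng.normal(0.0, 0.05, size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return x @ w + b

# Fused feature vector (size is an assumption for illustration).
x = rng.random(1540)

# Progressively narrower dense layers: 128 -> 64 -> 32.
x = relu(dense(x, 128))
x = relu(dense(x, 64))
x = relu(dense(x, 32))

# Final prediction layer: one sigmoid neuron producing a risk
# probability strictly between 0 and 1.
risk = float(sigmoid(dense(x, 1))[0])
print(risk)
```

Because the sigmoid squashes any real input into (0, 1), the output can be read directly as a heart disease risk probability and thresholded (e.g., at 0.5) for the binary decision.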

III. RESULTS AND DISCUSSION

This section examines the model's effectiveness in predicting heart disease risk from retinal fundus images and structured clinical data. The results are evaluated with a variety of metrics to give a thorough picture of the model's strengths and weaknesses. Medical applications require a more comprehensive evaluation framework than typical classification tasks, where accuracy and F1-score may be adequate. Sensitivity, specificity, and AUC-ROC curves, all essential for clinical applicability, are therefore used in this study to evaluate performance. Additionally, interpretability methods such as Grad-CAM and SHAP are used to shed light on the model's decision-making process. To emphasize the benefits of the proposed method, a comparison with alternative deep learning architectures, namely a plain CNN, VGG16, and ResNet-50, is also carried out.

The efficacy of the proposed model is first assessed using common classification criteria: accuracy, precision, recall (sensitivity), specificity, and F1-score. While these metrics give a basic picture of how well the model works, AUC-ROC (Area Under the Receiver Operating Characteristic Curve) is a more accurate indicator of the model's capacity to differentiate between high-risk and low-risk patients. The main performance indicators of the proposed approach are compiled in Table 4. The model's overall accuracy of 92.4% demonstrates its strong generalization ability. More significantly, the AUC-ROC value of 96.3% indicates that the model successfully distinguishes between high-risk and low-risk cases, making it highly dependable for clinical application. In medical diagnostics, sensitivity (93.5%) is essential for accurately identifying high-risk cases and reducing the possibility of false negatives, which could leave heart disease undetected and untreated. At the same time, a specificity of 91.2% guarantees that low-risk people are not mistakenly categorized as high-risk, avoiding unnecessary medical procedures.

FIGURE 3. Confusion matrix of the proposed model.

To assess the model's practical usefulness, a thorough confusion matrix analysis was conducted to understand the distribution of false positives and false negatives. As the confusion matrix shows, the model incorrectly identified 28 high-risk cases as low-risk, producing false negatives. Since neglecting a high-risk patient could lead to major complications or even death, this is a crucial concern in healthcare. Future research could prioritize sensitivity through cost-sensitive learning strategies or ensemble learning techniques to lessen this. The model also produced 35 false positives, where low-risk people were mistakenly labeled as high-risk. Although these pose no immediate health danger, the affected individuals may become anxious and undergo needless tests. The model does not show significant bias toward any one class, which supports fairness and dependability in practical medical applications, as evidenced by the comparatively even distribution of




FIGURE 4. Grad-CAM visualization.

TABLE 5. Performance comparison with other models.

FIGURE 5. Epoch vs Accuracy.

false positives and false negatives. The inability to interpret deep learning models is one of the main obstacles to their use in the medical field. To foster confidence and to make sure the model does not rely on spurious correlations, medical professionals demand transparency in AI decision-making. This was addressed with Grad-CAM (Gradient-weighted Class Activation Mapping), which creates heatmaps highlighting the retinal regions that most influence the predictions.
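At its core, a Grad-CAM heatmap is a ReLU of the gradient-weighted sum of the last convolutional feature maps. The following is a minimal NumPy sketch of that computation, using synthetic feature maps and gradients since the trained network is not reproduced here (the 7×7×64 shape is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the last conv layer of the image branch:
# 'feature_maps' is (H, W, C); 'grads' holds d(score)/d(feature_maps)
# for the predicted class, as back-propagated through the network.
feature_maps = rng.random((7, 7, 64))
grads = rng.normal(size=(7, 7, 64))

# 1) Channel importance: global-average-pool the gradients.
weights = grads.mean(axis=(0, 1))            # shape (64,)

# 2) Weighted sum of the feature maps, then ReLU so only regions
#    that push the prediction up survive.
cam = np.maximum(feature_maps @ weights, 0)  # shape (7, 7)

# 3) Normalize to [0, 1] so the map can be upsampled and overlaid
#    on the retinal image as a heatmap.
cam = cam / (cam.max() + 1e-8)
print(cam.shape)
```

In practice the feature maps and gradients come from the trained CNN (e.g., via a framework's gradient tape), and the low-resolution map is upsampled to the input image size before overlay.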
According to the Grad-CAM visualizations, the model mainly concentrates on vascular structures, optic disc abnormalities, and areas with microvascular damage, which are recognized biomarkers of cardiovascular risk. This implies that the model is consistent with accepted medical knowledge, enhancing its dependability. Moreover, the contributions of the structured clinical data elements were examined using SHAP (SHapley Additive exPlanations). The findings show that the most important indicators of heart disease risk are BMI, systolic blood pressure, cholesterol, and retinal microvascular alterations. These results support the significance of combining imaging and non-imaging information to enhance predictive performance.

The proposed model's performance was evaluated against ResNet-50, VGG16, and a conventional CNN to confirm its efficacy. The findings show that the proposed multimodal model based on EfficientNet-B3 outperforms the conventional architectures. Several key factors account for this improvement. First, the integration of multimodal learning lets the model combine structured clinical data with image-based characteristics, producing a more thorough and insightful depiction of patient risk factors. By adding pertinent clinical characteristics, our strategy improves predictive accuracy compared with traditional CNN-based techniques that use image features alone. Furthermore, the EfficientNet-B3 backbone is essential for optimizing feature extraction: its architecture achieves excellent performance with a smaller computational footprint, ensuring improved generalization across diverse datasets. Thanks to this effective feature extraction, the model can identify intricate patterns in retinal images without undue computational burden.

FIGURE 6. Epoch vs Loss.

Figures 5 and 6 present the training and validation performance of the model over 25 epochs, showing accuracy and loss values at intervals of 5 epochs. As training progresses, both training and validation accuracy improve, with training accuracy rising from 75.2% at epoch 5 to 93.5% at epoch 25, while validation accuracy increases from




FIGURE 7. Learning Rate vs Loss.
FIGURE 9. Batch size vs accuracy.

FIGURE 8. Learning Rate vs Loss.
FIGURE 10. Optimizer vs Accuracy.

72.1% to 92.4%. Simultaneously, the training and validation loss decrease, indicating better model optimization: training loss drops from 0.56 to 0.18, and validation loss decreases from 0.61 to 0.22. The trend suggests that the model learns effectively, with a minimal gap between training and validation accuracy, indicating good generalization to unseen data.

Figures 7 and 8 compare the performance of different learning rates in terms of accuracy, loss, and convergence speed. A learning rate of 0.01 is the most effective, achieving 92.4% accuracy with a low loss of 0.22, making it the optimal choice. A higher learning rate of 0.1 leads to fast but unstable learning, resulting in lower accuracy and higher loss. Conversely, smaller learning rates such as 0.001 and 0.0001 result in slower convergence, with 0.0001 taking too long to learn despite maintaining reasonable accuracy. The results highlight the importance of selecting an appropriate learning rate to ensure both efficient training and high performance.

The batch size vs accuracy graph illustrates how different batch sizes affect training and validation accuracy, as well as convergence speed. A smaller batch size (4) yields the highest training accuracy (93.1%) but slows down convergence. Increasing the batch size to 8 or 16 provides a balance between accuracy and training speed, with validation accuracy peaking at 92.0% for batch size 8. However, a larger batch size (32) speeds up convergence but reduces both training (89.5%) and validation (87.9%) accuracy. This trend suggests that while larger batch sizes improve computational efficiency, they may slightly compromise model generalization.

In the comparison of different optimizers, Adam performs best, achieving the highest accuracy (92.4%) and the lowest loss (0.22) while ensuring




FIGURE 11. Optimizer vs Loss.

FIGURE 12. Activation Function vs Accuracy & training time.

fast and stable convergence. RMSprop also delivers good performance, with 91.5% accuracy and 0.24 loss, making it a solid alternative. SGD (Stochastic Gradient Descent) shows slower convergence, leading to moderate accuracy (89.2%) and slightly higher loss. AdaGrad, despite its adaptiveness, performs the worst, with the lowest accuracy (87.8%) and the highest loss (0.35), struggling with slow convergence. These results highlight Adam as the most efficient optimizer for balancing accuracy, loss, and training speed.
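The behavioral difference behind this ranking can be illustrated on a toy problem. The sketch below (NumPy; the quadratic loss, learning rate, and step counts are illustrative choices, not the paper's training runs) contrasts vanilla SGD with Adam's bias-corrected, adaptive updates:

```python
import numpy as np

def grad(w):
    # Gradient of the toy loss f(w) = (w - 3)^2, minimized at w = 3.
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=100):
    # Plain gradient descent: fixed step in the raw gradient direction.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def adam(w, lr=0.1, steps=100, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: momentum (first moment) plus per-parameter step scaling
    # (second moment), with bias correction for the early steps.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Both optimizers should approach the minimum at w = 3.
print(sgd(0.0), adam(0.0))
```

On well-conditioned problems both converge; Adam's advantage in the experiments above comes from its adaptive per-parameter scaling, which copes better with the heterogeneous gradients of a multimodal network.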
When evaluating different activation functions by accuracy, training time, and loss, ReLU achieves the highest accuracy (92.4%) and the lowest loss (0.22), making it the most effective choice while keeping training time at a reasonable 45 minutes. LeakyReLU and ELU provide slightly lower accuracy (91.8% and 91.2%) with slightly longer training times. Tanh and Sigmoid, however, result in lower accuracy (89.7% and 85.3%), higher loss values, and significantly longer training times (52 and 58 minutes, respectively). This suggests that ReLU is the best option for balancing accuracy, efficiency, and loss in deep learning models.

FIGURE 13. Dropout rate on accuracy and loss.

A dropout rate of 0.1 achieves the highest accuracy (94.3%) and the lowest loss (0.18), making it the best option. As the dropout rate increases, accuracy gradually declines and loss increases, indicating that excessive dropout reduces model performance. While dropout helps prevent overfitting, excessively high values (e.g., 0.5) lead to a significant accuracy reduction (90.2%) and increased loss (0.30). The findings suggest that a moderate dropout rate (around 0.1 to 0.3) provides a good balance between regularization and performance. Moreover, a major factor in the model's efficacy is the resilience of the training process. By keeping the model from depending too much on any one feature, dropout layers help minimize overfitting. Data augmentation techniques, which expose the model to a wide variety of variations in the training images, further improve generalization. Finally, the use of adaptive learning rates guarantees that the model converges successfully, enhancing overall stability and performance. These combined characteristics account for the model's strong predictive capacity in determining heart disease risk from retinal images.

IV. CONCLUSION
With the help of integrated clinical data, this work showed that it is feasible to use deep learning, specifically the EfficientNet-B3 architecture, to predict cardiovascular risk from retinal fundus images. The model demonstrated its promise as a non-invasive screening tool by achieving a high degree of predictive accuracy. This technique provides a supplementary approach to current diagnostic tools such as ECG and blood tests by detecting subtle retinal vascular alterations




associated with cardiovascular diseases, thereby enhancing early risk stratification. However, the presence of false negatives calls for further improvement through methods such as cost-sensitive learning. Validating the model's generalizability across diverse populations and clinical contexts should be a top priority for future research, which should also address potential dataset biases and real-world variability. Despite the encouraging results, clinical integration requires thorough validation and comparative research to evaluate its efficiency and cost-effectiveness against accepted practices. Retinal imaging offers a potentially effective and accessible screening method, but its practical application depends on robust algorithms that can handle a variety of patient demographics and image variability. To improve cardiovascular disease prevention and management, future research should concentrate on rigorous clinical studies assessing its effect on patient outcomes and investigate its potential for seamless integration into current healthcare processes.

ACKNOWLEDGMENT
The authors would like to thank APJ Abdul Kalam Technological University (KTU) and the Department of Computer Science and Engineering, Government Engineering College, Thrissur, for the facilities and support provided during the research.




N. D. BISNA received the B.Tech. degree in computer science and engineering from the University of Calicut, in 2003, and the M.Tech. degree in computer science and engineering from Anna University, in 2015. She is currently with the Department of Computer Science and Engineering, Government Engineering College Thrissur, APJ Abdul Kalam Technological University, Thiruvananthapuram, India. Her research interests include machine learning in healthcare, deep learning, and the IoT.

P. SONA received the B.Tech. degree from the Vimal Jyothi Engineering College, Kannur. She is currently pursuing the M.Tech. degree with the Department of Computer Science and Engineering, Government Engineering College Thrissur, APJ Abdul Kalam Technological University, Thiruvananthapuram, India.

AJAY JAMES received the B.Tech. degree in computer science and engineering from Manonmaniam Sundaranar University, in 2002, the M.Tech. degree in computer science and engineering from Pondicherry University, in 2008, and the Ph.D. degree from NIT Durgapur, in 2020. He is currently an Associate Professor with the Department of Computer Science and Engineering, Government Engineering College Thrissur, APJ Abdul Kalam Technological University, Thiruvananthapuram, India. His research interests include machine learning, image processing, and video processing.
