Brain Tumor Identification using MRI Images
A Project Report
Submitted in Partial Fulfilment of the
Requirements for the Award of the Degree Of
Bachelor of Technology
in
CSE(Data Science)
By
Gajjela Sravanthi (22955A6715)
Associate Professor
April, 2024
DECLARATION
I certify that
a. the work contained in this report is original and has been done by me under the guidance
of my supervisor(s).
b. the work has not been submitted to any other Institute for any degree or diploma.
c. I have followed the guidelines provided by the Institute in preparing the report.
d. I have conformed to the norms and guidelines given in the Ethical Code of Conduct of the
Institute.
e. whenever I have used materials (data, theoretical analysis, figures, and text) from other
sources, I have given due credit to them by citing them in the text of the report and giving
their details in the references. Further, I have taken permission from the copyright owners
of the sources, whenever necessary.
CERTIFICATE
This is to certify that the project report entitled Brain Tumor Identification using MRI
Images submitted by Ms. Gajjela Sravanthi to the Institute of Aeronautical Engineering,
Hyderabad, in partial fulfilment of the requirements for the award of the Degree Bachelor of
Technology in CSE (Data Science) is a bonafide record of work carried out by her under
my guidance and supervision. In whole or in part, the contents of this report have not been
submitted to any other institute for the award of any Degree.
Date:
APPROVAL SHEET
This project report entitled Brain Tumor Identification using MRI Images submitted by
Ms. Gajjela Sravanthi is approved for the award of the Degree of Bachelor of Technology in Data
Science.
Examiner Supervisor
Principal
Dr. L V Narasimha Prasad
Date:
Place:
ACKNOWLEDGEMENT
I would like to seize this moment to convey my heartfelt appreciation to everyone who
supported, motivated, and cooperated with me in various capacities throughout this project. It
brings me immense joy to recognize the assistance provided by those individuals who played
a crucial role in ensuring its successful culmination.
I extend my heartfelt thanks to Dr. Patil, Head of the Department of Data Science, who served
as my supervisor. I deeply appreciate the invaluable guidance and support provided by the
faculty of the Data Science Department throughout the development of this project. Without
their advice, cooperation, and encouragement, this project would not have come to fruition. A
special note of gratitude goes to my friends for their assistance in project development. Lastly,
I wish to convey my gratitude to our principal, Dr. L V Narasimha Prasad, the management,
and my parents for their unwavering support in all circumstances.
With Gratitude,
Gajjela Sravanthi (22955A6715)
Abstract
The accurate and timely classification of brain tumors is critical for effective treatment and patient
outcomes. This research proposes a hybrid deep learning model that combines Convolutional Neural
Networks (CNNs) and Transformers to automatically classify brain tumors from MRI images. The CNN
is used as a feature extractor to capture complex spatial features from the MRI scans, while the
Transformer architecture improves the model's understanding of global relationships and contextual
dependencies within these data. This combination exploits both local and global imaging information to
enable more accurate classification of tumor types. The model is trained and evaluated on a dataset of
2270 brain MRI images and performs well on key metrics such as accuracy, precision, recall, and
F1-score. The proposed system shows great promise for increasing diagnostic efficiency and accuracy,
providing doctors with a dependable tool to support early tumor diagnosis and individualized treatment
planning. Expanding the dataset, improving the model architecture, and deploying the system in clinical
settings for real-time tumor classification are possible future tasks.
Keywords:
Tumor Classification, Convolutional Neural Networks (CNN), Transformer, Deep Learning,
Medical Image Analysis, MRI, Hybrid Model, Multi-Class Classification, Automated Diagnosis,
Feature Extraction.
Table of Contents
List of Figures
1.1 MRI images of the brain without tumour and with tumour
5.4 Output
CHAPTER -1
INTRODUCTION
1.1 INTRODUCTION
Brain tumors are abnormal growths of cells within the brain, which can be life-threatening
and require precise detection and classification for effective treatment. Among the various
types of brain tumors, glioma, meningioma, and pituitary tumors are commonly
encountered in clinical practice. Accurate identification and differentiation of these
tumors from normal brain tissue (no tumor) is crucial for timely and appropriate medical
intervention.
Because magnetic resonance imaging (MRI) can produce comprehensive images of
brain tissues, it is one of the main technologies used by doctors to diagnose brain
cancers. However, the manual examination of MRI scans is a time-consuming and
subjective process, often prone to errors, particularly when faced with subtle or complex
tumor features. Consequently, there has been a surge in interest in applying artificial
intelligence (AI), particularly deep learning, to automate the classification of brain
tumors.
Convolutional Neural Networks (CNNs), a type of deep learning, have proven very
effective at extracting significant spatial characteristics from images in image
classification tasks. CNNs work well at spotting local patterns in brain MRI data, such as
shapes, edges, and textures, which are essential for tumor identification. On their own,
however, CNNs are not always able to capture the relationships between different regions of
an image or its global context. This limitation can make it harder for them to handle more
difficult classification tasks, such as differentiating between brain tumor types that have
similar local characteristics but distinct overall structures.
To overcome this challenge, we propose a hybrid deep learning model that combines
CNNs with Transformer architectures. Transformers, originally designed for natural
language processing, have recently shown promise in image classification by using
self-attention mechanisms to capture long-range dependencies and contextual relationships in
data. By integrating CNNs and Transformers, our approach aims to leverage the strengths
of both models—CNNs for detailed local feature extraction and Transformers for
capturing global context—thereby improving the accuracy of brain tumor classification.
Fig 1.1 MRI images of the brain without tumour and with tumour
1.2 EXISTING SYSTEM
• Histogram of Oriented Gradients (HOG): HOG is a feature descriptor used to detect objects in
images. While it is effective for certain computer vision tasks, it may not be well-suited for the
nuanced and complex patterns in MRI images of brain tumors. The fixed nature of HOG features
can miss subtle variances and irregularities, leading to lower accuracy in tumor identification.
• Manual identification is subjective and varies between radiologists, leading to inconsistent
results. It is also labor-intensive and delays diagnosis and treatment planning.
• Traditional machine learning techniques require extensive manual feature extraction, demand
domain expertise, and struggle with the high dimensionality and complexity of medical images.
• Rule-based systems rely on predefined rules that may not adapt well to image variations and
lack the ability to learn and improve from new data.
• Other deep learning methods, such as RNNs, are not well suited to image data without
modification and handle spatial dependencies poorly, while fully connected networks are
inefficient because they lack spatial hierarchies.
• Non-deep-learning image processing techniques are too simplistic for complex medical images
and often fail on subtle differences; region-based methods are sensitive to noise and initial
conditions and struggle with heterogeneous tumor tissues.
• Hybrid methods introduce additional design complexity and require more computational
power, making them less efficient than plain CNNs.
1.3 PROPOSED SYSTEM
In our proposed system for Brain Tumor Identification using MRI Images, a Convolutional Neural
Network (CNN) is used as the core technique for medical image processing. A CNN is a type of
artificial neural network designed specifically for processing pixel data and is widely used for image
recognition. It is a powerful deep learning method for both generative and descriptive tasks, typically
applied in machine vision problems such as image and video recognition, as well as in recommender
systems and natural language processing (NLP). A neural network is a system of hardware and
software whose operation is modelled on the neurons of the human brain. The CNN extracts spatial
features from the medical images, while transformers, known for their capability in capturing global
context and relationships within the data, are utilized to further enhance the feature representation. By
integrating CNNs and transformers, the system aims to improve the accuracy and efficiency of tumor
detection, overcoming the limitations of manual diagnosis and offering a more robust solution for
medical professionals.
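To make the integration concrete, the sketch below outlines one way such a hybrid could be wired up in TensorFlow/Keras. It is a minimal illustration, not the exact architecture used in this project: the layer counts, filter sizes, number of attention heads, and the `build_hybrid_model` name are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative hybrid CNN + Transformer classifier (layer sizes are assumptions,
# not the exact configuration used in this project).
def build_hybrid_model(input_shape=(240, 240, 3), num_classes=4) -> tf.keras.Model:
    inputs = layers.Input(shape=input_shape)

    # CNN stage: local spatial feature extraction.
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)

    # Turn the feature map into a sequence of tokens for the Transformer stage.
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    tokens = layers.Reshape((h * w, c))(x)

    # Transformer stage: self-attention over tokens captures global context.
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=c)(tokens, tokens)
    x = layers.LayerNormalization()(tokens + attn)
    x = layers.GlobalAveragePooling1D()(x)

    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```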
In our proposed technique, all input images are resized to a constant size of 240x240 so that they have
uniform dimensions. The system leverages a dataset of 2270 brain MRI images, divided into training
and testing subsets. The transformer architecture is used to capture long-range dependencies and
relationships within the images, which is especially useful for complex medical data. The first
convolutional layer applies 32 convolutional filters of size 3x3 to the three-channel input tensor. We
use ReLU as the activation function; the rectified linear unit (ReLU) is a piecewise linear function that
outputs the input directly if it is positive and outputs zero otherwise. The integration of transformers
into the traditional CNN framework is expected to significantly improve classification accuracy,
providing a faster and more reliable method for identifying the type of brain tumor, ultimately assisting
in better clinical outcomes and patient care.
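For reference, the ReLU behaviour described above can be written in a single line of NumPy; this is a generic illustration rather than code taken from the project.

```python
import numpy as np

def relu(x):
    # Outputs the input directly if it is positive, otherwise outputs zero.
    return np.maximum(x, 0)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # -> [0.  0.  0.  1.5 3. ]
```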
1.3.1. MERITS OF PROPOSED SYSTEM
In our proposed system for Brain Tumor Identification using MRI Images, several merits
distinguish it from traditional methods:
• High Accuracy: The proposed system leverages advanced deep learning techniques, ensuring
high accuracy in detecting and classifying brain tumors from MRI images.
• Early Detection: By identifying tumors at an early stage, the system can significantly improve
patient outcomes and survival rates through timely intervention and treatment.
• Non-Invasive: MRI imaging is a non-invasive method, making the detection process safer and
more comfortable for patients compared to invasive diagnostic methods.
• Automated Processing: The automated nature of the system reduces the workload on
radiologists and healthcare professionals, allowing them to focus on more critical tasks and
increasing overall efficiency.
• Consistency: The system provides consistent results, minimizing the variability and subjectivity
that can occur with human interpretation of MRI images.
• Scalability: The proposed system can be scaled to process large volumes of MRI images, making
it suitable for deployment in large healthcare facilities and research institutions.
CHAPTER 2
LITERATURE SURVEY
The literature review on brain tumor identification using MRI images has evolved significantly over the
years, with numerous studies contributing to the advancement of this critical field. Among the early
contributors, Smith et al. presented a groundbreaking work titled "Deep CNN: A Deep Convolutional
Neural Network for Brain Tumor Detection in MRI Images." Their objective was to develop a deep
CNN model tailored for detecting brain tumors in MRI scans. While their research marked a significant
step forward, it was limited by a relatively small dataset for training, which potentially restricted the
model's generalizability. Additionally, they highlighted a lack of standardization in evaluation metrics,
making cross-study comparisons challenging.
Building upon this foundation, Zhang and Wang conducted a comprehensive survey titled "Deep
Learning for Brain Tumor Detection in MRI Images: A Survey." Their work reviewed recent
advancements in deep learning methods for brain tumor detection, offering a broad overview of state-
of-the-art techniques. However, specific gaps or drawbacks in their survey were not detailed in the text.
In another study, Kim et al. aimed to enhance brain tumor classification through CNN-based feature
extraction from MRI images. Their research, "Improved Brain Tumor Classification via CNN-Based
Feature Enhancement of MR Images," demonstrated improved classification accuracy. However, they
noted that the performance of their method was heavily dependent on the choice of CNN architecture,
indicating that model selection played a crucial role in the effectiveness of their approach.
Chen and Li further expanded the scope of research with their work titled "Brain Tumor Detection and
Segmentation Using Deep Learning." Their objective was to develop a CNN-based approach capable of
both detecting and segmenting brain tumors in MRI images. Despite the promising results, they
provided limited explanation of their network architecture, which could hinder reproducibility and
future developments in the field.
In a broader context, Patel et al. reviewed existing methods for brain tumor segmentation and
classification in their study, "A Review on Brain Tumor Segmentation and Classification in MRI
Images." While their review encompassed various techniques, it lacked a comprehensive comparison
among different methods, making it difficult to pinpoint the most effective approaches. This
highlighted the need for more in-depth comparative studies to guide future research.
Lastly, Liu et al. offered an extensive review of deep CNN models for brain tumor segmentation in
their work, "Deep Convolutional Neural Networks for Brain Tumor Segmentation: A Review." They
provided valuable insights into various models, but their discussion on real-world application
challenges was limited. This gap underscored the necessity for further exploration into practical
implementation issues to bridge the gap between research and clinical practice.
Together, these studies paint a detailed picture of the strides made in brain tumor identification using
MRI images, while also highlighting areas where future research can address existing limitations and
enhance the applicability of these advanced techniques in clinical settings.
2.2.1 SOFTWARE REQUIREMENTS
• Programming Language: Python 3.x for coding the deep learning model and associated
scripts.
• Data Processing Libraries: Pandas and NumPy for efficient data manipulation, and scikit-learn
for preprocessing tasks.
2.2.2 HARDWARE REQUIREMENTS
Developing and deploying a system for brain tumor identification using MRI images involves
significant computational resources. The essential hardware requirements are:
1. High-Performance CPU:
- A multi-core processor (e.g., Intel Xeon or AMD Ryzen) with high clock speeds is essential for
general data processing and running the operating system efficiently.
- Minimum: 8-core processor
- Recommended: 16-core or higher
2. Powerful GPU:
- A high-performance GPU (Graphics Processing Unit) is critical for deep learning tasks, as it
accelerates the training and inference processes of convolutional neural networks (CNNs).
- Minimum: NVIDIA GTX 1080 Ti
- Recommended: NVIDIA RTX 3090 or NVIDIA A100
3. Memory (RAM):
- Sufficient RAM is necessary to handle large datasets and facilitate efficient data processing.
- Minimum: 32 GB
- Recommended: 64 GB or higher
4. Storage:
- Fast and ample storage is required for storing large MRI datasets and trained models. SSDs
(Solid State Drives) are preferred for their speed.
- Minimum: 1 TB SSD
- Recommended: 2 TB SSD (or more) with additional HDDs for backup and long-term storage
5. Cooling System:
- Proper cooling is essential to maintain optimal performance and prevent overheating during
intensive computations.
- Recommended: Liquid cooling or high-end air cooling systems
By ensuring these hardware components are in place, one can efficiently develop and deploy a
robust system for brain tumor identification using MRI images, leveraging advanced deep
learning techniques to achieve high accuracy and reliability.
2.2.3 FUNCTIONAL REQUIREMENTS
CHAPTER 3
SYSTEM DESIGN
3.1 SYSTEM ARCHITECTURE
CHAPTER 4
4.1 METHODOLOGY
1. Data Acquisition
The dataset used for this project consists of brain MRI images labeled for the presence or absence of
tumors. The dataset is collected from publicly available sources (such as Kaggle or medical
institutions) and contains images of varying sizes and quality. The dataset is split into three subsets:
training, validation, and testing.
2. Data Preprocessing
Before feeding the images into the neural networks, the following preprocessing steps are performed (a code sketch follows the list):
Resizing: All images are resized to a fixed dimension of 240x240 pixels to maintain consistency in input
size for both CNN and ViT models.
Normalization: Pixel values are normalized to fall within the [0, 1] range by dividing the RGB values
by 255.
Data Augmentation: To increase the diversity of the dataset and reduce overfitting, data augmentation
techniques like horizontal flipping, rotation, zooming, and shifting are applied to the training set.
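A minimal sketch of these preprocessing steps, assuming the Keras `ImageDataGenerator` API; the directory layout, batch size, validation split, and augmentation ranges shown here are assumptions rather than values confirmed by the report.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (240, 240)   # all MRI images are resized to 240x240
BATCH = 32              # assumed batch size

# Augmentation + normalization for training; only rescaling for validation/testing.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to the [0, 1] range
    horizontal_flip=True,    # augmentation: horizontal flipping
    rotation_range=15,       # augmentation: rotation (assumed range, in degrees)
    zoom_range=0.1,          # augmentation: zooming (assumed range)
    width_shift_range=0.1,   # augmentation: horizontal shifting (assumed range)
    height_shift_range=0.1,  # augmentation: vertical shifting (assumed range)
    validation_split=0.1,    # hold out part of the training data for validation (assumed)
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Assumed directory layout: one sub-folder per class under Training/ and Testing/.
train_gen = train_datagen.flow_from_directory(
    "dataset/Training", target_size=IMG_SIZE, batch_size=BATCH,
    class_mode="categorical", subset="training")
val_gen = train_datagen.flow_from_directory(
    "dataset/Training", target_size=IMG_SIZE, batch_size=BATCH,
    class_mode="categorical", subset="validation")
test_gen = test_datagen.flow_from_directory(
    "dataset/Testing", target_size=IMG_SIZE, batch_size=BATCH,
    class_mode="categorical", shuffle=False)  # keep order fixed for evaluation
```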
3. CNN Model Design
The Convolutional Neural Network (CNN) is designed with the key components described in Section
1.3. The architecture is optimized using the Adam optimizer with binary cross-entropy as the loss
function, and early stopping is applied to prevent overfitting.
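A possible Keras sketch of this training configuration is shown below; `model` stands for the CNN, `train_gen` and `val_gen` for data generators, and the patience and epoch count are assumed values.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Adam optimizer with binary cross-entropy, as described above. `model` is the CNN,
# and `train_gen` / `val_gen` are assumed to be generators built with
# class_mode="binary" for the tumor / no-tumor task. Patience and epoch count are
# assumed values, not settings confirmed by the report.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

early_stop = EarlyStopping(monitor="val_loss",
                           patience=5,
                           restore_best_weights=True)

history = model.fit(train_gen,
                    validation_data=val_gen,
                    epochs=30,
                    callbacks=[early_stop])
```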
4. Vision Transformer (ViT) Model
The Vision Transformer (ViT) model is also explored for this task. The key steps, sketched in code after this list, include:
Patch Embedding: Each input image is divided into patches (e.g., 16x16), which are flattened and
linearly projected into a lower-dimensional space.
Transformer Encoder: A series of transformer encoder blocks are applied to learn the relationships
between different patches in the image. The encoder block consists of multi-head self-attention layers
and feed-forward neural networks with layer normalization.
Classification Head: The output from the transformer is passed through a classification head for binary
classification.
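A compact sketch of these three steps in TensorFlow/Keras is given below. It uses a single encoder block, omits the positional embeddings and class token of the full ViT, and the embedding dimension and head count are assumed values, so it should be read as an illustration of the structure rather than the project's exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers

PATCH_SIZE = 16   # each image is split into 16x16 patches
EMBED_DIM = 64    # assumed embedding dimension
NUM_HEADS = 4     # assumed number of attention heads

def build_vit_stub(image_size=240) -> tf.keras.Model:
    inputs = layers.Input(shape=(image_size, image_size, 3))

    # Patch embedding: a strided convolution extracts non-overlapping 16x16 patches
    # and linearly projects each one to EMBED_DIM.
    patches = layers.Conv2D(EMBED_DIM, PATCH_SIZE, strides=PATCH_SIZE)(inputs)
    num_patches = (image_size // PATCH_SIZE) ** 2
    x = layers.Reshape((num_patches, EMBED_DIM))(patches)

    # One transformer encoder block: multi-head self-attention + feed-forward,
    # each with layer normalization and a residual connection.
    attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(x, x)
    x = layers.LayerNormalization()(x + attn)
    ffn = layers.Dense(EMBED_DIM * 2, activation="relu")(x)
    ffn = layers.Dense(EMBED_DIM)(ffn)
    x = layers.LayerNormalization()(x + ffn)

    # Classification head for the binary tumor / no-tumor decision.
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```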
Implementation
5. Training
Both models (CNN and ViT) are trained on the preprocessed dataset. The training involves:
6. Evaluation
After training, the models are evaluated using the test set. Performance metrics include accuracy, precision, recall, and F1-score.
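Under the same assumptions as the earlier sketches (a trained `model` and a `test_gen` generator created with `shuffle=False`), these metrics can be computed with scikit-learn as follows:

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# Predict class probabilities on the held-out test set and convert to class indices.
y_prob = model.predict(test_gen)
y_pred = np.argmax(y_prob, axis=1)   # predicted class index per image
y_true = test_gen.classes            # ground-truth labels (valid because shuffle=False)

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=list(test_gen.class_indices)))
```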
Scalability and Maintenance:
Scalability: Designed the system architecture to handle multiple concurrent requests and ensure
robust performance under varying workloads.
Maintenance: Implemented error logging and monitoring mechanisms to facilitate rapid
troubleshooting and regular updates for model retraining with new data.
1. NumPy (`numpy`)
Purpose: Provides support for large, multi-dimensional arrays and matrices, along with a collection of
mathematical functions to operate on these arrays.
Usage: Useful for handling numerical data, performing mathematical operations, and efficient storage
of image pixel data.
2. Pandas (`pandas`)
Purpose: Offers data structures and data analysis tools for manipulating numerical tables and time series.
Usage: Ideal for loading, manipulating, and analyzing tabular data, such as CSV files containing
metadata about the images.
3. TensorFlow (`tensorflow`)
Purpose: An end-to-end open-source platform for machine learning, providing a comprehensive
ecosystem of tools, libraries, and community resources.
Usage: Acts as the backend for building and training the CNN model. TensorFlow includes Keras, which
is a high-level API for building neural networks.
4. Matplotlib (`matplotlib.pyplot`)
Purpose: A comprehensive library for creating static, animated, and interactive visualizations in Python.
Usage: For plotting training and validation metrics like loss and accuracy over epochs.
5. Scikit-learn (`sklearn`)
Purpose: For evaluation metrics and model validation.
Usage: For computing accuracy, classification report, precision, recall, and F1-score.
6. Seaborn (`seaborn`)
Purpose: A statistical data visualization library based on Matplotlib.
Usage: To create attractive and informative statistical graphics, such as heatmaps for confusion matrices.
We used the Brain Tumor Classification (MRI) dataset for our experiment, taking a total of 2270 images
covering different tumour types: pituitary tumor, meningioma tumor, glioma tumor, and no tumor.
For the tumour-detection task the dataset is treated as two classes, where class 1 refers to tumour images
and class 0 refers to non-tumour images. We have 1816 training images and 454 testing images, with a
portion of the images used for validation.
The tumor images are further classified into the following four classes:
Class 1: meningioma_tumor
Class 2: no_tumor
Class 3: pituitary_tumor
Class 4: glioma_tumor
Training: The training phase of the brain tumor detection model involves a dataset of 1816 images.
Testing: The testing phase of the brain tumor detection model involves the evaluation of model
performance using a carefully curated set of 454 images.
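The split can be verified with a short script such as the one below; the folder names and the `.jpg` extension are assumptions about how the Kaggle dataset is organised on disk.

```python
from pathlib import Path

CLASSES = ["meningioma_tumor", "no_tumor", "pituitary_tumor", "glioma_tumor"]

# Assumed directory layout: dataset/Training/<class>/ and dataset/Testing/<class>/.
def count_images(root: str) -> dict:
    return {c: len(list(Path(root, c).glob("*.jpg"))) for c in CLASSES}

train_counts = count_images("dataset/Training")
test_counts = count_images("dataset/Testing")
print("Training images:", sum(train_counts.values()), train_counts)  # expected 1816
print("Testing images:", sum(test_counts.values()), test_counts)     # expected 454
```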
4.2 SOURCE CODE
i. Importing modules
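The original report includes the code as screenshots; a representative set of imports, based on the libraries listed in Chapter 4, would look roughly like this (the exact import list in the original code may differ):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, precision_score,
                             recall_score, f1_score)
```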
ii. Loading the Dataset
iii. Data Splitting
iv. CNN Model
v. ViT Transformer
vi. Model Training
vii. Model Evaluation
CHAPTER 5
RESULTS
The outcome of the project demonstrates that the developed models are capable
of accurately detecting the presence of brain tumors in MRI images. Both the
Convolutional Neural Network (CNN) and Vision Transformer (ViT) models
successfully classified whether a tumor was present or absent in the input MRI
images. Additionally, the models could differentiate between various types of
brain tumors, such as glioma, meningioma, and pituitary tumors, achieving an
accuracy of over 84% in predicting the correct tumor type. This outcome validates
the effectiveness of deep learning methods in assisting medical professionals with
brain tumor diagnosis through non-invasive imaging techniques.
The confusion matrix is normalized, meaning the values are expressed as proportions rather than raw
counts. This makes it easier to compare performance across different classes.
The matrix has two rows and two columns. The rows represent the true labels (Yes and No), while the
columns represent the predicted labels (Yes and No).
Prediction table
Diagonal Elements: These represent correct predictions. For example, the value 0.80 in the
top-left corner indicates that 80% of the instances that were actually "Yes" were correctly
predicted as "Yes."
Off-Diagonal Elements: These represent incorrect predictions. For example, the value 0.13 in
the bottom-left corner indicates that 13% of the instances that were actually "No" were
incorrectly predicted as "Yes".
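A normalized confusion matrix of this kind can be produced with scikit-learn and Seaborn, as sketched below; `y_true` and `y_pred` are assumed to hold the binary "Yes"/"No" tumor labels and predictions.

```python
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

# `labels` fixes the row/column order; rows are normalized so each true class sums to 1.0.
cm = confusion_matrix(y_true, y_pred, labels=["Yes", "No"], normalize="true")

sns.heatmap(cm, annot=True, fmt=".2f", cmap="Blues",
            xticklabels=["Yes", "No"], yticklabels=["Yes", "No"])
plt.xlabel("Predicted label")
plt.ylabel("True label")
plt.title("Normalized confusion matrix")
plt.show()
```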
Training Loss: This curve represents the loss function evaluated on the training data during
each epoch. As the model trains, the loss typically decreases, indicating that the model is
learning to fit the training data better.
Validation Loss: This curve represents the loss function evaluated on a separate validation
dataset, which is not used for training. It provides an estimate of how well the model
generalizes to unseen data.
Training Accuracy: This curve shows the accuracy of the model on the training data. As the
model learns, the accuracy typically increases.
Validation Accuracy: This curve shows the accuracy of the model on the validation data. It
provides an estimate of how well the model generalizes to unseen data.
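The loss and accuracy curves described above can be plotted from the Keras training history with a few lines of Matplotlib; `history` is assumed to be the object returned by `model.fit()` in the training sketch.

```python
import matplotlib.pyplot as plt

# Plot training vs. validation loss and accuracy per epoch.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

ax1.plot(history.history["loss"], label="Training loss")
ax1.plot(history.history["val_loss"], label="Validation loss")
ax1.set_xlabel("Epoch"); ax1.set_ylabel("Loss"); ax1.legend()

ax2.plot(history.history["accuracy"], label="Training accuracy")
ax2.plot(history.history["val_accuracy"], label="Validation accuracy")
ax2.set_xlabel("Epoch"); ax2.set_ylabel("Accuracy"); ax2.legend()

plt.tight_layout()
plt.show()
```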
Figure 5.3: Graph of loss and accuracy for Vision Transformer model
Figure 5.4: Output
CHAPTER 6
CONCLUSION
In this project, we explored the efficacy of Convolutional Neural Networks (CNN) and Vision
Transformers (ViT) for detecting brain tumors. Our experiments demonstrate that both CNN
and ViT models can achieve high accuracy in classifying brain tumor images. The CNN model
achieved a test accuracy of 84.31%, while the ViT model further improved this performance by
leveraging attention mechanisms for better feature extraction. These results validate the
effectiveness of deep learning approaches in medical image classification, particularly for brain
tumor detection.
The combination of CNN's localized feature learning and ViT's global context understanding
offers a promising direction for future research in medical diagnostics. However, the dataset
used in this study was limited, which may affect the generalizability of the results. Further
work with larger, more diverse datasets and more refined model architectures could enhance
the reliability and accuracy of these models in real-world clinical applications.
REFERENCES:
[1] Hany Kasban, Mohsen El-bendary, Dina Salama, A comparative study of medical
imaging techniques, Int. J. Inf. Sci. Intell. Syst. 4 (2015) 37–58.
[3] Anam Mustaqeem, Ali Javed, Tehseen Fatima, Int. J. Image Graph. Signal Process. 10
(2012) 34–39.
[4] M.L. Oelze, J.F. Zachary, W.D. O’Brien Jr., Differentiation of tumour types in vivo by
scatterer property estimates and parametric images using ultrasound backscatter, vol. 1, 5-8
Oct. 2003, pp. 1014–1017.
[5] Brain Tumour: Statistics, [Link] Editorial Board, 1/2021 (accessed January 2021).
[7] Tonmoy Hossain, Fairuz Shadmani Shishir, Mohsena Ashraf, M.D. Abdullah Al Nasim,
Faisal Muhammad Shah, Brain tumour detection using convolutional neural network, in:
1st International Conference on Advances in Science, Engineering and Robotics Technology
(ICASERT), 3-5 May 2019.
[8] Deepak, S. & Ameer, P. M. Brain tumor classification using deep CNN features via transfer learning. Brain
Tumor Classif. Using Deep CNN Features Transfer Learn. 111(1), 1–19 (2019).
[9] Saleh, A., Sukaik, R., & Abu-Naser, S. S. Brain tumor classification using deep learning. In 2020
International Conference on Assistive and Rehabilitation Technologies,
IEEE. [Link] (2020).
[10] Waghmare, V. K. & Kolekar, M. H. Brain tumor classification using deep learning. Internet Things
Healthc. Technol. 73(1), 155–175 (2021).
[11] N. Gordillo, E. Montseny, P. Sobrevilla, State of the art survey on MRI brain tumour segmentation,
Magn. Reson. Imaging 31 (8) (2013) 1426–1438.
[12] D. White, A. Houston, W. Sampson, G. Wilkins, Intra and interoperator variations in region-of-interest
drawing and their effect on the measurement of glomerular filtration rates, Clin. Nucl. Med. 24 (1999) 177–181.
[13] Afshar P, Mohammadi A, Plataniotis KN. Brain tumor type classification via capsule networks. In: 2018 25th
IEEE international conference on image processing (ICIP). IEEE; 2018 Oct 7. p. 3129–33.
[15] Kayaalp, F., Basarslan, M. S., & Polat, K. TSCBAS: A novel correlation based attribute selection method and
application on telecommunications churn analysis. In 2018 International Conference on Artificial Intelligence and Data
[16] Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition, 2014.
[17] Bal, F. & Kayaalp, F. A novel deep learning-based hybrid method for the determination of productivity of
[18] Sartaj, B., Ankita, K., Prajakta, B., Sameer, D., & Swati, K. Brain tumor classification (MRI). Kaggle
(2020). [Link]
[19] Brain tumor segmentation and grading of lower-grade glioma using deep learning in MRI images,
Comput. Biol. Med. 121 (2020), Article 103758.
[20] Lotlikar, V.S.; Satpute, N.; Gupta, A. Brain Tumor Detection Using Machine Learning and Deep
Learning: A Review. Curr. Med. Imaging 2022, 18, 604–622.
[21] Xie, Y.; Zaccagna, F.; Rundo, L.; Testa, C.; Agati, R.; Lodi, R.; Manners, D.N.; Tonon, C.
Convolutional neural network techniques for brain tumor classification (from 2015 to 2022): Review,
challenges, and future perspectives. Diagnostics 2022, 12, 1850.
[22] Almadhoun, H.R.; Abu-Naser, S.S. Detection of Brain Tumor Using Deep Learning. Int. J. Acad. Eng.
Res. (IJAER) 2022, 6, 29–47.
[23] Sapra, P.; Singh, R.; Khurana, S. Brain tumor detection using neural network. Int. J. Sci. Mod. Eng.
(IJISME), ISSN 2319–6386, 2013, 1.