
Performing Brain Tumor Classification and Segmentation

Brief Description

This project was designed by Swetaswa Basak and Paras Patil as our final-year Master's degree project in Software Engineering. The study segments and classifies MRI scans of brain tumors. The impetus for this effort was the limited use of EfficientNetB7, one of the most widely used pre-trained transfer learning models, in brain tumor classification. We therefore decided to compare it against VGG16, another widely used transfer learning model. This study compares VGG16 and EfficientNetB7 in terms of accuracy, complexity, and training and prediction times, and then puts the best model into practice in a web application.

Keywords

EfficientNetB7, VGG16, classification, segmentation, transfer learning, pre-trained model

Authors

  1. Swetaswa Basak, Vellore Institute of Technology, India
  2. Paras Patil, Vellore Institute of Technology, India
  3. Praneeth Prakash Namakar, Vellore Institute of Technology, India

Overview

Table of Contents

  1. Specifications and requirements
  2. Content of project with results
  3. Acknowledgements
  4. Web Application

Specifications and requirements

Machine Specifications

Item | Description
Operating system | Windows 10 & Windows 11, 64-bit
RAM | 8 GB & 16 GB
Processor | Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (up to 1.80 GHz)
Editors | Jupyter Notebook, Spyder, Google Colab

Requirements

  1. Python
  2. TensorFlow
  3. Keras
  4. matplotlib
  5. Streamlit

Content of project with results

1. Data Collection & Pre-Processing

The data was downloaded from Kaggle; read the README file to get the link for the dataset.

A Glimpse of the data

DataSet

Pre-Processing of the data

The images were transformed in the following ways (a Keras sketch of the augmentation pipeline follows this list):

  • Original image (untransformed, for reference)
  • Horizontal flipping
  • Vertical flipping
  • Zooming at 0.2
  • Rotation at 20 degrees
  • featurewise_std_normalization
  • Shear range of 0.2
  • Brightening in the range 0.2-1.5
  • Width shift range of 0.1
  • Height shift range of 0.1
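These augmentations map directly onto Keras's ImageDataGenerator. The snippet below is a minimal sketch of such a pipeline, not the project's actual code: the data/train and data/val directories, the 224x224 target size, and the batch size of 16 are all assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline mirroring the transformations listed above.
datagen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.2,
    rotation_range=20,
    featurewise_std_normalization=True,
    shear_range=0.2,
    brightness_range=(0.2, 1.5),
    width_shift_range=0.1,
    height_shift_range=0.1,
)

# featurewise_std_normalization computes statistics over a sample of images,
# so datagen.fit() must be called on a representative array first, e.g.:
# datagen.fit(sample_images)  # sample_images: ndarray of shape (N, 224, 224, 3)

train_generator = datagen.flow_from_directory(
    "data/train",            # hypothetical training folder
    target_size=(224, 224),  # assumed input size
    batch_size=16,
    class_mode="categorical",
)

val_generator = ImageDataGenerator().flow_from_directory(
    "data/val",              # hypothetical validation folder, no augmentation
    target_size=(224, 224),
    batch_size=16,
    class_mode="categorical",
)
```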

2. Model Development and Fine-tuning

The following model parameters were fine-tuned: batch size, epochs, dropout, and optimizer. Visualisations of the results are shown below.

The following candidate values were chosen (a sketch of the parameter search follows this list):

  • Optimizer - five optimizers were compared to see which one works best: SGD, RMSprop, Adam, Adagrad, and Adadelta; the best performer was used in the final model.

  • Number of epochs - four values were compared: 1, 2, 5, and 10.

  • Batch size - two values were compared to find the optimum: 8 and 16.

  • Dropout - five values were compared: 0.5, 0.6, 0.7, 0.8, and 0.9.
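Below is a rough sketch of how such a parameter search can be organised with Keras transfer learning. It is illustrative only: the build_model helper, the assumption of 4 tumor classes, the frozen EfficientNetB7 base, and the reuse of train_generator/val_generator from the pre-processing sketch above are not taken from the project's actual code.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB7

# Candidate values from the list above.
optimizers = ["SGD", "RMSprop", "Adam", "Adagrad", "Adadelta"]
epoch_options = [1, 2, 5, 10]
batch_sizes = [8, 16]
dropouts = [0.5, 0.6, 0.7, 0.8, 0.9]

def build_model(dropout_rate, optimizer, num_classes=4):
    """Transfer-learning head on a frozen EfficientNetB7 base (illustrative)."""
    base = EfficientNetB7(include_top=False, weights="imagenet",
                          input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # train only the classification head
    model = models.Sequential([
        base,
        layers.Dropout(dropout_rate),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# One parameter is varied at a time, mirroring the comparison plots below;
# the same loop can be repeated for epochs, batch size, and dropout.
results = {}
for opt in optimizers:
    model = build_model(dropout_rate=0.5, optimizer=opt)
    history = model.fit(train_generator, validation_data=val_generator, epochs=5)
    results[opt] = max(history.history["val_accuracy"])
print(results)
```

The same search applies to VGG16 by swapping in tensorflow.keras.applications.VGG16 as the base model.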

EfficientNetB7 - Batch size and Epochs

FineTuning

VGG16 - Batch size and Epochs

FineTuning

EfficientNetB7 - Dropout

FineTuning

VGG16 - Dropout

FineTuning

EfficientNetB7 - Optimizer

FineTuning

VGG16 - Optimizer

FineTuning

3. Final Model Implementation and Results

This is the structure of the EfficientNetB7 model we implemented with the optimum parameters; the VGG16 model was implemented in the same way. An illustrative code sketch follows the figure below.

Model
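As an illustration only, a final model with the tuned parameters might be assembled, trained, and saved as follows. The concrete values (Adam, dropout 0.5, 10 epochs, batch size 16 via the generators from the earlier sketch) and the output file name are placeholders, since the actual optima are read from the fine-tuning plots above.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB7

# Frozen EfficientNetB7 base with a small classification head.
base = EfficientNetB7(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

final_model = models.Sequential([
    base,
    layers.Dropout(0.5),                    # placeholder for the tuned dropout
    layers.Dense(4, activation="softmax"),  # assumed 4 tumor classes
])
final_model.compile(optimizer="adam",       # placeholder for the tuned optimizer
                    loss="categorical_crossentropy",
                    metrics=["accuracy"])

# Batch size is controlled by the generators (16 in the earlier sketch).
final_model.fit(train_generator, validation_data=val_generator, epochs=10)

# Save the trained model for the web application.
final_model.save("efficientnetb7_brain_tumor.h5")
```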

4. Conclusions

The aim of our project was to compare VGG16 and EfficientNetB7 in terms of accuracy, complexity, training time, and prediction time.

Our study shows that EfficientNetB7 takes longer than VGG16 to both train and predict, but it is the more accurate model, reaching 98.19% accuracy against 92.30% for VGG16. We therefore chose EfficientNetB7 for our web application, despite its longer prediction time compared to VGG16.
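The chosen model is served through the Streamlit web application listed in the requirements. A minimal sketch of such an app is shown below; the file name app.py, the saved-model path, the class labels, and the 224x224 input size are assumptions rather than the repository's actual implementation, and st.cache_resource requires a recent Streamlit release.

```python
# app.py -- minimal Streamlit sketch (hypothetical names and paths).
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

CLASS_NAMES = ["glioma", "meningioma", "no tumor", "pituitary"]  # assumed labels

@st.cache_resource
def get_model():
    return load_model("efficientnetb7_brain_tumor.h5")  # hypothetical path

st.title("Brain Tumor Classification")
uploaded = st.file_uploader("Upload an MRI scan", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded MRI scan")
    batch = np.expand_dims(np.array(image), axis=0)   # shape (1, 224, 224, 3)
    probs = get_model().predict(batch)[0]
    st.write(f"Prediction: {CLASS_NAMES[int(np.argmax(probs))]} "
             f"({probs.max():.2%} confidence)")
```

The app would be launched locally with the standard Streamlit command, streamlit run app.py.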

Acknowledgements

We appreciate and acknowledge the supervision of this project by Mrs Uma Maheswari, our TARP Project Lecturer at Vellore Institute of Technology, India, in 2022.

Thank you.
