Jeevitha R 2021

The thesis presents a method for detecting brain tumors in MRI images using a combination of Fuzzy C-means algorithm for segmentation and Convolutional Neural Network (CNN) for classification. It outlines the process of preprocessing, feature extraction, and classification, demonstrating improved accuracy in tumor detection with CNN compared to other methods. The work was submitted by Jeevitha R as part of her Master's degree requirements at Panimalar Engineering College in April 2021.


MRI BRAIN ABNORMALITY DETECTION USING

CONVOLUTIONAL NEURAL NETWORK (CNN)

A THESIS

Submitted by
JEEVITHA.R
(211419403001)

A project report submitted to the

FACULTY OF INFORMATION AND COMMUNICATION


ENGINEERING

in partial fulfilment for the award of the degree of

MASTER OF ENGINEERING IN
COMMUNICATION SYSTEMS
PANIMALAR ENGINEERING COLLEGE,
POONAMALLEE
ANNA UNIVERSITY: CHENNAI 600 025

APRIL 2021
ANNA UNIVERSITY, CHENNAI

BONAFIDE CERTIFICATE

Certified that this Report titled “MRI BRAIN ABNORMALITY
DETECTION USING CONVOLUTIONAL NEURAL NETWORK
(CNN)” is the bonafide work of JEEVITHA.R (211419403001) who
carried out the work under my supervision. Certified further that to the
best of my knowledge the work reported herein does not form part of
any other thesis or dissertation on the basis of which a degree or
award was conferred on an earlier occasion on this or any other
candidate.

SIGNATURE
Dr. S. MALATHI, M.E., Ph.D.,
Dean/PG, Department of
Computer Science and Engineering,
Panimalar Engineering College,
Poonamallee, Chennai

SIGNATURE
Dr. D. SELVARAJ, M.E., Ph.D.,
Professor, Department of
Electronics and Communication Engineering,
Panimalar Engineering College,
Poonamallee, Chennai

Submitted for the project phase II viva-voce examination held on
05/08/2021

INTERNAL EXAMINER EXTERNAL EXAMINER

ABSTRACT

Brain tumors are highly diverse in appearance, and tumor tissue can
closely resemble normal tissue, which makes the extraction of tumor
regions from images challenging. In this project, MRI brain tumor
detection is performed on raw brain images using the Fuzzy C-means
algorithm followed by a Convolutional Neural Network (CNN). First, a
preprocessing step is performed using a skull stripping algorithm,
followed by the segmentation process: the Fuzzy C-means algorithm is
used to segment the Cerebrospinal Fluid (CSF), Grey Matter (GM) and
White Matter (WM) from the database images. The third part extracts
features to determine whether a tumor is present; eleven features are
extracted, such as mean, entropy and standard deviation (S.D). The
final part is the classification process, performed with both a
Convolutional Neural Network (CNN) and a Support Vector Machine (SVM),
each of which differentiates whether the input image is normal or
abnormal and also locates the tumor region (ROI). The values of the
extracted features are higher for normal images than for abnormal
images, as shown in the graphs drawn from the extracted features.
Finally, the two methods are compared, and the CNN method achieves
higher accuracy in finding the tumor region.
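The feature-extraction step summarized above is not implemented in this chunk; as a rough, hypothetical NumPy sketch (not the thesis's actual code), several of the statistics named here and in the figure list (mean, standard deviation, variance, entropy, smoothness, kurtosis, skewness) could be computed from a grayscale image as follows:

```python
import numpy as np

def image_features(img):
    """Compute several of the statistical features named in the thesis
    (mean, S.D, variance, entropy, smoothness, skewness, kurtosis)
    for a grayscale image with intensities in [0, 255]."""
    x = img.astype(np.float64).ravel()
    mean = x.mean()
    var = x.var()
    sd = np.sqrt(var)
    # Shannon entropy of the normalized intensity histogram
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    # Smoothness as commonly defined for texture analysis
    smoothness = 1.0 - 1.0 / (1.0 + var)
    # Standardized moments (guard against a constant image)
    skewness = np.mean((x - mean) ** 3) / sd ** 3 if sd > 0 else 0.0
    kurtosis = np.mean((x - mean) ** 4) / sd ** 4 if sd > 0 else 0.0
    return {"mean": mean, "sd": sd, "variance": var, "entropy": entropy,
            "smoothness": smoothness, "skewness": skewness,
            "kurtosis": kurtosis}
```

These scalar features, together with the wavelet coefficients (cA, cH, cV, cD) listed in the abbreviations, would form the per-image feature vector passed to the classifiers.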

ACKNOWLEDGEMENT

A project of this magnitude and nature requires kind co-operation and
support from many for its successful completion. I wish to express my sincere
thanks to all those who were involved in the completion of this project.

I would like to express my deep gratitude to Our Beloved
Secretary and Correspondent, Dr. P. CHINNADURAI, M.A., Ph.D., for
his kind words and enthusiastic motivation which inspired me a lot in
completing this project.

I also offer my sincere thanks to our dynamic Directors,
[Link], [Link] KUMAR, M.E., and
[Link] SAKTHIKUMAR, B.E., MBA., for providing me
with the necessary facilities for the completion of this project.

I also express my appreciation and gratitude to our Principal,
Dr. K. MANI, M.E., Ph.D., who helped me in the completion of the project.

I wish to convey my thanks to the Dean, Dr. S. MALATHI, M.E.,
Ph.D., for her support and for providing me time to complete the project.

My utmost gratitude goes to my project guide, Dr. D. SELVARAJ, M.E.,
Ph.D., Professor, Department of Electronics and Communication Engineering,
for his guidance throughout the course of the project. Last but not the least, I
wish to thank my parents and friends for their endless support.

JEEVITHA.R

TABLE OF CONTENT

CHAPTER NO. TITLE PAGE NO.

ABSTRACT iii
LIST OF FIGURES viii
LIST OF TABLES x
LIST OF ABBREVIATIONS xi

1 INTRODUCTION 1
1.1 MAGNETIC RESONANCE IMAGING(MRI) 1
1.2 MRI VS CT 2
1.3 MRI FOR BRAIN 4
1.4 MRI PROCEDURE 4
1.5 MRI RESULTS 5

2 LITERATURE SURVEY 7

3 EXISTING METHOD 19

4 PROPOSED SYSTEM 20
4.1 BRAIN IMAGE SEGMENTATION 21
4.2 VARIOUS TECHNIQUES OF IMAGE 22
SEGMENTATION
4.2.1 Thresholding method 22
4.2.2 Edge based Segmentation method 24
4.2.3 Region based Segmentation method 24
4.2.4 Clustering based Segmentation method 25

4.2.5 Watershed based methods 27


4.2.6 Partial Differential equation based 27
Segmentation method
4.2.7 Artificial Neural Network based 27
Segmentation method
4.3 PREPROCESSING 28
4.4 SKULL STRIPPING 28
4.5 CSF,GM AND WM SEGMENTATION 29
4.6 FUZZY C-MEANS CLUSTERING 30

5 FEATURE EXTRACTION FROM SEGMENTED 33


IMAGES
5.1 BLOCK DIAGRAM 33
5.2 FEATURES USED 34

6 BRAIN IMAGE CLASSIFICATION 35


6.1 CONVOLUTIONAL NEURAL NETWORK 36
6.2 SUPPORT VECTOR MACHINE 39

7 EXPERIMENTAL DATA, SIMULATION AND 40


RESULTS

8 CONCLUSION AND FUTURE WORK 56

REFERENCES 57

PUBLICATIONS 60

LIST OF FIGURES

FIGURE NO. TITLE PAGE NO.

3.1 Block diagram of Brain 19


classification using SVM

4.1 Block diagram for proposed Brain 20
classification using CNN

4.2 Types of Segmentation 23


4.3 Skull stripped image 29
5.1 Block diagram for proposed CNN 33
Classification
7.1 Original abnormal image 41
7.2 Resized image 42
7.3 CSF,GM, WM Segmentation 42
7.4 Operated ROI image 43
7.5 ROI image 43
7.6 Tumor presented image 44
7.7(a) Dialog box for CNN 44
7.7(b) Dialog box for SVM 44
7.8 Accuracy of CNN method 45
7.9 Accuracy of SVM method 45
7.10 Normal original image 46
7.11 Resized normal image 46
7.12 CSF,GM,WM Segmentation 47
7.13 Operated ROI Image 47

7.14 ROI image 48
7.15 Normal image detected without 48
tumor
7.16(a) Dialog box of CNN method 49
7.16(b) Dialog box of SVM method 49
7.17 Comparison table for cA 49
7.18 Comparison table for cH 50
7.19 Comparison table for cV 50
7.20 Comparison table for cD 51
7.21 Comparison table for Mean 51
7.22 Comparison table for Standard 52
Deviation
7.23 Comparison table for Entropy 52
7.24 Comparison table for Variance 53
7.25 Comparison table for Smoothness 53
7.26 Comparison table for Kurtosis 54
7.27 Comparison table for Skewness 54

LIST OF TABLES

TABLE NO. TITLE PAGE NO.

6.1 Feature calculation for first set of 40


datasets
6.2 Feature calculation for remaining set 41
of datasets
7.1 Comparison of accuracy between 55
CNN and SVM

LIST OF ABBREVIATIONS

CSF Cerebrospinal Fluid


GM Grey Matter
WM White Matter
CNN Convolutional Neural Network
SVM Support Vector Machine
FCM Fuzzy C-Means
MRI Magnetic Resonance Imaging
cA Approximation coefficient
cH Horizontal coefficient
cV Vertical coefficient
cD Diagonal coefficient
S.D Standard Deviation
NMR Nuclear Magnetic Resonance
CT Computed Tomography
fMRI Functional Magnetic Resonance Imaging
SR Super Resolution
DWT Discrete Wavelet Transform
GLCM Grey Level Co-occurrence Matrix
ROI Region of Interest

CHAPTER 1

INTRODUCTION

1.1 Magnetic resonance imaging (MRI)

Magnetic resonance imaging (MRI) is a medical imaging technique


used in radiology to form pictures of the anatomy and
the physiological processes of the body. MRI scanners use strong magnetic
fields, magnetic field gradients, and radio waves to generate images of the
organs in the body. MRI does not involve X-rays or the use of ionizing
radiation, which distinguishes it from CT and PET scans. MRI is a medical
application of nuclear magnetic resonance (NMR). NMR can also be used
for imaging in other NMR applications, such as NMR spectroscopy.

While the hazards of ionizing radiation are now well controlled in most
medical contexts, an MRI may still be seen as a better choice than a CT scan.
MRI is widely used in hospitals and clinics for medical
diagnosis and staging and follow-up of disease without exposing the body
to radiation. An MRI may yield different information compared with CT.
Risks and discomfort may be associated with MRI scans. Compared with CT
scans, MRI scans typically take longer and are louder, and they usually need
the subject to enter a narrow, confining tube. In addition, people with some
medical implants or other non-removable metal inside the body may be
unable to undergo an MRI examination safely.

Certain atomic nuclei are able to absorb radio frequency energy when
placed in an external magnetic field; the resultant evolving spin
polarization can induce a RF signal in a radio frequency coil and thereby be
detected.[2] In clinical and research MRI, hydrogen atoms are most often used
to generate a macroscopic polarization that is detected by antennas close to
the subject being examined.[2] Hydrogen atoms are naturally abundant in
humans and other biological organisms, particularly in water and fat. For this
reason, most MRI scans essentially map the location of water and fat in the
body. Pulses of radio waves excite the nuclear spin energy transition, and
magnetic field gradients localize the polarization in space. By varying the
parameters of the pulse sequence, different contrasts may be generated
between tissues based on the relaxation properties of the hydrogen atoms
therein.

Since its development in the 1970s and 1980s, MRI has proven to be a
versatile imaging technique. While MRI is most prominently used
in diagnostic medicine and biomedical research, it also may be used to form
images of non-living objects. MRI scans are capable of producing a variety
of chemical and physical data, in addition to detailed spatial images. The
sustained increase in demand for MRI within health systems has led to
concerns about cost effectiveness and overdiagnosis.[3][4]

Both MRIs and CT scans can view internal body structures. However,
a CT scan is faster and can provide pictures of tissues, organs, and skeletal
structure. An MRI is highly adept at capturing images that help doctors
determine if there are abnormal tissues within the body. MRIs are more
detailed in their images.

1.2 MRI vs CT

CT scans and MRIs are both used to capture images within your
body. The biggest difference is that MRIs (magnetic resonance imaging) use
radio waves and CT (computed tomography) scans use X-rays. While both are
relatively low risk, there are differences that may make each one a better
option depending on the circumstances.

A constant magnetic field and radio frequencies bounce off of the fat
and water molecules in your body. Radio waves are transmitted to a receiver
in the machine and translated into an image of the body that can be used
to diagnose issues. The MRI is a loud machine. Typically, you'll be offered
earplugs or headphones to make the noise more bearable. You'll also be asked
to lie still while the MRI is taking place.

CT scans are more widely used than MRIs and are typically less
expensive. MRIs, however, are thought to be superior with regard to the detail
of the image. The most notable difference is that CT scans use X-rays while
MRIs do not. Other differences between MRI and CT scans include their risks
and benefits:

Both CT scans and MRIs pose some risks when used. The risks are based
on the type of imaging as well as how the imaging is performed. CT scan
risks include:

• harm to unborn babies

• a very small dose of radiation

• a potential reaction to the use of dyes

MRI risks include:

• possible reactions to metals due to magnets

• loud noises from the machine causing hearing issues

• increase in body temperature during long MRIs

• claustrophobia

You should consult a doctor prior to an MRI if you have implants including:

• artificial joints

• eye implants

• an IUD

• a pacemaker


1.3 MRI for Brain

MRI can detect a variety of conditions of the brain such as cysts,


tumors, bleeding, swelling, developmental and structural abnormalities,
infections, inflammatory conditions, or problems with the blood vessels. It
can determine if a shunt is working and detect damage to the brain caused by
an injury or a stroke.

MRI of the brain can be useful in evaluating problems such as


persistent headaches, dizziness, weakness, and blurry vision or seizures, and it
can help to detect certain chronic diseases of the nervous system, such as
multiple sclerosis.

In some cases, MRI can provide clear images of parts of the brain that
can't be seen as well with an X-ray, CAT scan, or ultrasound, making it
particularly valuable for diagnosing problems with the pituitary gland and
brain stem.

1.4 MRI Procedure

An MRI of the brain usually takes 30-45 minutes to perform. Your


child will lie on the movable scanning table while the technologist places him
or her into position. A special plastic device called a coil may be placed
around your child's head. The table will slide into the tunnel and the
technician will take images of the head. Each scan takes a few minutes.

To detect specific problems, your child may be given a contrast
solution through an IV. The solution is painless as it goes into the vein. The
contrast highlights certain areas of the brain, such as blood vessels, so doctors
can see more detail in specific areas. The technician will ask if your child is
allergic to any medications or food before the contrast solution is given. The
contrast solution used in MRI tests is generally safe. However, allergic
reactions can occur. Talk to your doctor about the benefits and risks of
receiving contrast solution in your child's case.

As the exam proceeds, your child will hear repetitive sounds from the
machine, which are normal. Your child may be given headphones to listen to
music or earplugs to block the noise, and will have access to a call button in
case he or she becomes uneasy during the test. If sedated, your child will be
monitored at all times and will be connected to a machine that checks the
heartbeat, breathing, and oxygen level.

Once the exam is over, the technician will help your child off the table;
if sedation was used, your child may be moved to a recovery area.

1.5 MRI Results

The MRI images will be viewed by a radiologist who's specially


trained in interpreting the scans. The radiologist will send a report to your
doctor, who'll discuss the results with you and explain what they mean. In
most cases, results can't be given directly to the patient or family at the time
of the test. If the MRI was done on an emergency basis, the results can be
made available quickly.

MRIs are safe and relatively easy. No health risks are associated with
the magnetic field or radio waves, since the low-energy radio waves involve
no ionizing radiation. The procedure can be repeated without side effects. If
your child requires sedation, you may discuss the risks and benefits of sedation with
your provider. Also, because contrast solutions can cause allergic reactions in
some kids, be sure to check with your doctor before your child receives any
solution. There should be medical staff on hand who are prepared to handle
an allergic reaction. If your child has decreased kidney function, this is an
important medical condition to discuss with the radiologist and technician
before receiving IV contrast since it may lead to some rare complications.

You can help your child prepare for an MRI by explaining the test in
simple terms before the examination. Make sure to explain that pictures of the
head will be taken and that the equipment will probably make knocking and
buzzing noises. It also may help to remind your child that you'll be nearby
during the entire test. If an injection of contrast fluid or sedation is needed,
you can tell your child that the initial sting of the needle will be brief and that
the test itself is painless. If your child will be awake for the test, be sure to
explain the importance of lying still.

CHAPTER 2

LITERATURE SURVEY

Tonmoy Hossain et al. (2017) presented Brain Tumor Detection Using
Convolutional Neural Network. In this paper, they proposed a method to
extract brain tumors from 2D Magnetic Resonance brain Images (MRI) by the
Fuzzy C-Means clustering algorithm, followed by traditional classifiers and a
convolutional neural network. The experimental study was carried out on a
real-time dataset with diverse tumor sizes, locations, shapes, and different
image intensities. In the traditional classifier part, they applied six
traditional classifiers, namely Support Vector Machine (SVM), K-Nearest
Neighbor (KNN), Multilayer Perceptron (MLP), Logistic Regression, Naïve
Bayes and Random Forest, which were implemented in scikit-learn. Afterward,
they moved on to a Convolutional Neural Network (CNN), implemented using
Keras and TensorFlow, because it yields better performance than the
traditional classifiers. In their work, the CNN gained an accuracy of 97.87%,
which is very compelling. The main aim of the paper is to distinguish between
normal and abnormal pixels, based on texture-based and statistical features.

Madhupriya G et al. (2019) presented Brain Tumor Segmentation With
Deep Learning Technique. The proposed work is based on a deep learning
technique, using a deep neural network and a probabilistic neural network to
detect unwanted masses in the brain. Their work handles both high- and
low-grade tumors. With the help of MRI images, segmentation can be
performed and the segmented images can be compared with normal brain
tissues as well as with tumor cells; the results (whether the brain contains a
tumor or not) are provided based on this comparison. In this paper, the
segmentation is done using a convolutional neural network and a probabilistic
neural network, and a comparison sketch of various models is given. Based on
that, they designed an architecture based on Convolutional Neural Networks
(CNN) with both 3x3 and 7x7 kernels in an overlapped manner, built as a
cascaded architecture, so that a tumor can be segmented accurately and
effectively on the BraTS13 image dataset. Similarly, they use a probabilistic
neural network for detecting tumors and compare the results of both. They
proposed unique CNN and PNN architectures which differ from the
conventional models used in image processing and computer vision, and
their model deals with both local and global features.

Mahnoor Ali et al. (2020) presented Brain Tumour Image
Segmentation Using Deep Networks. In this paper, they propose an ensemble
of two segmentation networks, a 3D CNN and a U-Net, in a significant yet
straightforward combinative technique that results in better and more accurate
predictions. Both models were trained separately on the BraTS-19 challenge
dataset and evaluated to yield segmentation maps which considerably differed
from each other in terms of segmented tumour sub-regions and were
ensembled variably to achieve the final prediction. The suggested ensemble
achieved Dice scores of 0.750, 0.906 and 0.846 for enhancing tumour, whole
tumour, and tumour core, respectively, on the validation set, performing
favourably in comparison to the state-of-the-art architectures currently
available.

Jose Bernal et al. (2019) presented A Quantitative Analysis of Patch-
Based Fully Convolutional Neural Networks for Tissue Segmentation on
Brain Magnetic Resonance Imaging. In this paper, they analyze a sub-group
of deep learning methods producing dense predictions. This branch, referred
to in the literature as fully convolutional networks (FCNN), is of interest as
these architectures can process an input volume in less time than CNNs. The
study focuses on understanding the architectural strengths and weaknesses of
literature-like approaches. They implement eight FCNN architectures inspired
by robust state-of-the-art methods on brain segmentation related tasks and use
them within a standard pipeline. They evaluate them using the IBSR18,
MICCAI2012, and iSeg2017 datasets, as these contain infant and adult data
and exhibit different voxel spacings, image qualities, numbers of scans, and
available imaging modalities. The discussion is driven in four directions: the
comparison between 2D and 3D approaches, the relevance of multiple imaging
sequences, the effect of patch size, and the impact of patch overlap as a
sampling strategy for training and testing models. Besides this analysis, they
show that the methods under evaluation can yield top performance on the
three data collections. A public version is accessible to download from their
research website to encourage other researchers to explore the evaluation
framework.
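The patch-size and patch-overlap trade-off examined in that study can be illustrated with a minimal, hypothetical helper (not code from the paper): a stride smaller than the patch size yields overlapping patches for denser sampling, while a stride equal to the patch size yields a disjoint tiling.

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Extract square patches from a 2-D image slice.
    stride < patch  -> overlapping patches (denser training samples);
    stride == patch -> non-overlapping tiling."""
    h, w = img.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(img[i:i + patch, j:j + patch])
    return np.stack(patches)
```

On a 4x4 slice, a 2x2 patch with stride 2 gives 4 disjoint patches, while stride 1 gives 9 overlapping ones, which is the sampling-density effect the authors study.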

Jinglong Du et al. (2017) presented Brain MRI Super-Resolution Using
3D Dilated Convolutional Encoder–Decoder Network. Recently, deep
convolutional neural networks (CNN) have achieved impressive success in
MRI super-resolution (SR) reconstruction. Increasing network depth or width
can enlarge the receptive field to improve SR accuracy; however, this is
impractical for MRI reconstruction in clinical applications because of the high
computational load. To address this issue, they propose a novel dilated
convolutional encoder-decoder (DCED) network to improve the resolution of
MRI. They exploit three-dimensional (3D) dilated convolutions as encoders to
extract high-frequency features. The dilated encoders capture wider
contextual information by exponentially enlarging the receptive field, without
introducing additional parameters or layers. They then decode the features
using deconvolution operations to alleviate gridding artifacts and restore fine
details. To improve information flow, the encoders and decoders are
aggregated into symmetrically connected blocks. The output of each block is
passed to the final convolution layer, which facilitates the extraction of
hierarchical features. In addition, they exploit a geometric self-ensemble 3D
wavelet fusion method to improve the potential performance of MRI SR.
Experimental results on four publicly available brain datasets show that the
proposed method outperforms NLM (non-local means), LRTV (low-rank and
total variation) and current CNN-based SR methods, demonstrating that the
method achieves a new state-of-the-art performance in the MRI SR task.

Seetha J et al. (2019) presented Brain Tumor Classification Using
Convolutional Neural Networks. In this work, MRI images are used to
diagnose tumors in the brain. However, the huge amount of data generated by
MRI scans thwarts manual classification of tumor vs. non-tumor within a
reasonable time, and manual assessment has the limitation that accurate
quantitative measurements can be provided for only a limited number of
images. Hence, trusted and automatic classification schemes are essential to
reduce the human death rate. Automatic brain tumor classification is a very
challenging task owing to the large spatial and structural variability of the
region surrounding a brain tumor. In this work, automatic brain tumor
detection is proposed by using Convolutional Neural Network (CNN)
classification. A deeper architecture design is achieved by using small
kernels, and the neuron weights are kept small. Experimental results show
that the CNN achieves an accuracy rate of 97.5% with low complexity,
compared with all other state-of-the-art methods.
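The "small kernels" idea can be illustrated with a minimal NumPy sketch of one convolution-plus-ReLU building block (illustrative only; the paper's network is not reproduced here):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (really cross-correlation, as used in
    CNNs) with a small kernel, followed by a ReLU activation -- a
    single CNN building block written out explicitly."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU
```

Two stacked 3x3 layers cover the same 5x5 receptive field as a single 5x5 kernel while using 18 weights instead of 25, which is the usual argument for deeper architectures built from small kernels.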

Meiyan Huang et al. (2014) presented Brain Tumor Segmentation
Based on Local Independent Projection-based Classification. Here, a novel
automatic tumor segmentation method for MRI images is proposed. This
method treats tumor segmentation as a classification problem. Additionally,
the local independent projection-based classification (LIPC) method is used to
classify each voxel into different classes. A novel classification framework is
derived by introducing the local independent projection into the classical
classification model. Locality is important in the calculation of local
independent projections for LIPC. Locality is also considered in determining
whether local anchor embedding is more applicable for solving the linear
projection weights compared with other coding methods. Moreover, LIPC
considers the data distribution of different classes by learning a softmax
regression model, which can further improve classification performance. In
this study, 80 brain tumor MRI images with ground truth data are used as
training data and 40 images without ground truth data are used as testing data.
The segmentation results of the testing data are evaluated by an online
evaluation tool. The average Dice similarities of the proposed method for
segmenting the complete tumor, tumor core, and contrast-enhancing tumor on
real patient data are 0.84, 0.685, and 0.585, respectively. These results are
comparable to other state-of-the-art methods.

Abdu Gumaei et al. (2019) presented A Hybrid Feature Extraction
Method with Regularized Extreme Learning Machine for Brain Tumor
Classification. In this paper, they propose a hybrid feature extraction method
with a regularized extreme learning machine for developing an accurate brain
tumor classification approach. The approach starts by extracting the features
from brain images using the hybrid feature extraction method; then, the
covariance matrix of these features is computed to project them into a new
significant set of features using principal component analysis (PCA). Finally,
a regularized extreme learning machine (RELM) is used for classifying the
type of brain tumor. To evaluate and compare the proposed approach, a set of
experiments was conducted on a new public dataset of brain images.
Experimental results showed that the approach is more effective than the
existing state-of-the-art approaches, with classification accuracy improving
from 91.51% to 94.233% in the random holdout experiment.
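The covariance-then-PCA projection step described above can be sketched in NumPy (a sketch of standard PCA via eigendecomposition, not the authors' implementation):

```python
import numpy as np

def pca_project(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, computed from the eigendecomposition of the feature
    covariance matrix."""
    Xc = X - X.mean(axis=0)            # center each feature
    cov = np.cov(Xc, rowvar=False)     # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top                    # reduced feature set
```

The projected features are then what a downstream classifier (RELM in the paper) would consume; the first component captures the most variance by construction.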

Shan Shen et al. (2015) presented MRI Fuzzy Segmentation of Brain
Tissue Using Neighborhood Attraction With Neural-Network Optimization. A
robust segmentation technique based on an extension of the traditional fuzzy
c-means (FCM) clustering algorithm is proposed in this paper. A
neighborhood attraction, which is dependent on the relative location and
features of neighboring pixels, is shown to improve the segmentation
performance dramatically. The degree of attraction is optimized by a neural-
network model. Simulated and real brain MR images with different noise
levels are segmented to demonstrate the superiority of the proposed technique
compared to other FCM-based methods. This segmentation method is a key
component of an MR image-based classification system for brain tumors,
currently being developed.
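For reference, the traditional FCM algorithm that this paper extends alternates between a center update and a membership update. A minimal 1-D NumPy sketch (plain FCM on flattened intensities, without the paper's neighborhood-attraction term) is:

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means on a 1-D array of intensities (e.g. a
    flattened brain image), returning cluster centers and the fuzzy
    membership matrix U of shape (n, c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # memberships sum to 1
    for _ in range(iters):
        um = u ** m                           # fuzzified memberships
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))    # closer center -> larger u
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

In the thesis's pipeline, the same idea applied to a brain image with c=3 would yield the CSF, GM and WM clusters; assigning each pixel to its highest-membership cluster gives the segmentation.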

Chenjie Ge et al. (2020) presented Enlarged Training Dataset by
Pairwise GANs for Molecular-Based Brain Tumor Classification. To tackle
the commonly encountered problems of insufficiently large brain tumor
datasets and incomplete image modalities for deep learning, they propose to
add augmented brain MR images to enlarge the training dataset by employing
a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN
is able to generate synthetic MRIs across different modalities. To achieve a
patient-level diagnostic result, they propose a post-processing strategy to
combine the slice-level glioma subtype classification results by majority
voting. A two-stage coarse-to-fine training strategy is proposed to learn the
glioma features using GAN-augmented MRIs followed by real MRIs. To
evaluate the effectiveness of the proposed scheme, experiments were
conducted on a brain tumor dataset for classifying glioma molecular subtypes:
isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Their results
on the dataset showed good performance (with a test accuracy of 88.82%).
Comparisons with several state-of-the-art methods are also included.
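The slice-to-patient majority-voting post-processing amounts to picking the most frequent slice label; a minimal sketch (with hypothetical label strings) is:

```python
from collections import Counter

def patient_level_diagnosis(slice_predictions):
    """Combine slice-level subtype predictions into a single
    patient-level label by majority voting."""
    return Counter(slice_predictions).most_common(1)[0][0]
```

So a patient whose slices are classified mostly as the IDH1-mutation subtype receives that label even if a minority of slices disagree.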

Neelum Noreen et al. (2020) presented A Deep Learning Model Based
on Concatenation Approach for the Diagnosis of Brain Tumor. In recent
applications of pre-trained models, features are normally extracted from the
bottom layers, which differ between natural images and medical images. To
overcome this problem, this study proposes a method of multi-level feature
extraction and concatenation for early diagnosis of brain tumors. Two
pre-trained deep learning models, Inception-v3 and DenseNet201, make this
model valid. With the help of these two models, two different scenarios of
brain tumor detection and classification were evaluated. First, features from
different Inception modules were extracted from the pre-trained Inception-v3
model and concatenated for brain tumor classification. Then, these features
were passed to a softmax classifier to classify the brain tumor. Second,
pre-trained DenseNet201 was used to extract features from various DenseNet
blocks. Then, these features were concatenated and passed to a softmax
classifier to classify the brain tumor. Both scenarios were evaluated with the
help of a publicly available three-class brain tumor dataset. The proposed
method produced testing accuracies of 99.34% and 99.51% respectively with
Inception-v3 and DenseNet201 on testing samples and achieved the highest
performance in the detection of brain tumors. As the results indicated, the
proposed method based on feature concatenation using pre-trained models
outperformed the existing state-of-the-art deep learning and machine learning
based methods for brain tumor classification.

Ghazanfar Latif et al. (2018) presented Enhanced MR Image
Classification Using Hybrid Statistical and Wavelets Features. In this paper,
an enhanced method is presented for glioma MR image classification using
hybrid statistical and wavelet features. In the proposed method, 52 features
are extracted using first-order and second-order statistical features (based on
the four MRI modalities: FLAIR, T1, T1c, and T2), in addition to the discrete
wavelet transform, producing a total of 152 features. The extracted features
are applied to the multilayer perceptron (MLP) classifier. The results using the
MLP were compared with various known classifiers. The method was tested
on the MICCAI BraTS 2015 dataset, a standard dataset used for research
purposes. The proposed hybrid statistical and wavelet features produced
96.72% accuracy for high-grade glioma and 96.04% accuracy for low-grade
glioma, which are relatively better compared to existing studies.

Zahraa A. Al-Saffar (2020) presented A Novel Approach to
Improving Brain Image Classification Using Mutual Information-Accelerated
Singular Value Decomposition. The proposed system has six stages: pre-
processing, clustering, tumour localization, feature extraction, MI-ASVD and
classification. First, the MR images are smoothed by using enhancement
techniques such as Gaussian kernel filters. Then, local difference in intensity-
means (LDI-Means) clustering is employed to segment and detect suspicious
regions. The grey-level run-length matrix (GLRLM), texture, and colour
intensity features are used for tumour feature extraction. Later, a special
method including a summation of feature selection and dimensionality
reduction, MI-ASVD, is applied to select the most useful features for the
classification process. Finally, the simplified residual neural network
technique is implemented to classify the MR brain images. Using MI-ASVD
provided accurate and more efficacious results in classification compared
with the original feature space and with two other standard dimensionality
reduction methods, principal component analysis (PCA) and singular value
decomposition (SVD). It achieved a classification accuracy of 94.91%, which
is better than the two state-of-the-art techniques as well as methods from
similar published studies.

Pradeep Kumar Mallick (2019) presented Brain MRI Image Classification for
Cancer Detection Using Deep Wavelet Autoencoder-Based Deep Neural Network. In
this paper, a technique for image compression using a deep wavelet autoencoder
(DWA) is proposed, which blends the basic feature-reduction property of the
autoencoder with the image-decomposition property of the wavelet transform.
The combination of the two greatly shrinks the size of the feature set for
the subsequent classification task using a DNN. A brain image dataset was
taken and the proposed DWA-DNN image classifier was evaluated. The performance
of the DWA-DNN classifier was compared with existing classifiers such as
autoencoder-DNN and plain DNN, and the proposed method was found to outperform
the existing methods.

Afnan M. Alhassan (2020) presented BAT Algorithm With Fuzzy C-Ordered Means
(BAFCOM) Clustering Segmentation and Enhanced Capsule Networks (ECN) for Brain
Cancer MRI Images Classification. An automated segmentation approach is
proposed in this paper, which segments the tumor out of MRI images and
enhances the efficiency of both segmentation and classification. The initial
stages of this approach include preprocessing and segmentation, which separate
benign and malignant tumor tissue through data augmentation and clustering. A
modern learning-based approach is suggested for automated segmentation of
multimodal MRI images to identify brain tumors; the clustering algorithm of
the Bat Algorithm with Fuzzy C-Ordered Means (BAFCOM) is recommended for
segmenting the tumor. Within BAFCOM, the Bat Algorithm computes the initial
centroids and the distances between pixels, and acquires the tumor by
determining the distance between tumor Regions of Interest (RoI) and non-tumor
RoI. Afterwards, the MRI image is analyzed by the Enhanced Capsule Networks
(ECN) method to categorize it as normal or brain tumor. The performance of the
proposed approach is assessed by distinguishing the two tumor categories over
MRI images, and the suggested ECN classifier is evaluated using accuracy,
precision, recall, and F1-score. In addition, a genetic algorithm is applied
for automatic tumor-stage classification, which further enhanced the
classification accuracy.

Monika Shukla (2013) presented A Comparative Study of Wavelet and Curvelet
Transform for Image Denoising. This paper compares the discriminating power of
various multiresolution-based thresholding techniques, i.e., wavelet and
curvelet, for image denoising. These transforms offer exact reconstruction,
stability against perturbation, ease of implementation, and low computational
complexity. The authors propose to employ the curvelet transform for facial
feature extraction and perform a thorough comparison against the wavelet
transform; in particular, the orientation property of the curvelet is
analysed. Experiments show that under expression changes the small-scale
coefficients of the curvelet transform are robust, though the large-scale
coefficients of both transforms are affected. The advantage of the curvelet
lies in its ability of sparse representation, which is critical for
compression, estimation of denoised images, and related inverse problems;
thus the experiments and the theoretical analysis coincide.

Aneesh S Perumprath (2020) presented A Modified Deep Convolutional Neural
Network for Brain Abnormalities Detection. Deep Convolutional Neural Networks
(DCNN) are among the most widely used deep learning networks for practical
applications: their accuracy is generally high, and no manual feature
extraction is necessary. However, the high accuracy is achieved at the cost of
huge computational complexity. The complexity of a DCNN is mainly due to (a)
the increased number of layers between the input and output layers, and (b)
two sets of parameters (one set of filter coefficients and another set of
weights in the fully connected network) that need to be adjusted. In this
work, the second aspect is targeted to reduce the computational complexity of
the conventional DCNN. Suitable modifications are made to the training
algorithm to reduce the number of parameter adjustments: the weight-adjustment
process in the fully connected layer is eliminated entirely and replaced by a
simple assignment process that determines the weights of this layer. Thus, the
computational complexity is significantly reduced in the proposed approach.

Shunmaga Sundari (2017) presented Brain Tumor Segmentation Using
Convolutional Neural Networks in MRI Images. In this paper, an effective brain
tumor detection technique is presented based on a Neural Network (NN) and a
previously designed brain tissue segmentation. The technique proceeds through
the following major steps: pre-processing of the brain images; segmentation of
pathological tissue (tumor), normal tissues (White Matter (WM) and Gray Matter
(GM)), and fluid (Cerebrospinal Fluid (CSF)); extraction of the relevant
features from each segmented tissue; and classification of the tumor images
with the NN. The experimental results are evaluated by means of the Quality
Rate (QR) on normal and abnormal Magnetic Resonance Imaging (MRI) images. The
performance of the proposed technique is validated using standard evaluation
metrics such as sensitivity, specificity and accuracy, and compared against
NN, K-NN and Bayesian classification techniques. The obtained results show
that the NN yields better classification results than the other techniques.

A 2018 study presented Segmentation and Detection of Tumor in MRI Images
Using CNN and SVM Classification. In this paper, an automatic segmentation
method based on Convolutional Neural Networks (CNN) is proposed, with the
kernels used for the purpose of classification. The use of intensity
normalization as a pre-processing step is also investigated; although not
common in CNN-based segmentation methods, it proved, together with data
augmentation, to be very effective for brain tumor segmentation in MRI
images. The work is extended by calculating certain parameters of the image:
detecting the exact tumour cells where a high-density area is infected, and
computing features of the cells that indicate the depth of infection, i.e.,
the stage of infection. SVM classification is then performed with the
calculated parameters. Extraction and detection of the tumour from MRI scan
images of the brain is done using the MATLAB tool.

Siti Noraini Sulaiman (2014) presented Segmentation of Brain MRI Image Based
on Clustering Algorithm. In this project, clustering-based image segmentation
is used to partition the images into three regions representing the white
matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) spaces,
respectively. These regions are significant for the physician or radiographer
in analysing and diagnosing disease. A clustering method known as Adaptive
Fuzzy K-means (AFKM) is proposed as the tool to classify the three regions,
and the results are compared with fuzzy C-means clustering. The segmented
image is analysed both qualitatively and quantitatively, and the results
demonstrate that the proposed method is suitable as a segmentation tool for
MRI brain images.

CHAPTER 3

EXISTING METHOD

Figure 3.1 Block diagram for existing Brain classification using SVM

The above figure shows the flow of classification using a Support Vector
Machine (SVM). First, the MRI brain image is sent as input to the
preprocessing unit, where the original image is resized. Then skull stripping
is performed to remove the skull region. Next, image segmentation is done by
Fuzzy C-means (FCM) to separate the WM, GM, CSF and tumor regions. After that,
features are extracted to obtain the ROI (Region Of Interest) and to determine
whether a tumor is present. Finally, classification using SVM is done to
separate normal and abnormal images.

CHAPTER 4

PROPOSED SYSTEM

[Figure content: extracted features cA, cH, cV, cD, Mean, Standard Deviation,
Entropy, Variance, Skewness, Kurtosis and Smoothness feed a CNN classifier]

Figure 4.1 Block diagram for proposed Brain classification using CNN

Figure 4.1 shows the flow of the proposed system. Here, the dataset is first
preprocessed and the skull is removed by a skull stripping algorithm.
Segmentation is then performed by the Fuzzy C-means algorithm to separate CSF,
WM and GM. After the morphological operation is done, features are extracted
to obtain the ROI (Region Of Interest) and to determine whether a tumor is
present. Finally, classification using a Convolutional Neural Network (CNN) is
performed to separate normal and abnormal images.

4.1 Brain Image Segmentation

In digital image processing and computer vision, image segmentation is the
process of partitioning a digital image into multiple
segments (sets of pixels, also known as image objects). The goal of
segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze. Image segmentation
is typically used to locate objects and boundaries (lines, curves, etc.) in
images. More precisely, image segmentation is the process of assigning a
label to every pixel in an image such that pixels with the same label share
certain characteristics.

The result of image segmentation is a set of segments that collectively cover
the entire image, or a set of contours extracted from the image (see edge
detection). Each of the pixels in a region are similar with respect to some
characteristic or computed property, such as color, intensity, or texture.
Adjacent regions are significantly different with respect to the same
characteristic(s). When applied to a stack of images, typical in medical
imaging, the resulting contours after image segmentation can be used to
create 3D reconstructions with the help of interpolation algorithms
like marching cubes.

We can divide or partition the image into various parts called segments. It
is not a great idea to process the entire image at the same time, as
there will be regions in the image which do not contain any information. By
dividing the image into segments, we can make use of the important segments
for processing the image. That, in a nutshell, is how image segmentation
works. An image is a collection or set of different pixels. We group together
the pixels that have similar attributes using image segmentation.

4.2 Various techniques of Image Segmentation

Image segmentation can be classified into two basic types: local segmentation
(concerned with a specific part or region of the image) and global
segmentation (concerned with segmenting the whole image, consisting of a
large number of pixels). Segmentation approaches can further be categorized
into the following techniques based on image properties.

4.2.1 Thresholding Method

Thresholding methods are the simplest methods for image segmentation. These
methods divide the image pixels with respect to their
intensity level. These methods are used over images having lighter objects
than background. The selection of these methods can be manual or automatic
i.e. can be based on prior knowledge or information of image features. There
are basically three types of thresholding:

1) Global Thresholding: This is done using a single appropriate threshold
value T, which is constant for the whole image. On the basis of T, the output
image q(x,y) can be obtained from the original image p(x,y) as q(x,y) = 1 if
p(x,y) > T, and q(x,y) = 0 otherwise.

Figure 4.2 Types of Segmentation

2) Variable Thresholding: In this type of thresholding, the value of T can
vary over the image. This can further be of two types:
• Local Threshold: the value of T depends upon the neighborhood of x and y.
• Adaptive Threshold: the value of T is a function of x and y.
3) Multiple Thresholding: In this type of thresholding, there are multiple
threshold values, such as T0 and T1. Using these, the output image can be
computed as q(x,y) = m if p(x,y) > T1, q(x,y) = n if T0 < p(x,y) ≤ T1, and
q(x,y) = o otherwise, where m, n and o are three distinct gray levels. The
threshold values can be computed with the help of the peaks of the image
histogram, and simple algorithms can also be generated to compute them.
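As an illustration, the global and multiple thresholding rules above can be sketched in a few lines of NumPy (an illustrative sketch, not the thesis code; the output labels 0, 1 and 2 stand in for the gray levels m, n and o):

```python
import numpy as np

def global_threshold(img, T):
    """Binarize: q(x, y) = 1 where p(x, y) > T, else 0."""
    return (img > T).astype(np.uint8)

def multiple_threshold(img, T0, T1):
    """Map pixels into three classes using two thresholds T0 < T1."""
    out = np.zeros_like(img, dtype=np.uint8)        # class 0: dark pixels
    out[(img > T0) & (img <= T1)] = 1               # class 1: mid intensity
    out[img > T1] = 2                               # class 2: bright pixels
    return out

img = np.array([[10, 120], [200, 60]], dtype=np.uint8)
print(global_threshold(img, 100))      # [[0 1] [1 0]]
print(multiple_threshold(img, 50, 150))  # [[0 1] [2 1]]
```

In practice the thresholds would be chosen from the peaks and valleys of the image histogram, as the text describes.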

4.2.2 Edge Based Segmentation Method

Edge detection techniques are well-developed image processing techniques in
their own right. Edge-based segmentation methods are
based on the rapid change of intensity value in an image because a single
intensity value does not provide good information about edges. Edge
detection techniques locate the edges where either the first derivative of
intensity is greater than a particular threshold or the second derivative has
zero crossings. In edge based segmentation methods, first of all the edges are
detected and then are connected together to form the object boundaries to
segment the required regions. The basic two edge based segmentation
methods are gray-histogram and gradient-based methods. To detect the edges,
one of the basic edge detection operators such as the Sobel, Canny or Roberts
operator can be used. The result of these methods is basically a binary
image. These are structural techniques based on discontinuity detection.

4.2.3 Region Based Segmentation Method

Region-based segmentation methods segment the image into various regions
having similar characteristics. There
are two basic techniques based on this method.

1) Region growing methods: Region-growing segmentation methods segment the
image into various regions based
on the growing of seeds (initial pixels). These seeds can be selected manually
(based on prior knowledge) or automatically (based on particular application).
Then the growing of seeds is controlled by connectivity between pixels and
with the help of the prior knowledge of problem, this can be stopped. The

basic algorithm steps (based on 8-connectivity) for the region growing method
are:

Let p(x,y) be the original image to be segmented and s(x,y) the binary image
in which the seeds are located, and let T be a predicate to be tested at each
(x,y) location.
• First of all, the connected components of s are eroded.
• Compute a binary image PT, where PT(x, y) = 1 if T(x, y) = True.
• Compute a binary image q, where q(x, y) = 1 if PT(x, y) = 1 and (x, y) is
8-connected to a seed in s.
The connected components in q are the segmented regions.
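The region-growing idea can be sketched as a breadth-first traversal over 8-connected neighbors (an illustrative Python sketch; the predicate and seed locations are placeholders for application-specific choices such as an intensity range):

```python
from collections import deque
import numpy as np

def region_grow(p, seeds, predicate):
    """Grow regions from seed pixels over 8-connected neighbors.

    p: 2-D intensity image; seeds: list of (row, col) tuples;
    predicate(value) -> bool decides whether a pixel may join the region."""
    q = np.zeros(p.shape, dtype=np.uint8)
    todo = deque(s for s in seeds if predicate(p[s]))
    for s in todo:                       # mark the valid seeds first
        q[s] = 1
    while todo:
        r, c = todo.popleft()
        for dr in (-1, 0, 1):            # visit the 8-neighborhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < p.shape[0] and 0 <= cc < p.shape[1]
                        and q[rr, cc] == 0 and predicate(p[rr, cc])):
                    q[rr, cc] = 1
                    todo.append((rr, cc))
    return q

p = np.array([[9, 9, 0], [0, 9, 0], [0, 0, 9]])
print(region_grow(p, [(0, 0)], lambda v: v > 5))
```

Note that the isolated bright pixel at the bottom-right is not reached from the seed, which is exactly what distinguishes region growing from plain thresholding.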

2) Region splitting and merging methods: These segmentation methods use two
basic techniques, splitting and merging, to segment an image into various
regions. Splitting stands for iteratively dividing an image into regions
having similar characteristics, while merging combines adjacent similar
regions. The division is based on a quadtree. The basic algorithm steps for
region splitting and merging are as follows. Let p be the original image and
T the particular predicate.
• First of all, the region R1 is set equal to p.
• Each region Ri for which T(Ri) = False is divided into quadrants.
• If T(Rj) = True for every region, merge adjacent regions Ri and Rj such
that T(Ri U Rj) = True.
• Repeat the merging step until no further merging is possible.
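The splitting half of this scheme can be sketched as a recursive quadtree decomposition (illustrative only; the merging pass over adjacent leaves is omitted for brevity, and a square power-of-two image is assumed):

```python
import numpy as np

def quadtree_split(img, predicate, r0=0, c0=0, size=None, leaves=None):
    """Recursively split img into quadrants until predicate(block) is True.

    Returns homogeneous blocks as (row, col, size) tuples. A full
    split-and-merge pass would then merge adjacent blocks whose union
    still satisfies the predicate."""
    if leaves is None:
        leaves = []
    if size is None:
        size = img.shape[0]              # assumes a square power-of-two image
    block = img[r0:r0 + size, c0:c0 + size]
    if size == 1 or predicate(block):    # homogeneous: keep as one leaf
        leaves.append((r0, c0, size))
    else:                                # inhomogeneous: split into quadrants
        h = size // 2
        for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
            quadtree_split(img, predicate, r0 + dr, c0 + dc, h, leaves)
    return leaves

img = np.zeros((4, 4), dtype=np.uint8)
img[:2, :2] = 255                        # one bright quadrant
homogeneous = lambda b: b.max() - b.min() <= 10
print(quadtree_split(img, homogeneous))  # [(0, 0, 2), (0, 2, 2), (2, 0, 2), (2, 2, 2)]
```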

4.2.4 Clustering Based Segmentation Method

Clustering-based techniques segment the image into clusters of pixels with
similar characteristics. Data

clustering is the method that divides the data elements into clusters such that
elements in same cluster are more similar to each other than others. There are
two basic categories of clustering methods: Hierarchical method and Partition
based method. The hierarchical methods are based on the concept of trees. In
this the root of the tree represents the whole database and the internal nodes
represent the clusters. On the other side, partition-based methods iteratively
use optimization to minimize an objective function. Within these two families
there are various algorithms for finding clusters. There are two basic types
of clustering.

1) Hard Clustering: Hard clustering is a simple clustering technique that
divides the image into a set of clusters such that each pixel belongs to
exactly one cluster. These methods use membership values of either 1 or 0: a
pixel either belongs to a particular cluster or it does not. An example of a
hard clustering technique is k-means clustering, also known as HCM. In this
technique, the cluster centers are first computed and then each pixel is
assigned to the nearest center. It emphasizes maximizing intra-cluster
similarity while minimizing inter-cluster similarity.
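A minimal HCM (k-means) sketch on pixel intensities illustrates the two alternating steps, assignment to the nearest center and center recomputation (illustrative only, not the thesis implementation; 1-D intensity features are assumed):

```python
import numpy as np

def hcm(pixels, k, iters=20, seed=0):
    """Hard c-means (k-means) on a 1-D array of pixel intensities.
    Each pixel belongs to exactly one cluster (membership 0 or 1)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels.astype(float), size=k, replace=False)
    for _ in range(iters):
        # assignment step: label each pixel with its nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # update step: move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

pixels = np.array([10, 12, 11, 200, 205, 198], dtype=float)
labels, centers = hcm(pixels, k=2)
print(sorted(centers))   # [11.0, 201.0]
```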

2) Soft Clustering: Soft clustering is a more natural type of clustering
because, in real images, an exact division is not possible due to the
presence of noise. Thus soft clustering techniques are most useful for image
segmentation where the division is not strict. An example of such a technique
is fuzzy c-means clustering. In this technique, pixels are partitioned into
clusters based on partial membership, i.e., one pixel can belong to more than
one cluster, and this degree of belonging is described by membership values.
This technique is more flexible than the others.

4.2.5 Watershed Based Methods

The watershed-based methods use the concept of topological interpretation.
Here the intensity image is viewed as a set of basins, each with a hole at
its minimum from which water spills. When the water reaches the border of a
basin, adjacent basins are merged. To maintain the separation between basins,
dams are required; these form the borders of the segmented regions and are
constructed using dilation. Watershed methods consider the gradient of the
image as a topographic surface, so pixels with higher gradient values are
represented as continuous boundaries.

4.2.6 Partial Differential Equation Based Segmentation Method

Partial differential equation (PDE) based methods are fast segmentation
methods appropriate for time-critical applications. There are two basic PDE
methods: the non-linear isotropic diffusion filter (used to enhance the
edges) and convex non-quadratic variation restoration (used to remove noise).
The results of PDE methods are blurred edges and boundaries that can be
adjusted using closing operators. A fourth-order PDE is used to reduce image
noise, while a second-order PDE better detects edges and boundaries.

4.2.7 Artificial Neural Network Based Segmentation Method

Artificial neural network based segmentation methods simulate the learning
strategies of the human brain for the purpose of decision making. Nowadays
this approach is often used for the segmentation of medical images,
separating the required object from the background. A neural network is made
of a large number of connected nodes, each connection having a particular
weight. This method is independent of PDEs: the segmentation problem is
converted into tasks that are solved using a neural network. The method has
two basic steps: feature extraction and segmentation by the neural network.

4.3 Preprocessing

Pre-processing is a common name for operations on images at the lowest level
of abstraction: both input and output are intensity images. These iconic
images are of the same kind as the original data captured by the sensor, with
an intensity image usually represented by a matrix of image function values
(brightnesses). The aim of pre-processing is to improve the image data by
suppressing unwanted distortions or enhancing image features important for
further processing; geometric transformations of images (e.g. rotation,
scaling, translation) are also classified among pre-processing methods here,
since similar techniques are used.

4.4 Skull Stripping

The MRI system produces the brain image as 3D volumetric data expressed as a
stack of two-dimensional slices, and computer-aided tools are necessary to
explore the information contained in these brain slices for various brain
image applications such as volumetric analysis, study of anatomical
structure, localization of pathology, diagnosis, treatment planning, surgical
planning, computer-integrated surgery, construction of anatomical models, 3D
visualization, and research.

Moreover, since skull stripping is a preliminary step designed to eliminate
non-brain tissue from MR brain images for many clinical applications and
analyses, its accuracy and speed are considered key factors in brain image
segmentation and analysis. Accurate and automated skull stripping methods
help improve the speed and accuracy of prognostic and diagnostic procedures
in medical applications. A number of automated skull stripping algorithms are
available in the literature, and several comparative studies have analyzed
their performance using the commonly available datasets.
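A very simplified skull-stripping sketch follows: threshold the slice and keep only the largest 8-connected bright component (assumed to be the brain). Real skull-stripping pipelines add morphological opening/closing and surface modelling; this is only an illustration of the core idea:

```python
from collections import deque
import numpy as np

def largest_component_mask(binary):
    """Return a mask of the largest 8-connected foreground component."""
    visited = np.zeros_like(binary, dtype=bool)
    best = np.zeros_like(binary, dtype=bool)
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not visited[r, c]:
                comp, todo = [], deque([(r, c)])   # flood-fill one component
                visited[r, c] = True
                while todo:
                    y, x = todo.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            yy, xx = y + dy, x + dx
                            if (0 <= yy < rows and 0 <= xx < cols
                                    and binary[yy, xx] and not visited[yy, xx]):
                                visited[yy, xx] = True
                                todo.append((yy, xx))
                if len(comp) > best.sum():         # keep the biggest so far
                    best = np.zeros_like(binary, dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best

def skull_strip(img, T):
    """Keep only the largest bright region of an MR slice (toy version)."""
    return img * largest_component_mask(img > T)
```

For example, on a tiny slice with one large bright region and one stray bright pixel, only the large region survives.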

Figure 4.3 Skull Stripped image

Figure 4.3 shows skull stripping performed on an abnormal brain image.

4.5 CSF, GM and WM Segmentation

The goal of image segmentation is to divide an image into a set of
semantically meaningful, homogeneous, and nonoverlapping regions of
similar attributes such as intensity, depth, color, or texture. The segmentation
result is either an image of labels identifying each homogeneous region or a
set of contours which describe the region boundaries.

Fundamental components of structural brain MRI analysis include the
classification of MRI data into specific tissue types and the identification and
description of specific anatomical structures. Classification means to assign to
each element in the image a tissue class, where the classes are defined in
advance. The problems of segmentation and classification are interlinked
because segmentation implies a classification, while a classifier implicitly
segments an image. In the case of brain MRI, image elements are typically
classified into three main tissue types: white matter (WM), gray matter (GM),
and cerebrospinal fluid (CSF).

Image segmentation can be performed on 2D images, sequences of 2D images, or
3D volumetric imagery. Most of the image segmentation research
has focused on 2D images. If the data is defined in 3D space (e.g., obtained
from a series of MRI images), then typically each image “slice” is segmented
individually in a “slice-by-slice” manner. This type of segmenting 3D image
volumes often requires a postprocessing step to connect segmented 2D slices
into a 3D volume or a continuous surface. Furthermore, the resulting
segmentation can contain inconsistencies and nonsmooth surface due to
omitting important anatomical information in 3D space. Therefore, the
development of 3D segmentation algorithms is desired for more accurate
segmentation of volumetric imagery. The main difference between 2D and 3D
image segmentation is in the processing elements, pixels/voxels, respectively,
and their 2D or 3D neighborhoods over which image features are calculated.

4.6 Fuzzy C Means Clustering

FCM [3] is a form of clustering in which each data point can belong to more
than one cluster. It involves assigning data points to clusters such that
items in the same cluster are as similar as possible, while items belonging
to different clusters are as dissimilar as possible. The standard FCM is an
iterative, unsupervised clustering algorithm. The FCM model is described as
follows.

This algorithm reduces the image to a compact form without losing its
features and works well across image dimensions. The kernel/filter used is
K = [1 0 1; 0 1 0; 1 0 1], and signal reconstruction is achieved by an
inverse transformation. The wavelet technique is a natural choice for
analysing such localized, non-stationary data because of the well-proven
ability of wavelets to see through signals at different resolutions. Wavelets
are mathematical functions that cut up data into different scale-shift
components. The wavelet decomposition splits the analysed signal into
approximation (average) and detail coefficients using finite impulse response
digital filters. The main task in wavelet analysis (decomposition and
reconstruction) is to find a good analysing function (mother wavelet) that
performs an optimal decomposition. The ability of the DWT to separate
pertinent signal components has led to several wavelet-based techniques that
supersede those based on traditional Fourier methods, and the discrete
wavelet transform fits in well with standard signal filtering and encoding
methodologies.
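A one-level 2-D Haar decomposition illustrates how an image is split into the approximation (cA) and detail (cH, cV, cD) coefficients used later for feature extraction (illustrative averaging/differencing sketch; libraries differ in filter normalization and in the cH/cV naming convention):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (averaging/differencing form).

    Returns approximation cA and detail matrices cH, cV, cD; assumes even
    image dimensions."""
    x = img.astype(float)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # lowpass + downsample, columns
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # highpass + downsample, columns
    cA = (lo[0::2, :] + lo[1::2, :]) / 2.0  # low-low: approximation
    cV = (lo[0::2, :] - lo[1::2, :]) / 2.0  # detail coefficients
    cH = (hi[0::2, :] + hi[1::2, :]) / 2.0
    cD = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return cA, cH, cV, cD

cA, cH, cV, cD = haar_dwt2(np.arange(16, dtype=float).reshape(4, 4))
print(cA.mean(), cH.mean(), cV.mean(), cD.mean())   # mean2-style features
```

A perfectly flat image yields cA equal to the image value and all detail coefficients zero, which is a quick sanity check of the decomposition.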

Processing Steps for Image Enhancement and Segmentation

Image Enhancement: Image enhancement is done using mean adjustment, which
improves contrast and brightness.

1) Different ranges of intensity thresholding are given and analyzed.

2) Mean adjustment is applied to the input image:


NTSC(:,:,1)=NTSC(:,:,1)+MeanAdjust*(1-NTSC(:,:,1));

3) For the mean-adjusted image, the maxima and minima of the image are
calculated:

Minima(k)=Sort(ceil(LowerThresholding*R*C))
Maxima(k)=Sort(ceil(UpperThresholding*R*C))
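In Python terms, the mean-adjustment step on a luminance channel scaled to [0, 1] corresponds to the following (a direct transliteration of the MATLAB line above; `amount` plays the role of MeanAdjust):

```python
import numpy as np

def mean_adjust(luma, amount):
    """Brightness enhancement on a [0, 1] luminance channel:
    Y' = Y + amount * (1 - Y), so dark pixels are lifted more than
    bright ones and values stay within [0, 1]."""
    return luma + amount * (1.0 - luma)

Y = np.array([[0.0, 0.5], [0.8, 1.0]])
print(mean_adjust(Y, 0.2))   # [[0.2  0.6 ] [0.84 1.  ]]
```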

Modified Fuzzy C Means Segmentation

1) Initialize the classes.

2) Double precision of the image is used.

3) To simplify the computation, the M*N matrix is converted into an M*1
matrix.

4) Using Bayes' rule, the posterior probability is analyzed:

Enhancement=Enhancement-MiPix+1

5) The length of the matrix is calculated.

6) The maxima of the image are estimated.

Pixel Classification: the absolute difference between the centroid position
and the location and intensity of the pixels is calculated, and each pixel is
classified by finding the minimum of this difference.

Mean Estimation: the location of the classified pixels is found by
calculating the mean of the classified pixels.

7) The output image is calculated by estimating the size and absolute value
of the image.
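For reference, the classic (unmodified) FCM membership and centroid updates can be sketched as follows; the thesis applies a modified variant, so this is only the standard baseline on 1-D intensities:

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=50, seed=0):
    """Standard fuzzy C-means on 1-D data x.

    u[i, k] is the membership of point k in cluster i; memberships per
    point sum to 1. m > 1 is the fuzziness exponent."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # normalize initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)   # fuzzy-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))        # inverse-distance memberships
        u /= u.sum(axis=0)                    # renormalize per point
    return u, centers

x = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 9.0, 9.2])
u, centers = fcm(x)
print(np.round(np.sort(centers), 1))
```

Each pixel thus carries a graded membership in every cluster, which is exactly the "partial membership" property the soft-clustering discussion above relies on.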

CHAPTER 5

FEATURE EXTRACTION FROM SEGMENTED IMAGES

Feature extraction is a process of dimensionality reduction by which an
initial set of raw data is reduced to more manageable groups for
processing. A characteristic of these large data sets is a large number of
variables that require a lot of computing resources to process. Feature
extraction is the name for methods that select and /or combine variables into
features, effectively reducing the amount of data that must be processed,
while still accurately and completely describing the original data set.

5.1 Block Diagram

Figure 5.1 Block Diagram for proposed CNN classification

Figure 5.1 shows that the MRI test image is given as input, where the first
process is preprocessing. A skull stripping algorithm is performed here, and
the RGB image is converted into a grayscale image. Next comes the
segmentation process, where CSF, GM and WM segmentation are done. Eleven
features are then computed and extracted to distinguish normal from abnormal
brain images. Finally, classification using CNN is performed.

5.2 Features used

The features used here are listed below with formulas:

cA = mean2(cA);
cH = mean2(cH);
cV = mean2(cV);
cD = mean2(cD);
Mean = mean2(J);
Standard_Deviation = std2(J);
Entropy = entropy(J);
Variance = mean2(var(double(J)));
a = sum(double(J(:)));
Smoothness = 1-(1/(1+a));
Kurtosis = kurtosis(double(J(:)));
Skewness = skewness(double(J(:)));

Here, cA is the approximation coefficients matrix, and cH, cV, and cD are
the detail coefficients matrices (horizontal, vertical, and diagonal,
respectively), obtained from a single-level 2-D discrete wavelet
decomposition.
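For readers without MATLAB, the statistical features of section 5.2 can be approximated in NumPy as follows (an illustrative sketch: `entropy` here assumes an 8-bit image, a non-constant image is assumed for the moment features, and the DWT coefficients cA, cH, cV, cD would come from a separate wavelet step):

```python
import numpy as np

def texture_features(J):
    """Statistical features from section 5.2 in NumPy form (illustrative;
    the thesis computes them in MATLAB with mean2/std2/entropy/etc.)."""
    x = J.astype(float).ravel()
    mean, std = x.mean(), x.std(ddof=1)                  # std2 uses N-1
    hist, _ = np.histogram(J, bins=256, range=(0, 256))  # 8-bit histogram
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()                    # Shannon entropy, bits
    variance = np.var(J.astype(float), axis=0, ddof=1).mean()  # mean2(var(J))
    a = x.sum()
    smoothness = 1 - 1 / (1 + a)
    z = (x - mean) / x.std()         # population std, as MATLAB moments use
    skewness, kurtosis = (z ** 3).mean(), (z ** 4).mean()
    return dict(Mean=mean, Std=std, Entropy=entropy, Variance=variance,
                Smoothness=smoothness, Skewness=skewness, Kurtosis=kurtosis)

J = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(texture_features(J)['Entropy'])   # 1.0 (two equally likely gray levels)
```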

CHAPTER 6

BRAIN IMAGE CLASSIFICATION

The objective of image classification is to identify and portray, as a unique
gray level (or color), the features occurring in an image in terms of the
object or type of land cover these features actually represent on the ground.
Image classification is perhaps the most important part of digital image
analysis.
Image classification is a complex process that may be affected by
many factors. Because classification results are the basis for many
environmental and socioeconomic applications, scientists and practitioners
have made great efforts in developing advanced classification approaches and
techniques for improving classification accuracy. Image classification is
used in many fields such as medicine, education and security. Correct
classification has vital importance, especially in medicine. Therefore,
improved methods are needed in this field. The proposed deep CNNs are an
often-used architecture for deep learning and have been widely used in
computer vision and audio recognition. In the literature, different values of
factors used for the CNNs are considered. From the results of the experiments
on the CIFAR dataset, we argue that the network depth is of the first priority
for improving the accuracy. It can not only improve the accuracy, but also
achieve the same high accuracy with less complexity compared to increasing
the network width.
In order to classify a set of data into different classes or categories, the
relationship between the data and the classes into which they are classified
must be well understood. Generally, classification is done by a computer, so,
to achieve classification by a computer, the computer must be trained.
Sufficient accuracy is not always achieved with the results obtained, so
training is key to the success of classification. To improve the classification
accuracy, inspired by the ImageNet challenge, the proposed work considers

classification of multiple images into the different categories (classes) with
more accuracy in classification, reduction in cost and in a shorter time by
applying parallelism using a deep neural network model.

The image classification problem requires determining the category (class)
that an image belongs to. The problem is considerably complicated as the
number of categories grows, if several objects of different classes are
present in the image, and if the semantic class hierarchy is of interest,
because an image can then belong to several categories simultaneously. Fuzzy
classes present another difficulty for probabilistic category assignment.
Moreover, a combination of different classification approaches has been shown
to help improve classification accuracy [1].

6.1 Convolutional Neural Network

Artificial Intelligence has been witnessing monumental growth in
bridging the gap between the capabilities of humans and machines.
Researchers and enthusiasts alike work on numerous aspects of the field to
make amazing things happen. One such area is the domain of Computer Vision.

The agenda for this field is to enable machines to view the world as
humans do, perceive it in a similar manner, and even use the knowledge for a
multitude of tasks such as object and video recognition, object analysis and
classification, media recreation, recommendation systems, natural language
processing, etc. The advancements in Computer Vision with Deep Learning have
been constructed and perfected over time, primarily around one particular
algorithm: the Convolutional Neural Network.

In deep learning, a convolutional neural network (CNN, or ConvNet) is a
class of deep neural networks most commonly applied to analyzing visual
imagery [1]. They are also known as shift invariant or space invariant
artificial neural networks (SIANN), based on their shared-weights
architecture and translation invariance characteristics [2][3]. They have
applications in image and video recognition, recommender systems [4], image
classification, medical image analysis, natural language processing [5], and
financial time series [6].

CNNs are regularized versions of multilayer perceptrons. Multilayer
perceptrons usually mean fully connected networks, that is, each neuron in
one layer is connected to all neurons in the next layer. The "fully-
connectedness" of these networks makes them prone to overfitting the data.
Typical ways of regularization include adding some form of magnitude
measurement of the weights to the loss function. CNNs take a different
approach towards regularization: they take advantage of the hierarchical
pattern in data and assemble more complex patterns using smaller and simpler
patterns. Therefore, on the scale of connectedness and complexity, CNNs are
on the lower extreme.

Convolutional networks were inspired by biological processes, in that
the connectivity pattern between neurons resembles the organization of the
animal visual cortex. Individual cortical neurons respond to stimuli only in a
restricted region of the visual field known as the receptive field. The receptive
fields of different neurons partially overlap such that they cover the entire
visual field. CNNs use relatively little pre-processing compared to
other image classification algorithms: the network learns the filters that in
traditional algorithms were hand-engineered. This independence from prior
knowledge and human effort in feature design is a major advantage. The name
"convolutional neural network" indicates that the network employs a
mathematical operation called convolution, a specialized kind of linear
operation. Convolutional networks are simply neural networks that use
convolution in place of general matrix multiplication in at least one of
their layers [11].

A convolutional neural network consists of an input and an output
layer, as well as multiple hidden layers. The hidden layers of a CNN typically
consist of a series of convolutional layers that convolve with a multiplication
or other dot product. The activation function is commonly a ReLU layer, and
is subsequently followed by additional layers such as pooling layers,
fully connected layers and normalization layers, referred to as hidden layers
because their inputs and outputs are masked by the activation function and
final convolution. Though the layers are colloquially referred to as
convolutions, this is only by convention: mathematically, the operation is
a sliding dot product or cross-correlation. This has significance for the
indices in the matrix, in that it affects how the weight is determined at a
specific index point.
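The sliding dot product can be made concrete with a short sketch in plain NumPy (an illustrative loop implementation, not an optimized framework kernel; the image and filter values are arbitrary examples):

```python
import numpy as np

def conv2d(image, kernel):
    """CNN-style 'convolution': a sliding dot product (cross-correlation,
    i.e. the kernel is not flipped), with no padding and stride 1."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with the receptive field at (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])   # a simple difference filter
print(conv2d(image, kernel).shape)             # (3, 3)
```

A deep-learning framework performs the same operation over whole batches and many filters at once, which produces the feature-map tensors with the shape described next.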

When programming a CNN, the input is a tensor with shape (number of
images) x (image height) x (image width) x (image depth). After passing
through a convolutional layer, the image becomes abstracted to a feature map,
with shape (number of images) x (feature map height) x (feature map width) x
(feature map channels). A convolutional layer within a neural network should
have the following attributes:

• Convolutional kernels defined by a width and height (hyper-parameters).

• The number of input channels and output channels (hyper-parameters).

• The depth of the convolution filter (the number of input channels) must be
equal to the number of channels (depth) of the input feature map.

Convolutional layers convolve the input and pass the result to the next
layer. This is similar to the response of a neuron in the visual cortex to a
specific stimulus [12]. Each convolutional neuron processes data only for
its receptive field. Although fully connected feedforward neural networks can
be used to learn features as well as classify data, it is not practical to apply
this architecture to images: a very high number of neurons would be
necessary, even in a shallow (the opposite of deep) architecture, due to the
very large input sizes associated with images, where each pixel is a relevant
variable. For instance, a fully connected layer for a (small) image of size 100
x 100 has 10,000 weights for each neuron in the second layer. The
convolution operation brings a solution to this problem, as it reduces the
number of free parameters and allows the network to be deeper with fewer
parameters.
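The parameter saving can be checked with simple arithmetic; only the 100 x 100 image size comes from the text above, while the 5 x 5 kernel and 32 filters are assumed purely for illustration:

```python
# Fully connected: every neuron in the next layer sees every input pixel.
pixels = 100 * 100                  # a (small) 100 x 100 grayscale image
weights_per_fc_neuron = pixels      # 10,000 weights for each neuron

# Convolutional: all output positions share the same small kernels.
kernel_size = 5 * 5                 # assumed 5 x 5 filter
num_filters = 32                    # assumed number of output channels
conv_params = kernel_size * num_filters + num_filters   # weights + biases

print(weights_per_fc_neuron)        # 10000
print(conv_params)                  # 832
```

The convolutional layer's cost is independent of the image size, which is why the network can be made deeper without the parameter count exploding.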

6.2 Support Vector Machine (SVM)

A support vector machine (SVM) is a supervised machine learning model
that uses classification algorithms for two-group classification problems or
regression problems. It uses a technique called the kernel trick to transform
the data, and then, based on these transformations, it finds an optimal
boundary between the possible outputs. After being given sets of labeled
training data for each category, an SVM model is able to categorize new
examples. The algorithm creates a line or a hyperplane which separates the
data into classes.

An advanced kernel-based technique, the Support Vector Machine
(SVM), is deployed for the classification of the volume of MRI data as
normal or abnormal. The SVM is a supervised classification algorithm where a
boundary is drawn between two different categories to differentiate between
them; it classifies between two classes by constructing a hyperplane in a
high-dimensional feature space which can be used for classification. SVM
works relatively well when there is a clear margin of separation between
classes. It is more effective in high-dimensional spaces, is effective in
cases where the number of dimensions is greater than the number of samples,
and is relatively memory efficient.
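The hyperplane idea can be sketched with a linear SVM trained by sub-gradient descent on the hinge loss in plain NumPy; the toy data, learning rate and regularization constant below are assumptions for illustration, not the thesis configuration:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM trained by sub-gradient descent on the hinge
    loss. Labels y must be in {-1, +1}; returns the hyperplane (w, b)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:      # inside the margin: hinge active
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                              # only the regularizer acts
                w -= lr * lam * w
    return w, b

# Two toy feature-vector clusters standing in for normal / abnormal scans
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
print("training accuracy:", (np.sign(X @ w + b) == y).mean())
```

Kernel SVMs extend this by replacing the dot products with a kernel function, which is what provides the high-dimensional feature space mentioned above.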

CHAPTER 7

EXPERIMENTAL DATA, SIMULATION AND RESULTS

Here, the features are calculated for 40 images using the above-
mentioned formulas. The label parameter is used to differentiate the
normal images from the abnormal images: 1 denotes normal images, and 2 and 3
denote abnormal images.
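The feature vector (the four level-1 DWT sub-bands plus first-order statistics) can be sketched in plain NumPy. The statistics below use standard textbook definitions and a Haar wavelet, which may differ in normalization from the exact formulas used in the thesis; 8-bit intensities are assumed:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: approximation (cA) and horizontal /
    vertical / diagonal detail (cH, cV, cD) sub-bands."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]  # even size
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def features(img):
    """An 11-element feature vector: mean of each DWT sub-band plus
    seven first-order statistics of the image (0-255 intensities)."""
    cA, cH, cV, cD = haar_dwt2(img)
    mean, var = img.mean(), img.var()
    sd = np.sqrt(var)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    smoothness = 1 - 1 / (1 + var)
    centered = img - mean
    skewness = np.mean(centered ** 3) / sd ** 3 if sd else 0.0
    kurtosis = np.mean(centered ** 4) / sd ** 4 if sd else 0.0
    return [cA.mean(), cH.mean(), cV.mean(), cD.mean(),
            mean, sd, entropy, var, smoothness, kurtosis, skewness]

print(len(features(np.arange(64, dtype=float).reshape(8, 8))))  # 11
```

One such vector per image, together with its label, forms a row of the feature tables below.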

Table 7.1 Feature Calculation for first set of datasets

Table 7.1 shows the values calculated for all the features on the
first set of the dataset. Here, a label is given to differentiate normal and
abnormal images: 2 and 3 denote abnormal images and 1 denotes normal
images.

Table 7.2 Feature Calculation for remaining set of datasets

Figure 7.1 Original abnormal image

Table 7.2 shows the feature calculation for the remaining set of the
dataset. Figure 7.1 shows the original abnormal image.

Figure 7.2 Resized image

Figure 7.3 CSF, GM, WM Segmentation

Figure 7.2 shows the resized version of the original abnormal image.
Figure 7.3 shows the segmentation of CSF (cerebrospinal fluid), GM (grey
matter) and WM (white matter) using the Fuzzy C-means algorithm.
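The Fuzzy C-means step can be sketched as a minimal 1-D intensity clustering in NumPy (illustrative only, not the thesis implementation; the three-cluster setup, fuzzifier m = 2 and evenly spread initial centers are assumptions):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, iters=50):
    """Minimal fuzzy C-means on a 1-D array of pixel intensities.
    With n_clusters=3 the clusters play the role of CSF / GM / WM."""
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(iters):
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # memberships sum to 1
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# Toy intensity data: three tissue-like intensity groups
x = np.concatenate([np.full(50, 20.0), np.full(50, 120.0), np.full(50, 230.0)])
centers, u = fuzzy_c_means(x)
print(np.round(np.sort(centers)))   # centers settle near 20, 120 and 230
```

Each pixel then carries a soft membership in every tissue class; taking the maximum membership per pixel yields hard CSF, GM and WM masks like those in Figure 7.3.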

Figure 7.4 ROI(Region of Interest) operated image

Figure 7.5 ROI image

Figure 7.4 shows the ROI region of the operated image where the
tumor is detected. Figure 7.5 shows the ROI region where the tumour is
isolated without noise.

Figure 7.6 Tumour presented image

(a) (b)

Figure 7.7(a) Dialog box for CNN (b) Dialog box for SVM

Figure 7.6 shows the isolated tumour image, from which it is found
that a tumour is present. Figure 7.7(a) shows the dialog box stating that a
tumor is present for CNN, and Figure 7.7(b) shows the corresponding dialog
box for SVM.

Figure 7.8 Accuracy of CNN method

Figure 7.9 Accuracy of SVM method

Figure 7.10 Normal original image

Figure 7.8 shows the accuracy of the CNN method and Figure 7.9
shows the accuracy of the SVM method. Figure 7.10 shows the original image
of the normal tissue.

Figure 7.11 Resized normal image

Figure 7.12 CSF, GM, WM Segmentation

Figure 7.11 shows the resized version of the original normal image.
Figure 7.12 shows the segmentation of CSF (cerebrospinal fluid), GM (grey
matter) and WM (white matter) using the Fuzzy C-means algorithm.

Figure 7.13 Operated ROI Image

Figure 7.14 ROI Image

Figure 7.13 shows the ROI region of the operated image. Figure 7.14
shows the original ROI region.

Figure 7.15 Normal image detected without tumor

(a) (b)

Figure 7.16(a)Dialog box of CNN (b) Dialog box of SVM

Figure 7.15 shows that no tumor is detected in the image. Figure
7.16(a) shows the dialog box stating that the image is normal, without any
tumor, for CNN, while Figure 7.16(b) shows that SVM reports a tumor as
present in the same normal image.

10
9
8
7
6
5 Normal

4 Abnormal

3
2
1
0
1 2 3 4 5 6 7 8 9 10 11

Figure 7.17 Comparison table for cA

Figure 7.17 shows the graph for the approximation coefficient
matrix cA for both normal and abnormal images.


Figure 7.18 Comparison table for cH

Figure 7.18 shows the graph for the horizontal detail coefficient
matrix cH for both normal and abnormal images. Both graphs show that the
values are higher for normal images.


Figure 7.19 Comparison table for cV


Figure 7.20 Comparison table for cD

Figure 7.19 shows the graph for the vertical detail coefficient matrix
cV for both normal and abnormal images. Figure 7.20 shows the graph for the
diagonal detail coefficient matrix cD for both normal and abnormal images.
Both graphs show that the values are higher for normal images.


Figure 7.21 Comparison table for Mean


Figure 7.22 Comparison table for S.D(Standard Deviation)

Figure 7.21 shows the graph for the mean for both normal and
abnormal images. Figure 7.22 shows the graph for the standard deviation
(S.D.) for both normal and abnormal images. Both graphs show that the
values are higher for normal images.


Figure 7.23 Comparison table for Entropy


Figure 7.24 Comparison table for Variance

Figure 7.23 shows the graph for entropy for both normal and
abnormal images. Figure 7.24 shows the graph for variance for both normal
and abnormal images. Both graphs show that the values are higher for normal
images.


Figure 7.25 Comparison table for Smoothness


Figure 7.26 Comparison table for Kurtosis

Figure 7.25 shows the graph for Smoothness for both normal and
abnormal images. Figure 7.26 shows the graph for Kurtosis for both normal
and abnormal images.


Figure 7.27 Comparison table for Skewness

Figure 7.27 shows the graph for skewness for both normal and
abnormal images.

CLASSIFICATION METHOD                  ACCURACY

Convolutional Neural Network (CNN)     97.50%

Support Vector Machine (SVM)           95.00%

Table 7.3 Comparison of accuracy between CNN and SVM

CHAPTER 8

CONCLUSION AND FUTURE WORK

In this proposed work, MRI brain images of different sizes have been used
and preprocessed to remove the skull. Segmentation is done by the Fuzzy C-
means algorithm to segment the CSF, GM and WM regions, and 11 features are
extracted from the Region Of Interest (ROI) to find whether a tumour is
present or not. The normal and abnormal images are classified by the
Convolutional Neural Network (CNN) and the Support Vector Machine (SVM),
and it is shown that the extracted feature values, such as the mean, are
higher for normal images than for abnormal images. The algorithms are thus
able to differentiate whether an image is normal or abnormal, and the
accuracy is increased with the CNN method compared with the SVM method:
the accuracy achieved by CNN is 97.50%, while that of SVM is 95.00%.

REFERENCES

1. Tonmoy Hossain et al., "Brain Tumor Detection Using Convolutional
Neural Network", IEEE Transactions on Medical Imaging, 2017.

2. Madhupriya G et al., "Brain Tumor Segmentation With Deep Learning
Technique", IEEE Xplore Part Number: CFP19J32-ART, ISBN: 978-1-5386-9439-8,
2019.

3. Mahnoor Ali et al., "Brain Tumour Image Segmentation Using Deep
Networks", DOI 10.1109/ACCESS.2020.3018160, IEEE Access.

4. Jose Bernal et al., "Quantitative Analysis of Patch-Based Fully
Convolutional Neural Networks for Tissue Segmentation on Brain Magnetic
Resonance Imaging", DOI 10.1109/ACCESS.2019.2926697, IEEE Access, 2019.

5. Jinglong Du et al., "Brain MRI Super-Resolution Using 3D Dilated
Convolutional Encoder-Decoder Network", DOI 10.1109/ACCESS.2020.2968395,
Vol. 23, No. 2, 2020.

6. J. Seetha et al., "Brain tumor classification using Convolutional neural
networks", IEEE Transactions on Medical Imaging, 2019.

7. Meiyan Huang et al., "Brain Tumor Segmentation Based on Local
Independent Projection-based Classification", DOI 10.1109/TBME.2014.2325410,
IEEE Transactions on Biomedical Engineering.

8. Abdu Gumaei et al., "A Hybrid Feature Extraction Method with Regularized
Extreme Learning Machine for Brain Tumor Classification", DOI
10.1109/ACCESS.2019.2904145, IEEE Access.

9. Shan Shen et al., "MRI Fuzzy Segmentation of Brain Tissue Using
Neighborhood Attraction With Neural-Network Optimization", IEEE Transactions
on Information Technology in Biomedicine, Vol. 9, No. 3, September 2015.

10. Chenjie Ge et al., "Enlarged Training Dataset by Pairwise GANs for
Molecular-Based Brain Tumor Classification", DOI 10.1109/ACCESS.2020.2969805,
IEEE Access.

11. Neelum Noreen et al., "A Deep Learning Model Based on Concatenation
Approach for the Diagnosis of Brain Tumor", DOI 10.1109/ACCESS.2020.2978629,
IEEE Access.

12. Ghazanfar Latif et al., "Enhanced MR Image Classification Using Hybrid
Statistical and Wavelets Features", DOI 10.1109/ACCESS.2018.2888488, IEEE
Access.

13. Zahraa A. Al-Saffar et al., "A Novel Approach to Improving Brain Image
Classification Using Mutual Information-Accelerated Singular Value
Decomposition", DOI 10.1109/ACCESS.2020.2980728, IEEE Access.

14. Pradeep Kumar Mallick et al., "Brain MRI Image Classification for
Cancer Detection Using Deep Wavelet Autoencoder-Based Deep Neural Network",
DOI 10.1109/ACCESS.2019.2902252, IEEE Access.

15. Afnan M. Alhassan et al., "BAT Algorithm With Fuzzy C-Ordered Means
(BAFCOM) Clustering Segmentation and Enhanced Capsule Networks (ECN) for
Brain Cancer MRI Images Classification", DOI 10.1109/ACCESS.2020.3035803,
IEEE Access.

16. Monika Shukla et al., "A Comparative Study of Wavelet and Curvelet
Transform for Image Denoising", IOSR Journal of Electronics and
Communication Engineering (IOSR-JECE), Volume 7, Issue 4, Sep.-Oct. 2013,
pp. 63-68.

17. Aneesh S Perumprath et al., "A Modified Deep Convolutional Neural
Network for Brain Abnormalities Detection", International Journal of
Engineering Research & Technology (IJERT), Vol. 9, Issue 01, January 2020.

18. Anubha et al., "A Review on MRI Image Segmentation Techniques",
International Journal of Advanced Research in Electronics and Communication
Engineering (IJARECE), Volume 4, Issue 5, May 2015.

19. Shunmaga Sundari et al., "Brain Tumor Segmentation Using Convolutional
Neural Networks in MRI Images", International Journal of Advanced Research
in Management, Architecture, Technology and Engineering (IJARMATE), Vol. 3,
Issue 6, June 2017.

20. "Segmentation and Detection of Tumor in MRI images Using CNN and SVM
Classification", Proc. IEEE Conference on Emerging Devices and Smart
Systems (ICEDSS 2018), 2-3 March 2018, Mahendra Engineering College,
Tamilnadu, India.

21. Siti Noraini Sulaiman et al., "Segmentation of Brain MRI Image Based on
Clustering Algorithm", 2014 IEEE Symposium on Industrial Electronics and
Applications (ISIEA), Sept 28-Oct 1, 2014.

22. Jeevitha R et al., "Segmentation Techniques for Brain Tumor from MRI -
A Survey", Advances in Parallel Computing journal (IOS Press), Scopus, DOI:
10.3233/APC200181, 2020.

23. Jeevitha R et al., "MRI Brain Abnormality detection using Conventional
Neural Network(CNN)", Advances in Parallel Computing journal (IOS Press),
Scopus, 2021.

PUBLICATIONS

1. Jeevitha R, "Segmentation Techniques for Brain Tumor from MRI - A
Survey", presented at COMET 2K20 (3rd International Conference on Emerging
Current Trends in COMputing and Expert Technology), Panimalar Engineering
College, March 6th and 7th, 2020; published in the Advances in Parallel
Computing journal (IOS Press). Indexed in Scopus.

2. Jeevitha R, "MRI Brain Abnormality detection using Conventional Neural
Network(CNN)", presented at COMET 2K21 (4th International Conference on
Emerging Current Trends in COMputing and Expert Technology), Panimalar
Engineering College, March 26th and 27th, 2021; accepted for publication in
the Advances in Parallel Computing journal (IOS Press). Indexed in Scopus.
