
Biomedical Signal Processing and Control 112 (2026) 108483

journal homepage: [Link]/locate/bspc
CBAM–SMK: Integrating Convolution Block Attention Module with separable multi-resolution kernels in deep neural networks for brain tumor classification

Binish M.C. a,∗, Swarun Raj R.S. a, Vinu Thomas b

a Model Engineering College, APJ Abdul Kalam Technological University, Kerala, India
b APJ Abdul Kalam Technological University, Kerala, India

ARTICLE INFO

Keywords: Brain tumor; Convolutional neural network; Image classification; Attention mechanism

ABSTRACT

Proficiency and expertise are required for radiologists to accurately detect brain tumors, a process that demands a significant amount of time. Deep learning technologies are increasingly used to automate the diagnosis of brain tumors, yielding outcomes that are more precise and efficient than previous methods. Attention-based models offer dynamic feature refinement and amplification, which improves their diagnostic capabilities. The efficacy of channel, spatial, or combined attention methods in the convolutional block attention module (CBAM) for the classification of brain cancer has not yet been investigated. This study integrated a convolutional neural network (CNN) with CBAM to categorize brain cancers by highlighting pertinent characteristics and reducing interference. The model is trained and evaluated on a dataset of 7023 brain MRI images collected from three publicly available sources, Figshare, SARTAJ, and Br35H, comprising four classes: glioma, meningioma, pituitary tumor, and no tumor. The CBAM–SMK model achieved superior performance compared to other deep learning methods such as conventional CNNs and CNNs enhanced with Sparsity and Multiresolution (SM), with an accuracy of 97.0%, a recall of 96.73%, and a precision of 96.61% on the identical dataset. The fusion of CBAM and SMK successfully captured spatial context and enhanced feature representation. This enables physicians to use it in brain classification software platforms, improving clinical decision-making and the classification of brain tumors.

1. Introduction

The accuracy of diagnostic methods is critical in the fight against brain tumors, a dangerous foe in the field of oncology [1]. Deep learning (DL) has revolutionized medical imaging by providing advanced tools to analyze complex patterns in radiological scans [2]. Convolutional neural networks (CNN) are particularly notable for their ability to effectively learn hierarchical data representations, which is essential to accurately classify brain tumors [3,4].

The process of moving from capturing images to achieving precise diagnosis is filled with difficulties [5]. Traditional CNN architectures frequently struggle with the challenge of identifying relevant features within the intricate nature of brain images [6]. This shortcoming has the potential to undermine the accuracy of diagnostic results. In addition, the complex computing requirements needed to process high-resolution medical images can make it difficult for healthcare institutions with limited resources to obtain these advanced models. This can further exacerbate the disparity in healthcare equity.

Precise medical diagnosis remains essential to develop effective treatments, but established diagnostic methods involve extended expert intervention, generating conflicting results and delaying therapy decisions. Healthcare professionals increasingly rely on artificial intelligence technology, specifically deep learning algorithms, to improve medical image diagnosis through enhanced accuracy and efficiency [7]. Deep learning attention mechanisms have demonstrated success in highlighting significant features within complex datasets, particularly in medical image classification operations [8].

However, despite these advancements, CNN-based models still face key challenges in medical imaging, such as difficulty focusing on subtle tumor boundaries, handling multi-scale variations, and reducing misclassifications under noise or class imbalance. To our knowledge, no prior work has explored the integration of the Convolutional Block Attention Module (CBAM) with separable multi-resolution kernels (SMK) specifically for brain tumor classification. This unexplored combination

∗ Corresponding author.
E-mail address: binishmc@[Link] (Binish M.C.).

[Link]
Received 17 June 2024; Received in revised form 13 July 2025; Accepted 2 August 2025
Available online 13 August 2025
1746-8094/© 2025 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

presents a novel opportunity to improve the extraction of discriminative features from MRI scans.

The primary aim of this study is to develop and evaluate a CBAM–SMK model that enhances both the diagnostic accuracy and computational efficiency of brain tumor classification systems. The following research questions are addressed in this study:

• What mechanisms enable attention to boost CNNs' capability for extracting features during brain tumor classification?
• Does the proposed CBAM–SMK model lead to improved performance metrics compared to traditional deep learning classification methods, particularly for precision, recall, and accuracy?

This study aims to establish an advanced deep learning platform that enhances the diagnostic reliability, precision, and accuracy of brain tumor diagnosis. The main contributions of this study are as follows:

• The channel and spatial attention abilities of CBAM–SMK unite with multi-resolution kernels to enhance both feature representation and classification results.
• Testing confirms that CBAM–SMK delivers better performance than traditional network models, achieving an accuracy of 97.0%, a recall of 96.73%, and a precision of 96.61%.
• The proposed computational design enables clinical applications, including those in limited-resource settings.
• This research integrates separable multi-resolution kernels with CBAM, an untried combination in brain tumor classification tasks.

The Convolution Block Attention Module with separable multiresolution kernels (CBAM–SMK) architecture is an innovative technique developed to overcome the challenges outlined above. This model aims to improve the emphasis on important image features by combining the Convolution Block Attention Module (CBAM) with separable multiresolution kernels, while also capturing a broader range of details at different scales. The goal is two-pronged: to enhance the model's ability to diagnose brain tumors and to optimize computational efficiency, thus making advanced diagnostic capabilities more widely accessible. The CBAM component enhances the model's focus by iteratively applying channel and spatial attention to identify important features. The separable multiresolution kernels provide the model with a simultaneous and thorough understanding of the intricate details of the image, enriching the feature set with detailed and diverse representations.

The CBAM–SMK model is a groundbreaking development in brain tumor diagnostics that aims to raise the standards of accuracy and efficiency. This study thoroughly assesses and compares the model's capabilities, possible clinical applications, and its overall impact on the field of medical diagnostics, using rigorous examination and existing standards.

This paper is organized as follows: Section 2 reviews related research, highlighting recent advances and setting the framework for the proposed methodology. Section 3 discusses the CBAM–SMK architecture and its design rationale. The experimental setup, data preparation, model training, and assessment measures are in Section 4. Section 5 concludes with an analysis of the study's implications for future research and clinical practice, opening the way for advances in medical imaging and diagnostics.

2. Related works

Deep learning and machine learning are the primary methods used to classify brain tumors [9]. Machine learning studies have utilized KNN [10], SVM [11], decision trees [12], and evolutionary algorithms [13]. To improve the identification of brain tumors, Hussain et al. [14] proposed a multi-modal approach for extracting features and applying machine learning methods. Multiple features were derived from a database of brain tumor imaging data, and machine learning algorithms including Support Vector Machines, Decision Trees, and Naïve Bayes were employed to detect tumors. The performance was assessed using multiple metrics, and Naïve Bayes demonstrated the maximum accuracy in terms of entropy, morphological, SIFT, and texture characteristics. Cherukuri et al. [15] proposed a multi-level attention network that utilizes Xception as the main model. This network incorporates spatial and cross-channel attention mechanisms and was applied to an MRI dataset.

CNNs with attention mechanisms have produced remarkable results in the field of image classification. To enhance classification results, it is crucial to possess a comprehensive understanding of CNNs and attention mechanisms. This section discusses common CNNs and the main attention methods used in image categorization.

2.1. Convolutional neural networks for brain tumor classification

CNNs are extensively employed for the purpose of image categorization [16]. Residual networks have been widely recognized for their exceptional performance in image categorization [17], as evidenced by previous examples. Moreover, broader CNNs provide the capability to extract supplementary data, enhancing the accuracy of classification outcomes [18]. Consequently, a residual architecture is employed to expand the scope and collect reliable data for image recognition [19]. This may be illustrated in the fusion of ResNet with a convolutional branch consisting of 3 × 3 and 1 × 1 convolutions, which is employed to extract significant information for image classification [20]. ResNeXt employed a homogeneous multiple-branch architecture to effectively represent the categorized image [21]. Furthermore, including residual networks as components in a CNN has the potential to improve the classifier's capacity to generalize [22]. Utilizing multi-scale and residual blocks proved to be an effective method for combining different semantic information in image categorization [23]. One can enhance the memory capacity of a deep CNN used for image classification by incorporating residual learning techniques to combine hierarchical features [24]. Generative adversarial networks utilized a generative network to produce samples that closely resemble the provided training examples [14]; this was done to address the issue of inadequate samples. Subsequently, a discriminative network was employed by the GAN to assess the veracity of the entire set of training data, with the aim of constructing a resilient classifier [14]. Furthermore, graph convolutional networks demonstrate exceptional efficiency in the identification of multi-label pictures [15].

2.2. Attentional mechanisms for image classification

Deep convolutional neural networks may incur substantial computational overhead due to their reliance on deep or wide architectures [25]. To address this issue, attention strategies have emerged: an attention strategy utilizes the acquired attributes of different network components as weights to influence other components [26], aiming to acquire more significant sequential knowledge [27]. Current attention methodologies can be categorized into two groups, namely channel attention and spatial attention methods [28]. The channel attention technique specifically focuses on the impact of channel characteristics across the whole CNN. In the second strategy, pixels of all channels that occupy the same position are considered as a unified entity, and the significance is acquired by evaluating each pixel at every location; every weight is obtained from a spatial attention map.

Recent studies have introduced advanced deep learning models for brain tumor classification by incorporating attention mechanisms, graph learning, and fusion strategies. Subba and Sunaniya [29] proposed a computationally optimized GoogLeNet-style CNN enhanced


with attention layers to improve feature discrimination while maintaining efficiency. Gürsoy and Kaya [30] developed Brain-GCN-Net, a graph convolutional approach that captures spatial relationships among tumor regions to enhance classification accuracy. Kaya et al. [31] presented Fusion-Brain-Net, a deep fusion model combining multi-level features and optimization layers to improve predictive performance. These approaches emphasize the effectiveness of architectural enhancements in deep learning for brain tumor diagnosis. In this context, the proposed CBAM–SMK extends this line of research by integrating both channel and spatial attention with multi-resolution kernels to deliver high diagnostic accuracy with computational efficiency.

3. Materials and methods

3.1. Dataset

The database combines three separate sources, namely Figshare, SARTAJ, and Br35H, and includes a total of 7023 MRI images of the human brain. The images are classified into four distinct categories: glioma, meningioma, pituitary, and no tumor. The last category, no tumor, is obtained only from the Br35H dataset. To ensure analytical rigor, the dataset was divided into separate subsets for training, validation, and testing. Originally, an 80–20 ratio was used to divide the data into training (5618 images) and testing (1405 images) sets. Afterwards, 10% of the training set was selected to create the validation set, giving 5056 images for training and 562 for validation. Fig. 1 illustrates the distribution of the dataset.

Fig. 1. Brain tumor dataset distribution.

3.2. Pre-processing

Intensity normalization is applied during preprocessing to adjust the pixel values of an image to a standardized range. This can improve the consistency of input data and potentially enhance the convergence qualities of the learning algorithm. Rescaling pixel intensities to the interval [0, 1] is performed as follows:

\[ I_{\text{norm}}(x, y) = \frac{I(x, y) - I_{\min}}{I_{\max} - I_{\min}} \tag{1} \]

where \(I(x, y)\) is the original intensity of the pixel at position \((x, y)\), \(I_{\min}\) and \(I_{\max}\) are the minimum and maximum intensities in the original image, respectively, and \(I_{\text{norm}}(x, y)\) is the normalized intensity. Resizing alters the dimensions of an image to a specified width \(W\) and height \(H\), determined by the input requirements of the CNN. The resizing operation can be represented as a function that maps an image \(I\) of size \(M \times N\) to a new image \(I'\) of size \(W \times H\), where \(\text{resize}(\cdot)\) is an interpolation function such as nearest-neighbor, bilinear, or bicubic interpolation:

\[ I' = \text{resize}(I, W, H) \tag{2} \]

3.2.1. Perona–Malik diffusion (PM diffusion)

Perona–Malik diffusion is a particular type of anisotropic diffusion that effectively decreases noise in images while keeping important features such as edges; the partial differential equation updates the image \(I\) iteratively:

\[ \frac{\partial I}{\partial t} = \nabla \cdot \big( c(\nabla I(x, y, t)) \, \nabla I(x, y, t) \big) \tag{3} \]

where \(t\) denotes the iteration time, \(\nabla\) represents the gradient operator, \(\nabla\cdot\) denotes the divergence operator, and \(c(\nabla I(x, y, t))\) is the diffusion coefficient, a function of the image gradient. The diffusion coefficient is commonly selected as a function that decreases as the magnitude of the gradient increases, in order to maintain the sharpness of edges; \(K\) is a constant that controls the sensitivity to edges:

\[ c(\nabla I) = \exp\left( -\left( \frac{\lVert \nabla I \rVert}{K} \right)^{2} \right) \tag{4} \]

The pre-processed images for the CBAM–SMK architecture are shown in Fig. 2.

3.3. Convolution block attention module with separable multiresolution kernels (CBAM–SMK) architecture

The proposed CBAM–SMK neural network architecture is presented in Fig. 3. The input image is denoted by \(I \in \mathbb{R}^{224 \times 224 \times 3}\). It is first passed through a convolutional layer \(f_{\text{conv}}\) with 32 filters to obtain feature maps \(F_{\text{conv}}\). These feature maps are then subsampled via max pooling \(P_{\max}\) and average pooling \(P_{\text{avg}}\) to yield downsampled representations \(F_{P_{\max}}\) and \(F_{P_{\text{avg}}}\), respectively. Subsequently, a depthwise separable convolution \(f_{\text{dconv}}\) is applied, and the resulting feature maps are subjected to the CBAM, encompassing channel attention \(f_{\text{ca}}\) and spatial attention \(f_{\text{sa}}\). The CBAM output is given by \(F_{\text{CBAM}} = f_{\text{sa}}(f_{\text{ca}}(F_{\text{dconv}} \odot F_{P_{\max}}, F_{\text{dconv}} \odot F_{P_{\text{avg}}}))\), where \(\odot\) signifies the Hadamard product. The CBAM-augmented features \(F_{\text{CBAM}}\) are then flattened by \(f_{\text{flat}}\) into a vector \(V\), which is subsequently mapped via a dense layer \(f_{\text{dense}}\) to the predicted class probabilities \(\hat{Y}\) for the various tumor types, with \(\hat{Y} = f_{\text{dense}}(f_{\text{flat}}(F_{\text{CBAM}}))\). This configuration enables a nuanced fusion of extracted features, accentuating salient patterns while diminishing irrelevant variance, hence facilitating high-precision brain tumor classification.

Convolutional layers are pivotal for the model's capability to abstract features from the input images. The operation performed by a convolutional layer \(l\) is expressed in Eq. (5):

\[ O^{(l)}_{i,j,k} = f\left( \sum_{m,n,p} I^{(l-1)}_{i+m,\,j+n,\,p} \cdot K^{(l)}_{m,n,p,k} + b^{(l)}_{k} \right) \tag{5} \]

where \(O^{(l)}_{i,j,k}\) is the output, \(I^{(l-1)}\) is the input feature map, \(K^{(l)}\) are the convolutional kernels, \(b^{(l)}_{k}\) is the bias, and \(f\) denotes the ReLU activation function. Pooling layers reduce the spatial dimensions of the feature maps, making the model more computationally efficient and invariant to minor input changes. The max pooling operation over a \(P \times P\) region is defined in Eq. (6):

\[ O^{(l)}_{i,j,k} = \max_{0 \le m, n < P} I^{(l-1)}_{P i + m,\, P j + n,\, k} \tag{6} \]

The depthwise convolutional layer applies individual filters to each input channel, thereby reducing computational costs, and is represented in Eq. (7):

\[ O^{(l)}_{i,j,k} = f\left( \sum_{m,n} I^{(l-1)}_{i+m,\,j+n,\,k} \cdot K^{(l)}_{m,n,k} \right) \tag{7} \]

The depthwise separable convolutional layer combines depthwise and pointwise convolutions to further minimize the computational overhead, as in Eq. (8):

\[ O^{(l)}_{i,j,k} = f\left( \sum_{p} \left( \sum_{m,n} I^{(l-1)}_{i+m,\,j+n,\,p} \cdot K^{(l,\text{depth})}_{m,n,p} \right) \cdot K^{(l,\text{point})}_{p,k} \right) \tag{8} \]
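As a concrete illustration of the layer equations above, the depthwise convolution of Eq. (7) and its separable extension in Eq. (8) can be sketched directly in NumPy. The array sizes are illustrative, and the single ReLU after the pointwise step follows Eq. (8); the paper's implementation details beyond the equations are not specified:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def depthwise_conv(I, K):
    """Depthwise convolution (Eq. (7)): one k x k filter per input channel.
    I: (H, W, C) input, K: (k, k, C) filters; 'valid' padding, stride 1."""
    H, W, C = I.shape
    k = K.shape[0]
    out = np.zeros((H - k + 1, W - k + 1, C))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = I[i:i + k, j:j + k, :]           # (k, k, C) window
            out[i, j, :] = np.sum(patch * K, axis=(0, 1))
    return out

def separable_conv(I, K_depth, K_point):
    """Depthwise separable convolution (Eq. (8)): depthwise filtering
    followed by a 1x1 pointwise mixing step, then the activation f."""
    D = depthwise_conv(I, K_depth)                   # (H', W', C)
    return relu(D @ K_point)                         # K_point: (C, C_out)

rng = np.random.default_rng(0)
I = rng.standard_normal((8, 8, 3))
out = separable_conv(I, rng.standard_normal((3, 3, 3)),
                     rng.standard_normal((3, 16)))
print(out.shape)  # (6, 6, 16)
```

The separable form factors one k × k × C × C_out kernel into k × k × C depthwise plus C × C_out pointwise weights, which is the source of the computational savings the section describes.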


Fig. 2. Sample of Brain Images and Preprocessed Images.
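The preprocessing chain of Section 3.2, min–max normalization (Eq. (1)) followed by Perona–Malik diffusion (Eqs. (3)–(4)), can be sketched with an explicit finite-difference scheme. The iteration count, edge-sensitivity constant K, time step, and periodic boundary handling below are assumptions chosen for illustration, not values stated in the paper:

```python
import numpy as np

def normalize(I):
    """Min-max intensity normalization to [0, 1] (Eq. (1))."""
    return (I - I.min()) / (I.max() - I.min())

def perona_malik(I, n_iter=10, K=0.1, dt=0.2):
    """Perona-Malik diffusion (Eqs. (3)-(4)) on a 2-D image.
    Gradients are approximated by differences toward the four cardinal
    neighbours; np.roll gives periodic boundaries for brevity."""
    I = I.astype(float).copy()
    for _ in range(n_iter):
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # edge-stopping coefficient c = exp(-(|grad| / K)^2), Eq. (4)
        cN, cS = np.exp(-(dN / K) ** 2), np.exp(-(dS / K) ** 2)
        cE, cW = np.exp(-(dE / K) ** 2), np.exp(-(dW / K) ** 2)
        # discrete divergence update, Eq. (3)
        I += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

img = normalize(np.random.default_rng(1).random((64, 64)))
smooth = perona_malik(img)
print(round(float(img.std()), 3), round(float(smooth.std()), 3))
```

Because c shrinks where gradients are large, noise in flat regions is averaged away while strong edges diffuse very little, which is the behaviour the section relies on for MRI denoising.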

Fig. 3. The CBAM–SMK model architecture.
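A hedged Keras sketch of the pipeline in Fig. 3 and Algorithm 1 is given below. The 32-filter 7 × 7 initial convolution, 2 × 2 max pooling, separable convolution, CBAM refinement, and softmax head follow Algorithm 1; the CBAM reduction ratio (8), the separable stage's 64-filter width, and the 7 × 7 spatial-attention kernel are assumptions not stated in the text:

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, ratio=8):
    """M_c of Eq. (9): shared MLP over avg/max-pooled channel descriptors."""
    ch = x.shape[-1]
    mlp = tf.keras.Sequential(
        [layers.Dense(ch // ratio, activation="relu"), layers.Dense(ch)])
    gate = layers.Activation("sigmoid")(
        layers.Add()([mlp(layers.GlobalAveragePooling2D()(x)),
                      mlp(layers.GlobalMaxPooling2D()(x))]))
    gate = layers.Reshape((1, 1, ch))(gate)
    return layers.Lambda(lambda t: t[0] * t[1])([x, gate])  # F' = M_c(F) (x) F

def spatial_attention(x, kernel=7):
    """M_s of Eq. (10): conv over concatenated channel-wise mean/max maps."""
    avg = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    mx = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    gate = layers.Conv2D(1, kernel, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg, mx]))
    return layers.Lambda(lambda t: t[0] * t[1])([x, gate])  # F'' = M_s(F') (x) F'

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 7, padding="same", activation="relu")(inputs)      # F0
x = layers.MaxPooling2D(2)(x)                                            # F1
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)  # F2 (Eqs. (7)-(8))
x = layers.MaxPooling2D(2)(x)                                            # F3
x = spatial_attention(channel_attention(x))                              # CBAM (Eqs. (9)-(10))
outputs = layers.Dense(4, activation="softmax")(layers.Flatten()(x))     # Eq. (12)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 4)
```

The four softmax outputs correspond to the glioma, meningioma, no-tumor, and pituitary classes; a model of this shape would be trained with categorical cross-entropy in the usual way.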


The CBAM attention mechanism sequentially applies channel and spatial attention to refine the feature maps, as represented in Eqs. (9) and (10):

\[ F' = M_{c}(F) \otimes F \tag{9} \]

\[ F'' = M_{s}(F') \otimes F' \tag{10} \]

Batch normalization, applied in each layer of the proposed CBAM–SMK model, serves to stabilize the learning process and enhance convergence rates. This operation, delineated in Eq. (11), normalizes the activations by scaling and shifting based on the learned parameters \(\gamma^{(l)}_{k}\) and \(\beta^{(l)}_{k}\), utilizing the mean \(\mu^{(l)}_{k}\) and variance \(\sigma^{(l)2}_{k}\) of the inputs, with \(\epsilon\) ensuring numerical stability:

\[ O^{(l)}_{i,j,k} = \gamma^{(l)}_{k} \left( \frac{I^{(l-1)}_{i,j,k} - \mu^{(l)}_{k}}{\sqrt{\sigma^{(l)2}_{k} + \epsilon}} \right) + \beta^{(l)}_{k} \tag{11} \]

The dense layers in the CBAM–SMK architecture, culminating in the output layer, leverage the softmax function to map the logits to a probability distribution over 4 distinct classes. This is crucial for classification tasks, where the probability \(P(c_{k} \mid I)\) of the network predicting class \(k\) for a given input image \(I\) is calculated using the softmax function, as shown in Eq. (12); the CBAM–SMK procedure is detailed in Algorithm 1.

\[ P(c_{k} \mid I) = \frac{e^{O^{(l)}_{k}}}{\sum_{j} e^{O^{(l)}_{j}}} \tag{12} \]

Algorithm 1: CBAM–SMK model for brain tumor MRI classification
Input: Brain MRI image \(I \in \mathbb{R}^{224 \times 224 \times 3}\); Output: predicted tumor class \(Y\)
1. Preprocessing:
   Normalize intensities using Eq. (1): \(I_{\text{norm}} = (I - I_{\min}) / (I_{\max} - I_{\min})\)
   Apply Perona–Malik diffusion to reduce noise while preserving edges (Eqs. (3)–(4))
2. Feature extraction:
   Initial convolution: \(F_0 = \text{Conv2D}(I_{\text{norm}}, 32 \text{ filters}, 7 \times 7)\)
   Downsample: \(F_1 = \text{MaxPooling}(F_0, 2 \times 2)\)
   Depthwise separable convolution (Eqs. (7)–(8)): \(F_2 = \text{SeparableConv}(F_1)\)
   Downsample: \(F_3 = \text{MaxPooling}(F_2, 2 \times 2)\)
3. Attention mechanism:
   Channel attention (Eq. (9)): \(F_{ca} = M_c(F_3) \otimes F_3\)
   Spatial attention (Eq. (10)): \(F_{sa} = M_s(F_{ca}) \otimes F_{ca}\)
4. Classification:
   Flatten the feature map: \(V = \text{Flatten}(F_{sa})\)
   Fully connected layers with softmax activation (Eq. (12)): \(Y = \text{Softmax}(\text{Dense}(V))\)

The step-by-step progress of the CBAM–SMK methodology implementation is shown in Fig. 4. Data preparation commences with intensity normalization followed by Perona–Malik diffusion to produce optimal data quality for the input images. After preprocessing, the implementation performs feature extraction through convolutional layers and separable multi-resolution kernels to identify detailed patterns. Feature maps are refined by the two built-in attention mechanisms, which operate at both the channel and spatial levels within the CBAM module. Finally, dense layers and a softmax classifier output probabilities for the four classes: glioma, meningioma, pituitary, and no tumor. This systematic framework creates an effective system for precise brain tumor identification.

The novelty of the proposed CBAM–SMK architecture lies in the strategic integration of CBAM with separable multi-resolution kernels, an approach not previously applied to brain tumor classification tasks. While CBAM enables adaptive focus through both channel and spatial attention mechanisms, the use of separable multi-resolution kernels enhances the model's ability to extract discriminative features at multiple spatial scales with reduced computational overhead. This dual-level enhancement improves feature sensitivity to subtle tumor structures, especially in complex and heterogeneous MRI data. Unlike existing models that apply attention or multiscale convolutions independently, CBAM–SMK synergistically combines both components in a unified pipeline.

4. Experimental results

This section presents the quantitative evaluation of the CBAM–SMK model on brain MRI datasets, along with baseline comparisons against traditional CNN and CNN with SM architectures.

4.1. Quantitative metrics for CBAM–SMK model in brain tumor detection

The CBAM–SMK model's capability in detecting brain tumors is quantitatively evaluated through the following metrics. The model's predictive outcomes are categorized into four distinct groups: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).

• Confusion Matrix: A foundational tool that encapsulates the model's predictions, facilitating a deeper understanding of its performance through the categorization of predictions into TP, TN, FP, and FN.
• Accuracy: This metric gauges the proportion of correctly identified predictions, encompassing both positive and negative cases, thus offering a snapshot of the model's overall efficacy.

\[ \text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \tag{13} \]

• Precision: Precision quantifies the model's accuracy in predicting positive cases as such, thereby assessing its ability to minimize false alarms in brain tumor classification.

\[ \text{Precision} = \frac{TP}{TP + FP} \tag{14} \]

• Recall: Also known as sensitivity, recall measures the model's capability to correctly identify all actual positive cases, a critical aspect in medical diagnostics where missing a positive case can have dire consequences.

\[ \text{Recall} = \frac{TP}{TP + FN} \tag{15} \]

• F1-Score: The F1-score harmonizes precision and recall into a single metric, offering a balanced view of the model's ability to precisely identify positive cases while ensuring minimal omission of actual positives.

\[ F1\text{-}score = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{16} \]

• Matthews Correlation Coefficient (MCC): MCC provides a holistic measure of the model's performance, taking into account all categories of the confusion matrix, thus serving as a robust indicator of the model's predictive strength.

\[ MCC = \frac{(TP \times TN) - (FP \times FN)}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \tag{17} \]

• Classification Success Index (CSI): CSI specifically focuses on the model's success in correctly predicting positive cases, offering a targeted view of its diagnostic accuracy in brain tumor classification.

\[ CSI = \frac{TP}{TP + FP + FN} \tag{18} \]

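The metric definitions of Eqs. (13)–(18) can be computed from per-class (one-vs-rest) confusion-matrix counts; the counts below are illustrative, not the paper's results:

```python
import math

def metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics of Eqs. (13)-(18) from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),          # Eq. (13)
        "precision": precision,                               # Eq. (14)
        "recall": recall,                                     # Eq. (15)
        "f1": 2 * precision * recall / (precision + recall),  # Eq. (16)
        "mcc": (tp * tn - fp * fn) / math.sqrt(               # Eq. (17)
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "csi": tp / (tp + fp + fn),                           # Eq. (18)
    }

m = metrics(tp=290, tn=1050, fp=20, fn=15)
print({k: round(v, 4) for k, v in m.items()})
```

For a four-class problem such as this one, each class would be scored in turn against the rest and the per-class results averaged.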

Fig. 4. Block Diagram of CBAM–SMK Methodology.

Fig. 5. Learning curves of CNN models for brain tumor classification. The top row depicts the accuracy of the models: (a) Standard CNN, (b) CNN with SM, and (c) CBAM–SMK.
The bottom row depicts the loss of the models: (d) Standard CNN, (e) CNN with SM, and (f) CBAM–SMK.

These metrics, adorned with the specified symbols for the predictive outcomes, constitute a comprehensive framework for evaluating the model's adeptness in classifying brain tumors, ensuring a thorough assessment of its diagnostic capabilities. The experimental design stages focused on addressing specific problems that affect brain tumor classification performance. The baseline CNN experiments established a performance groundwork while showing that important tumor characteristics remain difficult to detect with this method. The addition of Sparsity and Multiresolution (SM) features improved architectural performance, proving beneficial for extracting features that capture minor tumor characteristics. Through the CBAM–SMK architecture, dynamic feature-map refinement techniques were implemented, combining attention-based mechanisms with multi-resolution kernels to enhance vital pattern detection alongside noise reduction. The development sequence revealed increased accuracy and better performance from CBAM–SMK, with 97.0% accuracy alongside 96.61% precision and 96.73% recall. These metrics, equipped with unique symbols for each classification outcome, provide a robust framework for assessing the effectiveness of the CBAM–SMK model in brain tumor detection.

Performance analysis

Fig. 5 depicts the performance of CNNs with varying levels of architectural complexity in classifying brain tumors. The plots display the accuracy and loss metrics of the models during the training epochs, illustrating their capacity to comprehend and apply knowledge from complex neuroradiological images. The accuracy curve of the conventional CNN architecture, as depicted in Fig. 5(a), exhibits a steady increase, reaching a value of 99.00%. As depicted in Fig. 5(d), the loss curve gradually decreases to a minimal value, indicating a strong fit to the training data without overfitting. This is also evident in the closely matched validation loss.

The CNN with SM (Sparsity and Multiresolution) model demonstrates in Fig. 5(b) that the incorporation of sparsity and multiresolution techniques leads to a significant enhancement in CNN test accuracy, achieving a score of 99.50%. The loss curve depicted in Fig. 5(e) exhibits a minimal disparity between the training and validation loss, suggesting improved feature extraction across several scales, which is beneficial for detecting the delicate symptoms of brain tumors.

Fig. 5(c) depicts the model's test accuracy of 99.90% following the integration of CBAM–SMK. This rise demonstrates the efficacy of attention


Table 1
Performance analysis of proposed models for brain tumor classification.

Model         Accuracy (%)  CSI (%)  Precision (%)  Recall (%)  F1-score (%)  MCC

Old Figshare Data
CNN           95.00         91.40    95.57          95.44       95.51         0.94
CNN with SM   96.00         92.71    96.07          96.36       96.22         0.95
CBAM–SMK      97.00         93.55    96.61          96.73       96.67         0.96

New Figshare Data
CNN           95.28         95.32    95.12          95.32       95.21         0.94
CNN with SM   95.80         95.83    95.67          95.83       95.73         0.94
CBAM–SMK      96.19         96.18    96.10          96.18       96.14         0.95
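As a quick consistency check, the F1-score column of Table 1 follows from the precision and recall columns via Eq. (16), e.g. for CBAM–SMK on the old Figshare data:

```python
# F1 from Table 1's precision/recall columns via Eq. (16),
# using the CBAM-SMK row of the old Figshare data.
p, r = 96.61, 96.73
f1 = 2 * p * r / (p + r)
print(round(f1, 2))  # 96.67, matching the table
```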

Fig. 6. Graphical representation of the performance analysis for CNN models in brain tumor classification.

mechanisms in neural networks for directing attention to tumor characteristics in magnetic resonance imaging. The loss curve depicted in Fig. 5(f) demonstrates that the CBAM technique enhances the diagnostic precision of the model by reducing the disparity between training and validation losses.

Table 1 showcases the performance metrics for the different models applied in our proposed system. These metrics include Accuracy, Precision, Recall, F1-score, Matthews Correlation Coefficient (MCC), and the Classification Success Index (CSI).

The proposed architecture applies three convolutional neural network models to a brain tumor dataset to optimize the classification process. Table 1 and Fig. 6 present the performance evaluation of the implemented CNN architectures. The CNN, the CNN with sparsity and multiresolution (SM), and the CBAM–SMK achieved accuracy rates of 95.00%, 96.00%, and 97.00%, respectively. The precision rates for these models were 95.57%, 96.07%, and 96.61%, while the recall rates were 95.44%, 96.36%, and 96.73%. The F1-scores for the CNN, CNN with SM, and CBAM–SMK were 95.51%, 96.22%, and 96.67%, respectively. Of all the models, the CBAM–SMK is the most effective, attaining the highest accuracy (97.00%), precision (96.61%), recall (96.73%), and F1-score (96.67%). These results highlight the efficacy of the attention mechanisms implemented in the CBAM–SMK in improving the model's ability to accurately classify brain tumors. Fig. 6 also displays the Classification Success Index (CSI), with values of 91.40%, 92.71%, and 93.55%. According to Fig. 7, the MCC values for the basic CNN, the CNN with SM, and the CBAM–SMK are 0.94, 0.95, and 0.96, respectively. The CBAM–SMK surpasses the other models by achieving the highest CSI, indicating its improved ability to predict true positives while accounting for all types of prediction errors. It also records the highest MCC, demonstrating a strong correlation between the model's predictions and the actual classifications. Together, these indicators demonstrate the significant influence of CBAM on the model's capacity to accurately and reliably classify brain tumors.

This study conducts a comparative examination of the three CNN models through their confusion matrices, presented in Fig. 8. The analysis reveals how effectively each model classifies the four types of brain tumors. The initial CNN model attains a notable level of accuracy in identifying glioma and pituitary cases, correctly identifying 290 glioma cases and 347 pituitary instances; however, it misclassifies 17 glioma cases as meningiomas. The second model, which combines sparsity and multiresolution, demonstrates improved performance, especially in the categorization of meningiomas: it achieves 305 true positives and reduces the misclassification of gliomas as meningiomas to 16. The CBAM–SMK, the most sophisticated model in the series, exhibits exceptional discrimination capabilities, achieving 295 true positives for glioma, 305 for meningioma, 413 for no tumor, and 347 for pituitary tumors, while further reducing glioma misclassifications to only 14 cases. The matrices demonstrate the step-by-step improvement in tumor classification accuracy obtained by adding the SM and CBAM–SMK techniques, underscoring the significant influence of these techniques on the reliability of diagnosis in neural network applications.

Fig. 10 displays the Receiver Operating Characteristic (ROC) and Precision–Recall (P–R) curves for the three CNN designs. These curves showcase the effect of the proposed approaches on brain tumor classification. The conventional CNN serves as a reference point,

Binish M.C. et al. Biomedical Signal Processing and Control 112 (2026) 108483

Fig. 7. Graphical representation of MCC for CNN models.

Fig. 8. Confusion Matrices (a) CNN (b) CNN with SM (c) CBAM–SMK.
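Confusion matrices such as those in Fig. 8 feed directly into the per-class recalls and the multiclass MCC reported in Fig. 7. The sketch below uses a hypothetical 4x4 matrix: its diagonal and the glioma-to-meningioma cell follow the CBAM–SMK counts quoted in the text (295/305/413/347 true positives, 14 misclassified gliomas), while the remaining off-diagonal cells are invented for illustration:

```python
import math

# Rows = true class, columns = predicted class.
# Class order: glioma, meningioma, no tumor, pituitary.
cm = [
    [295, 14, 0, 2],   # 295 glioma TPs, 14 gliomas -> meningioma (from text)
    [10, 305, 3, 4],   # off-diagonal cells on this and later rows are illustrative
    [0, 2, 413, 0],
    [3, 0, 0, 347],
]

def recalls(cm):
    """Per-class recall: diagonal count over the true-class row total."""
    return [cm[k][k] / sum(cm[k]) for k in range(len(cm))]

def multiclass_mcc(cm):
    """Generalized (multiclass) Matthews correlation coefficient."""
    n = len(cm)
    s = sum(sum(row) for row in cm)              # total samples
    c = sum(cm[k][k] for k in range(n))          # correctly classified
    t = [sum(cm[k]) for k in range(n)]           # true counts per class
    p = [sum(cm[i][k] for i in range(n)) for k in range(n)]  # predicted counts
    num = c * s - sum(pk * tk for pk, tk in zip(p, t))
    den = math.sqrt((s * s - sum(pk * pk for pk in p)) *
                    (s * s - sum(tk * tk for tk in t)))
    return num / den
```

With this illustrative matrix the MCC lands in the mid-0.9 range, consistent with the near-diagonal structure the text describes for CBAM–SMK.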

and its performance is significantly enhanced by the SM-enhanced CNN, as evidenced by the advancement of the curve. The CNN improved with CBAM demonstrates the highest performance, as its curves approach the optimal top-left corner in ROC space and the top-right corner in P–R space. This indicates higher accuracy and dependability.

To enhance the evaluation of the proposed models, an additional analysis was conducted using a newly incorporated Figshare dataset, consisting of 300 glioma, 306 meningioma, 300 pituitary tumor, and 405 no tumor samples. This dataset provides a more comprehensive and diverse representation of brain tumor cases. The inclusion of updated confusion matrices in Fig. 9 and performance metrics in Table 1 further substantiates the robustness and generalizability of the proposed


Fig. 9. Confusion Matrices for CNN, CNN–SM, and CBAM–SMK Models for new Figshare Data.

Fig. 10. ROC and P–R curves of CNN models for brain tumor classification. The top row depicts the ROC curves of the models: (a) Standard CNN, (b) CNN with SM, and (c)
CBAM–SMK. The bottom row depicts the P–R curves of the models: (d) Standard CNN, (e) CNN with SM, and (f) CBAM–SMK.
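The ROC curves in Fig. 10 are built by sweeping a decision threshold over per-class (one-vs-rest) scores and plotting true-positive rate against false-positive rate; the area under the curve summarizes ranking quality. A minimal sketch with hypothetical scores (the simplification here assumes no tied scores):

```python
def roc_curve(scores, labels):
    """Sweep descending score thresholds; return (FPR, TPR) point lists."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    fpr, tpr, tp, fp = [0.0], [0.0], 0, 0
    for _, y in pairs:
        if y == 1:
            tp += 1
        else:
            fp += 1
        fpr.append(fp / neg)
        tpr.append(tp / pos)
    return fpr, tpr

def auc(fpr, tpr):
    # Trapezoidal area under the ROC curve
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))

# Hypothetical one-vs-rest scores for a single tumor class
scores = [0.95, 0.90, 0.80, 0.45, 0.30, 0.10]
labels = [1, 1, 1, 0, 1, 0]
f, t = roc_curve(scores, labels)
```

A curve that hugs the top-left corner, as described for CBAM–SMK, corresponds to an AUC approaching 1.0.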

models. Specifically, the CBAM–SMK model demonstrated superior performance compared to the CNN and CNN–SM across both the original and new datasets, consistently achieving high accuracy. These results underscore the effectiveness of CBAM–SMK in advancing brain tumor classification.

Following the overall performance analysis, we present the detailed class-wise performance metrics for the CBAM–SMK, CNN, and CNN with SM models in Table 2. This table shows how each model performs on the four tumor classes: Glioma, Meningioma, Pituitary, and No Tumor. The results provide insight into each model's ability to detect specific tumor types.

We then compare the performance of the CBAM–SMK model with several baseline architectures, including ResNet-50, AlexNet, and MobileNet, using the same dataset and evaluation metrics. The comparison results are summarized in Table 3.

5. Discussion

The results indicate that CBAM–SMK significantly improves classification accuracy, likely due to the synergy between spatial and channel attention and multi-resolution feature representation.

Table 3 presents a comparative performance analysis of the CBAM–SMK model against three well-established deep learning models, ResNet-50, AlexNet, and MobileNet, on two MRI datasets: Old Figshare Data and New Figshare Data. The models were evaluated using standard classification metrics, including accuracy, CSI, precision, recall, F1-score, and MCC. These metrics provide a comprehensive evaluation of model performance in terms of both the ability to correctly classify brain tumors (accuracy, precision, recall) and the overall balance between positive and negative classifications (MCC, CSI).

The CBAM–SMK model demonstrates superior performance across all evaluation metrics, with accuracy reaching 97.1% and an MCC of 0.95, outperforming the baseline models, including ResNet-50 (94.4%) and MobileNet (93.5%), on both datasets. The enhanced performance of CBAM–SMK is attributed to the integration of CBAM and SMK, which enable more precise feature extraction and greater emphasis on tumor-related regions in MRI scans. This yields improved classification, especially in identifying subtle features, and offers a more computationally efficient solution than traditional CNN-based architectures.

To validate the performance improvements observed in our model, we conducted a statistical significance analysis. We performed paired t-tests on key performance metrics, including accuracy, precision, recall,


Table 2
Class-wise performance metrics for CBAM–SMK, CNN, and CNN with SM models.

Metric         Model         Glioma   Meningioma   Pituitary   No Tumor
Accuracy (%)   CBAM–SMK      98.5     97.2         96.7        97.8
               CNN           95.0     94.0         94.2        95.2
               CNN with SM   96.0     94.5         94.1        96.3
Precision (%)  CBAM–SMK      98.0     96.5         97.2        98.1
               CNN           94.3     93.0         93.5        94.6
               CNN with SM   95.8     94.7         93.8        95.8
Recall (%)     CBAM–SMK      97.8     97.0         96.5        97.9
               CNN           95.5     94.5         94.1        95.3
               CNN with SM   96.2     95.0         94.2        96.1
F1-score (%)   CBAM–SMK      97.9     96.7         96.8        98.0
               CNN           94.9     93.7         93.8        94.9
               CNN with SM   95.9     94.8         94.0        96.0
MCC            CBAM–SMK      0.95     0.92         0.91        0.93
               CNN           0.91     0.86         0.88        0.90
               CNN with SM   0.92     0.87         0.89        0.91
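As a quick consistency check on Table 2, each F1-score should equal the harmonic mean of the corresponding precision and recall. The CBAM–SMK rows satisfy this to one decimal place:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) for CBAM–SMK, taken from Table 2
cbam_smk = {
    "Glioma": (98.0, 97.8, 97.9),
    "Meningioma": (96.5, 97.0, 96.7),
    "Pituitary": (97.2, 96.5, 96.8),
    "No Tumor": (98.1, 97.9, 98.0),
}
for cls, (p, r, reported) in cbam_smk.items():
    assert round(f1(p, r), 1) == reported, cls
```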

Table 3
Performance comparison with baseline models.

Model       Accuracy (%)   CSI (%)   Precision (%)   Recall (%)   F1-score (%)   MCC

Old Figshare Data
ResNet-50   94.1           91.0      93.2            94.2         93.5           0.90
AlexNet     90.8           90.2      90.5            91.8         91.0           0.84
MobileNet   93.2           92.5      92.3            93.0         93.1           0.88
CBAM–SMK    97.0           93.5      96.6            96.7         96.7           0.96

New Figshare Data
ResNet-50   94.4           91.2      93.0            94.3         94.2           0.91
AlexNet     91.3           90.6      91.3            91.7         91.5           0.85
MobileNet   93.5           93.0      93.1            93.9         93.4           0.89
CBAM–SMK    96.1           96.1      96.1            96.1         96.1           0.95
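The edge that Table 3 attributes to CBAM comes from two sequential gating stages: channel attention (global average and max pooling fed through a shared MLP) followed by spatial attention (channel-wise average and max maps convolved into a single gate). A pure-Python structural sketch with toy sizes and random weights standing in for learned parameters; this is not the authors' implementation:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def channel_attention(x, w1, w2):
    """Gate each channel of x (C x H x W nested lists) via avg- and
    max-pooled descriptors passed through a shared two-layer MLP."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    avg = [sum(v for row in ch for v in row) / (H * W) for ch in x]
    mx = [max(v for row in ch for v in row) for ch in x]

    def mlp(vec):  # w1: hidden x C, w2: C x hidden
        hidden = [max(0.0, sum(w1[i][c] * vec[c] for c in range(C)))
                  for i in range(len(w1))]
        return [sum(w2[c][i] * hidden[i] for i in range(len(hidden)))
                for c in range(C)]

    gate = [sigmoid(a + b) for a, b in zip(mlp(avg), mlp(mx))]
    return [[[gate[c] * x[c][h][w] for w in range(W)]
             for h in range(H)] for c in range(C)]

def spatial_attention(x, kernel):
    """Gate each location using channel-wise avg and max maps
    convolved with one 2 x k x k kernel (zero padding)."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    k = len(kernel[0])
    pad = k // 2
    maps = [
        [[sum(x[c][h][w] for c in range(C)) / C for w in range(W)] for h in range(H)],
        [[max(x[c][h][w] for c in range(C)) for w in range(W)] for h in range(H)],
    ]
    gate = [[0.0] * W for _ in range(H)]
    for h in range(H):
        for w in range(W):
            s = 0.0
            for m in range(2):
                for i in range(k):
                    for j in range(k):
                        hh, ww = h + i - pad, w + j - pad
                        if 0 <= hh < H and 0 <= ww < W:
                            s += kernel[m][i][j] * maps[m][hh][ww]
            gate[h][w] = sigmoid(s)
    return [[[gate[h][w] * x[c][h][w] for w in range(W)]
             for h in range(H)] for c in range(C)]

random.seed(0)
C, H, W, HID, K = 4, 6, 6, 2, 3
x = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(C)]
w1 = [[random.uniform(-1, 1) for _ in range(C)] for _ in range(HID)]
w2 = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(C)]
kern = [[[random.uniform(-1, 1) for _ in range(K)] for _ in range(K)] for _ in range(2)]
y = spatial_attention(channel_attention(x, w1, w2), kern)  # channel gate, then spatial gate
```

Both gates lie in (0, 1), so the module rescales rather than replaces features, which is why it can emphasize tumor-relevant regions without disturbing the backbone's representation.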

Table 4
Statistical significance analysis of CBAM–SMK performance vs. baseline models.

Model comparison         Accuracy p-value   Precision p-value   Recall p-value   F1-score p-value
ResNet-50 vs. CBAM–SMK   0.002              0.005               0.003            0.004
AlexNet vs. CBAM–SMK     0.001              0.004               0.002            0.003
MobileNet vs. CBAM–SMK   0.003              0.006               0.004            0.005
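The paired t-tests behind Table 4 compare two models' scores on the same evaluation splits. A minimal sketch of the t-statistic; the fold accuracies below are hypothetical, and in practice a routine such as scipy.stats.ttest_rel returns both the statistic and the p-value:

```python
import math

def paired_t_statistic(a, b):
    """t = mean(d) / (std(d) / sqrt(n)) for paired differences d = a - b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical 5-fold accuracies for CBAM–SMK vs. ResNet-50
cbam = [96.8, 97.1, 96.9, 97.3, 96.9]
resnet = [94.2, 94.5, 94.0, 94.6, 94.3]
t = paired_t_statistic(cbam, resnet)
# With n - 1 = 4 degrees of freedom, |t| > 2.776 rejects the null at p < 0.05
```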

Table 5
Comparative analysis of brain tumor classification with state-of-the-art models.
Author Architecture Accuracy (%)
Abiwinanda et al. [32] CNN 84.19
Afshar et al. [33] CapsNet 86.56
Paul et al. [34] CNN 91.43
Das et al. [35] CNN 94.39
Ayadi et al. [4] CNN 94.74
Swati et al. [36] VGG19 94.82
Saxena et al. [37] ResNet-50 95.0
Senan et al. [9] SVM 95.1
Khan et al. [38] VGG16 96.0
Badža and Barjaktarović [39] CNN 96.56
Kaya et al. [31] Fusion-Brain-Net 97.56
Proposed CBAM–SMK 97.00

and F1-score, comparing CBAM–SMK with the baseline models: ResNet-50, AlexNet, and MobileNet. The p-values obtained from these tests are presented in Table 4. As shown, all metrics exhibit statistically significant differences (p-values below 0.05), confirming that the improvements offered by CBAM–SMK are not due to random chance but are statistically meaningful.

A variety of CNN architectures have been explored in the evolving field of brain tumor classification, with accuracy levels varying across models. As shown in Table 5, earlier works such as Abiwinanda et al. [32] reported a conventional CNN achieving 84.19% accuracy, laying the groundwork for subsequent advancements. Afshar et al. [33] introduced CapsNet, which improved accuracy to 86.56%, while Paul et al. [34] and Das et al. [35] further refined CNN models, achieving 91.43% and 94.39%, respectively. Swati et al. [36] applied VGG19, attaining 94.82% and highlighting the promise of pre-trained networks in medical imaging. More recently, Badža and Barjaktarović [39] achieved 96.56% with their CNN, and the Fusion-Brain-Net model by Kaya et al. [31] demonstrated a slightly higher accuracy of 97.56%, showcasing the improvements possible with advanced fusion techniques. The CBAM–SMK model presented in this study outperforms most of these models, achieving 97.00% accuracy. This highlights the importance of integrating advanced attention mechanisms with multi-resolution feature extraction to improve diagnostic precision in brain tumor classification.

Merits of the Proposed Model:

• CBAM–SMK combines channel and spatial attention to emphasize tumor-relevant regions.
• The use of separable multi-resolution kernels enables the model to capture fine-to-coarse spatial features.
• Depthwise separable convolutions reduce the computational burden without compromising accuracy.
• The model generalizes well across different datasets due to enhanced feature extraction.

Limitations:

• The model relies heavily on high-quality, labeled MRI data and may be sensitive to class imbalance.
• The performance was evaluated primarily on static MRI datasets; real-time or 3D volumetric data were not explored.
• Hyperparameter tuning and Perona–Malik preprocessing steps may add to pipeline complexity.

Future work can address these limitations by extending the architecture to 3D-CNNs, integrating semi-supervised learning for label-scarce environments, and validating the model across multiple institutions to ensure clinical robustness.

6. Conclusion

This research highlights the benefits of combining CBAM with CNNs for accurately categorizing brain tumors. The CBAM–SMK model demonstrated superior effectiveness, attaining an accuracy of 97%, a precision of 96.61%, and a recall of 96.73%, outperforming both the standard CNN and the CNN with sparsity and multiresolution (SM). It also raised the CSI to 93.55% and the MCC to 0.96, underscoring the model's strong performance and reliability in clinical applications. CBAM integration enhances feature representation by efficiently capturing spatial context, making it a valuable component of brain tumor classification software that can improve clinical decision-making. With the CBAM–SMK model, health organizations can make brain tumor detection more accurate and efficient. Its spatial and channel attention mechanisms support error reduction and faster decisions even in resource-limited environments, and its real-time capability promotes effective treatment planning in oncology.

Although the performance of CBAM–SMK is excellent, its reliance on high-quality labeled data limits its ability to generalize across different conditions, and its computational requirements pose obstacles in low-resource environments. Future research will aim to improve inference speed and to validate effectiveness across multiple datasets. Further studies should integrate CBAM with RNNs and transformers to improve brain tumor imaging analysis. Testing the CBAM–SMK model on larger and more diverse datasets will provide further evidence of its capacity to adapt and perform well across different populations and imaging methods, and integrating it into real-time diagnostic platforms could improve oncological outcomes and provide radiologists with faster, more accurate diagnostic tools.

CRediT authorship contribution statement

Binish M.C.: Writing – review & editing, Writing – original draft, Validation, Software, Project administration, Methodology, Data curation, Conceptualization. Swarun Raj R.S.: Resources. Vinu Thomas: Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

The authors express sincere gratitude to APJ Abdul Kalam Technological University, Trivandrum, Kerala, India for their invaluable assistance in making this initiative feasible through academic support and the provision of computing resources.

Data availability

We have utilized a publicly available dataset that can be freely downloaded.

References

[1] J.G. Lyon, N. Mokarram, T. Saxena, S.L. Carroll, R.V. Bellamkonda, Engineering challenges for brain tumor immunotherapy, Adv. Drug Deliv. Rev. 114 (2017) 19–32.
[2] A. Singha, R.S. Thakur, T. Patel, Deep learning applications in medical image analysis, Biomed. Data Min. Inf. Retr.: Methodol. Tech. Appl. (2021) 293–350.
[3] H.H. Sultan, N.M. Salem, W. Al-Atabany, Multi-classification of brain tumor images using deep neural network, IEEE Access 7 (2019) 69215–69225.
[4] W. Ayadi, W. Elhamzi, I. Charfi, M. Atri, Deep CNN for brain tumor classification, Neural Process. Lett. 53 (2021) 671–700.
[5] A.R. Bushara, R.S. Vinod Kumar, S.S. Kumar, Classification of benign and malignancy in lung cancer using capsule networks with dynamic routing algorithm on computed tomography images, J. Artif. Intell. Technol. 4 (1) (2024) 40–48.
[6] L. Zhang, M. Wang, M. Liu, D. Zhang, A survey on deep learning for neuroimaging-based brain disorder analysis, Front. Neurosci. 14 (2020) 560709.
[7] M. Li, Y. Jiang, Y. Zhang, H. Zhu, Medical image analysis using deep learning algorithms, Front. Public Heal. 11 (2023) 1273253.
[8] A.R. Bushara, R.S.V. Kumar, S.S. Kumar, LCD-capsule network for the detection and classification of lung cancer on computed tomography images, Multimedia Tools Appl. 82 (24) (2023) 37573–37592.
[9] E.M. Senan, M.E. Jadhav, T.H. Rassem, A.S. Aljaloud, B.A. Mohammed, Z.G. Al-Mekhlafi, et al., Early diagnosis of brain tumour MRI images using hybrid techniques between deep and machine learning, Comput. Math. Methods Med. 2022 (2022).
[10] G. Çınarer, B.G. Emiroğlu, Classification of brain tumors by machine learning algorithms, in: 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies, ISMSIT, IEEE, 2019, pp. 1–4.
[11] H. Selvaraj, S.T. Selvi, D. Selvathi, L. Gewali, Brain MRI slices classification using least squares support vector machine, Int. J. Intell. Comput. Med. Sci. & Image Process. 1 (1) (2007) 21–33.
[12] B. Charbuty, A. Abdulazeez, Classification based on decision tree algorithm for machine learning, J. Appl. Sci. Technol. Trends 2 (01) (2021) 20–28.
[13] A. Telikani, A. Tahmassebi, W. Banzhaf, A.H. Gandomi, Evolutionary machine learning: A survey, ACM Comput. Surv. 54 (8) (2021) 1–35.
[14] L. Hussain, S. Saeed, I.A. Awan, A. Idris, M.S.A. Nadeem, Q.-u.-A. Chaudhry, Detecting brain tumor using machines learning techniques based on different features extracting strategies, Curr. Med. Imaging 15 (6) (2019) 595–606.
[15] N.S. Shaik, T.K. Cherukuri, Multi-level attention network: application to brain tumor classification, Signal, Image Video Process. 16 (3) (2022) 817–824.
[16] A. Bushara, V.K. RS, S. Kumar, The implications of varying batch-size in the classification of patch-based lung nodules using convolutional neural network architecture on computed tomography images, J. Biomed. Photonics & Eng. (2024) 010305.
[17] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, Y. Fu, Image super-resolution using very deep residual channel attention networks, in: Proceedings of the European Conference on Computer Vision, ECCV, 2018, pp. 286–301.
[18] A. Kumar, J. Kim, D. Lyndon, M. Fulham, D. Feng, An ensemble of fine-tuned convolutional neural networks for medical image classification, IEEE J. Biomed. Heal. Informatics 21 (1) (2016) 31–40.
[19] M. Shafiq, Z. Gu, Deep residual learning for image recognition: A survey, Appl. Sci. 12 (18) (2022) 8972.
[20] J. Cao, C. Hu, L. Kong, Z. Yu, Expression recognition based on multi-level multi-model fusion deep convolutional neural network, Highlights Sci. Eng. Technol. 34 (2023) 232–237.
[21] S. Cong, Y. Zhou, A review of convolutional neural network architectures and their optimizations, Artif. Intell. Rev. 56 (3) (2023) 1905–1969.
[22] N. Chouhan, A. Khan, et al., Network anomaly detection using channel boosted and residual learning based deep convolutional neural network, Appl. Soft Comput. 83 (2019) 105612.
[23] Y. Wang, D. Qi, C. Zhao, Part-based multi-scale attention network for text-based person search, in: Chinese Conference on Pattern Recognition and Computer Vision, PRCV, Springer, 2022, pp. 462–474.
[24] H. Huang, C. Pu, Y. Li, Y. Duan, Adaptive residual convolutional neural network for hyperspectral image classification, IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 13 (2020) 2520–2531.
[25] K. Yang, T. Xing, Y. Liu, Z. Li, X. Gong, X. Chen, D. Fang, cDeepArch: A compact deep neural network architecture for mobile sensing, IEEE/ACM Trans. Netw. 27 (5) (2019) 2043–2055.
[26] X. Wang, X. He, Y. Cao, M. Liu, T.-S. Chua, KGAT: Knowledge graph attention network for recommendation, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 950–958.
[27] A. Bushara, R.V. Kumar, S. Kumar, An ensemble method for the detection and classification of lung cancer using Computed Tomography images utilizing a capsule network with Visual Geometry Group, Biomed. Signal Process. Control. 85 (2023) 104930.
[28] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, Q. Hu, ECA-Net: Efficient channel attention for deep convolutional neural networks, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11534–11542.
[29] A.B. Subba, A.K. Sunaniya, Computationally optimized brain tumor classification using attention based GoogLeNet-style CNN, Expert Syst. Appl. 260 (2025) 125443.
[30] E. Gürsoy, Y. Kaya, Brain-gcn-net: Graph-convolutional neural network for brain tumor identification, Comput. Biol. Med. 180 (2024) 108971.
[31] Y. Kaya, E. Akat, S. Yıldırım, Fusion-brain-net: A novel deep fusion model for brain tumor classification, Brain Behav. 15 (5) (2025) e70520.
[32] N. Abiwinanda, M. Hanif, S.T. Hesaputra, A. Handayani, T.R. Mengko, Brain tumor classification using convolutional neural network, in: World Congress on Medical Physics and Biomedical Engineering 2018 (Vol. 1), June 3-8, 2018, Prague, Czech Republic, Springer, 2019, pp. 183–189.
[33] P. Afshar, A. Mohammadi, K.N. Plataniotis, Brain tumor type classification via capsule networks, in: 2018 25th IEEE International Conference on Image Processing, ICIP, IEEE, 2018, pp. 3129–3133.
[34] J.S. Paul, A.J. Plassard, B.A. Landman, D. Fabbri, Deep learning for brain tumor classification, in: Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 10137, SPIE, 2017, pp. 253–268.
[35] S. Das, O.R.R. Aranya, N.N. Labiba, Brain tumor classification using convolutional neural network, in: 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology, ICASERT, IEEE, 2019, pp. 1–5.
[36] Z.N.K. Swati, Q. Zhao, M. Kabir, F. Ali, Z. Ali, S. Ahmed, J. Lu, Brain tumor classification for MR images using transfer learning and fine-tuning, Comput. Med. Imaging Graph. 75 (2019) 34–46.
[37] P. Saxena, A. Maheshwari, S. Maheshwari, Predictive modeling of brain tumor: a deep learning approach, in: Innovations in Computational Intelligence and Computer Vision: Proceedings of ICICV 2020, Springer, 2020, pp. 275–285.
[38] N. Çınar, B. Kaya, M. Kaya, Comparison of deep learning models for brain tumor classification using MRI images, in: 2022 International Conference on Decision Aid Sciences and Applications, DASA, IEEE, 2022, pp. 1382–1385.
[39] M.M. Badža, M. Barjaktarović, Classification of brain tumors from MRI images using a convolutional neural network, Appl. Sci. 10 (6) (2020) 1999.
