
Construction and Building Materials 266 (2021) 121117


Machine learning-based prediction for compressive and flexural strengths of steel fiber-reinforced concrete

Min-Chang Kang a, Doo-Yeol Yoo a,*, Rishi Gupta b

a Department of Architectural Engineering, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
b Department of Civil Engineering, University of Victoria, 3800 Finnerty Rd, Victoria, BC V8W 2Y2, Canada

Highlights

• Compressive and flexural strengths of SFRC are successfully predicted by machine learning algorithms.
• Tree-based and boosting models are recommended for SFRC predictions.
• The W/C ratio and silica fume content are the most important parameters for predicting compressive strength.
• The fiber volume fraction and silica fume content are the most important for predicting flexural strength.
• The XGBoost and gradient boost regressors are selected as the most appropriate machine learning algorithms for SFRC.

Article info

Article history:
Received 18 June 2020
Received in revised form 15 September 2020
Accepted 24 September 2020
Available online 12 October 2020

Keywords:
Steel fiber-reinforced concrete
Machine learning
Strength prediction
Feature importance

Abstract

Steel fiber-reinforced concrete (SFRC) performs better than normal concrete because of the addition of discontinuous fibers. The development of strength prediction techniques for SFRC is, however, still in its infancy compared to that for normal concrete because of its complexity and the limited available data. To overcome this limitation, research was conducted to develop an optimum machine learning algorithm for predicting the compressive and flexural strengths of SFRC. The resulting feature impact was also analyzed to confirm the reliability of the models. To achieve this, compressive and flexural strength data for SFRC were collected through extensive literature reviews, and a database was created. Eleven machine learning algorithms were then established based on the dataset. K-fold validation was conducted to prevent overfitting, and the algorithms were regulated. The boosting- and tree-based models had the best performance, whereas the K-nearest neighbor, linear, ridge, lasso, support vector, and multilayer perceptron regressors had the worst performance. The water-to-cement ratio and silica fume content were the most influential factors in the prediction of the compressive strength of SFRC, whereas the silica fume content and fiber volume fraction most strongly influenced the flexural strength. Finally, it was found that, in general, the compressive strength prediction performance was better than the flexural strength prediction performance, regardless of the machine learning algorithm.

© 2020 Elsevier Ltd. All rights reserved.

* Corresponding author.
E-mail addresses: cmway013@[Link] (M.-C. Kang), dyyoo@[Link] (D.-Y. Yoo), guptar@[Link] (R. Gupta).
[Link]
0950-0618/© 2020 Elsevier Ltd. All rights reserved.

1. Introduction

Fiber-reinforced concrete (FRC) can improve the mechanical properties of ordinary concrete through the bridging effect of discontinuous fibers. FRC thus increases the flexural and tensile strengths of concrete and can increase its toughness and resistance to cracks. In particular, the most commonly used steel fiber-reinforced concrete (SFRC) has demonstrated significant effects, exhibiting an increase in flexural strength of ~3%–81% over that of normal concrete [1]. Furthermore, adding the appropriate amount of steel fibers (0–1.5 vol%) enhances the durability and frost resistance of concrete [2]. Nguyen et al. [3] reported that, owing to the restriction of crack propagation by polyvinyl alcohol and steel fibers, rebar corrosion in FRC is generally localized and the total mass loss decreases compared to that of normal concrete. Numerous studies [4–11] have proposed prediction models of the mechanical properties of normal concrete based on large amounts of data; however, SFRC has more forecasting variables, such as fiber type, volume fraction, and aspect ratio, than normal concrete, and the development of appropriate prediction models is relatively new. It is thus difficult to predict the compressive and flexural strengths of SFRC with ordinary linear
or nonlinear regression analyses. Machine-learning techniques can provide a solution to the problem of predicting the strength of SFRC.

In the case of normal concrete, many studies [4–12] have been conducted to predict compressive strength, electrical resistance, etc., based on machine-learning techniques. Numerous studies [4,5,7,9] on artificial neural networks (ANNs) have been conducted; ANNs perform well when substantial data are available. In addition, high predictability of compressive strength using support vector machines (SVMs) has been demonstrated [7,9], and various linear algorithms, tree-based methods [9,10], boosting-based methods [7,11], etc. have been adopted. Recently, not only the concrete compressive strength but also other strengths have been analyzed: Behnood et al. [13] analyzed not only the compressive strength of concrete but also the flexural and splitting tensile strengths through the M5P model. However, machine-learning-based prediction studies of SFRC have been very limited, because it is difficult to collect data to create models with various variables (i.e., fiber type, aspect ratio, etc.), and there are no published studies yet to determine suitable algorithms for strength prediction. Therefore, in this study, a machine-learning model that predicts not only the compressive strength of SFRC but also the flexural strength is built, and the most suitable algorithms are identified through a comparative analysis.

In this study, a total of 11 algorithms were applied to create machine-learning models. Linear algorithms (i.e., linear, ridge, lasso, and support vector regression), decision tree, random forest, K-nearest neighbor, and boosting-based algorithms were considered. When applying machine-learning techniques, however, overfitting, the most important factor, should be considered. If overfitting occurs, the model is unreliable, because it cannot perform properly when the training data are overlearned and other external data are applied. Therefore, in this study, K-fold validation was used to prevent overfitting, and if overfitting was deemed to have occurred, the model was regulated. The machine-learning models built with each algorithm were compared and analyzed to select suitable algorithms for the prediction of SFRC strength. The variables that are important factors in the predictions of the models were also determined.

2. Data preprocessing

Before building the machine-learning models, the dataset must be preprocessed to choose which data are to be used. Although numerous researchers [4–12] have performed machine-learning prediction, no research has been done to construct a machine-learning model of SFRC. A new dataset limited to SFRC is thus required to build SFRC machine-learning models that predict the compressive and flexural strengths. Therefore, to build the machine-learning models, data were collected from numerous previous papers [14–35] on SFRC in which both compressive and flexural strengths are presented.

2.1. Building the dataset

The data used in this study were collected from 22 references [14–35]. In total, 220 sets of compressive and flexural strengths were used. Because the purpose of this research is to build initial SFRC machine-learning models and predict mechanical properties, the dataset was built only with data from concrete reinforced with the most commonly used hooked-end steel fibers. Although the dataset comprised many variables, only those found to be fundamentally influential were selected and preprocessed.

Consequently, there were a total of nine features in the dataset, consisting of seven input data and two output data. The input data included the water-to-cement (W/C) ratio, sand-to-aggregate (S/a) ratio, coarse aggregate size, superplasticizer content, silica fume content, volume fraction of the hooked steel fiber, and fiber aspect ratio. The nine features were applied identically to both the compressive and flexural strength predictions of SFRC, because many previous studies investigated the compressive and flexural strengths simultaneously, as they are basic mechanical properties, and each of the features affects both strengths. The effects of the features on the compressive and flexural strengths are as follows.

2.1.1. W/C ratio

The W/C ratio has a great influence on the strengths of concrete. Abbass et al. [21] reported that both the compressive and flexural strengths decrease as the W/C ratio increases. Similarly, Reddy et al. [36] conducted compressive and flexural tests according to the W/C ratio of self-consolidating concrete and discovered that it has a great influence. Nili et al. [37] reported that a higher flexural strength is obtained by decreasing the W/C ratio. Therefore, the W/C ratio was identified as a factor affecting the compressive and flexural strengths of SFRC, and thus, it was selected as a variable.

2.1.2. S/a ratio

The influence of the sand-to-aggregate (S/a) ratio on the strengths of SFRC has been considered an important factor. Kim et al. [38] discovered that as the S/a ratio increases to 0.444, 0.515, and 0.615, the compressive strength of SFRC also increases, to 50.13, 52.62, and 54.10 MPa, respectively. In addition, as the S/a ratio increased, the flexural strength of SFRC increased; the higher S/a ratio enhanced the local stiffness of the interfacial transition zone (ITZ) surrounding the fibers. Chitlange et al. [39] also showed a large change in compressive and flexural strengths according to the S/a ratio. Therefore, the S/a ratio was selected as an important variable in building the prediction models.

2.1.3. Coarse aggregate size

The coarse aggregate size has a partial influence on the mechanical properties of concrete. Han et al. [40] discovered that the coarse aggregate size affects the compressive strength of concrete; measurements of the flexural strength also showed an improvement with an increase in the maximum coarse aggregate size. Rao et al. [41] likewise showed an increase in compressive strength when the coarse aggregate size increased up to 12.5 mm. In contrast, Jang et al. [27] reported that the effect of the coarse aggregate size on the compressive and flexural strengths of SFRC is insignificant. Therefore, because the coarse aggregate size showed a partial tendency to increase the strengths, it is necessary to analyze its influence through machine-learning methods.

2.1.4. Superplasticizer

A superplasticizer, also known as a high-range water reducer, is a frequently used admixture for making high-strength concrete. Khan et al. [42] reported that the use of pozzolanic materials and superplasticizer can improve the mechanical properties of concrete. In addition, it is necessary to use a superplasticizer because the concrete would otherwise need a higher water content, causing bleeding. Aruntaş et al. [43] noted that, compared to the control specimens, the slump flow and the compressive strength increase as the superplasticizer content increases up to 1.5%; after steam curing for 90 days, the flexural strength also increased as the superplasticizer content increased to 1.5%. Therefore, the superplasticizer content was selected as a feature to measure its direct influence on the compressive and flexural strengths of SFRC in the ML models.
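The nine-feature layout described in Section 2.1 (seven mix-design inputs, two strength outputs) can be sketched as a small record-building step. This is an illustrative sketch only: the column names are assumptions, and the numeric values are made-up placeholders, not rows from the actual 220-set database of [14–35].

```python
# Sketch of the nine-feature SFRC dataset layout described in Section 2.1.
# Each record holds seven mix-design inputs and two strength outputs (MPa).

INPUT_FEATURES = [
    "wc_ratio",         # water-to-cement (W/C) ratio
    "sa_ratio",         # sand-to-aggregate (S/a) ratio
    "max_agg_size",     # coarse aggregate size (mm)
    "superplasticizer", # superplasticizer content
    "silica_fume",      # silica fume content
    "fiber_vf",         # hooked steel fiber volume fraction (vol%)
    "aspect_ratio",     # fiber aspect ratio (length/diameter)
]
OUTPUT_TARGETS = ["compressive_strength", "flexural_strength"]

def make_record(inputs, outputs):
    """Combine one mix design and its measured strengths into a record."""
    assert len(inputs) == len(INPUT_FEATURES)
    assert len(outputs) == len(OUTPUT_TARGETS)
    record = dict(zip(INPUT_FEATURES, inputs))
    record.update(zip(OUTPUT_TARGETS, outputs))
    return record

# One hypothetical mix: W/C 0.40, S/a 0.50, 12.5 mm aggregate, 1.0% SP,
# 5% silica fume, 1.0 vol% fiber, aspect ratio 65 -> 55 MPa / 8 MPa.
sample = make_record([0.40, 0.50, 12.5, 1.0, 5.0, 1.0, 65.0], [55.0, 8.0])
print(len(INPUT_FEATURES) + len(OUTPUT_TARGETS))  # nine features in total
```

In the study itself, 220 such records collected from the literature play the role of `sample` here.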
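The 8:2 split and the scale-after-split rule applied in Section 2.2 (standardization statistics computed from the training set only, to avoid data leakage [45]) can be sketched without any libraries. The ordered split and the sample values below are illustrative assumptions, not the study's actual sampling.

```python
import math

def train_test_split_82(values):
    """Split a list 8:2 into training and testing parts (simple ordered
    split; the study's actual sampling procedure may differ)."""
    cut = int(len(values) * 0.8)
    return values[:cut], values[cut:]

def standardize(train, test):
    """Scale both sets using the TRAIN mean/std only, to avoid leakage,
    mirroring scikit-learn's StandardScaler fitted on the training set."""
    mean = sum(train) / len(train)
    std = math.sqrt(sum((v - mean) ** 2 for v in train) / len(train))
    scale = lambda xs: [(v - mean) / std for v in xs]
    return scale(train), scale(test)

# Ten hypothetical compressive strengths (MPa).
data = [23.8, 40.0, 55.0, 59.4, 70.0, 85.0, 90.0, 95.0, 100.0, 106.7]
train, test = train_test_split_82(data)   # 8 training, 2 testing samples
z_train, z_test = standardize(train, test)
print(len(train), len(test))  # -> 8 2
```

Fitting the scaler on the full dataset instead would leak test-set statistics into training, which is exactly what the split-then-scale order prevents.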
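The three error measures used to evaluate the models in Section 3.1 — RMSE, MAE, and MAPE, Eqs. (1)–(3) — translate directly into code. A minimal stdlib-only sketch with hypothetical strength values:

```python
import math

def rmse(actual, predicted):
    """Root mean square error, Eq. (1)."""
    n = len(actual)
    return math.sqrt(sum((y - yp) ** 2 for y, yp in zip(actual, predicted)) / n)

def mae(actual, predicted):
    """Mean absolute error, Eq. (2)."""
    n = len(actual)
    return sum(abs(y - yp) for y, yp in zip(actual, predicted)) / n

def mape(actual, predicted):
    """Mean absolute percent error in %, Eq. (3); actual values must be nonzero."""
    n = len(actual)
    return sum(abs((y - yp) / y) for y, yp in zip(actual, predicted)) / n * 100

# Hypothetical flexural strengths (MPa): measured vs. model-predicted.
y_true = [2.0, 4.0]
y_pred = [1.0, 5.0]
print(rmse(y_true, y_pred), mae(y_true, y_pred), mape(y_true, y_pred))
# -> 1.0 1.0 37.5
```

Note how MAPE normalizes each error by the measured value, which is what makes it insensitive to the absolute scale of the target, unlike MAE.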

2.1.5. Silica fume

Silica fume plays an important role in the strength of concrete, as reported in many previous studies. Köksal et al. [18] showed that both the compressive and flexural strengths improve with increasing silica fume content. Nili et al. [44] also noted that the compressive strength improves as the silica fume content increases. When steel fibers and silica fume were used together, the flexural strength was significantly improved. The silica fume content was thus identified as a factor affecting the compressive and flexural strengths of SFRC and was selected as a feature.

2.1.6. Fiber volume fraction and aspect ratio

In a previous study on the effect of the fiber volume fraction and aspect ratio on the compressive and flexural strengths, Yazıcı et al. [1] reported that the compressive strength is increased by the fiber volume fraction for an aspect ratio of 45, whereas for the higher aspect ratios of 65 and 80, the compressive strength increased only up to a fiber volume fraction of 1%; both the compressive and flexural strengths were improved compared to the control specimen. In addition, in an experimental study using fiber volume fractions up to 1%, Köksal et al. [18] showed that both the compressive and flexural strengths increase. The influence of the fiber volume fraction and aspect ratio on the compressive strength of SFRC was relatively small but positive. Thus, it is necessary to consider the fiber volume fraction and aspect ratio as features in the ML models.

The output data comprised the compressive strength and flexural strength. Fig. 1 shows the compressive strength and flexural strength density distributions. The maximum, minimum, and average compressive strengths of SFRC were 106.7, 23.83, and 59.4 MPa, respectively, and those of the flexural strengths were 24.8, 2.72, and 7.89 MPa, respectively. It was verified from Fig. 1 that the output data are well distributed.

Fig. 1. Output data histogram.

2.2. Data scaling and splitting

After building the dataset, the dataset was split into two groups. The first is a training set, which plays the role of training the models. The second is a testing set, which is used to compare the predicted results with those of the machine-learning model trained on the training set. In this study, the training set and the test set were divided in the ratio of 8:2. Oey et al. [45] explained that scaling is performed after the splitting of the training and testing sets to avoid data leakage. Therefore, because the ranges and units of the values differ, the data for each value were scaled to a standard normal distribution. The data were rescaled by employing StandardScaler, a function in scikit-learn that converts the mean to zero and the standard deviation to one.

3. Methodology

3.1. Evaluation method

To evaluate the accuracy of the machine-learning algorithms, the root mean square error (RMSE), mean absolute error (MAE), and mean absolute percent error (MAPE) were considered.

3.1.1. RMSE

In regression, the loss function that is generally used is the RMSE, defined as follows:

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - y_i'\right)^2}    (1)

where y_i' is the predicted value, y_i is the actual value, and n is the number of data samples. The larger the error, the larger the RMSE; thus, the accuracy of the predictions can be estimated. The RMSE is commonly used when the model-predicted values differ from the real values, and the performance of the model is better when the RMSE value is smaller.

3.1.2. MAE

The MAE is the arithmetic mean of the absolute deviations between the measured and predicted values:

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - y_i'\right|    (2)

3.1.3. MAPE

The MAPE is an evaluation method that compensates for the size-dependent-error disadvantage of the MAE. It is defined as

\mathrm{MAPE}(\%) = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - y_i'}{y_i}\right| \times 100    (3)

3.2. Cross-validation

Overfitting refers to the inability to generalize to additional data because the analytical results correspond too closely or exactly to a particular dataset [46]. An overfitted model is useless for predictive purposes, as it performs well on the sample data

used in learning but does not predict well when new data are given. If the total number of data is small, the number of validation data is also small, the validation is then less reliable, and it becomes difficult to ascertain whether the model generalizes. The verification method for addressing these dilemmas is, therefore, the K-fold cross-validation method [6,19].

K-fold cross-validation (here with K = 5) repeats the following learning and validation steps:

1. Divide all data into five subsets {D1, D2, D3, D4, D5}.
2. Use data {D1, D2, D3, D4} as learning data to develop a regression model and use data {D5} to cross-validate.
3. Use data {D1, D2, D3, D5} as learning data to construct a regression model and cross-validate with data {D4}.
4. Continue in the same manner, e.g., using data {D2, D3, D4, D5} as learning data to build a regression model and data {D1} to cross-validate, until every subset has been used once for validation.

This results in a total of K models and K cross-validation performances. The final cross-validation performance is calculated by averaging the cross-validation performances of these K models. Fig. 2 shows a schematic description of five-fold validation.

Fig. 2. Schematic description of five-fold validation.

3.3. Machine-learning algorithms

Various machine-learning algorithms were used in this study to predict the compressive and flexural strengths of SFRC and to determine suitable algorithms as the predictive models. The linear, lasso, ridge, K-nearest neighbor (KNN), decision tree, random forest, AdaBoost, gradient boost, and XGBoost regressors were used for the machine-learning models. The feature importance of the models with good performance was analyzed to secure the reliability of the models and was compared with previous research results.

3.3.1. Linear regression

Linear regression is the most general algorithm based on supervised learning for machine-learning prediction. In many previous studies, this algorithm was used to predict the compressive strength of concrete, because it is the most basic and the easiest to apply. Linear regression performs the task of predicting a dependent variable value (ŷ) based on given independent variables (x_n). This regression technique therefore identifies a linear relationship between x_n (input) and ŷ (output) [6,47] as follows:

\hat{y} = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n    (4)

where ŷ is the dependent variable value, x_n are the independent variable values, and θ_0 is the bias.

3.3.2. Ridge regression

Ridge regression is a linear regression with regulation and is defined as follows:

J(\theta) = \mathrm{MSE}(\theta) + \alpha \sum_{i=1}^{n} \theta_i^2    (5)

where α is the intensity of the regulation. Ridge regression uses the L2-norm (squared regulation). The regulation term (\alpha \sum_{i=1}^{n} \theta_i^2) is added to the cost function and is not applied when assessing performance on a test set or predicting an actual sample. The hyperparameter α controls how much the model is regulated: when α is very large, all weights become close to zero, and the model eventually becomes a horizontal line over the average of the data [47].

3.3.3. Lasso regression

Lasso regression is an algorithm similar to ridge regression and is defined as follows:

J(\theta) = \mathrm{MSE}(\theta) + \alpha \sum_{i=1}^{n} |\theta_i|    (6)

Compared to ridge regression, lasso regression uses the L1-norm (absolute regulation). If the coefficient of a characteristic is too low, it is zeroed to erase the characteristic; zeroing the effect of a property on the model increases the bias to prevent overfitting. The hyperparameter α again controls the effect of the penalty: increasing its value increases the effectiveness of the penalty, and decreasing it toward zero makes the model equal to linear regression. The purpose of lasso is to find the parameters that minimize the sum of the MSE and the penalty term [47].

3.3.4. K-nearest neighbors (KNN) regressor

K-nearest neighbors (KNN) is a technique in which predictions for new records are obtained by comparison with the records that are most similar in a dataset. Fig. 3(a) shows a schematic image of the KNN algorithm, which can be applied to regression as well as classification. KNN assumes that observations that are close in the space of the data attributes (i.e., the concrete mix) are also close to each other in the space of the output values. The output values are predicted using a predefined function of the nearest neighbors' response values, taking the nearest points in the data space into account; for standard KNN, the average function is commonly used. The properties of standard KNN are as follows: 1. assign the same importance to all neighbors and use the average function to calculate the response value of unknown observations; 2. assign the same weight to all attributes by assuming that all normalized attributes are equally important; 3. use Euclidean distances to calculate distance. The advantage of KNN is that it is only insignificantly affected by the noise contained in the learning data, and it is thus an effective algorithm for large amounts of learning data [47,48].

3.3.5. Decision tree regression

The decision tree analyzes the data and expresses the patterns that exist between them as a combination of predictable rules; Fig. 3(b) is a schematic image of a decision tree, so called because its shape is similar to that of a tree. The goal of the regression tree is to create partitions in the predictors so that the target variables can be predicted based on the partitions between input variables. In addition, because the regression tree implicitly selects variables, the trained regression tree displays the variables that are more important for predicting the target variables at earlier nodes in the tree. The advantage of the regression tree is that it can handle both numeric and categorical data. This approach is therefore relatively easier than other models, but careful consideration is needed to avoid overfitting the data. The disadvantage of the regression tree is model instability: a simple change in the dataset can create a
completely different set of partitions and, as a result, a model that is not the best [47,49]. The decision tree model is therefore prone to overfitting the data, and for this reason, more complex tree-based methods (random forest, boosted trees, etc.) are often considered more reliable. Because of these problems, the K-fold method was applied to create the predictive model, and because overfitting occurred, the model was regulated.

Fig. 3. Graphical representation of various machine learning algorithms.

3.3.6. Random forest regressor

Using a random forest is a way of constructing a regression tree ensemble to reduce the fluctuation of individual trees. The decision trees gather to build a forest by using the concept of "bootstrap aggregation" (bagging) to create many similar datasets sampled from the same source dataset; bagging is a method of combining base models trained on these training data. The drawback of the decision tree is that, because of its small bias and great variance, it is vulnerable to overfitting the training data, although the decision tree alone can perform machine learning. The advantage of the random forest method is that it greatly reduces this instability. To prevent overfitting, random forest models can be built from multiple decision trees or through regulation; consequently, in this study, the model was regulated [47,50].

3.3.7. AdaBoost regressor

Boosting is an ensemble method that makes a strong learner by connecting several weak learners; it learns by adjusting the sample weights of the learning data of the next model based on the learning results of the previous model. For this reason, the results of the previous learning affect the next learning, and as the boosting rounds progress, the weights of the data increase. A schematic image of boosting is shown in Fig. 3(c).

AdaBoost is a way to increase the weights of training samples underfitted by the previous model. First, AdaBoost creates the first weak learner and, according to the results after learning, produces an output weight (C) and a model weight (w) for each data point, updating the model with the data weights obtained. The second weak learner is then created and learned, and this is repeated N times. Training examples incorrectly predicted by the models induced in the previous step increase in weight, while accurately predicted examples decrease in weight; as the iterations progress, hard-to-predict examples therefore receive ever more attention. After N iterations, the N weak learners are each given a model weight (w) to make the final model [47,51].

3.3.8. Gradient boosting regressor

The gradient boosting regression tree is another ensemble method that combines multiple decision trees to create a powerful model [47]. By default, there is no randomness in the gradient boosting regression tree; instead, strong pre-pruning is used. A gradient boosting tree usually uses trees that are fewer than five levels deep, thereby using less memory and enabling faster predictions. The fundamental idea of gradient boosting is to connect numerous simple models, such as shallow trees (known as weak learners); each tree can predict only a fraction of the data well, and adding more trees enhances performance. Like AdaBoost, gradient boosting adds predictors sequentially to the ensemble to compensate for previous errors. Unlike AdaBoost, however, it learns the new predictor from the residual error created by the previous predictor [12,47].

3.3.9. eXtreme gradient boosting (XGBoost)

eXtreme gradient boosting (XGBoost) is an ensemble learning algorithm based on the classification and regression tree (CART) that can be used for both classification and regression problems. The existing gradient boosting machine (GBM) suffers from the
disadvantages of overfitting and slowness. XGBoost can therefore be employed as an advanced GBM with distributed/parallel processing. XGBoost is a more reliable learning algorithm because it has two built-in regulation functions (shrinkage and column subsampling); when there is a lot of data, its learning time is shorter than that of GBM, and it has better predictive ability. XGBoost is used in many areas because of its high problem-solving capability and low resource requirements. Therefore, the XGBoost algorithm was applied in this study, and the overall performance of the boosting techniques was compared [7,12].

3.3.10. Support vector regression (SVR)

Support vector regression (SVR), the most common application form of support vector machines (SVMs), was proposed by Vapnik et al. [52], who introduced an alternative ε-insensitive loss function; this loss function allows the concept of the margin to be used for regression problems. The purpose of SVR is to find a function that has at most ε deviation from the actual target values for all given training data and that is, at the same time, as flat as possible. Some of the key features of SVR are that it minimizes the observed training error and attempts to minimize the generalized error to achieve generalized performance. When SVMs are used for regression operations, SVR requires a cost function to measure the empirical risk in order to minimize the regression error. For a training dataset with k samples, {x_i, y_i}, i = 1, ..., k, where x_i is the input vector and y_i is the target value, consider the linear regression function f(x) = \langle w, x \rangle + b, as shown in Fig. 3(d). To make the function as flat as possible, w needs to be minimized, which can be achieved by minimizing \frac{1}{2} w^T w, i.e., the norm \frac{1}{2}\lVert w \rVert^2:

minimize \frac{1}{2} w^T w

subject to
y_i - \langle w, x_i \rangle - b \le \varepsilon,
\langle w, x_i \rangle + b - y_i \le \varepsilon.

However, if a function that meets the above constraints does not exist, the above expression cannot be used. Therefore, a slack variable (\xi^{(i)} \ge 0) needs to be introduced for each sample; \xi^{(i)} measures the deviation of training samples outside the ε-insensitive zone. SVR can therefore be formulated as the following minimization:

\min \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{k} \xi^{(i)}    (7)

subject to
y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi^{(i)},
\langle w, x_i \rangle + b - y_i \le \varepsilon + \xi^{(i)},
\xi^{(i)} \ge 0, \quad i = 1, \ldots, k.

In this study, the linear SVM was adopted to predict the mechanical properties of SFRC because it is the most general prediction method [52,53].

3.3.11. Multilayer perceptron (MLP)

The artificial neural network (ANN) is a statistical learning algorithm inspired by biological neural networks. The ANN model consists of a number of highly interconnected neurons. The model proposed by McCulloch and Pitts in 1943 is still the most commonly used model among ANN architectures; it includes the main features of biological neural networks: parallelism and high connectivity. In this neuron model, the circle is called a neuron or node, x_i represents the input signal applied to each neuron, y is the output, and w_i are the weights. When the input signals are sent to the neuron, each of them is multiplied by its own weight, and when the sum of the signals exceeds the set limit, the sum function computes the net input that activates the neuron. The weighted sum of the input values is calculated using the following equation:

\mathrm{net}_j = \sum_{i=1}^{n} w_{ij} x_i + b    (8)

where net_j is the weighted sum of the jth neuron, w_{ij} are the parameters indicating the weight of each signal, x_i are the input values, and b is the bias [54].

The activation function then processes the net input obtained from this sum and determines whether to enable the output. There are several activation functions, such as the sigmoid, rectified linear unit (ReLU), and step functions. In this study, the ReLU function is used as the activation function, although the sigmoid function has been used as the activation function in many other studies [55–57]. The reason for using the ReLU function is that the sigmoid saturates: if the internal hidden layers are all composed of sigmoid functions, all the values calculated at each step must lie between 0 and 1, and the derivative becomes very small when the sigmoid function value is either too high or too low. During back-propagation, these small derivatives are multiplied together across layers as the results are passed back toward the first layer, which causes the gradients to vanish in deep networks and leads to poor learning. The ReLU function, by contrast, has been frequently adopted recently: it returns the input if the input exceeds zero and returns zero otherwise. Consequently, the speed of learning becomes very fast, and the ReLU function is able to overcome the vanishing gradient problem [58]. Therefore, ReLU was applied to the internal hidden layers, and the Adam function was applied only in the last output layer.

A neural network is composed of input and output layers of nodes and is designed by assigning relevant weights so as to provide the output as a result of a given input; the models are trained with the training data from the split described above. There are various ANN learning algorithms; among them, the multilayer perceptron (MLP) is used the most. An MLP is composed of an input layer, an output layer, and at least one hidden layer. When two or more layers of the ANN are hidden, it is called a deep neural network, as shown in Fig. 3(f). Multiple layers of neurons with nonlinear transfer functions allow the network to learn the nonlinear relationships between input vectors and output vectors. A learning algorithm for networks with hidden units was introduced by Rumelhart [59]. This algorithm, called back-propagation (BP), is the most widely used learning algorithm. For each training sample, the BP algorithm first makes predictions and measures the errors, which are then propagated backward; the weights and biases of the hidden and output layers are then readjusted to reduce the difference between the test data and the prediction data. This reverse process efficiently calculates the error gradient for all connection weights in the network by propagating the error gradient backward. In this study, Keras, an open-source library, was used to build the MLP model; Keras applies the BP algorithm automatically.

To construct an MLP model, the number of layers, the number of neurons in each layer, the activation function, the loss function, etc. need to be determined. Young et al. [4] employed an MLP algorithm with two hidden layers of 10 and 5 neurons, respectively.
nectivity. Artificial neuron structures consist of a total of five parts: In this study, therefore, eight variable characteristics were used
input, weights, the sum function, the activation function, and out- for input values, and two hidden layers were used. In addition,
put. The perceptron of the ANN is shown in Fig. 3(e). In this figure, the number of neurons in each hidden layer was used to create a
Min-Chang Kang, Doo-Yeol Yoo and R. Gupta Construction and Building Materials 266 (2021) 121117
model using 5 and 10 neurons, respectively. For the MLP, the number of training iterations (called 'epochs') can be set; the higher the number of epochs, the smaller the error with respect to the training data, but this may lead to overfitting. To prevent this, an early stopping method was used to stop learning if the errors increased compared to the previous epoch.

4. Results and discussion

The compressive and flexural strengths of SFRC were predicted through various machine-learning techniques. The predicted values were compared with the actual values to demonstrate the feasibility of the machine-learning algorithms, and the feature importance of the five algorithms exhibiting excellent performance was analyzed in detail to ensure the reliability of the models.

4.1. Compressive strength

Table 1 lists the machine-learning prediction results for the compressive strength of SFRC. Among the various machine-learning techniques, the XGBoost algorithm performed best, with an RMSE of 3.6144 and an MAE of 2.3540. The lasso regressor performed worst, with an RMSE of 18.3005 and an MAE of 14.8517. The differences in RMSE and MAE between the XGBoost and lasso regressors were about 14 and 12 MPa, respectively. In addition, the XGBoost, gradient boost, random forest, and decision tree regressors performed better than KNN regression, linear regression, and MLP, even though regularization was imposed to prevent the models from overfitting.

Table 1
Results of machine learning algorithms for the compressive strength.

     Model                           RMSE      MAE
 1   XGBoost Regressor               3.6144    2.3540
 2   Gradient Boosting Regressor     5.0839    3.1441
 3   AdaBoost Regressor              6.2401    5.0743
 4   Random Forest Regressor         7.9932    6.6395
 5   Decision Tree Regressor         9.3662    7.3979
 6   K Nearest Neighbors (KNN)      11.5256    8.2706
 7   Linear Regressor               12.4273   11.3765
 8   Multi Layer Perceptron (MLP)   14.3430   11.7779
 9   Ridge Regressor                14.4741   12.5374
10   Support Vector Regressor       18.0214   14.7034
11   Lasso Regressor                18.3005   14.8517

In Fig. 4, one can see that models built with the XGBoost, gradient boost, AdaBoost, random forest, and decision tree regressors predicted the actual measurements well. However, models built with the KNN, linear, MLP, ridge, SVR, and lasso algorithms showed relatively wide deviation from the center line, giving significant differences from the actual measurements. Overall, the performance of the boosting-based ensemble methods was good, and the tree-based algorithms also performed relatively well. The performance of the linear models (i.e., linear, lasso, and ridge regressions) was poor, however, owing to the complexity of the correlation between the strength and the concrete mix proportions. Because KNN models, by nature, predict from similar data, prediction was difficult when new data were input for which no similar data existed in the training set. In addition, the configuration of each neuron and layer was difficult to determine accurately because the MLP model used a small number of data when calculating the weights of each layer.

In Fig. 5, the five best predictive algorithms are compared in terms of feature importance. The analysis showed that, relative to the other variables, silica fume, W/C ratio, and coarse aggregate size were the factors dominating the compressive strength of SFRC. Among them, silica fume was commonly found to be the most significant variable affecting the compressive strength. In contrast, the volume fraction and aspect ratio of steel fibers were relatively insignificant, meaning that they appear to have a minor effect on the compressive strength. Köksal et al. [18] reported that the concrete compressive strength increases as the amount of silica fume increases, and the compressive strength decreased with increasing maximum aggregate size in a previous study [60]. Yazıcı et al. [1] noted that the use of fibers in concrete increases the compressive strength by only ~4%–19%, which is relatively insignificant compared to the tensile or flexural strength. Yoo et al. [15] also reported an insignificant increase and decrease in the compressive strength of concrete by adding steel fibers, which is consistent with the findings of Hsu and Hsu [61]. Therefore, based on the feature importance values and the consistency with previous studies, the machine-learning algorithms for compressive strength could be considered reasonable.

The correlations between the compressive strength and the variables considered are shown in Fig. 6. It is obvious that the compressive strength decreased with increasing W/C ratio and coarse aggregate size, whereas it increased with silica fume content. However, the compressive strength of SFRC was insignificantly influenced by the S/a ratio, superplasticizer content, fiber volume fraction, and aspect ratio of fiber. In particular, the fiber volume fraction and aspect ratio showed relatively slight influence. Yang et al. [62] reported that the effect of the fiber volume fraction on the compressive strength is unclear because fiber dispersion is disturbed at high fiber concentrations, leading to the formation of cracks from the weakest zones, with an insufficient amount of fibers and poor compactness of concrete. Air voids in hydrated cement paste with a high fiber volume fraction resulted in only a slight change in strength. In addition, Yoo et al. [15] reported that the compressive strength decreased slightly with the incorporation of steel fibers for normal concrete, while it increased with the addition of steel fibers for high-strength and ultra-high-strength concrete. This verified that the variables most influencing the compressive strength are the W/C ratio, coarse aggregate size, and silica fume content, which is consistent with the findings from the feature importance in Fig. 5.

4.2. Flexural strength

The performance of the prediction models for the flexural strength of SFRC is given in Table 2. The gradient boost algorithm exhibited the best performance, with an RMSE of 1.5111 and an MAE of 1.1841. Among the prediction models, the MLP model exhibited the worst performance, with an RMSE of 3.4133 and an MAE of 2.7871, which is similar to the compressive strength prediction results. The top five models (gradient boost, XGBoost, random forest, AdaBoost, and decision tree regressors) were the same as those for the compressive strength.

From Fig. 7, one can see that the predictive values of the gradient boost, XGBoost, random forest, AdaBoost, and decision tree regressors were quite similar to the measured data, being close to the center line. This indicates that the results are quite predictable, although the data deviations were large. The linear, ridge, support vector, lasso regressor, and MLP models, however, exhibited significant differences from the actual flexural strengths. Such poor predictions appeared as a result of problems such as the complexity of the concrete mixes and the limited number of data. Similarly, the predictive performance of the boosting and tree-based ensemble algorithms was quite good regardless of the number of data. It was thus appropriate to use the boost-based or tree-based algorithms to predict the flexural strength of SFRC.
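The model comparison summarized in Table 1 can be sketched with scikit-learn. The snippet below is an illustrative reconstruction, not the authors' code: it uses a synthetic stand-in for the collected SFRC dataset (the real mix-design table and the separate XGBoost package are omitted), and the feature count, split ratio, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.svm import LinearSVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for the SFRC dataset: 8 mix variables -> strength target
X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
    "Linear": LinearRegression(),
    "Lasso": Lasso(),
    "Ridge": Ridge(),
    # Linear SVR with the epsilon-insensitive loss of Eq. (7)
    "LinearSVR": LinearSVR(C=1.0, epsilon=0.1, max_iter=10000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))  # RMSE, as in Table 1
    mae = mean_absolute_error(y_te, pred)           # MAE, as in Table 1
    print(f"{name:16s} RMSE={rmse:8.4f}  MAE={mae:8.4f}")
```

On real mix-proportion data, the ranking of the models would be read off from the printed RMSE/MAE values exactly as in Tables 1 and 2.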
Fig. 4. Comparison of compressive strength predictions with various algorithms.
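The MLP configuration described in Section 3.3.11 (two hidden layers, ReLU activations, Adam optimization, early stopping on rising validation error) can be approximated with scikit-learn's `MLPRegressor`. The paper itself used Keras, so this is only an equivalent sketch under assumed data; the 10/5 neuron counts follow the configuration discussed above.

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the eight SFRC mix variables and a strength target
X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=1)
y = (y - y.mean()) / y.std()  # standardize the target for stable training

mlp = make_pipeline(
    StandardScaler(),                 # scale the input mix variables
    MLPRegressor(
        hidden_layer_sizes=(10, 5),   # two hidden layers, cf. Young et al. [4]
        activation="relu",            # mitigates the vanishing-gradient problem
        solver="adam",
        early_stopping=True,          # stop once validation error rises
        validation_fraction=0.1,
        n_iter_no_change=20,
        max_iter=2000,
        random_state=1,
    ),
)
mlp.fit(X, y)
print("R^2 on training data:", round(mlp.score(X, y), 3))
```

Early stopping here plays the same role as the epoch-limiting procedure described above: training halts automatically instead of running a fixed, possibly overfitting, number of epochs.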
The five best predictive models were selected, and their feature importance is analyzed in Fig. 8, which shows a significant difference compared to the compressive strength prediction models. In the compressive strength models, the effects of fiber volume fraction and aspect ratio were insignificant, but the fiber volume fraction and aspect ratio significantly influenced the flexural strength of SFRC. As would be expected, the flexural strength generally increased with these parameters. Unlike the feature importance analysis for compressive strength, the feature importance of silica fume, fiber volume fraction, and S/a ratio was higher than that of the W/C ratio. The other variables (coarse aggregate size, superplasticizer, fly ash, and fiber aspect ratio) insignificantly influenced the flexural strength of SFRC. The correlations between the flexural strength and the variables are summarized in Fig. 9. It was obvious that the flexural strength of SFRC increased with increasing fiber volume fraction and silica fume content, which is similar to the results of previous studies [18], thus proving the reliability of the predictive model. A lower W/C ratio also seemed to increase the flexural strength of SFRC. However, the other variables showed no clear trends in how they affected the flexural strength. Therefore, it can be concluded that the addition of steel fibers more strongly influences the flexural strength than the compressive strength of concrete. Gesoğlu et al. [63] reported that silica fume is effective in improving the flexural strength of SFRC, and Yoo
Fig. 5. Feature importance of compressive strength.
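Feature importance plots such as Fig. 5 come directly from the fitted tree-based and boosting models. The sketch below shows the mechanism with a gradient boosting regressor on synthetic data; the variable names mirror the eight mix variables of this study, but the data and importances are illustrative, not the paper's results.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# The eight input variables used in this study (order here is assumed)
features = ["W/C ratio", "S/a ratio", "silica fume", "fly ash",
            "superplasticizer", "coarse aggregate size",
            "fiber volume fraction", "fiber aspect ratio"]
X, y = make_regression(n_samples=200, n_features=8, random_state=2)

gbr = GradientBoostingRegressor(random_state=2).fit(X, y)
# feature_importances_ are nonnegative and sum to 1 across the inputs
for name, imp in sorted(zip(features, gbr.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:24s} {imp:.3f}")
```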
Fig. 6. Correlation between variables and compressive strength.
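Correlation plots such as Fig. 6 can be reproduced with pandas. The snippet below uses a synthetic stand-in that merely encodes the trends stated in the text (strength falls with W/C ratio and rises with silica fume content); the column names, units, and coefficients are assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 100
df = pd.DataFrame({
    "W/C": rng.uniform(0.25, 0.60, n),
    "silica_fume": rng.uniform(0.0, 60.0, n),  # illustrative dosage range
})
# Encode the observed trends: strength falls with W/C, rises with silica fume
df["fc"] = (120.0 - 100.0 * df["W/C"] + 0.3 * df["silica_fume"]
            + rng.normal(0.0, 3.0, n))

# Pairwise correlation of each variable with compressive strength
print(df.corr()["fc"].round(2))
```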
et al. [15] discovered an increase in the flexural strength of SFRC with lower W/C ratios. These findings are consistent with those of the machine learning predictions in this study.

Table 2
Results of machine learning algorithms for flexural strength.

     Model                           RMSE     MAE
 1   Gradient Boosting Regressor     1.5111   1.1841
 2   XGBoost Regressor               1.6284   1.2516
 3   Random Forest Regressor         1.8356   1.3015
 4   AdaBoost Regressor              2.0743   1.6950
 5   Decision Tree Regressor         2.2004   1.7884
 6   K Nearest Neighbors (KNN)       2.4263   1.6556
 7   Linear Regressor                2.5332   1.7968
 8   Ridge Regressor                 2.6485   1.9486
 9   Support Vector Regressor        2.8167   2.0764
10   Lasso Regressor                 3.3478   2.6912
11   Multi Layer Perceptron (MLP)    3.4133   2.7871

Both the compressive and flexural strengths were reasonably predicted by the gradient boost, XGBoost, AdaBoost, decision tree, and random forest regressor algorithms. In contrast, the performance of the KNN, linear, ridge, support vector, lasso, and MLP algorithms was relatively poor. The ridge, lasso, and support vector regressors exhibited worse performance than even the linear regressor because they impose more regularization than the linear regressor.

In addition, the analysis of the feature importance of the top five models indicated that silica fume and W/C ratio were the most
models indicated that silica fume and W/C ratio were the most
9
Min-Chang Kang, Doo-Yeol Yoo and R. Gupta Construction and Building Materials 266 (2021) 121117

Fig. 7. Comparison of flexural strength predictions with various algorithms.
important mix variables in the prediction of compressive strength and that silica fume and fiber volume fraction were the most important variables in the prediction of flexural strength. However, the MLP, which has been widely used in previous studies, yielded poor predictions of the mechanical properties of SFRC. This is due to the difficulty in configuring each neuron and layer owing to the lack of sufficient data, and to the early termination of learning to prevent overfitting.

Fig. 10 shows the normalized cumulative probability distributions of the errors for a comparative analysis of the performance among the compressive and flexural strength prediction models. In both the compressive and flexural strength graphs, the error distributions of the boosting-based and tree-based models were closer to zero, whereas the errors of the remaining techniques were more widely distributed. In the flexural strength graph, however, there was less deviation than in the compressive strength graph because the flexural strength was always lower than the compressive strength. Because of the difficulty in making a direct comparison with the normalized cumulative probability distributions, the MAPE values are compared in Fig. 11 to directly compare the predictive performances for the compressive and flexural strengths. Both the compressive and flexural strength analyses showed that the XGBoost regressor and gradient boost regressor models performed better than the other models, whereas the linear regressor


Fig. 8. Feature importance of flexural strength.

Fig. 9. Correlation between variables and flexural strength.

and MLP models provided the highest MAPE values, indicating the least accuracy. Consequently, the XGBoost and gradient boost regressors can be recommended as the most appropriate machine-learning algorithms for predicting the compressive and flexural strengths of SFRC.

5. Conclusion

Both the compressive and flexural strengths of SFRC were well predicted using the machine learning algorithms employing gradient boost regression, XGBoost regression, decision tree regression, and random forest regression, whereas relatively poor performance was found from linear regression, ridge regression, SVR, lasso regression, and MLP. The models using tree-based and boosting techniques exhibited fairly good results because they split the data according to the characteristics of each variable to form nodes and leaves, allowing them to gradually achieve better performance through weak learners.

Based on the comparisons of the strengths and mix variables, it was found that the W/C ratio and silica fume were the most important variables in predicting compressive strength. For the flexural strength prediction, silica fume content and fiber volume fraction


Fig. 10. Normalized cumulative probability distribution of compressive and flexural strength prediction errors.
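A normalized cumulative probability distribution of prediction errors, as plotted in Fig. 10, is simply the empirical CDF of the (absolute) errors. The sketch below shows the construction on synthetic errors; the error values themselves are illustrative stand-ins.

```python
import numpy as np

errors = np.random.default_rng(4).normal(0.0, 2.0, 50)  # stand-in errors
sorted_err = np.sort(np.abs(errors))                    # sort |error| ascending
cum_prob = np.arange(1, sorted_err.size + 1) / sorted_err.size

# cum_prob[i] = fraction of samples with |error| <= sorted_err[i]; ends at 1.0
print(f"median |error| ~ {sorted_err[sorted_err.size // 2]:.3f}")
```

Plotting `cum_prob` against `sorted_err` for each model gives the curves compared in Fig. 10: curves that rise toward 1.0 at smaller error values indicate models whose errors concentrate near zero.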

Fig. 11. MAPE comparison of prediction models.
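MAPE, the metric compared in Fig. 11, expresses the mean absolute error as a percentage of the measured values; a small worked example:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Two measured strengths, each predicted with a 25% absolute error
print(mape([4.0, 8.0], [3.0, 6.0]))  # -> 25.0
```

Because MAPE is scale-free, it allows the compressive and flexural strength models to be compared directly even though flexural strengths are always the smaller of the two.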

played the most important roles, which is consistent with common understanding. XGBoost and gradient boost regressors can thus be selected as the most appropriate machine-learning algorithms for predicting both the compressive and flexural strengths of SFRC.

CRediT authorship contribution statement

Min-Chang Kang: Data curation, Formal analysis, Writing - original draft. Doo-Yeol Yoo: Conceptualization, Writing - review & editing, Supervision, Funding acquisition. Rishi Gupta: Formal analysis, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2017R1C1B2007589).

References

[1] Ş. Yazıcı, G. İnan, V. Tabak, Effect of aspect ratio and volume fraction of steel fiber on the mechanical properties of SFRC, Constr. Build. Mater. 21 (6) (2007) 1250–1253.
[2] M. Nili, A. Azarioon, A. Danesh, A. Deihimi, Experimental study and modeling of fiber volume effects on frost resistance of fiber reinforced concrete, International Journal of Civil Engineering 16 (3) (2018) 263–272.
[3] W. Nguyen, J.F. Duncan, G. Jen, C.P. Ostertag, Influence of matrix cracking and hybrid fiber reinforcement on the corrosion initiation and propagation behaviors of reinforced concrete, Corros. Sci. 140 (2018) 168–181.
[4] B.A. Young, A. Hall, L. Pilon, P. Gupta, G. Sant, Can the compressive strength of concrete be estimated from knowledge of the mixture proportions?: New insights from statistical analysis and machine learning methods, Cem. Concr. Res. 115 (2019) 379–388.


[5] K.O. Akande, T.O. Owolabi, S. Twaha, S.O. Olatunji, Performance comparison of SVM and ANN in predicting compressive strength of concrete, IOSR Journal of Computer Engineering 16 (5) (2014) 88–94.
[6] J.S. Chou, C.F. Tsai, A.D. Pham, Y.H. Lu, Machine learning in concrete strength simulations: Multi-nation data analytics, Constr. Build. Mater. 73 (2014) 771–780.
[7] J. Duan, P.G. Asteris, H. Nguyen, X.N. Bui, H. Moayedi, A novel artificial intelligence technique to predict compressive strength of recycled aggregate concrete using ICA-XGBoost model, Engineering with Computers (2020) 1–18.
[8] S.M. Gupta, Support vector machines based modelling of concrete strength, World Academy of Science, Engineering and Technology 36 (2007) 305–311.
[9] J.S. Chou, A.D. Pham, Enhanced artificial intelligence for ensemble approach to predicting high performance concrete compressive strength, Constr. Build. Mater. 49 (2013) 554–563.
[10] C. Deepa, K. SathiyaKumari, V.P. Sudha, Prediction of the compressive strength of high performance concrete mix using tree based modeling, International Journal of Computer Applications 6 (5) (2010) 18–24.
[11] H.I. Erdal, Two-level and hybrid ensembles of decision trees for high performance concrete compressive strength prediction, Eng. Appl. Artif. Intell. 26 (7) (2013) 1689–1697.
[12] W. Dong, Y. Huang, B. Lehane, G. Ma, XGBoost algorithm-based prediction of concrete electrical resistivity for structural health monitoring, Autom. Constr. 114 (2020) 103155.
[13] A. Behnood, E.M. Golafshani, Machine learning study of the mechanical properties of concretes containing waste foundry sand, Constr. Build. Mater. 243 (2020) 118152.
[14] D.V. Soulioti, N.M. Barkoula, A. Paipetis, T.E. Matikas, Effects of fibre geometry and volume fraction on the flexural behaviour of steel-fibre reinforced concrete, Strain 47 (2011) e535–e541.
[15] D.Y. Yoo, Y.S. Yoon, N. Banthia, Flexural response of steel-fiber-reinforced concrete beams: Effects of strength, fiber content, and strain-rate, Cem. Concr. Compos. 64 (2015) 84–92.
[16] S. Jang, H. Yun, Effects of curing age and fiber volume fraction on flexural behavior of high-strength steel fiber-reinforced concrete, Journal of the Korean Society of Hazard Mitigation 16 (4) (2016) 15–21.
[17] J.H. Lee, B. Cho, E. Choi, Flexural capacity of fiber reinforced concrete with a consideration of concrete strength and fiber content, Constr. Build. Mater. 138 (2017) 222–231.
[18] F. Köksal, F. Altun, İ. Yiğit, Y. Şahin, Combined effect of silica fume and steel fiber on the mechanical properties of high strength concretes, Constr. Build. Mater. 22 (8) (2008) 1874–1880.
[19] R. Kohavi, A study of cross-validation and bootstrap for accuracy estimation and model selection, in: IJCAI, Vol. 14 (2), 1995, pp. 1137–1145.
[20] E.S. Yoon, S.B. Park, An experimental study on the mechanical properties and long-term deformations of high-strength steel fiber reinforced concrete, Journal of the Korean Society of Civil Engineers 26 (2A) (2006) 401–409.
[21] W. Abbass, M.I. Khan, S. Mourad, Evaluation of mechanical properties of steel fiber reinforced concrete with different strengths of concrete, Constr. Build. Mater. 168 (2018) 556–569.
[22] D.Y. Yoo, Y.S. Yoon, N. Banthia, Predicting the post-cracking behavior of normal- and high-strength steel-fiber-reinforced concrete beams, Constr. Build. Mater. 93 (2015) 477–485.
[23] H.H. Lee, H.J. Lee, Characteristic strength and deformation of SFRC considering steel fiber factor and volume fraction, Journal of the Korea Concrete Institute 16 (6) (2004) 759–766.
[24] Y.L. Kim, D.S. Park, C.H. Seo, Variations of material characteristics of high-strength concrete according to increase of steel fiber volume, Journal of the Architectural Institute of Korea (2005) 95–101.
[25] Y.H. Oh, Evaluation of flexural strength for normal and high strength concrete with hooked steel fibers, Journal of the Korea Concrete Institute 20 (4) (2008) 531–539.
[26] P.S. Song, S. Hwang, Mechanical properties of high-strength steel fiber-reinforced concrete, Constr. Build. Mater. 18 (9) (2004) 669–673.
[27] S.J. Jang, H.D. Yun, Combined effects of steel fiber and coarse aggregate size on the compressive and flexural toughness of high-strength concrete, Compos. Struct. 185 (2018) 203–211.
[28] K.M. Aldossari, W.A. Elsaigh, M.J. Shannag, Effect of steel fibers on flexural behavior of normal and high strength concrete, International Journal of Civil and Environmental Engineering 8 (1) (2014) 22–26.
[29] D.O. Ku, S.D. Kim, H.S. Kim, K.K. Choi, Flexural performance characteristics of amorphous steel fiber-reinforced concrete, Journal of the Korea Concrete Institute 26 (4) (2014) 483–489.
[30] J. Thomas, A. Ramaswamy, Mechanical properties of steel fiber-reinforced concrete, J. Mater. Civ. Eng. 19 (5) (2007) 385–392.
[31] A. Sivakumar, M. Santhanam, Mechanical properties of high strength concrete reinforced with metallic and non-metallic fibres, Cem. Concr. Compos. 29 (8) (2007) 603–608.
[32] V. Afroughsabet, T. Ozbakkaloglu, Mechanical and durability properties of high-strength concrete containing steel and polypropylene fibers, Constr. Build. Mater. 94 (2015) 73–82.
[33] C.D. Atiş, O. Karahan, Properties of steel fiber reinforced fly ash concrete, Constr. Build. Mater. 23 (1) (2009) 392–399.
[34] N.N. Sarbini, I.S. Ibrahim, A.A. Saim, Enhancement on strength properties of steel fibre reinforced concrete, in: EACEF-International Conference of Civil Engineering, Vol. 1, 2011, pp. 038-038.
[35] A.F. KM, S. Varghese, Behavioral study of steel fiber and polypropylene fiber reinforced concrete, International Journal of Research in Engineering and Technology 2 (10) (2014) 17–24.
[36] V. Mallikarjuna Reddy, M.V. Seshagiri Rao, P. Srilakshmi, B. Sateesh Kumar, Effect of w/c ratio on workability and mechanical properties of high strength self compacting concrete (M70 grade), International Journal of Engineering Research and Development 7 (1) (2013) 06–13.
[37] M. Nili, V. Afroughsabet, Combined effect of silica fume and steel fibers on the impact resistance and mechanical properties of concrete, Int. J. Impact Eng. 37 (8) (2010) 879–886.
[38] J.J. Kim, D.J. Kim, S.T. Kang, J.H. Lee, Influence of sand to coarse aggregate ratio on the interfacial bond strength of steel fibers in concrete for nuclear power plant, Nucl. Eng. Des. 252 (2012) 1–10.
[39] M. Chitlange, P. Pajgade, Strength appraisal of artificial sand as fine aggregate in SFRC, ARPN Journal of Engineering and Applied Sciences 5 (10) (2010) 34–38.
[40] J. Han, M. Zhao, J. Chen, X. Lan, Effects of steel fiber length and coarse aggregate maximum size on mechanical properties of steel fiber reinforced concrete, Constr. Build. Mater. 209 (2019) 577–591.
[41] G.A. Rao, B.R. Prasad, Fracture energy and softening behavior of high-strength concrete, Cem. Concr. Res. 32 (2) (2002) 247–252.
[42] M. Khan, M. Ali, Effect of super plasticizer on the properties of medium strength concrete prepared with coconut fiber, Constr. Build. Mater. 182 (2018) 703–715.
[43] H.Y. Aruntaş, S. Cemalgil, O. Şimşek, G. Durmuş, M. Erdal, Effects of super plasticizer and curing conditions on properties of concrete with and without fiber, Mater. Lett. 62 (19) (2008) 3441–3443.
[44] M. Nili, V. Afroughsabet, Property assessment of steel-fibre reinforced concrete made with silica fume, Constr. Build. Mater. 28 (1) (2012) 664–669.
[45] T. Oey, S. Jones, J.W. Bullard, G. Sant, Machine learning can predict setting behavior and strength evolution of hydrating cement systems, J. Am. Ceram. Soc. 103 (1) (2020) 480–490.
[46] D.J. Leinweber, Stupid data miner tricks: overfitting the S&P 500, The Journal of Investing 16 (1) (2007) 15–22.
[47] A. Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O'Reilly Media, 2019.
[48] K. Elevado, J. Galupino, R. Gallardo, Compressive strength modelling of concrete mixed with fly ash and waste ceramics using K-nearest neighbor algorithm, International Journal of Geomate 15 (48) (2018) 169–174.
[49] M.A. DeRousseau, E. Laftchiev, J.R. Kasprzyk, B. Rajagopalan, W.V. Srubar III, A comparison of machine learning methods for predicting the compressive strength of field-placed concrete, Constr. Build. Mater. 228 (2019) 116661.
[50] L. Breiman, Bagging predictors, Machine Learning 24 (2) (1996) 123–140.
[51] D.C. Feng, Z.T. Liu, X.D. Wang, Y. Chen, J.Q. Chang, D.F. Wei, Z.M. Jiang, Machine learning-based compressive strength prediction for concrete: An adaptive boosting approach, Constr. Build. Mater. 230 (2020) 117000.
[52] V. Vapnik, S.E. Golowich, A.J. Smola, Support vector method for function approximation, regression estimation and signal processing, in: Advances in Neural Information Processing Systems, 1997, pp. 281–287.
[53] P.S.M. Thilakarathna, S. Seo, K.K. Baduge, H. Lee, P. Mendis, G. Foliente, Embodied carbon analysis and benchmarking emissions of high and very-high strength concrete using machine learning algorithms, J. Cleaner Prod. (2020) 121281.
[54] I.N. Da Silva, D.H. Spatti, R.A. Flauzino, L.H.B. Liboni, S.F. dos Reis Alves, Artificial Neural Networks, Springer International Publishing, Cham, 2017, p. 39.
[55] J.S. Chou, C.F. Tsai, Concrete compressive strength analysis using a combined classification and regression technique, Autom. Constr. 24 (2012) 52–60.
[56] T. Nguyen, A. Kashani, T. Ngo, S. Bordas, Deep neural network with high-order neuron for the prediction of foamed concrete strength, Comput.-Aided Civ. Infrastruct. Eng. 34 (4) (2019) 316–332.
[57] I.B. Topcu, M. Sarıdemir, Prediction of compressive strength of concrete containing fly ash using artificial neural networks and fuzzy logic, Comput. Mater. Sci. 41 (3) (2008) 305–311.
[58] H.H. Tan, K.H. Lim, Vanishing gradient mitigation with deep learning neural network optimization, IEEE, 2019, pp. 1–4.
[59] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, No. ICS-8506, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
[60] S.K. Al-Oraimi, R. Taha, H.F. Hassan, The effect of the mineralogy of coarse aggregate on the mechanical properties of high-strength concrete, Constr. Build. Mater. 20 (7) (2006) 499–503.
[61] L.S. Hsu, C.T. Hsu, Stress-strain behavior of steel-fiber high-strength concrete under compression, ACI Struct. J. 91 (4) (1994) 448–457.
[62] J.M. Yang, J.K. Kim, D.Y. Yoo, Flexural and shear behaviour of high-strength SFRC beams without stirrups, Mag. Concr. Res. 71 (10) (2019) 503–518.
[63] M. Gesoğlu, E. Güneyisi, R. Alzeebaree, K. Mermerdaş, Effect of silica fume and steel fiber on the mechanical properties of the concretes produced with cold bonded fly ash aggregates, Constr. Build. Mater. 40 (2013) 982–990.
