International Journal of Technology-Enabled Student Support Services

Volume 12 • Issue 1

Effects of Computer-Based Training in Computer Hardware Servicing on Students’ Academic Performance
Rex Perez Bringula, University of the East, Philippines (Corresponding Author)
https://orcid.org/0000-0002-1789-9601

John Vincent T. Canseco, University of the East, Philippines
Patricia Louise J. Durolfo, University of the East, Philippines
Lance Christian A. Villanueva, University of the East, Philippines
Gabriel M. Caraos, University of the East, Philippines

DOI: 10.4018/IJTESSS.317410

ABSTRACT

This study determined the effects of computer-based training in computer hardware servicing with a
pedagogical agent named “DAC: The Builder” on the academic performance of computing students.
Fifty-six university students (30 students in the control group, 26 students in the experimental group)
participated in a two-week experiment. The majority of the experimental group exhibited gaming
behavior but subsequently reduced it after DAC intervention. The data collected in this study showed
that the null hypothesis stating that there is no significant difference in the pretest and posttest scores
of the experimental group can be rejected. Moreover, the hands-on posttest scores of both groups
had significant differences. This study demonstrates that returning students to the lecture when they exhibit gaming-the-system behavior is an effective tool for discouraging this behavior. The use of the DAC is therefore recommended for students taking up computer hardware servicing. Implications and recommendations are also discussed.

Keywords
Computer Hardware Assembly, Gaming the System, Hardware Servicing Skill, Tutoring System

1. INTRODUCTION

Computer hardware servicing is a technical skill where students have to learn computer set building,
computer troubleshooting, software installation, system configuration, and computer maintenance (De
Jesus, 2019). From basic secondary school to computer-related courses in tertiary education, instruction in computer hardware servicing is a fundamental component of computer education (Hsu & Hwang,
2014). However, there are challenges to learning the course. The difficulty experienced by students
in assembling a computer is not only due to a lack of practice but also to insufficient assistance and
materials (Hwang et al., 2011). For example, to understand the functions of a motherboard, students
need to see a fully functional motherboard. The ideal teaching method for the subject is to allow
the students to use a functional motherboard. However, it would be highly impractical to dismantle a working computer to show the motherboard. Moreover, providing individualized feedback to all students would be very tedious and time-consuming (Botarleanu et al., 2018).
One way to address these issues is to employ computer-based training (CBT) software
(subsequently referred to as software) for a computer hardware servicing system (De Jesus, 2019).
However, prior work (e.g., De Jesus, 2019) did not include interventions when students game
the system (GTS) (a deliberate behavior to exploit the system to achieve correct responses rather than
learning the materials; Baker et al., 2008) and assistance from a pedagogical agent. To address these
gaps, this study was conceived. This study developed software for computer hardware servicing for
computing students (Information Technology, Computer Science, and Information Systems) with a
pedagogical agent capable of detecting the GTS. Specifically, the study aims to answer the following research questions (RQs):

1) What is the software utilization of the students in the experimental group in terms of the number of lectures taken, time spent on the hands-on activities, number of hands-on errors, time spent gaming the system, and lesson where GTS was observed?
2) What are the hardware servicing academic performances of the students in the control and experimental groups in terms of pretest scores, posttest scores, time spent on the hands-on activities, and number of hands-on errors?
3) Is there a significant difference between the academic performances of the students in terms of pretest scores, posttest scores, time spent on the hands-on exercises, and number of hands-on errors in the control and experimental groups?
The following null hypotheses were tested in this study:

H0a: There is no significant difference in the pretest scores of the experimental and control groups.
H0b: There is no significant difference in the posttest scores of the experimental and control groups.
H0c: There is no significant difference in the time spent on the hands-on activities of the experimental and control groups.
H0d: There is no significant difference in the number of hands-on errors committed by the experimental and control groups.
H0e: There is no significant difference in the pretest and posttest scores of the students in the control group.
H0f: There is no significant difference in the pretest and posttest scores of the students in the
experimental group.

2. LITERATURE REVIEW

2.1 Computer-Based Training


Computer-based training (CBT) is a methodology for providing systematic, structured learning
(Bedwell & Salas, 2010). Practitioners and students could have relied on CBT (Bedwell & Salas, 2010)
since it is an effective educational tool (Oduma et al., 2019). CBT is an evolving field. Researchers
in this field are continuously developing CBT software with the intention of improving the students’
academic performance. For instance, CBT was employed to learn languages. Ecalle et al. (2020)
used CBT programs to stimulate learning to read in French for new immigrant children. Two groups
of students used different CBT software programs. The first group had just started to learn French,
while the second group could already identify a few French words. The experiments showed that
there was a significant effect on phonemic awareness in the first group, while there was a significant
effect on word reading in the second group. CBT for language acquisition is also beneficial for the
older population. In a recent similar study, Klimova (2021) conducted a mini-review of the benefits
of CBT for foreign language training in healthy older people. Klimova (2021) disclosed that CBT
for foreign language acquisition was indeed helpful for older individuals.

CBT was also employed in mathematics learning. For example, Mousa and Molnár (2020)
determined whether CBT in math improves the inductive reasoning of 9- to 11-year-old children. Their
study found evidence to support the conclusion that the experimental group (those who underwent
CBT) had higher posttest scores than the control group. In a recent study, Zwart et al. (2021) utilized CBT to train nursing students in professional duties that included mathematical tasks
associated with medication processes. The CBT system included mathematical medication scenarios
and basic arithmetic exercises that could support mathematical medication learning. Data gathered
from 118 participants showed that the CBT improved the mathematical memorization of all students.
De Jesus (2019) conducted a similar study on computer hardware servicing. This is the only
study that is closely related to this current study. De Jesus (2019) developed a CBT named “Computer
Hardware Servicing and Maintenance Trainer” (CHSM Trainer). The CHSM Trainer reduced the
time spent practicing interfaces and troubleshooting. The software received a very satisfactory
subjective evaluation from the students. However, the software could neither detect gaming behaviors nor offer the functionalities of a pedagogical agent.

2.2 Pedagogical Agent and Feedback


Pedagogical agents (PA) are virtual characters that facilitate instruction (Bringula et al., 2018;
Lane & Schroeder, 2022). There is a growing body of research that reports the impact of PA on
students’ learning and behavior. There are studies that report both positive (Bringula et al., 2018;
Mohammadhasani et al., 2018) and inconclusive (Li et al., 2016) effects of PA. Nonetheless, a
recent systematic literature review concluded that PAs have a significant effect on students’ learning
(Martha & Santoso, 2019).
The ability of the PA to provide real-time and personalized feedback contributed to its
effectiveness. Feedback is a computer-generated message that could assist or correct a student during
a learning process (Bimba et al., 2017; Bringula et al., 2017). It may provide textual, gesture, voice,
or facial responses (Bringula et al., 2020; Dinçer & Doğanay, 2017; Kim & Baylor 2016). The study
of Dinçer and Doğanay (2017) utilized four different PAs with either audible or textual feedback in
teaching students about MS Excel. The four PAs were Tuna (with the appearance of a boy), Ada (a
grown-up female), Ali (a grown-up male), and Zipzip (a robot). Students could choose which PAs
and feedback they liked. It was revealed that designs with agents had positive effects on learners’
motivation, academic success, and cognitive load.
In a recent study, Bringula et al. (2020) investigated the impact of two versions of PAs of intelligent
tutoring systems on the mathematics performance of the students. The first version only provided
textual feedback and a neutral synthetic facial expression (SFE). The second version also provided
textual feedback but included other SFEs (happy, surprise, and sadness). Students who utilized the
second version had higher mathematics performance than those students who utilized the first version.

2.3 Gaming the System


Gaming the system (GTS) is a student’s deliberate attempt “to succeed in an educational task by
systematically taking advantage of properties and regularities in the system used to complete that
task, rather than by thinking through the material” (Baker, Mitrović, & Mathews, 2010, p. 267). One
form of GTS is guessing (Walonoski & Heffernan, 2006). Different strategies were employed in the
software to prevent guessing. These strategies include delaying hint requests (Price et al., 2017), using
a pedagogical agent to remind students not to game the software (Baker et al., 2006), allowing students to see
the gaming behaviors of other students (Verginis et al., 2011), providing textual feedback (Arroyo et
al., 2007; Bringula et al., 2018; Roll et al., 2007; Walonoski & Heffernan, 2006), using response time as an indicator of guessing (Guo et al., 2016), and informing the students that the software is aware of the students’ behavior (Nunes et al., 2016). Non-awarding of points was also suggested (Kraemer et al., 2012). All except the first strategy were found to reduce gaming.

Students exhibit GTS for various reasons. Some students exhibit GTS because they want to know
the reaction of the PA (Rodrigo et al., 2012). Other students were genuinely stuck on the activity
(Beck & Rodrigo, 2014). In a classroom setting, teachers or tutors provide interventions to help
students move forward with the lesson. One of these strategies is to repeat the lecture. For example,
Bringula et al. (2020) reported that videos were the preferred teaching materials since students
could repeat the lectures.

3. METHODOLOGY

3.1 Software Utilized


The students utilized computer-based training software accessible through a local area network. The
software has a pedagogical agent named “DAC (Disassemble and Construct): The Builder” (simply
referred to as DAC) capable of conducting hardware servicing training. The software had lessons and
assessments (e.g., hands-on activities, quizzes, and examinations). It covered two lessons (Desktop
Assembly and Disassembly and Troubleshooting) of the course syllabus.
The two lessons contained 16 lectures. The pedagogical agent delivered the lectures through text and images. The first lecture was about the parts of the system unit and their functions (Figure 1). Following the lecture, a 25-item randomized quiz related to the lecture was given to assess the student’s comprehension. The student had to score at least 13 points to proceed to the next lecture. If the score was not satisfactory, the student would repeat the lecture, and the next module would remain locked.
For the rest of the lectures, DAC taught the students how to assemble and troubleshoot computers.
At the end of every lecture, students took part in a hands-on activity. The students could take the
lessons and activities at their own pace.
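The lecture-gating rule is simple enough to sketch in code. The following is a minimal illustration of the pass/lock logic described above, assuming hypothetical names; the actual DAC implementation is not published with the paper.

```python
# Minimal sketch of the lecture-gating rule (hypothetical names; the
# actual DAC implementation is not published with the paper).
QUIZ_ITEMS = 25      # each post-lecture quiz draws 25 randomized items
PASSING_SCORE = 13   # minimum score needed to unlock the next lecture

def next_lecture_unlocked(quiz_score: int) -> bool:
    """Return True if the student may proceed to the next lecture."""
    if not 0 <= quiz_score <= QUIZ_ITEMS:
        raise ValueError("quiz score out of range")
    return quiz_score >= PASSING_SCORE

# A failing score keeps the next module locked and the lecture is repeated.
assert next_lecture_unlocked(13) is True
assert next_lecture_unlocked(12) is False
```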
DAC assisted the students through the lectures, activities, and examinations. DAC provided
textual feedback and displayed neutral and happy facial expressions. It can conduct tutorials/lectures,
provide hints, detect gaming, reprimand students, and recommend topics (Figure 1 and Figure 2).
If DAC detected gaming, it would prompt the students about their behavior and redirect them back
to the lesson. A student was deemed gaming the system when an individual made three consecutive
mistakes within 15 seconds. It was assumed that students exhibited GTS because they were stuck in the activity (Beck & Rodrigo, 2014). Students would only be allowed to retake the exercise after the lesson (Figure 3). This strategy was based on the study of Bringula et al. (2020).

Figure 1. DAC: The Builder conducting a lecture

Figure 2. DAC: The Builder assisting the student to build a virtual computer
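The detection rule above can be sketched in a few lines. This is an illustrative reconstruction of the stated rule (three consecutive mistakes within 15 seconds), not the actual DAC source code; the class and method names are hypothetical.

```python
from collections import deque

# Sketch of the stated GTS rule: flag a student when three consecutive
# mistakes fall within a 15-second window. Illustrative names only.
GTS_MISTAKES = 3
GTS_WINDOW_SECONDS = 15.0

class GtsDetector:
    def __init__(self):
        # timestamps of the most recent consecutive mistakes
        self._mistakes = deque(maxlen=GTS_MISTAKES)

    def record_attempt(self, timestamp: float, correct: bool) -> bool:
        """Record one hands-on attempt; return True when GTS is detected."""
        if correct:
            self._mistakes.clear()  # a correct attempt breaks the consecutive run
            return False
        self._mistakes.append(timestamp)
        return (len(self._mistakes) == GTS_MISTAKES
                and self._mistakes[-1] - self._mistakes[0] <= GTS_WINDOW_SECONDS)

# Example: three wrong placements two seconds apart trigger the redirect.
detector = GtsDetector()
assert [detector.record_attempt(t, correct=False) for t in (0.0, 2.0, 4.0)] == [False, False, True]
```

On detection, the system routes the student back to the lesson rather than merely scolding them, which is the pedagogical point of this intervention.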

3.2 Research Design, the Subjects, Determination of Sample Size, Sampling Design, and Sampling Technique

This experimental study utilized the quasi-experimental pretest-posttest control group design (Figure
4). Computing students from one university in the Philippines participated in the study. There were
two classes in the hardware servicing course. Only one teacher handled the two classes. All students
in the two classes participated in the study. The two classes had a total of 73 students. However, only
56 students completed the experiment.

Figure 3. DAC re-directs the student to the tutorial after it detected GTS

A class section was randomly assigned (R) either to an experimental or control group. The
experimental group consisted of 26 students, while the control group consisted of 30 students (Figure
4). The average age of the participants in both groups was 20 years. The majority of the participants
were male in both groups: 22 male participants in the experimental group and 19 in the control
group. Most of the students who participated in the experimental group were third-year students (n
= 19), while second-year students (n = 15) were in the control group.
Both sets of students took a pretest (O) before the intervention period (X). The whole experiment
lasted for two weeks. The intervention period lasted for four non-consecutive days (i.e., two class
sessions within a week, and each session lasted for 1.5 hours). Afterward, a posttest (O) was
administered to both groups.

3.3 Research Instrument and Data Gathering Procedure


The academic performances of the students were measured through objective tests and hands-on activities.
The objective part was composed of the pretest and posttest. Both tests were incorporated into the system
to facilitate the randomization of items. Initially, the tests included a 39-item isomorphic multiple-choice
test. The content of the tests was about the general parts of the computer and the installation process.
The teacher of the course helped the researchers develop the items for the tests. Then, the tests were pilot-tested
with 21 students who were not part of the study. The pilot testers were students of another Information
Technology course where hardware servicing was embedded in the syllabus. Unclear instructions and
vague sentences were deleted from the tests. The final tests contained 38 items.
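For illustration, per-student item randomization of this kind amounts to a seeded shuffle of the item bank. The sketch below assumes 38 placeholder item IDs; it is not the authors' code.

```python
import random

# Illustrative sketch: serve the 38 test items in a per-student random
# order (placeholder item IDs; not the authors' code).
ITEM_BANK = [f"item_{i:02d}" for i in range(1, 39)]  # 38 items

def randomized_test(seed=None):
    """Return all 38 items in a randomized order."""
    rng = random.Random(seed)   # a seed makes one student's order reproducible
    items = list(ITEM_BANK)
    rng.shuffle(items)
    return items

print(randomized_test(seed=1)[:5])  # e.g., the first five items one student sees
```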
The students in both groups had already taken the initial topic (i.e., the theoretical part) of the
syllabus when the study was conducted. Both groups were given a pretest before the intervention.
The pretest was conducted a day before the intervention period. Afterward, the students in the
experimental group utilized the software during their class hours. This was the intervention period. Each intervention session lasted for 1.5 hours. Students could repeat the lectures as desired. For the control
group, the students were also taught in the laboratory (i.e., the laboratory served as the classroom)
for the same duration. Students in the control group were also taught the same course contents for
two weeks. After the teacher’s lecture, students took part in hands-on activities and quizzes in DAC.
Finally, the students in both groups took the posttest. The posttest was given on the last day of the
experiment. Students were given an hour to finish each test.
The second part of the research instrument entailed hands-on activities. In these activities,
students were asked to set up a virtual computer unit (Figure 3). Students had to complete the set-up within an hour. There were no points associated with this activity. Instead, students’ completion times and the number of errors they committed were recorded. If DAC detected that a student was gaming the system, it informed the student of the gaming behavior and redirected them to the
lesson (Figure 3). The number of lectures taken, time spent gaming the system, and activities where
GTS was observed were also logged in the system.

Figure 4. Randomized Pretest-Posttest Control Group Design

3.4 Statistical Treatment of Data


The study utilized descriptive statistics such as sums, means, and standard deviations. Mann-Whitney U and Wilcoxon signed-rank tests were employed to determine significant differences in the hardware servicing performances of the participants in the control and experimental groups. A 0.05 level of significance was adopted for all tests.
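For readers who want to reproduce this treatment, the two tests map directly onto SciPy's scipy.stats.mannwhitneyu (independent groups) and scipy.stats.wilcoxon (paired pretest-posttest scores). The sketch below uses made-up placeholder scores, since the per-student raw data are not reported in the paper.

```python
from scipy.stats import mannwhitneyu, wilcoxon

ALPHA = 0.05  # significance level adopted in the study

# Between-group comparisons (H0a-H0d): independent samples -> Mann-Whitney U.
control_posttest = [24, 31, 28, 27, 30, 26, 29, 25]       # placeholder scores
experimental_posttest = [29, 27, 33, 26, 31, 28, 30, 27]  # placeholder scores
u_stat, p_between = mannwhitneyu(experimental_posttest, control_posttest,
                                 alternative="two-sided")

# Within-group comparisons (H0e, H0f): paired scores -> Wilcoxon signed-rank.
pretest = [22, 25, 24, 27, 23, 26, 21, 28]  # placeholder paired scores
posttest = [29, 27, 33, 26, 31, 28, 30, 27]
w_stat, p_within = wilcoxon(pretest, posttest)

print(f"Mann-Whitney U = {u_stat:.1f}; reject H0 at 0.05: {p_between < ALPHA}")
print(f"Wilcoxon W = {w_stat:.1f}; reject H0 at 0.05: {p_within < ALPHA}")
```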

4. RESULTS

4.1 RQ1: Software Utilization of the Students


Table 1 shows the students’ software utilization. The experimental group took part in more hands-on
exercises than the control group. Despite taking more hands-on exercises, the experimental group
spent less time completing the hands-on exercises than the control group. The experimental and
control groups took, on average, 9.12 and 13.62 minutes, respectively, to complete a hands-on activity. This finding
conforms to the study of De Jesus (2019), which found that students who utilized the software tended
to finish the activities quickly. Furthermore, the former had fewer errors in the hands-on exercises.
The majority of the students in the experimental group took 13 lectures.
During the activities, students in the experimental group exhibited GTS within 4.03 seconds on average. This means that the students were committing three mistakes within 4.03 seconds. The
majority of the GTS was observed during the 13th activity. Initially, more than 50% (n = 15) of the
students exhibited GTS. After learning that they would return to the lecture, the number of students
who exhibited GTS was reduced to 4. However, three students exhibited gaming three times.

Table 1. Software Utilization

Software Utilization                     Experimental Group (n = 26)     Control Group (n = 30)
Average Number of Hands-on Completed     11                              7
Average Time Spent on Hands-on           546.97 seconds (9.12 minutes)   817.30 seconds (13.62 minutes)
Number of Hands-on Errors                69.88                           102.6
Average Time Exhibited GTS               4.03 seconds                    -
Activity where GTS was observed          Activity 13                     -
Average Number of Activities Taken       13                              -
First time gaming                        15 students                     -
Second time gaming                       4 students                      -
Third time gaming                        3 students                      -

4.2 RQ2: Hardware Servicing Performances and RQ3: Test of Difference Between Means of Hardware Servicing Performances

Table 2 shows the Mann-Whitney U test on the hardware servicing performance of the students between the two groups. The experimental group (M = 25.8) had lower pretest scores than the control group (M = 27). The control group’s mean and sum of ranks of pretest scores (M = 30.35; s = 910.50) are higher than the experimental group’s (M = 26.37; s = 685.50). However, the differences in the ranks are not significant (U = 334.5; p > 0.05). The posttest scores and mean ranks between the groups were almost equal. As can be expected, the difference in the ranks of the posttest scores was not significant (U = 372.0, p > 0.05). The first (H0a) and second (H0b) null hypotheses are both accepted.

Table 2. Mann-Whitney U Test on the Hardware Servicing Performance of the Students in the Experimental (n = 26) and Control (n = 30) Groups

Test                      Group          Mean        Mean Rank (M)   Sum of Ranks (s)   U        p-value
Pretest                   Experimental   25.8        26.37           685.50             334.5    0.361
                          Control        27.0        30.35           910.50
Posttest                  Experimental   27.9        27.81           723.00             372.0    0.766
                          Control        28.0        29.10           873.00
Time Spent on Hands-on    Experimental   9.12 min    25.50           585.00             234.0    0.010
Activities                Control        13.62 min   33.70           1011.00
Hands-on Errors           Experimental   69.88       23.00           598.00             247.00   0.019
                          Control        102.6       33.27           998.00
The Mann-Whitney U test confirmed that there is a significant difference between the software
utilization of the control and experimental groups. The experimental group spent less time (U =
234.00, p < 0.05) and had fewer errors committed during the activities (U = 247.00, p < 0.05). Hence,
the third (H0c) and fourth (H0d) null hypotheses are both rejected.
Meanwhile, the Wilcoxon signed-rank tests were conducted on the pretest and posttest scores
of the groups (Table 3). In the control group, there is almost an equal number of negative (n = 14)
and positive (n = 15) ranks. Moreover, the mean negative rank is 12.71 and the mean positive rank
is 17.13. The sum of the positive ranks (s = 257.00) is higher than the sum of the negative ranks (s
= 178.00). However, the difference between the mean ranks of the pretest and posttest scores in the
control group is not significant (Z = 0.855, p > 0.05). Therefore, the null hypothesis stating that (H0e)
there is no significant difference between the pretest and posttest scores of the control group is accepted.
In the experimental group, there are more positive ranks (n = 20) than negative ranks (n = 5). Consequently, the mean positive rank (M = 13.98) is higher than the mean negative rank (M = 9.10). The sum of the ranks further shows the discrepancy between the positive (s = 279.50) and negative (s = 45.50) ranks. The difference between the mean ranks of the scores was found to be significant (Z = -3.194, p < 0.05). Hence, the posttest scores are higher than the pretest scores of the experimental group. Consequently, the null hypothesis stating that (H0f) there is no significant difference between the pretest and posttest scores of the experimental group is rejected.

Table 3. Wilcoxon Signed-Rank Tests on the Hardware Servicing Performance of the Students between their Pretest and Posttest Scores

Test                  Rank       n    Mean Rank (M)   Sum of Ranks (s)   Z        p-value
Post_Con – Pre_Con    Negative   14   12.71           178.00             0.855    0.392
                      Positive   15   17.13           257.00
                      Ties       1
Post_Exp – Pre_Exp    Negative   5    9.10            45.50              -3.194   0.001
                      Positive   20   13.98           279.50
                      Ties       1

5. DISCUSSION

This study determined the impact of a CBT on the computer hardware servicing skills of college
students. Towards this goal, the academic performances of the students in the experimental and
control groups were compared. Moreover, the software utilization of the experimental group was
investigated. The experimental group had better software utilization than the control group in terms
of the average number of hands-on activities completed, average time spent, and number of hands-
on errors committed. The experimental group was able to cover more hands-on activities than the
control group. The experimental group also demonstrated its knowledge more accurately and quickly than
the control group. These findings agree with the study by De Jesus (2019). The favorable software
utilization of the experimental group can be attributed to the students’ familiarity with the software. The software is indeed able to assist the students’ learning of computer hardware servicing at their own pace. Nevertheless, despite the lack of familiarity with the system, the control group was able to
complete seven hands-on activities.
Consistent with the literature, students in this study also exhibited GTS behavior. In the context
of this study, students attempted to fit the parts of a computer into the different computer slots.
GTS was exhibited within 4.03 seconds on average; that is, students responded to the activities passively. The majority of the GTS was logged in the 13th activity. Perhaps students were attempting to finish all the lessons quickly.
In the first case, more than half of the students displayed GTS. After the intervention of DAC, there was a significant reduction in GTS. Therefore, the combination of returning the students to the lecture, textual feedback, and a neutral facial expression is an effective way to prevent this behavior. However, there were still 3 students who persisted in their GTS behavior. It is unclear why these students continued this behavior despite taking more time to re-learn the lesson. Future research is necessary to shed light on this phenomenon.
The Mann-Whitney U test on the pretest scores of both groups showed no significant difference. Thus, the prior knowledge of the students in the course was similar. In other words, when the study was conducted, they had the same levels of understanding of the lesson. At the end of the intervention period, the differences in their posttest scores were not statistically significant. This means that students in the traditional lecture setting and the experimental setting did not outperform each other.
The Wilcoxon signed-rank tests provided another insight into the groups’ academic performance in hardware servicing. For the control group, the ranks of the pretest and posttest scores were not statistically different. This finding suggests that traditional lectures could not increase the students’ scores to a large extent. Meanwhile, the experimental group had a different result. Students who used the software significantly improved their scores. However, as shown in the previous statistical test, the scores of the students in the experimental group did not exceed the scores of the control group.

6. THEORETICAL AND PRACTICAL CONTRIBUTIONS

This study contributed to the existing threads of discussion on preventing GTS and on the field of
CBT in general. In prior studies, preventing GTS was focused on reprimanding or reminding students
about their usage behavior (Arroyo et al., 2007; Baker et al., 2006; Nunes et al., 2016; Roll et al.,
2007; Walonoski & Heffernan, 2006). While these strategies have been proven effective in reducing
GTS, they may lack pedagogical value. In this current study, the response of the PA was based on the
assumption that students exhibit GTS because of a lack of skills. Consistent with the study of Chen
et al. (2012), students need to repeat the lesson as a more definitive course of action. The gaming
behavior intervention employed in this study, as shown in the findings, significantly reduced the
number of students who exhibited this behavior.
Furthermore, this study offers practical implications. Considering the positive outcomes of the
experiment, the use of the software is encouraged. The software may also be utilized as supplemental
material for students. Specifically, at-risk, struggling, or absentee students may use the software to
catch up with the course content. CBT researchers may also consider redirecting the students to their
lessons as a way to deter GTS.

7. CONCLUSION, LIMITATIONS, AND RECOMMENDATIONS

This study determined the students’ utilization of a CBT software named “DAC” and its impact
on their academic performance. The experimental group had a more favorable use of the software
compared to the control group. However, this is mainly attributed to familiarity with the system. The
experimental group exhibited GTS. This behavior was significantly reduced after DAC intervention.
Hence, redirecting the students to retake the lesson is an effective way to deter GTS.
The study did not find evidence to reject the first, second, and fifth null hypotheses. However,
the third, fourth, and sixth hypotheses were rejected. Three conclusions can be derived from this
finding. First, students can learn both in traditional and experimental settings. Second, the students
in both conditions could not outperform each other. In other words, after each intervention, it can
be expected that their scores will be the same. Lastly, the software can assist students in catching up
with their peers.
Despite the promising results, there are several limitations in the study that are worth further
investigation. Every intervention has limitations, and the strategy employed in this study is no
exception. It is still unclear how students will avoid being detected in their GTS behavior. Incorporation
of other intervention strategies in the system is suggested to determine the relative impact of these
strategies. Lastly, the software only covered the hardware servicing of desktop computers. Thus,
laptop servicing may be incorporated into future research.


REFERENCES

Arroyo, I., Ferguson, K., Johns, J., Dragon, T., Meheranian, H., Fisher, D., & Woolf, B. P. (2007). Repairing disengagement with non-invasive interventions. Artificial Intelligence in Education, 2007, 195–202.
Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why students engage in “gaming
the system” behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185–224.
Baker, R. S., Corbett, A. T., Koedinger, K. R., Evenson, S., Roll, I., Wagner, A. Z., & Beck, J. E. (2006). Adapting to when students game an intelligent tutoring system. In International conference on intelligent tutoring systems (pp. 392-401). Springer. doi:10.1007/11774303_39
Baker, R. S., Mitrović, A., & Mathews, M. (2010). Detecting gaming the system in constraint-based tutors.
In International Conference on User Modeling, Adaptation, and Personalization (pp. 267-278). Springer.
doi:10.1007/978-3-642-13470-8_25
Beck, J., & Rodrigo, M. M. T. (2014). Understanding wheel spinning in the context of affective factors. In
International conference on intelligent tutoring systems (pp. 162-167). Springer. doi:10.1007/978-3-319-07221-0_20
Bedwell, W. L., & Salas, E. (2010). Computer‐based training: Capitalizing on lessons learned. International
Journal of Training and Development, 14(3), 239–249.
Bimba, A. T., Idris, N., Al-Hunaiyyan, A., Mahmud, R. B., & Shuib, N. L. B. M. (2017). Adaptive
feedback in computer-based learning environments: A review. Adaptive Behavior, 25(5), 217–234.
doi:10.1177/1059712317727590
Botarleanu, R. M., Dascalu, M., Sirbu, M. D., Crossley, S. A., & Trausan-Matu, S. (2018). ReadME–Generating
personalized feedback for essay writing using the ReaderBench framework. In H. Knoche, E. Popescu, & A.
Cartelli (Eds), Conference on Smart Learning Ecosystems and Regional Development (pp. 133-145). Springer.
Bringula, R., De Leon, J. S., Rayala, K. J., Pascual, B. A., & Sendino, K. (2017). Effects of different types of feedback
of a mobile-assisted learning application and motivation towards mathematics learning on students’ mathematics
performance. International Journal of Web Information Systems, 13(3), 241–259. doi:10.1108/IJWIS-03-2017-0017
Bringula, R., Fosgate, I. C., Yorobe, J. L., & Garcia, N. P. (2020). Exploring the Sequences of Synthetic Facial
Expressions and Type of Problems Solved in a Personal Instructing Agent using Lag Sequential Analysis. In
2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 764-
769). IEEE. doi:10.1109/TALE48869.2020.9368492
Bringula, R. P., Fosgate, I. C. O. Jr, Garcia, N. P. R., & Yorobe, J. L. M. (2018). Effects of pedagogical agents
on students’ mathematics performance: A comparison between two versions. Journal of Educational Computing
Research, 56(5), 701–722. doi:10.1177/0735633117722494
De Jesus, A. N. B. (2019). Computer hardware servicing and maintenance trainer. https://ssrn.com/abstract=3448885
Dinçer, S., & Doğanay, A. (2017). The effects of multiple-pedagogical agents on learners’ academic success,
motivation, and cognitive load. Computers & Education, 111, 74–100. doi:10.1016/j.compedu.2017.04.005
Ecalle, J., Vidalenc, J. L., & Magnan, A. (2020). Computer-based Training Programs to Stimulate Learning to
Read in French for Newcomer Migrant Children: A Pilot Study. Journal of Educational Cultural and Psychological
Studies, (22), 23–47. doi:10.7358/ecps-2020-022-ecal
Guo, H., Rios, J. A., Haberman, S., Liu, O. L., Wang, J., & Paek, I. (2016). A new procedure for detection of
students’ rapid guessing responses using response time. Applied Measurement in Education, 29(3), 173–183.
doi:10.1080/08957347.2016.1171766
Hsu, C. K., & Hwang, G. J. (2014). A context-aware ubiquitous learning approach for providing instant learning
support in personal computer assembly activities. Interactive Learning Environments, 22(6), 687–703. doi:10
.1080/10494820.2012.745425
Hwang, G. J., Wu, C. H., Tseng, J. C. R., & Huang, I. (2011). Development of a ubiquitous learning platform
based on a real-time help-seeking mechanism. British Journal of Educational Technology, 42(6), 992–1002.
doi:10.1111/j.1467-8535.2010.01123.x


Kim, Y., & Baylor, A. L. (2016). Research-based design of pedagogical agent roles: A review, progress, and
recommendations. International Journal of Artificial Intelligence in Education, 26(1), 160–169. doi:10.1007/
s40593-015-0055-y
Klimova, B. (2021). Are There Any Cognitive Benefits of Computer-Based Foreign Language Training for
Healthy Elderly People?–A Mini-Review. Frontiers in Psychology, 11, 573287. doi:10.3389/fpsyg.2020.573287
PMID:33584410
Kraemer, E. E., Davies, S. C., Arndt, K. J., & Hunley, S. (2012). A comparison of the Mystery Motivator and the
Get’Em On Task interventions for off‐task behaviors. Psychology in the Schools, 49(2), 163–175. doi:10.1002/
pits.20627
Lane, H. C., & Schroeder, N. L. (2022). Pedagogical agents. In B. Lugrin, C. Pelachaud, & D. Traum (Eds.), The
Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent
Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application (pp. 307-330). Association for Computing Machinery. doi:10.1145/3563659.3563669
Li, J., Kizilcec, R., Bailenson, J., & Ju, W. (2016). Social robots and virtual agents as lecturers for video instruction.
Computers in Human Behavior, 55, 1222–1230. doi:10.1016/j.chb.2015.04.005
Martha, A. S. D., & Santoso, H. B. (2019). The design and impact of the pedagogical agent: A systematic
literature review. Journal of Educators Online, 16(1), n1. doi:10.9743/jeo.2019.16.1.8
Mohammadhasani, N., Fardanesh, H., Hatami, J., Mozayani, N., & Fabio, R. A. (2018). The pedagogical agent
enhances mathematics learning in ADHD students. Education and Information Technologies, 23(6), 2299–2308.
doi:10.1007/s10639-018-9710-x
Mousa, M., & Molnár, G. (2020). Computer-based training in math improves inductive reasoning of 9-to 11-year-
old children. Thinking Skills and Creativity, 37, 100687. doi:10.1016/j.tsc.2020.100687
Nunes, T. M., Bittencourt, I. I., Isotani, S., & Jaques, P. A. (2016). Discouraging gaming the system through
interventions of an animated pedagogical agent. In European Conference on Technology Enhanced Learning
(pp. 139-151). Springer. doi:10.1007/978-3-319-45153-4_11
Oduma, C. A., Onyema, L. N., & Akiti, N. (2019). E-learning platforms in business education for skill acquisition. Nigerian Journal of Business Education (NIGJBED), 6(2), 104–112.
Price, T. W., Zhi, R., & Barnes, T. (2017, June). Hint generation under uncertainty: The effect of hint quality
on help-seeking behavior. In E. André, R. Baker, X. Hu, M. Rodrigo, & B. du Boulay (Eds.), International
conference on artificial intelligence in education (pp. 311-322). Springer. doi:10.1007/978-3-319-61425-0_26
Rodrigo, M. M. T., Baker, R. S., Agapito, J., Nabos, J., Repalam, M. C., Reyes, S. S., & San Pedro, M. O. C. (2012). The effects of an interactive software agent on student affective dynamics while using an intelligent tutoring system. IEEE Transactions on Affective Computing, 3(2), 224–236. doi:10.1109/T-AFFC.2011.41
Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2007, June). Can Help-Seeking Be Tutored? Searching
for the Secret Sauce of Metacognitive Tutoring. Artificial Intelligence in Education, 2007, 203–210.
Verginis, I., Gouli, E., Gogoulou, A., & Grigoriadou, M. (2011). Guiding learners into re-engagement through
the SCALE environment: An empirical study. IEEE Transactions on Learning Technologies, 4(3), 275–290.
doi:10.1109/TLT.2011.20
Walonoski, J. A., & Heffernan, N. T. (2006). Detection and analysis of off-task gaming behavior in intelligent
tutoring systems. In International Conference on Intelligent Tutoring Systems (pp. 382-391). Springer.
doi:10.1007/11774303_38
Zwart, D. P., Goei, S. L., Noroozi, O., & Van Luit, J. E. (2021). The effects of computer-based virtual learning
environments on nursing students’ mathematical learning in medication processes. Research and Practice in
Technology Enhanced Learning, 16(1), 1–21. doi:10.1186/s41039-021-00147-x


Rex P. Bringula is a professor at the University of the East (UE) College of Computer Studies and Systems. He
received his BS Computer Science degree from UE as a Department of Science and Technology scholar. He
received his Master’s in Information Technology and Ph.D. in Technology Management at the Technological University
of the Philippines. He is active in conducting school- and government-funded research projects, and in participating
in local and international conferences. His research interests are in computer science/IT education, affective
computing, Internet studies, cyber-behavior, web usability, and environmental issues.

John Vincent Canseco graduated from the University of the East, Manila, Philippines.

Patricia Louise J. Durolfo was a student at the University of the East, Manila, Philippines.

Lance Christian Villanueva was a student at the University of the East, Manila, Philippines.

Gabriel M. Caraos was a student at the University of the East, Manila, Philippines.
