
BTN4

The document analyzes various training and development assessment models, including Phillips's ROI Model, Kirkpatrick's Four-Level Model, CIPP Model, Utility Analysis, Balanced Scorecard, and Success Case Method. Each model has its advantages and disadvantages, focusing on aspects such as financial impact, comprehensive evaluation, and strategic alignment. The analysis emphasizes the importance of selecting appropriate evaluation methods based on the specific training context and organizational needs.

II.

Analysis Of Training And Development Assessment Models


1. Phillips's ROI Model:
The Kirkpatrick Model was the de facto standard for training evaluation in the
1970s and 1980s. In his book on training evaluation, Jack Phillips addressed its
shortcomings by adding considerations for the return on investment (ROI) of
training programs.
The resulting Phillips ROI Model extends the Kirkpatrick framework and includes
five steps:
-​ Reaction
-​ Learning
-​ Application and Implementation
-​ Impact
-​ Return on Investment
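The level-5 calculation itself is straightforward once benefits have been converted to money: ROI (%) = (net program benefits / program costs) × 100. A minimal sketch, using hypothetical figures:

```python
def phillips_roi(total_benefits, total_costs):
    """Phillips level-5 ROI: net program benefits as a percentage of costs."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical program: $100,000 in monetized benefits against $40,000 in costs.
print(phillips_roi(100_000, 40_000))  # 150.0
```

A positive figure means benefits exceeded costs; a negative figure is the signal to examine the level 1-4 data and find where the chain of impact broke.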

Advantages:
Traces the complete chain of impact
Detailed data from the chain of impact allows the training program to be debugged if
level 5 produces a negative ROI figure. Training managers can pinpoint the exact
reason for failure.

Because the Phillips ROI Model extends the Kirkpatrick Model with a fifth level
that examines the financial return on investment (ROI) of training programs, it
offers several advantages:
-​ Financial impact: converting training outcomes into dollar values enables
organizations to directly measure the financial worth of their training efforts,
and it is simple to compare program costs and benefits.

- Holistic Evaluation System: The model has five levels—Reaction, Learning,
Application and Implementation, Impact, and ROI—giving a complete picture
of the effectiveness of a program, ranging from participant satisfaction to return
on investment.

- Informed Decision-Making: Organizations can allocate resources more
efficiently by knowing which programs deliver the best returns, thus facilitating
strategic planning and maximizing training investments.

Measures intangibles
The Phillips ROI model accepts that you cannot measure certain outcomes in
monetary value and the final ROI won’t represent such outcomes.

Outcomes such as customer satisfaction, employee satisfaction, and team bonding are
hard to measure in numbers. Therefore, the Phillips Model measures such “soft”
outcomes in addition to “hard” numerical outcomes.

- Increased Stakeholder Credibility: Presenting tangible ROI outcomes
strengthens the case for training programs, fostering stakeholder confidence
and reinforcing alignment with the strategic goals of the organization.

-​ Ongoing Improvement: The detailed insights gained from each level of analysis
facilitate the identification of areas of improvement, leading to the shaping of
better training programs and improved future returns.

Disadvantages:

Late measurement of ROI


The ROI of a training program evaluated with the Phillips Model is not known
until the program is over. If the ROI then reveals that the program failed, it is
too late to make changes.

- Time-Consuming and Resource-Intensive: Collecting data at five levels
(Reaction, Learning, Application, Impact, and ROI) is time-consuming and
resource-intensive, making the model impractical for smaller organizations
with fewer resources.

- Hard to Isolate Training Impact: It is often difficult to determine whether
improvements resulted from the training alone, because other factors (e.g.,
market conditions, leadership changes) also influence business outcomes.

Useful for only 5-10% of training programs


The Phillips Model provides more than enough data through level 4 to judge how
successful a training program is. The ROI calculation at level 5 is good to
know, but in most cases it is not necessary.

According to Jack Phillips:

-​ All programs need level 1 evaluation


-​ About 90% of programs need level 2 evaluation
-​ About 30% of programs need level 3 evaluation
-​ 10-20% of programs need level 4 evaluation
-​ Only 5-10% of programs need level 5 evaluation
-​ Complexity of Calculating ROI – Conversion of intangible benefits (e.g., staff
motivation or improved teamwork) into financial amounts is subjective and
difficult, raising the risk of inaccurate ROI estimates
-​ Risk of Resistance by Stakeholders – Some managers and workers may resist
participating in extensive information-gathering protocols, leading to
incomplete or unbalanced feedback.
-​ Not Applicable to All Training Initiatives – ROI measurement is better
applicable to skills training with clear outcomes but may not work for soft
skills training, leadership training, or culture change initiatives.

2. Kirkpatrick's Four-Level Model

Kirkpatrick’s Four-Level Model is a widely used framework for evaluating
training programs. It was created by Donald Kirkpatrick, Ph.D., to define four
levels of training evaluation. Each successive level represents a more precise
measure of a training program’s effectiveness.
Level 1: Reaction - measures how the student feels about the training
experience.
Level 2: Learning - assesses the student’s resulting learning and increase in
knowledge from the training experience.
Level 3: Behavior - evaluates how the student’s behavior changes and improves
after applying the skills on the job.
Level 4: Results - examines the impacts that the student’s performance has on
the business.
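The four levels can be read as an evaluation checklist. The sketch below pairs each level with an example measure; the specific instruments named are common illustrative choices, not prescribed by the model itself:

```python
# Illustrative pairing of Kirkpatrick levels with example measures
# (the specific instruments are assumptions, not part of the model).
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "post-training satisfaction survey"),
    2: ("Learning", "pre/post knowledge test"),
    3: ("Behavior", "on-the-job observation weeks after training"),
    4: ("Results", "business metrics such as sales or defect rates"),
}

def describe(level):
    """Render one level of the checklist as a readable line."""
    name, measure = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({name}): {measure}"

for lvl in sorted(KIRKPATRICK_LEVELS):
    print(describe(lvl))
```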

Advantages:
There are many reasons why the Kirkpatrick Model is so widely used. Firstly,
this model offers a comprehensive and systematic way to assess training effectiveness
at multiple levels, from learner satisfaction to business outcomes. This helps
organizations understand how training impacts employee performance and business
growth. The Kirkpatrick Model is highly flexible and adaptable, which means it can
be applied to various types of training, including in-person, e-learning, on-the-job
training, and it is also useful across industries such as healthcare, IT, manufacturing
and finance. Additionally, it can identify the strengths and weaknesses of the training
design, delivery, and support while suggesting areas for improvement. The model can
also help to align the training objectives with the business goals and to demonstrate
the return on investment (ROI) of training.

Disadvantages:
However, there are also some challenges and concerns to be careful of when
employing the Kirkpatrick Model. Levels 3 and 4 require ongoing observation and
data collection, which can be resource-intensive. Organizations may struggle to justify
the cost of extensive evaluations. Collecting and analyzing data from all 4 levels can
be complex. For example, Level 1 relies on students' feedback, which may not
accurately reflect learning effectiveness. It can be difficult to isolate the effects of
training from other factors that influence performance as well as results, and it can be
subjective and biased to measure and interpret the data. Even with carefully developed
metrics, it may not be possible to directly prove the efficacy of the program or a
positive ROI without a long-term and consistent measurement of results. The
Kirkpatrick Model is only employed after a training or instructional program is
finished, which means it’s harder to adjust or improve a training that is in progress.

3. CIPP Model
● The CIPP evaluation model was first introduced by Stufflebeam in 1983 to
provide evaluators with the information needed to make informed
decisions. CIPP is an acronym for Context, Input, Process and Product.
○​ Context: Evaluate whether the objectives meet learners’ expectations
and if they are relevant to the learners’ needs. Furthermore, it assesses
whether the objectives are clear and achieved by the end of the training
program.
○​ Input: Determine the resources of the training program including the
content of the program, the technical infrastructure and equipment that
support the training program’s implementation.
○​ Process: Determine the implementation of the training program through
three aspects: Learner participation, teaching-learning strategies, and
learners’ involvement in research activities. More specifically, it
assesses whether learners actively engage during internships, whether the
teaching method is appropriate for each course module, and whether
learners are encouraged to conduct research projects.
○​ Product: Determine the impacts of the training program via three main
angles: A comprehensive assessment of the program, the competency of
learners after the program, and the impression of the program.
=> This model allows trainers to develop a more holistic picture and thereby
identify where they need to make improvements by evaluating the context,
inputs, processes, and the final product.
●​ Advantages
○​ Comprehensive evaluation: The model allows a thorough assessment by
considering four essential dimensions of a program: Context (the
environment), input (resources and plans), process (implementation),
and product (outcomes and results).
○​ Flexible and adaptable: The CIPP model can be applied to various types
of programs and projects, making it versatile across different fields such
as education, healthcare, and business. It adapts to both formative (early
stage) and summative (final stage) evaluation needs.
○​ Continuous improvement: By focusing on ongoing assessment
throughout the program's lifecycle (from planning to outcomes), the
model promotes continuous improvement through identifying areas for
enhancement in the context, inputs, processes, and results.
○​ Comprehensive decision-making: CIPP provides a well-rounded basis
for decision-making, offering insights into whether the program is
aligned with its goals and objectives, is being implemented as planned,
and if it produces the desired results. This can inform future planning
and resource allocation.
○​ Stakeholder focus: It takes into account the perspectives of various
stakeholders (including participants, funders, and administrators),
helping to understand their needs and expectations. This ensures the
program is tailored to address the priorities of those involved or affected
by the program.
●​ Disadvantages
○​ Complexity and time-consuming: The CIPP model requires significant
time to collect information and demands a large amount of data to fully
evaluate the program and support decision-making, especially for large
or ongoing programs. Furthermore, collecting and analyzing data across
all four dimensions can be a big challenge for smaller organizations or
programs with limited budgets.
○ Requiring adequate expertise: The CIPP model requires evaluators to
have a solid understanding of program design, implementation, and
assessment techniques. Without sufficient expertise, the evaluation
process could be ineffective or inaccurate, limiting the model’s utility.
○​ Overwhelming information: Due to the comprehensive evaluation
process, there is a risk of collecting too much data, which could
overwhelm evaluators and make it difficult to extract meaningful
conclusions. This can lead to an overload of information that hinders
decision-making.
○​ Difficulty in long-term assessment: The model is primarily designed for
assessing programs within a set timeframe. However, its application in
evaluating long-term, ongoing programs may be limited if the program's
outcomes evolve significantly over time.
4. Utility Analysis:

Utility Analysis is a crucial tool in Human Resource Management (HRM) that
helps organizations evaluate the effectiveness of their training programs and other HR
interventions. By providing a systematic method for assessing the financial impact of
HR initiatives, Utility Analysis allows decision-makers to determine whether a
particular program is worth the investment. However, despite its advantages, this
method also has certain limitations. This essay explores the benefits and drawbacks of
Utility Analysis in HRM.

Advantages of Utility Analysis

One of the primary advantages of Utility Analysis is its structured evaluation
approach. It offers a systematic framework for assessing different HR programs,
ensuring that decisions are based on data rather than intuition. By incorporating
multiple criteria, including qualitative and quantitative factors, organizations can make
well-informed choices regarding their HR strategies.

Another significant benefit is financial justification. Utility Analysis helps
organizations measure the return on investment (ROI) of HR initiatives, particularly
training programs. By estimating the monetary value gained from improved employee
performance, companies can determine whether the benefits outweigh the costs. This
financial insight is essential for effective budgeting and resource allocation.

Additionally, Utility Analysis enhances decision-making by providing quantifiable
data that supports strategic planning. HR managers can compare different training
programs based on their expected utility, enabling them to choose the most effective
and cost-efficient option. This data-driven approach minimizes risks and optimizes
workforce development.
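One widely cited way to put a number on this is the Brogden-Cronbach-Gleser utility formula: ΔU = N × T × d_t × SD_y − N × C, where N is the number of trainees, T the duration of the effect in years, d_t the effect size of the training, SD_y the dollar value of one standard deviation of job performance, and C the cost per trainee. A minimal sketch with hypothetical figures:

```python
def utility_gain(n, years, effect_size, sd_y, cost_per_trainee):
    """Brogden-Cronbach-Gleser utility: dU = N * T * d_t * SDy - N * C."""
    return n * years * effect_size * sd_y - n * cost_per_trainee

# Hypothetical program: 50 trainees, a 2-year effect, effect size 0.4,
# SDy of $10,000, and a $1,500 training cost per person.
print(utility_gain(50, 2, 0.4, 10_000, 1_500))  # 325000.0
```

Note how strongly the result depends on the estimated SD_y and effect size; the subjectivity of those estimates is exactly the limitation discussed under the disadvantages.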
Disadvantages of Utility Analysis

Despite its usefulness, Utility Analysis has some drawbacks. One key limitation is
the subjectivity involved in assessments. Many aspects of this analysis rely on
estimates and assumptions, such as the expected improvement in performance or the
monetary value assigned to productivity gains. This subjectivity can introduce biases
and affect the accuracy of the analysis.

Moreover, Utility Analysis simplifies complex organizational dynamics. While it
focuses on specific measurable outcomes, it may overlook external factors and
interdependencies that influence the success of HR programs. As a result,
organizations might not get a complete picture of how an intervention truly impacts
performance and productivity.

Another challenge is the requirement for accurate data. Conducting a reliable
Utility Analysis depends on the availability of comprehensive and precise data.
However, collecting such data can be difficult, especially when measuring intangible
benefits like employee motivation or long-term career development. Without accurate
information, the results of the analysis may be misleading.

Utility Analysis is a valuable method for assessing the effectiveness of HR
interventions, particularly training programs. It provides a structured, data-driven
approach to evaluating ROI and making informed decisions. However, its limitations,
including subjectivity, simplification of real-world complexities, and the need for
accurate data, must be carefully considered. To maximize its effectiveness,
organizations should use Utility Analysis alongside other evaluation methods to gain a
more comprehensive understanding of their HR initiatives.

5. Balanced Scorecard
A balanced scorecard (BSC) is a strategic management performance metric that a
company can use to improve internal business operations and external results. It's a
way for organizations to focus on processes that, when combined, can help them meet
their financial goals. BSCs are used extensively in business and industry, government,
and nonprofit organizations worldwide.
- The balanced scorecard (BSC) is measured across four main aspects:
-​ At the center of the model is the Vision and Strategy, which is the core
direction that the business wants to achieve. Every aspect and goal is built on
this vision and strategy.
-​ Financial: the business measures and monitors financial requirements &
results. (eg: Focus on reducing production costs through optimizing raw
material usage, improving processes, and applying energy-saving
technologies.)
-​ Customer: measure and monitor customer satisfaction and performance
requirements to meet customer needs. (eg: Improve product quality by strictly
controlling quality, increasing the rate of qualified products, and implementing
customer feedback programs.)
-​ Internal Business Process: measure & monitor the metrics and requirements
of key internal processes within the customer-facing business (eg. Improve
production line efficiency by reducing waiting time, eliminating
non-value-added activities, and applying Lean Manufacturing methods.)
-​ Learning & Growth: focuses on how businesses educate & train employees,
improve knowledge and how businesses have used this knowledge to maintain
competitive advantage in the market. (eg: Train staff on operation and
maintenance of new equipment, and encourage innovation from staff to adopt
advanced technology.)
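In practice, a scorecard reduces to tracking KPI actuals against targets in each perspective. The sketch below uses invented KPI names and figures purely for illustration, and assumes higher values are better for every KPI shown:

```python
# Hypothetical scorecard: each perspective maps KPI name -> (actual, target),
# assuming higher is better for every KPI shown.
scorecard = {
    "Financial":         {"gross margin %": (32.0, 30.0)},
    "Customer":          {"satisfaction score": (4.1, 4.5)},
    "Internal Process":  {"first-pass yield %": (96.0, 95.0)},
    "Learning & Growth": {"training hours per employee": (12.0, 10.0)},
}

def perspective_status(perspective):
    """Summarize how many KPIs in one perspective meet their targets."""
    kpis = scorecard[perspective]
    met = sum(actual >= target for actual, target in kpis.values())
    return f"{perspective}: {met}/{len(kpis)} KPIs on target"

for p in scorecard:
    print(perspective_status(p))
```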
Balanced Scorecard (BSC): advantages and disadvantages

Advantages:
- Improved Strategic Communication & Execution: creates a unified
performance measurement system, ensuring that all team members and
departments use the same language when discussing strategy and progress,
which makes collaboration easier.
- Connects the individual worker to organizational goals: a balanced
scorecard helps employees “keep their eyes on the prize”.
- Better Alignment of Projects and Initiatives: the Balanced Scorecard helps
organizations map their projects and initiatives to the different strategic
objectives, which in turn ensures that those projects and initiatives are
tightly focused on delivering the most important strategic objectives.
- Improved Performance Reporting: the Balanced Scorecard can be used to guide
the design of performance reports and dashboards.
- Easier Strategy Reporting: with the Balanced Scorecard framework
implemented, the strategy map with its supporting documentation is already a
strategic report that is updated regularly.

Disadvantages:
- Picking the Wrong Metrics: if you get the performance indicators wrong, the
balanced scorecard can completely misrepresent how your organization is
performing. Instead of driving improvement, it can lead to wasted time
chasing the wrong priorities.
- Poor Data Quality or Gaps: the balanced scorecard relies heavily on
accurate and up-to-date data. If the data is incomplete, out-of-date, or
just plain wrong, it won't give a clear or reliable picture of performance,
which can lead to bad decisions.
- Overloading on KPIs: choosing too many KPIs can quickly overwhelm your
team. When everything is measured, nothing stands out as a priority, which
can cause gridlock and make it hard to take clear action.
- Requires strong leadership support: a scorecard initiative may struggle
because new leadership isn't convinced that the BSC is a viable option, or
because existing leadership simply does not like or understand the structure.

6. Success Case Method


-​ The Success Case Method is a process that combines analysis with case study
and storytelling. It deliberately looks at the most, and least, successful
participants of a program to find out how well an organizational initiative is
working, as well as explain the contextual factors that differentiate successful
from unsuccessful adopters of new initiatives.
-​ There are five key steps in the SCM method (Brinkerhoff, 2003, p.29):
+​ Focusing and planning a Success Case study
+​ Creating an “impact model” that defines what success should look like
+​ Designing and implementing a survey to search for best and worst cases
+​ Interviewing and documenting success cases
+​ Communicating findings, conclusions, and recommendations
-​ The Success Case Method works by dividing the process into two parts.
+ Part 1: A survey is conducted among training program participants to
identify the most and least successful individuals.
+ Part 2: The selected candidates are interviewed in detail to answer
pivotal questions. The findings from these interviews are shared
with the rest of the trainees in the form of success stories.
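Part 1's screening step amounts to ranking survey scores and keeping only the extremes. A minimal sketch, with hypothetical participants and scores:

```python
def extreme_cases(scores, fraction=0.1):
    """Return (most successful, least successful) participant names,
    taking roughly the top and bottom `fraction` of the ranked list."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k], ranked[-k:]

# Hypothetical application-success scores from the Part 1 survey.
survey = {"An": 92, "Binh": 47, "Chi": 88, "Dung": 30, "Em": 71}
best, worst = extreme_cases(survey, fraction=0.2)
print(best, worst)  # ['An'] ['Dung']
```

Only these extreme cases go forward to the in-depth Part 2 interviews.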

THE SUCCESS CASE METHOD (SCM): advantages and disadvantages

Advantages:
- Cost-Effective & Accessible: provides a low-cost and easy-to-use approach.
- Practical Insight Gathering: offers a real-world, rich understanding of
training impacts.
- Structured Evaluation: uses established methodologies to assess training
effectiveness, participant reactions, strengths, and weaknesses.
- Deep-Dive Analysis: combines interviews and surveys to build a theory of
change, uncover critical success factors, and understand outcome significance.
- Evidence-Based Decision Making: demonstrates how training improves business
results, justifies investments, identifies obstacles, and informs impact
enhancement strategies.
- Compelling Illustration: uses real user stories to produce dramatic,
hard-to-argue-with evidence of results and accomplishments.
- Knowledge Sharing: identifies best practices and rapidly diffuses them into
the organization’s knowledge base, preventing unrecognized expertise from
hindering competitive improvement.
- Effective Evaluation: meets the practical demands of critically evaluating
new initiatives, providing legally sound evidence of their success or failure.

Disadvantages:
- Qualitative Data Limitations: the SCM’s qualitative nature makes it
challenging to quantify findings and compare them across different cases. It
lacks the statistical rigor and generalizability of quantitative models,
making broad, data-driven conclusions about training effectiveness difficult.
- Subjectivity and Researcher Dependency: the method relies heavily on the
skills, objectivity, and judgment of the researchers, which can introduce
subjectivity into the evaluation process. Evaluators’ biases or limited
qualitative analysis skills can affect the interpretation and credibility of
the success stories.
- Limited Scope of Cases: by intentionally focusing on extreme cases (the
best and worst performers), the SCM does not capture the average
participant’s experience or the full spectrum of outcomes. This extreme-case
focus may lead to conclusions that are not representative of the broader
population.
- Potential for Over-interpretation: because SCM emphasizes compelling
narratives, there is a risk that dramatic success or failure stories may
overshadow more nuanced, less extreme outcomes, leading to potential
over-interpretation of results.

-​ Case Study: System Analysis Program Development (SAP)


+ SAP has used the Success Case Method to evaluate a customer empathy
training program for its sales teams.
+​ Using SCM, SAP sought to improve how its sales professionals engaged
with customers. By focusing on those individuals who showed
outstanding application of the training, the SCM helped pinpoint the
specific behaviors that were driving improved customer satisfaction (such
as active listening and proactive problem-solving).
+​ The process:
●​ Defining Success: SAP first established clear business outcomes
as the key indicators of success for the training.
●​ Surveying Participants: They then administered a survey to a
large group of sales professionals who had completed the
training. This survey was designed to identify both high
performers (those who had effectively applied the training) and
low performers.
●​ Conducting In-Depth Interviews: For the high-performing group,
SAP conducted detailed interviews:
➢​ Specific behaviors and techniques that successful sales
professionals used.
➢​ How these behaviors translated into better customer
interactions and higher sales.
➢​ The organizational supports and environmental factors that
facilitated these successes.
●​ Generating Actionable Insights:
These success stories helped SAP understand what was working well
and offered clear recommendations for how to replicate these practices
across the sales team.
●​ Informing Decision Making:
The insights from the SCM study enabled senior management to justify
ongoing and future investments in the training program. They also used
the findings to make adjustments to boost overall training effectiveness.
