
Chapter 22

Assessment in Industrial and Organizational Psychology: An Overview

John P. Campbell

DOI: 10.1037/14047-022
APA Handbook of Testing and Assessment in Psychology: Vol. 1. Test Theory and Testing and Assessment in Industrial and Organizational Psychology, K. F. Geisinger (Editor-in-Chief). Copyright 2013 by the American Psychological Association. All rights reserved.

The basic theme of this chapter is that the assessment enterprise in industrial and organizational (I/O) psychology is very broad, very complex, and very intense. The major underlying reason is that the world of work constitutes the major portion of almost everybody's adult life, over a long period of time. It is complicated. The major components of this complexity are the broad array of variables that must be assessed; the multidimensionality of virtually every one of them; the difficulties involved in developing specifications for such a vast array of variables; the wide variety of assessment methods; the intense interplay among science, research, and practice; and the critical value judgments that come into play. This chapter gives a structured overview of these issues, with particular reference to substantively modeling psychology's major variable domains and the attendant assessment issues that are raised. The conclusion is that substantive specifications for what psychologists are trying to assess are critically important, and I/O psychologists should not shortchange this requirement, no matter how much the marketplace seems to demand otherwise.
To be fair, the term assessment can take on different meanings. Perhaps its narrowest construction is as a multifactor evaluation of specific individuals in terms of their suitability for a specific course of action, such as selection, training, or promotion. However, if the full spectrum of research and practice concerning the applications of psychology to the world of work is considered, assessment becomes a much, much broader activity. This chapter takes the broadest perspective. It equates assessment with measurement and outlines a map of the assessment landscape. The landscape is described in terms of (a) an overall framework of relationships that describe what I/O psychology is about, (b) the range of assessment purposes that flow from this framework, (c) the range and complexity of the variables that require assessment, (d) the range and complexity of the assessment methods that can be used, and (e) the psychometric issues that permeate the assessment enterprise.
In the beginning were the independent variable and the dependent variable, a distinction that sounds sophomoric but is of fundamental importance and is often neglected. For example, when discussing the history of assessing leadership, a distinction is often made between trait models and behavioral models as though they were competing explanations (e.g., Hunt, 1999). However, the behavioral models (e.g., Bowers & Seashore, 1966) focus on leader performance (the dependent variable), and trait models focus on a particular set of performance determinants such as cognitive ability and personality (the independent variables). The dependent variable is the variable of real interest. It is the variable one wants to predict, enhance, or explain for various value-laden reasons. The independent variable has no intrinsic, or extrinsic, value. For example, knowing someone's general cognitive ability has no intrinsic value. It only has value because it predicts, or does not predict, something else that is of value (e.g., leadership performance). Similarly, independent variables such as training programs or motivational interventions have no value unless they can change something that is important (i.e., critical dependent variables).
DEPENDENT VARIABLE LANDSCAPE
So what then are the dependent variables of value that populate the I/O psychology landscape? Identifying the relevant set is indeed a value judgment, and the superordinate distinction is whether one takes the individual or the institutional (i.e., organizational) point of view (Cronbach & Gleser, 1965). That is, is it the values of the management that determine what dependent variables are important, or the values of the individual job holder? The management cares about the viability of the organization. Individuals care about their own viability. Sometimes their respective concerns overlap. For example, the management values high individual performance because it contributes to the goals of the organization. Individuals strive for high performance because it improves their standard of living, long-term financial security, or sense of self-worth. However, for the individual, higher and higher levels of performance may lose value because the effort required to achieve them detracts from other dependent variables, such as one's general life satisfaction.
Wherein lie the values of the researcher and scientist? One argument is that the researcher and scientist must choose between the values of the organization and the values of the individual. Once that choice is made, then the interests of the scientist focus on determining the best methods of assessment, given the purposes for which the information is to be used. An alternative argument is that the scientist does not make the value judgment. A dependent variable, such as individual performance, is modeled and measured for the purpose of studying its determinants. Such research can be used both by the organization to improve selection and by the individual to improve career planning. The intent here is not to settle such arguments but to make the point that value judgments permeate all choices of what to assess on the dependent variable side. It is also tempting to argue that values do not intrude on the independent variable side, where the canons of psychometric theory preside, but obviously such is not the case, as discussed in a later section of the chapter. Those value judgments pertain to the consequences of the decisions made as a function of assessment of the independent variable. A partial taxonomy of the dependent variables in I/O psychology follows.
From the organization's point of view, the dependent variables are

- individual performance in a work role, including individual performance as a team member;
- voluntary turnover;
- team performance as a team, not as the aggregation of individual contributions;
- team viability (analogous to individual turnover);
- productivity (in the economist's sense) of (a) individuals, (b) teams, and (c) organizational units; and
- organizational unit effectiveness (i.e., the bottom line).

From the individual's point of view, the dependent variables are

- career and occupational achievement;
- satisfaction with the outcomes of working (which could include satisfaction with performance achievement);
- perceived (or experienced) fair treatment (e.g., distributive and procedural justice);
- frequency of injury from accidents; and
- overall health and well-being, including physical and mental health, perceived stress, and work-family conflict.

These two lists carry at least the following assumptions, qualifications, or both:

1. Organizations are not concerned about job satisfaction or subjective well-being as dependent variables, but only as independent variables that have implications for performance, productivity, effectiveness, or turnover.
2. Information pertaining to the determinants of performance may be used in a selection system, to benefit the organization, or in a career guidance system, to benefit the individual (e.g., using ability, personality, and interest assessment to plan educational or job search activities). Similarly, training programs that produce higher skill levels can enhance individual performance for the benefit of the organization or enhance career options for individuals.
3. Fair and equitable treatment of individual employees and the level of individual health and well-being may be important dependent variables for the organization if they are incorporated as goals in the organization's ethical code or in a policy statement of corporate social responsibility, for which the management is then held responsible.
For the most part, I/O psychology does not operate from the individual point of view, even though several of its early pioneers did, for example, Donald Paterson or Walter van Dyke Bingham (cf. Koppes, Thayer, Vinchur, & Salas, 2007). At some point, vocational psychology (i.e., the individual point of view) became part of counseling psychology (Campbell, 2007; Meyer, 2007).

The dependent variable landscape is complex for assessment purposes, even as illustrated by the preceding simple lists. The complexity of assessment increases considerably when each of the general variables is modeled in terms of its major components. Consider each of the following.

Individual Performance

Before the mid-1980s, there was, relative to the assessment of individual performance, simply "the criterion problem" (J. T. Austin & Villanova, 1992), which was the problem of finding some existing and applicable indicator that could be construed as a measure (i.e., assessment) of individual performance (e.g., sales, number of pieces produced) while not worrying too much about the validity, reliability, deficiency, and contamination of the indicators. Since then, much has happened regarding how performance is defined and how its latent structure is modeled.

In brief, the consensus is that individual performance is best defined as consisting of the actions people engage in at work that are directed at achieving the organization's goals and that can be scaled in terms of how much they contribute to said goals. For example, sometimes it takes a great deal of covert thinking before the individual does something. Performance is the action, not the thinking that preceded the action, and someone must identify those actions that are relevant to the organization's goals and those that are not. For those that are (i.e., performance), the level of proficiency with which the individual performs them must be scaled. Both the judgment of relevance and the judgment of level of proficiency depend on a specification of the organization's important substantive goals, not content-free goals such as "make a profit."
Nothing in this definition requires that a set of performance actions be circumscribed by the term job or that they remain static over a significant length of time. Neither does it require that the goals of an organization remain fixed or that a particular management cadre be responsible for determining the organization's goals (also known as vision). However, for performance assessment to take place, the major operative goals of the organization, within some meaningful time frame, must be known, and the methods by which individual actions are judged to be goal relevant, and scaled in terms of what represents high and low proficiency, must be legitimized by the stakeholders empowered to do so by the organization's charter. Otherwise, there is no organization. This is as true for a family as it is for a corporation.
This definition creates a distinction between performance, as defined earlier, and the outcomes of performance (e.g., sales level, incurred costs) that are not solely determined by the performance of a particular individual, even one of the organization's top executives. If these outcome indicators represent the goals of the organization, then individual performance should certainly be related to them. If not, the specifications for individual performance are wrong and need changing or, conversely, the organization is pursuing the wrong goals. If the variability in an outcome indicator is totally under the individual's control, then it is a measure of performance.
Given an apparent consensus on this definition of performance, considerable effort has been devoted to specifying the dimensionality of performance, in the context of the latent structure of the performance actions required by a particular occupation, job, position, or work role (see Bartram, 2005; Borman & Brush, 1993; Borman & Motowidlo, 1993; Campbell, McCloy, Oppler, & Sager, 1993; Griffin, Neal, & Parker, 2007; Murphy, 1989a; Organ, 1988; Yukl, Gordon, & Taber, 2002). These models have become known as performance models, and they seem to offer differing specifications for what constitutes the nature of performance as a construct. However, the argument here is that the correspondence among them is virtually total.

Campbell (2012) has integrated all past and current specifications of the dimensional structure of the dependent variable, individual performance, including those dealing with leadership and management performance, and the result is summarized in the eight basic factors discussed in the next section. Orthogonality is not asserted or implied, but content distinctions that have different implications for selection, training, and organizational outcomes certainly are. Although scores on the different dimensions may be added together for a specific measurement purpose, it is not possible to provide a substantive specification for a general factor. Whether dimensions can be as general as contextual performance or citizenship behavior is also problematic.
Basic factors. The basic substantive factors of individual performance in a work role (which are not synonymous with Campbell et al., 1993) are asserted to be the following.

Factor 1: Technical Performance. All models acknowledge that virtually all jobs or work roles have technical performance requirements. Such requirements can vary by substantive area (driving a vehicle vs. analyzing data) and by level of complexity or difficulty within area (driving a taxi vs. driving a jetliner; tabulating sales frequencies vs. modeling institutional investment strategies). Technical performance is not to be confused with task performance. A task is simply one possible unit of description that could be used for any performance dimension.

The subfactors for this dimension are obviously numerous, and the domain could be parsed into wide or narrow slices. The Occupational Information Network (O*NET; Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999) is based on the U.S. Department of Labor's Standard Occupational Classification structure, which currently uses 821 occupations for describing the major distinctions in technical task content across the entire labor force, and the 821 occupations are further aggregated into three higher order levels consisting of 449, 96, and 23 occupational clusters, respectively. The managers of O*NET have interestingly divided some of the Standard Occupational Classifications into narrower slices to better suit user needs and have also added new and emerging occupations, such that O*NET 14.0 collected data on 965 occupations. The number will grow in the future (Tippins & Hilton, 2010). Potentially, at least, an occupational classification based on technical task content could be used to archive I/O psychology assessment data on individual work-role performance, end-of-training performance, or predicted performance.
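Purely as a sketch of how such an archive might use the hierarchy, the snippet below rolls individual assessment records up from detailed O*NET-SOC-style codes (format "XX-YYYY.NN", with the first two digits identifying the major group) to major-group aggregates. The specific codes, group titles, and scores are invented for illustration and are not authoritative O*NET entries.

```python
# Sketch: rolling assessment records up an occupational hierarchy.
# Codes, group titles, and scores are illustrative, not official O*NET data.
from collections import defaultdict

MAJOR_GROUPS = {  # a few of the 23 SOC major groups (illustrative labels)
    "15": "Computer and Mathematical",
    "29": "Healthcare Practitioners",
    "53": "Transportation and Material Moving",
}

def major_group(onet_soc_code: str) -> str:
    """Map a detailed code (e.g., '29-1141.00') to its major group."""
    return MAJOR_GROUPS.get(onet_soc_code[:2], "Unknown")

archive = defaultdict(list)
for code, score in [("29-1141.00", 4.2), ("53-3054.00", 3.1), ("15-2041.00", 4.6)]:
    archive[major_group(code)].append(score)

for group, scores in archive.items():
    print(group, round(sum(scores) / len(scores), 2))
```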
Factor 2: Communication. The Campbell et al.
(1993) model is the only one that isolates communication as a separate dimension, but it appears as
a subfactor in virtually all others. Communication
refers to the proficiency with which one conveys
information that is clear, understandable, and well
organized. It is defined as being independent of subject matter expertise. The two major subfactors are
oral and written communication.
Factor 3: Initiative, Persistence, and Effort. This factor emerged from the contextual performance and management performance literatures as well as the organizational citizenship behavior literature, in which it was referred to as individual initiative. To make this factor conform to the definition of performance used here, it must be composed of observable actions. Consequently, it is typically specified in terms of working extra hours, voluntarily taking on additional tasks, going beyond prescribed responsibilities, working under extreme or adverse conditions, and so forth.
Factor 4: Counterproductive Work Behavior. Counterproductive Work Behavior (CWB), as it has come to be called, refers to a category of individual actions or behaviors that have negative implications for accomplishment of the organization's goals (see Chapter 35, this volume, for additional information on this area).
The current literature does not speak with one voice regarding the meaning of CWB, but the specifications generally circumscribe actions that are intentional, that violate or deviate from prescribed norms, and that have a negative effect on the individual's contribution to the goals of the unit or organization. Descriptions of this domain are provided by Gruys and Sackett (2003) and Robinson and Bennett (1995). The general agreement seems to be that two major subfactors exist (e.g., see R. J. Bennett & Robinson, 2000; Berry, Ones, & Sackett, 2007; Dalal, 2005), distinguished by deviant behaviors directed at the organization (theft, sabotage, falsifying information, malingering) and behaviors directed at individuals, including the self (e.g., physical attacks, verbal abuse, sexual harassment, drug and alcohol abuse). Although not yet fully substantiated by research, it seems reasonable to also expect an approach-avoidance, or moving toward versus moving away, distinction for both organizational deviance and individual deviance. That is, the CWBs dealing with organizational deviance seem to be divided between aggressively destroying or misusing resources versus avoiding or withdrawing from the responsibilities of the work role. Similarly, CWBs directed at individuals seem to be divided between aggressive actions that are directed at other people and destructive actions directed at the self, such as alcohol and drug abuse and neglect of safety precautions. The approach-avoidance distinction is a recurring one in the study of motivation (Elliot & Thrash, 2002; Gable, Reis, & Elliot, 2003) and of personality (Watson & Clark, 1993), including a major two-factor model of psychopathology (Markon, Krueger, & Watson, 2005). It is also suggested in a study of CWB by Marcus, Schuler, Quell, and Humpfner (2002).
A major issue in the CWB literature is whether its principal subfactors are simply the extreme negative end of other performance factors or whether they are independent constructs. The evidence currently available (Berry et al., 2007; Dalal, 2005; Kelloway, Loughlin, Barling, & Nault, 2002; Miles, Borman, Spector, & Fox, 2002; Ones & Viswesvaran, 2003; Spector, Bauer, & Fox, 2010) has suggested that CWBs are not simply the negative side of other performance components. Low scores on other performance dimensions could result from a lack of knowledge or skill, but low scores on the CWB dimension reflect intentional deviance and are dispositional in origin.
Factor 5: Supervisory, Manager, Executive (i.e., Hierarchical) Leadership. This factor refers to leadership performance in a hierarchical relationship. The substantive content, as specified by the leadership research literature, is most parsimoniously described by the six leadership factors listed in Exhibit 22.1 (Campbell, 2012). The parsimony results from the remarkable convergence of the literature, as detailed in Campbell (2012), from the Ohio State and Michigan studies through the contingency theories of Fiedler, House, Vroom, and Yetton to the current emphasis on being charismatic and transformational, leading the team, and operating in highly complex and dynamic environments. In conversations about leadership, the emphasis may be on leader performance, as defined here; on the outcomes of leader actions (e.g., follower satisfaction, unit profitability); on the determinants (predictors) of leadership performance; or on the contextual influences on leader performance or performance outcomes. However, when describing or assessing leadership performance (as defined here), the specifications are always in terms of one or more of these six factors. The relative emphasis may be different, and different models may hypothesize different paths from leader performance to leader effectiveness, which for some people may be the interesting part, but the literature's characterization of leader performance itself seems to always be within the boundaries of these six subfactors.

Similarly, the six subfactors circumscribe hierarchical leadership performance at all organizational levels. However, the relative emphasis on the factors may change at higher organizational levels, and the specific actions within each subfactor may also receive differential emphasis.

Exhibit 22.1
Six Basic Factors Making Up Leadership Performance

1. Consideration, support, person centered: Providing recognition and encouragement, being supportive when under stress, giving constructive feedback, helping others with difficult tasks, building networks with and among others
2. Initiating structure, guiding, directing: Providing task assignments; explaining work methods; clarifying work roles; providing tools, critical knowledge, and technical support
3. Goal emphasis: Encouraging enthusiasm and commitment for the group's or organization's goals, emphasizing the important missions to be accomplished
4. Empowerment, facilitation: Delegating authority and responsibilities to others, encouraging participation, allowing discretion in decision making
5. Training, coaching: One-on-one coaching and instruction regarding how to accomplish job tasks, how to interact with other people, and how to deal with obstacles and constraints
6. Serving as a model: Modeling appropriate behavior regarding interacting with others, acting unselfishly, working under adverse conditions, reacting to crisis or stress, working to achieve goals, showing confidence and enthusiasm, and exhibiting principled and ethical behavior

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.
Factor 6: Management Performance (Hierarchical). Within a hierarchical organization, this factor includes those actions that deal with obtaining, preserving, and allocating the organization's resources to best achieve its goals. The major subfactors of management performance are given in Exhibit 22.2 (Campbell, 2012). The major distinction between leadership performance and management performance, which not everybody agrees on, is that the leadership dimensions involve interpersonal influence. The management dimensions do not. As it was for the components of leadership, there may be considerably different emphases on the management performance subfactors across work roles and also as a function of the type of organization, organizational level, changes in the situational context, changes in organization goals, and so forth. Also, nothing in the leadership-management distinction implies two separate jobs or work roles. They coexist.

Exhibit 22.2
Eight Basic Factors of Management Performance

1. Decision making, problem solving, and strategic innovation: Making sound and timely decisions about major goals and strategies. Includes gathering information from both inside and outside the organization, staying connected to important information sources, forecasting future trends, and formulating strategic and innovative goals to take advantage of them
2. Goal setting, planning, organizing, and budgeting: Formulating operative goals; determining how to use personnel and resources (financial, technical, logistical) to accomplish goals; anticipating potential problems; estimating costs
3. Coordination: Actively coordinating the work of two or more units or the work of several work groups within a unit; scheduling operations; includes negotiating and cooperating with other units
4. Monitoring unit effectiveness: Evaluating progress and effectiveness of units against goals; monitoring costs and resource consumption
5. External representation: Representing the organization to those not in the organization (e.g., customers, clients, government agencies, nongovernment organizations, the public); maintaining a positive organizational image; serving the community; answering questions and complaints from outside the organization
6. Staffing: Procuring and providing for the development of human resources; not one-on-one coaching, training, or guidance, but providing the human resources that the organization or unit needs
7. Administration: Performing day-to-day administrative tasks, keeping accurate records, documenting actions; analyzing routine information and making information available in a timely manner
8. Commitment and compliance: Compliance with the policies, procedures, rules, and regulations of the organization; full commitment to orders and directives, together with loyal constructive criticism of organizational policies and actions

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.

Factor 7: Peer-Team Member Leadership Performance. The content of this factor is parallel to the actions that make up hierarchical leadership (see Factor 5). The defining characteristic is that these actions occur in the context of peer or team member interrelationships, and the peer-team relationships in question can be at any organizational level (e.g., production teams vs. management teams). That is, the team may consist of nonsupervisory roles or a team of unit managers.
Factor 8: Team Member-Peer Management Performance. A defining characteristic of the high-performance work team (e.g., Goodman, Devadas, & Griffith-Hughson, 1988) is that team members perform many of the management functions shown in Exhibit 22.2. For example, the team member performance factors identified in a critical incident study by Olson (2000) that are not accounted for by the technical performance factors, or the peer leadership factors, concern such management functions as planning and problem solving, determining within-team coordination requirements and workload balance, and monitoring team performance. In addition, the contextual performance and organizational citizenship behavior literatures have both strongly indicated that representing the unit or organization to external stakeholders and exhibiting commitment to and compliance with the policies and procedures of the organization are critical performance factors at any organizational level. Consequently, to a greater extent than most researchers realize or acknowledge, important elements of management performance exist in the peer or team context as well as in the hierarchical (i.e., management-subordinate) setting.

Again, these eight factors are intended to be an integrative synthesis of what the literature has suggested are the principal dimensions of performance in a work role. They are meant to encompass all previous work on individual performance modeling, team member performance, and leadership and management. Even though the different streams of literature may use somewhat different words for essentially the same performance actions, great consistency exists across the different sources.
Performance dynamics. The latent structure just summarized has direct implications for the content of performance assessments. However, it does not speak to whether an individual's level of performance is stable over time or whether it changes. Assessment of performance dynamics must deal with additional complexities. One source of such dynamics is that performance requirements of the work role itself change over time, which can occur because of changes in (a) the substantive content of the requirements, (b) the level of performance expected, (c) the conditions under which a particular level of performance is expected, or (d) some combination of these. Individuals can also change. Much of I/O psychology research and practice deals with planned interventions designed to enhance the individual knowledge, skill, and motivational determinants of performance, such as training and development, goal setting, feedback, rewards of various kinds, better supervision, and so forth. Such interventions, with performance requirements held constant, could increase the group mean, have differential effects across people, or both. The performance changes produced can be sizable (e.g., Carlson, 1997; Katzell & Guzzo, 1983; Locke & Latham, 2002).
Interventions designed to enhance individual performance determinants can also be implemented by the individual's own processes of self-management and regulation (Kanfer, Chen, & Pritchard, 2008; Lord, Diefendorff, Schmidt, & Hall, 2010), and the effectiveness of these self-regulation processes could vary widely across people. In addition, if they have the latitude to do so, individuals could conduct their own job redesign (i.e., change the substantive content of their work role) to better utilize their knowledge and skills and increase the effort they are willing to spend. Academics are fond of doing that.

As noted by Sonnentag and Frese (2012), individual performance can also change simply as a function of the passage of time. Of course, time is a surrogate for such things as practice and experience, the aging process, or changes in emotional states (Beal, Weiss, Barros, & MacDermid, 2005).
Most likely, for any given individual over any given period of time, many of these sources of performance change can be operating simultaneously. Performance dynamics are complex, and attempts to model the complexity have taken many forms. For example, there could be characteristic growth curves for occupations (e.g., Murphy, 1989b), differential growth curves across individuals (Hofmann, Jacobs, & Gerras, 1992; Ployhart & Hakel, 1998; Stewart & Nandkeolyar, 2006; Zyphur, Chaturvedi, & Arvey, 2008), both linear and nonlinear components for growth curves (Deadrick, Bennett, & Russell, 1997; Reb & Cropanzano, 2007; Sturman, 2003), and cyclical changes resulting from a number of self-regulatory mechanisms (Lord et al., 2010). Empirical demonstrations of each of these have been established.
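As a minimal sketch of what differential growth-curve modeling involves, the snippet below simulates three individuals whose performance trajectories have different linear and quadratic components and recovers those components with a per-person polynomial fit. The simulation parameters are invented, and the simple per-person fit stands in for the mixed-effects (random-coefficient) models typically used in the cited studies.

```python
# Sketch: differential (per-person) growth curves with linear and
# nonlinear components. Data are simulated; published studies typically
# use mixed-effects models rather than separate per-person fits.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12)

for person in range(3):
    start = rng.normal(50, 5)        # initial performance level
    slope = rng.normal(2.0, 0.8)     # linear growth component
    curve = rng.normal(-0.08, 0.03)  # nonlinear (quadratic) component
    perf = start + slope * months + curve * months**2 + rng.normal(0, 2, months.size)

    b2, b1, b0 = np.polyfit(months, perf, deg=2)  # highest power first
    print(f"person {person}: intercept={b0:.1f}, linear={b1:.2f}, quadratic={b2:.3f}")
```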
Adapting to dynamics. Adaptability can be viewed either as a characteristic of performance itself (i.e., a category of performance actions), as Hesketh and Neal (1999) viewed it, or as a property of the individual (i.e., as a determinant of performance). Ployhart and Bliese (2006) presented a thorough discussion of this issue and argued that it is more useful to model (i.e., identify the characteristics of) the adaptive individual than it is to propose adaptability as a distinct content dimension of performance. One reason is that the general definition of adaptability is not content domain specific, and providing specifications for adaptability as a distinct performance dimension has been difficult (e.g., see Pulakos, Arad, Donovan, & Plamondon, 2000).
Domain-specific dynamics. In sum, it can be taken as a given that work-role performance requirements change over time, sometimes over very short periods of time, and that individuals change (i.e., adapt) to meet them. Individuals can also change in anticipation of changes in performance requirements. Many interventions (e.g., training, goal setting, reward systems) have been developed to help individuals adapt to changing performance requirements. Individuals can also actively engage in their own self-management to develop additional knowledge and skill and to regulate the direction and intensity of their effort. If the freedom to do so exists, they can even proactively change their own performance responsibilities, or at least their relative emphases, so as to better use their own knowledge and skill or to better accomplish unit goals. Even if performance requirements remain relatively constant, individual performance can change over time as the result of practice, feedback, increasing experience, cognitive and physical changes resulting from aging, or even fluctuation in affect or subjective well-being.

As a result of all this, one might ask what implications performance dynamics and individual adaptability have for substantive models of individual work performance. This question is not the right question. A more appropriate question is, what are the implications of substantive models of performance for the assessment of performance dynamics and individual adaptability? The argument here is that although the latent dimensions of performance may be interdependent (e.g., higher technical performance could enhance leadership), the assessment of performance change must be linked to the individual performance dimensions. That is, the nature of performance changes may be different for different dimensions.
Summary. Why devote so much space to the basic modeling of individual performance in what is supposed to be an overview of assessment in I/O psychology? There are two reasons. First, individual performance is I/O psychology's most important dependent variable. Second, considering the assessment of individual performance raises some very fundamental issues that are relevant for the assessment of virtually all other variables, both dependent and independent. For example, what is the most useful specification for the latent structure? To what extent is the most useful specification a function of value judgments? Judgments by whom? Aside from conventional considerations of reliability, are the latent variables dynamic? What is the expected nature of the within-person variation? All of these issues have implications for the choice of assessment methods and for the purposes for which specific assessments are used.

Performance Assessment

The assessment of individual work-role performance may be I/O psychology's most difficult assessment requirement. J. T. Austin and Villanova (1992) provided ample documentation of the problem. Archival objective measures are few and far between and frequently suffer from contamination. Ratings, although they do yield meaningful assessments (W. Bennett, Lance, & Woehr, 2006; Conway & Huffcutt, 1997), tend to suffer from low reliability, method variance, contamination, and the possible intrusion of implicit models of performance held by the raters that do not correspond to the stated specifications of the assessment procedure (Borman, 1987; Conway, 1998). Alternatives to ratings have been methods such as performance in a simulator, performance on various forms of job samples (Campbell & Knapp, 2001), and using various indicators of goal accomplishment when goals are specified such that accomplishing them is virtually under the individual's total control (Pulakos & O'Leary, 2010).

In addition to these considerations, taking account of the purpose of assessment is also critical. The three major reasons for assessing performance are (a) for research purposes that have no high-stakes consequences; (b) for developmental purposes that carry the assurance that low scores do not carry negative consequences; and (c) for high-stakes appraisal situations such as promotion, compensation, termination, and so forth. Most likely, different assessment methods would be appropriate for each. Also, depending on which of the three is operative, the same assessment procedure could produce different assessments. For example, raters could be trying to satisfy different goals when doing operational performance appraisals versus providing ratings for research purposes only. Murphy and Cleveland (1995) discussed these issues at some length. The overall moral is that the measurement purposes must never be confused.

Team Performance

Research, theory, and professional discussion regarding team effectiveness, team performance, the determinants of team performance, and the processes by which the determinants (independent variables) affect team performance (dependent variables) have expanded exponentially over the past 20 years (e.g., Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Kozlowski & Ilgen, 2006; Mathieu, Maynard, Rapp, & Gilson, 2008). However, most of the attention is given to the determinants of team performance and effectiveness and to the processes by which they have their effects. Modeling team performance itself for purposes of guiding assessment has received relatively little attention.

The dominant model is still that articulated by Hackman (1992), that is, that three major factors of group-team performance exist (as distinct from individual performance):

1. The first factor is the degree to which the team accomplishes its major substantive task goals. This factor is analogous to the technical factor for individual performance. No taxonomy of team goals exists, but it could include such things as meeting production goals, producing solutions to specific problems, developing policy, creating designs, modeling resource allocation decisions, and so forth.
2. The second factor is the degree to which team members feel rewarded by, or satisfied with, their role and committed to the team's goals so that they continue to commit effort toward team goal accomplishment. This factor is analogous to the effort-initiative factor in individual performance.
3. The third factor is the degree to which the team improves its resources, skills, and coordination over time.

By implication, assessment of team performance would involve assessment of these three factors. The last two factors are sometimes combined into a higher order factor referred to as team viability, or the team's capability to maintain its technical performance over time.

Unit and Organizational Effectiveness

Organizations, and organizational units, do have a bottom line. That is, by some set of value judgments, a set of outcomes is identified that the organization or unit wants to maximize, optimize, or at least maintain at certain levels, such as quantity or quality of output (be it goods or services), sales, revenue, costs, earnings, return on investment, stock price, asset values, and so forth. The outcomes deemed important are a management choice, and choices can vary across organizations and across time within organizations. For an educational organization, the outcomes could be number of students, graduation rates, time to degree, mean SAT or GRE scores for the student body, prestige of postgraduation job placements, and so forth. Again, by definition, the level and variation of such outcomes is the result of multiple determinants, in addition to individual performance. Although the term organizational effectiveness is used frequently in the I/O literature relative to both research and practice, attempts to model organizational or unit effectiveness for purposes of assessment have been sparse. An early taxonomy was developed by Campbell (1977), which was given a three-dimensional higher order structure by Quinn and Rohrbaugh (1983) and Cameron and Quinn (1999).

Productivity

Productivity, particularly with regard to its assessment, is a frequently misused term in I/O psychology. Its origins are in the economics of the firm, where it refers to the ratio of the value of output (i.e., effectiveness) to the costs of achieving that level of output. Holding output constant, productivity increases as the costs associated with achieving that level of output decrease. It is possible to talk about the productivity of capital, the productivity of technology, and the productivity of labor, which are usually indexed by the value of output divided by the cost of the labor hours needed to produce it. For the productivity of labor, it would be possible to consider individual productivity, team productivity, or organizational productivity. Assessment of individual productivity would be a bit tricky, but it must be specified as the ratio of performance level (on each major dimension) to the cost of reaching that level (on each major dimension). Costs could be reflected by number of hours needed or wage rates. For example, terminating high wage-rate employees and hiring cheaper (younger?) individuals who can do the same thing would increase individual productivity.
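A worked toy example of the labor-productivity ratio just described (all figures are invented for illustration):

```python
# Sketch: labor productivity as the value of output divided by the cost
# of the labor hours needed to produce it. Figures are invented.

def labor_productivity(output_value: float, labor_hours: float, wage_rate: float) -> float:
    """Value of output per dollar of labor cost."""
    return output_value / (labor_hours * wage_rate)

# Same output at lower labor cost -> higher productivity.
print(labor_productivity(100_000, 2_000, 25.0))  # 2.0
print(labor_productivity(100_000, 2_000, 20.0))  # 2.5
```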

Turnover

Turnover refers to the act of leaving an organization. Turnover can be voluntary or involuntary, as when an individual is terminated by the organization. Both voluntary turnover and involuntary termination can be good or bad depending on the circumstances. Depending on the work role, turnover could also vary as a function of determinants that operate at various times (e.g., variation in turnover could occur as a function of the initial socialization process, early vs. late promotions, vesting of retirement benefits).

For assessment purposes, great benefit would result if a latent structure for turnover could be specified in terms of the substantive reasons individuals leave. The beginnings of such a latent structure can be found in the integrative reviews of turnover research by Griffeth, Hom, and Gaertner (2000), Mitchell and Lee (2001), and Maertz and Campion (2004).
DEPENDENT VARIABLE ASSESSMENT FROM THE INDIVIDUAL'S POINT OF VIEW

Again, the defining characteristic is that higher scores on such variables are of value to the individual for his or her own sake. They are not of value because they correlate with or predict something else that is of value. Consequently, what is a dependent variable for the individual could be an independent variable for the organization.

Job Satisfaction

One taxonomy of such dependent variables valued by the individual is represented by the 20 dimensions assessed by the Minnesota Importance Questionnaire (Dawis & Lofquist, 1984), which are listed in Exhibit 22.3.

Within the theory of work adjustment (Dawis, Dohm, Lofquist, Chartrand, & Due, 1987; Dawis & Lofquist, 1984), the variables in Exhibit 22.3 are assessed in different ways for different reasons. The Occupational Reinforcer Pattern is a rating by supervisors or managers of the extent to which a particular work role provides outcomes representing each of the variables. The Minnesota Importance Questionnaire is a self-rating by the individual of the importance of being able to experience high levels of each of the 20 dimensions. The Minnesota Satisfaction Questionnaire is a self-rating of the degree to which the individual is satisfied with the level of each variable that he or she is currently experiencing. According to the theory of work adjustment, overall work satisfaction should be a function of the degree to which the work-role characteristics judged to be important by the individual are indeed provided by the work role, or job.
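A minimal sketch of this correspondence idea: overall satisfaction rises with the importance-weighted match between what the individual rates as important (MIQ-like) and what the work role is rated as providing (ORP-like). The scoring rule and the ratings below are illustrative assumptions, not the published MIQ/MSQ scoring procedures.

```python
# Sketch: a theory-of-work-adjustment-style correspondence index.
# The weighting rule and the 0-1 ratings are illustrative assumptions,
# not the published MIQ/MSQ scoring procedures.

def correspondence(importance: dict[str, float], provided: dict[str, float]) -> float:
    """Importance-weighted match between wanted and supplied outcomes (0-1)."""
    total_weight = sum(importance.values())
    matched = sum(w * provided[outcome] for outcome, w in importance.items())
    return matched / total_weight

importance = {"ability_utilization": 0.9, "security": 0.4, "variety": 0.7}
provided = {"ability_utilization": 0.8, "security": 0.9, "variety": 0.3}
print(round(correspondence(importance, provided), 2))
```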
Exhibit 22.3
The 20 First-Level Job Outcomes Incorporated in Dawis and Lofquist's (1984) Minnesota Theory of Work Adjustment

1. Ability utilization: The chance to do things that make use of one's abilities
2. Achievement: Obtaining a feeling of accomplishment and achievement from work
3. Activity: Being able to keep busy all the time, freedom from boredom
4. Advancement: Having realistic chances for promotion and advancement
5. Authority: Being given the opportunity to direct the work of others
6. Company policies and practices: Company policies and practices that are useful, fair, and well thought out
7. Compensation: Compensation that is fair, equitable, and sufficient for the work being done
8. Coworkers: Good interpersonal relationships among coworkers
9. Creativity: The opportunity to innovate and try out new ways of doing things in one's job
10. Independence: The chance to work without constant and close supervision
11. Moral values: Working does not require being unethical or going against one's conscience
12. Recognition: Receiving praise and recognition for doing a good job
13. Responsibility: The freedom to use one's own judgment
14. Security: Not having to worry about losing one's job
15. Social service: Opportunities to do things for other people as a function of being in a particular work role
16. Social status: The opportunity to be somebody in the community, as a function of working in a particular job and organization
17. Supervision-human relations: The respect and consideration shown by one's manager or supervisor
18. Supervision-technical: Having a manager or supervisor who is technically competent and makes good decisions
19. Variety: Having a job that incorporates a variety of things to do
20. Working conditions: Having working conditions that are clean, safe, and comfortable

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.

Exhibit 22.3 represents the literature's most finely differentiated portrayal of the latent structure of what individuals want from work. There are other portrayals. For example, a long time ago, Herzberg (1959) grouped 16 outcomes obtained via a critical incident procedure (he called it "story-telling") into two higher order factors variously called motivators and hygienes, or intrinsic and extrinsic. The Job Descriptive Index (Smith, Kendall, & Hulin, 1969) focuses on five factors: the nature of the work itself, the characteristics of pay, the characteristics of supervision, the nature of promotion opportunities, and the characteristics of one's coworkers. There have also been several measures of overall, or general, job satisfaction (e.g., Hoppock, 1935; Kunin, 1955), which might use one item or several items.
Job satisfaction is a complex construct, and assessment issues revolve around the number of latent factors; the nature of the general factor; whether the sum of the parts (i.e., adding factor scores) captures all the variance in a rating of overall satisfaction; the dynamics of within-person variation; whether the frame of reference should be a description of the individual's state, an evaluation of that state, or the affective response to the evaluation; and how levels of satisfaction should be scaled (e.g., see Hulin & Judge, 2003). Assessment must deal with all of these issues.

It is instructive, or at least interesting, to compare the 20 job characteristics listed in Exhibit 22.3
with other individual work outcomes that the list
does not seem to include but that have received
important research or assessment attention.
Examples follow.

Justice
A considerable literature exists on distributive and procedural justice (Colquitt, 2001; Colquitt, Conlon, Wesson, Porter, & Ng, 2001) that could be viewed as subfactors of Outcome 6 in Exhibit 22.3. Distributive justice refers to an individual's self-assessment of how well he or she is being rewarded by the organization. Procedural justice refers to the individual's assessment of the relative fairness of the organization's procedures for managing and dispensing rewards. A meta-analysis by Crede (2006) showed perceptions of procedural justice to have a somewhat higher mean correlation with overall job satisfaction than did distributive justice (.62 vs. .56) when correlations were corrected for artifacts.
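For reference, the core artifact correction in such meta-analyses is the classic correction for unreliability (Spearman's disattenuation formula); a minimal sketch, with reliability values invented for illustration:

```python
# Sketch: correcting an observed correlation for unreliability in both
# measures (disattenuation). Reliability values are invented.
from math import sqrt

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Estimated true-score correlation: r / sqrt(rxx * ryy)."""
    return r_xy / sqrt(rel_x * rel_y)

print(round(disattenuate(0.45, 0.80, 0.75), 2))  # 0.58
```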

Overall Well-Being
Several dependent variables in the workplace, from the individual's point of view, go beyond job satisfaction and perceived distributive and procedural justice to include additional facets of overall well-being, such as the following:

- Physical health: In terms of its relationship to work roles, physical health is most often talked about in terms of a safe physical environment (Tetrick, Perrewé, & Griffin, 2010), that is, protections from environmental hazards, effective safety procedures, manageable physical demands, and available preventive care for potential illness. Assessment could involve the independent measurement of such factors or the individual's perception of them.
- Mental and psychological health: Although positive psychological health associated with working is a valued outcome from the individual point of view, it presents assessment complications. After controlling for basic personality characteristics, the framework proposed by Warr (1994) could be adopted that would then seek to assess (a) the individual's level of happiness or unhappiness, (b) relative feelings of comfort versus anxiety, and (c) feelings of depression versus enthusiasm. Lurking in the background is the research on set points (e.g., Lykken, 1999), which has argued that individuals have a characteristic level of happiness or well-being that determines much of the variance in their reactions to the work environment on these dimensions.
- Work-family conflict: This literature is growing, and the implication is that individuals value a work situation that does not produce undue conflict with family life or nonwork relationships. The determinants of work-family conflict are many and varied, and several models have been offered relating the determinants to work-family conflict (e.g., J. E. Edwards & Rothbard, 2000; Greenhaus & Powell, 2006; Grzywacz & Carlson, 2007). Some of the issues are whether work interferes with family or vice versa; whether the goals of the family and the goals of the individual at work are different; and the influence of gender (e.g., whether the man or woman stays home). The touchstone for assessment of the dependent variable is defining high scores as the perception (by the job holder) that work and family demands are in balance. That is, work demands do not degrade family goals, and family demands do not degrade individual work goals. Consequently, assessment should take into account how well the two sets of goals are aligned, and they may not be weighted equally (e.g., for economic reasons). Regardless of the relative weights, Cleveland and Colella (2010) made a strong argument for why both sets of goals strongly influence work-family conflict assessments.
- Work-related stress: The study of work stress has generated a very large literature (Sonnentag & Frese, 2003), and work stress is frequently offered as an important criterion variable because of the high frequencies with which it is reported (Harnois & Gabriel, 2000; Levi & Lunde-Jensen, 1996; National Institute for Occupational Safety and Health, 1999). Stress can be defined as a set of physiological, behavioral, or psychological responses to demands (work, family, or environmental) that are perceived to be challenging or threatening (Neuman, 2004). Assessment of individual stress levels is a more complex enterprise than assessment of job satisfaction, mental or physical health, or work-family conflict. The measurement operations could be physiological (e.g., cortisol levels in the blood), behavioral (e.g., absenteeism), psychological (depression), or perceptual (e.g., self-descriptions of stress levels), and the construct validity of any one of them is not assured given the complexities of modeling stress as a construct.
A somewhat overly simplistic model of stress as a criterion would be that the work-family situation incorporates potential stressors. Whether a potential stressor (e.g., a new project deadline) leads to a stress reaction is a function of how it is evaluated by the individual. For some, the new deadline might be threatening (e.g., it increases the probability of a debilitating failure or makes it difficult to care for a sick child). For others, it is merely an interesting challenge that will be fun to tackle. If potential stressors are evaluated as threatening, stress levels go up unless the individual has the resources to cope with them (Hobfoll, 1998). The Selye (1975) principle of optimum stress levels says that individuals need a certain amount of perceived stress to be optimally activated (Cooper, Dewe, & O'Driscoll, 2001). Similar models have been offered by Robert and Hockey (1997) and Warr (1987). However, if stress is too high, several counterproductive outcomes (labeled strains) can occur. These outcomes can be physical (fatigue, headaches), behavioral (reduced performance), or psychological (anxiety, sleep impairment). Consequently, assessment must choose among alternative measurement operations, must deal with the appraisal component (i.e., is a potential stressor actually a stressor?), and must make a case for the construct validity of the assessment of strains.
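A minimal sketch of the appraisal-moderated, inverted-U logic just described. The thresholds, weights, and quadratic activation curve are illustrative assumptions, not parameters from the cited models.

```python
# Sketch: appraisal-moderated stress with an optimum activation level.
# All numeric choices (weights, optimum, quadratic curve) are
# illustrative assumptions, not parameters from the cited models.

def perceived_stress(demand: float, threat_appraisal: float, coping: float) -> float:
    """A demand becomes stressful to the extent it is appraised as
    threatening and exceeds coping resources (all inputs 0-1)."""
    return max(0.0, demand * threat_appraisal - coping)

def activation(stress: float, optimum: float = 0.5) -> float:
    """Inverted-U: activation peaks at an optimum stress level."""
    return max(0.0, 1.0 - 4.0 * (stress - optimum) ** 2)

# Same deadline, different appraisals -> different stress and activation.
for appraisal in (0.2, 0.9):  # challenge vs. threat
    s = perceived_stress(demand=0.8, threat_appraisal=appraisal, coping=0.1)
    print(f"appraisal={appraisal}: stress={s:.2f}, activation={activation(s):.2f}")
```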

Individual Perspective: A Summary Comment

Job satisfaction, distributive and procedural justice, physical health, mental and psychological health, work-family conflict, stress, or simply the evaluation of overall well-being have been discussed as dependent variables in the work setting that are important to individuals. That is, most people value being satisfied with their work, being physically and psychologically healthy, achieving a balance between work life and nonwork life, and experiencing optimal stress levels. However, in the I/O psychology literature, these variables are usually not discussed as ends in themselves, but as independent variables that have an effect on the organization's bottom line (Cleveland & Colella, 2010; Tetrick et al., 2010). Depending on which perspective is chosen, the purpose of assessment is different, and the choice of assessment methods may differ as well.
INDEPENDENT VARIABLE LANDSCAPE

Compared with the dependent variable domain, the independent variable domain is a lush and verdant landscape, and one much more intensely researched and assessed. It has also been well discussed by others and is the subject of many recent handbooks (Farr & Tippins, 2010; Scott & Reynolds, 2010; Zedeck, 2010). What follows is a brief outline, primarily for the purpose of making certain distinctions that are discussed less often. As might be expected, the outline follows Campbell et al. (1993), Campbell and Kuncel (2001), and Campbell (2012).
The Campbell et al. (1993) model of performance posited two general kinds of performance determinants: direct and indirect. That is, individual differences in performance (either between or within) are a direct function of the current levels of performance-related knowledge and performance-related skills. There are different kinds of knowledge (e.g., facts, procedures) and different kinds of skills (e.g., cognitive, physical, psychomotor, expressive). The critical factor is that they are the real-time knowledge and skill determinants of performance. The only other direct determinants are motivational and are represented by three choices: (a) where to direct effort, (b) at what levels, and (c) for how long. All other performance determinants must exercise their effects by changing one or more of the direct determinants. It follows that a diagnosis of the direct causes of low or high performance must assess knowledge, skill levels, and choice behaviors that are specific to the work role's performance requirements in real time. For example, reading skill as a direct determinant refers to how well the individual reads the material required by the job in the work setting. Reading skill (ability?) as measured by the SAT is an indirect determinant.
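A minimal sketch of the direct-determinants logic: only knowledge, skill, and the three motivational choices enter the performance function, so the same low performance can be traced to different direct causes. The 0-1 scaling and the multiplicative combination rule are illustrative assumptions; Campbell et al. (1993) specify the determinants themselves, not this arithmetic.

```python
# Sketch: performance as a function of its direct determinants only.
# Knowledge, skill, and the three motivational choices are scored 0-1.
# The multiplicative combination is an illustrative assumption, not the
# Campbell et al. (1993) functional form.

def predicted_performance(knowledge: float, skill: float,
                          direction: float, level: float, persistence: float) -> float:
    motivation = direction * level * persistence  # the three choices
    return knowledge * skill * motivation

# Diagnosis: the same low performance can have different direct causes.
print(predicted_performance(0.9, 0.9, 1.0, 0.3, 1.0))  # low effort level
print(predicted_performance(0.3, 0.9, 1.0, 1.0, 1.0))  # low knowledge
```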
A multitude of indirect determinants of knowledge, skills, and choice behaviors exists, and a brief outline follows.

Traits: Abilities
The individual differences tradition in psychology in
general, and I/O psychology in particular, has
devoted much attention to the assessment of individual characteristics that are relatively stable over
the adult working years. Assessments of such characteristics are used to predict future performance for
selection and promotion purposes, predict who will
benefit from specific training or development experiences, predict performance failures, provide the
individual profiles needed to determine person–job
or person–organization fit, counsel individuals on
career options, and serve as control variables in a
wide variety of experiments on interventions (e.g.,
procedures for stress reduction). A brief outline of
the major trait domains follows. An overarching
distinction is made between abilities and skills
(assessed with so-called maximum performance
measures) and dispositions (assessed with typical
performance measures).
Cognitive abilities. The value of using cognitive
abilities to predict important dependent variables is
well documented, and general cognitive ability (g)
dominates (Ones, Dilchert, Viswesvaran, & Salgado,
2010; F. Schmidt & Hunter, 1998). The existence of
g in virtually any matrix of cognitive tests and the
correlation of near unity between the general factors
estimated from different test batteries (e.g., see
W. Johnson, Nijenhuis, & Bouchard, 2008) have been
well established. The nature of the latent subfactors that make up the general factor is not a totally
settled issue. The most comprehensive portrayal is
still that of Carroll (1993), who acknowledged g as
a single general factor that had eight (Carroll, 1993)
or 10 (Carroll, 2003) subfactors. This portrayal is
somewhat in opposition to that of Cattell (1971)
and Horn (1989), who argued for the crystallized g
and fluid g distinction with no general factor. Later
investigations (W. Johnson & Bouchard, 2005) have
tended not to support the crystallized g–fluid
g structure. W. Johnson and Bouchard (2005) reanalyzed several data sets, using more sophisticated
methods, and argued strongly that g has three subfactors: verbal, perceptual–spatial, and image rotation. However, a quantitative factor did not appear
as a fourth subfactor, which might be because of the
restriction of quantitative ability to simple number
facility in the test batteries.
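For readers who want the idea of a general factor made concrete, the following sketch illustrates how g emerges from the positive manifold of a small test battery. The correlation matrix is hypothetical, and the first principal component is used only as a crude stand-in for the factor-analytic methods actually used in this literature:

```python
# A minimal sketch of extracting a general factor from a battery of
# cognitive tests. The correlation matrix below is hypothetical, chosen
# only to illustrate the "positive manifold" among cognitive measures.
import numpy as np

tests = ["vocabulary", "arithmetic", "spatial", "memory"]
R = np.array([
    [1.00, 0.55, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.42],
    [0.45, 0.50, 1.00, 0.38],
    [0.40, 0.42, 0.38, 1.00],
])

# The first principal component of the correlation matrix serves as a
# crude proxy for g; all tests load positively on it.
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
g_loadings *= np.sign(g_loadings.sum())       # orient loadings positively

for test, loading in zip(tests, g_loadings):
    print(f"{test:12s} g loading = {loading:.2f}")
```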
The most finely differentiated picture of how g
could be decomposed is the comprehensive model
of human abilities proposed by Fleishman and Reilly
(1992), which is incorporated into O*NET (Peterson et al., 1999). It includes 21 cognitive abilities.
Although some evidence has been found for differential prediction of performance across different
jobs using cognitive ability subfactors (Rosse,
Campbell, & Peterson, 2001; Zeidner, Johnson, &
Scholarios, 1997), the incremental gains are small
compared with the variance accounted for by g.
However, even small gains are significant in the context of large-scale selection and classification in
large organizations. It is also true that the advantages of using specific subfactors rather than g for
particular measurement purposes have not been
evaluated against highly specific performance subfactors (e.g., operating specific kinds of equipment
that may require highly specific abilities).
Psychomotor abilities. The Fleishman and Reilly
(1992) taxonomy includes 10 specific psychomotor
abilities grouped into three higher order subfactors: (a) hand and finger dexterity and steadiness;
(b) control, coordination, and speed of multilimb
movements; and (c) complex reaction time and
speed of movement involving hands, arms, legs, or
all of these. Standardized performance-based tests
are available for each of the 10 specific abilities, and
they may (should?) be differentially important for
predicting performance or specific job tasks, such as
using a keyboard versus landing military jet aircraft
at sea. No data are available for this domain, but it is
interesting to speculate as to whether, for surgeons,
open incision surgery requires somewhat different
psychomotor abilities than robotic surgery.
Physical abilities. Although most occupations probably do not have specialized physical ability requirements, several key occupations (e.g., firefighter, police officer, certain military occupations) do. The
assessment of physical ability is also critical when
considering the suitability of people with disabilities
for various jobs. The latent structure of physical
abilities was first investigated comprehensively by
Fleishman and his colleagues (Fleishman, 1964;
Fleishman & Quaintance, 1984; J. Hogan, 1991;
Myers, Gebhardt, Crump, & Fleishman, 1993), who
eventually arrived at a six-factor latent structure
(i.e., static strength, explosive strength, dynamic
strength, stamina, trunk strength, and flexibility).
Because physical ability assessment has not
received as much research attention as cognitive
ability assessment, at least two critical issues should
be considered. First, any of the six factors may be
broken down into more specific subfactors (e.g.,
arm and shoulder strength vs. leg strength), and for
each specific factor, there are two or more specific
assessment techniques (e.g., lifting a weight off the
ground vs. pushing a weight along the ground).
Consequently, both the specific subfactors and the
assessment method are critical choices. Gebhardt
and Baker (2010) provided a thorough discussion of
these issues and the research pertaining to establishing the physical requirements of work roles.
Sensory abilities. Certain occupations have specialized requirements for visual and auditory abilities (e.g., airline pilot). The Fleishman and Reilly
(1992) taxonomy of sensory abilities incorporated
in O*NET includes nine factors (e.g., far vision, peripheral vision, sound localization, speech recognition), each of which could be assessed by several
different tests. For purposes of selection, certification, or licensure, criterion-referenced measurement
is particularly critical for sensory abilities. That is,
certain minimum levels of such abilities could be
required, and top-down scoring would not suffice.
Somewhat strangely, the Fleishman and Reilly
(1992), and consequently the O*NET, taxonomy
does not include taste or olfactory abilities. Given
the importance of marketing food and drink in current culture, this omission is potentially serious.
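To make the scoring distinction noted above concrete, the following sketch (hypothetical applicants, scores, and cutoff) contrasts top-down selection with the criterion-referenced minimum screen that sensory abilities often require:

```python
# A minimal sketch contrasting top-down scoring with criterion-referenced
# screening for a sensory ability such as far vision. All scores and the
# required minimum are invented for illustration.
applicants = {"A": 62, "B": 88, "C": 75, "D": 91, "E": 70}
n_openings = 2
required_minimum = 70  # hypothetical job-required minimum level

# Top-down selection takes the highest scorers, regardless of the cutoff.
top_down = sorted(applicants, key=applicants.get, reverse=True)[:n_openings]

# A criterion-referenced screen passes everyone at or above the minimum,
# regardless of rank.
criterion_referenced = [who for who, score in applicants.items()
                        if score >= required_minimum]

print("Top-down picks:        ", top_down)              # ['D', 'B']
print("Meets required minimum:", criterion_referenced)  # ['B', 'C', 'D', 'E']
```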
Speaking ability. O*NET includes only one such
ability, speech clarity, but others may exist as well
(e.g., speech modulation). Given the importance
of oral communication in many occupations, this
omission, too, would seem to be serious.
Other intelligences. The independent variable
assessment landscape is also dotted with numerous variables that might be best described as not
g (Lievens & Chan, 2010). The basic theme is
that important abilities exist that are independent
of g and that play a role in success at work but are
not part of mainstream research. The two most
prominent abilities in this category are practical intelligence (Sternberg, Wagner, Williams, &
Horvath, 1995), not to be confused with a higher
order construct labeled successful intelligence (which
includes creative, analytical, and practical intelligence; Sternberg, 2003), and emotional intelligence, measured either as cognitive ability (Salovey
& Mayer, 1990) or as personality (Bar-On, 1997).

The available evidence pertaining to these constructs has been reviewed at some length elsewhere
(Gottfredson, 2003; Landy, 2005; Lievens & Chan,
2010; Murphy, 2006). The overall conclusion must
still be that construct validity is lacking for measures
of these non-g intelligences and that they are in fact
better represented by other already existing variables. For example, a recent study by Baum, Bird,
and Singh (2011) evaluated a carefully constructed
domain-specific situational judgment test of how
best to develop businesses in the printing industry,
which was then called a test of practical intelligence.
With this juxtaposition, knowledge of virtually any
specific domain of job-related knowledge could be
labeled practical intelligence. What's in a name?

Traits: Dispositions
Still within the context of stable, or at least quasi-stable, traits, the I/O psychology independent variable landscape includes many constructs reflective of
dispositional tendencies, that is, tendencies toward
characteristic behavior in a given context. Personality, motives, goal orientation, values, interests, and attitudes are the primary labels for the different domains.
Personality. The assessment of personality dominates this landscape (Hough & Dilchert, 2010; see
also Chapter 28, this volume) in terms of both the
wide range of available assessment instruments
(R. Hogan & Kaiser, 2010) and the sheer amount
of research relating personality to a wide range of
dependent variables (Hough & Ones, 2001; Ones,
Dilchert, Viswesvaran, & Judge, 2007). The efficacy
of personality assessment for purposes of predicting
the I/O psychology dependent variables has had its
ups and downs, moving from "up" (Ghiselli, 1966) to "down" (Guion & Gottier, 1965) to "up" (Barrick & Mount, 1991, 2005), to uncertainty (Morgeson
et al., 2007), to reaffirmation (R. Hogan & Kaiser,
2010; Hough & Dilchert, 2010; Ones et al., 2007).
The ups and downs are generally reflective of how
the assessment of personality is represented (e.g.,
narrow vs. broad traits), which dependent variables
are of interest, how predictive validity is estimated,
and the utility ascribed to particular magnitudes of
estimated validity. The bottom line is that personality assessment is a very useful enterprise so long as
the inferences that are made are consistent with the
evidence pertaining to the dependent variables that
can be predicted by appropriate assessments.
The assessment of personality for predictive or
diagnostic purposes is complex for at least the following reasons.

The measurement operations (i.e., items) can
come from different models of what constitutes
personality description. The lexical approach is
based on the words used in normal discourse to
describe behavioral tendencies in others. The
latent structure of such descriptors can then be
investigated empirically. The five-factor model of
Costa and McCrae (1992) is the dominant solution. A second model would be to consult more
basic theories of personality (e.g., Eysenck, 1967;
Markon et al., 2005; Tellegen, 1982; Tellegen &
Waller, 2000), write items reflective of the components specified by the theory, and investigate
their construct validity. The advocates of the
theory-based approach have argued that it produces a latent structure that is tied more closely
to biological substrates (DeYoung et al., 2010).
Both approaches can produce hierarchical latent
structures.
Whether the descriptors (i.e., items or scales)
are obtained by data mining normal discourse
or by following the specifications of a theory,
assessments of an individual can be obtained via
self-report or observer report. Although the bulk
of personality assessment in I/O psychology is
self-report, observer reports may be more predictive of various aspects of performance (e.g., Oh,
Wang, & Mount, 2011). Are self-reports and
observer reports different constructs? R. Hogan
and Kaiser (2010) argued the affirmative and
referred to self-descriptions as self-identity and to
observer descriptions as reputations.
The general agreement (DeYoung, Quilty, &
Peterson, 2007) is that the lexically derived Big
Five are themselves multidimensional and are
composed of distinct facets. Going the other
direction, combining two or three of the Big Five
into higher order composite dimensions (e.g.,
integrity) has also been useful. DeYoung (2006)
argued for two basic subfactors but rejected the
existence of a general factor. Whether an assessment should use composite dimensions, factors at the Big Five level of generality, or more
specific facets depends on the measurement
purpose.
At the Big Five level of generality, there is considerable agreement that the five-factor model is
deficient and does not include additional important constructs such as religiosity, traditionalism
or authoritarianism, and locus of control (Hough &
Dilchert, 2010).

Motives or needs. Alderfer (1969), Maslow
(1943), McClelland (1985), Murray (1938), White
(1959), and others have offered models of the latent
structure of human motives, or needs. Explicitly, or
by implication, motives are defined as inner states
that determine the outcomes that people strive to
achieve or strive to avoid. The strength of a motive
determines the strength of the striving. Different
motives are associated with different classes of
outcomes (e.g., outcomes that satisfy achievement
needs vs. outcomes that meet social needs).
Although the distinctions between the intensity
of characteristic behavioral tendencies (personality)
and the strength of striving for specific outcomes
(motives) are not always perfectly clear, the assessment methods have been different enough to warrant considering them separately. For example,
within I/O psychology the projective techniques
(ambiguous pictures) used by McClelland (1985) to
assess need achievement and fear of failure and the
sentence completion scales used by Miner (1977) to
assess the motivation to manage are not personality
scales in the sense of the NEO Personality Inventory, California Psychological Inventory, or Multidimensional Personality Questionnaire. Motive
assessment has more specific referents (for more
information on projective measures, see Volume 2,
Chapter 10, this handbook).
Goal orientation. A very specific instantiation
of motive assessment that has received increasing
attention in I/O psychology is the assessment of
goal orientation as it has developed from the
work of Dweck and colleagues (Dweck, 1986;
Elliott & Dweck, 1988). Initially, two orientations
(motives) were posited in the context of training
and instruction. A performance orientation characterizes individuals who strive for a desirable final
outcome (e.g., final grade). Similar to McClelland
(1985), the goal is to achieve the final outcomes that
the culture defines as high achievement. By contrast,
a mastery or learning orientation characterizes individuals who strive to learn new things regardless of
the effort involved, the frequency of mistakes, or
the nature of the final evaluation. It is learning for
learning's sake.
As noted by DeShon and Gillespie (2005), agreement on the nature of goal orientation's latent structure, and on whether it is a trait, quasi-trait, or state
variable, is not uniform. Considerable research has
focused on whether learning and performance orientations are bipolar or independent and whether one
or both of them are multidimensional (DeShon &
Gillespie, 2005). The answers seem to be that they
are not bipolar and that performance orientation
can be decomposed into performance orientation positive (the striving toward final outcomes defined as achievement) and performance orientation negative (the striving to avoid final outcomes defined as failure). One major implication is that
performance-oriented people will avoid situations in
which a positive outcome is not relatively certain
and that learning-oriented individuals will relish the
opportunity to try, regardless of the probability of a
successful outcome. Assessment of goal orientations
is still at a relatively primitive stage (Payne, Youngcourt, & Beaubien, 2007) and has not addressed the
issue of whether learning or performance orientations are domain specific. For example, could an
individual have a high learning orientation in one
domain (e.g., software development) but not in
another (e.g., cost control)? Also, the question of
whether goal orientation is trait or state has not
been settled. However, even though assessment is
primitive, research has suggested that goal orientation is an important determinant of performance
and satisfaction in training and in the work role
(Payne et al., 2007).
Interests. Interest assessment receives the most
attention within the individual, not the organizational, perspective and is a major consideration in
vocational guidance, career planning, and individual
job choice. It has also played a role, albeit smaller, in
personnel selection and classification on the basis of
the notion that individuals will devote more attention and effort to things that interest them, other
things being equal, including the mastery of relevant
skills (Van Iddekinge, Putka, & Campbell, 2011).
Assessment of interests is dominated by two
inventories, the Self-Directed Search (Holland,
1994) and the Strong Interest Inventory (Harmon,
Hansen, Borgen, & Hammer, 1994). The Self-Directed Search portrays interest via the now-familiar RIASEC (realistic, investigative, artistic,
social, enterprising, and conventional) hexagon,
which says that the latent structure of interests is
composed of six factors with a particular pattern of
intercorrelations. The RIASEC profiles can be used
to characterize both individuals and jobs or occupations. A profile for an occupation is supposedly
indicative of the degree to which the occupation will
satisfy each of the six interest areas. Holland (1997)
viewed the Self-Directed Search as a measure of personality and essentially subsumed interests within
the overall domain of personality. The Strong Interest Inventory uses empirical weighting to differentiate individuals in an occupation from people in
general on preferences for specific activities, school
subjects, and so forth. Such preferences are not
viewed as synonymous with personality. The Strong
Interest Inventory is also scored in terms of 20 basic
interest dimensions that have relatively low correlations with personality measures (Sullivan & Hansen, 2004). Whether interests account for
incremental variance in the dependent variables,
when compared with personality or cognitive ability, has only begun to be researched (see Van
Iddekinge et al., 2011; for more information on the
assessment of interests, see Volume 2, Chapter 19,
this handbook).
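The incremental variance question is typically framed as a hierarchical regression. The sketch below uses simulated data; all variable names and effect sizes are invented for illustration, not drawn from the studies cited above:

```python
# A minimal sketch of incremental validity: does adding interest scores to
# cognitive ability and personality increase the squared multiple
# correlation (delta R^2) in predicting a performance criterion?
import numpy as np

rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=n)             # general cognitive ability
conscientiousness = rng.normal(size=n)
interest_fit = rng.normal(size=n)  # hypothetical interest-job fit score
performance = (0.50 * g + 0.25 * conscientiousness
               + 0.15 * interest_fit + rng.normal(scale=0.8, size=n))

def r_squared(predictors, y):
    """R^2 from an OLS regression with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([g, conscientiousness], performance)
r2_full = r_squared([g, conscientiousness, interest_fit], performance)
print(f"R^2 base  = {r2_base:.3f}")
print(f"R^2 full  = {r2_full:.3f}")
print(f"delta R^2 = {r2_full - r2_base:.3f}")  # increment attributable to interests
```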
Values. Although defining values presents the
usual difficulties of choosing from among alternatives, Chan (2010) presented a careful synthesis.
Values seem most usefully defined as "the individual's stable beliefs that serve as general standards by
which he or she evaluates specific things, including
people, behaviors, activities, and issues" (Chan, 2010,
p. 321). By this specification, which distinguishes
values from personality, motives, and interests, the
assessment of values can play an important role in
career planning, specific job choice, and decisions to
stay from the individual's perspective and in personnel selection, person–organization fit, organizational commitment, and turnover from the organization's
perspective.
The latent structure of values in the context of
work has not been studied very intensively. As
noted by Chan (2010), the taxonomy produced by
Schwartz and Bilsky (1990) is perhaps the most
useful. It has 10 values dimensions for describing
individuals and seven dimensions describing culture, for comparative purposes. Another structure
is provided by Cooke and Rousseau (1988). In general, research on values and the development of
methods for the assessment of values in the work
context needs more attention in I/O psychology.
Values as indicators of cultural distinctions across countries are another matter. Considerable research
has been done using Hofstede's dimensions, and a
comprehensive meta-analysis of these dimensions
has been provided by Taras, Kirkman, and Steel
(2010).

The State Side


The independent variables noted so far have been designated as trait variables that are relatively stable over
the individual's work life, or at least the major portion
of it. I/O psychology also deals with a complex structure of independent variables that are more statelike.
That is, they are to some degree malleable, if not
dynamic, as the result of situational effects, planned
or unplanned. State variables are no less important
than trait variables in explaining individual differences in the critical dependent variables, and the
interaction between trait and state should be considered as well. The important state variables also tend to
mirror the ability versus disposition distinction. That
is, for some state variables, the assessment of maximum performance is the goal, whereas for others, the
assessment of representative or typical dispositional
states is the goal. More concretely, the distinction is
between knowledge and skill versus attitudes and the
cognitive regulation of choice behavior. However, for
both abilities and dispositions, the distinctions
between state and trait are developmentally complex.
Ackerman (2000), Ackerman and Rolfhus (1999),
Kanfer and Heggestad (1997), and Lubinski (2010)
have provided a roadmap.

Knowledge and Skill


Specifications for knowledge and skill are elusive.
What follows is an elaboration on Campbell and
Kuncel (2001) and an attempt to distinguish among
(a) declarative knowledge, (b) proceduralized
knowledge, (c) skill, and (d) problem solving. It is
meant to be consistent with Anderson (1987) and
Simon (1992). The nature of competencies is a
separate issue.
Declarative knowledge is knowledge of labels and
facts pertaining to objects, events, processes, conditions, relationships, rules, if–then relationships, and
so forth. As in the Anderson (1987) framework,
declarative knowledge is distinguished from proceduralized knowledge, which refers to knowing how
something should be done (e.g., How should shingles be put on a roof? How should a correlation
matrix be factor analyzed? How should a golf club
be swung?). In contrast to knowing how to do
something, skill refers to actually being able to do it.
Sometimes the distinction between proceduralized
knowledge and skill is relatively small (e.g., knowing how to factor analyze a matrix vs. actually doing
it), and sometimes it is huge (e.g., knowing how to
swing a three-iron and actually being able to do it
at some reasonable level of proficiency; note the qualifier: skills are not dichotomous variables).
Consequently, a skill can be defined as the application of declarative and proceduralized knowledge
capabilities to solve structured problems and accomplish specified goals. That is, the problems or goal
accomplishments at issue have known (i.e., correct)
solutions and known ways of achieving them. The
issue is not whether the problems or specified goals
are easy or difficult; it is whether correct solutions
can be specified.
The capabilities commonly labeled as problem
solving, critical thinking, or creativity should be set
apart from a discussion of knowledge and skill.
Although these capabilities appear frequently in
competency models and other forms of knowledge,
skills, and abilities lists, they are seldom, if ever,
given a concrete specification, seemingly because
everyone already knows what they are. Consequently, whether problem solving, creativity, and
critical thinking are intended as trait or state variables is not clear. That is, are they distinct from general cognitive ability, and can they be enhanced via
training and experience? Attempts to assess these
capabilities must somehow deal with this lack of
specification.
Following Simon (1992), problem solving could
be defined as the application of knowledge and skill
capabilities to the development of solutions for ill-structured problems. Ill-structured problems are
characterized as problems for which the methods
and procedures required to solve them cannot be
specified with certainty and for which no correct
solution can be specified a priori. Generating solutions for such problems is nonetheless fundamentally and critically important (e.g., What should be
the organization's research and development strategy? What is the optimal use of training resources? How can the coordination among teams be maximized?). Specified in this way, a problem-solving
capability is important for virtually all occupations,
which invites a discussion of how it can be developed and assessed. The literature on problem solving within cognitive psychology in general, and with
regard to the study of expertise in particular, is reasonably large (Ericsson, Charness, Feltovich, &
Hoffman, 2006). To make a long story brief, the
conclusions seem to be that (a) there is no general
(i.e., domain-free) capability called problem solving
that can be assessed independently of g; (b) problem-solving expertise, as defined earlier, is domain specific; (c) expert problem solvers in a particular
substantive or technical specialty simply know a lot,
and what they know is organized in a framework
that makes it both useful and accessible; and (d)
experts use a variety of heuristics and cues correctly
to identify and structure problems, determine what
knowledge and skills should be applied to them, and
judge which solutions are useful.
Currently, expert problem solving is viewed as a
dual process (Evans, 2008). That is, solutions are
either retrieved from memory very quickly, seemingly with minimal effort and thought, or a much more labor-intensive process occurs: exploring and defining the problem, thinking about and evaluating potential solutions, and finally settling on a solution or course of action. The latter process is not a serial
progression through a specific series of steps, but it
is an organized effort to use the expert's fund of
knowledge, skills, and strategies in a useful way.
The dual-process models are not strictly analogous to automatic versus controlled processing distinctions (Ackerman, 1987). The distinction is more
between identifying a solution very quickly versus
identifying one more deliberately. Different brain
processes are involved, as evidenced by functional
magnetic resonance imaging studies (Evans, 2008).
Some investigators (e.g., Salas, Rosen, & DiazGranados, 2010) have been quick to label the fast process "intuition" and insert it into competency models, knowledge, skills, and abilities lists, and the like, again with virtually no specifications for what intuition is. It is another example of an important word
from general discourse causing assessment problems
for I/O psychology when attempting to incorporate
it in research or practice.
Following Simon (1992), Kahneman and Klein
(2009) demystified intuition by defining it as a process that occurs when an ill-structured problem to
be solved exists, and the problem situation provides
cues that the expert can use to quickly access relevant information stored in memory that provides a
useful solution. Virtually by definition, intuitive
expertise must be based on a large, optimally structured base of information and on identifying the
most valid situational cues. There is no magic in
intuition. With regard to solving ill-structured problems, the distinction between quickly accessing a
useful solution (i.e., intuition) and being more
deliberative is not a clear dichotomy. A final solution might be produced quickly but then subjected
to varying degrees of deliberation.
Solving structured problems (i.e., exhibiting a
skill as defined earlier) is a somewhat different phenomenon. Certain (but certainly not all) skills can
be practiced enough so that they do become automatic (Ackerman, 1988) and can be used without
effort or conscious awareness. However, many skills
will always remain a controlled or deliberative process (e.g., creating syntax). Experts do it more
quickly and more accurately than other people, but
not automatically.
Creativity
What then are creativity and critical thinking?
Answering such questions in detail is beyond the
scope of this chapter, but the following discussion
seems relevant vis-à-vis their assessment. Comprehensive reviews of creativity theory and research are
provided by Dilchert (2008), Runco (2004), and
Zhou and Shalley (2003).
Creativity has been assessed as both a cognitive
and a dispositional trait, as in creative ability and
creative personality. Both cognitive- and personality-based measures have been developed via both
empirical keying (e.g., against creative vs. noncreative criterion groups) and homogeneous, or
construct-based, keying. Meta-analytic estimates of
the relationships between cognitive abilities and creative ability and between established personality
dimensions (e.g., the Big Five) and creative personality scales are provided by Dilchert (2008) as well as
the correlations of creative abilities and creative personality dimensions with measures of performance.
Within a state framework, creativity can also be viewed as a facet of ill-structured problem-solving
performance (e.g., George, 2007; Mumford, Baughman, Supinski, Costanza, & Threlfall, 1996). Here,
the difficulty is in distinguishing creative from
noncreative solutions. The specifications for the distinction tend not to go beyond stipulating that creative solutions must be both unique, or novel, and
useful (George, 2007; Unsworth, 2001). That is,
uniqueness by itself may be of no use. In the context
of problem-solving performance, is a unique (i.e.,
creative) solution just another name for a new solution, or is it a distinction between a good solution
and a really good solution (i.e., the latter has more
value than the former, given the goals being pursued)? In general, creativity as a facet of a problem-solving capability does not seem unique. Attempting to assess creative expertise as distinct from high-level expertise may not be a path well chosen.

Critical Thinking
Similar specification problems characterize the
assessment of critical thinking, which has assumed
rock-star construct status in education, training, and
competency modeling (e.g., Galagan, 2010; Paul &
Elder, 2006; Secretary's Commission on Achieving
Necessary Skills, 1999; Stice, 1987). Many, many
definitions of critical thinking have been offered in a
wide variety of contexts ranging from the Socratic
tradition, to the constructivist perspective in education, to economic theory, to problem solving in the
work role, to the value-added assessment of education, and to the scientific method itself. In all of
these, critical thinking is regarded, explicitly or
implicitly, as a state variable. That is, it is something
to be learned. Moreover, it could be regarded as a
cognitive capability or as a motivational disposition
(i.e., people differ in the degree to which they want
to think critically). Perhaps the former is a prerequisite for the latter.
Setting aside those specifications that are so general as to be indistinguishable from thinking, problem solving, or intelligence itself, the defining
characteristic of critical thinking seems to be a disposition to question the validity of any assertion
about facts, events, ongoing processes, forecasts of
the future, and so forth and to ask why the assertion
was made. The form of the questioning (i.e., critical
thinking) relies on the canons of rationality, logic,
and the scientific method and on domain-specific
knowledge. That is, to think critically is to always
question the truth value of a statement (a disposition) and to analyze (a cognitive capability) the
basis on which the statement is made.
Such a specification invites a consideration of
whether such a thing as a general critical thinking
skill exists, or whether it must always be substantially domain specific. That is, is it even possible to
talk about critical thinking independently of content
domain? This is the same issue discussed earlier in
the context of problem-solving capabilities and
creativity.
The assessment of critical thinking is most often
via rater judgment and less often by standardized
tests (Ennis, 1985; Ewell, 1991; Steedle, Kugelmass,
& Nemeth, 2010). One area of research that has
confronted both the general versus domain-specific
issue and rated versus tested assessment is the development of the value-added approach to the assessment of educational outcomes (Liu, 2011). This
effort has been in progress for some 30 or more
years but has surged recently as a means for assessing teacher effects (kindergarten–Grade 12) on
student achievement and the college–university
effect on undergraduate learning (Klein, Freedman,
Shavelson, & Bolus, 2008). The latter is perhaps
more relevant and involves the assessment of gains
on certain general skills (critical thinking being a major one) as a function of a college or university
education. Three principal assessment systems are
available (Banta, 2008): the Collegiate Assessment
of Academic Proficiency from American College
Testing, the Measure of Academic Proficiency and
Progress from the Educational Testing Service, and
the Collegiate Learning Assessment from the
Council for Aid to Education. The first two have a
multiple-choice format, but the third uses open-ended (i.e., written) responses to three scenarios
involving (a) taking and justifying a particular position on an issue, (b) critiquing and evaluating a particular position on an issue, and (c) performing the
tasks in an in-basket simulation. The responses are
scored by expert raters to yield scores on problem
solving, analytic reasoning, critical thinking, and
writing skills. The stated expectation is that the college or university experience should increase such
skills, and schools can be ranked in terms of the
extent to which they do so (Klein et al., 2008).
Research so far has suggested that scores on such
measures do go up from freshman to senior status,
but it has been difficult to extract more than one
general factor, and the construct validity of the general factor has not been clearly established.
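A minimal sketch of the value-added logic may help here. The data are simulated; the school names, effect sizes, and the use of a single entering-ability covariate are all hypothetical simplifications of systems such as the Collegiate Learning Assessment:

```python
# Regress senior-year scores on entering ability, then treat each school's
# mean residual as its estimated "value added." All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_per_school, schools = 200, ["School A", "School B", "School C"]
school_effect = {"School A": 0.20, "School B": 0.00, "School C": -0.10}

rows = []
for school in schools:
    entering = rng.normal(size=n_per_school)  # e.g., a standardized SAT score
    senior = (0.7 * entering + school_effect[school]
              + rng.normal(scale=0.5, size=n_per_school))
    rows += [(school, e, s) for e, s in zip(entering, senior)]

entering_all = np.array([r[1] for r in rows])
senior_all = np.array([r[2] for r in rows])

# Pooled simple regression of senior scores on entering scores.
slope, intercept = np.polyfit(entering_all, senior_all, 1)
residuals = senior_all - (intercept + slope * entering_all)

# A school's mean residual is its estimated value added beyond entering ability.
for school in schools:
    mask = np.array([r[0] == school for r in rows])
    print(f"{school}: estimated value added = {residuals[mask].mean():+.2f}")
```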
The moral here is that for assessment purposes,
problem solving, creativity, and critical thinking are
complex and extremely difficult constructs to specify. They are particularly difficult to specify in a
domain-free context. Moreover, is the domain-free
context even the most relevant for assessment in I/O
psychology? These issues should not be approached
in a cavalier fashion, such as listing them in a competency model without thorough specification.

Latent Structure of Knowledge and Skills (as Determinants of Performance)
For the assessment of individual differences in
domain-specific knowledge and skills, a distinction
can be made between the direct real-time knowledge
and skills determinants of performance in a work
role and the knowledge and skills requirements that
are assessed before being hired. The former might be
assessed for diagnostic or developmental purposes
and the latter for predictive purposes. However, the
latter may also serve as a prerequisite for the former
and, as asserted in a previous section, the latter
(indirect) can only influence performance by influencing the former (direct).
In contrast to abilities, the substantive latent
structure or structures of knowledge and skills have
received scant attention. Part of the problem is simply the almost limitless number of possibilities and
the difficulty of choosing the appropriate levels of
generality or specificity. That is, many, many knowledge and skills domains exist, and they may be
sliced very coarsely or very finely.
Content-based knowledge taxonomies do exist.
A relatively general one is included in O*NET and
consists of 38 knowledge domains that are primarily
focused on undergraduate curriculum areas (e.g.,
psychology, mathematics, philosophy, physics). As
noted by Tippins and Hilton (2010), the knowledge
requirements for many skilled trades, or technical
specialties not requiring a college degree, do not
seem to be represented. A taxonomy-like structure
that does represent the non-bachelor's-degree specialties is the compilation of technical school curricula known as the Classification of Instructional Programs
maintained by the U.S. Department of Education.
Knowledge taxonomies specific to particular
classes of occupations have also been developed via
comprehensive job analysis efforts over a period of
years by the U.S. Office of Personnel Management
(2007). To date, they cover these classes of
occupations:

professional and administrative,
clerical,
technical,
executive or leadership,
information technology, and
science and engineering.

Collectively, they are a part of the Office of Personnel Management's MOSAIC system and constitute a much more complete taxonomy of job
knowledge requirements than the O*NET.
Portraying the taxonomic structure for direct
and indirect skills requirements is even more
problematic than it is for knowledge. O*NET provides a taxonomy of 35 skills that are defined as
cross-occupational (i.e., not occupation specific)
and that vary from the basic skills such as reading,
writing, speaking, and mathematics, to interpersonal
skills such as social perceptiveness, and to technical
skills such as equipment selection and programming. As noted by Tippins and Hilton (2010), the
O*NET skills are very general in nature and generally lacking in specifications. Moreover, two of the
35 O*NET skills are complex problem solving and
critical thinking, the limitations of which were discussed earlier. Again, a wider set of more concretely
specified skills is included in the Office of Personnel Management's MOSAIC system, but only for certain designated occupational groups.
Because the skills gap has been such a dominant
topic in labor market analyses (e.g., Davenport,
2006; Galagan, 2010; Liberman, 2011), one might
expect the skills gap literature to provide an array of
substantive skills that are particularly critical for
assessment. It generally does not. Virtually all skills
gap information is obtained via employer surveys in
response to items such as "To what extent are you experiencing a shortage of individuals with appropriate technical skills?" However, the specific technical skills in question are seldom, if ever, specified.
Skills such as leadership, management, customer
service, sales, information technology, and project
management are as specific as it seems to get.
The purposes for which knowledge and skill
assessments might be done are, of course, varied. It
could be for selection, promotion, establishing
needs for training and development, or certification
and licensure, all from the organizational perspective. From the individual perspective, it could be for
purposes such as job search, career guidance, or
self-managed training and education. For organizational purposes, the lack of a taxonomic structure
may not be a serious impediment. Organizations can
develop their own specific measures to meet their
needs, such as specific certification or licensure
examinations. However, for individual job search or
career planning purposes, the lack of a concrete and
substantive taxonomic structure for skills presents
problems. Without one, how do individuals navigate the skills domain when planning their own
education and training or matching themselves with
job opportunities?

State Dispositions
By definition, and in contrast to trait dispositions,
state dispositions are a class of independent variables that determines volitional choice behavior
in a work setting but that can be changed as a
result of changes in the individuals environment.
Disposition-altering changes could be planned
(e.g., training) or unplanned (e.g., peer feedback).
A selected menu of such state dispositions follows.

Job Attitudes
There are many definitions of attitudes (Eagly &
Chaiken, 1993), but one that seems inclusive
stipulates that attitudes have three components:
First, attitudes are centered on an object (e.g., Democrats, professional sports teams, the work you do);
second, an attitude incorporates certain beliefs
about the object (e.g., Democrats tax and spend,
professional sports teams are interesting, the work
you do is challenging); and third, on the basis of
one's beliefs, one has an evaluative–affective
response to the object (e.g., Democrats are no good,
professional sports teams are worth subsidizing, you
love the challenges in your job). The evaluative–affective reaction is what influences choice behavior
(e.g., you vote Republican, you vote for tax subsidies for a professional sports stadium, you will work
hard on your job for as long as you can).
Job satisfaction. The job attitude that has dominated both the I/O research literature and human
resources practice is of course job satisfaction, which
was discussed earlier in this chapter as a dependent
variable. However, used as an independent variable,
the correlation between job satisfaction and both
performance and retention has been estimated literally hundreds of times (Hulin & Judge, 2003) using
the same assessment procedures discussed previously, and the same issues apply (e.g., Weiss, 2002).
In addition to job satisfaction, several other work
attitudes have received attention for both research
and application purposes.
Commitment. As an attitude, commitment in a
work setting can take on any one of several different
objects, and it is possible to assess commitment to
the organization, the immediate work group, an
occupation or profession, one's family or significant
other, and entities outside of the work situation
such as an avocation or civic responsibility. Beliefs
about any one of these attitude objects could lead to
positive or negative affect that influences decisions
to commit effort for short- or long-term durations.
The assessment issues revolve around the differentiation of attitude intensity across objects and the
distinction between commitment to and satisfaction
with. That is, measures of job satisfaction and organizational commitment both yield significant correlations with turnover and performance (Hulin &
Judge, 2003), but does one add incremental variance
over the other (Credé, 2006)? Both the latent structure of commitment and its distinctiveness from
other attitudes are not settled issues.
Job involvement. Job involvement is variously
characterized as a cognitive belief about the importance of one's work, the degree to which it satisfies
individual needs of a certain kind (e.g., achievement, belongingness), or the degree to which an
individual's self-identity is synonymous with the
work he or she does (Brown, 1996; Kanungo, 1982;
Lodahl & Kejner, 1965). Consequently, it should be
related to job satisfaction, self-assessments of long-term performance, commitment to the occupation
(but perhaps not the organization), and intentions
to stay or leave.
Job engagement. Job engagement is currently a hot
topic, as evidenced by at least two recent handbooks
(Albrecht, 2010; Bakker & Leiter, 2010) and a major
book (Macey, Schneider, Barbera, & Young, 2009).
In their focal article in the journal Industrial and
Organizational Psychology: Perspectives on Science
and Practice, Macey and Schneider (2008) made a
concerted attempt to define state engagement, which
was characterized as an evaluative or affective state
regarding one's job that goes beyond simply being
satisfied, committed, or involved and reflects the
individual's total passion and dedication for his or
her work and a willingness to be totally immersed in
it. The article elicited 13 quite varied responses that
illustrated the major assessment issues with which
such constructs must deal, in both research and
practice. For example, what is the latent structure of
this construct? Is engagement a dispositional trait,
an affective state, or a facet of performance itself?
Do measures of engagement account for unique variance over and above satisfaction and commitment?
Although managements tend to view engagement as
an important construct (Masson, Royal, Agnew, &
Fine, 2008; Vosburgh, 2008), its assessment must
deal with the preceding issues. Christian, Garza,
and Slaughter (2011) reported a meta-analysis that
engages some of the issues. Although the number
of studies is not great, and there is variation in the
measures of engagement, the evidence is supportive
of unique variance and some incremental predictive
validity that could be attributed to engagement (for
further discussion of job satisfaction and related job
attitudes, see Chapter 37, this volume).
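For readers unfamiliar with the machinery behind such meta-analytic summaries, the following sketch shows the bare-bones psychometric logic: a sample-size-weighted mean correlation, with an optional correction for unreliability. The study values below are invented for illustration, not taken from Christian et al. (2011):

```python
# A minimal sketch of bare-bones psychometric meta-analysis.
studies = [  # (observed r, sample size, rxx of predictor, ryy of criterion)
    (0.30, 120, 0.85, 0.75),
    (0.22, 300, 0.80, 0.70),
    (0.35,  90, 0.90, 0.80),
]

n_total = sum(n for _, n, _, _ in studies)

# Sample-size-weighted mean observed correlation.
r_bar = sum(r * n for r, n, _, _ in studies) / n_total
print(f"Sample-weighted mean r = {r_bar:.3f}")

# One common variant: correct each r for attenuation due to measurement
# unreliability, then re-average.
corrected = [(r / (rxx * ryy) ** 0.5, n) for r, n, rxx, ryy in studies]
rho_bar = sum(r * n for r, n in corrected) / n_total
print(f"Mean corrected rho     = {rho_bar:.3f}")
```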

Motivational States
Again, in contrast to trait dispositions, such as the
need for achievement, a class of more dynamic motivational states has become increasingly important, at
least in the research literature, as determinants of
choice behavior at work. Consider the following
sections.
Self-efficacy and expectancy. The Bandurian
notion of self-efficacy is the dominant construct
here and is defined as an individual's self-judgment
about his or her relative capability for effective task
performance or goal accomplishment (Bandura,
1982). Self-efficacy judgments are specific to particular domains (e.g., statistical analysis, golf) and
can change with experience or learning. Self-efficacy
is similar to, but not the same as, Vroom's (1964) definition of expectancy as it functions in his valence–instrumentality–expectancy model of motivated choice behavior. Expectancy is an individual's personal probability estimate that a particular level of
effort will result in achieving a specific performance
goal. It is very much intended as a within-person
explanation for why individuals make the choices
they do across time, even though it is most frequently
used, mistakenly, as a between-persons assessment.
Instrumentality (risk) and valence (outcome
value). From subjective expected utility to
valence–instrumentality–expectancy theory (Vroom,
1964) to prospect theory (Kahneman & Tversky,
1979), the concepts of risk assessment and outcome
value estimation are viewed as state determinants of
choice behavior. Individuals want to minimize risk
and maximize outcome value and will govern their
actions accordingly. However, as noted in prospect
theory, preferences for risk levels and outcome values
are discounted as a function of time. That is, individuals will take on greater risk but value specific
outcomes less the farther they are in the future. See
Steel and König (2006) for an integrated summary
of how such state dispositions influence choice
behavior. Such considerations have not yet played a
very large role in diagnostic assessment of the choice
to perform, but perhaps they should.
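In symbols, assuming the standard textbook renderings (notation varies across sources), Vroom's motivational force for an action i and Steel and König's (2006) temporally discounted utility can be written as

\[
F_i = E_i \sum_j I_{ij} V_j, \qquad
U = \frac{E \times V}{1 + \Gamma \times D},
\]

where E is expectancy, I is instrumentality, V is valence (outcome value), Γ is sensitivity to delay (impulsiveness), and D is the delay until the outcome. The discounting denominator captures the point just made: the farther an outcome lies in the future, the less it governs present choice.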
Core self-evaluation. Judge, Locke, Durham, and
Kluger (1998) have done considerable work on a set
of dispositions they referred to as core self-evaluations.
The set consists of general self-efficacy (i.e., a self-assessment of competence virtually regardless of the
domain), self-esteem, locus of control, and neuroticism. It is somewhat problematic as to whether these
facets can be considered trait or state, but they have
shown significant predictive validities (Judge, Van
Vianen, & DePater, 2004). Their distinction as separate facets is also not a settled issue and may depend
on the specific measure involved (Ferris et al., 2011;
Judge & Bono, 2001).
Mood and emotion. The dispositional effects of
mood and emotion on work behavior have received
increasing attention (Mitchell & Daniels, 2003;
A. M. Schmidt, Dolis, & Tolli, 2009; Weiss & Rupp,
2011). Specifications for these constructs are not
perfectly clear (Mitchell & Daniels, 2003), but in
general, mood is defined as an affective state that is
quite general, and emotion is usually specified as
having a specific referent. That is, one's mood is generally bad or good, but the individual is emotional
(positively or negatively) about specific things.
Why assess such dispositional states? The dominant
answer is that as state determinants of choice behavior, they help to explain the within-person variability in performance over relatively short periods of
time (e.g., Beal et al., 2005). Also, as advocated by
Weiss and Rupp (2011), the whole person cannot be
assessed without a consideration of these states.
Things that are known to be unknown. A list of
state determinants is probably not complete without noting that individuals are not aware of all of
the determinants of their choice behavior (e.g.,
Bargh & Chartrand, 1999). That is, people make
many choices, even at work, for which they cannot
explain the antecedents. Apparently, the reasons for
action are not in conscious awareness. Can they be
recovered via some form of assessment? That has
yet to be determined, but one avenue of investigation concerns priming effects (Gollwitzer, Sheeran,
Trötschel, & Webb, 2011).

Competencies (and Competency Modeling)
So far, this chapter has avoided the question of
whether competencies and competency modeling
are, or are not, a distinct sector of the I/O psychology assessment landscape. That is, is competency
modeling just knowledge, skills, abilities, and other
characteristics (KSAOs) and job analysis by another
name, or should it be set apart? Previous attempts to
settle this question have been inconclusive (e.g.,
Sackett & Laczo, 2003; Schippmann, 2010; Schippmann et al., 2000). In a further attempt at clarity, Campion et al. (2011) outlined best practices in
competency modeling and noted its most distinctive
features, in the context of the following definition of
competencies. That is, competencies are defined as
individual KSAOs, or collections of KSAOs, that are
needed for effective performance of the job in question. By this definition, competencies are determinants of performance, not performance itself.
Unfortunately, Campion et al.'s most detailed example of a competency (p. 240) is of project management, the specifications for which seem to be a clear
characterization of performance itself, such that the
example is not consistent with the definition. The
competency modeling literature has variously
referred to knowledge, skills, abilities, personal
qualities, performance capabilities, and many other
things (e.g., attitudes, personality, motives) as competencies (Parry, 1996). In the aggregate, very little
of the I/O psychology landscape is left out, and
Clouseau's dictum potentially complicates assessment: that is, if competencies are everything, then they risk being nothing.
Campion et al. (2011) attempted to keep that
from happening by abstracting best practices and
identifying what makes competency modeling
unique. Perhaps their most salient points are the
following:

Ideally, competency models attempt to develop
specifications for the levels of a competency that
distinguish high performers from average or low
performers. That is, to paraphrase, how do the
performance capabilities of expert performers differ substantively from the performance capabilities of nonexpert performers, and what level of
knowledge, skills, and dispositional characteristics
are required to exhibit expert performance levels?
This is very different from conventional job analysis, which tries to identify the components (e.g.,
tasks, work activities) of performance and predict
which KSAOs will be correlated (or linked) with
them. However, competency modeling is similar
to cognitive job analysis (Schraagen, Chipman, &
Shalin, 2000), which asks how experts, when
compared with novices, perform their jobs and
what resources (e.g., knowledge, skills, strategies)
they use to perform at that level. Cognitive job
analysts and competency modelers should interact
more. They have things in common.
High-level subject matter experts (e.g., executives) are used to first specify the substantive goals
of the enterprise and then identify (to the best
of their ability) both the performance and KSAO
competencies at each organizational level that
will best facilitate goal accomplishment. This is in
contrast to conventional job analysis, which asks
incumbents or analysts to rate the importance of
KSAOs for performance in a target job, without
reference to the enterprise's goals. Supposedly,
the incumbent or analyst subject matter experts
(SMEs) have these in mind when making linkage
judgments, but perhaps not.
If competencies are specified as in the first bullet,
the various components of the human resources
system can more directly address enterprise
objectives by focusing selection, training, and
development on obtaining the most critical competencies. In some respects, competency modeling is analogous to a needs analysis.

Even from this brief examination, it is apparent
that competency modeling carries a heavy assessment burden. This burden is complicated by a resistance to taxonomic thinking and a desire to specify
competencies in organizational language. These
choices may aid in selling competency modeling to
higher management, but they complicate specification for assessment. For example, how can previous
theory and research in leadership be used to define
and specify performance levels for "leading with courage"? Such a competency has a nice ring to it,
but what does it mean? Tett, Guterman, Bleier, and
Murphy (2000) attempted to address some of these
issues by beginning with the research literature and
conducting a systematic content analysis of published management competencies intended to reflect
performance capabilities. On the basis of SME judgments, they identified 53 competencies grouped into
10 categories and attempted a definition of each of
them. Although this effort represents a significant
step in the right direction, a few of the competencies
still seem more like personality characteristics than
performance capabilities (e.g., orderliness, tolerance). However, their juxtaposition of the SME-developed taxonomy derived from the literature
against the competency lists from several private
firms is interesting.
THE CONTEXT
So far, this chapter, in the interests of demonstrating
the complexity of assessment in I/O psychology, has
tried to outline the basic elements in the dependent
and independent variable landscape that invite measurement. Because the concern is assessment, the
complexities of research and practice focused on
estimating the interrelationships among, or
between, independent and dependent variables,
differential prediction across criteria, and interactive
effects are not addressed. These are questions that,
although very critical, do not themselves change the
measurement requirements for the variables
involved.
However, I/O psychology does make a big deal of
the influence of the context, or situation, on the
interrelationships among variables. Such contextual
variables are often referred to as moderators. Also,
the context can take on the status of an independent
variable. For example, the organizational climate or
culture might be hypothesized to influence individual choice behavior. Consequently, it is sometimes
important to assess the context itself. For example,
Scott and Pearlman (2010) strongly made the case
that assessment for organizational change must
always deal with assessment of the context.
The literature on the assessment of the context is
in fact very large. For example, in the course of
developing the specifications for the O*NET database, two taxonomies were created, one for the work
(job) context (Strong, Jeanneret, McPhail, Blakley,
& D'Egidio, 1999) and one for the organizational
context (Arad, Hanson, & Schneider, 1999). They
are both multilevel hierarchical taxonomies. The
work context is portrayed as having 39 first-order
factors and 10 second-order factors, such as how
people in work roles communicate, the position's
environmental conditions, the criticality of the work
role, and the pace of the work. The organizational
context is reflected by 41 first-order dimensions and
seven second-order factors such as organizational
structure, organizational culture, and goals.
Not surprisingly, because they were based on
extensive literature searches, the O*NETs work and
organizational context taxonomies subsume much
of the literature on organizational culture and climate (e.g., James & Jones, 1974; Ostroff, Kinicki, &
Tamkins, 2003), organization development (J. R.
Austin & Bartunek, 2003), and work design (J. R.
Edwards, Scully, & Brtek, 1999; Morgeson & Campion, 2003). Within O*NET, the context is assessed
via job incumbent ratings. Although a detailed
examination of the context literature cannot be presented here, the major features of the context that
dominate the need for assessment, and the issues
that assessment of the context creates, seem in the
author's opinion to be as follows.
1. the features of the work context that are identified as rewarding or need fulfilling, such as the
20 potential reinforcers assessed by the instrumentation of the Minnesota theory of work
and adjustment or the five job characteristics
specified by Hackman and Oldham's (1976) Job Diagnostic Survey;
2. the full range of performance feedback provided
by the job and organizational context;
3. the nature and quality of the components of the
organizations human resources system such as
selection procedures, compensation practices,
and training opportunities;
4. the nature of the organization's operating goals, such as those resulting from an application of Pritchard's productivity measurement system
(Pritchard, Holling, Lammers, & Clark, 2002);
5. leadership emphasis, in terms of whether it is
directive versus participative, formalized versus
informal, or centralized versus decentralized;
6. the complexity and variety of the technologies
used by the organization;
7. the relative criticality or importance of specific
jobs, positions, or roles;
8. the level of conflict among work roles or units;
9. the relative pace of work in terms of the characteristic levels of effort, intensity, and influences
of deadlines;
10. the physical nature of the environment (e.g.,
temperature, illumination, toxicity); and
11. the organizational climate and culture.
Number 11 perhaps deserves special mention.
The assessment of organizational climate and culture is an important topic in I/O psychology and has a long history (James & Jones, 1974; Lewin,
1951; Litwin & Stringer, 1968; Trice & Beyer,
1993). However, developing clear specifications for
what constitutes organizational culture and climate
has proven elusive (Denison, 1996; Ostroff et al.,
2003; Verbeke, Volgering, & Hessels, 1998). Verbeke et al. (1998) surveyed the published literature
and identified 32 distinct definitions of climate and
54 definitions of culture. However, a not-uncommon
distinction is as follows.
Organizational culture refers to the informal
rules, expectations, and norms that govern behavior,
in addition to written policies, that are both relatively stable and widely perceived. Organizational
climate generally refers to individual perceptions of
the impact of the work environment on individual
well-being (e.g., see James & Jones, 1974). By convention, psychological climate refers to each individual's judgment, whereas organizational climate refers
to the aggregate (e.g., mean) judgment across
individuals.
Besides the definitional problems, the assessment
of culture and climate must deal with at least the following issues as well. First, to what unit is culture
or climate referenced? Is it work group, department,
division, or organizational climate or culture? Second, are individuals asked to provide their own individual judgments about the nature of the climate or
culture or to predict the judgments of other organizational members? With either method, the construct of culture and climate requires some degree of
consensus or agreement among individuals, but how
much? Finally, is there a genuine latent structure of
distinctive subfactors for culture and climate, or
should both climate and culture be tied to any number of specific referents that would not necessarily
constitute a taxonomy of latent dimensions? James
and Jones (1974) argued for the former, and
Schneider (1990) argued for the latter. Standardized
survey questionnaires do exist for culture (e.g.,
Cooke & Rousseau, 1988) and for climate (Ostroff,
1993), and they tend to yield stable factor structures. Some evidence also exists for a general climate
factor that seems to represent the overall psychological safety and meaningfulness of the work environment (Brown & Leigh, 1996). The bottom line is
that any attempt to assess organizational culture and
climate, either as moderator variables or as independent variables in their own right, must address these
issues. As always, settling specification and assessment issues must come before considering what
mediates the relationship between culture or climate
and something else, or the boxes, arrows, and path
coefficients have little meaning.
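To make the aggregation and consensus questions concrete, the following sketch computes a unit-level (organizational) climate score as the mean of individual (psychological) climate judgments and indexes within-unit agreement. The data, unit labels, and the choice of a 5-point response scale are hypothetical, and the rwg-type index shown here is only one of several possible agreement statistics.

```python
import numpy as np

# Hypothetical single-item climate ratings on a 1-5 scale; each array holds
# the individual (psychological) climate judgments within one unit.
ratings = {
    "unit_a": np.array([4, 4, 5, 3, 4]),
    "unit_b": np.array([2, 5, 1, 4, 3]),
}

A = 5                           # number of response options
null_var = (A**2 - 1) / 12      # variance expected if responses were uniformly random

for unit, x in ratings.items():
    org_climate = x.mean()              # aggregate (organizational) climate score
    rwg = 1 - x.var(ddof=1) / null_var  # within-unit agreement; values near 1 indicate
                                        # consensus, and negative values are
                                        # conventionally truncated to 0
    print(f"{unit}: mean = {org_climate:.2f}, agreement = {rwg:.2f}")
```

Both units yield a mean, but in the second unit the low (here negative) agreement value signals that treating the aggregate as organizational climate would be hard to defend, which is exactly the "how much consensus?" problem noted above.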

Psychometric Landscape
Many features of the psychometric landscape, as
they pertain to measurement and assessment in I/O
psychology, are well known and have not been discussed, yet again, in this chapter. For several assessment purposes, psychologists are governed by the
Standards for Educational and Psychological Testing
(American Educational Research Association
[AERA], American Psychological Association
[APA], and National Council on Measurement
in Education [NCME], 1999) and the Society of
Industrial and Organizational Psychology's (2003) Principles, and all professionals are familiar with them. Also, all appropriate professionals should be familiar with the development of measurement theory beyond the confines of Spearman's (1904) classic model of true and error scores, which becomes
a special case of the generalizability model (e.g.,
Putka & Sackett, 2010), and with the basics of
item response theory (IRT) as well (Embretson &
Reise, 2000).
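As a reminder of that baseline (a standard textbook formulation rather than anything specific to this chapter), the Spearman model decomposes an observed score into true and error components,

```latex
X = T + E, \qquad \sigma^2_X = \sigma^2_T + \sigma^2_E, \qquad
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X},
```

and generalizability theory generalizes it by partitioning the undifferentiated error variance $\sigma^2_E$ into separate components for facets such as raters, items, and occasions, and their interactions.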
The most important principle from the psychometric landscape is that all assessment, whether for
research, practice, or high-stakes decision making,
must have evidence-based validity for the purpose
or purposes for which it is to be used. This principle
is as true for asserting that self-efficacy is being measured in a research study or for stating that critical
thinking is a required competency in a competency
model as it is for using a personality measure in
high-stakes personnel selection. A large literature
is also available on what kinds of evidence support
the various purposes for which assessment is done
(e.g., see AERA et al., 1999; Farr & Tippins, 2010;
McPhail, 2007; Scott & Reynolds, 2010). This literature should be part of all I/O psychologists' expert
knowledge base and is discussed in Chapter 4 of this
volume. However, for a somewhat contrarian view,
see Borsboom, Mellenbergh, and van Heerden
(2003) and Borsboom (2006).
There are also some less talked-about issues that
readers should think about. The first challenges the
very existence of applied psychology. It comes primarily from the work of Michell (1999, 2000, 2008)
and others (e.g., Kline, 1997) who asserted that psychometrics (i.e., measurement in psychology) is a
pathological science. For them, measurement in
psychology is pathological for two reasons. First,
virtually all constructs studied or used in psychology are not quantitative but are simply assumed to
be so without further justification. In this context,
being quantitative essentially means that the scores
representing individual differences on a variable
constitute at least an interval scale. Second, the lack
of justification for the assumption of such scale
properties is kept hidden (i.e., never mentioned).
Consequently, what can be inferred about individual
differences on nonquantitative variables, and what
do the relationships (e.g., correlations) among such
variables actually mean? For example, if psychologists assess training effects by administering an
achievement test before and after training and report
that training produced a gain of 0.5 standard deviations (i.e., d = 0.50), what does that mean in terms
of what or how much was learned? If neither job
satisfaction nor job performance is assessed on a
scale with at least interval properties, what does an
intercorrelation of .35 mean?
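The arithmetic in that training example is itself simple; using conventional effect size notation (with the pre/post labels supplied here only for illustration),

```latex
d = \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{SD_{\mathrm{pooled}}} = 0.50 .
```

The quantity is well defined computationally, but if the achievement scores are only ordinal, any monotone rescaling of them changes d, which is precisely the force of the pathology argument.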
This issue is an old one and goes back to the
bifurcation between Stevens (1946), who asserted
that measurement is the assignment of numbers to
individuals according to rules, and Luce and Tukey
(1964), who counterargued for a conjoint measurement model that requires interval scales with additive properties. The current version of the argument
is discussed in a series of articles by Michell (2000,
2008), Borsboom and Mellenbergh (2004), and
Embretson (2006).
Everyone would probably admit that psychologists seldom deal with interval property measurement, and the response to the accusation of
pathology could be one of four kinds. First, it might
be argued that ordinal scales are okay for many
important assessment purposes (e.g., top-down
selection). Second, the purpose of assessment may
not be to scale individuals but to provide developmental feedback. Third, many of the variables psychologists study are quantitative, because when the
same variable is measured with different instruments, the results are the same. The assumption has
just not been explicitly tested. Fourth, one could
argue that psychologists do, on occasion, assess
people quantitatively, as in criterion-referenced
measurement (Cizek, 2001) or when using IRT
models (Embretson, 2006). Borsboom and Mellenbergh (2004) argued that the Rasch model (i.e., a
one-parameter IRT model) is essentially a stochastic equivalent of the deterministic conjoint measurement model, because it simultaneously scales both
items and individuals on the same scale (i.e., theta).
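In its standard one-parameter logistic form, the Rasch model states that the probability of a keyed response depends only on the difference between the person and item parameters:

```latex
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)} .
```

Persons ($\theta_p$) and items ($b_i$) are thereby located on one common scale, and the additive structure of $\theta_p - b_i$ is what connects the model to additive conjoint measurement.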
The preceding issue is related to the recent discussion of dominance versus ideal-point scaling for
attitude and personality assessment (Drasgow, Chernyshenko, & Stark, 2010). In psychology, these two
scaling procedures are credited to Likert (1932) and
Thurstone (1928), respectively. Thurstone scaling
does provide information about the relative size
of the intervals between scores on the attitude–personality continuum. Drasgow et al. (2010) argued persuasively that embedding ideal-point scaling in an IRT model overcomes some of the previous difficulties in Thurstone's scaling and results in a more
quantitatively scaled variable. This application has
also been used for performance assessment (Borman
et al., 2001) via computer-adaptive rating scales.
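The contrast between the two traditions can be sketched as follows (these are stylized forms for illustration; operational ideal-point models of the kind Drasgow et al., 2010, discussed are more elaborate):

```latex
\text{Dominance: } P_i(\theta) \text{ increases monotonically in } \theta ; \qquad
\text{Ideal point: } P_i(\theta) \propto \exp\{-(\theta - \delta_i)^2\} .
```

Under an ideal-point model, endorsement peaks when a respondent's standing $\theta$ is closest to the item's location $\delta_i$ and falls off in both directions, which is why a moderate attitude statement can be rejected from both ends of the continuum.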
Another measurement-related criticism of assessment in I/O psychology is that the field has seemed
to show little interest in test taking as a cognitive
process. That is, I/O psychologists do not ask questions about how a test taker decides on a particular
response and cannot give a cognitive account of the
processes involved (e.g., Mislevy, 2008). The implication is that two individuals may arrive at the same
response in different ways (Mislevy & Verhelst,
1990), which in turn implies that their scores do not
mean the same thing. This criticism is most often
made in the context of ability or achievement testing, but it could also be directed at attitude measurement, linking judgments in job analysis, assessor
ratings in assessment centers, and performance ratings in general.
A final issue, and perhaps the most important
one, concerns how the structure of the various
domains of dependent, independent, and situational
variables should be modeled. A very thorough and
sophisticated treatment of latent and observed structures was provided by Borsboom et al. (2003). They
discussed three distinct ways to model the covariance structure of a set of observed scores as a function of latent variables. In the first model, latent
variables are constructs that cause responses to
operational measures but are not equivalent to
them. For example, general mental ability is a latent
variable that most surely has neurological substrates, as yet unknown, that were formed by heredity, experience, and their interaction. The existence
of the latent variable is inferred from the covariances
of the measures constructed to measure it. The
observed covariances using a variety of such measures always yield a general factor. Corrected for
attenuation, the intercorrelations of scores on the
general factor when obtained from independent sets
of tests approach unity. This example is the clearest
of a real latent variable. Also, it does not preclude
the existence of subfactors (e.g., verbal, quantitative) that yield highly predictable covariances
among observed scores. The latent structure of other
trait domains is not quite as clear, at least not yet,
but the evidence is sufficient to suggest that such a
latent structure exists for some of them, such as
personality and interests. In fact, much of the work
on the latent structure of personality is an attempt
to map the biological substrates of the factors
(DeYoung, 2006).
A second model, at the other extreme, is to assert
that observed factor scores are nothing more than
the sum of the individual scores (i.e., items, tests,
ratings) that compose them. Borsboom et al.'s
(2003) example is from sociology. Suppose, for an
individual, socioeconomic status is taken as the sum
of income level, education level, and home value.
There is no latent variable labeled socioeconomic status that determines income, education, and home
value. It can only be defined in terms of the three
operational measures. That is not to say that the
sum score labeled socioeconomic status is not valuable; however, it does represent a different model
that cannot be used in the same way as a substantive
latent trait model. Consequently, every time socioeconomic status is used as a label for a sum score,
the specific measures being aggregated must be
spelled out. If there are correlations among the specific measures, they must be explained by common
determinants from other domains (e.g., general
mental ability and conscientiousness).
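In conventional structural-equation notation (not Borsboom et al.'s own), the contrast between the first two models can be written as

```latex
\text{Model 1 (reflective): } x_i = \lambda_i \eta + \varepsilon_i , \qquad
\text{Model 2 (formative): } \eta = \sum_i w_i x_i .
```

In Model 1 the latent variable $\eta$ causes the indicators and accounts for their covariances; in Model 2 the composite $\eta$ is defined by its indicators, so any covariances among them must be explained by something outside the composite, as in the socioeconomic status example.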
The third model represents the attack of the
postmodernists on the generally realist approach to
research and practice that characterizes applied psychology (Boisot & McKelvey, 2010; P. Johnson &
Cassell, 2001). That is, observed covariance structures are social constructions that result from how
researchers or practitioners construct the way they
observe organizational behavior. The postmodernists have asserted that assessment in research and
practice cannot be independent of this personal psychology. Agreement on such social constructions
results from the socialization and training processes
in I/O psychology. There really is no such thing as
an independent latent variable (construct) that
determines the covariance structure of observed measures.
Models 2 and 3 are more similar to each other
than they are to the first model, and for I/O psychology the basic issue is when Model 1 versus Model 2 should be invoked. Depending on the choice, the
structural equations are different, and the analysis
procedures are different (MacKenzie, Podsakoff, &
Jarvis, 2005; Podsakoff, MacKenzie, Podsakoff, &
Lee, 2003). Some additional implications of model choice are described next.
If it is appropriate to model the trait determinants of performance (e.g., cognitive ability, personality, motives) as a function of latent variables, then
it is appropriate, and necessary, to base assessment
on the specifications for the latent variables. Constantly inventing new variables without reference to
a known or specified latent structure is dysfunctional for research and practice.
In contrast to trait assessment, imposing a latent
variable model on state assessment is more problematic. For example, are there skill and knowledge
domains that can be specified well enough that testing and assessment can estimate a domain score that
has surplus meaning beyond the sum of a particular
set of item scores? This is one thing that made
development of knowledge and skill taxonomies for
O*NET difficult. However, IRT models provide a
way of testing whether latent models are reasonable.
A similar question could be asked about attitude or
climate assessment. For example, are there general
(latent?) dimensions of organizational climate, or
should climate always be referenced to specific organizational activities or procedures? Also, what does
a path analysis actually estimate if a latent variable
model is not appropriate?
These considerations raise another obvious question. That is, what is the latent structure of performance itself, or is there one, and what is the impact
of this issue on assessment? In this regard, some
things are certain, some things are reasonably certain, and some are currently indeterminate. For
certain, no single latent variable can be labeled as
overall, or general, performance. Overall performance
is simply a sum score of whatever measures are at
hand. If overall performance is generated by a single
rating scale labeled overall performance, then the
rater must compute the sum score in his or her
head, by whatever personal calculus he or she
chooses to use, which may or may not be in conscious awareness. What about the general factor that
emerges from the covariance matrix of virtually any
set of observed performance scores after controlling
for method variance (e.g., Viswesvaran, Schmidt, &
Ones, 2005)? Such a factor could arise because trait
determinants such as general mental ability and conscientiousness contribute to individual differences
in virtually all performance measures. Consequently, if one believes the general factor is a latent
variable, then it is reasonable to assert that a set of
performance measures simply constitutes another
measure of general mental ability. General mental
ability is the latent variable. No one has yet given a
substantive specification of the general factor in performance content terms. It is always specified as a
sum score of specific dimensions.
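A minimal illustration of the common-cause account (with hypothetical standardized measures and uncorrelated uniquenesses): if every performance measure $z_j$ partly reflects a common trait determinant $g$, then

```latex
z_j = a_j g + u_j \quad \Rightarrow \quad \operatorname{corr}(z_j, z_k) = a_j a_k \quad (j \neq k),
```

and a general factor will emerge from the performance covariance matrix even though no latent variable called general performance has been specified in content terms.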
After reviewing all extant research on performance as a construct, Campbell (2012) has
argued for an eight-factor structure (discussed
earlier in this chapter) that is invariant across
work roles, organizational levels, and types of
organizations. The status of each of the factors as
a latent variable is a mixed bag. Certainly, the
technical performance factor does not represent a
latent variable. There is always a technical factor,
but it must always be specified as a sum score of
assessed performance levels on the specific technical responsibilities of the work role, and it
might need to be summed over days, weeks, or
years. In contrast, a case can be made, with varying degrees of empirical justification, for the
latent variable status of the two subfactors of
communication, for the Initiative–Effort factor,
and for the subfactors of Counterproductive Work
Behavior. Campbell (2012) also argued that the
subfactors of leadership and management (shown
in Exhibits 22.1 and 22.2) have appeared again
and again in leadership research using a variety of
measures, and it is reasonable to assert that they
represent latent variables of performance. A
recent integrative review and meta-analysis of
research on trait and behavioral leadership models
(DeRue, Nahrgang, Wellman, & Humphrey,
2011) is consistent with this view.

High-Stakes Assessment
As is the case for many other subfields, I/O psychology
must deal with the assessment complexities of high-stakes testing. Selection for a job, for promotion, and
for entry into educational or training programs are
indeed high-stakes decisions. They make up a large
and critical segment of the research and practice
landscape in I/O psychology, and they significantly
influence the lives of tens of millions of people. The
complexities are intensified enormously by advances in
digital technology and by the ethical, legal, and political environments that influence such decision making.
Each of these testing environment complexities
(i.e., technological, ethical, legal, and political) has
generated its own literature (cf. Farr & Tippins,
2010; Outtz, 2010). The issues include how to deal
with unproctored Internet testing; what feedback to
provide to test takers; determining the presence or
absence of test bias; the currency of federal guidelines; the ethical responsibilities of I/O psychologists;
and the efficacy of using changes in standardized test
scores to evaluate the value added by teachers,
school systems, and universities. Again, these high-stakes issues are simply part of the I/O psychology
assessment landscape, and the field must deal with
them as thoroughly and as directly as it can.
Some Final (at Last) Remarks
The basic theme of this chapter is the assertion that
assessment in I/O psychology is very, very complex.
Complexity refers to the sheer number of variables
across the dependent, independent, and situational
variable spectrums; the multidimensional nature of
both the latent and the observed structures for each
variable; the difficulties involved in developing the
substantive specifications for each dimension and their
covariance structures; the multiplicity of assessment
purposes; the multiplicity of assessment methods; and
the intense interaction between science and practice.
The scientist–practitioner model still dominates, and
that opens the door to the marketplace, high-stakes
decision making, the individual versus organizational
perspectives, and the attendant value judgments that
elicit professional guidelines, governmental rule making, and litigation precedents, all of which have
important and complex implications for assessment.
The future will become even more complex. The
world of work itself becomes ever more complicated
as technology, globalization, population growth, climate science, and competing political ideologies
contrive to shape it. These forces will shape psychological assessment as well. For example, Embretson
(2004) forecast measurement technologies for the
21st century that could barely be imagined a decade
ago, and the assessment methods of neuroscience
are now being adapted to focus on the neural antecedents of work performance (Parasuraman, 2011).
Will future graduate training in I/O psychology
include becoming familiar with neuroimaging
methods (e.g., functional magnetic resonance imaging, event-related potential, and magnetoencephalography)? The short answer is yes. Identity crises
(Ryan & Ford, 2010) aside, it is an interesting and
intense time in I/O psychology. It will become even
more so in the future, and I/O psychologists have
much to contribute to the future of both science
and practice.
To deal with this complexity more effectively, this
chapter made the following basic points. First, given a
particular assessment domain of interest, it is imperative to specify its constructs as completely and as
carefully as possible and model its covariance structure as precisely as possible. If a new construct is proposed, the ways in which it fits into existing
structures, or does not fit, should be specified. It is
not in the best interests of research and practice to
invent new labels for existing variables and imply that
something new and different is being assessed or to
propose new variables and let them float above the
marketplace without specification and an evidence
base. This is not an argument for never investigating
anything new. It is an argument for careful specification and research-based assessment.

References
Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102, 3–27. doi:10.1037/0033-2909.102.1.3
Ackerman, P. L. (1988). Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General, 117, 288–318. doi:10.1037/0096-3445.117.3.288
Ackerman, P. L. (2000). Domain-specific knowledge as the dark matter of adult intelligence: Gf/Gc, personality and interest correlates. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 55, 69–84. doi:10.1093/geronb/55.2.P69
Ackerman, P. L., & Rolfhus, E. L. (1999). The locus of adult intelligence: Knowledge, abilities, and nonability traits. Psychology and Aging, 14, 314–330. doi:10.1037/0882-7974.14.2.314
Albrecht, S. L. (Ed.). (2010). Handbook of employee engagement: Perspectives, issues, research and practice. Glos, England: Edward Elgar.
Alderfer, C. P. (1969). An empirical test of a new theory of human needs. Organizational Behavior and Human Performance, 4, 142–175. doi:10.1016/0030-5073(69)90004-X
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing (3rd ed.). Washington, DC: American Educational Research Association.
Anderson, J. R. (1987). Skill acquisition: Compilation of weak-method problem solutions. Psychological Review, 94, 192–210. doi:10.1037/0033-295X.94.2.192
Arad, S., Hanson, M., & Schneider, R. J. (1999). Organizational context. In N. G. Peterson, M. D. Mumford, W. C. Borman, P. R. Jeanneret, & E. A. Fleishman (Eds.), An occupational information system for the 21st century: The development of O*NET (pp. 147–174). Washington, DC: American Psychological Association. doi:10.1037/10313-009
Austin, J. R., & Bartunek, J. M. (2003). Theories and practices of organizational development. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 309–332). Hoboken, NJ: Wiley.
Austin, J. T., & Villanova, P. (1992). The criterion problem: 1917–1992. Journal of Applied Psychology, 77, 836–874. doi:10.1037/0021-9010.77.6.836
Bakker, A. B., & Leiter, M. P. (Eds.). (2010). Work engagement: A handbook of essential theory and research. New York, NY: Psychology Press.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122–147. doi:10.1037/0003-066X.37.2.122
Banta, T. W. (2008). Editor's notes: Trying to clothe the emperor. Assessment Update, 20, 3–4, 15–16.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479. doi:10.1037/0003-066X.54.7.462
Bar-On, R. (1997). Bar-On Emotional Quotient Inventory: A measure of emotional intelligence. Toronto, Ontario, Canada: Multi-Health Systems.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26. doi:10.1111/j.1744-6570.1991.tb00688.x
Barrick, M. R., & Mount, M. K. (2005). Yes, personality matters: Moving on to more important matters. Human Performance, 18, 359–372. doi:10.1207/s15327043hup1804_3
Bartram, D. (2005). The great eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90, 1185–1203. doi:10.1037/0021-9010.90.6.1185
Baum, J. R., Bird, B. J., & Singh, S. (2011). The practical intelligence of entrepreneurs: Antecedents and a link with new venture growth. Personnel Psychology, 64, 397–425. doi:10.1111/j.1744-6570.2011.01214.x
Beal, D. J., Weiss, H. M., Barros, E., & MacDermid, S. M. (2005). An episodic process model of affective influences on performance. Journal of Applied Psychology, 90, 1054–1068. doi:10.1037/0021-9010.90.6.1054
Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85, 349–360. doi:10.1037/0021-9010.85.3.349
Bennett, W., Lance, C. E., & Woehr, D. J. (Eds.). (2006). Performance measurement: Current perspectives and future challenges. Mahwah, NJ: Erlbaum.
Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Interpersonal deviance, organizational deviance, and their common correlates: A review and meta-analysis. Journal of Applied Psychology, 92, 410–424. doi:10.1037/0021-9010.92.2.410
Boisot, M., & McKelvey, B. (2010). Integrating modernist and postmodernist perspectives on organizations: A complexity science bridge. Academy of Management Review, 35, 415–433. doi:10.5465/AMR.2010.51142028
Borman, W. C. (1987). Personal constructs, performance schemata, and folk theories of subordinate effectiveness: Explorations in an Army officer sample. Organizational Behavior and Human Decision Processes, 40, 307–322. doi:10.1016/0749-5978(87)90018-5
Borman, W. C., & Brush, D. H. (1993). More progress toward a taxonomy of managerial performance requirements. Human Performance, 6, 1–21. doi:10.1207/s15327043hup0601_1
Borman, W. C., Buck, D. E., Hanson, M. S., Motowidlo, S. J., Stark, S., & Drasgow, F. (2001). An examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales. Journal of Applied Psychology, 86, 965–973. doi:10.1037/0021-9010.86.5.965
Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco, CA: Jossey-Bass.
Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425–440. doi:10.1007/s11336-006-1447-6
Borsboom, D., & Mellenbergh, G. J. (2004). Why psychometrics is not pathological: A comment on Michell. Theory and Psychology, 14, 105–120. doi:10.1177/0959354304040200
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203–219. doi:10.1037/0033-295X.110.2.203
Bowers, D. G., & Seashore, S. E. (1966). Predicting organizational effectiveness with a four-factor theory of leadership. Administrative Science Quarterly, 11, 238–263. doi:10.2307/2391247
Brown, S. P. (1996). A meta-analysis and review of organizational research on job involvement. Psychological Bulletin, 120, 235–255. doi:10.1037/0033-2909.120.2.235
Brown, S. P., & Leigh, T. W. (1996). A new look at psychological climate and its relationship to job involvement, effort, and performance. Journal of Applied Psychology, 81, 358–368. doi:10.1037/0021-9010.81.4.358
Cameron, K. S., & Quinn, R. E. (1999). Diagnosing and changing organizational culture: Based on the competing values framework. Reading, MA: Addison-Wesley.
Campbell, J. P. (1977). On the nature of organizational effectiveness. In P. S. Goodman & J. M. Pennings (Eds.), New perspectives on organizational effectiveness (pp. 13–55). San Francisco, CA: Jossey-Bass.
Campbell, J. P. (2007). Profiting from history. In L. L. Koppes, P. W. Thayer, A. J. Vinchur, & E. Salas (Eds.), Historical perspectives in industrial and organizational psychology (pp. 441–457). Mahwah, NJ: Erlbaum.
Campbell, J. P. (2012). Behavior, performance, and effectiveness in the 21st century. In S. Kozlowski (Ed.), Oxford handbook of industrial and organizational psychology (pp. 159–194). New York, NY: Oxford University Press.
Campbell, J. P., & Knapp, D. (2001). Exploring the limits of personnel selection and classification. Hillsdale, NJ: Erlbaum.
Campbell, J. P., & Kuncel, N. R. (2001). Individual and team training. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of work and organizational psychology (pp. 278–312). London, England: Blackwell.
Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Frontiers in industrial/organizational psychology: Personnel selection and classification (pp. 35–71). San Francisco, CA: Jossey-Bass.
Campion, M. A., Fink, A. A., Ruggeberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing competencies well: Best practices in competency modeling. Personnel Psychology, 64, 225–262. doi:10.1111/j.1744-6570.2010.01207.x
Carlson, K. D. (1997). Impact of instructional strategy on training effectiveness. Unpublished doctoral dissertation, University of Iowa, Iowa City.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, England: Cambridge University Press. doi:10.1017/CBO9780511571312
Carroll, J. B. (2003). The higher-stratum structure of cognitive abilities: Current evidence supports g and about 10 broad factors. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen (pp. 5–21). Amsterdam, the Netherlands: Pergamon Press.
Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.
Chan, D. (2010). Values, styles, and motivational constructs. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 321–337). New York, NY: Routledge.
Christian, M. S., Garza, A. S., & Slaughter, J. E. (2011). Work engagement: A quantitative review and test of its relations with task and contextual performance. Personnel Psychology, 64, 89–136. doi:10.1111/j.1744-6570.2010.01203.x
Cizek, G. J. (Ed.). (2001). Setting performance standards: Concepts, methods, and perspectives. Mahwah, NJ: Erlbaum.
Cleveland, J. N., & Colella, A. (2010). Employee work-related health, stress, and safety. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 531–550). New York, NY: Routledge.
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86, 386–400. doi:10.1037/0021-9010.86.3.386
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86, 425–445. doi:10.1037/0021-9010.86.3.425
Conway, J. M. (1998). Understanding method variance in multitrait–multirater performance appraisal matrices: Examples using general impressions and interpersonal effect as measured method factors. Human Performance, 11, 29–55. doi:10.1207/s15327043hup1101_2
Conway, J. M., & Huffcutt, A. I. (1997). Psychometric properties of multisource performance ratings: A meta-analysis of subordinate, supervisor, peer, and self-ratings. Human Performance, 10, 331–360. doi:10.1207/s15327043hup1004_2
Cooke, R. A., & Rousseau, D. M. (1988). Behavioral norms and expectations: A quantitative approach to the assessment of organizational culture. Group and Organization Management, 13, 245–273. doi:10.1177/105960118801300302
Cooper, C. L., Dewe, P., & O'Driscoll, M. P. (2001). Organizational stress: A review and critique of theory, research, and applications. Thousand Oaks, CA: Sage.
Costa, P. T., Jr., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources.
Crede, M. (2006). Job attitude and job evaluation: Examining construct-measurement discrepancies. Unpublished doctoral dissertation, University of Illinois at Urbana–Champaign.
Cronbach, L. J., & Gleser, G. C. (1965). Psychological tests and personnel decisions (2nd ed.). Urbana: University of Illinois Press.
Dalal, R. S. (2005). A meta-analysis of the relationship between organizational citizenship behavior and counterproductive work behavior. Journal of Applied Psychology, 90, 1241–1255.
Davenport, R. (2006). Eliminate the skills gap. Training and Development, 60, 27–32.
Dawis, R. V., Dohm, T. E., Lofquist, L. H., Chartrand, J. M., & Due, A. M. (1987). Minnesota Occupational Classification System III: A psychological taxonomy of work. Minneapolis: University of Minnesota, Department of Psychology, Vocational Psychology Research.
Dawis, R. V., & Lofquist, L. H. (1984). A psychological theory of work adjustment. Minneapolis: University of Minnesota Press.
Deadrick, D. L., Bennett, N., & Russell, C. J. (1997). Using hierarchical linear modeling to examine dynamic performance criteria over time. Journal of Management, 23, 745–757. doi:10.1177/014920639702300603
Denison, D. R. (1996). What is the difference between organizational culture and organizational climate? A native's point of view on a decade of paradigm wars. Academy of Management Review, 21, 619–654.
DeRue, D. S., Nahrgang, J. D., Wellman, N., & Humphrey, S. E. (2011). Trait and behavioral theories of leadership: An integration and meta-analytic test of their relative validity. Personnel Psychology, 64, 7–52. doi:10.1111/j.1744-6570.2010.01201.x
DeShon, R. P., & Gillespie, J. Z. (2005). A motivated action theory account of goal orientation. Journal of Applied Psychology, 90, 1096–1127. doi:10.1037/0021-9010.90.6.1096
DeYoung, C. G. (2006). Higher-order factors of the Big Five in a multi-informant sample. Journal of Personality and Social Psychology, 91, 1138–1151. doi:10.1037/0022-3514.91.6.1138
DeYoung, C. G., Hirsh, J. B., Shane, M. S., Papademetris, X., Rajeevan, N., & Gray, J. R. (2010). Testing predictions from personality neuroscience: Brain structure and the Big Five. Psychological Science, 21, 820–828. doi:10.1177/0956797610370159
DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between facets and domains: 10 aspects of the Big Five. Journal of Personality and Social Psychology, 93, 880–896. doi:10.1037/0022-3514.93.5.880
Dilchert, S. (2008). Measurement and prediction of creativity at work. Unpublished doctoral dissertation, University of Minnesota, Minneapolis.
Drasgow, F., Chernyshenko, O. S., & Stark, S. (2010). 75 years after Likert: Thurstone was right! Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 465–476. doi:10.1111/j.1754-9434.2010.01273.x
Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040–1048. doi:10.1037/0003-066X.41.10.1040
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. New York, NY: Wadsworth.
Edwards, J. E., & Rothbard, N. P. (2000). Mechanisms linking work and family: Clarifying the relationship between work and family constructs. Academy of Management Review, 25, 178–199.
Edwards, J. R., Scully, J. S., & Bartek, M. D. (1999). The measurement of work: Hierarchical representation of the multimethod job design questionnaire. Personnel Psychology, 52, 305–334. doi:10.1111/j.1744-6570.1999.tb00163.x
Elliot, A. J., & Thrash, T. M. (2002). Approach–avoidance motivation in personality: Approach and avoidance temperaments and goals. Journal of Personality and Social Psychology, 82, 804–818. doi:10.1037/0022-3514.82.5.804
Elliott, E. S., & Dweck, C. S. (1988). Goals: An approach to motivation and achievement. Journal of Personality and Social Psychology, 54, 5–12. doi:10.1037/0022-3514.54.1.5
Embretson, S. E. (2004). The second century of ability testing: Some new predictions and speculations. Measurement, 2, 1–32.
Embretson, S. E. (2006). The continued search for nonarbitrary metrics in psychology. American Psychologist, 61, 50–55. doi:10.1037/0003-066X.61.1.50
Embretson, S. E., & Reise, S. P. (Eds.). (2000). Item response theory for psychologists. Mahwah, NJ: Erlbaum.
Ennis, R. H. (1985). A logical basis for measuring critical thinking skills. Educational Leadership, 43, 44–48.
Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278. doi:10.1146/annurev.psych.59.103006.093629
Ewell, P. T. (1991). To capture the ineffable: New forms of assessment in higher education. Review of Research in Education, 17, 75–125.
Eysenck, H. J. (1967). The biological basis of personality. Springfield, IL: Charles C Thomas.
Farr, J. L., & Tippins, N. T. (2010). Handbook of employee selection. New York, NY: Routledge.
Ferris, L. D., Rosen, C. R., Johnson, R. E., Brown, D. J., Risavy, S. D., & Heller, D. (2011). Approach or avoidance (or both)? Integrating core self-evaluations within an approach/avoidance framework. Personnel Psychology, 64, 137–161. doi:10.1111/j.1744-6570.2010.01204.x
Fleishman, E. A. (1964). The structure and measurement of physical fitness. Englewood Cliffs, NJ: Prentice Hall.
Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance: The description of human tasks. New York, NY: Academic Press.
Fleishman, E. A., & Reilly, M. E. (1992). Handbook of human abilities: Definitions, measurements, and job task requirements. Bethesda, MD: Management Research Institute.
Gable, S. L., Reis, H. T., & Elliot, A. J. (2003). Evidence for bivariate systems: An empirical test of appetition and aversion across domains. Journal of Research in Personality, 37, 349–372. doi:10.1016/S0092-6566(02)00580-9
Galagan, P. (2010). Bridging the skills gap: New factors compound the growing skills shortage. Alexandria, VA: American Society for Training and Development.
Gebhardt, D. L., & Baker, T. A. (2010). Physical performance tests. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 277–298). New York, NY: Routledge.
George, J. M. (2007). Creativity in organizations. Academy of Management Annals, 1, 439–477. doi:10.1080/078559814
Ghiselli, E. E. (1966). The validity of occupational aptitude tests. New York, NY: Wiley.
Gollwitzer, P. M., Sheeran, P., Trötschel, R., & Webb, T. L. (2011). Self-regulation of priming effects on behavior. Psychological Science, 22, 901–907. doi:10.1177/0956797611411586
Goodman, P. S., Devadas, R., & Griffith-Hughson, T. L. (1988). Groups and productivity: Analyzing the effectiveness of self-management teams. In J. P. Campbell, R. J. Campbell, & Associates (Eds.), Productivity in organizations: New perspectives from industrial and organizational psychology (pp. 295–327). San Francisco, CA: Jossey-Bass.
Gottfredson, L. S. (2003). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343–397. doi:10.1016/S0160-2896(02)00085-5
Greenhaus, J. H., & Powell, G. N. (2006). When work and family are allies: A theory of work–family enrichment. Academy of Management Review, 31, 72–92. doi:10.5465/AMR.1985.4277352
Griffeth, R. W., Hom, P. W., & Gaertner, S. (2000). A meta-analysis of antecedents and correlates of employee turnover: Updated moderator tests, and research implications for the next millennium. Journal of Management, 26, 463–488. doi:10.1177/014920630002600305
Griffin, M. A., Neal, A., & Parker, S. K. (2007). A new model of work role performance: Positive behavior in uncertain and interdependent contexts. Academy of Management Journal, 50, 327–347. doi:10.5465/AMJ.2007.24634438
Gruys, M. L., & Sackett, P. R. (2003). Investigating the dimensionality of counterproductive work behavior. International Journal of Selection and Assessment, 11, 30–42. doi:10.1111/1468-2389.00224
Grzywacz, J. G., & Carlson, D. S. (2007). Conceptualizing work–family balance: Implications for practice and research. Advances in Developing Human Resources, 9, 455–471. doi:10.1177/1523422307305487
Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135–164. doi:10.1111/j.1744-6570.1965.tb00273.x
Hackman, J. R. (1992). Group influences on individuals in organizations. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 3, pp. 199–267). Palo Alto, CA: Consulting Psychologists Press.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16, 250–279. doi:10.1016/0030-5073(76)90016-7
Harmon, L. W., Hansen, J. C., Borgen, F. H., & Hammer, A. L. (1994). Strong Interest Inventory: Applications and technical guide. Stanford, CA: Stanford University Press.
Harnois, G., & Gabriel, P. (2000). Mental health and work: Impact, issues and good practices. Geneva, Switzerland: International Labour Organisation.
Herzberg, F. (1959). The motivation to work. New York, NY: Wiley.
Hesketh, B., & Neal, A. (1999). Technology and performance. In D. R. Ilgen & E. D. Pulakos (Eds.), The changing nature of performance: Implications for staffing, motivation, and development (pp. 21–55). San Francisco, CA: Jossey-Bass.
Hobfoll, S. E. (1998). Stress, culture, and community: The psychology and physiology of stress. New York, NY: Plenum Press.
Hofmann, D. A., Jacobs, R., & Gerras, S. J. (1992). Mapping individual performance over time. Journal of Applied Psychology, 77, 185–195. doi:10.1037/0021-9010.77.2.185
Hogan, J. (1991). Structure of physical performance in occupational tasks. Journal of Applied Psychology, 76, 495–507. doi:10.1037/0021-9010.76.4.495
Hogan, R., & Kaiser, R. B. (2010). Personality. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 81–108). San Francisco, CA: Jossey-Bass.
Holland, J. L. (1994). The Self-Directed Search: Professional manual. Odessa, FL: Psychological Assessment Resources.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.
Hoppock, R. (1935). Job satisfaction. New York, NY: Harper.
Horn, J. L. (1989). Cognitive diversity: A framework of learning. In P. L. Ackerman, R. J. Sternberg, & R. Glaser (Eds.), Learning and individual differences (pp. 61–116). New York, NY: Freeman.
Hough, L., & Dilchert, S. (2010). Personality: Its measurement and validity for employee selection. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 299–319). New York, NY: Routledge.
Hough, L. M., & Ones, D. S. (2001). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work, and organizational psychology (pp. 233–277). Thousand Oaks, CA: Sage.
Hulin, C. L., & Judge, T. A. (2003). Job attitudes. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 255–276). Hoboken, NJ: Wiley.
Hunt, J. G. (1999). Transformational/charismatic leadership's transformation of the field: An historical essay. Leadership Quarterly, 10, 129–144. doi:10.1016/S1048-9843(99)00015-6
Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517–543. doi:10.1146/annurev.psych.56.091103.070250
James, L. R., & Jones, A. P. (1974). Organizational climate: A review of theory and research. Psychological Bulletin, 81, 1096–1112. doi:10.1037/h0037511
Johnson, P., & Cassell, C. (2001). Epistemology and work psychology: New agendas. Journal of Occupational and Organizational Psychology, 74, 125–143. doi:10.1348/096317901167280
Johnson, W., & Bouchard, T. J. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33, 393–416. doi:10.1016/j.intell.2004.12.002
Johnson, W., Nijenhuis, J., & Bouchard, T. J. (2008). Still just 1 g: Consistent result from five test batteries. Intelligence, 36, 81–95. doi:10.1016/j.intell.2007.06.001
Judge, T. A., & Bono, J. E. (2001). A rose by any other name: Are self-esteem, generalized self-efficacy, neuroticism, and locus of control indicators of a common construct? In B. W. Roberts & R. Hogan (Eds.), Personality psychology in the workplace (pp. 93–118). Washington, DC: American Psychological Association. doi:10.1037/10434-004
Judge, T. A., Locke, E. A., Durham, C. C., & Kluger, A. N. (1998). Dispositional effects on job and life satisfaction: The role of core evaluations. Journal of Applied Psychology, 83, 17–34. doi:10.1037/0021-9010.83.1.17
Judge, T. A., Van Vianen, A. E. M., & DePater, I. E. (2004). Emotional stability, core self-evaluations, and job outcomes: A review of the evidence and an agenda for future research. Human Performance, 17, 325–346. doi:10.1207/s15327043hup1703_4
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64, 515–526. doi:10.1037/a0016755
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291. doi:10.2307/1914185
Kanfer, R., Chen, G., & Pritchard, R. (Eds.). (2008). Work motivation: Past, present, and future. New York, NY: Taylor & Francis.
Kanfer, R., & Heggestad, E. D. (1997). Motivational traits and skills: A person-centered approach to work motivation. Research in Organizational Behavior, 19, 1–56.
Kanungo, R. N. (1982). Measurement of job and work involvement. Journal of Applied Psychology, 67, 341–349. doi:10.1037/0021-9010.67.3.341
Katzell, R. A., & Guzzo, R. A. (1983). Psychological approaches to productivity improvement. American Psychologist, 38, 468–472. doi:10.1037/0003-066X.38.4.468
Kelloway, E. K., Loughlin, C., Barling, J., & Nault, A. (2002). Self-reported counterproductive behaviors and organizational citizenship behaviors: Separate but related constructs. International Journal of Selection and Assessment, 10, 143–151. doi:10.1111/1468-2389.00201
Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32, 511–525. doi:10.1177/0193841X08325948
Kline, P. (1997). Commentary on Michell, quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 358–387. doi:10.1111/j.2044-8295.1997.tb02642.x
Koppes, L. L., Thayer, P. W., Vinchur, A. J., & Salas, E. (Eds.). (2007). Historical perspectives in industrial and organizational psychology. Mahwah, NJ: Erlbaum.
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7, 77–124.
Kunin, T. (1955). The construction of a new type of attitude measure. Personnel Psychology, 8, 65–77. doi:10.1111/j.1744-6570.1955.tb01189.x
Landy, F. J. (2005). Some historical and scientific issues related to research on emotional intelligence. Journal of Organizational Behavior, 26, 411–424. doi:10.1002/job.317
Levi, L., & Lunde-Jensen, P. (1996). A model for assessing the costs of stressors at national level: Socio-economic costs of work stress in two EU member states. Dublin, Ireland: European Foundation for the Improvement of Living and Working Conditions.
Lewin, K. (1951). Field theory in social science. New York, NY: Harper & Row.
Liberman, V. (2011). Why your people can't do what you need them to do. Conference Board Review, Winter, 18.
Lievens, F., & Chan, D. (2010). Practical intelligence, emotional intelligence, and social intelligence. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 339–359). New York, NY: Routledge.
Likert, R. (1932). The method of constructing an attitude scale. Archives de Psychologie, 140, 44–53.
Litwin, G. H., & Stringer, R. A. (1968). Motivation and organizational climate. Boston, MA: Harvard University Press.
Liu, O. L. (2011). Value-added assessment in higher education: A comparison of two methods. Higher Education, 61, 445–461. doi:10.1007/s10734-010-9340-8
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57, 705–717. doi:10.1037/0003-066X.57.9.705
Lodahl, T. M., & Kejner, M. (1965). The definition and measurement of job involvement. Journal of Applied Psychology, 49, 24–33. doi:10.1037/h0021692
Lord, R. G., Diefendorff, J. M., Schmidt, A. M., & Hall, R. J. (2010). Self-regulation at work. Annual Review of Psychology, 61, 543–568. doi:10.1146/annurev.psych.093008.100314
Lubinski, D. (2010). Neglected aspects and truncated appraisals in vocational counseling: Interpreting the interest-efficacy association from a broader perspective: Comment on Armstrong and Vogel (2009). Journal of Counseling Psychology, 57, 226–238. doi:10.1037/a0019163
Luce, R. D., & Tukey, J. W. (1964). Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology, 1, 1–27. doi:10.1016/0022-2496(64)90015-X
Lykken, D. T. (1999). Happiness: What studies on twins show us about nature, nurture, and the happiness set point. New York, NY: Golden Books.
Macey, W. H., & Schneider, B. (2008). The meaning of employee engagement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 3–30. doi:10.1111/j.1754-9434.2007.0002.x
Macey, W. H., Schneider, B., Barbera, K., & Young, S. A. (2009). Employee engagement: Tools for analysis, practice, and competitive advantage. London, England: Blackwell.
MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. Journal of Applied Psychology, 90, 710–730. doi:10.1037/0021-9010.90.4.710
Maertz, C. P., & Campion, M. A. (2004). Profiles in quitting: Integrating process and content turnover theory. Academy of Management Journal, 47, 566–582. doi:10.2307/20159602
Marcus, B., Schuler, H., Quell, P., & Humpfner, G. (2002). Measuring counterproductivity: Development and initial validation of a German self-report questionnaire. International Journal of Selection and Assessment, 10, 18–35. doi:10.1111/1468-2389.00191
Markon, K. E., Krueger, R. F., & Watson, D. (2005). Delineating the structure of normal and abnormal personality: An integrative hierarchical approach. Journal of Personality and Social Psychology, 88, 139–157. doi:10.1037/0022-3514.88.1.139
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50, 370–396. doi:10.1037/h0054346
Masson, R. C., Royal, M. A., Agnew, T. G., & Fine, S. (2008). Leveraging employee engagement: The practical implications. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 56–59. doi:10.1111/j.1754-9434.2007.00009.x
Mathieu, J., Maynard, M. T., Rapp, T., & Gilson, L. (2008). Team effectiveness 1997–2007: A review of recent advancements and a glimpse into the future. Journal of Management, 34, 410–476. doi:10.1177/0149206308316061
McClelland, D. C. (1985). How motives, skills, and values determine what people do. American Psychologist, 40, 812–825. doi:10.1037/0003-066X.40.7.812
McPhail, S. M. (Ed.). (2007). Alternative validation strategies: Developing new and leveraging existing validity evidence. San Francisco, CA: Jossey-Bass.
Meyer, H. H. (2007). Influence of formal and informal organizations on the development of I-O psychology. In L. L. Koppes, P. W. Thayer, A. J. Vinchur, & E. Salas (Eds.), Historical perspectives in industrial and organizational psychology (pp. 139–168). Mahwah, NJ: Erlbaum.
Michell, J. (1999). Measurement in psychology: Critical history of a methodological concept. Cambridge, England: Cambridge University Press. doi:10.1017/CBO9780511490040
Michell, J. (2000). Normal science, pathological science and psychometrics. Theory and Psychology, 10, 639–667. doi:10.1177/0959354300105004
Michell, J. (2008). Is psychometrics pathological science? Measurement, 6, 7–24.
Miles, D. E., Borman, W. C., Spector, P. E., & Fox, S. (2002). Building an integrative model of extra role work behaviors: A comparison of counterproductive work behavior with organizational citizenship behavior. International Journal of Selection and Assessment, 10, 51–57. doi:10.1111/1468-2389.00193
Miner, J. B. (1977). Motivation to manage: A ten-year update on the studies in management education research. Atlanta, GA: Organizational Measurement Systems Press.
Mislevy, R. J. (2008). How cognitive science challenges the educational measurement tradition. Measurement, 6, 124.
Mislevy, R. J., & Verhelst, N. (1990). Modeling item responses when different subjects employ different solution strategies. Psychometrika, 55, 195–215. doi:10.1007/BF02295283
Mitchell, T. R., & Daniels, D. (2003). Motivation. In


W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.),
Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 225254). Hoboken, NJ:
Wiley.
Mitchell, T. R., & Lee, T. W. (2001). The unfolding
model of voluntary turnover and job embeddedness: Foundations for a comprehensive theory of
attachment. In B. Staw & R. Sutton (Eds.), Research
in organizational behavior (Vol. 23, pp. 189246).
Stamford, CT: JAI Press.
Morgeson, F. P., & Campion, M. E. (2003). Work design.
In W. C. Borman, D. R. Ilgen, & R. J. Klimoski
(Eds.), Handbook of psychology: Vol. 12. Industrial and
organizational psychology (pp. 423452). Hoboken,
NJ: Wiley.
Morgeson, F. P., Campion, M. S., Dipboye, R. L.,
Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007).
Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60,
683729. doi:10.1111/j.1744-6570.2007.00089.x
Mumford, M. D., Baughman, W. S., Supinski, E. P.,
Costanza, D. P., & Threlfall, K. V. (1996). Processbased measures of creative problem solving skills:
Overall prediction. Creativity Research Journal, 9,
6376. doi:10.1207/s15326934crj0901_6
Murphy, K. R. (1989a). Dimensions of job performance.
In R. Dillon & J. Pelligrino (Eds.), Testing: Applied
and theoretical perspectives (pp. 218247). New York,
NY: Praeger.
Murphy, K. R. (1989b). Is the relationship between
cognitive ability and job performance stable over
time? Human Performance, 2, 183200. doi:10.1207/
s15327043hup0203_3
Murphy, K. R. (Ed.). (2006). A critique of emotional intelligence: What are the problems and how can they be
fixed? Mahwah, NJ: Erlbaum.
Murphy, K. R., & Cleveland, J. N. (1995). Understanding
performance appraisal: Social, organizational, and
goal-based perspectives. Thousand Oaks, CA: Sage.
Murray, H. A. (1938). Explorations in personality. New
York, NY: Oxford University Press.
Myers, D. C., Gebhardt, D. L., Crump, C. E., & Fleishman, E. A. (1993). The dimensions of human physical performance: Factor analysis of strength, stamina, flexibility, and body composition measures. Human Performance, 6, 309–344. doi:10.1207/s15327043hup0604_2
National Institute for Occupational Safety and Health. (1999). Stress at work (DHHS Publication No. 99-101). Cincinnati, OH: Author.
Neuman, J. H. (2004). Injustice, stress, and aggression in organizations. In R. W. Griffin & A. M. O'Leary-Kelly (Eds.), The dark side of organizational behavior (pp. 62–102). San Francisco, CA: Jossey-Bass.
Oh, I.-S., Wang, G., & Mount, M. K. (2011). Validity of observer ratings of the five-factor model of personality traits: A meta-analysis. Journal of Applied Psychology, 96, 762–773. doi:10.1037/a0021832
Olson, A. M. (2000). A theory and taxonomy of individual
team member performance. Unpublished doctoral dissertation, University of Minnesota, Minneapolis.
Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60, 995–1027. doi:10.1111/j.1744-6570.2007.00099.x
Ones, D. S., Dilchert, S., Viswesvaran, C., & Salgado, J. F. (2010). Cognitive abilities. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 255–275). New York, NY: Routledge.
Ones, D. S., & Viswesvaran, C. (2003). Personality and counterproductive work behaviors. In M. Koslowsky, S. Stashevsky, & A. Sagie (Eds.), Misbehavior and dysfunctional attitudes in organizations (pp. 211–249). Hampshire, England: Palgrave Macmillan.
Organ, D. W. (1988). Organizational citizenship behavior:
The good soldier syndrome. Lexington, MA: Lexington
Books.
Ostroff, C. (1993). The effects of climate and personal influences on individual behavior and attitudes in organizations. Organizational Behavior and Human Decision Processes, 56, 56–90. doi:10.1006/obhd.1993.1045
Ostroff, C., Kinicki, A. J., & Tamkins, M. (2003). Organizational culture and climate. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 565–594). Hoboken, NJ: Wiley.
Outtz, J. L. (2010). Addressing the flaws in our assessment decisions. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 711–727). San Francisco, CA: Jossey-Bass.
Parasuraman, R. (2011). Neuroergonomics: Brain, cognition, and performance at work. Current Directions in Psychological Science, 20, 181–186. doi:10.1177/0963721411409176
Parry, S. B. (1996). The quest for competencies. Training, 33, 48–54.
Paul, R., & Elder, L. (2006). Critical thinking: Tools for taking charge of your learning and your life. Upper Saddle River, NJ: Prentice Hall.
Payne, S. C., Youngcourt, S. S., & Beaubien, J. M. (2007). A meta-analytic examination of the goal orientation nomological net. Journal of Applied Psychology, 92, 128–150. doi:10.1037/0021-9010.92.1.128

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (Eds.). (1999). An occupational information system for the 21st century: The development of O*NET. Washington, DC: American Psychological Association. doi:10.1037/10313-000
Ployhart, R. E., & Bliese, P. D. (2006). Individual adaptability (I-ADAPT) theory: Conceptualizing the antecedents, consequences, and measurement of individual differences in adaptability. In E. Salas (Ed.), Advances in human performance and cognitive engineering research (Vol. 6, pp. 3–39). Oxford, England: Emerald Group.
Ployhart, R. E., & Hakel, M. D. (1998). The substantive nature of performance variability: Predicting interindividual differences in intraindividual performance. Personnel Psychology, 51, 859–901. doi:10.1111/j.1744-6570.1998.tb00744.x
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., & Lee, J. Y. (2003). The mismeasure of man(agement) and its implications for leadership research. Leadership Quarterly, 14, 615–656. doi:10.1016/j.leaqua.2003.08.002
Pritchard, R. D., Holling, H., Lammers, F., & Clark, B. D. (Eds.). (2002). Improving organizational performance with the productivity measurement and enhancement system: An international collaboration. Huntington, NY: Nova Science.
Pulakos, E. D., Arad, S., Donovan, M. S., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85, 612–624. doi:10.1037/0021-9010.85.4.612
Pulakos, E. D., & O'Leary, R. S. (2010). Defining and measuring results of workplace behavior. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 513–529). New York, NY: Routledge.
Putka, D. J., & Sackett, P. R. (2010). Reliability and validity. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (pp. 9–49). New York, NY: Routledge.
Quinn, R. W., & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Science, 29, 363–377. doi:10.1287/mnsc.29.3.363
Reb, J., & Cropanzano, R. (2007). Evaluating dynamic performance: The influence of salient gestalt characteristics on performance ratings. Journal of Applied Psychology, 92, 490–499. doi:10.1037/0021-9010.92.2.490
Robert, G., & Hockey, J. (1997). Compensatory control in the regulation of human performance under stress and high workload: A cognitive-energetical framework. Biological Psychology, 45, 73–93. doi:10.1016/S0301-0511(96)05223-4
Robinson, S. L., & Bennett, R. J. (1995). A typology of deviant workplace behaviors: A multidimensional scaling study. Academy of Management Journal, 38, 555–572. doi:10.2307/256693
Rosse, R. L., Campbell, J. P., & Peterson, N. G. (2001). Personnel classification and differential job assignments: Estimating classification gains. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits of personnel selection and classification (pp. 453–506). Hillsdale, NJ: Erlbaum.
Runco, M. A. (2004). Creativity. Annual Review of Psychology, 55, 657–687. doi:10.1146/annurev.psych.55.090902.141502
Ryan, A. M., & Ford, K. J. (2010). Organizational psychology and the tipping point of professional identity. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 241–258. doi:10.1111/j.1754-9434.2010.01233.x
Sackett, P. R., & Laczo, R. M. (2003). Job and work analysis. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 21–37). Hoboken, NJ: Wiley.
Salas, E., Rosen, M. S., & DiazGranados, D. (2010). Expertise-based intuition and decision making in organizations. Journal of Management, 36, 941–973. doi:10.1177/0149206309350084
Salovey, P., & Mayer, J. D. (1989–1990). Emotional intelligence. Imagination, Cognition and Personality, 9, 185–211.
Schippman, J. S. (2010). Competencies, job analysis, and the next generation of modeling. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 197–231). San Francisco, CA: Jossey-Bass.
Schippman, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., … Sanchez, J. I. (2000). The practice of competency modeling. Personnel Psychology, 53, 703–740. doi:10.1111/j.1744-6570.2000.tb00220.x
Schmidt, A. M., Dolis, C. M., & Tolli, A. P. (2009). A matter of time: Individual differences, contextual dynamics, and goal progress effects on multiple-goal self-regulation. Journal of Applied Psychology, 94, 692–709. doi:10.1037/a0015012
Schmidt, F., & Hunter, J. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274. doi:10.1037/0033-2909.124.2.262
Schneider, B. (1990). The climate for service: An application of the climate construct. In B. Schneider (Ed.), Organizational climate and culture (pp. 383–412). San Francisco, CA: Jossey-Bass.
Schraagen, J. M., Chipman, S. F., & Shalin, V. (Eds.). (2000). Cognitive task analysis. Mahwah, NJ: Erlbaum.
Schwartz, S. H., & Bilsky, W. (1990). Toward a theory of the universal content and structure of values: Extensions and cross-cultural replications. Journal of Personality and Social Psychology, 58, 878–891. doi:10.1037/0022-3514.58.5.878
Scott, J. C., & Pearlman, K. (2010). Assessment for organizational change: Mergers, restructuring, and downsizing. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 533–575). San Francisco, CA: Jossey-Bass.
Scott, J. C., & Reynolds, D. H. (Eds.). (2010). Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent. San Francisco, CA: Jossey-Bass.
Secretary's Commission on Achieving Necessary Skills. (1999). Skills and tasks for jobs. Washington, DC: U.S. Department of Labor.
Selye, H. (1975). Confusion and controversy in the stress field. Journal of Human Stress, 1, 37–44. doi:10.1080/0097840X.1975.9940406
Simon, H. A. (1992). What is an explanation of behavior? Psychological Science, 3, 150–161. doi:10.1111/j.1467-9280.1992.tb00017.x
Smith, P. C., Kendall, L. M., & Hulin, C. L. (1969). The
measurement of satisfaction in work and retirement.
Chicago, IL: Rand McNally.
Society for Industrial and Organizational Psychology.
(2003). Principles for the validation and use of personnel selection procedures. Bowling Green, OH: Author.
Sonnentag, S., & Frese, M. (2003). Stress in organizations. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 453–492). Hoboken, NJ: Wiley.
Sonnentag, S., & Frese, M. (2012). Performance dynamics. In S. Kozlowski (Ed.), Oxford handbook of industrial and organizational psychology (pp. 548–575). New York, NY: Oxford University Press.
Spearman, C. (1904). General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201–293. doi:10.2307/1412107
Spector, P. E., Bauer, J. A., & Fox, S. (2010). Measurement artifacts in the assessment of counterproductive work behavior and organizational citizenship behavior: Do we know what we think we know? Journal of Applied Psychology, 95, 781–790. doi:10.1037/a0019477
Steedle, J., Kugelmass, H., & Nemeth, A. (2010). What do they measure? Comparing three learning outcomes assessments. Change: The Magazine of Higher Learning, 42, 33–37. doi:10.1080/00091383.2010.490491
Steel, P., & König, C. (2006). Integrating theories of motivation. Academy of Management Review, 31, 889–913. doi:10.5465/AMR.2006.22527462
Sternberg, R. J. (2003). A broad view of intelligence: The theory of successful intelligence. Consulting Psychology Journal: Practice and Research, 55, 139–154. doi:10.1037/1061-4087.55.3.139
Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50, 912–927. doi:10.1037/0003-066X.50.11.912
Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680. doi:10.1126/science.103.2684.677
Stewart, G. L., & Nandkeolyar, A. K. (2006). Adaptation and intraindividual variation in sales outcomes: Exploring the interactive effects of personality and environmental opportunity. Personnel Psychology, 59, 307–332.
Stice, J. E. (Ed.). (1987). Teaching critical thinking and problem solving abilities. San Francisco, CA: Jossey-Bass.
Strong, M. H., Jeanneret, P. R., McPhail, S. M., Blakley, B. R., & D'Egidio, E. L. (1999). Work context: Taxonomy and measurement of the work environment. In N. G. Peterson, M. D. Mumford, W. C. Borman, P. R. Jeanneret, & E. A. Fleishman (Eds.), An occupational information system for the 21st century: The development of O*NET (pp. 127–145). Washington, DC: American Psychological Association. doi:10.1037/10313-008
Sturman, M. C. (2003). Searching for the inverted u-shaped relationship between time and performance: Meta-analyses of the experience/performance, tenure/performance, and age/performance relationships. Journal of Management, 29, 609–640.
Sullivan, B. A., & Hansen, J. C. (2004). Mapping associations between interests and personality: Toward a conceptual understanding of individual differences in vocational behavior. Journal of Counseling Psychology, 51, 287–298. doi:10.1037/0022-0167.51.3.287
Taras, V., Kirkman, B. L., & Steel, P. (2010). Examining the impact of culture's consequences: A three-decade, multilevel, meta-analytic review of Hofstede's cultural value dimensions. Journal of Applied Psychology, 95, 405–439. doi:10.1037/a0018938
Tellegen, A. (1982). Brief manual of the Multidimensional
Personality Questionnaire. Unpublished manuscript,
University of Minnesota, Minneapolis.
Tellegen, A., & Waller, N. (2000). Exploring personality through test construction: Development of the Multidimensional Personality Questionnaire. In S. R. Briggs & J. M. Cheek (Eds.), Personality measures: Development and evaluation (Vol. 1, pp. 133–161). Greenwich, CT: JAI Press.

Tetrick, L., Perrewé, P. L., & Griffin, M. (2010). Employee work-related health, stress, and safety. In J. L. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 531–549). New York, NY: Routledge.
Tett, R. P., Guterman, H. A., Bleier, A., & Murphy, P. A. (2000). Development and content validation of a hyperdimensional taxonomy of managerial competence. Human Performance, 13, 205–251. doi:10.1207/S15327043HUP1303_1
Thurstone, L. L. (1928). Attitudes can be measured. American Journal of Sociology, 33, 529–554. doi:10.1086/214483
Tippins, N. T., & Hilton, M. L. (Eds.). (2010). A database for a changing economy: Review of the Occupational Information Network (O*NET). Panel to Review the Occupational Information Network (O*NET), National Research Council. Washington, DC: National Academies Press.
Trice, H. M., & Beyer, J. M. (1993). The cultures of work
organizations. Englewood Cliffs, NJ: Prentice Hall.
Unsworth, K. (2001). Unpacking creativity. Academy of Management Review, 26, 289–297.
U.S. Office of Personnel Management. (2007). Delegated
examining operations handbook: A guide for federal
agency examining offices. Washington, DC: U.S.
Office of Personnel Management.
Van Iddekinge, C. H., Putka, D. J., & Campbell, J. P. (2011). Reconsidering vocational interests for personnel selection: The validity of an interest-based selection test in relation to job knowledge, job performance, and continuance intentions. Journal of Applied Psychology, 96, 13–33. doi:10.1037/a0021193
Verbeke, W., Volgering, M., & Hessels, M. (1998). Exploring the conceptual expansion within the field of organizational behavior: Organizational climate and organizational culture. Journal of Management Studies, 35, 303–329. doi:10.1111/1467-6486.00095
Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2005). Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences. Journal of Applied Psychology, 90, 108–131. doi:10.1037/0021-9010.90.1.108
Vosburgh, R. M. (2008). State-trait returns! And one practitioner's request. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 72–73. doi:10.1111/j.1754-9434.2007.00014.x

Vroom, V. (1964). Work and motivation. Chichester,


England: Wiley.
Warr, P. B. (1987). Work, unemployment, and mental
health. Oxford, England: Oxford University Press.
Warr, P. B. (1994). A conceptual framework for the study of work and mental health. Work and Stress, 8, 84–97. doi:10.1080/02678379408259982
Watson, D., & Clark, L. A. (1993). Behavioral disinhibition versus constraint: A dispositional perspective. In D. M. Wegner & J. W. Pennebaker (Eds.), Handbook of mental control (pp. 506–527). New York, NY: Prentice Hall.
Weiss, H. M. (2002). Deconstructing job satisfaction: Separating evaluations, beliefs, and affective experiences. Human Resource Management Review, 12, 173–194. doi:10.1016/S1053-4822(02)00045-1
Weiss, H. M., & Rupp, D. E. (2011). Experiencing work: An essay on a person-centric work psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 83–97. doi:10.1111/j.1754-9434.2010.01302.x
White, R. W. (1959). Motivation reconsidered: The concept of competence. Psychological Review, 66, 297–333. doi:10.1037/h0040934
Yukl, G. A., Gordon, A., & Taber, T. (2002). A hierarchical taxonomy of leadership behavior: Integrating a half century of behavior research. Journal of Leadership and Organizational Studies, 9, 15–32. doi:10.1177/107179190200900102
Zedeck, S. (Ed.). (2010). APA handbook of industrial
and organizational psychology. Washington, DC:
American Psychological Association.
Zeidner, J., Johnson, C. D., & Scholarios, D. (1997). Evaluating military selection and classification systems in the multiple job context. Military Psychology, 9, 169–186. doi:10.1207/s15327876mp0902_4
Zhou, J., & Shalley, C. E. (2003). Research on employee creativity: A critical review and directions for future research. In J. J. Martocchio & G. R. Ferris (Eds.), Research in personnel and human resource management (Vol. 22, pp. 165–217). Oxford, England: Elsevier Science.
Zyphur, M. J., Chaturvedi, S., & Arvey, R. (2008). Job performance over time is a function of latent trajectories and previous performance. Journal of Applied Psychology, 93, 217–224. doi:10.1037/0021-9010.93.1.217


Chapter 23

Work Analysis for Assessment


Juan I. Sanchez and Edward L. Levine

The purpose of this chapter is to review extant research and practices concerning the role that job
analysis plays in the assessment process. Job analysis
is defined via a combination of two definitions
adapted from Brannick, Levine, and Morgeson
(2007) and Sanchez and Levine (2012). Job analysis
is made up of a set of systematic methods aimed at
explaining what people do at work and the context
in which they do it, understanding the essential
nature and meaning of their role in an organization,
and elucidating the human attributes needed to
carry out their role. Although the target of the analysis is often a set of positions that together are labeled
a job, job analysis need not be confined by job
boundaries but may instead focus on segments of
the job, teams, and the broader role enacted by people in organizations. To signify this broader focus,
the more encompassing term work analysis has
been proposed in lieu of job analysis (Sanchez, 1994;
Sanchez & Levine, 1999, 2001). Work analysis has
been the term of choice in recent reviews of the literature (Morgeson & Dierdorff, 2011; Sanchez &
Levine, 2012). Both terms, job analysis and work analysis, are used interchangeably in this chapter.
Essentially, job analysis has been used since its
inception to ensure that individual assessments target those behaviors and attributes required for performance of a job or group of jobs, as opposed to
arbitrary or irrelevant behaviors and attributes
(Münsterberg, 1913; Stern, 1911). For instance, the preferred method for ensuring the job relatedness of licensure and credentialing assessments for a given occupation is to include a job analysis as part of the assessment development (Raymond, 2001; Smith & Hambleton, 1990). The effectiveness of virtually all human resource management practices, including selection, training, performance management, career planning, team performance enhancement, worker
mobility, and deployment of staff, depends on valid
assessments (Brannick et al., 2007). It has also been
a foundational assumption throughout the history of
the field of industrial and organizational psychology
that job analysis serves an irrefutable role in ensuring the development of valid assessments. This
review of practices and research highlights how job
analysis fulfills this role.
As such, the various decisions or inferences that
are supported by job analysis (from the determination of important job behaviors and associated personal attributes to the formulation of an assessment plan) are reviewed. Highlighting the purposeful
role of job analysis helps overcome the conceptualization of job analysis as merely a methodology,
which has emphasized procedural choices such as
the choice of sources and methods through which
job information should be gathered (Pearlman &
Sanchez, 2010; Sackett & Laczo, 2003; Sanchez &
Levine, 1999, 2001). This notion detracts from
attention more appropriately directed toward the
rules through which job-analytic information is
used to draw assessment-related inferences. This
focus aligns job analysis with the dominant conceptualization of construct validity as being concerned
with inferences and their consequences. In contrast
to the notion of job analysis as primarily a series
of methodological choices, it is proposed that job

DOI: 10.1037/14047-023
APA Handbook of Testing and Assessment in Psychology: Vol. 1. Test Theory and Testing and Assessment in Industrial and Organizational Psychology, K. F. Geisinger (Editor-in-Chief)
Copyright © 2013 by the American Psychological Association. All rights reserved.
