Campbell Chapter: Individual Differences Relevant to Performance
The basic theme of this chapter is that the assessment enterprise in industrial and organizational
(I/O) psychology is very broad, very complex, and
very intense. The major underlying reason is that
the world of work constitutes the major portion of
almost everybody's adult life, over a long period
of time. It is complicated. The major components of
this complexity are the broad array of variables that
must be assessed; the multidimensionality of virtually every one of them; the difficulties involved in
developing specifications for such a vast array of
variables; the wide variety of assessment methods;
the intense interplay among science, research, and
practice; and the critical value judgments that come
into play. This chapter gives a structured overview
of these issues, with particular reference to substantively modeling psychology's major variable
domains and the attendant assessment issues that
are raised. The conclusion is that substantive specifications for what psychologists are trying to assess
are critically important, and I/O psychologists
should not shortchange this requirement, no matter
how much the marketplace seems to demand
otherwise.
To be fair, the term assessment can take on different meanings. Perhaps its narrowest construction is
as a multifactor evaluation of specific individuals in
terms of their suitability for a specific course of
action, such as selection, training, or promotion.
However, if the full spectrum of research and practice
concerning the applications of psychology to the
world of work is considered, assessment becomes a
much, much broader activity. This chapter takes the
DOI: 10.1037/14047-022
APA Handbook of Testing and Assessment in Psychology: Vol. 1. Test Theory and Testing and Assessment in Industrial and Organizational Psychology,
K. F. Geisinger (Editor-in-Chief)
Copyright 2013 by the American Psychological Association. All rights reserved.
John P. Campbell
Individual Performance
Before the mid-1980s, there was, relative to the
assessment of individual performance, simply "the
criterion problem" (J. T. Austin & Villanova,
1992), which was the problem of finding some existing and applicable indicator that could be construed
as a measure (i.e., assessment) of individual performance (e.g., sales, number of pieces produced)
while not worrying too much about the validity,
reliability, deficiency, and contamination of the indicators. Since then, much has happened regarding
how performance is defined and how its latent structure is modeled.
In brief, the consensus is that individual performance is best defined as consisting of the actions
people engage in at work that are directed at achieving the organization's goals and that can be scaled in
terms of how much they contribute to said goals.
For example, sometimes it takes a great deal of
covert thinking before the individual does something. Performance is the action, not the thinking
that preceded the action, and someone must identify those actions that are relevant to the organization's goals and those that are not. For those that are
(i.e., performance), the level of proficiency with
which the individual performs them must be scaled.
Both the judgment of relevance and the judgment of
level of proficiency depend on a specification of the
organization's important substantive goals, not content-free goals such as "make a profit."
Nothing in this definition requires that a set of
performance actions be circumscribed by the term
job or that they remain static over a significant
length of time. Neither does it require that the goals
of an organization remain fixed or that a particular
management cadre be responsible for determining
the organization's goals (also known as vision). However, for performance assessment to take place, the
major operative goals of the organization, within
some meaningful time frame, must be known, and
the methods by which individual actions are judged
to be goal relevant, and scaled in terms of what represents high and low proficiency, must be legitimized by the stakeholders empowered to do so by
the organization's charter. Otherwise, there is no
organization. This is as true for a family as it is for
a corporation.
This definition creates a distinction between performance, as defined earlier, and the outcomes of
performance (e.g., sales level, incurred costs) that
are not solely determined by the performance of a
particular individual, even one of the organization's top executives.
If these outcome indicators represent the goals of
the organization, then individual performance
should certainly be related to them. If not, the specifications for individual performance are wrong and
need changing or, conversely, the organization is
pursuing the wrong goals. If the variability in an
outcome indicator is totally under the individual's
control, then it is a measure of performance.
Given an apparent consensus on this definition
of performance, considerable effort has been
devoted to specifying the dimensionality of performance, in the context of the latent structure of the
performance actions required by a particular occupation, job, position, or work role (see Bartram,
2005; Borman & Brush, 1993; Borman & Motowidlo, 1993; Campbell, McCloy, Oppler, & Sager,
1993; Griffin, Neal, & Parker, 2007; Murphy, 1989a;
Organ, 1988; Yukl, Gordon, & Taber, 2002). These
models have become known as performance models,
and they seem to offer differing specifications for
what constitutes the nature of performance as a
construct. However, the argument here is that
the correspondence among them is virtually total.
Campbell (2012) has integrated all past and current specifications of the dimensional structure of
the dependent variable, individual performance,
including those dealing with leadership and management performance, and the result is summarized in
the eight basic factors discussed in the next section.
Orthogonality is not asserted or implied, but
content distinctions that have different implications
for selection, training, and organizational outcomes
certainly are. Although scores on the different
dimensions may be added together for a specific
measurement purpose, it is not possible to provide
a substantive specification for a general factor.
Whether dimensions can be as general as contextual
performance or citizenship behavior is also
problematic.
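The idea that dimension scores may be added together for a specific measurement purpose can be illustrated with a small sketch. The dimension names and weights below are hypothetical, chosen only for illustration; they are not part of any published performance model.

```python
# Hypothetical sketch: combining performance-dimension scores into a
# weighted composite for one measurement purpose. Names and weights are
# illustrative assumptions, not taken from the models discussed above.

def composite_score(scores: dict, weights: dict) -> float:
    """Weighted sum of dimension scores (weights are assumed to sum to 1)."""
    return sum(weights[d] * scores[d] for d in scores)

scores = {"technical": 4.0, "communication": 3.0, "initiative": 5.0}
weights = {"technical": 0.5, "communication": 0.3, "initiative": 0.2}

print(round(composite_score(scores, weights), 2))  # 3.9
```

Note that the composite is purpose specific: a different selection or training question would call for different weights, which is consistent with the point that no substantive general factor is being asserted.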
Basic factors. The basic substantive factors of
individual performance in a work role (which are
not synonymous with Campbell et al., 1993) are
asserted to be the following.
Factor 1: Technical Performance. All models
acknowledge that virtually all jobs or work roles
have technical performance requirements. Such
requirements can vary by substantive area (driving a
vehicle vs. analyzing data) and by level of complexity or difficulty within area (driving a taxi vs. driving
a jetliner; tabulating sales frequencies vs. modeling
institutional investment strategies). Technical performance is not to be confused with task performance.
A task is simply one possible unit of description that
could be used for any performance dimension.
The subfactors for this dimension are obviously
numerous, and the domain could be parsed into
wide or narrow slices. The Occupational Information Network (O*NET; Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999) is based on the
U.S. Department of Labor's Standard Occupational Classification.
Exhibit 22.1
Six Basic Factors Making Up Leadership
Performance
1. Consideration, support, person centered: Providing
recognition and encouragement, being supportive
when under stress, giving constructive feedback,
helping others with difficult tasks, building networks
with and among others
2. Initiating structure, guiding, directing: Providing task
assignments; explaining work methods; clarifying
work roles; providing tools, critical knowledge, and
technical support
3. Goal emphasis: Encouraging enthusiasm and
commitment for the group's or organization's
goals, emphasizing the important missions to be
accomplished
4. Empowerment, facilitation: Delegating authority and
responsibilities to others, encouraging participation,
allowing discretion in decision making
5. Training, coaching: One-on-one coaching and
instruction regarding how to accomplish job tasks,
how to interact with other people, and how to deal
with obstacles and constraints
6. Serving as a model: Models appropriate behavior
regarding interacting with others, acting unselfishly,
working under adverse conditions, reacting to
crisis or stress, working to achieve goals, showing
confidence and enthusiasm, and exhibiting principled
and ethical behavior.
Exhibit 22.2
Eight Basic Factors of Management Performance
1. Decision making, problem solving, and strategic
innovation: Making sound and timely decisions about
major goals and strategies. Includes gathering
information from both inside and outside the organization,
staying connected to important information sources,
forecasting future trends, and formulating strategic
and innovative goals to take advantage of them
2. Goal setting, planning, organizing, and budgeting:
Formulating operative goals; determining how to
use personnel and resources (financial, technical,
logistical) to accomplish goals; anticipating potential
problems; estimating costs
3. Coordination: Actively coordinating the work of two
or more units or the work of several work groups
within a unit; scheduling operations; includes
negotiating and cooperating with other units
4. Monitoring unit effectiveness: Evaluating progress
and effectiveness of units against goals; monitoring
costs and resource consumption
5. External representation: Representing the organization
to those not in the organization (e.g., customers, clients,
government agencies, nongovernment organizations,
the public); maintaining a positive organizational image;
serving the community; answering questions and
complaints from outside the organization
6. Staffing: Procuring and providing for the
development of human resources; not one-on-one
coaching, training, or guidance, but providing the
human resources that the organization or unit needs
7. Administration: Performing day-to-day administrative
tasks, keeping accurate records, documenting
actions; analyzing routine information and making
information available in a timely manner
8. Commitment and compliance: Compliance with
the policies, procedures, rules, and regulations of
the organization; full commitment to orders and
directives, together with loyal constructive criticism
of organizational policies and actions
Note. From Oxford Handbook of Industrial and
Organizational Psychology (p. 173), by S. Kozlowski (Ed.),
2012, New York, NY: Oxford University Press. Copyright
2012 by Oxford University Press. Adapted with permission.
Performance Assessment
The assessment of individual work-role performance
may be I/O psychology's most difficult assessment
requirement. J. T. Austin and Villanova (1992) provided ample documentation of the problem. Archival objective measures are few and far between and
frequently suffer from contamination. Ratings,
although they do yield meaningful assessments
(W. Bennett, Lance, & Woehr, 2006; Conway &
Huffcutt, 1997), tend to suffer from low reliability,
method variance, contamination, and the possible
intrusion of implicit models of performance held by
the raters that do not correspond to the stated specifications of the assessment procedure (Borman, 1987;
Conway, 1998). Alternatives to ratings have been
methods such as performance in a simulator, performance on various forms of job samples (Campbell &
Knapp, 2001), and using various indicators of goal
Productivity
Productivity, particularly with regard to its assessment, is a frequently misused term in I/O psychology. Its origins are in the economics of the firm,
where it refers to the ratio of the value of output
(i.e., effectiveness) to the costs of achieving that
level of output. Holding output constant, productivity increases as the costs associated with achieving
that level of output decrease. It is possible to talk
about the productivity of capital, the productivity of
technology, and the productivity of labor, which are
usually indexed by the value of output divided by
the cost of the labor hours needed to produce it. For
the productivity of labor, it would be possible to
consider individual productivity, team productivity,
or organizational productivity. Assessment of individual productivity would be a bit tricky, but it must
be specified as the ratio of performance level (on
each major dimension) to the cost of reaching that
level (on each major dimension). Costs could be
reflected by number of hours needed or wage rates.
For example, terminating high wage-rate employees
and hiring cheaper (younger?) individuals who can
do the same thing would increase individual
productivity.
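The ratio definition above can be made concrete with a brief sketch. All names and figures are hypothetical and serve only to show the arithmetic of the labor-productivity index.

```python
# Illustrative sketch of the labor-productivity ratio described above:
# productivity = value of output / cost of the labor hours used to produce it.
# All figures are hypothetical.

def labor_productivity(output_value: float, hours: float, wage_rate: float) -> float:
    """Value of output divided by labor cost (hours x wage rate)."""
    return output_value / (hours * wage_rate)

# Same output, different labor costs:
before = labor_productivity(output_value=50_000, hours=160, wage_rate=40.0)
after = labor_productivity(output_value=50_000, hours=160, wage_rate=25.0)

print(round(before, 2))  # 7.81  (50,000 / 6,400)
print(round(after, 2))   # 12.5  (50,000 / 4,000)
# Holding output constant, lowering labor cost raises the productivity index.
```

This is exactly the mechanism noted in the text: replacing high wage-rate employees with cheaper ones who produce the same output raises the index without any change in performance.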
Turnover
Turnover refers to the act of leaving an organization.
Turnover can be voluntary or involuntary, as when
an individual is terminated by the organization.
Both voluntary turnover and involuntary termination can be good or bad depending on the circumstances. Depending on the work role, turnover
could also vary as a function of determinants that
operate at various times (e.g., variation in turnover
could occur as a function of the initial socialization
process, early vs. late promotions, vesting of retirement benefits).
For assessment purposes, great benefit would
result if a latent structure for turnover could be
specified in terms of the substantive reasons individuals leave. The beginnings of such a latent structure
can be found in the integrative reviews of turnover
research by Griffeth, Hom, and Gaertner (2000),
Job Satisfaction
One taxonomy of such dependent variables valued
by the individual is represented by the 20 dimensions assessed by the Minnesota Importance Questionnaire (Dawis & Lofquist, 1984), which are listed
in Exhibit 22.3.
Within the theory of work adjustment (Dawis,
Dohm, Lofquist, Chartrand, & Due, 1987; Dawis &
Lofquist, 1984), the variables in Exhibit 22.3 are
assessed in different ways for different reasons. The
Occupational Reinforcer Pattern is a rating by supervisors or managers of the extent to which a particular work role provides outcomes representing each
of the variables. The Minnesota Importance Questionnaire is a self-rating by the individual of the
importance of being able to experience high levels of
each of the 20 dimensions. The Minnesota Satisfaction Questionnaire is a self-rating of the degree to
which the individual is satisfied with the level of
each variable that he or she is currently experiencing. According to the theory of work adjustment,
overall work satisfaction should be a function of the
degree to which the work-role characteristics judged
to be important by the individual are indeed provided by the work role, or job.
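The prediction of the theory of work adjustment sketched above, that overall satisfaction depends on whether the characteristics the individual rates as important are actually provided, can be illustrated with a minimal sketch. The dimension names, 1-to-5 scales, and weighting scheme are illustrative assumptions, not the actual scoring procedures of the Minnesota instruments.

```python
# Hypothetical sketch of the work-adjustment idea: overall satisfaction as an
# importance-weighted average of facet satisfaction. The facet names, scales,
# and weighting are assumptions for illustration, not the MIQ/MSQ scoring.

def weighted_satisfaction(importance: dict, satisfaction: dict) -> float:
    """Average facet satisfaction, weighted by the facet's rated importance."""
    total_weight = sum(importance.values())
    return sum(importance[k] * satisfaction[k] for k in importance) / total_weight

importance = {"ability_utilization": 5, "security": 3, "variety": 2}
satisfaction = {"ability_utilization": 4, "security": 2, "variety": 5}

print(round(weighted_satisfaction(importance, satisfaction), 2))  # 3.6
# Facets the person rates as important count more toward the overall score.
```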
Exhibit 22.3 represents the literature's most
finely differentiated portrayal of the latent structure
of what individuals want from work. There are other
portrayals. For example, a long time ago, Herzberg
(1959) grouped 16 outcomes obtained via a critical
incident procedure (he called it "story-telling") into
two higher order factors variously called motivators
and hygienes or intrinsic and extrinsic. The Job
Exhibit 22.3
The 20 First-Level Job Outcomes Incorporated in Dawis and Lofquist's (1984) Minnesota Theory
of Work Adjustment
1. Ability utilization: The chance to do things that make use of one's abilities
2. Achievement: Obtaining a feeling of accomplishment and achievement from work
3. Activity: Being able to keep busy all the time, freedom from boredom
4. Advancement: Having realistic chances for promotion and advancement
5. Authority: Being given the opportunity to direct the work of others
6. Company policies and practices: Company policies and practices that are useful, fair, and well thought out
7. Compensation: Compensation that is fair, equitable, and sufficient for the work being done
8. Coworkers: Good interpersonal relationships among coworkers
9. Creativity: The opportunity to innovate and try out new ways of doing things in one's job
10. Independence: The chance to work without constant and close supervision
11. Moral values: Working does not require being unethical or going against one's conscience
12. Recognition: Receiving praise and recognition for doing a good job
13. Responsibility: The freedom to use one's own judgment
14. Security: Not having to worry about losing one's job
15. Social service: Opportunities to do things for other people as a function of being in a particular work role
16. Social status: The opportunity to be "somebody" in the community, as a function of working in a particular job and organization
17. Supervision–human relations: The respect and consideration shown by one's manager or supervisor
18. Supervision–technical: Having a manager or supervisor who is technically competent and makes good decisions
19. Variety: Having a job that incorporates a variety of things to do
20. Working conditions: Having working conditions that are clean, safe, and comfortable
Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New
York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.
It is instructive, or at least interesting, to compare the 20 job characteristics listed in Exhibit 22.3
with other individual work outcomes that the list
does not seem to include but that have received
important research or assessment attention.
Examples follow.
Justice
A considerable literature exists on distributive and
procedural justice (Colquitt, 2001; Colquitt, Conlon, Wesson, Porter, & Ng, 2001) that could be
viewed as subfactors of Outcome 6 in Exhibit 22.3.
Distributive justice refers to an individual's self-assessment of how well he or she is being rewarded
by the organization. Procedural justice refers to the
individual's assessment of the relative fairness of the
organization's procedures for managing and dispensing rewards. A meta-analysis by Credé (2006)
showed perceptions of procedural justice to have a
somewhat higher mean correlation with overall job
Overall Well-Being
Several dependent variables in the workplace, from
the individuals point of view, go beyond job satisfaction and perceived distributive and procedural
justice to include additional facets of overall well-being, such as the following:
the goals of the family and the goals of the individual at work are different; and the influence of
gender (e.g., whether the man or woman stays
home). The touchstone for assessment of the
dependent variable is defining high scores as the
perception (by the job holder) that work and
family demands are in balance. That is, work
demands do not degrade family goals, and family demands do not degrade individual work
goals. Consequently, assessment should take
into account how well the two sets of goals are
aligned, and they may not be weighted equally
(e.g., for economic reasons). Regardless of the
relative weights, Cleveland and Colella (2010)
made a strong argument for why both sets of
goals strongly influence work–family conflict
assessments.
Work-related stress. The study of work stress has
generated a very large literature (Sonnentag &
Frese, 2003), and work stress is frequently
offered as an important criterion variable because
of the high frequencies with which it is reported
(Harnois & Gabriel, 2000; Levi & Lunde-Jensen,
1996; National Institute for Occupational Safety
and Health, 1999). Stress can be defined as a set
of physiological, behavioral, or psychological
responses to demands (work, family, or environmental) that are perceived to be challenging
or threatening (Neuman, 2004). Assessment of
individual stress levels is a more complex enterprise than assessment of job satisfaction, mental
or physical health, or work–family conflict. The
measurement operations could be physiological (e.g., cortisol levels in the blood), behavioral
(e.g., absenteeism), psychological (depression),
or perceptual (e.g., self-descriptions of stress
levels), and the construct validity of any one of
them is not assured given the complexities of
modeling stress as a construct.
A somewhat overly simplistic model of stress
as a criterion would be that the work–family situation incorporates potential stressors. Whether a
potential stressor (e.g., a new project deadline)
leads to a stress reaction is a function of how it is
evaluated by the individual. For some, the new
deadline might be threatening (e.g., it increases
the probability of a debilitating failure or makes
Individual Perspective:
A Summary Comment
Job satisfaction, distributive and procedural justice,
physical health, mental and psychological health,
workfamily conflict, stress, or simply evaluation of
overall well-being have been discussed as dependent
variables in the work setting that are important to
individuals. That is, most people value being satisfied
with their work, being physically and psychologically
healthy, achieving a work lifenon-work-life balance,
and experiencing optimal stress levels. However, in
the I/O psychology literature, these variables are usually not discussed as ends in themselves, but as independent variables that have an effect on the
organization's bottom line (Cleveland & Colella,
2010; Tetrick et al., 2010). Depending on which perspective is chosen, the purpose of assessment is different, and the choice of assessment methods may
differ as well.
INDEPENDENT VARIABLE LANDSCAPE
Compared with the dependent variable domain, the
independent variable domain is a lush and verdant
landscape, and much more intensely researched.
Traits: Abilities
The individual differences tradition in psychology in
general, and I/O psychology in particular, has
devoted much attention to the assessment of individual characteristics that are relatively stable over
the adult working years. Assessments of such characteristics are used to predict future performance for
selection and promotion purposes, predict who will
benefit from specific training or development experiences, predict performance failures, provide the
individual profiles needed to determine person–job fit.
The available evidence pertaining to these constructs has been reviewed at some length elsewhere
(Gottfredson, 2003; Landy, 2005; Lievens & Chan,
2010; Murphy, 2006). The overall conclusion must
still be that construct validity is lacking for measures
of these non-g intelligences and that they are in fact
better represented by other already existing variables. For example, a recent study by Baum, Bird,
and Singh (2011) evaluated a carefully constructed
domain-specific situational judgment test of how
best to develop businesses in the printing industry,
which was then called a test of practical intelligence.
With this juxtaposition, knowledge of virtually any
specific domain of job-related knowledge could be
labeled practical intelligence. What's in a name?
Traits: Dispositions
Still within the context of stable, or at least quasi-stable, traits, the I/O psychology independent variable landscape includes many constructs reflective of
dispositional tendencies, that is, tendencies toward
characteristic behavior in a given context. Personality, motives, goal orientation, values, interests, and attitudes are the primary labels for the different domains.
Personality. The assessment of personality dominates this landscape (Hough & Dilchert, 2010; see
also Chapter 28, this volume) in terms of both the
wide range of available assessment instruments
(R. Hogan & Kaiser, 2010) and the sheer amount
of research relating personality to a wide range of
dependent variables (Hough & Ones, 2001; Ones,
Dilchert, Viswesvaran, & Judge, 2007). The efficacy
of personality assessment for purposes of predicting
the I/O psychology dependent variables has had its
ups and downs, moving from up (Ghiselli, 1966)
to down (Guion & Gottier, 1965) to up (Barrick &
Mount, 1991, 2005), to uncertainty (Morgeson
et al., 2007), to reaffirmation (R. Hogan & Kaiser,
2010; Hough & Dilchert, 2010; Ones et al., 2007).
The ups and downs are generally reflective of how
the assessment of personality is represented (e.g.,
narrow vs. broad traits), which dependent variables
are of interest, how predictive validity is estimated,
and the utility ascribed to particular magnitudes of
estimated validity. The bottom line is that personality assessment is a very useful enterprise so long as
existence of a general factor. Whether an assessment should use composite dimensions, factors at the Big Five level of generality, or more
specific facets depends on the measurement
purpose.
At the Big Five level of generality, there is considerable agreement that the five-factor model is
deficient and does not include additional important constructs such as religiosity, traditionalism
or authoritarianism, and locus of control (Hough &
Dilchert, 2010).
and instruction. A performance orientation characterizes individuals who strive for a desirable final
outcome (e.g., final grade). Similar to McClelland
(1985), the goal is to achieve the final outcomes that
the culture defines as high achievement. By contrast,
a mastery or learning orientation characterizes individuals who strive to learn new things regardless of
the effort involved, the frequency of mistakes, or
the nature of the final evaluation. It is learning for
learning's sake.
As noted by DeShon and Gillespie (2005), agreement on the nature of goal orientation's latent structure, and on whether it is a trait, quasi-trait, or state
variable, is not uniform. Considerable research has
focused on whether learning and performance orientations are bipolar or independent and whether one
or both of them are multidimensional (DeShon &
Gillespie, 2005). The answers seem to be that they
are not bipolar and that performance orientation
can be decomposed into performance orientation–
positive (the striving toward final outcomes defined
as achievement) and performance orientation–
negative (the striving to avoid final outcomes
defined as failure). One major implication is that
performance-oriented people will avoid situations in
which a positive outcome is not relatively certain
and that learning-oriented individuals will relish the
opportunity to try, regardless of the probability of a
successful outcome. Assessment of goal orientations
is still at a relatively primitive stage (Payne, Youngcourt, & Beaubien, 2007) and has not addressed the
issue of whether learning or performance orientations are domain specific. For example, could an
individual have a high learning orientation in one
domain (e.g., software development) but not in
another (e.g., cost control)? Also, the question of
whether goal orientation is trait or state has not
been settled. However, even though assessment is
primitive, research has suggested that goal orientation is an important determinant of performance
and satisfaction in training and in the work role
(Payne et al., 2007).
Interests. Interest assessment receives the most
attention within the individual, not the organizational, perspective and is a major consideration in
vocational guidance, career planning, and individual
everyone already knows what they are. Consequently, whether problem solving, creativity, and
critical thinking are intended as trait or state variables is not clear. That is, are they distinct from general cognitive ability, and can they be enhanced via
training and experience? Attempts to assess these
capabilities must somehow deal with this lack of
specification.
Following Simon (1992), problem solving could
be defined as the application of knowledge and skill
capabilities to the development of solutions for ill-structured problems. Ill-structured problems are
characterized as problems for which the methods
and procedures required to solve them cannot be
specified with certainty and for which no correct
solution can be specified a priori. Generating solutions for such problems is nonetheless fundamentally and critically important (e.g., What should be
the organization's research and development strategy? What is the optimal use of training resources?
How can the coordination among teams be maximized?). Specified in this way, a problem-solving
capability is important for virtually all occupations,
which invites a discussion of how it can be developed and assessed. The literature on problem solving within cognitive psychology in general, and with
regard to the study of expertise in particular, is reasonably large (Ericsson, Charness, Feltovich, &
Hoffman, 2006). To make a long story brief, the
conclusions seem to be that (a) there is no general
(i.e., domain-free) capability called problem solving
that can be assessed independently of g; (b) problem-solving expertise, as defined earlier, is domain specific; (c) expert problem solvers in a particular
substantive or technical specialty simply know a lot,
and what they know is organized in a framework
that makes it both useful and accessible; and (d)
experts use a variety of heuristics and cues correctly
to identify and structure problems, determine what
knowledge and skills should be applied to them, and
judge which solutions are useful.
Currently, expert problem solving is viewed as a
dual process (Evans, 2008). That is, solutions are
either retrieved from memory very quickly, seemingly with minimal effort and thought, or a much
more labor-intensive process of problem exploration
and definition occurs, thinking about and evaluating
Creativity
What then are creativity and critical thinking?
Answering such questions in detail is beyond the
scope of this chapter, but the following discussion
seems relevant vis-à-vis their assessment. Comprehensive reviews of creativity theory and research are
provided by Dilchert (2008), Runco (2004), and
Zhou and Shalley (2003).
Creativity has been assessed as both a cognitive
and a dispositional trait, as in creative ability and
creative personality. Both cognitive- and personality-based measures have been developed via both
empirical keying (e.g., against creative vs. noncreative criterion groups) and homogeneous, or
construct-based, keying. Meta-analytic estimates of
the relationships between cognitive abilities and creative ability and between established personality
dimensions (e.g., the Big Five) and creative personality scales are provided by Dilchert (2008) as well as
the correlations of creative abilities and creative personality dimensions with measures of performance.
Within a state framework, creativity can also be
viewed as a facet of ill-structured problem-solving
performance (e.g., George, 2007; Mumford, Baughman, Supinski, Costanza, & Threlfall, 1996). Here,
the difficulty is in distinguishing creative from
noncreative solutions. The specifications for the distinction tend not to go beyond stipulating that creative solutions must be both unique, or novel, and
useful (George, 2007; Unsworth, 2001). That is,
uniqueness by itself may be of no use. In the context
of problem-solving performance, is a unique (i.e.,
creative) solution just another name for a new solution, or is it a distinction between a good solution
and a really good solution (i.e., the latter has more
value than the former, given the goals being pursued)? In general, creativity as a facet of a problem-solving capability does not seem unique. Attempting
to assess creative expertise as distinct from high-level expertise may not be a path well chosen.
Critical Thinking
Similar specification problems characterize the
assessment of critical thinking, which has assumed
rock-star construct status in education, training, and
competency modeling (e.g., Galagan, 2010; Paul &
Elder, 2006; Secretary's Commission on Achieving
Collectively, they are a part of the Office of Personnel Management's MOSAIC system and constitute a much more complete taxonomy of job knowledge requirements than the O*NET.
Portraying the taxonomic structure for direct
and indirect skills requirements is even more
John P. Campbell
problematic than it is for knowledge. O*NET provides a taxonomy of 35 skills that are defined as
cross-occupational (i.e., not occupation specific)
and that vary from the basic skills such as reading,
writing, speaking, and mathematics, to interpersonal
skills such as social perceptiveness, and to technical
skills such as equipment selection and programming. As noted by Tippins and Hilton (2010), the
O*NET skills are very general in nature and generally lacking in specifications. Moreover, two of the
35 O*NET skills are complex problem solving and
critical thinking, the limitations of which were discussed earlier. Again, a wider set of more concretely specified skills is included in the Office of Personnel Management's MOSAIC system, but only for certain designated occupational groups.
Because the skills gap has been such a dominant
topic in labor market analyses (e.g., Davenport,
2006; Galagan, 2010; Liberman, 2011), one might
expect the skills gap literature to provide an array of
substantive skills that are particularly critical for
assessment. It generally does not. Virtually all skills
gap information is obtained via employer surveys in
response to items such as "To what extent are you experiencing a shortage of individuals with appropriate technical skills?" However, the specific technical skills in question are seldom, if ever, specified.
Skills such as leadership, management, customer
service, sales, information technology, and project
management are as specific as it seems to get.
The purposes for which knowledge and skill
assessments might be done are, of course, varied. It
could be for selection, promotion, establishing
needs for training and development, or certification
and licensure, all from the organizational perspective. From the individual perspective, it could be for
purposes such as job search, career guidance, or
self-managed training and education. For organizational purposes, the lack of a taxonomic structure
may not be a serious impediment. Organizations can
develop their own specific measures to meet their
needs, such as specific certification or licensure
examinations. However, for individual job search or
career planning purposes, the lack of a concrete and
substantive taxonomic structure for skills presents
problems. Without one, how do individuals navigate the skills domain when planning their own careers?
State Dispositions
By definition, and in contrast to trait dispositions,
state dispositions are a class of independent variables that determines volitional choice behavior
in a work setting but that can be changed as a
result of changes in the individual's environment.
Disposition-altering changes could be planned
(e.g., training) or unplanned (e.g., peer feedback).
A selected menu of such state dispositions follows.
Job Attitudes
There are many definitions of attitudes (Eagly &
Chaiken, 1993), but one that seems inclusive
stipulates that attitudes have three components:
First, attitudes are centered on an object (e.g., Democrats, professional sports teams, the work you do);
second, an attitude incorporates certain beliefs
about the object (e.g., Democrats tax and spend,
professional sports teams are interesting, the work
you do is challenging); and third, on the basis of
one's beliefs, one has an evaluative-affective
response to the object (e.g., Democrats are no good,
professional sports teams are worth subsidizing, you
love the challenges in your job). The evaluative-affective reaction is what influences choice behavior
(e.g., you vote Republican, you vote for tax subsidies for a professional sports stadium, you will work
hard on your job for as long as you can).
Job satisfaction. The job attitude that has dominated both the I/O research literature and human
resources practice is, of course, job satisfaction, which
was discussed earlier in this chapter as a dependent
variable. However, when job satisfaction is used as an independent variable, its correlation with both performance and retention has been estimated literally hundreds of times (Hulin & Judge, 2003), using the same assessment procedures discussed previously, and the same issues apply (e.g., Weiss, 2002).
In addition to job satisfaction, several other work
attitudes have received attention for both research
and application purposes.
Commitment. As an attitude, commitment in a
work setting can take on any one of several different
Motivational States
Again, in contrast to trait dispositions, such as the
need for achievement, a class of more dynamic motivational states has become increasingly important, at
least in the research literature, as determinants of
choice behavior at work. Consider the following
sections.
Self-efficacy and expectancy. The Bandurian
notion of self-efficacy is the dominant construct
here and is defined as an individual's self-judgment
about his or her relative capability for effective task
performance or goal accomplishment (Bandura,
1982). Self-efficacy judgments are specific to particular domains (e.g., statistical analysis, golf) and
can change with experience or learning. Self-efficacy
is similar to, but not the same as, Vrooms (1964)
definition of expectancy as it functions in his valence-instrumentality-expectancy model of motivated
choice behavior. Expectancy is an individual's personal probability estimate that a particular level of
effort will result in achieving a specific performance
goal. It is very much intended as a within-person
explanation for why individuals make the choices
they do across time, even though it is most frequently
used, mistakenly, as a between-persons assessment.
Instrumentality (risk) and valence (outcome
value). From subjective expected utility to
valence-instrumentality-expectancy theory (Vroom,
Psychometric Landscape
Many features of the psychometric landscape, as
they pertain to measurement and assessment in I/O
psychology, are well known and have not been discussed, yet again, in this chapter. For several assessment purposes, psychologists are governed by the
Standards for Educational and Psychological Testing
(American Educational Research Association
[AERA], American Psychological Association
[APA], and National Council on Measurement
in Education [NCME], 1999) and the Society for Industrial and Organizational Psychology's Principles for the Validation and Use of Personnel Selection Procedures.
High-Stakes Assessment
As is the case for many other subfields, I/O psychology must deal with the assessment complexities of high-stakes testing. Selection for a job, for promotion, and
for entry into educational or training programs are
indeed high-stakes decisions. They make up a large
and critical segment of the research and practice
landscape in I/O psychology, and they significantly
influence the lives of tens of millions of people. The
complexities are intensified enormously by advances in
digital technology and by the ethical, legal, and political environments that influence such decision making.
Each of these testing environment complexities
(i.e., technological, ethical, legal, and political) has
generated its own literature (cf. Farr & Tippins,
2010; Outtz, 2010). The issues include how to deal
with unproctored Internet testing; what feedback to
provide to test takers; determining the presence or
absence of test bias; the currency of federal guidelines; the ethical responsibilities of I/O psychologists;
and the efficacy of using changes in standardized test
scores to evaluate the value added by teachers,
school systems, and universities. Again, these high-stakes issues are simply part of the I/O psychology
assessment landscape, and the field must deal with
them as thoroughly and as directly as it can.
Some Final (At Last) Remarks
The basic theme of this chapter is the assertion that
assessment in I/O psychology is very, very complex.
Complexity refers to the sheer number of variables
across the dependent, independent, and situational
variable spectrums; the multidimensional nature of
both the latent and the observed structures for each
variable; the difficulties involved in developing the
substantive specifications for each dimension and their
covariance structures; the multiplicity of assessment
purposes; the multiplicity of assessment methods; and
the intense interaction between science and practice.
The scientist-practitioner model still dominates, and
that opens the door to the marketplace, high-stakes
decision making, the individual versus organizational
perspectives, and the attendant value judgments that
elicit professional guidelines, governmental rule making, and litigation precedents, all of which have
important and complex implications for assessment.
References
Ackerman, P. L. (1987). Individual differences in skill
learning: An integration of psychometric and
information processing perspectives. Psychological
Bulletin, 102, 3–27. doi:10.1037/0033-2909.102.1.3
Ackerman, P. L. (1988). Determinants of individual
differences during skill acquisition: Cognitive
abilities and information processing. Journal of
Experimental Psychology: General, 117, 288–318.
doi:10.1037/0096-3445.117.3.288
DeShon, R. P., & Gillespie, J. Z. (2005). A motivated action theory account of goal orientation.
Journal of Applied Psychology, 90, 1096–1127.
doi:10.1037/0021-9010.90.6.1096
DeYoung, C. G. (2006). Higher-order factors of the
Big Five in a multi-informant sample. Journal of
Personality and Social Psychology, 91, 1138–1151.
doi:10.1037/0022-3514.91.6.1138
DeYoung, C. G., Hirsh, J. B., Shane, M. S., Papademetris,
X., Rajeevan, N., & Gray, J. R. (2010). Testing
predictions from personality neuroscience: Brain
structure and the Big Five. Psychological Science, 21,
820–828. doi:10.1177/0956797610370159
DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007).
Between facets and domains: 10 aspects of the Big
Five. Journal of Personality and Social Psychology, 93,
880–896. doi:10.1037/0022-3514.93.5.880
Dilchert, S. (2008). Measurement and prediction of creativity at work. Unpublished doctoral dissertation,
University of Minnesota, Minneapolis.
Drasgow, F., Chernyshenko, O. S., & Stark, S. (2010).
75 years after Likert: Thurstone was right! Industrial
and Organizational Psychology: Perspectives on
Science and Practice, 3, 465–476. doi:10.1111/j.1754-9434.2010.01273.x
Dweck, C. S. (1986). Motivational processes affecting
learning. American Psychologist, 41, 1040–1048.
doi:10.1037/0003-066X.41.10.1040
Eagly, A. H., & Chaiken, S. (1993). The psychology of
attitudes. New York, NY: Wadsworth.
Edwards, J. R., & Rothbard, N. P. (2000). Mechanisms
linking work and family: Clarifying the relationship
between work and family constructs. Academy of
Management Review, 25, 178–199.
Edwards, J. R., Scully, J. S., & Bartek, M. D. (1999).
The measurement of work: Hierarchical representation of the multimethod job design questionnaire. Personnel Psychology, 52, 305–334.
doi:10.1111/j.1744-6570.1999.tb00163.x
Elliot, A. J., & Thrash, T. M. (2002). Approach-avoidance motivation in personality: Approach
and avoidance temperaments and goals. Journal
of Personality and Social Psychology, 82, 804–818.
doi:10.1037/0022-3514.82.5.804
Elliott, E. S., & Dweck, C. S. (1988). Goals: An approach
to motivation and achievement. Journal of Personality
and Social Psychology, 54, 5–12. doi:10.1037/0022-3514.54.1.5
Embretson, S. E. (2004). The second century of ability
testing: Some new predictions and speculations.
Measurement, 2, 1–32.
Embretson, S. E. (2006). The continued search for nonarbitrary metrics in psychology. American Psychologist,
61, 50–55. doi:10.1037/0003-066X.61.1.50
Ployhart, R. E., & Bliese, P. D. (2006). Individual adaptability (I-ADAPT) theory: Conceptualizing the
antecedents, consequences, and measurement of
individual differences in adaptability. In E. Salas
(Ed.), Advances in human performance and cognitive engineering research (Vol. 6, pp. 3–39). Oxford,
England: Emerald Group.
Ployhart, R. E., & Hakel, M. D. (1998). The substantive nature of performance variability: Predicting
interindividual differences in intraindividual
performance. Personnel Psychology, 51, 859–901.
doi:10.1111/j.1744-6570.1998.tb00744.x
Sackett, P. R., & Laczo, R. M. (2003). Job and work analysis. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski
(Eds.), Handbook of psychology: Vol. 12. Industrial and
organizational psychology (pp. 21–37). Hoboken, NJ:
Wiley.
Salovey, P., & Mayer, J. D. (1989–1990). Emotional intelligence. Imagination, Cognition and Personality, 9, 185–211.
Secretary's Commission on Achieving Necessary Skills.
(1999). Skills and tasks for jobs. Washington, DC:
U.S. Department of Labor.
Schippmann, J. S. (2010). Competencies, job analysis, and
the next generation of modeling. In J. C. Scott &
D. H. Reynolds (Eds.), Handbook of workplace
assessment: Evidence-based practices for selecting and
developing organizational talent (pp. 197–231). San
Francisco, CA: Jossey-Bass.
Schippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., . . . Sanchez, J. I. (2000). The practice
of competency modeling. Personnel Psychology, 53,
703–740. doi:10.1111/j.1744-6570.2000.tb00220.x
Schmidt, A. M., Dolis, C. M., & Tolli, A. P. (2009). A
matter of time: Individual differences, contextual
dynamics, and goal progress effects on multiple-goal
self-regulation. Journal of Applied Psychology, 94,
692–709. doi:10.1037/a0015012
Schmidt, F., & Hunter, J. (1998). The validity and utility of selection methods in personnel psychology:
Practical and theoretical implications of 85 years of
research findings. Psychological Bulletin, 124, 262–274. doi:10.1037/0033-2909.124.2.262
Schneider, B. (1990). The climate for service: An application of the climate construct. In B. Schneider (Ed.),