Advanced Psychological Processes Full Note

The document outlines various theories of intelligence, including Vernon's Hierarchical Theory, Bruner's cultural perspective, and Jensen's heritability focus. It discusses contemporary models like the PASS model, CHC theory, and the Parieto-Frontal Integration Theory, emphasizing the multifaceted nature of intelligence and the interplay of cognitive processes. Additionally, it covers implicit theories of intelligence, Gardner's Theory of Multiple Intelligences, and Sternberg's Theory of Successful Intelligence, highlighting the ongoing debates and criticisms surrounding these concepts.

Uploaded by

meenulvinod

UNIT 1

INTELLIGENCE
THEORIES OF INTELLIGENCE

P. E. VERNON: HIERARCHICAL THEORY OF INTELLIGENCE

Vernon's Hierarchical Theory of Intelligence, proposed in 1950, combines Spearman's g-factor theory
with multifactor theories such as Thurstone's and Guilford's. It depicts intelligence as a
multi-level pyramid:

 Top Level: Spearman's g-factor, which stands for general intelligence and influences all
intellectual endeavors.
 Second Level: Major group factors, comparable to Thurstone's primary mental abilities:
 Verbal-Educational (V:Ed), which combines verbal, numerical, and educational skills.
 Practical-Mechanical (K:M), which encompasses mechanical, practical, spatial, and
physical/manual skills.
 Third Level: Minor group factors that, as in Guilford's model, further break V:Ed and K:M
down into more specialized skills.
 Bottom Level: Spearman's s-factors, specific skills associated with particular tasks.

This hierarchical model integrates general, group, and specific intelligences into a structured
framework, resembling a genealogical tree. It recognizes both general and specific cognitive
abilities within a single structure.
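
Purely as an illustration, Vernon's four levels can be sketched as a nested data structure; the factor and task names below the second level are invented examples, not Vernon's own list:

```python
# Illustrative sketch of Vernon's hierarchy as a nested dict.
# dicts are branching levels; lists hold task-specific s-factors.
vernon_hierarchy = {
    "g": {                                   # top level: general intelligence
        "V:Ed": {                            # second level: verbal-educational
            "verbal": ["vocabulary task", "reading task"],    # hypothetical s-factors
            "numerical": ["arithmetic task"],
        },
        "K:M": {                             # second level: practical-mechanical
            "spatial": ["block design task"],
            "mechanical": ["assembly task"],
        },
    }
}

def depth(node):
    """Count the number of levels beneath a node (dict = branch, list = leaves)."""
    if isinstance(node, list):
        return 1
    return 1 + max(depth(child) for child in node.values())

print(depth(vernon_hierarchy))  # 4: g -> group factors -> minor group factors -> s-factors
```

The recursive `depth` check simply confirms that the sketch has the four levels the theory describes.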

JEROME BRUNER

Jerome Bruner was a prominent cognitive psychologist who proposed a theory of intelligence that
emphasized the role of culture and experience in shaping human cognition. According to Bruner,
intelligence is not a fixed entity or a set of inherent abilities, but rather a dynamic process that is
shaped by the interactions between individuals and their environment.

EARL HUNT

Hunt identified three classes of cognitive performance as central to intellectual functioning:

1. The person's choice about how to internally (mentally) represent a problem

2. His or her strategy for manipulating mental representations


3. The abilities necessary to execute whatever basic information processing steps a strategy requires.

He studied individual differences in the way problems are represented, the way material is
encoded, the way information is transferred within working memory, and other aspects of
information processing.

ARTHUR ROBERT JENSEN

American psychologist Arthur R. Jensen (1923–2012) is well-known for his contentious studies on
intelligence, especially those that deal with its heritability and racial group differences. With an
estimated 60% to 80% heritability, he maintained that intelligence is heavily influenced by genetics.
Following a 1969 article in the Harvard Educational Review, he put forth a two-level theory of
intelligence: Level I (associative learning—memory, attention, rote learning) and Level II (cognitive
learning—abstract reasoning, problem-solving, symbolic thought). This work garnered a lot of
attention. Jensen asserted that while Level II skills, which are more important for academic success,
are more common among middle-class white and Asian populations, Level I skills are equally
distributed across racial and social groups. While acknowledging the possibility of cultural bias in
intelligence testing, he maintained that bias alone could not account for the group differences he
reported.

CONTEMPORARY THEORIES OF INTELLIGENCE

We acknowledge that there are numerous ways to organize the following information (cf. Davidson
& Kemp, 2011; Esping & Plucker, 2008; Gardner, Kornhaber, & Wake, 1996; Sternberg, 1990). The
discussion of the following theories is roughly chronological, although somewhat arbitrary, and the
reader should not infer a priority based on the order in which the material is presented.

PASS MODEL

Luria’s neuropsychological model (1966, 1970, 1973) outlines intelligence in terms of three
functional units or "Blocks" in the brain. Block 1 is responsible for attention—maintaining focus and
alertness. Block 2 handles how information is processed, using simultaneous processing
(understanding information as a whole, like viewing a painting) and successive processing (analyzing
parts step by step, in sequence). Block 3 is involved in planning, decision-making, and regulation of
behavior. This model laid the theoretical foundation for the Kaufman Assessment Battery for
Children (K-ABC), which emphasized how children solve problems (sequential vs. simultaneous
processing) rather than what content they solve (verbal vs. non-verbal). Expanding on Luria’s model,
the PASS theory (Planning, Attention, Simultaneous, and Successive) was developed by Das, Naglieri,
and Kirby (1994), incorporating all three functional blocks. This theory became the basis for the
Cognitive Assessment System (CAS) developed by Naglieri and Das (1997), offering a more process-
oriented approach to understanding cognitive abilities.

CHC THEORY (CATTELL-HORN-CARROLL)

The Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted and applied model
in modern IQ testing. It combines Cattell and Horn’s Gf-Gc theory with Carroll’s Three-Stratum
Theory, both of which originated from Spearman’s g-factor concept. Cattell distinguished between
fluid intelligence (Gf)—the ability to solve novel problems, and crystallized intelligence (Gc)—
knowledge gained through learning and experience. Horn later expanded the model to include
additional Broad Abilities, such as visual processing (Gv), short-term memory (Gsm), long-term
retrieval (Glr), and processing speed (Gs), among others, treating them as separate but equal factors
rather than hierarchical.

Carroll, through extensive factor analysis, proposed a hierarchical model with three levels:
 Stratum III: General intelligence (g)
 Stratum II: Broad abilities (e.g., Gf, Gc, Gv, Gs)
 Stratum I: Narrow, specific abilities (around 70 total)

The modern CHC model merges these theories into two key levels—Broad (Stratum II) and Narrow
(Stratum I) abilities—dropping an explicit general g-factor. It proposes 10 broad cognitive abilities,
but only 7 are typically measured by IQ tests (Gf, Gc, Gv, Gsm, Glr, Ga, Gs), while quantitative
knowledge (Gq) and reading/writing ability (Grw) are assessed through achievement tests, and
decision/reaction speed (Gt) is generally not tested.

The CHC theory has shaped recent major intelligence tests like the Stanford-Binet-5, KABC-II, and
Woodcock-Johnson III, shifting the focus from just a few part scores to a more nuanced view of
multiple cognitive processes. Despite its widespread acceptance, debate continues over the
importance of general intelligence (g) versus multiple intelligences in understanding cognitive
abilities.

MULTIPLE COGNITIVE MECHANISMS APPROACH

Recent research suggests that general intelligence (g) is not a single unified cognitive mechanism but
rather emerges from the interaction of multiple underlying cognitive processes that become
interconnected through development. The three most widely studied mechanisms contributing to g
are working memory, processing speed, and explicit associative learning.

Working memory refers to the ability to hold, update, and manipulate information while resisting
distractions. Individuals with strong working memory are better at maintaining task goals and
controlling attention in the face of interference. Numerous studies have shown a strong correlation
between working memory and g, with neurological evidence pointing to overlapping brain activation
patterns in the lateral prefrontal cortex and parietal regions.

Processing speed is the rate at which basic cognitive tasks are performed. People with higher
intelligence tend to process information faster, as shown in tasks involving reaction time and
inspection time. Processing speed is considered a key component of g in both the Horn-Cattell
theory (as “Gs”) and Carroll’s three-stratum model (as “general speediness”).

Explicit associative learning involves the deliberate formation and recall of connections between
stimuli. While early studies found weak links between associative learning and intelligence, newer
research using more complex learning tasks has revealed stronger correlations, even after
accounting for working memory and processing speed.

Together, these findings indicate that g may reflect a network of interacting cognitive processes,
rather than a single unitary ability. This perspective is shaping a more dynamic and multifaceted
understanding of intelligence.

PARIETO-FRONTAL INTEGRATION THEORY

The Parieto-Frontal Integration Theory (P-FIT), proposed by Jung and Haier (2007), suggests that
intelligence arises from a distributed network of brain regions, with key roles played by the parietal
and frontal lobes. After reviewing 37 neuroimaging studies, they found consistent associations
between intelligence and activity in these areas, although supporting regions span the entire brain.
P-FIT outlines four stages of information processing:

 Sensory Input: Temporal and occipital lobes handle visual and auditory input.
 Integration: Parietal cortex processes and integrates sensory information.
 Problem Solving: Frontal lobes engage in reasoning, evaluation, and hypothesis testing, in
coordination with parietal regions.
 Response Selection: The anterior cingulate inhibits incorrect or competing responses.

The white matter pathways, especially the arcuate fasciculus, are crucial for efficiently transferring
information between regions, supporting overall cognitive performance. A core idea of P-FIT is that
different individuals may activate different combinations of these regions to achieve similar levels of
intelligence, accounting for variability in cognitive strengths and weaknesses. While the theory has
been well-received, critics have called for more research using larger samples and diverse
intelligence measures. Follow-up studies (e.g., Colom, Schmithorst) have explored P-FIT in
developmental contexts and in relation to network efficiency, offering further support and
refinement. A 2009 special issue of Intelligence compiled 11 new studies extending the theory’s
reach.

MINIMAL COGNITIVE ARCHITECTURE THEORY

Michael Anderson’s (1992, 2005) theory of Minimal Cognitive Architecture offers a developmental
model that integrates both general and specific cognitive abilities, drawing on Fodor’s (1983)
distinction between central cognitive processes and modular input systems. Anderson proposes two
distinct routes for acquiring knowledge:

Route 1 involves thoughtful problem solving and is constrained by processing speed, which Anderson
identifies as the core of general intelligence (g). This route includes two independent processors:
verbal and spatial, which are normally distributed and uncorrelated. Differences in individual
intelligence, according to Anderson, stem from variations in this central processing route.

Route 2 operates through modular, domain-specific systems such as syntactic parsing, phonological
encoding, 3D perception, and theory of mind. These modules function automatically and
independently of Route 1, and are not limited by central processing speed. Though innate, such
modules can also be acquired through extensive practice and evolve over time, contributing to
cognitive development.

Anderson's model attempts to bridge general intelligence theories with Gardner’s Multiple
Intelligences, acknowledging both domain-general mechanisms (processing speed) and domain-
specific capabilities (modules). It also explains how individuals with low IQ may still excel in specific
areas and how conditions like dyslexia or autism can arise alongside average or high intelligence.

However, S.B. Kaufman (2011) criticized the model for its over-reliance on processing speed as the
sole central mechanism and its limited scope regarding Route 2. Kaufman argues that Anderson’s
model dismisses meaningful individual differences in modular processing and excludes other
domain-general learning mechanisms (like implicit learning or latent inhibition), thus narrowing the
investigation of cognitive processes involved in intelligence.

DUAL-PROCESS THEORY

The Dual-Process (DP) Theory of Human Intelligence (Davidson & Kemp, 2011; S. B. Kaufman, 2009,
2011, 2013) proposes that intelligent behavior arises from the dynamic interaction between two
types of cognitive processes: goal-directed (controlled) cognition and spontaneous (automatic)
cognition. Controlled cognition involves deliberate processes such as metacognition, self-regulation,
working memory, and planning, which are essential for tasks requiring abstract reasoning and
attentional control. In contrast, spontaneous cognition includes mind-wandering, intuition,
daydreaming, and implicit learning, contributing significantly to creativity, insight, and adaptive
functioning through unconscious and effortless mechanisms. The theory asserts that both systems
are vital to intelligence, individuals differ in their strengths across them, and no one mode is superior
—adaptiveness lies in the capacity to shift between them based on context. It also emphasizes that
intelligence is not fixed but evolves over time through passion and engagement, and that people
may reach similar intellectual outcomes via different cognitive pathways. Although early research
(e.g., Kaufman et al., 2010) indicates that implicit learning predicts intelligent behavior
independently of general intelligence (g), further empirical validation of the theory is needed.

IMPLICIT THEORIES OF INTELLIGENCE

Entity Vs. Incremental Theory of Intelligence

Students' implicit beliefs about intelligence structure their inferences, judgments, and reactions to
different actions and outcomes. In social and developmental psychology, an individual's implicit
theory of intelligence, a construct developed by Carol Dweck and colleagues, refers to his or her
fundamental underlying beliefs about whether intelligence or abilities can change.

Carol Dweck identified two different mindsets regarding intelligence beliefs. They are,

1. Entity Theory

2. Incremental Theory

According to the Entity Theory, intelligence is a personal quality that is fixed and cannot be changed.
For entity theorists, if perceived ability to perform a task is high, the perceived possibility for
mastery is also high. In turn, if perceived ability is low, there is little perceived possibility of mastery,
often regarded as an outlook of "learned helplessness" (Park & Kim, 2015).

Entity Theorists

1. believe that even if people can learn new things their intelligence stays the same.

2. will likely blame their intelligence and abilities for achievement failures.

According to the Incremental Theory, on the other hand, intelligence is not fixed and can be
improved through enough effort.

Incremental Theorists

1. will likely blame lack of effort and/or poor strategy use, factors that can be changed to mediate
negative outcomes.

2. will likely act to improve the situation through greater effort.

Holding either of these theories has important consequences for people. Studies have shown that
entity theorists of intelligence react helplessly in negative outcomes. "That is, they are not only more
likely to make negative judgments about their intelligence from failures, but also more likely to show
negative affect and debilitation. In contrast, incremental theorists, who focus more on behavioral
factors (e.g., effort, problem-solving strategies) as causes of negative achievement outcomes, tend
to act on these mediators (e.g., to try harder, develop better strategies) and to continue to work
towards mastery of the task" (Dweck, Chiu, Hong, 1995, p. 268).

In their studies, Dweck, Bandura, and Leggett assessed students' theories of intelligence and found
out that students who were holding an entity theory of intelligence chose the performance goals
tasks more than those holding an incremental theory of intelligence when they were given options
to choose between the tasks that represented performance goals and learning goals (cited in Dweck,
Chiu, & Hong, 1995, p.274).
CRITICAL ANALYSIS OF MULTIPLE INTELLIGENCE THEORY AND THE THEORY OF EMOTIONAL
INTELLIGENCE

THEORY OF MULTIPLE INTELLIGENCES

Howard Gardner’s Theory of Multiple Intelligences (MI Theory), first introduced in Frames of Mind
(1983) and expanded in later works (e.g., Gardner, 2006), challenges traditional views of intelligence
by emphasizing a broader, culturally grounded definition. Gardner defines intelligence as “an ability
or set of abilities that permit an individual to solve problems or fashion products that are of
consequence in a particular cultural setting” (Ramos-Ford & Gardner, 1997), and proposes eight
distinct intelligences: linguistic, logical-mathematical, spatial, bodily-kinaesthetic, musical,
interpersonal, intrapersonal, and naturalistic. He has also explored the potential for additional
intelligences, such as spiritual and existential. Rather than relying on factor analysis, Gardner
grounded his theory in eight criteria, including brain localization, the presence of prodigies or
savants, core operations, distinct developmental paths, evolutionary roots, support from
experimental tasks and psychometric findings, and the capacity for symbolic representation. He
critiques the traditional educational focus on linguistic and logical-mathematical abilities, arguing
that this narrow emphasis marginalizes other forms of intelligence, a concern still relevant today
given the continued prioritization of standardized testing in those domains. Despite its popularity in
education, MI Theory has faced wide-ranging criticisms—philosophical, empirical, conceptual, and
cognitive. For example, Lohman (2001) contends that the theory overlooks general inductive
reasoning ability and the role of working memory, both central to fluid intelligence (gF). Additionally,
although assessment tools for MI have been developed (e.g., Gardner et al., 1998), their
psychometric validity and reliability remain under question (Plucker, 2000; Visser et al., 2006).
Nonetheless, Gardner has consistently defended his theory, asserting that many criticisms stem from
misinterpretations or misapplications of the theory in educational settings, which he argues are not
definitive evidence against its conceptual validity.

THEORY OF SUCCESSFUL INTELLIGENCE

Sternberg’s Theory of Successful Intelligence (1997) proposes that success in life results from a
balanced use of analytical, creative, and practical abilities. Analytical intelligence involves problem-
solving and evaluating ideas—abilities typically measured by conventional intelligence tests. Creative
intelligence enables individuals to generate novel ideas and formulate effective solutions, while
practical intelligence allows for the application of these ideas in real-life situations. The second major
tenet of the theory emphasizes that intelligence should be understood in relation to achieving
personal goals within one’s sociocultural context, rather than solely academic success. Third,
Sternberg posits that success depends on an individual's ability to leverage their strengths while
addressing or compensating for their weaknesses. The fourth element underscores the importance
of using intelligence to adapt to, shape, or select environments—highlighting a dynamic interaction
between the person and their surroundings. Sternberg and colleagues have demonstrated the
effectiveness of educational interventions aimed at enhancing all three intelligences, and have
shown that creative and practical intelligence predict meaningful real-world outcomes, including
academic measures like SAT scores and GPA, even beyond what analytical intelligence predicts.
However, questions remain about whether these three abilities are distinct constructs or simply
different expressions of a general intelligence factor (g), as noted by critics such as Brody (2004) and
Gottfredson (2003), leaving open the debate over the theory’s structure and empirical
distinctiveness.

Emotional Intelligence

Theories of Emotional Intelligence (EI) are grounded in the idea that people vary in how well they
can perceive, understand, use, and manage emotions to support thinking and behavior (Salovey &
Mayer, 1990). Over time, various models have emerged, including "mixed models" that blend
personality traits and emotional competencies (e.g., Bar-On, 1997; Goleman, 1998; Petrides &
Furnham, 2003). This conceptual broadness has drawn criticism for reducing EI’s scientific clarity and
precision (Eysenck, 2000; Locke, 2005). In response, Mayer, Salovey, and Caruso (2008) proposed a
more focused, ability-based four-branch model of EI, which includes: (a) perceiving emotions
accurately, (b) using emotions to enhance cognition, (c) understanding emotional meanings and
patterns, and (d) managing emotions to achieve goals. These abilities are measured through the
Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), which consists of performance tasks
scored based on expert consensus. Research shows that EI, as measured by the MSCEIT, correlates
moderately with verbal intelligence and personality traits like Openness and Agreeableness, and
predicts outcomes related to interpersonal functioning, mental health, and behavior, even after
controlling for g and personality. However, criticisms persist—Brody (2004) argues that the MSCEIT
assesses emotional knowledge rather than its effective application, and its validity as a measure of
emotional ability remains debated. Some studies find only weak associations between EI and
cognitive abilities, while others suggest that EI may not consistently predict outcomes beyond what
is already explained by general intelligence and the Big Five personality traits. As with Gardner’s and
Sternberg’s models, the incremental validity of EI as a distinct and meaningful construct is still an
open empirical question.

MENTAL CHRONOMETRY - the study of the time course of mental processes, using response speed as its
core measure

Mental chronometry is the scientific study of the timing of mental processes, focusing on how
quickly individuals can perform cognitive tasks, and plays a key role in cognitive psychology,
neuroscience, and intelligence research. It includes measures such as reaction time (RT), choice
reaction time (CRT), inspection time (IT), and processing speed, which together offer insights into the
efficiency of information processing. Originating with Franciscus Donders in the 1860s, who
introduced subtractive logic to isolate mental operations, mental chronometry has since been used
to explore the link between processing speed and intelligence (g), with researchers like Arthur
Jensen emphasizing inspection time as a key indicator. Although higher intelligence is often
associated with faster processing, these correlations are generally modest, and critiques note
limitations such as low reliability in RT measures and the oversimplification of intelligence to speed
alone. Today, mental chronometry is used in areas ranging from cognitive neuroscience and clinical
assessment (e.g., ADHD, dementia) to human-computer interaction, offering a useful—though
incomplete—window into the workings of the mind.
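
Donders' subtractive logic, mentioned above, can be illustrated with a short sketch: the mean time for a simple detection task is subtracted from the mean time for a choice task to estimate the duration of the added mental stages. All reaction times below are invented for illustration.

```python
# Donders' subtractive method: the extra mental operations in a choice task
# are estimated as (mean choice RT) - (mean simple RT). Times are made up.
simple_rt = [212, 198, 205, 220, 201]   # detect a single stimulus (ms)
choice_rt = [398, 410, 385, 402, 391]   # detect + discriminate + choose (ms)

def mean(xs):
    return sum(xs) / len(xs)

stage_estimate = mean(choice_rt) - mean(simple_rt)  # time attributed to decision stages
print(f"{stage_estimate:.1f} ms")  # prints "190.0 ms"
```

The logic assumes the added stages are purely inserted without changing the others, which is one of the classic criticisms of the method.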

UNIT 2
LEARNING
HABITUATION

Habituation is a fundamental psychological phenomenon in which an organism’s response to a
repeated, non-threatening stimulus gradually decreases over time. Recognized as a basic form of
learning, it occurs across a wide range of species and serves as an adaptive mechanism that allows
organisms to conserve energy and attention by filtering out stimuli that are familiar and deemed
irrelevant. For example, individuals living near a noisy road may initially find the sound distracting,
but over time, they become less responsive to it. This desensitization not only improves energy
efficiency but also helps organisms focus on novel or significant changes in the environment, thereby
enhancing survival. Habituation also plays a role in emotional regulation by reducing stress and
anxiety associated with repeated exposure to benign stimuli, leading to decreased physiological
arousal and emotional distress. Moreover, it supports cognitive efficiency by enabling organisms to
prioritize relevant information. However, habituation can have drawbacks; it may result in
overgeneralization, where similar but important stimuli are also ignored, potentially impairing
discrimination and attentional accuracy. In some cases, important signals may be overlooked if they
resemble previously habituated ones. Rehabituation, on the other hand, refers to the process of re-
adapting to a stimulus after a period of change or absence, highlighting the dynamic nature of this
learning mechanism.

SENSITIZATION

Sensitization is a psychological process that represents the opposite of habituation. It involves an
increased responsiveness to a stimulus following repeated exposure, particularly when the
stimulus is intense, painful, or threatening. For example, a person exposed to a sudden loud noise or
an electric shock may become more reactive to even mild versions of that stimulus in the future.
Sensitization serves an important adaptive function, heightening alertness and reactivity to potential
threats in the environment, thus enhancing an organism's chances of survival. It can amplify
responses to a specific stimulus, making organisms more vigilant and better prepared to respond to
danger. Additionally, sensitization contributes to associative learning and memory, as it strengthens
the connection between a stimulus and its aversive or rewarding outcomes. However, while
sensitization can be protective, it may also lead to exaggerated or maladaptive responses. For
instance, organisms might become overly reactive not just to the original stimulus, but also to similar
ones—a process known as stimulus generalization—potentially resulting in heightened anxiety, fear,
or stress. In extreme cases, this heightened sensitivity can cause sensory overload or
hypersensitivity, impairing the organism’s ability to process and distinguish between relevant and
irrelevant stimuli, thereby interfering with normal perception, emotional regulation, and behavior.

PROCESS OF HABITUATION AND SENSITIZATION

HABITUATION

1. Initial Response: When a novel stimulus is presented to an organism, it typically elicits a response.
This response can vary depending on the nature of the stimulus and the organism's sensitivity to it.
For example, a loud noise might startle an animal, or a new smell might cause it to investigate its
surroundings.

2. Repetition: Habituation occurs through repeated exposure to the stimulus. As the organism
encounters the stimulus multiple times, it gradually becomes less responsive to it. This reduction in
response is a result of the organism's nervous system adjusting its sensitivity to the stimulus.

3. Neural Mechanisms: Habituation involves changes in neural processing within the organism's
nervous system. Initially, when the stimulus is presented, there is a strong neural response.
However, with repeated exposure, the neurons involved in processing the stimulus become less
excitable. This can occur through various mechanisms, such as decreased neurotransmitter release
or changes in synaptic strength.

4. Selective Attention: Habituation is often associated with selective attention. As the organism
becomes habituated to a stimulus, it allocates fewer cognitive resources to processing it. This allows
the organism to focus its attention on more relevant or novel stimuli in its environment.

5. Generalization and Discrimination: Habituation can also involve processes of generalization and
discrimination. Generalization occurs when the organism becomes habituated not only to the
original stimulus but also to similar stimuli. Discrimination, on the other hand, involves the organism
learning to differentiate between the habituated stimulus and new, potentially relevant stimuli.

6. Spontaneous Recovery and Dishabituation: While habituation typically leads to a decrease in
responsiveness to a stimulus, this effect may not be permanent. In some cases, if the stimulus is not
presented for a period of time, the organism's responsiveness may recover spontaneously.
Additionally, if the stimulus changes in some way, such as becoming more intense or novel, the
organism may experience dishabituation, where its responsiveness to the stimulus increases again.
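
The response decrement in steps 1-3 above can be illustrated as a simple exponential decrease across presentations. This is a toy sketch, not a published model; the rates are arbitrary.

```python
# Toy habituation curve: each repeated presentation weakens the next response
# by a fixed proportion. Decay rate and trial count are illustration values.
def habituate(initial=1.0, decay=0.7, trials=5):
    """Return the response strength on each successive presentation."""
    responses = []
    r = initial
    for _ in range(trials):
        responses.append(r)
        r *= decay  # exposure reduces responsiveness to the next presentation
    return responses

responses = habituate()
print([round(r, 2) for r in responses])  # steadily decreasing, e.g. [1.0, 0.7, 0.49, ...]
```

Spontaneous recovery (step 6) would correspond to partially resetting `r` toward `initial` after a rest period.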

SENSITIZATION

1. Initial Exposure: Sensitization begins with the organism's initial exposure to a stimulus, which may
be novel or aversive. This exposure triggers a physiological or behavioral response, which can range
from mild to intense depending on the nature of the stimulus.

2. Arousal of Neural Circuits: The stimulus activates neural circuits in the brain associated with the
perception and processing of sensory information. This arousal may involve the release of
neurotransmitters such as serotonin, norepinephrine, or glutamate, which play key roles in
modulating neuronal activity and synaptic transmission.

3. Enhanced Responsiveness: Following the initial exposure, the organism's sensitivity or
responsiveness to the stimulus is heightened. This enhanced responsiveness is characterized by an
amplification of the physiological, behavioral, or emotional response elicited by the stimulus.

4. Neuroplastic Changes: Sensitization is accompanied by neuroplastic changes in the brain, which
involve alterations in the strength and connectivity of synaptic connections between neurons. These
changes may include long-term potentiation (LTP), a process by which synaptic transmission is
strengthened, leading to enhanced neuronal excitability and responsiveness.

5. Associative Learning: Sensitization often involves associative learning, wherein the stimulus
becomes associated with aversive or rewarding consequences. This association strengthens the
organism's response to the stimulus, as it anticipates the potential outcomes associated with it.

6. Generalization: Sensitization may generalize to stimuli that are similar to the sensitized one,
resulting in an amplified response to a broader range of stimuli. This generalization can occur across
sensory modalities or environmental contexts, leading to heightened sensitivity in various situations.

7. Maintenance and Reversal: The sensitization response may be maintained over time through
continued exposure to the stimulus or reinforcement of the associative learning. Alternatively,
sensitization may gradually diminish or be reversed through processes such as habituation or
extinction, wherein the stimulus loses its potency or the associative link weakens over time.

CHARACTERISTICS OF HABITUATION AND SENSITIZATION

 Habituation or sensitization does not always occur with repeated experience.


 A number of variables affect the occurrence of habituation or sensitization.
 Stimulus intensity determines the rate of habituation and sensitization. With habituation,
the weaker the stimulus, the more rapid habituation occurs to that stimulus. In fact,
habituation to very intense stimuli may not occur. The opposite is true with sensitization;
that is, a more intense stimulus produces stronger sensitization than does a weaker one.
 Habituation increases with more frequent stimulus presentations, although the amount of
habituation becomes progressively smaller over the course of habituation. Similarly, greater
sensitization occurs when a strong stimulus is experienced more frequently.
 Habituation to a stimulus appears to depend upon the specific characteristics of the
stimulus. A change in any characteristic of the stimulus results in an absence of habituation.
Unlike habituation, a change in the properties of the stimulus typically does not affect
sensitization.
 Habituation and sensitization can be relatively transient phenomena. When a delay
intervenes between stimulus presentations, habituation weakens. Time also affects
sensitization. Sensitization is lost shortly after the sensitizing event ends.
 Unlike the long-term habituation effect, sensitization is always a temporary effect.
 Habituation may lead to decreased reward effectiveness, and sensitization may lead to increased reward effectiveness.

NATURE OF HABITUATION AND SENSITIZATION

DUAL PROCESS THEORY

Dual-process theory (Groves & Thompson, 1970) holds that habituation and sensitization are produced by two independent processes: habituation develops in the specific stimulus-response pathway, whereas sensitization reflects a change in the organism's general state of arousal, and the observed response is the net outcome of the two. Several observations support a separate arousal (state) system:

 Drugs that stimulate the central nervous system increase an animal’s overall readiness to respond, while depressant drugs suppress reactivity.

 Emotional distress can also affect responsiveness: Anxiety increases reactivity; depression
decreases responsiveness.

 Repeated presentations of unexpected stimuli can lead to either a decreased or increased intensity of the startle reaction.

EVOLUTIONARY THEORY

Habituation and sensitization are two fundamental forms of non-associative learning observed
across a wide range of species. Evolutionary theory helps us understand why these processes occur
and how they may have developed to enhance survival and reproductive success.

Habituation is a decrease in response to a repeated, benign stimulus. From an evolutionary perspective, habituation can be advantageous because:

1. Energy Conservation: It allows organisms to conserve energy and attention for more critical
stimuli. By ignoring irrelevant or non-threatening stimuli, organisms can allocate resources more
efficiently.

2. Improved Efficiency: It prevents sensory overload, allowing animals to focus on novel or significant
changes in their environment that might indicate danger, food, or mating opportunities.

3. Adaptation to Stable Environments: In stable environments, where certain stimuli are consistently non-threatening, habituation ensures that organisms do not waste resources on unnecessary responses.

Sensitization is an increased response to a repeated or intense stimulus, particularly if the stimulus is perceived as threatening. Evolutionarily, sensitization is advantageous because:

1. Enhanced Survival: It heightens responsiveness to potential threats, increasing the likelihood of avoiding harm.

2. Rapid Response to Danger: Sensitization ensures that organisms react quickly to dangerous
stimuli, which is crucial for survival in environments where threats are frequent.

3. Learning and Memory: It may aid in learning and memory by reinforcing the importance of certain
stimuli, ensuring that organisms remember and avoid harmful situations in the future.

INGESTIONAL NEOPHOBIA

Ingestional neophobia is the reluctance or avoidance of trying new foods. This behaviour is common
in many animals, including humans, and serves as an evolutionary protective mechanism to prevent
the ingestion of potentially harmful or toxic substances. In young children, it often manifests as a
preference for familiar foods and a resistance to eating unfamiliar ones. Over time, with repeated
exposure and positive experiences, neophobia can decrease, leading to a more varied diet.

Habituation

Domjan's (1976) study documents the habituation of ingestional neophobia. Rats received either a
2% saccharin and water solution or just water. The rats drank very little saccharin solution when first
exposed to this novel flavor. However, intake of saccharin increased with each subsequent
experience. These results indicate that the habituation of the neophobic response led to the
increasing consumption of saccharin.

Sensitization

Animals can also show an increased neophobic response. Suppose that an animal is sick. Under this illness condition, the animal will exhibit increased ingestional neophobia; the sensitization process causes the greater neophobic response.

SATIATION, DEPRIVATION, HOMEOSTASIS

Satiation: As you eat a particular food, you experience a decrease in the pleasure and desire to
continue eating that food. This reduction in enjoyment helps signal when to stop eating, promoting
energy balance and homeostasis. Repeated consumption of the same food accelerates satiation,
reducing overall intake and helping to prevent overeating.

Deprivation: When you are deprived of certain foods, your body may increase cravings and the
desire for those foods once they become available. This can lead to sensitization, where the
response to the reintroduced food is heightened, making you more likely to overconsume. The
increased response to a food after a period of deprivation can lead to binge eating or
overconsumption, disrupting homeostasis by causing an intake of calories that exceeds the body's
immediate needs.

Homeostasis and Regulation: The body's homeostatic mechanisms aim to balance energy intake and
expenditure. Habituation helps in maintaining this balance by promoting reduced intake over time,
while sensitization (often following deprivation) can lead to periods of excessive intake that
challenge homeostatic regulation

DISHABITUATION

Dishabituation refers to the restoration or recovery of a response that had previously been reduced
or eliminated through habituation. It occurs when a novel or strong stimulus is introduced, causing
the organism to once again respond to the habituated stimulus.

Adaptive Flexibility: Dishabituation allows organisms to remain adaptable and responsive to their
environments. By resetting the response to a previously habituated stimulus when a new or
significant event occurs, organisms can ensure that they are not overlooking potentially important
changes in their surroundings.

Enhanced Sensory Processing: It helps in recalibrating sensory processing systems, ensuring that
important stimuli can be re-evaluated in light of new information. This is particularly useful in
dynamic environments where the significance of stimuli can change rapidly.

Survival and Threat Detection: If a previously ignored stimulus suddenly becomes relevant due to a
change in context (e.g., a predator approaching), dishabituation ensures that the organism can
quickly recognize and respond to the potential threat.

Learning and Memory: Dishabituation is a form of non-associative learning that contributes to the
overall learning process. It allows organisms to update their understanding of their environment,
enhancing their ability to learn from and adapt to new situations

APLYSIA CALIFORNICA

In the simple marine mollusk Aplysia, which lacks a shell, a well-studied defensive withdrawal
response is triggered when one of its three external organs—the gill, mantle, or siphon—is touched.
This reflexive behavior, common across many animal species, can be modulated by experience
through two key learning processes: habituation and sensitization. When a weak tactile stimulus is
repeatedly applied to the siphon, Aplysia’s withdrawal response gradually weakens—an example of
habituation, where repeated exposure to a benign stimulus leads to decreased responsiveness. In
contrast, if the tail is shocked before the siphon is touched, the mollusk exhibits a stronger-than-
usual reaction, demonstrating sensitization, a heightened response to a stimulus following a noxious
or intense event. At the cellular level, habituation is linked to a progressive decrease in neurotransmitter release from the sensory neurons onto their target neurons (synaptic depression), resulting in reduced synaptic transmission. Sensitization, on the other hand,
involves the release of neuromodulators like serotonin or dopamine, which enhance synaptic
strength by increasing neurotransmitter release and neuronal excitability. These physiological
changes support the cellular modification theory, which posits that learning produces lasting
changes in neural systems—either by strengthening existing neural circuits or forming new neural
connections—thus providing a biological basis for memory and experience-dependent behavioral
adaptation.

UNDERSTANDING OUR EMOTIONAL ROLLERCOASTER: SOLOMON’S OPPONENT-PROCESS THEORY

Richard Solomon’s Opponent-Process Theory, developed in the 1970s, offers a framework for
understanding the dynamics of emotions and motivation by proposing that emotional experiences
operate in opposing pairs. According to the theory, every emotional response (the A-Process)
triggers an opposing reaction (the B-Process) that works to restore emotional balance. The A-Process
arises quickly and peaks early, while the B-Process emerges more slowly and can eventually override
the initial emotion before both diminish back to baseline. This model is particularly useful in
explaining addiction, where the initial pleasurable effects of a drug (A-Process) fade with repeated
use, while withdrawal symptoms (B-Process) intensify, prompting continued drug use to avoid
discomfort. The theory also applies to other behaviours, such as thrill-seeking, where an initial fear
response is followed by exhilaration and relief, reinforcing the behavior. Overall, Solomon’s theory
reveals how the interplay of opposing emotions shapes complex human motivations.
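
As a rough illustration, the A-Process/B-Process dynamics can be sketched as a toy simulation in which a fast process tracks the stimulus and a slower opposing process tracks the first; the experienced emotion is their difference. Every name and parameter below is hypothetical:

```python
# Toy sketch of Solomon's opponent-process dynamics (hypothetical parameters).
# The observed emotional state is A - B: the fast A-process rises and falls with
# the stimulus, while the slower B-process lags behind it and outlasts it.

def emotional_state(t_steps, stimulus_on):
    a, b, states = 0.0, 0.0, []
    for t in range(t_steps):
        drive = 1.0 if t < stimulus_on else 0.0
        a += 0.6 * (drive - a)   # fast A-process tracks the stimulus
        b += 0.1 * (a - b)       # slow B-process tracks the A-process
        states.append(a - b)     # net experienced emotion
    return states

states = emotional_state(40, stimulus_on=20)
peak = max(states[:20])   # early peak of the initial emotion (positive)
after = min(states[20:])  # opposite after-reaction once the stimulus ends (negative)
```

While the stimulus is present the net state is positive, since the A-Process dominates; when the stimulus ends, the slower B-Process outlasts it and drives the state below baseline, producing the opposite after-reaction Solomon used to explain withdrawal and post-thrill relief.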

VERBAL LEARNING

Verbal learning is the process of actively memorizing new material using mental pictures,
associations, and other activities. Verbal learning was first studied by Hermann Ebbinghaus, who
used lists of nonsense syllables to test recall.

METHODS USED IN RESEARCH OF VERBAL LEARNING

In the classical verbal learning experiment each subject learns a list of items. Each trial involves a
study phase and a test phase. Here two major testing procedures are used.

1. Anticipation method - the subject attempts to anticipate each item before it is shown, and study of the item follows each test; a trial consists of the sequential presentation of all items of the list for test and immediate study.

2. Study-test presentation method - on each trial all items are first shown one at a time for study and then presented again for testing.
Methods used in research of verbal learning are listed below

1. Paired-associate learning - A widely used procedure in verbal learning is paired-associate learning, in which the subject learns a list of stimulus-response pairs. The process is similar to learning the vocabulary of a foreign language, where the stimulus is the foreign word and the response is its English translation, or to learning a person's name, where the stimulus is the face and the response is the name. The first word serves as the stimulus for recall of the second member of the pair, which is traditionally called the response word.

The anticipation method and the study-test presentation method are also used in paired-associate learning. Example pairs: Overt-7, Rural-6, Sorry-1.

2.Serial learning - In serial learning each item serves both as a stimulus and as a response. Items are
presented always in the same order. When the subject sees one particular word exposed, he is to try
to guess or anticipate what the next one will be. A special signal serves as a stimulus for the recall of
the first item of the list and where each item serves as a stimulus for the recall of the next. For
example, learning the letters of the alphabet or learning to spell a word such as “flower”.

3.Free recall -Here subjects are given a list of items and are later asked to recall as many as possible.
Usually from 20 to 40 items are presented, one at a time. Recall may be oral or in writing. Subjects
are instructed to recall as many words as they can, without regard to the order in which the items were presented; the order of presentation is randomized from trial to trial.

Murdock found that the probability of recall of individual items in a list is a function of their position in the list. Items at the end of the list were recalled best (recency effect), those at the beginning of the list next best (primacy effect), and items in the middle of the list least well. These results were independent of the list size used. However, variation in terms of serial position depends upon the nature of the material and the nature of the practice (rehearsal).
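
Murdock's serial-position curve can be approximated, purely for illustration, by summing a primacy component (extra rehearsal of early items) and a recency component (late items still in the short-term store) on top of a floor recall level. The functional form and all parameters below are assumptions, not fitted data:

```python
import math

# Illustrative serial-position model: recall probability is a floor level plus
# a primacy component decaying from the start of the list and a recency
# component decaying from the end. Parameters are hypothetical.

def recall_probability(position, list_length):
    primacy = 0.35 * math.exp(-0.5 * (position - 1))
    recency = 0.55 * math.exp(-0.7 * (list_length - position))
    return min(1.0, 0.25 + primacy + recency)

curve = [recall_probability(p, 20) for p in range(1, 21)]
```

With these illustrative numbers the last items are recalled best, the first items next best, and the middle items worst, reproducing the recency effect, the smaller primacy effect, and the mid-list dip described above.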

4. Recognition learning - Here subjects are given a list of items; after the study phase, subjects are given a test sheet and are asked to circle the items they were shown before. This method involves distractor items.

5. Verbal discrimination learning - In this type of procedure, a series of verbal items is presented, usually visually, and the subjects are asked to learn which member of each pair is “correct”, i.e., the one arbitrarily selected by the experimenter as the right one. There is little evidence regarding the relation between meaningfulness (association value) and verbal discrimination learning.

MATERIALS

Learning items are usually words, numerals, line drawings or arbitrary letter combinations. Learning material is usually in the form of consonant-vowel-consonant (CVC) combinations (nonsense syllables) or consonant trigrams. Ebbinghaus objected to the use of words in verbal learning experiments because he had noted that some words are much easier to remember than others, depending upon their meaning and familiarity. Even so, nonsense syllables provided only an incomplete solution to this problem, since some syllables are more nonsensical than others. The rate of verbal learning is determined by the characteristics of the learning material:

1. Meaningfulness facilitates verbal learning.

2. With words, frequency of occurrence in the natural language is related to learning speed.

3. Similarity of items facilitates verbal learning.

4. Stimulus material that evokes mental images eases learning.

BIOLOGICAL INFLUENCES IN LEARNING

INTRODUCTION
A person’s biological character also affects other types of learning. Examples include developing a
preference for coffee, forming a lifelong attachment to one’s mother, or learning to avoid an
obnoxious neighbor.

GENERALITY OF THE LAWS OF LEARNING

Psychologists often use simplified and artificial setups—like training rats or monkeys to press a bar
for food or presenting a buzzer before feeding cats or dogs—not because these mimic real-world
scenarios, but because they serve to uncover the general laws of learning. In operant conditioning,
bar pressing is a preferred response because it is easily acquired by many species and lacks prior
associations, making the behavior more neutral and scientifically useful. As Skinner (1938) noted,
the specific form of the behavior is less important than its function in demonstrating how
reinforcement influences actions. These controlled conditions help reveal consistent rules of
learning, such as how various reinforcers affect behavior rates—principles shown to apply both in
the lab and the real world. Similarly, in classical conditioning, the choice of stimuli like buzzers and
food is arbitrary. Psychologists assume that any stimulus capable of triggering an unconditioned
response (UCR) can be paired with a wide range of neutral stimuli to form conditioned responses
(CRs), as Pavlov demonstrated. For instance, the same buzzer that was used to condition salivation
could have been used to condition fear if paired with an aversive event like shock. The key idea is
that these basic learning processes are generalizable—applicable across species, contexts, and
types of stimuli—which is why psychologists study them in such simplified forms.

A BEHAVIOR SYSTEMS APPROACH


The Behavior Systems Approach, proposed by William Timberlake and others (e.g., Garcia & Garcia
y Robertson, 1985; Hogan, 1989), offers an alternative perspective to the “general laws of learning,”
which traditionally view learning as the primary force shaping behavior. Instead, this approach posits
that animals come equipped with preexisting instinctive behavior systems—such as feeding,
mating, caregiving, defence, and social bonding—which are evolutionarily designed to meet specific
survival needs. Learning, from this perspective, does not build new behavioral structures but
modifies and fine-tunes these innate systems, enhancing the animal's ability to adapt to
environmental challenges. For example, through Pavlovian conditioning, a previously neutral
stimulus can come to trigger an instinctive motor response or activate a general motivational mode
that primes the animal to respond to related stimuli. Learning may alter how different components
of a behavior system—like perceptual or motor modules—are activated and integrated, improving
behavioral efficiency. Importantly, this approach acknowledges that learning varies between species
and behavioral systems, influenced by biological predispositions (where learning occurs more
readily or effectively) and constraints (where learning is slower or incomplete). These variations
reflect evolutionary adaptations that shape how each species interacts with its environment,
offering a more functionally grounded and species-specific understanding of learning than general,
one-size-fits-all models.

ANIMAL MISBEHAVIOR
Breland and Breland observed that in certain operant conditioning situations, animals' instinctive
food-foraging and food-handling behaviours could interfere with learned responses. They found
that when food was used as a reinforcer, the natural behaviours associated with obtaining and
handling food were sometimes elicited so strongly that they began to disrupt or replace the
operant response, a phenomenon they termed "instinctive drift." This drift occurs because the
instinctive behaviours, being consistently reinforced by food, eventually dominate the learned
behavior, leading to what they called animal misbehaviour—such as cats lingering around the food
dispenser instead of performing the trained task. Building on this, Boakes et al. (1978) argued that
such misbehaviour is better explained by Pavlovian conditioning, where environmental cues
associated with food come to elicit species-typical behaviours, rather than operant learning alone.
Further studies by Timberlake, Wahl, and King (1982) suggested a more integrative view: that both
operant and Pavlovian conditioning contribute to animal misbehaviour. They found that such
behavior arises when food is paired with natural cues that normally elicit foraging behaviours—and
when these behaviours themselves are reinforced, they can override the operant response.
Importantly, misbehaviour is not common in all operant conditioning settings; it tends to occur only
when (1) the training cues resemble natural foraging stimuli, and (2) the instinctive behaviours are
also reinforced, allowing them to become dominant.

SCHEDULE-INDUCED BEHAVIOR

B. F. Skinner (1948) described an interesting pattern of stereotyped behavior that pigeons exhibited when reinforced for key pecking on a fixed-interval (FI) schedule, which he referred to as superstitious behavior. Why do animals exhibit superstitious behavior? Staddon and Simmelhag (1971) identified two types of behavior produced when reinforcement (e.g., food) occurs on a regular basis:

(1) terminal behavior

(2) interim behavior

Terminal behavior occurs during the last few seconds of the interval between reinforcement
presentations, and it is reinforcement oriented.

Interim behavior, in contrast, is not reinforcement oriented

According to Staddon and Simmelhag (1971), terminal behavior occurs in stimulus situations that are
highly predictive of the occurrence of reinforcement; that is, terminal behavior is typically emitted
just prior to reinforcement on an FI schedule. In contrast, interim behavior occurs during stimulus
conditions that have a low probability of the occurrence of reinforcement; that is, interim behavior is
observed most frequently in the period following reinforcement. When FI schedules of
reinforcement elicit high levels of interim behavior, we refer to it as schedule-induced behavior.

Schedule-Induced Polydipsia
Schedule-Induced Polydipsia refers to the excessive drinking behavior observed when animals are
reinforced with food on interval schedules. John Falk (1961) first discovered SIP, where rats would
drink large amounts of water between food deliveries. It is considered a form of interim behavior
and has been replicated across many species and reinforcement schedules.

 Falk (1966) and Jacquet (1972) demonstrated SIP across various interval and compound
schedules.

 Pellon et al. (2011) observed individual differences in SIP: some animals (high drinkers)
drank significantly more than others and had greater dopamine activity, suggesting
biological variability in responsiveness to reinforcement.

Other Schedule-Induced Behaviors

Beyond polydipsia, other schedule-induced behaviours have been observed:

 Wheel running: Studies (Levitsky & Collier, 1968; Staddon & Ayres, 1975) found high activity
immediately after reinforcement.
 Aggression: Animals will sometimes exhibit aggression toward nearby targets post-
reinforcement.

These behaviours are instinctive actions triggered by the timing and predictability of reinforcement,
not by direct training.

The Nature of Schedule-Induced Behavior

Riley & Wetherington (1989) proposed that SIP and similar behaviours are instinctive, elicited by
periodic reinforcement. Their resistance to being altered by flavour aversion learning is key
evidence.

In Riley et al.'s (1979) study, rats developed a saccharin aversion after pairing it with illness. But this
aversion quickly extinguished, suggesting that SIP is relatively immune to aversive learning—
supporting the idea that it's instinct-driven, not learned.

Schedule-Induced Polydipsia and Human Alcoholism

Schedule-Induced Polydipsia (SIP) has been proposed by Gilbert (1974) as an animal model for
understanding human alcoholism, suggesting that interval reinforcement schedules in daily life—
such as work breaks or pay cycles—may lead to excessive alcohol consumption. Studies with rats
support this view, showing that under such schedules, animals consume large quantities of alcohol,
achieve blood alcohol levels comparable to human alcoholics, develop tolerance and withdrawal
symptoms, prefer alcohol over sugar, and even perform operant behaviours to gain access to
alcohol. This makes SIP a compelling model for examining the behavioral and biological
underpinnings of addiction. Genetic and neurobiological factors also play a significant role; high-
drinking animals demonstrate greater dopaminergic activity in response to reinforcement and
heightened sensitivity to amphetamines, indicating that variations in dopamine system functioning
may contribute to addiction vulnerability. Moreover, the principles of SIP extend beyond alcohol to
other substances, such as cocaine, where exposure to intermittent reinforcement schedules can
foster compulsive drug-seeking, suggesting a general mechanism of susceptibility within the brain’s
reward system. Despite these parallels, key differences between animal and human addiction must
be acknowledged. Human addictive behavior is shaped by a complex interplay of cognitive,
emotional, and social influences, and factors such as volition and contextual meaning play a much
larger role than in animal models. Therefore, while SIP helps elucidate core mechanisms, it cannot
fully capture the multifaceted nature of human addiction.

FLAVOR AVERSION LEARNING


Animals will develop aversions to taste cues even when the taste stimulus precedes illness by several
hours. The association of a flavor with illness is often referred to as long-delay learning.

The Selectivity of Flavor Aversion Learning

Some stimuli are more likely than others to become associated with a particular UCS. Garcia and Koelling's (1966) study shows that a taste is more salient when preceding illness than when preceding shock, whereas a light or tone is more salient when preceding shock than when preceding illness. Garcia and Koelling proposed that rats have an evolutionary preparedness to associate tastes with illness. Young animals also acquire a strong aversion after one pairing. Taste cues are very salient in terms of their associability with illness. Although rats form flavor aversions more readily than environmental aversions, other species do not show this pattern of stimulus salience. Birds acquire visual aversions more rapidly than taste aversions.

The Nature of Flavor Aversion Learning


Two very different theories attempt to explain long-delay flavor aversion learning:
 Learned-safety theory
 Concurrent interference theory

Learned-Safety Theory
Proposed by James Kalat and Paul Rozin (1971), the theory suggests that while contiguity is generally
essential in Pavlovian conditioning—such as when a child touches a flame and immediately feels
pain—a specialized mechanism evolved specifically for flavour aversion learning due to its unique
survival value. This mechanism allows animals to associate the taste of a potentially toxic food with
illness even if the symptoms occur several hours after ingestion. Such an adaptation enables animals
to avoid consuming harmful substances that don't produce immediate effects. Ingestional
neophobia, or the tendency to consume only a small amount of a novel food, also plays a crucial role
in this system. This cautious behavior has adaptive significance as it minimizes the risk of ingesting
large amounts of a potentially poisonous substance, giving the animal time to assess the safety of
the food based on later physiological consequences.

Concurrent Interference View


Sam Revusky (1971) argued that associative processes influence flavor aversion learning. According
to Revusky, proximity is a critical factor in conditioning; the stimulus occurring closest to the UCS will
become able to elicit the CR. If another stimulus intervenes between the CS and UCS, it will produce
concurrent interference, or the prevention of the CS becoming associated with the UCS and able to
elicit the CR. Thus, long-delay learning occurs in flavor aversion learning as a result of the absence of
concurrent interference.

Flavor Aversion Learning in Humans


In one study, Bernstein (1978) found that children in the early stages of cancer acquired an aversion
to a distinctively flavored “Mapletoff” ice cream consumed before toxic chemotherapy that affects the
gastrointestinal tract. In another study both adult and child cancer patients receiving radiation
therapy typically lose weight (Bernstein, 1991). These cancer patients also show an aversion to foods
eaten prior to chemotherapy. The association of hospital food with illness is a likely cause of the
weight loss seen in cancer patients. Chemotherapy is not the only event that produces flavor aversion
in humans. Excessive consumption of alcohol leads to illness (hangover) from the alcohol metabolite
acetaldehyde and to the development of a flavor aversion (Logue, Logue, & Strauss, 1983). In
addition, Havermans, Salvy, and Jansen (2009) reported that people developed an aversion to a
flavor that was either consumed or merely tasted prior to a 30-minute running exercise.

The Neuroscience of Flavor Aversion Learning

The lateral and central amygdala play a crucial role in both fear conditioning and flavour aversion
learning, acting as key structures in the brain’s processing of aversive experiences. Wig, Barnes, and
Pinel (2002) demonstrated that stimulating the lateral amygdala after rats consumed a novel flavor
induced a learned aversion, indicating its involvement in associating taste with negative outcomes.
Supporting this, Tucci, Rada, and Hernandez (1998) found increased glutamate activity in the
amygdala when rats encountered a flavor previously paired with illness, highlighting its role in
encoding aversive taste memories. Yamamoto and Ueji (2011) further mapped the neural circuitry
underlying flavor aversion, showing that detection begins in the gustatory cortex, then signals flow
through the amygdala and thalamic paraventricular nuclei to the prefrontal cortex, resulting in
avoidance behavior. Additionally, Agüera and Puerto (2015) found that damage to the central
amygdala impaired flavor aversion learning, reinforcing its importance. Altogether, these findings
suggest that the lateral and central amygdala are deeply involved in mediating aversive conditioning
related to both pain and illness.
FLAVOR PREFERENCE LEARNING

A key issue is distinguishing between flavor-sweetness associations and flavor-nutrient associations as the basis for flavor preferences. Tomato juice is an acquired taste. Some people like the flavor of tomato juice,
but other people do not. People’ s preference for tomato juice is an example of a conditioned flavor
preference. Flavor preferences can be learned rapidly (Ackroff, Dym, Yiin, & Sclafani, 2009) or be
acquired over a long delay (Ackroff, Drucker, & Sclafani, 2012).

Studies: Ackroff and colleagues found that rats preferred an unsweetened flavor paired with 8%
fructose and 0.2% saccharin over one paired with only 0.2% saccharin by the second preference test.
Ackroff, Drucker, and Sclafani (2012) found that flavor preferences can develop over a 60-minute
delay when an unsweetened flavor is paired with an 8% or 16% Polycose glucose nutrient solution.

The Nature of Flavor Preference Learning

People develop flavor preferences for two main reasons: association with sweetness and association
with positive nutritional outcomes (Myers & Sclafani, 2006). Flavor-sweetness preferences occur
when a nonsweet flavor is repeatedly paired with a sweet taste. For instance, Capaldi, Hunter, and
Lyn (1997) showed that rats could develop a preference for citric acid over salt when citric acid was
paired with sucrose or saccharin. Similarly, Ackroff and Sclafani (1999) demonstrated a preference
for unsweetened grape Kool-Aid paired with saccharin over cherry Kool-Aid paired with water.
Beyond sweetness, flavor-nutrient preference conditioning involves associating flavors with
nutrient-rich substances. This type of learning helps animals, including humans, identify and select
nutrient-dense foods (Sclafani, 2001). For example, Ackroff and Sclafani (2003) found rats preferred
a grape flavor paired with a 5% ethanol nutrient solution over cherry flavor with water, suggesting
the preference stemmed from nutritional value rather than taste alone. In humans, Capaldi and
Privitera (2007) observed that college students favoured flavors linked to high-fat cream cheese
over low-fat versions, despite similar bitterness. These preferences emerge early in life; studies show
young rats and children form flavor-sweetness and flavor-nutrient associations, even with
unsweetened Flavors paired with glucose or high-calorie drinks (Melcer & Alberts, 1989; Myers &
Hall, 1998; Birch et al., 1990). Moreover, flavor-nutrient preferences can arise whether nutrients are
ingested or infused, as shown by Myers, Ferris, and Sclafani (2005), who found rats preferred a
flavor paired with glucose over one paired with sucrose, confirming that nutrient value alone can
guide flavor preferences.

The Neuroscience of Flavor Preference Learning

Research indicates that dopamine neuron activity in the nucleus accumbens is fundamental to the
conditioning of both flavor-flavor and flavor-nutrient preferences (Sclafani, Touzani, & Bodnar,
2011). Sweet tastes like sucrose and creamy textures naturally activate dopamine neurons in this
brain region as unconditioned responses. When a previously bitter flavor—such as coffee—is paired
with sugar and cream, it can come to activate these neurons as a conditioned response. Supporting
this, Touzani, Bodnar, and Sclafani (2010) demonstrated that blocking dopamine receptors in the
nucleus accumbens disrupted the development of both types of conditioned flavor associations.
Beyond forming preferences, the nucleus accumbens also regulates dietary variety. Jang et al. (2017)
showed that while rats displayed different preferences for four equally nutritious but variably sweet
flavors, those with nucleus accumbens lesions overwhelmingly chose the sweetest one. This suggests
the region not only helps form flavor preferences but also prevents over-reliance on a single, highly
palatable option—supporting dietary balance.

IMPRINTING
Imprinting refers to the rapid formation of a strong attachment between a young animal and a
caregiver or significant object during a specific window of development, known as a sensitive
period. First described by Konrad Lorenz (1952) in goslings, imprinting involves the young following
the first moving object they see after hatching, often their mother—but it can also be humans or
inanimate objects. Certain traits—like movement, vocalizations, rhythmic sounds, and size—
enhance the likelihood of imprinting (Fabricius, 1951; Collias & Collias, 1956; Weidman, 1956;
Schulman et al., 1970).

Research by Harry Harlow (1971) expanded imprinting to primates, showing that infant monkeys
formed stronger attachments to soft, warm, and comforting surrogate mothers than to cold or
unresponsive ones. This parallels Ainsworth’s (1982) findings in humans, where securely attached
infants had responsive caregivers, while inattentive parenting led to distress and avoidant behavior.

The timing of imprinting is critical. Sensitive periods vary by species—from just a few hours in birds
and goats to several months in primates and humans. Though imprinting is strongest in early hours
(Jaynes, 1956), Brown (1975) found that with sufficient experience, it can occur later too.

Two major theories explain imprinting: genetic predisposition and associative learning. Moltz
(1960, 1963) proposed that imprinting involves both Pavlovian and operant conditioning—initial
comfort from familiar objects reduces fear and strengthens attachment. Supporting this, both birds
and primates experience fear reduction when reunited with familiar figures (Bateson, 1969; Harlow,
1971).

Ultimately, imprinting is not just about early attachment but about emotional regulation, safety,
and social development, forming the foundation for later relationships—even into adulthood, as
individuals may continue to seek comfort from early attachment figures when threatened or
distressed.

Maternal Bonding in Human Infants

The specific attributes of the object are important in the formation of a social attachment (Moltz, 1960, 1963). Research on maternal bonding in human infants draws on foundational work in animal imprinting,
emphasizing both innate and learned elements. According to Moltz (1960, 1963), imprinting involves
an initial low-arousal orientation to attention-grabbing features of an object, facilitating attachment.
Interestingly, chicks are more likely to imprint on objects that move away rather than toward them,
suggesting imprinting is not purely based on simple associative learning. Objects that resemble adult
members of the species are also more likely to be imprinted upon, highlighting that specific features
—like size, movement, and appearance—are crucial in forming lasting attachments.

Building on this, Konrad Lorenz (1935) and Hess (1973) proposed that imprinting is a genetically
programmed learning process. Hess introduced the idea of an innate schema—a built-in expectation
in young animals—that guides them to imprint on the most appropriate object, typically their
parent, during a sensitive period when imprinting is most effective. This evolutionary adaptation
ensures survival by promoting attachment to a caregiver early in life.

Graham (1989) added further support to the instinctive view by pointing out that unlike classically
conditioned responses, which fade without reinforcement, behaviors directed at an imprinting object
are remarkably persistent. This stability suggests imprinting represents a distinct form of learning,
one that blends biological preparedness with early environmental cues, shaping maternal bonding
and social attachment in both animals and humans.

An Instinctive View of Imprinting


Animals appear insensitive to punishment from imprinting objects compared to conditioned stimuli
associated with reinforcement. Kovach and Hess (1963) found that chicks continued to approach
imprinting objects despite receiving electric shocks. Similarly, Harlow (1971) demonstrated the
profound social attachment of infant primates to surrogate mothers, even in the face of extreme
abuse. His experiments with abusive "monster mothers," including those that rocked violently and
projected air blasts, showed that infants clung to them despite the abuse. Even mothers that tossed
infants off or shot brass spikes were approached again once the abuse ceased.

The Neuroscience of Social Attachments

Social attachments involve inhibiting fears and promoting attachment-related behaviors, contrasting
with the role of the lateral and central amygdala in aversive conditioning. Tottenham, Shapiro,
Telzer, and Humphreys (2012) found that dorsal amygdala activation correlates with maternal
approach behaviors in children and adolescents. Coria-Avila et al. (2014) reported on a neural circuit
from the dorsal amygdala to the nucleus accumbens that motivates social attachment behaviors,
linked to increased dopamine activity in the nucleus accumbens during maternal attachment.
Additionally, Strathearn (2011) noted that this dopamine pathway functions more actively in secure
maternal relationships but is less active in anxious maternal relationships.

THE AVOIDANCE OF AVERSIVE EVENTS

Bolles proposed that animals are not only driven by the pursuit of rewards like food or mates but
also possess instinctive mechanisms to avoid danger. These innate strategies are essential for
survival because animals often don’t have the luxury of time to learn from repeated exposure to
threats. Bolles introduced the concept of species-specific defence reactions (SSDRs), which are
automatic, evolutionarily preserved responses used to escape danger. These vary by species: rats
instinctively freeze, flee, or fight; birds fly away; and mice exhibit timid behavior. Importantly, SSDRs
are context-dependent—Bolles and Collier (1976) found that rats shocked in different-shaped boxes
responded with context-specific SSDRs (freezing in square boxes, running in rectangular ones).
Animals readily learn avoidance behaviors that align with their SSDRs. For instance, rats easily
learn to run to avoid shock, but struggle to learn arbitrary responses like bar pressing. Bolles (1969)
demonstrated that while rats quickly learned to run in an activity wheel to avoid shock, they could
not learn to stand on their hind legs to avoid it. Bolles and Riley (1973) showed that freezing
behavior could be quickly acquired via Pavlovian conditioning and could not be reduced by
punishment, suggesting that avoidance learning is rooted in innate Pavlovian mechanisms, not
operant reinforcement.

In humans, emotional responses to threats exhibit similar SSDR-like patterns. Barbara Fredrickson’s
research found that negative emotions like fear or anger reduce cognitive flexibility and limit
behavioral responses, while positive emotions (joy, contentment) broaden a person's range of
potential actions. This suggests that positive emotional states enhance adaptive capacity, while
negative states narrow responses to instinctive, often defensive behavior. Understanding this can
help improve strategies for resilience and coping under stress.

THE BIOLOGY OF REINFORCEMENT

The discovery by Olds and Milner (1954) that rats would self-stimulate certain brain regions by
pressing a bar marked the beginning of our understanding of the brain's reinforcement system. This
intracranial self-stimulation demonstrated that electrical stimulation of the brain could serve as a
powerful reinforcer across species, including rats, pigeons, dogs, primates, and even humans. The
most critical area for this effect is the medial forebrain bundle, a part of the limbic system. Larry
Stein and colleagues (1973) showed that stimulation of this area is highly reinforcing, motivates
behavior, becomes more active in the presence of reward, and is enhanced by deprivation.
Expanding on this, Wise and Rompre (1989) proposed the mesolimbic reinforcement system, which
includes two main neural pathways: the tegmentostriatal pathway (which identifies reinforcement-
related stimuli and connects to the nucleus accumbens) and the nigrostriatal pathway (which helps
store reinforcement-related experiences). Central to both pathways is dopamine, a neurotransmitter
that regulates reinforcement by connecting the ventral tegmental area with key areas like the
nucleus accumbens and prefrontal cortex. Natural rewards (like food or water) and drugs (like
amphetamine and cocaine) both trigger dopamine release, reinforcing behavior. Animals even self-
administer these drugs, suggesting the powerful role of dopamine in addictive behavior.

Moreover, opiates like morphine and heroin activate the tegmentostriatal pathway through
separate opiate receptors, but they also boost dopamine activity in the nucleus accumbens. This
dopamine-opiate interaction strengthens the link between reinforcement and addictive potential.
The Dual Receptor Theory by Koob (1992) supports this dual influence, highlighting the combined
effect of dopamine and opiate pathways in producing strong reinforcement signals.

Individual differences in reinforcement responsiveness are also observed. Some animals, like high
sucrose feeders (HSFs), show greater dopamine activity and consume more rewards (like sugar or
amphetamines) than low sucrose feeders (LSFs). This heightened mesolimbic activity may explain
variations in compulsive behavior like gambling or hypersexuality, especially in clinical contexts such
as Parkinson’s disease, where dopamine-enhancing medications trigger such behavior. Recognizing
these differences can guide treatments for addiction by targeting the mesolimbic system.

CONDITIONING

PAVLOVIAN CONDITIONING

Ivan Pavlov, originally a physiologist studying digestion, made a groundbreaking observation when
he noticed that dogs began salivating not just when food was placed in their mouths, but also when
they saw food or objects associated with food (like dishes). He theorized that this anticipatory
salivation was a learned response. Pavlov proposed that both humans and animals possess
unconditioned reflexes, which are automatic, biologically ingrained reactions to stimuli. For
example, food (an unconditioned stimulus, UCS) naturally triggers salivation (an unconditioned
response, UCR).

When a neutral stimulus (e.g., a metronome) is repeatedly paired with the UCS, it becomes a
conditioned stimulus (CS), capable of eliciting a conditioned response (CR)—in this case, salivation.
Over time, the strength of the CR increases, demonstrating associative learning. Pavlov’s studies
revealed key principles like stimulus generalization (similar stimuli to the CS also elicit the CR) and
extinction (the CR fades when the CS is no longer paired with the UCS).

Conditioned Hunger and Motivation

Conditioned responses aren’t limited to salivation. Hunger itself can be conditioned. For instance, if
someone regularly encounters food in the kitchen, cues like the sight of the refrigerator or pantry
can trigger hunger. This is because these environmental cues (CSs) become associated with food
(UCS), which naturally elicits physiological responses like salivation, insulin release, and gastric
secretions (UCRs). Over time, the sight of a cupboard can trigger these responses (CRs), causing the
person to feel hungry—even if they’re not biologically deprived of food.

Insulin, in particular, plays a key role: it lowers blood glucose levels, which in turn stimulates the
sensation of hunger. Therefore, palatable foods that trigger stronger unconditioned responses (like
chocolate or pie) result in stronger conditioned responses, making the CR more intense and harder
to resist.
Motivational Power of Conditioned Cues

Experiments with rats have shown that food-associated environmental cues can override satiety. For
example, Gallagher and Holland found that rats trained to associate a tone or specific environment
with food ate even when they were full, simply due to the presence of these cues. This behavior
wasn’t limited to familiar food—it extended to novel foods as well.

Similar findings were observed in humans. In one study by Birch et al. (1989), children exposed to
audiovisual cues paired with snacks consumed more food in response to those cues later—even
when satiated. Ridley-Siegert et al. (2015) further found that visual cues linked with chocolate
increased food intake more than cues linked to chips or non-food stimuli, highlighting chocolate’s
high motivational value.

Neuroscience of Conditioned Hunger

The basolateral amygdala (BLA) is central to how conditioned cues drive feeding behavior. During
conditioning, the BLA becomes activated by tone-food pairings. With continued pairings, this
activation spreads to a neural pathway that includes the medial prefrontal cortex (mPFC)—a region
involved in executive decision-making. This means the brain’s higher cognitive systems get involved
in cue-driven eating.

Additionally, the neuropeptide orexin, which regulates arousal and appetite, becomes activated
during these pairings, especially in the pathway from the amygdala to the prefrontal cortex. The
nucleus accumbens, a reward-processing area, also receives input from the BLA. Importantly,
damage to the BLA disrupts conditioned feeding—both preventing learning if the damage occurs
before conditioning and erasing the CR if it happens after training.

In humans, studies show that the amygdala activates in response to the sight or thought of
preferred foods, but not neutral ones—even when individuals are already full. This supports the idea
that food-related environmental cues can elicit motivational responses that bypass actual hunger
signals.

Conditioning of Fear

Fear conditioning involves the association of a previously neutral stimulus (CS) with an aversive
unconditioned stimulus (UCS), such as sudden turbulence in an airplane. The UCS (e.g., sharp drop)
elicits an unconditioned response (UCR), which includes both psychological distress and physiological
arousal. Over time, cues predicting the UCS, like the seatbelt light or storm clouds, become
conditioned stimuli (CSs) that trigger fear responses (CRs) even in the absence of turbulence.

Factors Influencing Fear Conditioning

The severity of the UCS (e.g., intensity of turbulence) influences the intensity of the UCR.
Additionally, repeated exposure can heighten reactivity through sensitization. Stimuli associated
with past aversive events can later elicit anticipatory fear responses (CRs), as seen in real-life
scenarios like Juliette's fear of darkness resulting from a traumatic event at night.

Historical Studies on Fear Conditioning

Early work by Bechterev (1913) and John Watson (1916) showed that pairing a neutral stimulus with
an aversive event (e.g., shock) leads to conditioned emotional responses. These findings laid the
foundation for understanding how fear is learned through Pavlovian mechanisms in both animals
and humans.

Neuroscience of Conditioned Fear

Unlike conditioned hunger (linked to the basolateral amygdala), conditioned fear primarily involves
the lateral and central amygdala. Studies in rats and mice show increased activity and neural
changes in these areas during fear conditioning. Lesions in the lateral/central amygdala impair the
acquisition and expression of fear responses to aversive stimuli, highlighting their crucial role.

Conditioning Techniques

1. Eyeblink Conditioning

Involves pairing a tone (CS) with a puff of air to the eye (UCS). Over repeated pairings, the tone alone
elicits a conditioned eyeblink response (CR). This technique is widely used in both animal and human
studies to explore associative learning and its neurological basis.

2. Fear Conditioning

Fear is measured through:

 Escape/avoidance behavior: though not always a direct indicator of fear.

 Conditioned emotional responses (CERs): freezing or suppression of an ongoing operant
behavior (e.g., bar pressing for food) in the presence of a feared CS.

Estes & Skinner (1941) developed the CER method to quantify fear. Fear conditioning typically
develops quickly, often within 10 trials.

3. Flavor Aversion Learning

Occurs when consumption of a food (CS) is followed by illness (UCS), resulting in long-lasting
avoidance of that food (CR). Research by John Garcia demonstrated that even a highly preferred flavor
(like saccharin) is avoided if followed by illness. This kind of conditioning is strong and often occurs
after a single trial, even if the illness is delayed.

EMPIRICAL OBSERVATION

1. Development of a Conditioned Reflex

Pavlov demonstrated that placing acid in a dog’s mouth causes a natural defensive response—
mouth movements and salivation—to remove the irritant. When a neutral sound is paired
repeatedly with the acid application, the dog begins to exhibit the same salivation and mouth
movement just in response to the sound. This illustrates classical conditioning, where a neutral
stimulus (CS) becomes capable of eliciting a response (CR) due to its association with an
unconditioned stimulus (US).

2. Elements of Classical Conditioning

Classical conditioning consists of:

 Unconditioned Stimulus (US): Naturally causes a response (e.g., acid).

 Unconditioned Response (UR): Natural reaction to US (e.g., salivation).


 Conditioned Stimulus (CS): Initially neutral (e.g., sound).

 Conditioned Response (CR): Learned response to CS (e.g., salivation to sound).


The CR is typically weaker in magnitude than the UR. Pavlov initially believed that CRs are
smaller versions of URs, though later research has challenged this in some cases.

3. Experimental Extinction

If a CS is presented without the US repeatedly, the learned CR gradually weakens and disappears.
This is known as extinction. Since the CS is no longer reinforced by the US, it loses its power to elicit
the CR. In classical conditioning, the US acts as a reinforcer.
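The gradual growth of the CR over reinforced trials, and its decline when the US is withheld, can be illustrated with a simple associative-strength update in the spirit of the Rescorla-Wagner model (an illustrative sketch, not part of the original text; the learning rate and asymptote values are arbitrary):

```python
# Acquisition followed by extinction, simulated with a Rescorla-Wagner-style
# update: V <- V + alpha * (lambda - V). Parameter values are arbitrary,
# chosen only to show the shapes of the two curves.

def trial(v, lam, alpha=0.3):
    """One conditioning trial: move associative strength V toward lambda."""
    return v + alpha * (lam - v)

v = 0.0
acquisition = []
for _ in range(10):          # CS paired with US: asymptote lambda = 1.0
    v = trial(v, lam=1.0)
    acquisition.append(round(v, 3))

extinction = []
for _ in range(10):          # CS presented alone: lambda = 0.0, so V decays
    v = trial(v, lam=0.0)
    extinction.append(round(v, 3))

print(acquisition[-1])       # close to 1.0 after repeated pairings
print(extinction[-1])        # close to 0.0 after repeated non-reinforced trials
```

The same update rule produces both curves: reinforced trials pull associative strength toward the asymptote, and non-reinforced trials pull it back toward zero, mirroring how the US acts as the reinforcer in classical conditioning.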

4. Spontaneous Recovery

After extinction, if some time passes and the CS is presented again, the CR may reappear
temporarily. This is called spontaneous recovery and shows that extinction does not completely
erase the learned association.

5. Higher-Order Conditioning

A conditioned stimulus (CS) can gain secondary reinforcing properties. For example, if a blinking
light (CS1) is paired with food (US), the dog learns to salivate to the light (CR). Then, a new stimulus
like a buzzer (CS2) can be paired with the light (without food). Eventually, the buzzer alone elicits
salivation—this is second-order conditioning. If a third stimulus (e.g., a tone) is paired with the
buzzer, and the tone also elicits a CR, it’s called third-order conditioning. This demonstrates how
conditioning can extend beyond the original US.
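The light-then-buzzer example can be sketched as a toy model in which an established CS stands in for the US when conditioning a new stimulus (an illustrative sketch; the update rule and values are assumptions, not from the text):

```python
# Toy model of higher-order conditioning: an established CS (light) can
# itself support conditioning of a new stimulus (buzzer). Values are arbitrary.

strength = {"light": 0.0, "buzzer": 0.0}

def pair(cs, support, alpha=0.3):
    """One pairing: the CS gains associative strength toward the support value."""
    strength[cs] += alpha * (support - strength[cs])

for _ in range(10):
    pair("light", support=1.0)                  # first-order: light + food
for _ in range(10):
    pair("buzzer", support=strength["light"])   # second-order: buzzer + light

print(round(strength["light"], 2))   # near 1.0
print(round(strength["buzzer"], 2))  # weaker than the light's CR
```

Because the buzzer is conditioned against the light's strength rather than the US itself, each further link in the chain (second-order, third-order) supports a progressively weaker CR.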

6. Generalization

After conditioning to a 2,000-cps tone, presenting tones of similar frequency also elicits a CR, though
with decreasing strength as similarity decreases. This is called stimulus generalization. The more
similar a new stimulus is to the original CS, the stronger the response. This concept is closely related
to Thorndike’s theory of transfer, where similar situations trigger similar responses. However, while
generalization depends on stimulus similarity, Thorndike’s spread of effect is more about proximity
of responses, not similarity.
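The falloff of the CR around the trained 2,000-cps tone can be sketched as a generalization gradient (an illustrative sketch; the Gaussian shape and width are assumptions, not values from the text):

```python
import math

def generalized_cr(test_freq, cs_freq=2000.0, width=300.0):
    """Illustrative generalization gradient: CR strength (0 to 1) falls off
    with distance from the trained CS frequency. A Gaussian shape and an
    arbitrary width are assumed here."""
    return math.exp(-((test_freq - cs_freq) ** 2) / (2 * width ** 2))

for f in (2000, 2200, 2600, 3000):
    print(f, round(generalized_cr(f), 3))  # CR weakens as similarity decreases
```

The gradient is symmetric around the trained frequency: a tone 200 cps below the CS elicits the same CR strength as one 200 cps above it, reflecting that generalization tracks similarity, not direction.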

APPLICATION OF PAVLOVIAN CONDITIONING

1. Extinction in Clinical Practice

Classical conditioning principles are used in therapy by assuming that maladaptive behaviours like
smoking or drinking are learned and can therefore be unlearned. For instance, the taste of alcohol
or cigarettes (CS) becomes associated with pleasurable physiological effects (US), producing
pleasure (CR). If the CS is repeatedly presented without the US (e.g., tasting alcohol without getting
intoxicated), extinction can occur, leading to a reduction or elimination of the behavior.

2. Counterconditioning

Counterconditioning is often more effective than extinction alone. In this method, the CS (e.g.,
alcohol or cigarette taste) is paired with a new, aversive US, such as a nausea-inducing drug. Over
time, the CS comes to elicit a negative response (like nausea), which helps develop an aversion. For
example, injecting Anectine, a drug that creates frightening respiratory paralysis, after drinking led to
lasting behavior change in most participants of one study. However, the effects are often temporary
and not guaranteed long-term.
3. Flooding

Flooding is a treatment for phobias based on extinction. It works by forcing the person to face the
feared stimulus (CS) without escape, allowing them to learn no harm will follow (no US). This helps
extinguish the fear response. For example, a person with dog phobia must be exposed to a dog for
an extended time. Although fast-acting, flooding can produce high dropout rates and even worsen
symptoms for some, since it involves intense exposure to something the person has long feared.

4. Systematic Desensitization

Developed by Joseph Wolpe, this technique also targets phobias but in a more gradual, controlled
manner. It includes three phases, the first of which is to build an anxiety hierarchy—a ranked list of
related situations from least to most anxiety-provoking. Clients are then gradually exposed to these
situations while learning relaxation techniques, helping to replace fear with calm responses. It is
safer and generally more well-tolerated than flooding.

GARCIA EFFECT

The Garcia Effect, also known as taste-aversion learning or long-delay learning, refers to the
phenomenon where animals learn to associate specific stimuli—particularly taste—with illness, even
when the negative consequence is delayed. Research has shown that species like rats, quail, and
monkeys are biologically predisposed to form such aversions due to evolutionary adaptations. For
instance, rats readily associate taste with internal discomfort, like illness, but not with external pain
such as foot shocks, making them notoriously difficult to poison. In a classic study, Garcia and
Koelling (1966) demonstrated that rats developed a strong aversion to flavoured (tasty) water after
being exposed to illness-inducing X-rays, whereas rats exposed to bright-noisy (audiovisual) water
did not show the same aversion after X-rays but did when foot shocks were used. This suggests that
internal threats are more strongly associated with taste, while external threats are linked with
audiovisual cues. Garcia’s further experiments showed that rats could associate taste with illness
even when the onset of sickness was delayed by up to 75 minutes, proving that long-delay
associations are not only possible but also advantageous for survival. Expanding on these findings,
Seligman (1970) proposed the biological preparedness continuum, which includes prepared
associations (easily formed, like taste-illness), unprepared associations (requiring more exposure),
and contraprepared associations (difficult or impossible to learn), emphasizing that evolutionary
pressures influence learning capacity. Moreover, learned aversions are not confined to taste; quail
form aversions based on both taste and visual cues, and monkeys have been shown to avoid cookies
of a specific shape after becoming ill, indicating that aversion learning can involve multiple sensory
modalities depending on species-specific adaptations. The Garcia Effect has important practical
implications: it helps explain why cancer patients often develop aversions to foods consumed before
chemotherapy, and it can be used to condition predators to avoid certain prey, proving its relevance
in both medical and ecological applications.

APPETITIVE CONDITIONING

Appetitive conditioning is a type of learning in which behaviours are strengthened because they
lead to rewarding or satisfying outcomes, making it a central mechanism in both human and animal
behavior. This form of conditioning plays a crucial role in shaping actions related to motivation,
reinforcement, and reward-seeking. The process involves a neutral stimulus becoming associated
with a positive reinforcer—such as food, praise, or affection—which in turn increases the likelihood
that the associated behavior will be repeated. As a result, organisms learn to perform certain
behaviours more frequently when those behaviours reliably predict or lead to a desirable event,
supporting the development of goal-directed actions.

B.F. SKINNER (1938)

B.F. Skinner emphasized that reinforcement shapes behavior, and a central concept in his theory is
contingency—the specific relationship between behavior and reinforcement. According to Skinner,
it’s the environment that determines these contingencies, and individuals must behave in specific
ways to receive reinforcement. He rejected internal explanations of reinforcement, asserting that
observable behavior and external stimuli drive learning.

INSTRUMENTAL VS. OPERANT CONDITIONING IN APPETITIVE CONTEXTS:

In appetitive contexts, instrumental conditioning occurs when the environment limits opportunities
for reward, thus constraining responses. Conversely, operant conditioning allows the subject to
freely control response frequency and thereby the reinforcement received. Both forms investigate
how behavior is modified by reinforcement, but differ in the degree of response freedom.

TYPES OF REINFORCEMENT AND SHAPING

Reinforcement can be primary (inherently satisfying like food) or secondary (learned, like money).
Skinner developed the shaping technique, where behaviours closer and closer to the desired
response are reinforced progressively. This method increases the speed and precision of learning a
new behavior by reinforcing successive approximations.

SCHEDULES OF REINFORCEMENT

Reinforcement can be delivered based on time (interval schedules) or the number of responses
(ratio schedules). These schedules affect the speed and pattern of learning. Different types of
schedules—fixed or variable—produce distinct response rates and patterns of behavior.

FIXED RATIO SCHEDULES

Fixed ratio (FR) schedules require a specific number of responses for reinforcement. They produce
consistent responding, with response rates increasing as the ratio increases. A post-reinforcement
pause often follows reinforcement, especially at higher ratios.

VARIABLE-RATIO SCHEDULES

Variable ratio (VR) schedules involve an unpredictable number of responses for reinforcement. They
generate high and steady response rates with minimal pauses. VR schedules are generally more
effective than FR schedules in maintaining behavior.

FIXED INTERVAL SCHEDULES

With fixed interval (FI) schedules, the first response after a set time is reinforced. This leads to a
scalloped response pattern—post-reinforcement pauses followed by an accelerating rate of
responding as the interval nears completion.

VARIABLE INTERVAL SCHEDULES

Variable interval (VI) schedules have changing time intervals between reinforcements. Response
rates tend to be moderate and steady, and the scallop pattern seen in FI schedules is absent. Longer
intervals produce lower response rates.
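The basic schedules above can be sketched as simple decision rules for when a response earns reinforcement (a minimal sketch; the class names, counters, and parameter values are illustrative, not from the text):

```python
import random

class FixedRatio:
    """FR-n: reinforce every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """VR-n: reinforce after an unpredictable number of responses averaging n."""
    def __init__(self, n, seed=0):
        self.n = n
        self.rng = random.Random(seed)
        self.required = self.rng.randint(1, 2 * n - 1)
        self.count = 0
    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI-t: reinforce the first response made after t seconds have elapsed."""
    def __init__(self, t):
        self.t, self.elapsed = t, 0.0
    def tick(self, seconds):
        self.elapsed += seconds
    def respond(self):
        if self.elapsed >= self.t:
            self.elapsed = 0.0  # interval restarts after reinforcement
            return True
        return False

fr = FixedRatio(10)
print(sum(fr.respond() for _ in range(100)))  # FR-10: 10 reinforcers per 100 responses
```

A variable-interval schedule would simply resample the interval t after each reinforcement, just as VR resamples the required response count; the unpredictability in both variable schedules is what removes the post-reinforcement pause.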
DIFFERENTIAL REINFORCEMENT SCHEDULES

These schedules make reinforcement contingent on the rate or pattern of responding:

 Differential Reinforcement of High Responding (DRH): reinforcement is delivered only for high
response rates (e.g., studying consistently for exams).

 Differential Reinforcement of Low Responding (DRL): reinforcement is delivered only after
slow response rates.

 Differential-Reinforcement-of-Other-Behavior (DRO): reinforcement is delivered when a
specific behavior (e.g., hitting) does not occur for a set time.

Compound Schedules

In many real-life situations, the relationship between behavior and reinforcement involves more
than one schedule, leading to what is known as a compound schedule. In these cases, two or more
reinforcement schedules are combined in sequence or simultaneously. For example, in a compound
schedule involving FR-10 and FI-1 minute, a rat must first press a lever ten times (Fixed Ratio 10),
and then wait for one minute (Fixed Interval 1 minute) after the last press before pressing again will
yield a reward. The rat must complete both requirements in the proper order to receive
reinforcement. This demonstrates that both animals and humans can adapt to complex
contingencies in reinforcement, showcasing their sensitivity to such learning environments.
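The FR-10 plus FI-1-minute example can be sketched as a two-stage rule (an illustrative sketch; the class and method names are invented, and the FI clock is assumed to run from the press that completes the ratio, with premature presses not resetting it):

```python
class CompoundFRFI:
    """Compound FR-10 then FI-60 s: ten presses complete the ratio stage,
    after which the first press at least 60 seconds later is reinforced."""
    def __init__(self, ratio=10, interval=60.0):
        self.ratio, self.interval = ratio, interval
        self.presses = 0
        self.ratio_done_at = None  # time the FR stage was completed

    def press(self, time_s):
        if self.ratio_done_at is None:
            self.presses += 1
            if self.presses >= self.ratio:
                self.ratio_done_at = time_s  # FR stage done; FI clock starts
            return False
        if time_s - self.ratio_done_at >= self.interval:
            self.presses, self.ratio_done_at = 0, None  # reset for next cycle
            return True
        return False

sched = CompoundFRFI()
for t in range(10):                 # ten quick presses complete the FR stage
    assert sched.press(float(t)) is False
assert sched.press(30.0) is False   # too early: FI clock has not expired
assert sched.press(70.0) is True    # >= 60 s after the tenth press: reinforced
```

Only the ordered combination of both requirements yields the reward, which is what makes compound schedules a test of sensitivity to complex contingencies.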

How Readily Is an Instrumental or Operant Response Learned?

Two major factors influence the strength and speed of learning in instrumental or operant
conditioning:

1. Importance of Contiguity

For conditioning to be effective, the reward must closely follow the behavior. Delays between a
response and its reinforcement greatly impair learning. In one experiment with rats, delays of just
1.2 seconds significantly reduced conditioning. According to Perin (1943), even a 10-second delay
led to only moderate learning, while delays of 30 seconds or more completely prevented acquisition
of the bar-pressing behavior. This highlights the critical role of temporal proximity between
behavior and consequence.

2. Impact of Reward Magnitude

The size of the reward also significantly affects learning. In Crespi’s (1942) study, rats receiving
larger food rewards (e.g., 64 or 256 units) learned to run faster in an alley than those receiving
smaller amounts (1 or 4 units). Similarly, Guttman (1953) found that rats reinforced with higher
concentrations of sucrose solutions learned bar pressing more quickly. These findings emphasize
that larger reinforcers enhance both the rate and strength of learning.

Importance of Past Experience: Contrast Effects

Previous reward history influences current learning performance. This is evident in two types of
contrast effects:
 Positive Contrast: Performance improves when a small reward is followed by a large reward.
The new, larger reward appears even more valuable due to the contrast with the earlier
smaller one.

 Negative Contrast: Performance drops when a large reward is followed by a small one. The
smaller reward feels disappointing in comparison.

Bower (1981) argued that a ceiling effect may explain why positive contrast effects sometimes fail to
appear — if the high reward already maximizes performance, an increase can’t elevate it further.
These contrast effects occur because expectations shaped by prior rewards alter the perceived value
of current outcomes.

Extinction of an Appetitive Response

An instrumental or operant response that has been learned through reinforcement can be
extinguished if reinforcement is consistently withheld. Over time, when a behavior no longer leads
to a reward, the strength of the response diminishes and eventually stops. This extinction process is
critical to understanding behavioral flexibility and learning limits.

Spontaneous Recovery

Spontaneous recovery refers to the temporary reappearance of a previously extinguished
response after a period without exposure to the conditioned stimulus or behavior. During extinction,
a temporary inhibition suppresses the behavior. When this inhibition fades, the response may re-
emerge. However, if the behavior continues to be unrewarded, spontaneous recovery fades again.
According to Hull (1943), conditioned inhibition — where environmental cues during non-reward
trials become associated with a suppressive state — can permanently suppress responding,
preventing spontaneous recovery.

Aversive Quality of Nonreward

Abram Amsel (1958) proposed that the absence of an expected reward causes frustration, which
plays a role in learning:

1. Primary Frustration: An innate, emotional reaction when an expected reward fails to
appear.

2. Learned Frustration: Over time, environmental cues associated with nonreward come to
trigger anticipatory frustration through classical conditioning.

3. Impact: This frustration can influence future behaviours, persistence, and the strength of
learning. It plays a key role in partial reinforcement effects and resistance to extinction.

Resistance to Extinction

Two major factors influence how resistant a behavior is to extinction:

1. Magnitude of Reward

According to D'Amato (1970), the impact of reward size on extinction resistance depends on the
amount of training:

 With minimal training, large rewards lead to greater resistance to extinction.

 With extended training, smaller rewards produce more persistence.


This is explained through the Anticipatory Goal Response (AGR). Large rewards produce AGR
quickly, which also creates stronger frustration when rewards are removed, eventually reducing
resistance. Small rewards develop AGR more slowly, resulting in less frustration and greater long-
term persistence.

2. Consistency of Reward

Extinction occurs more slowly after partial reinforcement than after continuous reinforcement. This
is known as the Partial Reinforcement Effect (PRE). Weinstock’s study demonstrated that rats
receiving rewards on fewer trials during acquisition showed greater resistance to extinction. The
lower the percentage of rewarded trials, the stronger the persistence during extinction.

Theories Explaining the Partial Reinforcement Effect (PRE)

A. Frustration Theory (Amsel, 1967, 1994)

This theory posits that intermittently rewarded animals learn to continue responding despite
frustration. The removal of a large reward produces greater frustration than a small one. In partially
reinforced animals, this frustration becomes a cue for continued responding, leading to slower
extinction.

B. Sequential Theory (E.J. Capaldi, 1971, 1994)

Capaldi proposed that animals associate the memory of a non-rewarded trial (SN) with the
instrumental response when it's followed by a reward. During extinction, these SN cues persist,
encouraging continued responding. Animals trained with continuous rewards never experience SN,
so they do not build this association — leading to quicker extinction.

Significance of PRE in Real Life

According to Flaherty (1985), the partial reinforcement effect is adaptive in natural settings:

1. It promotes persistence in the face of occasional failure, increasing chances of eventual
success.

2. It prevents giving up too early, helping organisms avoid missed opportunities.

3. At the same time, it does not promote endless responding without result — PRE helps strike
a balance between persistence and disengagement, allowing flexible, goal-directed
behavior in uncertain environments.

APPLICATION OF APPETITIVE CONDITIONING

Contingency management typically proceeds through three key stages to modify behavior
effectively.

The first stage, known as the assessment stage, involves identifying both the frequency of
appropriate and inappropriate behaviours, as well as the specific situations in which they occur.
During this phase, the reinforcers maintaining the inappropriate behavior are examined, and
potential reinforcers that could support the desired (appropriate) behavior are also identified. This
provides a foundation for targeted intervention.

The second stage, called the contingency contracting stage, involves clearly specifying the
relationship between the individual’s responses and the delivery of reinforcement. This includes
determining how reinforcement will be administered and ensuring that it is contingent upon the
occurrence of appropriate behaviours. A formal contract or plan is often developed during this phase
to establish expectations and reinforce consistency.

In the final stage, the implementation stage, the planned contingencies are put into action. This
phase focuses on monitoring behavioral changes that occur during the treatment, as well as
evaluating whether these changes are maintained after formal treatment ends. The primary goal is
to ensure that the intervention produces lasting behavioral improvements through effective use of
reinforcement principles.

THEORIES OF APPETITIVE CONDITIONING

1. Premack’s Probability-Differential Theory

Premack (1959) proposed that reinforcement is relative, and that an activity can serve as a reinforcer
if it has a higher probability of occurrence than the behavior it is meant to reinforce. In his classic
study with children, he placed a pinball machine next to a candy dispenser. Some children preferred
playing pinball (“manipulators”), while others preferred eating candy (“eaters”). In the second phase
of the study, manipulators had to eat candy to access the pinball, while eaters had to play pinball to
receive candy. The results showed that both groups increased their performance of the low-
probability activity to gain access to the high-probability one, supporting Premack’s theory that more
probable behaviours can reinforce less probable ones.
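Premack's rule reduces to a simple comparison of baseline probabilities. The sketch below is a hypothetical illustration with invented numbers, not data from the study: it checks whether one activity can serve as a reinforcer for another under the probability-differential principle.

```python
# Hedged sketch of Premack's probability-differential rule: an activity can
# reinforce another only if its baseline probability of occurrence is higher.
# The baseline values below are invented for illustration.

def can_reinforce(reinforcer_p, target_p):
    """True if the contingent activity's baseline probability exceeds the
    target activity's, so it can reinforce the target under Premack's rule."""
    return reinforcer_p > target_p

# A "manipulator" child who prefers pinball to candy (hypothetical baselines):
baseline = {"pinball": 0.7, "candy": 0.3}

# Pinball (high probability) can reinforce eating candy (low probability)...
print(can_reinforce(baseline["pinball"], baseline["candy"]))  # True
# ...but for this child, candy cannot reinforce playing pinball.
print(can_reinforce(baseline["candy"], baseline["pinball"]))  # False
```

For an "eater" child the baselines would be reversed, so the same rule predicts the opposite contingency working, which is exactly the symmetry Premack's second phase demonstrated.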

2. Response Deprivation Theory

Timberlake and Allison’s response deprivation theory suggests that when access to a normally
preferred activity is restricted, it becomes a powerful reinforcer, regardless of its baseline
probability. In experiments, rats increased their drinking behavior when it allowed access to a
restricted running wheel. Similarly, children increased their writing or arithmetic when access to
these activities was limited. For instance, in Evan’s case, restricting TV and computer games
increased his motivation to complete homework. The deprivation, not relative preference, drives the
reinforcing power.
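Timberlake and Allison's condition is often stated as a ratio comparison: a schedule deprives the organism of the contingent activity when it demands more instrumental responding per unit of contingent access than the organism's free-baseline ratio. The sketch below assumes this standard formulation with invented numbers; it is an illustration, not the authors' exact notation.

```python
# Hedged sketch of the response-deprivation condition (standard ratio form,
# assumed here): with baseline levels O_i (instrumental) and O_c (contingent),
# a schedule requiring I instrumental responses per C units of contingent
# access restricts the contingent activity -- making it a reinforcer --
# whenever I / C > O_i / O_c. All numbers are invented for illustration.

def is_deprived(I, C, O_i, O_c):
    """True if the schedule holds the contingent activity below its
    baseline proportion, so it should act as a reinforcer."""
    return I / C > O_i / O_c

# Baseline: a rat freely emits 400 licks and 200 wheel turns per session.
# Schedule: 10 licks buy only 1 wheel turn, so running is now restricted.
print(is_deprived(I=10, C=1, O_i=400, O_c=200))  # True: running can reinforce drinking
# A lenient schedule (1 lick per 10 turns) imposes no deprivation.
print(is_deprived(I=1, C=10, O_i=400, O_c=200))  # False
```

Note that the rule makes no reference to which activity is preferred at baseline, which is the key departure from Premack's account.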

AVERSIVE CONDITIONING

Principles of Aversive Conditioning

Aversive events can be either escaped or avoided, or in some cases, unavoidable. Escape behavior
involves terminating an unpleasant experience once it begins, such as attacking a mugger to end an
assault. Avoidance behavior involves taking action to prevent the aversive event, like avoiding going
out at night to prevent mugging. However, some aversive events, such as child abuse, are
inescapable. In such cases, learned helplessness may develop, where individuals stop trying to
escape due to repeated failure.

ESCAPE CONDITIONING

Miller (1948) demonstrated that rats could escape electric shock by turning a wheel to open a door.
Hiroto (1974) showed that college students terminated unpleasant noise by moving a finger across a
shuttle box. Even behaviours like closing one's eyes during a scary movie scene are examples of
escape responses.

Factors Influencing Escape Learning

1. Intensity of the Aversive Event: More intense aversive events increase the desire to escape
but may also discourage helping behavior. Piliavin et al. (1975) showed that bystanders were
less likely to help a fainting victim with a visible deformity, as the unpleasantness heightened
the cost of helping.

2. Absence of Reinforcement: Campbell and Kraeling (1953) found that rats escaped faster
from a 400-volt shock when the goal-box shock was greatly reduced. The more relief gained,
the stronger the escape learning.

3. Delayed Reinforcement: Fowler and Trapold (1962) showed that escape behavior weakened
when shock termination was delayed. Even a 3-second delay can eliminate conditioning. This
highlights the importance of immediacy in negative reinforcement.

ELIMINATION OF ESCAPE RESPONSES

Escape behavior can be extinguished in two ways:

1. Removal of Negative Reinforcement: If the aversive event continues even after the escape
behavior, the behavior eventually stops. Fazzaro and D’Amato (1969) found that rats trained
with more trials persisted longer during extinction.

2. Absence of Aversive Events: When aversive stimuli are no longer presented, escape
behaviours diminish. However, due to anticipatory cues associated with past aversive
events, escape responses may persist until these cues no longer elicit a reaction.

AVOIDANCE OF AVERSIVE EVENTS

Avoidance occurs when behavior prevents the onset of an aversive event. For instance, a teenage
girl may lie about needing to study to avoid going to a party with someone she dislikes—this is an
active avoidance response. Ignoring a dentist’s appointment reminder due to dental anxiety is a
passive avoidance response.

TYPES OF AVOIDANCE BEHAVIOR

1. Active Avoidance: Involves performing an overt action to prevent an unpleasant event.
Mowrer (1938, 1939) showed rats could jump hurdles to avoid shock. Similar avoidance
learning has been shown in dogs, cats, rabbits, and humans (e.g., opening an umbrella to
avoid rain).

2. Passive Avoidance: Involves withholding behavior to avoid punishment. Studies show that
rats avoid areas or behaviours previously associated with shock. For example, they may
refuse to leave a safe platform (Hines & Paolino, 1970) or avoid bar pressing if it previously
led to shock (Camp et al., 1967).

PUNISHMENT

Punishment is defined as the application of an aversive event following an inappropriate behavior,
with the aim of reducing the likelihood of that behavior occurring again. For instance, a parent may
take away a child’s television privileges for hitting a sibling. If this results in a decrease in the child’s
aggressive behavior, then the punishment is considered effective. The essence of punishment lies in
the contingency between the behavior and its consequence.

TYPES OF PUNISHMENT

There are two major types of punishment:


1. Positive Punishment involves introducing an aversive stimulus following a behavior.
Examples include physical punishment such as spanking or psychological punishers like
verbal criticism.

2. Negative Punishment (or omission training) involves the removal of a reinforcing stimulus
after an undesired behavior. One form of negative punishment is response cost, where a
person loses access to a reinforcer (e.g., money, points, privileges). Another form is time-out
from reinforcement, where the individual is placed in an environment where no
reinforcement is available, such as being sent to a room or isolation.

EFFECTIVENESS OF PUNISHMENT

B.F. Skinner (1953) argued that punishment only temporarily suppresses behavior. In one of his
experiments, rats trained to press a bar for food were later punished by a paw slap. While this
initially reduced bar-pressing behavior, the effect was short-lived. After 30 minutes, both punished
and unpunished rats pressed the bar at similar rates. Skinner concluded that punishment suppresses
but does not eliminate behavior, and suggested using extinction instead.

However, later research (e.g., Campbell & Church, 1969) has shown that under certain conditions,
punishment can lead to long-lasting behavioral suppression.

THE SEVERITY OF PUNISHMENT

The effectiveness of punishment increases with its severity. Mild punishments often produce weak
effects, while stronger punishments can result in permanent suppression. In Camp, Raymond, and
Church’s (1967) study, rats received shock punishments of varying intensities after pressing a bar.
Higher intensity shocks produced greater suppression of bar-pressing. These findings were
consistent across other species, including monkeys, pigeons, and humans. Human studies also
showed that more intense punishments, such as louder noises, more strongly deterred children
from playing with punished toys.

CONSISTENCY OF PUNISHMENT

Consistency is crucial for punishment to be effective. In a study by Parke and Deur (1972), boys who
were punished with a loud buzzer every time they hit a doll quickly stopped the behavior. However,
those who were punished only intermittently did not show the same suppression. This suggests that
partial punishment leads to weaker learning, much like partial reinforcement effects in reward
learning.

DELAY OF PUNISHMENT

The timing of punishment plays a critical role. Delayed punishment is less effective than immediate
punishment. In animal studies, Camp et al. (1967) showed that rats who received immediate
punishment stopped bar-pressing more readily than those who received delayed shocks (2, 7.5, or
30 seconds later). Similarly, in classroom settings, immediate scolding was more effective in
reducing misbehaviour in children than delayed punishment. These findings emphasize the need for
swift consequences to enhance behavioral suppression.

NEGATIVE CONSEQUENCES OF PUNISHMENT

Though punishment may reduce undesirable behavior, it carries several potential negative side
effects:

1. Pain-Induced Aggression
Punishment, especially when painful, can trigger aggression. Studies show that animals (e.g.,
monkeys, cats) may attack others or inanimate objects after being shocked. In humans, painful
events can lead to anger-driven aggression, especially when the amygdala is activated and not
effectively regulated by the prefrontal cortex. However, previous experiences with nonaggressive
conflict resolution can reduce aggression after punishment (Hokanson, 1970).

2. Modelling of Aggression

Punishment can teach aggression through modelling, particularly in children. Bandura (1971)
emphasized that behaviours can be learned by observation. Children who watch aggressive cartoons
(Steuer et al., 1971) or experience verbal punishment (Mischel & Grusec, 1966) are more likely to
imitate aggression. Moreover, correlational studies show that physically punished children are
more likely to display aggressive behavior.

3. Aversive Quality of the Punisher

Punishers themselves can become aversive stimuli. In a study by Redd, Morris, and Martin (1975),
children performed tasks in front of adults giving either positive or negative feedback. Although the
punitive adult successfully kept children on task, the children preferred working with the positive
adult. This highlights the emotional and social consequences of using aversive punishers.

APPLICATIONS OF AVERSIVE CONDITIONING

Punishment, when used appropriately, can be an effective method for modifying undesirable
behavior. In behavioral therapy and applied psychology, several techniques that rely on aversive
conditioning—such as flooding, response cost, and time-out—have been used to treat a wide range
of behavioral issues. These approaches are often applied when more conventional methods fail or
when rapid behavioral suppression is needed.

Response Prevention (Flooding) vs. Systematic Desensitization

Two key methods used in the treatment of phobias are flooding and systematic desensitization.

 Flooding requires individuals to face the feared stimulus directly and fully, preventing
escape, in order to extinguish the avoidance response. The individual learns that the feared
stimulus is not followed by a negative outcome (no UCS), which leads to the weakening of
the conditioned fear.

 In contrast, systematic desensitization gradually pairs the feared stimulus with relaxation
responses, using a hierarchy of anxiety-provoking situations while maintaining a relaxed
state. It is less intense but slower than flooding.

Effectiveness of Flooding

Flooding differs from standard extinction procedures in that it eliminates the option to escape,
making exposure more intense and direct. Research has shown that flooding is effective for
eliminating avoidance behavior in both animals and humans (Baum, 1970; Malleson, 1959). Flooding
has been successfully used to treat various disorders such as OCD, panic disorder, PTSD, simple
phobias, and social anxiety.

Marks (1987) found that flooding could significantly reduce agoraphobic fear in as little as three
sessions. Furthermore, the effects of flooding are long-lasting. For example, Emmelkamp et al.
(1980) found that anxiety levels significantly dropped after flooding sessions and remained low six
months post-treatment.

Neuroscience of Flooding

Individual responses to feared stimuli vary greatly. Siegmund et al. (2011) discovered that the
effectiveness of flooding correlates with cortisol response—those with greater physiological
arousal (cortisol release) benefited more from the treatment. However, due to its intensity, many
individuals cannot tolerate flooding, making systematic desensitization a more viable alternative for
those with low stress tolerance.

Case Example: Vomiting Suppression Through Aversive Conditioning

Lang and Melamed (1969) presented a striking case involving a 9-month-old infant with persistent
vomiting. Standard treatments failed, so they employed a punishment-based approach using an
EMG to detect vomiting onset and applied a mild electric shock to the leg when vomiting began.
Within six sessions, the vomiting ceased. Six months later, the child maintained a healthy weight
with no recurrence—demonstrating the lasting effects of properly administered aversive
conditioning.

Response Cost: A Negative Punishment Strategy

Response cost involves taking away a reinforcer following inappropriate behavior. In lab and real-
world settings, this method has been highly successful in reducing behaviours. Peterson and
Peterson (1968) used response cost to treat self-injurious behavior in a 6-year-old boy.
Reinforcement was withheld when the boy engaged in harmful actions, resulting in the complete
cessation of self-injury. This technique has since been used in a variety of behavioral modification
programs across ages and populations.

Time-Out from Reinforcement

Time-out is another form of negative punishment that involves removing access to reinforcement
temporarily following an undesirable behavior. The person may be removed from the reinforcing
situation (e.g., a child sent to their room) or the reinforcers may be removed directly (e.g., restricting
access to social activities).

The effectiveness of time-out depends on the non-reinforcing nature of the time-out environment.
For example, if a child’s room contains toys and entertainment, it may become a reinforcing rather
than punishing setting. Solnick et al. (1977) observed that placing an autistic child in a “sterile” time-
out area for self-stimulatory behavior increased tantrums. When the researchers used physical
restraint instead, tantrums decreased rapidly, showing the importance of selecting an appropriately
non-reinforcing consequence.

Numerous studies, including those by Derenne & Baron (2001) and Everett et al. (2007), confirm the
effectiveness of time-out in suppressing inappropriate behaviours across both animals and humans.
Time-out procedures are now widely adopted in schools, homes, and clinical settings.

STIMULUS CONTROL OF BEHAVIOR

Stimulus Generalization
Stimulus generalization occurs when an individual responds similarly to stimuli that resemble the
original conditioned stimulus. For instance, someone who gets sick after dining at a specific
restaurant may avoid all restaurants afterward. The extent of generalization depends on the
similarity between stimuli. A student who gets sick from vodka may develop an aversion to similar
white liquors like gin, a milder dislike for wine, and little reaction to beer. The more similar the new
stimulus is to the original, the stronger the generalized response.

Generalization Gradients
Generalization gradients visually represent how strongly subjects respond to various stimuli based
on their similarity to the original stimulus (S+). A steep gradient indicates narrow generalization—
responding only to stimuli very similar to S+. A flat gradient suggests broad generalization. In
excitatory conditioning, S+ is paired with reinforcement and then compared to test stimuli. In
inhibitory conditioning, a stimulus (S–) suppresses response when paired with the absence of
reinforcement, and the gradient shows how much this inhibition generalizes.
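The steep-versus-flat distinction can be made concrete with a toy gradient. The Gaussian shape, the wavelength values, and the width parameter below are illustrative assumptions, not a claim about any particular dataset: the point is only that a small width yields narrow generalization and a large width yields broad generalization around S+.

```python
import math

# Illustrative sketch (shapes and values assumed): response strength as a
# Gaussian-shaped generalization gradient peaking at the trained stimulus S+.
# A small width gives a steep gradient (narrow generalization); a large width
# gives a flat gradient (broad generalization).

def gradient(stimulus, s_plus, width):
    """Response strength relative to the trained stimulus S+."""
    return math.exp(-((stimulus - s_plus) ** 2) / (2 * width ** 2))

s_plus = 550  # hypothetical trained wavelength in nm
steep = [round(gradient(nm, s_plus, width=5), 2) for nm in (540, 550, 560)]
flat = [round(gradient(nm, s_plus, width=30), 2) for nm in (540, 550, 560)]
print(steep)  # responding collapses quickly away from S+
print(flat)   # responding stays high across neighbouring stimuli
```

Plotting either list against wavelength would reproduce the familiar gradient figures: both peak at S+, but only the steep one discriminates sharply between S+ and its neighbours.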

Lashley and Wade’s Theory of Stimulus Generalization


Lashley and Wade (1946) proposed that generalization results from a failure to discriminate
between stimuli. If an individual cannot differentiate between the CS and test stimuli, they will
generalize. Discrimination training (where S+ is reinforced and S– is not) limits generalization, while
nondifferential reinforcement (where only S+ is experienced) leads to broader generalization.
Evidence suggests that animals generalize more when they lack perceptual experience and less when
they can easily distinguish between S+ and S–.

Discrimination Learning
Discrimination learning is the ability to distinguish between stimuli associated with reinforcement
(SD) and those associated with non-reinforcement (S∆). It allows individuals to behave appropriately
based on environmental cues. SD signals that reinforcement is available; S∆ indicates it is not. A
failure to discriminate can lead to errors, inefficiency, or even maladaptive behaviours. Both external
and internal stimuli (like drug states) can function as discriminative cues.

Neuroscience of Discrimination Learning


The prefrontal cortex (responsible for decision-making) and the hippocampus (involved in memory)
are crucial for discrimination learning. Research shows that lesions in these areas impair the ability
to learn or adapt to new discriminative tasks, as animals persist with old behaviours even when
conditions change.

Two-Choice Discrimination Tasks


In these tasks, an individual must choose between an SD and an S∆ that differ along a single
dimension. Correctly responding to SD brings reinforcement, while responding to S∆ does not. These
tasks highlight how learning sharpens stimulus control.

Behavioral Contrast
Behavioral contrast is the phenomenon where responding increases to SD and decreases to S∆, even
though the reinforcement conditions remain the same. This can be temporary (local contrast) due to
emotional reactions or long-lasting (sustained contrast) due to anticipated reinforcement shifts. The
greater the disparity in reinforcement, the greater the contrast observed.

Occasion-Setting Stimuli
Some stimuli don’t directly elicit a conditioned response (CR) but instead set the occasion for
another stimulus to produce a CR. This is called occasion setting. For instance, a light may signal that
a tone will now lead to food. The orbitofrontal cortex and hippocampus play key roles in storing and
processing these contextual relationships.

Hull-Spence Theory of Discrimination Learning


Hull and Spence explained discrimination as a result of conditioned excitation (to SD) and inhibition
(to S∆), both of which generalize to similar stimuli. Discrimination is shaped by reinforcement
history. In the peak shift phenomenon, after discrimination training, animals may respond more to a
stimulus slightly different from SD, reflecting a shift away from the aversive S∆.
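The Hull-Spence account of peak shift follows mechanically from summing a positive gradient around SD with a negative gradient around S∆. The sketch below is a toy version with assumed Gaussian shapes, widths, and an inhibition weight (none of these numbers come from the text); the wavelengths loosely mirror Hanson's 550 nm / 560 nm arrangement described next.

```python
import math

# Hedged sketch of the Hull-Spence mechanism: net response strength is
# excitation generalized around S+ (550 nm) minus inhibition generalized
# around S- (560 nm). Gradient widths and the 0.6 inhibition weight are
# illustrative assumptions.

def gauss(x, center, width):
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def net_strength(x, s_plus=550, s_minus=560, exc_w=15, inh_w=10):
    return gauss(x, s_plus, exc_w) - 0.6 * gauss(x, s_minus, inh_w)

# Because inhibition erodes responding near S-, the peak of net responding
# shifts away from S-, below the trained S+ of 550 nm: the peak shift.
peak = max(range(500, 601), key=net_strength)
print(peak)  # a wavelength below 550 nm
```

Varying the inhibition weight or gradient widths moves the peak around, but as long as the inhibitory gradient overlaps the excitatory one, the maximum lands on the far side of S+ from S∆.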

Hanson’s (1959) Study


Hanson’s experiment showed that pigeons trained to discriminate between wavelengths (e.g., 550
nm vs. 560 nm) responded most strongly to a stimulus even more different from S∆ (e.g., 540 nm),
demonstrating peak shift. Those given non-discrimination training responded only to the training
stimulus, confirming predictions of Hull-Spence theory.

Relational vs. Absolute Theories – Köhler’s View


Wolfgang Köhler suggested that animals respond based on the relative properties of stimuli
(relational view), rather than their absolute features. For example, a rat trained to distinguish
between an 80-dB and 60-dB tone may respond more to a 90-dB tone than the original SD. This is
known as the transposition effect and supports Köhler’s theory. However, Hull-Spence’s absolute
view remains valid in single-stimulus contexts.

Reconciliation of Views
Schwartz and Reisberg (1991) proposed that both views are valid depending on the context. In
choice situations, relative properties matter (relational view). In generalization tests involving single
stimuli, animals respond to absolute features.

Errorless Discrimination Learning – Terrace (1963)


Herbert Terrace demonstrated that discrimination can be learned with minimal errors by gradually
introducing the S∆. In progressive training, S∆ starts as a dim or neutral stimulus and becomes more
distinct over time, reducing confusion and frustration. This method results in fewer errors compared
to abrupt or late introduction of S∆.

Applications and Findings


The fading technique has enabled pigeons and other species to master difficult discriminations (e.g.,
shapes, patterns) errorlessly. Human applications include training children with learning disabilities,
improving reading, vocabulary, and even complex skills like self-care and social behavior. Notably,
animals trained using errorless procedures do not experience the aversiveness or frustration
typically associated with S∆, and certain drugs that usually disrupt performance have no effect.

Sutherland and Mackintosh’s Attentional Theory


This theory views discrimination learning as a two-stage process. First, the animal identifies the
relevant stimulus dimension (e.g., colour or shape). Then, it forms an association between that
dimension and a response. Attention is focused on features that predict reinforcement, and the
analysers (e.g., for colour, brightness) become more sensitive with training.

Wagner et al. (1968) – Predictiveness of Stimuli


In experiments, when two stimuli (like light and tone) differed in how well they predicted
reinforcement, the more predictive one gained more control over the behavior. This supports the
attentional view—more predictive stimuli receive more attention.

Continuity vs. Noncontinuity Theories of Discrimination Learning


The Hull-Spence continuity theory suggests that discrimination is a gradual process of strengthening
excitation and inhibition. In contrast, Krechevsky and Lashley’s noncontinuity theory proposes that
learning occurs in sudden shifts when the correct hypothesis is discovered. The animal quickly learns
to focus on the relevant stimulus dimension and ignores others once it figures out the rule.

LEARNING AND COGNITION

1. Latent Learning
Latent learning refers to learning that occurs without any obvious reinforcement and remains hidden
until there is a reason to demonstrate it. This concept was central to Edward Tolman's theory of
learning. In the classic experiment by Tolman and Honzik (1930), three groups of rats were trained
in a maze: one group was never reinforced, one was always reinforced, and one was not reinforced
until the eleventh day. Remarkably, the third group showed a sudden improvement in performance
once reinforcement began—matching the performance of the always-reinforced group. This finding
supported the idea that learning can occur without reinforcement and only be expressed when there
is motivation. According to Tolman, learning involves forming expectations about reinforcement
rather than simply strengthening stimulus-response bonds. Consistent with this, latent extinction
occurs when mere exposure to the now-empty goal location weakens the previously reinforced
response, even though the animal never performs that response on non-reinforced trials.

2. Observational Learning

Historically, observational learning (or modelling) was thought to arise from a natural tendency to
imitate others. Early experimental attempts by Edward Thorndike (1898) and John B. Watson (1908)
failed to provide evidence for learning through observation. They concluded that learning required
direct interaction with the environment, not vicarious experience. However, Miller and Dollard
(1941) proposed that observational learning could be understood through reinforcement principles.
They viewed imitation as a form of instrumental conditioning and categorized it into three types:

 Same behavior: When individuals independently learn the same response to the same
stimulus (e.g., stopping at a red light).

 Copying behavior: When a person’s behavior is shaped through guided correction (e.g., an
art teacher helping a student).

 Matched-dependent behavior: When one blindly mimics a model and is reinforced for doing
so.

Later, Albert Bandura expanded this view, asserting that observational learning is not always
imitation. For example, swerving to avoid a pothole after seeing another car hit it is observational
learning without imitation. Bandura emphasized the learning-performance distinction—people may
learn behaviours through observation but only perform them under the right conditions. His 1965
study provided strong evidence for this view, marking a shift toward cognitive processing of
observed information.

3. Sensory Preconditioning

Sensory preconditioning demonstrates how associations between neutral stimuli can influence later
learning. For example, if you associate your neighbour (CS2) with their dog (CS1), and later the dog
bites you (CS1 + UCS), you might develop a fear response (CR) not only to the dog but also to the
neighbour—even though the neighbour was never directly paired with the bite. In experimental
settings, Brogden (1939) showed that dogs conditioned to associate a light and buzzer responded to
the unshocked stimulus after one was paired with a shock, confirming that prior neutral associations
can transfer emotional responses. However, the CR to CS2 is usually weaker than to CS1. Although
reliably demonstrated in early studies, the magnitude of sensory preconditioning effects was
generally small (Kimble, 1961).

4. Insight Learning

Insight learning, introduced by Wolfgang Köhler, is a key concept in Gestalt psychology, which
emphasizes holistic processing. Gestaltists believed learning occurs by perceiving the entire problem
and reorganizing the elements of a situation until a solution emerges. In a famous experiment,
Köhler observed chimpanzees solving problems using sudden insight. For instance, the chimp Sultan
used a box to reach a banana hung out of reach, or a stick to pull a banana placed outside his cage.
This kind of learning involves an "aha!" moment rather than gradual trial-and-error. Factors
influencing insight learning include experience, intelligence, the structure of the learning situation,
prior attempts, repetition, and the ability to generalize. Insight learning is creative and purposive,
differing from the mechanical associations in behaviorist theories.

5. Blocking

The blocking effect, first identified by Kamin, occurs when prior learning about one CS (A) prevents
conditioning to a new CS (B) when they are presented together as a compound stimulus (AB)
followed by a US. For example, if a tone (A) is first paired with a shock (US) and later a tone and light
(AB) are paired with the shock, little or no learning occurs to the light (B). The organism has already
learned to expect the US after the tone, so the light adds no new predictive value. This challenges
simple associative theories and supports models that include prediction error as a key factor in
learning.
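The prediction-error account mentioned above is usually illustrated with a Rescorla-Wagner-style update, in which every cue present on a trial changes in proportion to the shared error between the US and the summed prediction. The learning rate, trial counts, and cue names below are illustrative assumptions.

```python
# Hedged sketch of a prediction-error account of blocking, using a simple
# Rescorla-Wagner-style update. Learning rate (alpha), asymptote (lam), and
# trial counts are illustrative assumptions, not values from the text.

def train(trials, V, alpha=0.3, lam=1.0):
    """Each trial lists the cues present; the US is delivered on every trial.
    Each present cue's strength changes in proportion to the shared
    prediction error (lam minus the summed prediction of all present cues)."""
    for cues in trials:
        error = lam - sum(V[c] for c in cues)
        for c in cues:
            V[c] += alpha * error
    return V

V = {"tone": 0.0, "light": 0.0}
train([("tone",)] * 20, V)           # Phase 1: tone alone paired with the US
train([("tone", "light")] * 20, V)   # Phase 2: tone+light compound with the US
print(round(V["tone"], 2), round(V["light"], 2))
# The tone has already absorbed nearly all the associative strength, so the
# error on compound trials is close to zero and the light is "blocked".
```

Reversing the phases (training the compound first) lets both cues gain strength, which is why blocking is taken as evidence that learning is driven by surprise rather than by mere cue-US pairings.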

6. Learned Helplessness

Learned helplessness, a theory developed by Martin Seligman (1975), explains how individuals
become passive in the face of uncontrollable negative events. Originally demonstrated in dogs
exposed to inescapable shocks, the phenomenon was later applied to human depression. According
to Seligman, when people repeatedly face failure or uncontrollable outcomes, they may come to
believe that nothing they do can change the situation. Over time, this leads to a sense of
helplessness, demotivation, and depressive symptoms. For example, repeated rejection from
medical schools might lead someone to believe further attempts are pointless, even if they have the
ability to succeed. Seligman framed depression as a cognitive expectation that events are
independent of one’s behavior—a belief that causes individuals to stop trying, even when success is
possible.

LEARNING THEORIES

CLARK L HULL

Clark L. Hull (1884–1952), a key figure in learning theory, earned his Ph.D. from the University of
Wisconsin and later worked at Yale. His 1943 book Principles of Behavior pioneered the use of
scientific theory to systematically study learning.

Hull’s approach to theorizing

Hull used a hypothetico-deductive model, constructing behavior theory using postulates and
theorems, similar to Euclid’s geometry. Postulates weren’t directly testable, but theorems derived
from them could be tested experimentally. Success strengthened the underlying postulate; failure
led to revision or rejection.

Major Theoretical Concepts

Although Hull's 1952 version of his theory is highly complex, it is still an extension of his 1943 theory;
therefore, the best way to summarize Hull's thoughts on learning is to outline the 1943 version and
then point out the major changes that were made in 1952. Following that plan, we will first discuss
Hull's sixteen major postulates as they appeared in 1943 and then, later in the chapter, we will turn
to the major revisions Hull made in 1952.

Postulate 1: Sensing the External Environment and the Stimulus Trace (Clark L. Hull, 1943)

Hull proposed that when an organism encounters an external stimulus (S), it triggers a sensory
(afferent) neural impulse. This impulse persists for a few seconds even after the actual stimulus has
ended, forming what Hull termed a stimulus trace (s). This lingering trace is important because it
allows for associations to form even when the stimulus is no longer present. As a result, Hull revised
the classic stimulus-response (S-R) model to S-s-R, emphasizing that learning involves forming an
association between the trace (s) and the response (R). Furthermore, between the stimulus trace
and the final behavior, motor neurons (r) are activated to produce the overt response (R). Hence,
Hull's full behavioral chain is represented as:

👉 S (stimulus) → s (stimulus trace) → r (motor neuron activity) → R (response)

This model reflects Hull's attempt to explain how temporal gaps between stimulus and response
can still result in learning, due to the persistence of the stimulus trace in the nervous system.

Postulate 2: The Interaction of Sensory Impulses

The interaction of sensory impulses (s) indicates the complexity of stimulation and, therefore, the
difficulties in predicting behavior. Behavior is seldom a function of only one stimulus. Rather, it is a
function of many stimuli converging on the organism at any given time. These many stimuli and their
related traces interact with one another and their synthesis determines behavior. We can now refine
the S-R formula further as follows:

S1, S2, S3 . . . → s → r → R

where s represents the combined effects of the several stimuli acting on the organism at the moment.

Postulate 3: Unlearned Behavior

Hull believed that the organism is born with a hierarchy of responses, unlearned behavior, that is
triggered when a need arises. For example, if a foreign object enters the eye, considerable blinking
and tear secretion may follow automatically. If the temperature varies from that which is optimal
for normal body functioning, the organism may sweat or shiver. Likewise, pain, hunger, or thirst will
trigger certain innate response patterns that have a high probability of reducing the effects of those
conditions. The term hierarchy is used in reference to these responses because more than one
reaction may occur. If the first innate response pattern does not alleviate a need, another pattern
will occur. If the second response pattern does not reduce the need, still another will occur, and so
on. If none of the innate behavior patterns is effective in reducing the need, the organism will have
to learn new response patterns.

Postulate 4: Contiguity and Drive Reduction as Necessary Conditions for Learning

If a stimulus leads to a response and if the response results in the satisfaction of a biological need,
the association between the stimulus and the response is strengthened. The more often the stimulus
and the response that leads to need satisfaction are paired, the stronger the relationship between
the stimulus and the response becomes. On this basic point, Hull is in complete agreement with
Thorndike's revised law of effect. Hull, however, is more specific about what constitutes a "satisfying
state of affairs." Primary reinforcement, according to Hull, must involve need satisfaction, or what
Hull called drive reduction. Postulate 4 also describes a secondary reinforcer as "a stimulus which
has been closely and consistently associated with the diminution of a need" (Hull, 1943, p.178).
Secondary reinforcement following a response will also increase the strength of the association
between that response and the stimulus with which it was contiguous.

It can also be said that the "habit" of giving that response to that stimulus gets stronger. Hull's term,
habit strength (SHR), will be explained below.

Habit Strength

Habit strength is one of Hull's most important concepts, and as stated above, it refers to the strength of the association between a stimulus and a response. As the number of reinforced pairings between a stimulus and a response goes up, the habit strength of that association goes up. The mathematical formula Hull (1943) proposed for the relationship between SHR and the number of reinforced pairings (N) between S and R is:

SHR = 1 − 10^(−0.0305N)
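Assuming Hull's 1943 growth function, SHR = 1 − 10^(−0.0305N) (the constant 0.0305 is Hull's empirical estimate), the negatively accelerated growth of habit strength can be sketched as:

```python
# Habit strength (SHR) as a negatively accelerated growth function of the
# number of reinforced S-R pairings N, using the form SHR = 1 - 10**(-a * N).

def habit_strength(n_reinforced, a=0.0305):
    return 1 - 10 ** (-a * n_reinforced)

# Each additional reinforced pairing adds less habit strength than the last.
for n in (1, 10, 50, 100):
    print(n, round(habit_strength(n), 3))
```

The curve approaches 1.0 as an asymptote, which matches Hull's claim that early reinforced trials contribute far more to the habit than later ones.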

Postulate 5: Stimulus Generalization

Hull says that the ability of a stimulus (other than the one used during conditioning) to elicit a
conditioned response is determined by its similarity to the stimulus used during training. Thus, SHR
will generalize from one stimulus to another to the extent that the two stimuli are similar. This
postulate of stimulus generalization also indicates that prior experience will affect current learning;
that is, learning that took place under similar conditions will transfer to the new learning situation.
Hull called this process generalized habit strength (SHR). This postulate essentially describes
Thorndike's identical elements theory of the transfer of training.

Postulate 6: Stimuli Associated with Drives

Biological deficiency in the organism produces a drive (D) state and each drive is associated with
specific stimuli. Hunger pangs which accompany the hunger drive, and dry mouth, lips, and throat
which accompany the thirst drive, are examples. The existence of specific drive stimuli makes it
possible to teach an animal to behave in one way under one drive and another way under another
drive. For example, an animal can be taught to turn right in a T-maze when it is hungry and to turn
left when it is thirsty.

Postulate 7: Reaction Potential as a Function of Drive and Habit Strength

The likelihood of a learned response being made at any given moment is called reaction potential (SER). Reaction potential is a function of both habit strength (SHR) and drive (D). For a learned response to occur, SHR has to be activated by D. Drive does not direct behavior; it simply arouses it
and intensifies it. Without drive, the animal would not emit a learned response even though there
had been a large number of reinforced pairings between a stimulus and a response. Thus, if an
animal has learned to press a bar in a Skinner box in order to obtain food, it would press the bar only
when it was hungry, no matter how well it was trained. The basic components of Hull's theory that
we have covered thus far can be combined into the following formula:

Postulate 8: Responding Causes Fatigue, Which Operates Against the Elicitation of a Conditioned Response

Responding requires work, and work results in fatigue. Fatigue eventually acts to inhibit responding.
Reactive inhibition (IR) is caused by the fatigue associated with muscular activity and is related to the
amount of work involved in performing a task. Since this form of inhibition is related to fatigue, it
automatically dissipates when the organism stops performing. This concept has been used to explain
the spontaneous recovery of a conditioned response after extinction. That is, the animal may stop
responding because of the buildup of IR. After a rest, the IR dissipates and the animal commences to
respond once again. For Hull, extinction is not only a function of nonreinforcement but is also
influenced by the buildup of reactive inhibition.

Reactive inhibition has also been used to explain the reminiscence effect, which is the improvement of performance following the cessation of practice. The explanation is that IR builds up during training and operates against performance; after a rest, IR dissipates and performance improves. Additional support for Hull's notion of IR comes from research on the
difference between massed and distributed practice. It is consistently found that when practice trials
are spaced far apart (distributed practice), performance is superior to what it is when practice trials
are close together (massed practice).

Postulate 9: The Learned Response of Not Responding

Fatigue being a negative drive state, it follows that not responding is reinforcing. Not responding
allows IR to dissipate, thereby reducing the negative drive of fatigue. The learned response of not
responding is called conditioned inhibition (SIR). Both IR and SIR operate against the elicitation of a learned response and are therefore subtracted from reaction potential (SER). When IR and SIR are subtracted from SER, the result is effective reaction potential (S̄ER):

S̄ER = SER − (IR + SIR)

Postulate 10: Factors Tending to Inhibit a Learned Response Change from Moment to Moment

According to Hull, there is an "inhibitory potentiality," which varies from moment to moment and
operates against the elicitation of a learned response. This "inhibitory potentiality" is called the
oscillation effect (SOR). The oscillation effect is the "wild card" in Hull's theory: it is his way of taking into consideration the probabilistic nature of predictions concerning behavior. There is, he said, a
factor operating against the elicitation of a learned response, whose effect varies from moment to
moment but always operates within a certain range of values; that is, although the range of the
inhibitory factor is set, the value that may be manifested at any time could vary within that range.
The values of this inhibitory factor are assumed to be normally distributed, with middle values most
likely to occur. This oscillation effect explains why a learned response may be elicited on one trial but
not on the next. Predictions concerning behavior based on the value of SER will always be influenced
by the fluctuating values of SOR and will thus always be probabilistic in nature. The SOR must be subtracted from effective reaction potential (S̄ER), which yields momentary effective reaction potential (S̄ĖR). Thus, we have

S̄ĖR = S̄ER − SOR

Postulate 11: Momentary Effective Reaction Potential Must Exceed a Certain Value Before a Learned Response Can Occur

The value that momentary effective reaction potential (S̄ĖR) must exceed before a conditioned response can occur is called the reaction threshold (SLR). Therefore, a learned response will be emitted only if S̄ĖR is greater than SLR.

Postulate 12: The Probability that a Learned Response Will Be Made Is a Combined Function of S̄ĖR, SOR, and SLR

In the early stages of training, that is, after only a few reinforced trials, S̄ER will be very close to SLR, and therefore, because of the effects of SOR, a conditioned response will be elicited on some trials but not on others. The reason is that on some trials the value of SOR subtracted from S̄ER will be large enough to reduce S̄ĖR to a value below SLR. As training continues, subtracting SOR from S̄ER will have less and less of an effect, since the value of S̄ER will become much larger than the value of SLR. Even after considerable training, however, it is still possible for SOR to assume a large value, thereby preventing the occurrence of a conditioned response.
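Postulates 7 through 12 thus describe a quantitative chain: habit strength is energized by drive, the inhibition terms and a randomly oscillating SOR are subtracted, and a response occurs only if the remainder exceeds the threshold. A toy numerical sketch of that chain (all constants here are invented for illustration, not Hull's values):

```python
import random

def respond(shr, drive, ir, sir, slr, rng):
    """Return True if a learned response is emitted on this occasion."""
    ser = shr * drive                # reaction potential: SHR activated by drive
    effective = ser - (ir + sir)     # subtract reactive and conditioned inhibition
    sor = abs(rng.gauss(0, 0.05))    # oscillation: moment-to-moment inhibitory factor
    return (effective - sor) > slr   # momentary effective reaction potential vs. threshold

rng = random.Random(0)
early = sum(respond(shr=0.3, drive=1.0, ir=0.05, sir=0.05, slr=0.15, rng=rng)
            for _ in range(1000))
late = sum(respond(shr=0.95, drive=1.0, ir=0.05, sir=0.05, slr=0.15, rng=rng)
           for _ in range(1000))
print(early, late)
```

Early in training, effective reaction potential sits just above the threshold, so the oscillating SOR blocks the response on many occasions; late in training the margin is so large that responding is nearly certain, which is exactly the pattern Postulate 12 describes.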

Postulate 13: The Greater the Value of SER, the Shorter Will Be the Latency Between S and R.

Latency (StR) is the time between the presentation of a stimulus to the organism and its learned
response. This postulate simply states that the reaction time between the onset of a stimulus and
the elicitation of a learned response goes down as the value of SER goes up.

Postulate 14: The Value of SER will Determine Resistance to Extinction

The value of SER at the end of training determines resistance to extinction, that is, how many non-
reinforced responses will need to be made before extinction occurs. The greater the value of SER,
the greater the number of nonreinforced responses that have to be made before extinction takes
place. Hull used n to symbolize the number of non-reinforced trials that occurred before extinction
resulted.

Postulate 15: The Amplitude of a Conditioned Response Varies Directly with SER

Some learned responses occur in degrees, for example, salivation or the galvanic skin response
(GSR). When the conditioned response is one that can occur in degrees, its magnitude will be directly related to the size of the momentary effective reaction potential (S̄ĖR). Hull used A to symbolize response amplitude.

Postulate 16: When Two or More Incompatible Responses Tend to Be Elicited in the Same
Situation, the One with the Greatest SER Will Occur

MAJOR DIFFERENCES BETWEEN HULL'S 1943 AND 1952 THEORIES

1. Incentive Motivation (K)

In the 1943 version of his theory, Hull treated the magnitude of reinforcement as a learning variable:
The greater the amount of reinforcement, the greater the amount of drive reduction, and thus the
greater the increase in SHR. Research showed this notion to be unsatisfactory. Experiments
indicated that performance was dramatically altered as the size of reinforcement was varied after
learning was complete. For example, when an animal trained to run a straight runway for a small
reinforcer was switched to a larger reinforcer, its running speed suddenly went up. When an animal
trained on a large reinforcer was shifted to a smaller reinforcer, its running speed went down. The
changes in performance following a change in magnitude of reinforcement could not be explained in terms of changes in SHR, since they were too rapid. Moreover, SHR was thought to be fairly permanent: unless one or more factors operated against SHR, it would not decrease in value. Hull therefore removed the magnitude of reinforcement from his learning variable and treated it as a separate performance variable, incentive motivation (K).

2. Stimulus-Intensity Dynamism

According to Hull, stimulus-intensity dynamism (V) is an intervening variable that varies along with
the intensity of the external stimulus (S). Stated simply, stimulus-intensity dynamism indicates that
the greater the intensity of a stimulus, the greater the probability that a learned response will be
elicited.

3. Change from Drive Reduction to Drive Stimulus Reduction


Originally, Hull had a drive reduction theory of learning, but later he revised it to a drive stimulus
reduction theory of learning. One reason for the change was the realization that if a thirsty animal is
given water as a reinforcer for performing some act, it takes a considerable amount of time for the
thirst drive to be satisfied by the water. The water goes into the mouth, the throat, the stomach, and
eventually the blood. The effects of ingesting water must ultimately reach the brain, and finally the
thirst drive will be reduced. Hull concluded that the drive reduction was too far removed from the
presentation of the reinforcer to explain how learning could take place. What was needed to explain
learning was something that occurred soon after the presentation of a reinforcer, and that
something was the reduction of drive stimuli (SD). As mentioned earlier, drive stimuli for the thirst
drive include dryness in the mouth and parched lips. Water almost immediately reduces such
stimulation, and thus Hull had the mechanism he needed for explaining learning.

4. The Habit Family Hierarchy

The habit family hierarchy simply refers to the fact that in any learning situation, any number of
responses are possible and the one that is most likely is the one that brings about reinforcement
most rapidly and with the least amount of effort. If that particular way is blocked, the animal will
prefer the next shortest route, and if that is blocked, it will go to the third route and so on.

Hull's Final System Summarized

There are three kinds of variables in Hull's theory:

1. Independent variables, which are stimulus events systematically manipulated by the experimenter.

2. Intervening variables, which are processes thought to be taking place within the organism but are not directly observable. All the intervening variables in Hull's system are operationally defined.

3. Dependent variables, which are some aspect of behavior that is measured by the experimenter in order to determine whether the independent variables had any effect.

O. HOBART MOWRER

Hobart Mowrer (1907–1982) was born in Unionville, Missouri, and earned his Ph.D. from Johns
Hopkins University in 1932. During the 1930s, he worked at Yale University as a postdoctoral fellow
and later as an instructor, where he was significantly influenced by Clark Hull. In 1940, Mowrer
joined the Harvard School of Education, where he remained until 1948. He then moved to the
University of Illinois at Urbana, where he spent the rest of his professional career.

The Problem of Avoidance Conditioning

Mowrer's career as a learning theorist began with his efforts to solve the problem that avoidance
learning posed for Hullian theory. If an apparatus is arranged so that an organism receives an electric
shock until it performs a specified response, it will quickly learn to make that response when it is
shocked. Such a procedure is called escape conditioning, and it is diagrammed below:

Escape conditioning is easily handled by Hullian theory by assuming that the response is learned
because it is followed by drive (pain) reduction. However, avoidance conditioning is not so easily
explained by Hullian theory. With avoidance conditioning, a signal, such as a light, reliably precedes
the onset of an aversive stimulus, such as an electric shock. Other than the presence of the signal
that precedes the shock, the procedure is the same as for escape conditioning. The procedure used in avoidance conditioning is as follows:

Signal (light) → shock → response → shock off; with training: signal → response → shock avoided

With avoidance conditioning, the organism gradually learns to make the appropriate response when
the signal light comes on, thus avoiding the shock. Furthermore, this avoidance response is
maintained almost indefinitely even though the shock itself is no longer experienced. Avoidance
conditioning posed a problem for the Hullians because it was not clear what was reinforcing the
avoidance response.

Mowrer 's Two-Factor Theory of Learning

Mowrer proposed that avoidance learning involves two distinct learning processes. The first is
classical conditioning: when a neutral signal (like a light) is consistently followed by an aversive
stimulus (like an electric shock), the signal becomes a conditioned stimulus (CS) that elicits fear, a
conditioned emotional response. Mowrer referred to this as sign learning, because the organism
learns to interpret the signal as a sign of impending danger.

The second factor is instrumental or operant conditioning, which Mowrer called solution learning.
Once fear is elicited by the CS, the organism learns a specific response (like running or pressing a
lever) to terminate or avoid the fear-inducing stimulus. This behavior is negatively reinforced—not
by direct shock reduction—but by the reduction of fear, a conditioned emotional state. Thus,
classical conditioning creates the fear, and operant conditioning removes it.
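The two factors can be caricatured in a small simulation: signal-shock pairings build conditioned fear (sign learning), and fear reduction strengthens the avoidance response (solution learning). All update rules and constants below are invented for illustration, not from Mowrer:

```python
# Toy two-factor model: fear is classically conditioned to the signal;
# the avoidance response is strengthened because it reduces that fear.

def simulate(trials=30, alpha=0.5, beta=0.4):
    fear = 0.0       # conditioned fear elicited by the signal (factor 1)
    avoidance = 0.0  # strength of the avoidance response (factor 2)
    for _ in range(trials):
        shocked = avoidance < 0.5          # weak avoidance: signal is followed by shock
        if shocked:
            fear += alpha * (1.0 - fear)   # pairing drives fear toward its maximum
        avoidance += beta * fear * (1.0 - avoidance)  # fear reduction reinforces avoiding
    return fear, avoidance

fear, avoidance = simulate()
# Fear persists even after shocks stop, so the avoidance response is maintained.
print(round(fear, 2), round(avoidance, 2))
```

Note that once avoidance is strong enough, the shock is never experienced again, yet the conditioned fear remains and keeps reinforcing the response. This mirrors the puzzle above: the behavior persists almost indefinitely without further shocks.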

Decremental and Incremental Reinforcement

In 1960, Mowrer expanded his two-factor theory to account for a wider range of emotional
responses beyond just fear. He proposed that the type of emotion elicited by a conditioned stimulus
(CS) depends on two factors: the type of unconditioned stimulus (US) it is paired with, and the
timing of that pairing—whether the CS appears before the onset or termination of the US.
Mowrer distinguished between two types of reinforcers: incremental reinforcers, which increase
drive (such as a shock), and decremental reinforcers, which reduce drive (such as food). Incremental
reinforcers are typically associated with negative emotional states, while decremental reinforcers
are linked to positive emotions.

Based on this framework, Mowrer identified four key emotional responses:

 When a CS is presented before the onset of an incremental US (e.g., shock), it elicits fear.

 When a CS is presented before the termination of an incremental US, it elicits relief.

 When a CS appears before the presentation of a decremental US (e.g., food), it elicits hope.

 When a CS is presented before the removal of a decremental US, it elicits disappointment.

Through this extension, Mowrer showed that emotional conditioning was not limited to fear but
could encompass a broad spectrum of emotions, depending on how the organism interprets signals
in relation to changes in drive.
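Mowrer's scheme is a 2 × 2 classification, which can be captured in a small lookup (a sketch; the key names are mine, the four emotion labels follow the cases above):

```python
# Mowrer's four conditioned emotions, indexed by the kind of US the CS is
# paired with and whether the CS precedes the onset or the termination of it.
EMOTIONS = {
    ("incremental", "onset"): "fear",                  # CS before shock begins
    ("incremental", "termination"): "relief",          # CS before shock ends
    ("decremental", "onset"): "hope",                  # CS before food is presented
    ("decremental", "termination"): "disappointment",  # CS before food is removed
}

def conditioned_emotion(us_type, timing):
    return EMOTIONS[(us_type, timing)]

print(conditioned_emotion("incremental", "onset"))       # fear
print(conditioned_emotion("decremental", "termination")) # disappointment
```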

All Learning Is Sign Learning

By 1960, Mowrer concluded that all learning could be understood as sign learning. External stimuli
—through their association with either positive or negative unconditioned stimuli—come to elicit
specific emotional responses. These emotional states, in turn, motivate behavior. Thus, rather than
viewing operant and classical conditioning as fundamentally separate, Mowrer proposed a unified
theory in which emotional conditioning is central to understanding behavior.

EDWIN RAY GUTHRIE

Edwin Ray Guthrie (1886–1959) served as a professor of psychology at the University of Washington
from 1914 until his retirement in 1956. His most influential work, The Psychology of Learning, was
published in 1935 and revised in 1952. Guthrie was known for his clear, simple writing style, which
avoided technical jargon and was intended to be understandable to beginners in psychology. He
illustrated his ideas with real-life anecdotes and placed a strong emphasis on practical applications
—in this way, aligning himself with the traditions of Thorndike and Skinner.

The One Law of Learning

Guthrie proposed a single principle of learning, rejecting the complexity of theories like those of
Thorndike and Pavlov. His law of contiguity stated:

“A combination of stimuli which has accompanied a movement will on its recurrence tend to be
followed by that movement.”

He emphasized that reinforcement or satisfaction was unnecessary for learning to occur. In his final
formulation before his death, he revised the law as:

“What is being noticed becomes a signal for what is being done,”


recognizing that only a subset of available stimuli is attended to at any given moment.

One-Trial Learning

A radical aspect of Guthrie's theory was his rejection of the law of frequency. He believed that a
stimulus-response association forms completely on the first pairing. In his view, repetition does not
strengthen learning; rather, if the same situation recurs and the same behavior happens again, it is
due to contiguity and recency, not frequency or reinforcement.
The Recency Principle

Building on contiguity and one-trial learning, Guthrie proposed the recency principle, which states
that the last response made in a specific stimulus situation is the one most likely to occur again
when those same stimuli reappear. This principle explains why people repeat the most recent
solution or behavior in familiar contexts.

Movement-Produced Stimuli

To address situations where the external stimulus and response are not closely timed, Guthrie
introduced the concept of movement-produced stimuli (MPS). These are internal stimuli generated
by the body's own movements—like muscle or joint sensations. He suggested that:

 Learning is not just between external stimuli and behavior.

 Once a response begins, each movement produces its own stimuli, which cue subsequent
movements.

 This allows for the chaining of responses, enabling complex behavior to be learned as a
sequence of MPS-based associations.

Guthrie’s theory was one of the simplest yet most radical in the history of learning psychology,
emphasizing stimulus-response contiguity, immediate learning, and the role of internal body cues
in guiding complex sequences of behavior.

Nature of Reinforcement

For Guthrie, reinforcement was not a central cause of learning, as it was for Hull or Skinner. Instead,
he saw reinforcement as a mechanical process that simply preserves the stimulus-response
connection by changing the stimulating conditions to prevent unlearning. Regardless of whether the
response is correct or effective, it will be repeated when the organism next encounters the same
stimuli. This reflects his belief that learning occurs through contiguity alone, not through
reinforcement or satisfaction.

HOW TO BREAK HABITS

A habit, according to Guthrie, is a response that has been associated with a wide range of stimuli.
The more stimuli that elicit the response, the stronger the habit. To break a habit, one must identify
the cues that trigger the habit and replace the response with a different one in the presence of
those cues. Guthrie proposed three techniques for doing this:
 Threshold Method: Gradually introduce the stimulus at low intensities that do not elicit the
unwanted response, and slowly increase it while maintaining an alternative response.
Example: Replacing a horse’s bucking by starting with a light blanket and progressing to a
saddle while keeping the horse calm.

 Fatigue Method: Repeatedly elicit the undesirable response until the organism becomes too
fatigued to continue, eventually choosing a different response.
Example: Letting a horse buck until it becomes tired and no longer bucks with a saddle on.

 Incompatible Response Method: Present the stimulus for the undesired behavior along with
other stimuli that evoke a mutually exclusive response.
Example: A child’s fear of a panda bear is reduced by pairing the panda with the comforting
presence of the mother.

Punishment

Guthrie rejected the idea that punishment works because of the pain it causes. Instead, he argued
that punishment is effective only if it leads to a different behavior in the presence of the same
stimuli. It works only when it causes a response that is incompatible with the punished one. If the
punishment does not change the response or occurs after the critical stimuli are gone, it can be
ineffective or even reinforce the unwanted behavior. Guthrie and Powers (1950) also advised that a command should never be given unless it can be enforced, since a disobeyed command simply reinforces disobedience.

Summary of Guthrie's Views on Punishment

Guthrie’s views on punishment are consistent with his law of contiguity. For punishment to be
effective:

1. It must lead to an incompatible behavior.

2. It must occur in the presence of the stimuli that triggered the punished behavior.

3. If these conditions are not met, punishment fails or strengthens the habit.

4. The important factor is not the pain, but what the organism does as a result of punishment.

Drives

Guthrie viewed drives as sources of maintaining stimuli that keep the organism active until the goal
is reached. For instance, hunger or anxiety produces internal stimulation that persists until the drive
is satisfied. He used this to explain habits like alcohol use: if someone feels anxious and drinking
reduces that tension, the act of drinking becomes associated with tension relief, and is repeated in
similar future states.

Intentions

When a response is conditioned to a maintaining stimulus (like hunger, anxiety, or thirst), Guthrie
called it an intention. These are sequences of behavior that are repeated whenever the maintaining
stimuli return, because the internal drive lasts over time. The result is a behavior pattern that
appears goal-directed or intentional, although Guthrie explained it purely in terms of contiguity and
stimulus control.
Transfer of Training

Like Thorndike, Guthrie rejected the formal discipline theory, which claimed that learning one
subject (like Latin) improves general reasoning. He argued instead that transfer only occurs when
the situations share common stimuli. A response will transfer to a new context only if the new
context is similar to the original one. As he put it, students learn what they do, not what they read
or hear. To him, concepts like insight and understanding were unnecessary—learning was simply the
result of stimulus-response connections formed through contiguity.

EDWARD CHACE TOLMAN

Edward Chace Tolman (1886–1959), trained initially in electrochemistry at MIT, later shifted to
psychology, earning his Ph.D. from Harvard. He was dismissed from Northwestern University—likely
due to his pacifist stance during wartime—before beginning his influential career at UC Berkeley.
Tolman’s theory blended Gestalt psychology and behaviourism, favouring the study of goal-
directed, purposeful behavior rather than the elemental reflexes emphasized by "muscle-twitch" behaviourists like Pavlov and Watson. His work was deeply influenced by Gestalt theorist Kurt
Koffka.

Molar Behavior

Tolman emphasized the study of molar behavior—large, goal-directed behavior patterns—as opposed to analysing behavior in terms of isolated, reflexive components. While he did not reject the study of smaller units, he argued that the meaning and purpose of behavior are lost when reduced to simple elements. Molar behavior, to Tolman, had Gestalt-like wholeness and structure.

Purposive Behaviourism

His system, often called purposive behaviourism, focused on behavior that appears directed toward
achieving goals. Importantly, Tolman used the term “purpose” in a descriptive, not mentalistic way.
For example, a rat in a maze acts “as if” it is seeking food, and this behavior persists until the goal is
reached. Thus, even without invoking conscious purpose, the behavior appears goal-directed.

MAJOR THEORETICAL CONCEPTS

Tolman introduced the use of intervening variables into psychological research and Hull borrowed
the idea from Tolman. Both Hull and Tolman used intervening variables in a similar way in their
work. Hull, however, developed a much more comprehensive and elaborate theory of learning than
did Tolman. Tolman, however, taking his lead from the Gestalt theorists, said that learning is
essentially a process of discovering what leads to what in the environment. The organism, through
exploration, discovers that certain events lead to certain other events or that one sign leads to
another sign. For example, we learn that when it's 5:00 P.M. (S1), dinner (S2) will soon follow. For
that reason, Tolman was called an S-S rather than an S-R theorist. Learning, for Tolman, was an ongoing process that required no motivation.

According to Tolman, what is learned is "the lay of the land"; the organism learns what is there.
Gradually it develops a picture of the environment that can be used to get around in it. Tolman
called this picture a cognitive map. Once the organism has developed a cognitive map, it can reach a
particular goal from any number of directions. The organism will, however, choose the shortest
route or the one requiring the least amount of work. This is referred to as the principle of least
effort.
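The least-effort choice over a cognitive map can be sketched as a shortest-path search over a weighted graph. The toy example below is only an illustration of the principle, not a model from Tolman's experiments; the maze layout, effort costs, and function name are invented:

```python
import heapq

def least_effort_path(cognitive_map, start, goal):
    """Dijkstra's shortest path over a toy 'cognitive map':
    the organism picks the route with the least total effort.
    cognitive_map: {node: [(neighbor, effort), ...]}"""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        effort, node, path = heapq.heappop(frontier)
        if node == goal:
            return effort, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_effort in cognitive_map.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (effort + step_effort, neighbor, path + [neighbor]))
    return None  # goal unreachable from this map

# A hypothetical maze: two routes from the start box to the food box.
maze = {
    "start": [("left_alley", 2), ("right_alley", 5)],
    "left_alley": [("food", 2)],
    "right_alley": [("food", 1)],
}
print(least_effort_path(maze, "start", "food"))  # → (4, ['start', 'left_alley', 'food'])
```

The organism is assumed to know the whole map (the "lay of the land") and simply selects the cheapest route among those it has discovered.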

Confirmation vs. Reinforcement


Tolman argued that learning involves forming expectancies—beliefs about what leads to what.
These are tested through behavior. When expectancies are confirmed, they solidify into means-end
readiness (a belief system). In this view, reinforcement confirms rather than causes learning,
contrasting with other behaviourists who saw reinforcement as essential.

Vicarious Trial and Error

A notable concept Tolman introduced was vicarious trial and error. This describes the rat pausing at
a decision point and seemingly “deliberating” or scanning options. While not mentalistic in a strict
sense, this behavior suggested that the rat was processing potential outcomes before acting—
another sign of cognitive involvement in learning.

Learning vs. Performance

Tolman emphasized the crucial distinction between learning and performance. He believed that
organisms can acquire knowledge without immediately using it. This knowledge lies dormant until a
relevant motivation arises. Thus, learning can occur without reinforcement and only become visible
(i.e., performed) when the situation demands it.

Latent Learning

Latent learning is learning that is not immediately reflected in behavior. Tolman's experiments
showed that rats could learn the structure of a maze without being reinforced, and only display that
knowledge later when food (a motivator) was introduced. This challenged the behaviorist
assumption that reinforcement is required for learning.

Place Learning vs. Response Learning

In a classic study by Tolman, Ritchie, and Kalish (1946), rats were divided into two groups to test
whether they learned places or motor responses:

 Response learners were reinforced for making the same turn (left or right) regardless of
starting point.

 Place learners were reinforced for reaching a specific location, which required different
turns depending on the starting position.

Results supported place learning, showing that rats formed cognitive maps of spatial relationships,
again challenging the strict S-R view that behavior was just a chain of conditioned responses.

Reinforcement Expectancy

In reinforcement expectancy, a particular reinforcer becomes part of what is expected: we learn to
expect certain events to follow other events. The animal expects that if it goes to a certain place, it
will find a certain reinforcer. Tolman therefore predicted that if the reinforcer were changed,
behavior would be disrupted.

THE FORMAL ASPECTS OF TOLMAN'S THEORY

Environmental Variables

Unfortunately, the situation is not so simple as suggested above. Tolman thought of ΣOBO as an
independent variable since it directly influenced the dependent variable (i.e., the behavior ratio),
and it was under the control of the experimenter, who determined the number of training trials. In
addition to ΣOBO, a number of other independent variables could have an effect on performance.
Tolman suggested the following list:

M = maintenance schedule. This symbol refers to the animal's deprivation schedule, for example,
the number of hours since it has eaten.

G = appropriateness of goal object. The reinforcer must be related to the animal's current drive
state. For example, one does not reinforce a thirsty animal with food.

S = types and modes of stimuli provided. This symbol refers to the vividness of the cues or signals
available to the animal in the learning situation.

R = types of motor responses required in the learning situation, for example, running, sharp turns,
and so on.

P = pattern of succeeding and preceding maze units; the pattern of turns that needs to be made to
solve a maze as determined by the experimenter.

ΣOBO = the number of trials and their cumulative nature (see above).

It should be clear that Tolman was no longer talking only about the learning of T-mazes but the
learning of more complex mazes as well.

Individual Difference Variables

In addition to the independent variables described above, there are the variables that the individual
subjects bring into the experiment with them. The list of individual difference variables suggested by
Tolman is as follows (note that their initials create the acronym HATE, a somewhat strange word for
Tolman to use):

H = heredity

A = age

T = previous training

E = special endocrine, drug, or vitamin conditions.

Intervening Variables

Tolman defined a theory as a set of intervening variables. An intervening variable is a construct


created by the theorist to aid in explaining the relationship between an independent variable and a
dependent variable. If one says, for example, that hunger varies with hours of deprivation and in turn
influences learning, the concept of hunger is being used as an intervening variable. As Tolman said,
such a concept is used to fill in the blanks in a research program. Each of Tolman's intervening
variables was operationally defined. Maintenance schedule, for example, creates a demand, which in
turn is related to performance. Appropriateness of the goal object is related to appetite, which in
turn is related to performance.
NEURAL MECHANISMS OF LEARNING

Learning is a process by which we integrate new knowledge generated as a result of experiences.


The product of such experiences is converted into memories stored in our brain. There is basically no
learning without memories.

There are essentially two ways in which learning occurs,

1. Classical conditioning pertains to situations in which we tend to respond automatically, based on
the severity or repetition of a stimulus. The amygdala is involved in regulating many of our
autonomic, fight-or-flight responses.

2. In instrumental conditioning, more brain structures appear to take an active role in encoding and
reinforcing a learned behavior. For instance, when we learn to drive, the repetition or rehearsal of
that behavior involves the perceptual and motor systems as well as the frontal lobes. As the
behavior is memorized, it is managed by the basal ganglia. The process by which we learn new
behaviours is also largely influenced by specific neurotransmitters, especially dopamine, which is
known to reinforce or reward specific behaviours by making us feel good about them.

Memory is typically described as either short- or long-term. Short-term memory is also called working
memory and can last from several minutes to a few hours. The frontal lobes are known to play a very
important role in short-term memorization, while the hippocampus is critical in consolidating
information into long-term storage.

BRAIN REGIONS INVOLVED IN LEARNING AND MEMORY

Learning and memory involve a complex network of brain regions working together to encode,
consolidate, and retrieve information.

1. Hippocampus – crucial for the formation of new memories; plays a key role in declarative memory,
which involves the conscious recollection of facts and events

2. Frontal lobe – working memory

3. Amygdala – emotional memory

4. Basal ganglia – procedural memory

5. Cerebellum – motor learning

NEUROTRANSMITTERS AND NEUROMODULATORS

1. Glutamate – the primary excitatory neurotransmitter in the brain; involved in synaptic plasticity
and the formation of new memories

2. Acetylcholine – a neurotransmitter that modulates synaptic plasticity and is particularly important
for attention and memory consolidation

3. Dopamine, norepinephrine, and serotonin – neuromodulators that influence the strength of
synaptic connections and regulate cognitive processes, including learning, motivation, and memory

CELLULAR PROCESSES AND SYNAPTIC PLASTICITY

At the cellular level, learning and memory involve changes in the strength and connectivity of
synapses, known as synaptic plasticity. Long-term potentiation (LTP) and long-term depression (LTD)
are two key forms of synaptic plasticity that underlie the cellular basis of learning and memory.

NEUROPLASTICITY AND LIFELONG LEARNING

Neuroplasticity, the brain's ability to reorganize and modify its structure and function, plays a crucial
role in lifelong learning. Learning new information and acquiring new skills can induce structural
changes in the brain, including the formation of new neurons and synapses, as well as the
strengthening of existing connections. Neuroplasticity allows the brain to adapt to new experiences
and modify its neural networks to accommodate new knowledge and skills.

We also know that we do produce new neurons as a result of learning activities at any age, which is
why additional research in this area is so critical to the future of neuroscience.

CLINICAL IMPLICATIONS

Disorders such as Alzheimer's disease, amnesia, and other cognitive impairments are characterized
by deficits in learning and memory processes. By investigating the underlying neural mechanisms,
researchers can develop targeted interventions to enhance memory function and improve cognitive
outcomes in these populations

To understand the anatomical changes that are happening in the brain as a result of learning or the
creation of memories, we need to go back to the basis of brain functioning: synaptic connections.

SYNAPTIC PLASTICITY

There are 100 billion neurons in the human brain which need to communicate effectively with one
another. This is achieved through a meeting point called a synapse, which is essentially a gap
between two neurons where neurotransmitters are released.

Synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response
to increases or decreases in their activity. Formation of memories may require the formation of new
synapses, or even the birth of new neurons. According to Donald Hebb, when a presynaptic and a
postsynaptic neuron are repeatedly activated together, the synaptic connection between them will
become stronger and more stable (cells that fire together, wire together – the Hebbian synapse).
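Hebb's rule is easy to state as an update equation: the synaptic weight grows in proportion to the product of pre- and postsynaptic activity. Here is a minimal sketch; the learning rate, activity values, and function name are illustrative assumptions, not quantities from the source:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: the weight grows when the pre- and postsynaptic
    neurons are active together (cells that fire together, wire together)."""
    return w + lr * pre * post

w = 0.5
# Repeated co-activation strengthens the synapse.
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # → 1.5

# A silent presynaptic neuron leaves the weight unchanged.
print(hebbian_update(0.5, pre=0.0, post=1.0))  # → 0.5
```

Note that pure Hebbian growth has no weakening term; long-term depression (discussed below) supplies the complementary mechanism in real synapses.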

This variation in synaptic strength is one of the forms of synaptic plasticity and it primarily depends
on the levels of activity between two neurons (activity-dependent process). Plastic change often
results from the alteration of the number of neurotransmitter receptors located on a synapse. There
are several underlying mechanisms that cooperate to achieve synaptic plasticity, including changes
in the quantity of neurotransmitters released into a synapse and changes in how effectively cells
respond to those neurotransmitters. Synaptic plasticity in both excitatory and inhibitory synapses
has been found to be dependent upon postsynaptic calcium release.

DURATION OF SYNAPTIC PLASTICITY

Synaptic plasticity can be classified according to the duration of the change in synaptic strength
into:

1. Short-term synaptic plasticity – a change lasting from milliseconds to several minutes, with a
prompt return to normal. This type of synaptic plasticity is believed to play an important role in
transient changes in behavioral states or short-lasting forms of memory. It is mostly triggered by
short bursts of presynaptic activity. Short-term plasticity can either strengthen or weaken a synapse.

SYNAPTIC ENHANCEMENT

Short-term synaptic enhancement results from an increased probability of synaptic terminals
releasing transmitter in response to presynaptic action potentials. Synapses will strengthen for a
short time because of an increase in the amount of packaged transmitter released in response to
each action potential.

SYNAPTIC DEPRESSION

Synaptic fatigue or depression is usually attributed to the depletion of the readily releasable vesicles.
Depression can also arise from post-synaptic processes and from feedback activation of presynaptic
receptors.

2. Long-term synaptic plasticity – strength-modifying processes lasting for minutes, days, or even
years.

The major examples of this type of plasticity include long-term potentiation (LTP) and long-term
depression (LTD).

a) Long-Term Potentiation (LTP)

LTP refers to the long-lasting strengthening of synaptic connections following repeated and
synchronous activation. LTP ultimately allows the pre-synaptic neuron to evoke a greater post-
synaptic response when stimulated. It has been detected throughout the brain, including in the
cerebral cortex, amygdala and cerebellum. But it was first described in the hippocampus.

LTP has a few key features:

● It requires strong activity in both presynaptic and postsynaptic neurons i.e. neurons which
‘fire together wire together’.
● LTP is synapse-specific. It is restricted to the synapse between two activated neurons rather
than to all synapses on a particular neuron.
MECHANISMS OF LONG-TERM POTENTIATION

The process of LTP is largely governed by interactions between two glutamate receptors, the NMDA
and AMPA receptors. At rest, NMDA receptors are blocked and prevent calcium ions from entering
dendritic spines, a step that is necessary to strengthen synapses between neurons, while AMPA
receptors respond to released glutamate and depolarize the postsynaptic membrane, amplifying the
postsynaptic potential.
Induction of LTP requires activation of glutamate NMDA receptors (NMDARs). At the resting
membrane potential, NMDARs are blocked by magnesium ions and only become permeable to
sodium, potassium and calcium ions upon post-synaptic depolarization (mediated by glutamate
AMPA receptors (AMPARs)). An increase in the post-synaptic calcium concentration initiates the
molecular processes necessary for LTP.

The pivotal role of NMDARs activation in LTP reflects the activity-dependent nature of synaptic
plasticity. NMDARs can be only activated upon simultaneous pre-synaptic release of glutamate and
postsynaptic depolarization mediated by AMPARs. This can be only achieved at high levels of
synaptic activity.

Because NMDARs require two processes (the presynaptic release of glutamate and postsynaptic
depolarization) to co-occur for their activation, they are often referred to as ‘coincidence detectors’.
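The coincidence-detector idea amounts to a logical AND: calcium flows only when glutamate binding and depolarization happen together. A toy sketch of this gating (the function name and boolean simplification are my own, not from the source):

```python
def nmdar_open(glutamate_bound, depolarized):
    """NMDARs as coincidence detectors: calcium flows only when
    presynaptic glutamate release AND postsynaptic depolarization
    (which expels the Mg2+ block) occur together."""
    return glutamate_bound and depolarized

print(nmdar_open(True, False))  # glutamate alone, Mg2+ block stays → False
print(nmdar_open(False, True))  # depolarized but no glutamate → False
print(nmdar_open(True, True))   # coincidence → True (calcium influx)
```

This AND-gate behavior is what ties LTP induction to simultaneously high activity in both neurons, echoing the Hebbian "fire together, wire together" requirement.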

b) Long-Term Depression (LTD)

LTD involves the weakening of synaptic connections. Like LTP, LTD was first described in the
hippocampus. It is initiated by prolonged (10-15 minutes) low-frequency stimulation. This particular
pattern of stimulation depresses synaptic strength (reflected by a reduction in the size of excitatory
postsynaptic potentials (EPSPs)), an effect which may last for several hours.

Importantly, LTD can erase the increase in EPSP size due to LTP and vice versa. Hence, LTP and LTD
can reversibly affect the synaptic strength guided by the patterns of synaptic activity.

MECHANISMS OF LONG-TERM DEPRESSION

Molecularly, LTD is also initiated by a calcium influx via NMDARs. However, in contrast to the large
and fast increase in calcium concentration that drives LTP, LTD is promoted by a small and slow
rise in calcium concentration, which leads to shrinkage of dendritic spines and a decreased number
of synaptic receptors.
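The calcium-control idea above can be caricatured as a simple threshold rule: a modest calcium rise yields LTD, a large one yields LTP. The thresholds below are arbitrary illustrative values, not physiological measurements:

```python
def plasticity_outcome(calcium, ltd_threshold=0.3, ltp_threshold=0.7):
    """Toy rule: a large, fast calcium rise drives LTP; a small, slow
    rise drives LTD; below both thresholds, nothing changes.
    Threshold values are illustrative, not physiological."""
    if calcium >= ltp_threshold:
        return "LTP"        # strong NMDAR-mediated influx -> potentiation
    if calcium >= ltd_threshold:
        return "LTD"        # modest influx -> depression
    return "no change"      # sub-threshold activity leaves the synapse as-is

print(plasticity_outcome(0.9))  # → LTP
print(plasticity_outcome(0.5))  # → LTD
print(plasticity_outcome(0.1))  # → no change
```

The point of the sketch is that one signal (postsynaptic calcium) can select between opposite outcomes depending on its magnitude, which is how LTP and LTD can reversibly tune the same synapse.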

UNIT 3

MOTIVATION

Motivation refers to internal forces that activate, direct, and sustain behavior. It is the reason
behind why people initiate actions, how intensely they pursue goals, and how persistently they
continue striving toward those goals. It encompasses not just behavior but also the cognitive and
emotional aspects involved in goal pursuit (Petri, 1990).

Components of Motivation

Motivation is composed of:

1. Intensity – How much effort an individual applies.


2. Direction – The specific goal or activity the effort is directed toward.
3. Persistence – The duration of time the effort is sustained.

Measurement of Motivation

Since motivation is not directly observable, it is inferred from behavioral changes. It can be
considered:
 An intervening variable (explaining the link between stimulus and response).
 A performance variable (determining whether a learned behavior is executed or not).

Motivation as an Intervening and Performance Variable

 Intervening Variable: Example – a rat deprived of food (stimulus) runs faster in a maze
(response) due to hunger (intervening variable).
 Performance Variable: Motivation is necessary for performance; in its absence, behavior
may not occur despite prior learning.

Characteristics of Motivation

Motivation can be identified through:

 Activation: Presence or absence of observable behavior.


 Persistence: Continuing to act even with low chances of success.
 Intensity (Vigor): Degree of energy or force behind the behavior.
 Direction: Behavioral choice (e.g., preference for sugary solution over plain water).

Types of Motivation

 Intrinsic Motivation: Comes from within; activity is done for its own sake (e.g., enjoyment).
 Extrinsic Motivation: Arises from external rewards (e.g., money, praise).

The Motivation Cycle

A circular process that explains how needs lead to actions and outcomes:

 Need: A physiological or psychological lack (e.g., hunger, love).


 Drive: Internal state pushing behavior to reduce the need.
 Incentive: External object or goal that satisfies the drive.
 Reward: Satisfaction or pleasure upon achieving the goal.

Homeostasis

Introduced by Walter Cannon (1932), homeostasis refers to the body’s effort to maintain internal
balance. It is a self-regulating process that keeps physiological systems (e.g., blood sugar,
temperature) within a stable range despite external changes.
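Homeostatic regulation is a negative feedback loop: the corrective response opposes the deviation from a set point. A minimal sketch using body temperature (the gain, set point, and function name are illustrative assumptions, not physiological constants):

```python
def regulate(value, set_point=37.0, gain=0.5):
    """One step of negative feedback: the correction opposes the
    deviation from the set point (gain < 1 gives a gradual return)."""
    error = set_point - value
    return value + gain * error

temp = 39.0  # perturbed above the set point
for _ in range(10):
    temp = regulate(temp)
print(round(temp, 2))  # → 37.0 (the deviation halves each step)
```

With a gain of 0.5, each cycle removes half the remaining deviation, so the system settles back near the set point without overshooting, which is the self-regulating behavior Cannon described.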

Physiological Homeostasis

Involves:

 Systems like circulatory, lymphatic, endocrine, and nervous systems.


 Mechanisms like reflexes, hormones, and automatic feedback loops.
 Internal cues (e.g., thirst) serve as triggers for behavior.
 Margin of safety: Redundancy (e.g., two kidneys) to ensure survival.

Homeostasis and Behavior (Richter’s Experiments)

Psychologist C.P. Richter demonstrated that behavior compensates when physiological regulation is
impaired:

 Rats adjusted fluid intake when regulators of sodium or calcium metabolism were removed.
 With pituitary gland removed, rats increased water intake to avoid dehydration.
 Behavioral adaptations (e.g., building nests) compensated for hormonal imbalances due to
thyroid or pituitary disruptions.

Psychological Homeostasis

Coined by Fletcher, this refers to the idea that people maintain a mental balance or psychological
equilibrium:

 Includes habits, personality traits, ideologies, and coping mechanisms.


 According to Stagner, this is a dynamic process. Rather than returning to old states,
individuals establish new, more stable equilibriums.
 Perceptual constancies (e.g., seeing objects consistently) help reduce anxiety and maintain
mental stability.

Criticism of Homeostasis Theory

Despite its strengths, homeostasis theory has limitations:

 It may overgeneralize or fail to explain behaviours that don't serve balance (e.g., risk-taking,
suicide).
 Not all motivated behavior fits into the homeostatic model—some actions pursue novelty,
stimulation, or chaos instead of equilibrium.

THEORIES OF MOTIVATION

INSTINCT THEORY

Human Recognition and Innate Behavior (Eibl-Eibesfeldt, 1972)

In 1972, Eibl-Eibesfeldt found that when a human recognizes another person as familiar, a universal
behavioral pattern is triggered—smiling and briefly raising the eyebrows. This is an innate social
signal that communicates recognition and reduces potential threat, facilitating safe social
interaction. This behavior occurs across cultures, suggesting it is genetically programmed and not
learned.

Evolution and Natural Selection (Darwin)

Evolution refers to the gradual, progressive change of organisms over time. Charles Darwin
proposed the principle of natural selection, where useful traits (including behavioral traits) are
preserved because they help individuals survive and reproduce, while disadvantageous traits
gradually disappear from the species. Thus, evolution shapes both physical traits and behaviours
that improve an organism's ability to cope with its environment.

Genetic and Learned Behavior

While some behaviours are genetically determined, many others are learned through experience
and interaction with the environment. However, learned behaviours are not passed to future
generations, since genes are not altered by experience. What can be inherited is the capacity to
learn—suggesting that even learning ability must have a genetic foundation.

Definition of Instinct

Instincts are innate, goal-directed patterns of behavior that arise without learning or prior
experience. According to instinct theory, organisms are born with biologically hardwired
behaviours that help ensure survival. These instinctive behaviours are automatic responses to
specific stimuli and are not learned, but inherited through evolution.

Role of Instinct in Motivation

Instinct theory suggests that all behaviours are driven by instincts rooted in our biological makeup.
This means human actions, desires, and thoughts are naturally programmed. People act in certain
ways because their genes predispose them to do so. These inborn tendencies motivate behavior
from birth and shape how individuals interact with the world.

Significance of Instinct Theory

Although instinct theory has limitations, it remains important because it:

 Highlights the continuity between human and animal behavior,


 Provided a foundation for later developments in ethology (the scientific study of animal
behavior in natural settings),
 Reaffirmed the idea that biological predispositions influence behavior and motivation.

INSTINCT THEORIES

• William James

• William McDougall

WILLIAM JAMES

Instincts as Impulses Similar to Reflexes


William James believed that instincts are like reflexes, automatically triggered by sensory stimuli.
However, unlike reflexes, they are more complex and occur blindly—meaning the behavior happens
without awareness of its purpose or final goal. The first occurrence of instinctive behavior is blind,
but over time, memory interacts with instinct, allowing behavior to become more purposeful.

Modification of Instinct through Experience


James emphasized that experience modifies instincts. With repeated occurrences, instinctive
actions are no longer purely automatic but are influenced by learning. He explained this flexibility
through two principles:

 First, habit (learning) can inhibit instinct.


 Second, some instincts are transitory, meaning they only appear during specific life stages
or situations (e.g., a chick follows the first moving object early on, but later avoids
movement). He positioned instinct as an intermediate form between reflex and learning,
forming a base for habit development.

Classification of Human Instincts


James identified various human instincts, pairing them with emotional or behavioral tendencies,
such as:

 Rivalry – Curiosity
 Fear – Modesty
 Sympathy – Sociability, etc.
This classification attempted to cover both social and individual motivational factors in
human behavior.
Criticism of James
Critics argued that James failed to clearly distinguish between reflex, instinct, and learned
behavior, creating conceptual confusion.

WILLIAM McDOUGALL

Instinct as Foundation of All Behavior


McDougall proposed that all behaviours are instinctive and are composed of three interrelated
components:

 Cognitive: Recognizing the object that satisfies the instinct


 Affective: The emotion aroused by the object
 Conative: The action or striving toward the object
Together, these formed a complete behavioral cycle.

Purposeful Nature of Instinctive Behavior


McDougall emphasized the goal-directed (teleological) nature of instinctive behavior—behavior that
strives toward fulfilling needs. He believed instinctive actions could be altered through learning and
experience, highlighting flexibility.

Four Ways Instincts Are Modified


Instincts, though innate, are influenced by experience:

1. They can be triggered by ideas, not just objects.


2. The motor patterns (movements) through which instincts are expressed can be changed.
3. Multiple instincts can be activated at once, resulting in a blended behavior.
4. Instinctive behavior can become focused on specific objects or ideas, reducing its expression
in other contexts.

List of Seventeen Instincts


McDougall proposed seventeen basic instincts, including hunger, curiosity, sex, maternal care,
gregariousness, etc. He believed all human behavior could be traced back to these fundamental
instincts.

Anthropomorphic Method and Its Problems


McDougall often used anthropomorphism—inferring animal behavior by asking how he would feel
in a similar situation. While intuitive, this method is considered unscientific, as it projects human
emotions onto animals without objective evidence.

CRITICISM OF INSTINCT THEORIES

1. Kuo (1921)
Kuo argued that:

 There is no consensus on what counts as an instinct. Behaviours considered "instinctive" are


often learned through reinforcement. Genes do not determine behavior; instead, behavior
is shaped by environmental stimuli and learning experiences.

2. Tolman (1923)
Tolman criticized the instinct concept for being descriptive, not explanatory. Saying someone is
curious doesn’t explain their behavior—it simply labels it. He believed:

 Instincts like "playfulness" or "curiosity" do not identify causes of behavior. The line
between instinctive and learned behavior is blurry. Concepts suggesting all knowledge is
pre-existing (as in Plato's philosophy) are unscientific and vague. The theory confuses
habits and instincts, which should be kept distinct.

ETHOLOGICAL THEORY OF MOTIVATION

ETHOLOGY

Ethology is a biological approach to the study of behavior, particularly focusing on animals. It


emphasizes understanding behavior in terms of its evolutionary origin, development, and function.
Ethologists advocate for careful, systematic observation of the organism's full behavioral repertoire
before making interpretations. A catalogue of all observable behaviours of a species is called an
ethogram. Ethology, rooted in Darwin’s theory of evolution, is especially concerned with instinctive,
species-specific behavior.

Types of Behavior: Consummatory and Appetitive

William Craig distinguished two major types of behavior:

1. Consummatory Behavior

 These are well-coordinated, innate, and stereotyped responses that occur


in reaction to specific stimuli.
 They are final or goal-directed actions (e.g., chewing, swallowing food) that
usually terminate a behavioral sequence.

2. Appetitive Behavior

 These are searching, flexible, and modifiable responses initiated before the
consummatory act.
 They are influenced by learning and help the organism find the appropriate
stimulus or goal (e.g., searching for food).

Action Specific Energy and Innate Releasing Mechanism

Each instinctive behavior has a source of internal energy called Action Specific Energy (ASE). The
behavior is held back by an internal mechanism called the Innate Releasing Mechanism (IRM),
which is like a lock. A Key Stimulus (or Sign Stimulus) serves as the key that "unlocks" the behavior.
These stimuli can be:

 Environmental (e.g., colour, shape)


 Social releasers—specific behaviours from other members of the same species that act as
signals (e.g., mating postures).

Key Stimuli and Supernormal Stimuli

Key stimuli are often simple configurations that effectively release a behavior.

 Example: Male three-spined sticklebacks show aggression to other males due to their red
bellies, which act as a key stimulus.
Sometimes, artificial stimuli can trigger a stronger response than the natural one. These are
called Supernormal Stimuli or Super optimal Stimuli.
 Example 1: Ringed plovers prefer eggs with exaggerated markings for incubation.
 Example 2: Female sticklebacks prefer larger-than-normal dummy males due to their
perceived better reproductive fitness.
Fixed Action Pattern (FAP)

Behaviours released by key stimuli are termed Fixed Action Patterns. These are instinctive, species-
specific motor acts that are relatively invariant and independent of learning. They have several
distinct properties described by Moltz:

1. Stereotyped

o FAPs are mostly invariable in form, though minor variations may exist.

2. Independent of Immediate External Control

o Once triggered, an FAP continues to completion regardless of changes in environment.

o Example: A graylag goose continues the egg-retrieval behavior even if the egg is
removed mid-action.

3. Spontaneous

o FAPs may occur without an external stimulus when internal energy builds up over
time (called vacuum activity).

o The longer the behavior is suppressed, the more likely it will occur spontaneously.

4. Independent of Learning

o FAPs are innate and not modified by learning. They are hardwired and uniform
across the species.

Taxis and FAP Distinction

While FAPs are fixed and pre-programmed, Taxis are unlearned but directed movements in
response to stimuli.

 Example: In the goose egg-retrieval example, the side movements of the bill to keep the egg
aligned are taxis, as they are responsive to the egg’s position.

ETHOLOGY - THEORETICAL MODELS

Konrad Lorenz’s Hydraulic Model

Tinbergen’s Hierarchical Model

Konrad Lorenz’s Hydraulic Model


Konrad Lorenz proposed the hydraulic model to explain the motivation behind instinctive
behaviours. He visualized behavior as being driven by a buildup of internal energy, much like water
accumulating in a reservoir. This energy, referred to as action-specific energy (ASE), motivates an
organism to act. The behavior is triggered when a key stimulus activates an innate releasing
mechanism (IRM), releasing the stored energy as a fixed action pattern (FAP). If energy builds up
without appropriate external stimuli, it can be released spontaneously, a phenomenon known as
vacuum activity.
Vacuum Activity and ASE Dynamics
As ASE accumulates, the threshold required to trigger a behavior lowers, making it more likely to
occur. If the reservoir fills completely, the behavior may erupt spontaneously, without any external
cue. The intensity of the behavior is influenced by both the pressure from the accumulated ASE and
how appropriate the triggering stimulus is. Small amounts of energy produce partial responses,
while larger buildups lead to full-blown FAPs. Some behaviours with smaller reservoirs (like fur
licking) fill up quickly and are highly probable.
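Lorenz's reservoir metaphor lends itself to a toy simulation: action-specific energy accumulates each timestep, the release threshold drops as energy builds, and a full reservoir discharges spontaneously as vacuum activity. All constants and names below are invented for illustration, not taken from Lorenz's writings:

```python
def step(ase, stimulus, fill_rate=1.0, capacity=10.0, base_threshold=5.0):
    """One timestep of a toy hydraulic model: action-specific energy (ASE)
    accumulates; the release threshold drops as ASE builds; a full
    reservoir discharges spontaneously (vacuum activity)."""
    ase = min(ase + fill_rate, capacity)
    if ase >= capacity:
        return 0.0, "vacuum activity"       # spontaneous FAP, no stimulus needed
    threshold = base_threshold - 0.4 * ase  # more ASE -> lower threshold
    if stimulus > threshold:
        return 0.0, "FAP released"          # key stimulus unlocks the IRM
    return ase, "no response"

def run(stimulus, steps=12):
    ase, log = 0.0, []
    for _ in range(steps):
        ase, event = step(ase, stimulus)
        log.append(event)
    return log

print(run(stimulus=1.0).index("vacuum activity"))  # weak cue: energy erupts on its own (step 9)
print(run(stimulus=2.0).index("FAP released"))     # stronger cue: released earlier (step 7)
```

The two runs show the model's core predictions: a weak stimulus never opens the valve, so the full reservoir eventually empties on its own, while a stronger stimulus releases the fixed action pattern as soon as the falling threshold dips below it.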

Tinbergen’s Hierarchical Model


Nikolaas Tinbergen expanded on Lorenz’s ideas by proposing a hierarchical organization of
instinctive behaviours. He suggested that behaviours are controlled by a series of neural centres
arranged hierarchically. Higher centres control broad behavioral categories (e.g., reproduction),
while lower centres govern more specific acts (e.g., nest building). Fixed action patterns are released
by lower centres when unlocked by key stimuli, following inhibition from higher centres. This layered
system explains complex sequences of behavior and how they are triggered and inhibited.

Factors Influencing the Hierarchy


Tinbergen identified three factors that influence behavioral centres: hormonal changes, internal
sensory feedback, and nervous system interactions. Hormones can make behavior centres more or
less sensitive to stimuli (e.g., sexual behavior won’t occur without androgens). Internal sensory
information provides feedback about the organism’s state, such as whether to attack or flee. The
nervous system explains spontaneous behaviours and displacement activities when two
incompatible motives compete, leading to the activation of unrelated behaviours (e.g., nest building
during a conflict).

Displacement and Appetitive Behavior


Displacement behaviours occur when energy from blocked motivations spills into another behavioral
system, resulting in unrelated FAPs. For instance, male sticklebacks may build nests when caught
between fleeing and attacking. When a behavior centre is activated but the key stimulus is absent,
the organism may engage in appetitive behavior—actively searching for the right stimulus based on
past learning. Once the key stimulus is found, the blocked centre is activated, leading to
consummatory behavior which satisfies the motivation.

Intention Movements and Social Releasers


Lorenz introduced the concept of intention movements—low-intensity or incomplete behaviours
indicating the buildup of energy. Over time, these can become social releasers through ritualization,
gaining communicative value. For example, threat gestures evolve from partial attack movements.
Human behaviours also include such signals; posture shifts in conversation can signal the desire to
leave, as shown in Lockard’s study. While species relationships are largely instinctive, individual
recognition and dominance hierarchies may be learned.

Conflict Behavior and Its Types


When multiple sign stimuli are present, motivational conflict arises, leading to behaviours classified
into four types: successive ambivalent (alternating between two motives), simultaneous ambivalent
(expressing both motives at once), redirected behavior (inappropriate target), and ethological
displacement (expression through unrelated behavior). For example, a male stickleback torn
between attack and escape may show displacement by building a nest.

Reaction Chains


Behavior often unfolds in sequences called reaction chains, where each FAP sets the stage for the
next. This chaining is stimulus-dependent but may include gaps filled by learned behaviours. Lorenz
referred to such blending of instinctive and learned responses as instinct-conditioning intercalation.
An example is the courtship behavior of sticklebacks, where visual stimuli lead to a chain of ritualized
actions culminating in fertilization.

Imprinting: The Blend of Learning and Instinct


Imprinting is a special case where instinct and learning intertwine. It’s a rapid, irreversible
attachment process occurring during a sensitive period (typically within hours of birth). While the
tendency to imprint is innate, the object of attachment is learned. Ramsay and Hess demonstrated
that imprinting is most effective during a specific window. Nidicolous birds (which stay in nests)
show more permanent imprinting than nidifugous birds (which leave nests early).

Imprinting Characteristics and Limitations


Imprinting is independent of reinforcement, occurring without rewards. However, not all objects are
equally effective as imprinting targets—colour and maternal calls are more influential than shape.
The process leads to social bonding, intraspecific recognition, and sexual preferences. For instance,
Lorenz's imprinted geese treated him as a parent figure and later displayed mating behavior toward
him.

Criticisms of Classical Ethology


Critics argue that the line between learned and instinctive behavior is often blurry, and the energy
concept lacks empirical support. Detractors suggest that behavior follows a hierarchy of likelihood
rather than energy discharge. Also, both Lorenz and Tinbergen’s models ignored feedback
mechanisms; they assumed that once behavior is triggered, it runs to completion regardless of
environmental change.

New Directions in Ethological Theory


Later ethologists revised classical ideas. Konishi proposed that stimulus processing, motivation, and
behavior release occur centrally in the brain, integrating feedback and allowing correction. Mayr
introduced the concepts of open (modifiable) and closed (fixed) behavioral programs. Prepared
behaviours (like fear of snakes) are easily learned; contra-prepared ones (like fear of flowers) resist
learning. Longer-lived species have more open programs, while short-lived species rely more on
closed programs.

Modifications and Feedback in Behavior


Baerends added the idea of negative feedback, showing how ongoing behavior adjusts in response
to changes. Hailman showed how innate behaviours are refined by learning, such as young birds
gradually learning to recognize their parent’s beak. These revisions acknowledge that behavior
results from the dynamic interplay between instinct and experience.

Physiological Basis of Innate Behavior


Earlier models relied heavily on hypothetical nervous mechanisms. Willows and Hoyle offered
physiological evidence using the sea slug’s escape behavior. This reflex is controlled by a small neural
circuit and triggered by chemical cues from predators. It demonstrates how even simple organisms
can have complex, innately controlled behaviours guided by sensory input.

Human Ethology: Universality of Behavior


Eibl-Eibesfeldt explored innate human behaviours across cultures, including facial expressions,
postures, and gestures. Their universality, even among children born blind or deaf and children with
developmental disabilities, supports a biological origin. For example, the eyebrow flash serves as a
greeting signal. Joseph Hager and Paul Ekman
found that emotional expressions like happiness and surprise are easily recognized from a distance,
suggesting facial expressions function as long-range communicative signals.

Shyness and Biological Predispositions


Some human traits, such as shyness, have a genetic basis. Kagan’s research found that children who
are easily aroused (i.e., have low arousal thresholds) are more socially inhibited, especially if exposed to early stress. Suomi’s
studies showed that supportive environments can buffer stress, highlighting interaction between
genes and experience in shaping behavior.

Other Innate Human Behaviours


Certain stimuli, like chubby cheeks, elicit nurturing behavior. Keating’s study on neoteny showed
that juvenile features in adult faces increase helping behavior. Physical differences in juveniles may
also reduce adult aggression and invite protection. Female flirting includes ritualized behaviours
resembling flight—eye contact, smiling, looking away—designed to invite male pursuit. Kissing and
baby talk in adult romantic relationships reflect instinctive caregiving behaviours.

Innate Responses and Communication


Fighting postures in humans, such as stamping or puffing up, mimic animal displays. Eye contact
plays diverse roles, from threat to communication. Gaze aversion in autistic or shy individuals may
stem from innate conflict between approach and avoidance. Some gestures, like staring or standing
tall, function as innate social releasers.

Speech and Vocal Behavior


Ethologists have also studied the biological roots of language. Like bird songs, human speech has a
critical learning period and is processed in a specific brain hemisphere. Children selectively imitate
species-specific sounds. Language serves dual roles: symbolic representation and emotional
communication (phonetic releasers), allowing us to interpret deeper meanings.

Sex and Aggression as Innate Drives


Both sex and aggression are innate and organized in reaction chains involving ritualized behaviours.
Interspecific aggression includes predation, mobbing, and critical reaction. Intraspecific aggression,
while less violent, serves adaptive functions—spacing, dominance, and protection of offspring.
Appeasement gestures prevent serious injury. Dominance influences reproductive success, as seen
in rhesus monkeys, where high-ranking females produce more daughters.

Complex Relationship Between Sex and Aggression


Sex and aggression are often inversely related—frequent sexual activity can reduce aggression.
However, in humans, the relationship is complex. While some studies link pornography to increased
aggression, others suggest it may reduce it, especially when individuals are emotionally satisfied.
Thus, human sexual and aggressive behaviours reflect a blend of instinctual patterns and learned
influences.

HEDONISTIC THEORY OF MOTIVATION

HEDONISM

Hedonism is the view that behavior is driven by the pursuit of pleasure and the avoidance of pain.
According to this framework, stimuli acquire motivational significance through their associations
with pleasurable or painful experiences. Philosophers like Hobbes argued that all actions are
motivated by this hedonistic principle. Spencer added an evolutionary perspective, suggesting that
pleasurable behaviours aid in survival, and therefore both pain-reducing and pleasure-seeking
behaviours became adaptive over time. This perspective aligns with Thorndike’s Law of Effect, which
states that behaviours followed by satisfying outcomes are likely to be repeated.

Pleasure, Pain, and Motivation


Pleasant and unpleasant feelings play a critical role in motivation. The nervous system responds to
emotional and sensory experiences in ways that encourage approach or avoidance behaviours,
based on whether those experiences are perceived as pleasurable or painful.

Emotion and Feeling – Brain Basis


Emotion and feeling are distinct but interrelated processes in the brain. Emotions are rapid,
automatic responses involving lower brain regions such as the amygdala, the ventromedial
prefrontal cortex, and subcortical areas. Feelings, in contrast, arise from higher-level processing in
neocortical regions. Feelings represent a conscious interpretation or mental portrayal of bodily
responses triggered by emotions. For example, encountering a snake might elicit an emotional
reaction (fear) via the amygdala, while the resulting feeling could be interpreted as horror.

Role of Amygdala and Memory


The amygdala plays a central role in emotional arousal and also regulates the release of
neurotransmitters essential for memory consolidation. This explains why emotionally charged events
are often more vividly remembered.

Feelings and Cognitive Interpretation


Feelings are shaped not only by biological responses but also by personal experiences, beliefs, and
memories. Thus, they represent our subjective reaction to emotions rather than the raw emotion
itself.

Troland’s Hedonic Categorization (1932)


Troland proposed that the nervous system is especially attuned to pleasurable and aversive events.
He identified three categories of stimulation based on their emotional valence:

 Beneception: Elicited by pleasurable stimuli (e.g., sweet tastes, erotic stimuli, pleasant
smells).

 Nociception: Triggered by unpleasant or painful stimuli (e.g., bitter tastes, sharp pain,
repugnant odors).

 Neutroception: Stimuli that do not evoke strong positive or negative emotions (e.g., visual
or auditory input under normal conditions).

Sensory Qualities and Hedonic Value


The hedonic value of an object is closely tied to its sensory properties and how those properties
interact with the nervous system. Whether a stimulus is perceived as pleasant or unpleasant
depends on how specific sense organs react to it. For example, bright pressure on the skin might be
experienced as pleasant, while dull pressure might be unpleasant.

Beebe-Center’s Hedonic Continuum (1932)


Beebe-Center conceptualized pleasure and pain as existing on a single hedonic continuum, with
pleasant and unpleasant sensations at opposite ends and a neutral zone in the middle. The quality of
a sensation—whether it's felt as pleasurable or aversive—depends on which sense organ is
stimulated and how it reacts.

THEORIES OF HEDONISM

 PAUL THOMAS YOUNG


 DAVID McCLELLAND

PAUL THOMAS YOUNG’S THEORY OF MOTIVATION

Hedonic Motivation and Food Preferences


Paul Thomas Young’s contributions to motivation theory stemmed primarily from experiments on
food preferences in rats, which laid the foundation for a hedonic theory of motivation. His central
thesis posits that positive and negative affective states shape motivational behavior, beyond mere
biological needs. He identified eight observations demonstrating that food preferences are not
always aligned with bodily requirements—for example, rats may overeat harmful substances, prefer
non-nutritive sweeteners, or reject nutritious food with foul odor.

Determinants of Food Choice


Young concluded that food selection is determined by three main factors:

1. Biochemical need state (bodily requirements),

2. Habit (past experiences with food), and

3. Palatability (intrinsic qualities like taste, texture, and temperature).


Of these, palatability is the most central to hedonic motivation, highlighting the role of
pleasure-seeking in behavior.

Experiment with Sugar vs. Casein


In a key experiment, one group of rats received sugar as a reward and another received casein. Over
time, the sugar group displayed more direct and rapid approach behavior, while the casein group
hesitated and explored. This illustrated that behavior depends on the kind of reward, showing that
affective value, not just nutritional content, directs motivation.

Specific Hunger and Proprioceptive Tension


Young introduced the idea of specific hunger, where animals seek nutrients based on deficiencies.
He also described proprioceptive tension—a persistent, internal drive that comes from the body’s
muscles, joints, and tendons during food-seeking behavior. This tension is greater when seeking
highly palatable rewards, like sugar, and reflects a learned organization of neural activity based on
past affective experiences.

Learning and Neural Trace


Young emphasized exercise or practice (rather than affect itself) as the core of learning. When
behavior leads to enjoyment or relief from distress, a neural trace is left. These traces are
strengthened through repetition, and affective intensity combined with frequency of behavior
determines food preferences and motivational strength.

DAVID C. MCCLELLAND’S THEORY OF MOTIVATION

Learned Nature of Motives


Following Henry Murray, McClelland proposed that all motives are learned. He defined a motive as
a strong affective association formed through past experiences, which includes an anticipatory goal
reaction to certain cues that were associated with pleasure or pain. These cues can be external
stimuli or internal states, such as thoughts or physiological sensations.

Reintegrated Affect and Approach-Avoidance


Motivated behavior, according to McClelland, involves a reintegrated affective state, where
previously neutral situations become charged with positive or negative feelings due to their
association with past emotional experiences. Motivation lies on a continuum of approach or
avoidance, depending on whether the cues elicit pleasant or unpleasant affect.

Emotion, Expectation, and Discrepancy


McClelland emphasized that affect arises from discrepancies between expectation and actual events.
When stimuli match expectations (adaptation level), there's no emotional response. However, small
discrepancies yield positive affect (e.g., pleasant surprise), while large discrepancies result in
negative affect (e.g., shock or disappointment). For example, a salt solution may be neutral at one
concentration, pleasant at a slightly higher one, and unpleasant at an even higher level.
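McClelland’s discrepancy hypothesis can be sketched as a simple decision rule. The code below is only an illustration: the 0–1 scale and the `small`/`large` thresholds are hypothetical values chosen to mirror the salt-solution example, not parameters from McClelland’s own work.

```python
# McClelland: affect depends on how far a stimulus departs from the
# adaptation level -- not at all, a little, or a lot.

def affect(stimulus: float, adaptation_level: float,
           small: float = 0.2, large: float = 0.6) -> str:
    discrepancy = abs(stimulus - adaptation_level)
    if discrepancy < small:
        return "neutral"    # matches expectation: no emotional response
    if discrepancy < large:
        return "positive"   # small discrepancy: pleasant surprise
    return "negative"       # large discrepancy: shock or disappointment

# Salt solution: neutral at the adapted concentration, pleasant slightly
# above it, unpleasant well above it.
assert affect(1.0, adaptation_level=1.0) == "neutral"
assert affect(1.3, adaptation_level=1.0) == "positive"
assert affect(1.8, adaptation_level=1.0) == "negative"
```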
Stability of Adaptation Level


To produce a hedonic (pleasure/pain) response, a sensory discrepancy must be both sufficiently
large and sustained over time. Discrepancies are judged against a person’s adaptation level,
which is itself shaped by past experiences and somatic conditions (such as hunger). Too much
certainty produces boredom, whereas moderate uncertainty followed by confirmation of
expectations is experienced as pleasurable.

Frustration and Conflict


Frustration, in McClelland’s theory, arises from a conflict between equally strong response
tendencies, where acting on one invalidates the expectation of the other. However, if one response
is clearly stronger, and confirmed, it leads to pleasure, while the weaker one is not sufficiently
motivating to cause frustration.

Achievement Motivation and Complexity


Achievement behavior reflects a desire to work with increasingly complex tasks, developing mastery
and expectation over time. As expectation becomes certain, the novelty diminishes, potentially
leading to boredom, unless new, uncertain challenges are introduced.

Learning of Motives and Strength Dimensions


McClelland agreed that motives develop through the law of learning: repeated association of cues
with emotional states. He proposed three dimensions of motive strength:

1. Dependability – probability of motive activation with specific cues,

2. Intensity – strength of affective response (measured by speed, rate, latency of action),

3. Extensity – range of cues that can activate the motive or its resistance to extinction.

Learning of Biological Drives


Biological drives (e.g., hunger) are also learned; the motivation to eat arises only when hunger cues
become associated with positive affect, such as pleasant taste or relief from discomfort. Thus, no
clear distinction exists between primary and secondary motives.

Cultural Influence and Universality


Some motives, like eating at fixed times, may arise from cultural practices, not biological needs.
These culturally learned motives can become universal within a group, even without biological
triggers. Thus, McClelland emphasized that motives are shaped through interaction with the social
and environmental context, as well as through reinforcement of affective experiences.

ACTIVATION THEORY OF MOTIVATION

Activation theory emerged from two primary findings:

1. Behavioral efficiency is affected by the mobilization of energy and muscular activity.

2. Neurophysiological evidence indicates that cortical function is influenced by arousal systems
in the brain stem.

Emotion and motivation were interpreted in this context as mechanisms to account for variations in
arousal or behavioral vigor, with several physiological indicators (like skin conductance and EEG
patterns) overlapping with traditional measures of emotion, stress, and conflict. Thus, activation
theory bridges physiology and motivational-emotional processes, focusing on arousal level as a key
determinant of behavior.

HAROLD SCHLOSBERG

Activation Continuum and Emotion


Schlosberg proposed an activation continuum, ranging from sleep (lowest activation) to intense
emotions like blind rage (highest activation). Emotions of varying intensity occupy different positions
along this continuum. While emotion and activation are not identical, Schlosberg viewed emotion as
a designation of arousal level. The key challenge, then, is to quantify arousal and link it to
behavioral performance.

Measuring Activation: Skin Conductance and Behavior


Schlosberg emphasized skin resistance (or conductance) as a physiological index of arousal. He
focused on slow drifts in baseline conductance, rather than transient changes, to represent arousal
more accurately. Increased skin conductance correlates with higher arousal. For example, research
(e.g., Freeman, 1940) showed an inverted U-shaped relationship between conductance and
performance: moderate arousal produced optimal reaction time and hand steadiness, while both
low and high arousal were associated with inefficiency. This supports the idea that optimal
behavioral efficiency occurs at moderate levels of activation.
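The inverted-U relationship can be illustrated with a toy quadratic efficiency function. This is not Freeman’s actual data; the curve shape, the 0–1 arousal scale, and the optimum are assumptions chosen purely to make the pattern concrete.

```python
# Toy inverted-U: performance peaks at a moderate arousal level and
# declines toward both the low and high extremes.

def performance(arousal: float, optimum: float = 0.5) -> float:
    """Hypothetical quadratic efficiency curve on a 0-1 arousal scale."""
    return max(0.0, 1.0 - 4.0 * (arousal - optimum) ** 2)

# Moderate arousal beats both under-aroused and over-aroused states.
assert performance(0.5) > performance(0.1)
assert performance(0.5) > performance(0.9)
```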

Limitations and Extensions


Schlosberg acknowledged that activation alone does not explain all aspects of emotion. He explored
additional dimensions of emotional expression, such as those found in facial expressions, to capture
more nuanced emotional states.

DONALD B. LINDSLEY

Origin of the Term "Activation Theory of Emotion"


Lindsley formally coined the term "activation theory of emotion" and linked it directly to
electrophysiological patterns, especially changes in the electroencephalogram (EEG). He observed
that emotions produce a characteristic “activation” pattern in the EEG—marked by the suppression
of alpha rhythms and the emergence of low amplitude, high frequency activity. This same EEG
pattern is seen during sensory stimulation and cognitive activity, suggesting a common arousal
mechanism.

Reticular Activating System (RAS)


The core of Lindsley’s theory is the Reticular Activating System (RAS), a dense network of neurons
extending from the medulla to the thalamus, passing through the pons, midbrain, and
hypothalamic regions.

 Ascending fibres of the RAS project to the cortex, facilitating wakefulness, alertness, and
attention.

 Descending fibres influence motor activity and autonomic responses, providing a regulatory
mechanism beyond the traditional pyramidal/extrapyramidal systems.

Function of RAS


The RAS acts as a central control hub that modulates cortical arousal and behavioral readiness. It
facilitates sensory filtering, motor coordination, and autonomic balance, making it essential for
adaptive, goal-directed behavior. For Lindsley, while emotion might not be the central focus, the
arousal states underlying emotion are deeply tied to RAS activity.

TOLMAN’S PURPOSIVE BEHAVIORISM

Tolman emphasized that behavior must be studied holistically rather than being broken down into
stimulus–response chains. Unlike Hull's reductionist approach, Tolman believed that behavior is
molar—meaningful, organized, and goal-directed. He argued that behavior is not just a sequence of
muscular responses, but involves purpose and cognition.
Characteristics of Molar Behavior

Tolman described three key characteristics of molar behavior:

1. Goal-directedness – behavior aims toward achieving specific outcomes (e.g., a hungry rat
seeks food).
2. Persistence – behavior continues until the goal is achieved.
3. Selectivity – the most efficient path to the goal is typically chosen.

Behavior is guided by expectancies and cognitive maps—internal representations of the
environment rather than simple response chains.

Cognitive Maps and Expectancies

Tolman rejected the idea that learning is a matter of chaining S–R links. Instead, he proposed that
organisms form cognitive maps—mental representations of spatial relationships—and develop
expectancies about how behaviours will lead to goals. Learning thus involves understanding the
location of goals and the best paths to reach them, rather than memorizing specific responses.

LEWIN’S FORCE FIELD THEORY

Kurt Lewin also advocated a molar approach and introduced field theory, which proposes that
behavior results from the total forces (field) acting upon an individual at a given time. This dynamic
system reflects a balance between internal needs and environmental influences, similar to how a
kite’s flight is determined by various interacting forces.

Lewin’s Equation: B = f (P, E)

Lewin expressed behavior (B) as a function of both the person (P) and the psychological
environment (E). He highlighted that internal states (like needs and tensions) and psychological facts
(such as knowledge or perception of the environment) jointly influence behavior.

The Person: Inner Tensions and Needs

Lewin described the person as composed of regions. The inner-personal region contains various
needs, both physiological (e.g., thirst) and psychological or quasi-needs (e.g., finishing a task). These
needs create tensions, which motivate behavior to restore balance.

Tension and Locomotion

Tension is the internal motivational force. It can be reduced either by spreading throughout the
inner regions or by being discharged through locomotion—goal-directed behavior that relieves
tension by interacting with the environment (e.g., drinking water to reduce thirst).

Psychological Environment and Valence

Lewin stressed that behavior is influenced by the psychological environment, made up of
psychological facts (knowledge, beliefs). Needs give positive or negative valence to regions in the
environment—making some areas more attractive (e.g., fridge during hunger). Behavior moves
toward positive valence.

HULL’S DRIVE THEORY


Hull proposed a mechanistic model where behavior arises from a need to restore homeostasis. His
Drive Reduction Theory posits that organic needs create drives, which activate behavior.
Reinforcement occurs when behavior reduces the drive (e.g., eating reduces hunger). Hull adopted
an S-R reinforcement model influenced by Thorndike.

Hull’s sEr = sHr × D Formula

Hull formalized behavior strength as:


sEr = sHr × D

 sEr: Excitatory potential (likelihood of behavior)

 sHr: Habit strength (learning)

 D: Drive (motivation)

If either learning or drive is absent, no behavior occurs. Strong habits paired with high drive
yield strong behavior (e.g., a hungry, trained rat pressing the bar).
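The multiplicative form of Hull’s rule can be made concrete with a short sketch. Python is used purely for illustration, and the numeric values are hypothetical 0–1 scales, not measurements from Hull’s experiments.

```python
# Hull's rule: excitatory potential is the PRODUCT of habit strength
# (learning) and drive (motivation), so either factor at zero means
# no behavior at all.

def excitatory_potential(habit_strength: float, drive: float) -> float:
    """sEr = sHr x D (illustrative 0-1 scales)."""
    return habit_strength * drive

# A well-trained but sated rat (drive = 0) does not press the bar,
# and neither does a hungry but untrained rat (habit = 0).
assert excitatory_potential(0.8, 0.0) == 0.0
assert excitatory_potential(0.0, 0.9) == 0.0

# Strong habit combined with high drive yields stronger behavior
# than the same habit under low drive.
assert excitatory_potential(0.8, 0.9) > excitatory_potential(0.8, 0.3)
```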

Role of Reinforcement and Learning

Hull claimed reinforcement strengthens the S–R connection only if it reduces a drive. Without a
drive, no reinforcement occurs. Learning, then, depends on drive-induced behavior followed by
drive reduction—making learning impossible without motivation.

Generalized Drive Concept

Hull proposed a general pool of energy called generalized drive, which can be activated by multiple
needs (hunger, thirst, sex). Drive is nonspecific—it energizes behavior but doesn’t direct it. Drive
stimuli (Sd)—internal sensations like stomach contractions—provide directionality by connecting to
behaviours previously reinforced.

Drive Stimuli and Behavior Direction

Drive stimuli (e.g., hunger pangs) function like external cues. They become associated with specific
actions (e.g., opening the fridge) that reduce drive. Over time, the organism learns to respond to
different Sd’s (hunger vs. thirst) based on past reinforcement—demonstrating discrimination and
learned behavior patterns.

Drive’s Role in Learning and Behavior

Hull emphasized three roles of drive:

1. Prerequisite for reinforcement (learning won’t occur without drive).

2. Energizer of behavior (no action without motivation).

3. Source of discriminative stimuli (drives create sensations that guide behavior).

Primary Drives

Primary drives are innate and tied to biological needs (e.g., hunger, thirst, sex). Drive strength
depends on deprivation duration. Drive and habit strength together determine reaction potential,
the likelihood of performing a behavior.

Experiment 1: Williams and Perin

Rats were trained to press a bar for food under 23-hour deprivation (high drive). Later, in extinction trials
(no reinforcement), rats with more training pressed more. Rats with higher drive during extinction
(Williams: 22 hrs) responded more than those with lower drive (Perin: 3 hrs), supporting Hull’s sEr =
sHr × D model.

Interpretation of Experiment 1

Findings showed:

 More training = stronger habit.

 Higher drive = more responses even under extinction.

 Drive multiplies learned behavior.

 Williams' results could be predicted by scaling Perin’s curve based on drive level, illustrating
the multiplicative relationship in Hull’s theory.

Experiment 2: Drive Level Effects

All rats had equal training, but drive levels (hours of deprivation) varied. Higher deprivation led to
more bar presses in extinction. Even zero-deprivation rats responded slightly, which Hull attributed
to residual drives, reinforcing the idea of a generalized drive pool.

Incentive Motivation (K)

Hull added incentive (K) to account for effects of reward quality/quantity. Final formula:
sEr = sHr × D × K
Performance is stronger for high-value rewards (e.g., steak vs. tasteless burger), even if drive is
constant. Incentive is learned (via classical conditioning), not innate.
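Adding K keeps the relation multiplicative, which the sketch below illustrates. Again, the values are hypothetical 0–1 scales chosen only to show the pattern described above.

```python
# Hull's expanded rule: sEr = sHr x D x K, where K (incentive motivation)
# reflects the learned value of the reward.

def excitatory_potential(habit: float, drive: float, incentive: float) -> float:
    return habit * drive * incentive

# Same habit and drive, different reward value: the high-value reward
# (steak) energizes behavior more than the low-value one (tasteless burger).
steak = excitatory_potential(0.6, 0.5, 1.0)
burger = excitatory_potential(0.6, 0.5, 0.2)
assert steak > burger

# Because the terms multiply, zero drive means no behavior regardless
# of how attractive the incentive is.
assert excitatory_potential(0.6, 0.0, 1.0) == 0.0
```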

Modifications to Hull’s Theory

Hull eventually revised his theory: drive reduction alone could not explain all motivated behavior. He
introduced drive stimuli reduction—reinforcement occurs when internal sensations (like hunger
pangs) are reduced, even before homeostasis is fully restored. This made his theory more flexible
but also more complex.

KENNETH SPENCE

Contribution to Hull's Theory


Kenneth Spence extended Hull’s learning theory but did not support tension-reduction as the basis
of motivation. Instead, he emphasized the role of incentive and emotion in enhancing general drive
levels. He also developed a theory of inhibition, where frustration was seen as a source of response
interference.

Discrimination Learning


Spence’s discrimination learning involved animals choosing between two stimuli, with one
consistently reinforced and the other not. He proposed seven key assumptions: reinforced stimuli
increase habit strength, non-reinforced stimuli build inhibition, and both generalize to similar
stimuli. The algebraic combination of habit strength and inhibition determines approach or
avoidance. The stimulus with the highest net habit strength is chosen.

Motivation and Reinforcement


Spence defined motivation as arising from primary appetitional or aversive needs due to
environmental changes. Reinforcement was not necessarily due to drive reduction but due to
environmental events increasing response probability. He distinguished classical (reinforcement-
dependent) and instrumental conditioning (contiguity-based). He rejected Hull’s idea that all
learning depends on drive reduction, showing that instrumental learning can occur without
reinforcement.
Spence & Lippitt’s Experiment


Rats were trained without deprivation, then either food- or water-deprived. They chose the path
leading to the reinforcer matching their current need. This showed learning occurred even in the
absence of drive reduction, challenging Hull’s theory.

Incentive Motivation (K)


Spence and Hull used “K” to represent incentive motivation. The intensity of behavior depended on
the vigor of the consummatory response (RG) elicited by the goal. Through classical conditioning,
stimuli associated with RG could elicit partial anticipatory responses (rg) before reaching the goal.
Feedback from these rg’s (sg) helped motivate ongoing instrumental behavior.

Fractional Anticipatory Response (rg-sg Mechanism)


The rg-sg mechanism explained how stimuli along the path to a goal elicited partial responses and
sensory feedback that increased motivation. These responses built up as the organism neared the
goal, explaining faster approach behavior. Though attempts to measure rg's were unsuccessful, the
model remained influential in motivation theory.

Emotionality as Generalized Drive


Spence distinguished between appetitional and aversive motivational states. Appetitional needs
(e.g., hunger) were linked to distinctive drive stimuli, while aversive drives (e.g., pain) varied with
stimulus intensity and internal emotional state. Emotional responses contributed to general drive
(D), which energized behavior. Greater drive produced stronger responses, especially in instrumental
reward conditioning.

Frustration – Competition Theory of Extinction


Spence argued that extinction was due to frustration when reinforcers were absent, which elicited
incompatible competing behaviours. Anticipated frustration (from repeated non-reinforcement)
produced learned interference, accelerating extinction. Larger reinforcers during learning produced
greater frustration upon removal, leading to faster extinction.

Revised Behavioral Formula


Spence revised Hull’s formula to:
R = f(E) = H × (D + K)
Here, habit strength (H) is multiplied by the additive effects of drive (D) and incentive (K) to
determine response strength.
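The practical difference between Hull’s multiplicative D × K and Spence’s additive D + K can be shown side by side. This is a sketch with illustrative numbers, not values from either theorist’s data.

```python
# Hull:   E = H x D x K    (drive and incentive multiply)
# Spence: E = H x (D + K)  (drive and incentive add)

def hull(habit: float, drive: float, incentive: float) -> float:
    return habit * drive * incentive

def spence(habit: float, drive: float, incentive: float) -> float:
    return habit * (drive + incentive)

# At zero drive but positive incentive, Hull predicts no responding,
# while Spence still predicts some -- consistent with learning observed
# in non-deprived animals (e.g., the Spence & Lippitt experiment).
assert hull(0.7, 0.0, 0.5) == 0.0
assert spence(0.7, 0.0, 0.5) > 0.0
```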

SELF-DETERMINATION THEORY (SDT)

Developed by Deci and Ryan, SDT emphasizes intrinsic motivation as the natural tendency to
explore, learn, and grow. Intrinsic motivation leads to greater interest, performance, and well-being.
It arises from internal sources rather than external rewards or pressures.

Controlled vs. Self-Determined Behavior


Controlled behavior stems from external or internalized pressures (e.g., societal expectations), while
self-determined behavior reflects personal choice and intrinsic values. External rewards or threats
weaken intrinsic motivation by undermining autonomy.

Three Basic Psychological Needs

1. Autonomy – Desire to act in line with one's own values and choices.
2. Competence – Need to feel effective and successful in one’s actions.
3. Relatedness – Need to feel connected and cared for in social relationships.
When satisfied, these needs enhance intrinsic motivation and psychological growth.

Social Environment and Motivation


SDT identifies three social-environment factors:

 Autonomy Support – Encouraging choice and innovation fosters autonomy.


 Structure – Clear expectations and feedback aid competence without being overly
controlling.
 Interpersonal Involvement – Emotional support strengthens relatedness and motivation.

Types of Extrinsic Motivation (Self-Determination Continuum)

1. External Regulation – Behavior driven by external rewards or punishments.


2. Introjected Regulation – Behavior driven by internal pressures (e.g., guilt).
3. Identified Regulation – Behavior aligns with personal values, though goal-driven.
4. Integrated Regulation – Behavior fully assimilated into one’s self-concept, though still
extrinsically motivated.

Impact of Rewards and Feedback


Positive feedback enhances intrinsic motivation by boosting competence. Conversely, rewards
perceived as controlling (even positive ones) diminish autonomy and weaken intrinsic interest.

FLOW
Flow is a psychological state of complete absorption in an activity that is challenging yet matched to
one’s skills. It is deeply engaging and intrinsically rewarding.

Characteristics of Flow

 Oneness – Merging of self and activity.


 Loss of self-awareness – The "doer" disappears; the doing takes over.
 Total concentration – No internal talk or distractions.
 Time distortion – Time either speeds up or slows down.
 Clarity and exhilaration – Effort feels natural and enjoyable.

Autotelic Nature of Flow


Activities that produce flow are “autotelic” — done for their own sake, not for external rewards.
Writers, athletes, and musicians often describe this state.

Flow Quadrants
Flow occurs when both challenge and skill are high (1:1 ratio).

 Low skill/low challenge = apathy


 High challenge/low skill = anxiety
 Low challenge/high skill = boredom
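The quadrant scheme above can be sketched as a small classifier; the threshold value and function name are illustrative assumptions, not part of the theory:

```python
def flow_state(challenge, skill, high=0.5):
    """Classify an activity into the challenge/skill quadrants
    described above. The cutoff `high` is arbitrary here; the
    theory only requires that both dimensions be high and balanced
    for flow."""
    if challenge >= high and skill >= high:
        return "flow"       # high challenge matched by high skill
    if challenge >= high:
        return "anxiety"    # challenge outstrips skill
    if skill >= high:
        return "boredom"    # skill outstrips challenge
    return "apathy"         # neither is engaged

print(flow_state(0.9, 0.9))  # flow
print(flow_state(0.9, 0.2))  # anxiety
```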

Conditions for Flow

 Clear goals and immediate feedback


 Total concentration and loss of self-awareness
 Transformation of time perception
 Intrinsic reward in the activity itself

Cultural and Familial Influences


Cultures and families that provide structure, autonomy, and involvement foster more frequent flow
experiences. Encouraging children with clear goals, trust, and challenges helps promote flow.
Flow in Relationships
Sexual and emotional intimacy can also produce flow when marked by mutual trust, creativity, and
increasing complexity. Maintaining challenge and involvement sustains enjoyment.

REGULATORY FOCUS THEORY

Self-Regulation and Self-Control


Self-regulation refers to the capacity to initiate and guide actions toward achieving future goals,
while self-control pertains to managing emotions, thoughts, and behaviours in the face of
temptation. Both are vital for well-being, adjustment, and competence, with high self-control linked
to fewer addictive behaviours and better psychological health.

Foundations of Regulatory Focus Theory


E. Tory Higgins (1997) proposed Regulatory Focus Theory, asserting that individuals regulate their
actions according to two orientations when pursuing future goals: promotion focus and prevention
focus.

1. Promotion Focus
In promotion focus, individuals strive to attain gains, growth, and self-enhancement. Success
in this domain leads to feelings of joy, while failure results in sadness or disappointment.
2. Prevention Focus
In prevention focus, individuals aim to avoid harm and maintain safety. Achieving these
goals brings relief; failing to do so results in anxiety or fear due to persistent threats.

Dynamic and Trait-Level Differences


People shift between promotion and prevention focus based on situational cues. However, some
individuals consistently lean toward one orientation, focusing either on self-expansion (promotion)
or self-protection (prevention).

Self-Directed Prevention Focus


Prevention efforts can involve scanning and correcting the self to avoid wrongdoing, aligning one's
behavior with moral or societal standards.

Self-Discrepancy Theory Integration


Higgins links prevention goals to reducing the gap between the actual and the “ought” self
(socially/morally expected), and promotion goals to reducing the gap between the actual and the
“ideal” self (personal aspirations).

Intrinsic vs. Extrinsic Distinction


Although promotion goals are often intrinsic and prevention goals extrinsic, a goal of either focus
can be intrinsic or extrinsic. For instance, wanting to improve a relationship (intrinsic) and
wanting to become wealthy (extrinsic) both reflect promotion.

Emotional Outcomes of Goal Striving


Promotion goals seek rewards and personal growth—success brings joy; failure causes sadness.
Prevention goals seek safety and avoid threats—success brings relief; failure leads to anxiety.

CURIOSITY
Curiosity, an internal motivational factor, drives individuals to explore. It arises from an optimal
discrepancy between current knowledge and what could be learned, stimulated by novelty and
interest.
Types of Curiosity
There are two main types: sensory curiosity (triggered by sensory changes) and cognitive curiosity
(triggered by gaps in understanding, encouraging cognitive restructuring).

Education and Motivation


Allowing learners to choose what they want to learn enhances motivation. Teachers should ideally
align curriculum with students' interests to increase intrinsic motivation.

Curiosity as a Motivational Construct


Curiosity is both a behavior and a psychological construct underlying exploratory behavior. It
emerges when environmental novelty invites investigation and is essential for motivation.

Historical Roots
Early thinkers like Cicero saw curiosity as a “passion for learning.” Psychology originally viewed it
with suspicion but later accepted its motivational significance, especially in non-homeostatic
behaviours like maze exploration in rats.

William James’ Types of Curiosity


James distinguished between instinctive, biologically driven curiosity and higher-order scientific or
metaphysical curiosity aimed at resolving knowledge gaps.

Psychoanalytic View
Freud linked curiosity to the sexual drive, particularly in early childhood, with a shift toward more
cognitive exploratory behavior under social constraints.

Blarer’s View
Blarer emphasized that curiosity is intrinsic to perception and experience, laying the foundation for
intrinsic motivation theories.

THEORETICAL CONCEPTS OF CURIOSITY AND EXPLORATION

Drive-Based vs Learned Views


Curiosity may stem from an innate exploratory drive or be learned through conditioning. Nissen’s rat
studies support both perspectives.

Berlyne’s Theory
Berlyne proposed that curiosity is driven by arousal discrepancies caused by novelty, complexity, and
surprise, and that exploratory behavior helps maintain optimal arousal.

Arousal Models
Fiske & Maddi distinguished arousal (bodily response) from activation (central nervous system
readiness), emphasizing medium arousal as optimal for exploration.

McReynolds and Fowler’s Views


McReynolds emphasized perceptual enrichment from exploration, while Fowler viewed curiosity as a
homeostatic drive triggered and satisfied by the same stimuli, often preceded by boredom.

Hunt’s Cognitive-Motivational Model


Hunt saw curiosity as emerging from a desire to process incongruent information—cognitive
inconsistency becomes the basis for exploration and learning.

Drive Theory Challenges


Drive theories differ on whether curiosity is primary or secondary. While some evidence shows
curiosity intensifies when unsatisfied, Hebb noted that both increases and decreases in drive can be
rewarding, posing a paradox.
Intrinsic Drive Viewpoint
Harlow argued that curiosity operates independently from homeostatic drives. His intrinsic drive
theory challenges traditional drive-reduction models.

CURIOSITY AND CULTURE

Cultural Variability
Though cross-cultural similarities in curiosity exist, attitudes and opportunities for exploration vary.
Zuckerman’s concept of sensation seeking captures the desire for intense and novel experiences,
even at personal risk.

Cross-Cultural Research
Berlyne’s cross-cultural work confirms that stimulus demand characteristics transcend cultural
boundaries. Nonetheless, curiosity must be understood within specific cultural contexts.

Developmental and Environmental Factors


Children naturally explore; humans prefer slightly more complex stimuli over time. Exploration
increases competence and information processing capabilities.

Arousal and Exploration


Exploration results from a person-environment interaction. High arousal leads to withdrawal, while
moderate discrepancy spurs exploration. Attachment security reduces anxiety and enhances
exploration.

ISSUES RELATED TO CURIOSITY

Biological Influences
Temperament influences curiosity—stable or extroverted children tend to explore more. High
anxiety inhibits exploration due to a focus on returning to homeostasis and managing survival
threats.

Cognitive and Experiential Influences


Experience builds competence and complex cognition, increasing exploration. Intrinsic motivation
drives learners to seek novelty and stretch capabilities.

Self-Determination Theory
This theory emphasizes three innate needs—competence, relatedness, and autonomy—as
motivators of exploration and curiosity. Internalization of values supports persistence and goal
pursuit.

SENSATION SEEKING

Definition and Traits


Sensation seeking is the pursuit of novel, complex, and risky experiences, with four components:
thrill/adventure seeking, experience seeking, disinhibition, and boredom susceptibility.

Biological Correlates
Low monoamine oxidase (MAO) levels are linked to sensation seeking: MAO breaks down dopamine,
serotonin, and norepinephrine—neurochemicals associated with pleasure and reward—so low MAO leaves
more of them available.

Testosterone and Sensation Seeking


Testosterone levels are also implicated in male sensation-seeking behavior. Such individuals may
engage in risky behavior for stimulation.
Learned Aspects and Benefits
Thrill seekers reinterpret fear as excitement, are highly competent in coping, and often have high
IQs, social skills, and creative potential. However, they may resist long-term commitments and
prefer novelty.

UNIT 4

EMOTION

Emotion

Reactions consisting of subjective cognitive states, physiological reactions, and expressive behaviors.
Robert Plutchik (2003) has identified eight primary emotions. These are fear, surprise, sadness,
disgust, anger, anticipation, joy, and trust (acceptance).

Theories of emotion

 James-Lange Theory
 Cannon-Bard Theory
 Schachter–Singer two-factor theory

Components of emotion

Emotion involves (1) a subjective conscious experience (the cognitive component)

(2) bodily arousal (the physiological component)

(3) characteristic overt expressions (the behavioral component).

Measurement of emotion

Self-report measures

 Day reconstruction
 Experience sampling
 Real time technique

Observational method

 Facial action coding system (FACS)


 Electromyography (EMG)

Development of emotions
Facial Feedback Hypothesis

The facial feedback hypothesis posits that facial expressions are not only the result of emotions but
can also influence and intensify those emotions. According to this idea, when a person smiles, even
artificially, sensory feedback from the facial muscles is sent to the brain, which then enhances the
feeling of happiness; similarly, frowning can deepen feelings of sadness or anger. This suggests that
emotions are not purely mental experiences but are shaped by physical expression. Research
supporting this includes studies where participants reported feeling happier when asked to hold a
pen between their teeth (mimicking a smile), indicating that facial movement alone can influence
emotional experience.

Neural mechanisms of Emotion- Fear

Joseph LeDoux proposed that the amygdala is central to the neural circuits involved in processing
fear. Sensory inputs reach the thalamus, which directs the information via two distinct pathways:

 A fast, subcortical route to the amygdala allows for immediate emotional responses—
especially to threats—triggering autonomic arousal and hormonal changes.
 A slower, cortical route processes the information for detailed evaluation.
This dual pathway explains how fear can be triggered unconsciously, before full cognitive
awareness, and why individuals with anxiety disorders may experience irrational fear
responses.

Neural mechanisms of Emotion- Anger & Aggression


1. Serotonin: Low serotonin levels are linked to increased aggression in both animals and humans.
Serotonergic neurons appear to have an inhibitory effect on aggression.
2. Prefrontal Cortex: Especially the orbitofrontal cortex, helps regulate emotional responses and
assess the emotional relevance of complex situations. It receives inputs from various brain regions
and sends outputs to influence behavior.
3. Hormones: Elevated testosterone is correlated with aggression, including both intermale and
interfemale aggression. Castration reduces sex-related aggression, but not necessarily other forms.
4. Klüver-Bucy Syndrome: Caused by damage to the limbic system, especially the amygdala and
hippocampus, leading to a dramatic reduction in fear and aggression in animals.

Neural basis of communication of emotions-recognition & expression

Charles Darwin believed that human emotional expressions are natural and not learned, especially
facial expressions. He supported this idea by watching his own children and talking to people in
remote cultures. Later, Ekman and Friesen (1971) confirmed this by showing that people from an
isolated tribe in New Guinea could recognize Western facial expressions and also made similar
expressions themselves. Studies have also found that blind and sighted children show similar facial
expressions, suggesting that these are inborn. However, it's still unclear whether other forms of
emotional communication, like tone of voice or body posture, are learned or partly innate.

RECOGNITION

We recognize other people's emotions mainly through what we see and hear—facial expressions
and tone of voice. Research shows that the right hemisphere of the brain plays a bigger role than the
left in understanding emotions, especially negative ones. For example, emotional cues are better
recognized through the left ear and left visual field, which connect to the right hemisphere. While
the left side of the brain processes the literal meaning of speech, the right side processes the
emotional tone. Studies have shown that people with right hemisphere damage can understand the
situation but struggle to judge emotions from faces or gestures. One case showed a man who
couldn’t understand words but could still detect the emotion in the tone of voice, proving that tone
and word meaning are processed separately. Damage to the right somatosensory cortex also affects
emotion recognition, as it impairs the ability to sense facial expressions. Recognition of faces and
recognition of emotional expressions involve different brain areas—damage to the visual association
cortex may cause face blindness (prosopagnosia), but not affect emotion recognition. The amygdala
plays a key role in recognizing emotional expressions, especially fear. Brain scans show higher
amygdala activity when people view fearful faces. The basal ganglia are also involved—damage here
can impair recognition of disgust, as seen in people with Huntington’s disease or OCD.

EXPRESSION

Facial expressions of emotion are mostly automatic and involuntary. Ekman and Davidson confirmed
Duchenne de Boulogne’s early finding that genuine smiles (called Duchenne smiles) involve the
contraction of muscles near the eyes, unlike fake or polite smiles. This insight even influenced acting
methods, like those of Stanislavsky. Two neurological disorders show how different brain systems
control emotional expression: in volitional facial paresis, people can’t move their facial muscles
voluntarily but can still show genuine emotional expressions; in emotional facial paresis, the reverse
happens—they can move their facial muscles voluntarily but can’t show emotions on one side of the
face. The right hemisphere of the brain plays a major role in expressing emotions. Research shows
that the left side of the face (controlled by the right hemisphere) tends to show stronger emotional
expression. People with right hemisphere damage have trouble expressing emotion both in facial
expressions and in tone of voice. Interestingly, the amygdala, though essential for recognizing
emotions, is not necessary for expressing them. For example, a woman who had her amygdala
removed could still show facial expressions even though she could no longer recognize them in
others.

STRESS AND COPING

 GENERAL ADAPTATION SYNDROME

 SOURCES OF STRESS

 COPING STYLES

STRESS

Stress is a natural human response to challenging or threatening situations, defined as a state of
worry or mental tension. The events or situations that trigger this response are called stressors,
which can be internal (like thoughts or memories) or external (like environmental or social
pressures). Stress can be either positive (eustress) or negative (distress). Eustress helps motivate
and energize people, such as when starting a new job or preparing for a presentation, and it is
usually short-term and manageable. In contrast, distress occurs when challenges exceed a person's
ability to cope—such as in the case of a toxic work environment or personal loss—and can harm
both mental and physical health. How we handle stress depends on how we appraise the situation.
In primary appraisal, a person assesses whether the situation involves harm (damage already done),
threat (possible future harm), or challenge (a situation they feel capable of handling). This is
followed by secondary appraisal, where the person evaluates whether they have the resources to
manage the situation. If they feel equipped to cope, the event is less stressful and even seen as a
challenge; if not, it results in greater stress. The actual mental and physical response to a stressor is
called strain. Walter Cannon contributed to stress research by identifying the fight-or-flight
response, a physiological reaction that prepares the body to face or flee from danger. While this
response is adaptive in emergencies, over time it can impair emotional and physical health if
activated too frequently or in non-life-threatening situations.

GENERAL ADAPTATION SYNDROME (GAS)

Hans Selye’s work on stress led to the development of the General Adaptation Syndrome (GAS),
which describes the body’s typical physiological response to any stressor. He observed that stress
consistently led to changes like enlargement of the adrenal cortex, shrinkage of the thymus and
lymph glands, and ulcers. GAS occurs in three stages: alarm, resistance, and exhaustion. In the
alarm stage, the body mobilizes to face the threat. In the resistance stage, it tries to cope with the
stressor. If the stress continues and coping fails, the body enters exhaustion, where physiological
resources are depleted.

Alarm Reaction

The alarm stage triggers the fight-or-flight response. The sympathetic nervous system activates key
organs and releases hormones like epinephrine and norepinephrine. Simultaneously, the HPA axis is
activated, causing the pituitary to release ACTH, which stimulates the adrenal glands to produce
cortisol. These changes speed up necessary functions and suppress non-essential ones. Symptoms
include fatigue, muscle pain, fever, headache, upset stomach, shortness of breath, and low energy.

Stage of Resistance

If the stressor continues, the body enters the resistance stage, where it adjusts to the persistent
threat. The initial sympathetic arousal reduces, but HPA activation remains high. Although the body
might appear to function normally, its ability to resist new stressors declines. Hormone levels stay
elevated, and this prolonged strain may lead to diseases of adaptation, like hypertension or ulcers.
The body's defences stay active, but symptoms of the alarm stage may fade temporarily.

Stage of Exhaustion

When stress is prolonged and resources are depleted, the exhaustion stage begins. The body
becomes vulnerable to illness as the immune system weakens. Common signs include irritability,
apathy, anxiety, and mental fatigue. Behaviourally, people may withdraw, neglect responsibilities,
and make poor decisions. Physically, they may suffer from frequent illness, overuse of medication,
and chronic fatigue. If unresolved, this stage can result in severe health consequences or even death.

SOURCES OF STRESS THROUGHOUT LIFE

Stress affects individuals of all ages—infants, children, and adults. Although sources vary by life
stage, stress can arise from within the person, family dynamics, and wider social or environmental
factors. Each source contributes uniquely to psychological and physical well-being.

Sources Within the Person: Illness

Physical illness places both physical and emotional demands on individuals. Children tend to cope
better with illness than the elderly, whose immune systems weaken over time. Additionally, the
meaning and impact of illness varies by age. For adults, chronic illness often brings worry about
current and future challenges, increasing stress levels.

Sources Within the Person: Conflict

Stress often arises from internal motivational conflicts. These occur when individuals face
competing desires or goals. Common types of conflict include:

 Approach-Approach Conflict: Choosing between two equally desirable options (e.g., two
great job offers).
 Avoidance-Avoidance Conflict: Choosing between two undesirable options (e.g., studying
for a tough exam vs. failing it).
 Approach-Avoidance Conflict: One option has both attractive and unattractive features (e.g.,
eating dessert vs. gaining weight).

Such conflicts are stressful because they create tension between opposing desires, and the
consequences of making the "wrong" choice may feel significant.

Sources Within the Person: Social Motives

Social needs—such as connection, achievement, and recognition—also cause stress. Rejection,
conflict, failure, or disrespect can activate the body’s stress systems, raising blood pressure,
cortisol, and other stress hormones. Social evaluation and fear of negative judgment are powerful
stressors.

Sources in the Family

Family life is a major source of stress due to interpersonal relationships and shared responsibilities.
Financial strain, household disagreements, and conflicting goals are common stress triggers. Major
family stressors include new family members, marital conflict or divorce, and illness or death in the
family.

Addition to the Family

The arrival of a newborn can be a joyful yet stressful event. Mothers may face emotional and
physical challenges, while fathers often feel pressure to provide financially. A baby’s temperament
—whether easy-going or difficult—greatly affects parental stress. Difficult babies resist routines and
are harder to soothe, which can strain parental coping.

High Stress During Pregnancy

Pregnancy itself can be a source of stress, especially when external pressures or emotional strain
are present. High stress levels during pregnancy can negatively affect the baby’s health, potentially
leading to low birth weight or premature delivery.

Marital Strain and Divorce

Persistent marital conflict increases stress responses like cortisol and blood pressure. Divorce
disrupts the lives of all family members. Children may face changes in routine, living situations, and
caregiver roles. While most families adapt over time, some children show lasting effects.

Illness, Disability, and Death in the Family

Chronic illness in a child can lead to long-term family stress, sometimes resulting in PTSD-like
symptoms. Adult illness or disability affects income, time, and emotional balance. The death of a
loved one has profound psychological effects. For children, losing a parent can be traumatic and
difficult to understand. Spouses may struggle with loss of identity, purpose, and hope, and this
emotional toll can damage long-term health.

Stress from Community and Society

External stressors include school, jobs, competition, and environmental demands. People in high-
pressure jobs, like healthcare, often face intense stress. Poor relationships at work or school, lack of
recognition, and job insecurity also increase stress. Children may face academic stress, while adults
with low socioeconomic status or who experience discrimination (based on race, class, or gender
identity) are at higher risk for chronic stress and its health consequences.

COPING STYLE

Coping style refers to an individual's consistent way of responding to stress. It reflects how people
typically manage challenging situations, and these tendencies can influence both short-term
reactions and long-term psychological outcomes. There are several major types of coping styles,
including approach versus avoidance coping, problem-focused and emotion-focused coping, and
proactive coping.

Approach Versus Avoidance Coping

Approach coping involves directly confronting the stressor. People who use this style tend to face
problems head-on, plan solutions, seek social support, and take goal-directed action. For example,
talking to a manager about work stress or studying ahead of an exam are approach strategies. This
type of coping is usually constructive and problem-solving oriented.
In contrast, avoidance coping focuses on escaping the stressor or distracting oneself from it.
Individuals using this strategy may procrastinate, deny the issue, oversleep, or spend time on social
media to avoid confronting their emotions. While avoidance may bring short-term relief, it often
leads to greater stress over time, as the root problem remains unresolved.

Problem-Focused and Emotion-Focused Coping

Problem-focused coping aims at tackling the issue directly by taking concrete actions to change the
stressful situation. This method is particularly useful for controllable stressors, such as workplace
challenges or academic deadlines.

On the other hand, emotion-focused coping centres on managing the emotional distress that comes
with a stressful event. This is more helpful when the situation cannot be changed, such as coping
with chronic illness or loss. Effective stress management often requires flexibility—people who can
shift between these strategies depending on the situation tend to cope better overall.

Emotional-Approach Coping

A subtype of emotion-focused coping is emotional-approach coping, which involves processing,
expressing, and understanding one’s emotions in response to stress. This method has been shown to
improve psychological adjustment, especially in those with chronic conditions like pain, pregnancy,
or cancer. It can help individuals attach personal meaning to adversity, enhancing resilience and
well-being. Emotional-approach coping tends to be particularly effective among women and is linked
to better emotional regulation and health outcomes.

Proactive Coping

Proactive coping refers to efforts made in advance to prevent or prepare for potential stressors. It
begins with anticipating challenges by reflecting on past experiences or upcoming responsibilities.
After identifying these potential stressors, individuals develop strategies such as saving money,
building social support, or learning new skills to strengthen their resources.

Additionally, proactive coping includes having contingency plans and adopting a positive, growth-
oriented mindset. People who frame difficulties as opportunities and remain optimistic about their
ability to handle stress are more resilient. This style fosters confidence, emotional stability, and
better long-term coping.