Advanced Psychological Processes Full Note
INTELLIGENCE
THEORIES OF INTELLIGENCE
Spearman's g-factor theory and multifactor theories such as Thurstone's and Guilford's are combined in Vernon's (1950) Hierarchical Theory of Intelligence, which depicts intelligence as a multi-level pyramid:
Top Level: Spearman's g-factor, which stands for general intelligence and affects all
intellectual endeavors.
Second Level: Major group factors, comparable to Thurstone's primary abilities, such as:
Verbal-Educational (V:Ed), the combination of verbal, numerical, and educational skills.
Practical-Mechanical (K:M), encompassing mechanical, practical, spatial, and physical/manual skills.
Third Level: As in Guilford's model, minor group factors further break down V:Ed and K:M into more specialized skills.
Bottom Level: Spearman's s-factors, which are particular skills associated with certain tasks.
This hierarchical model integrates general, group, and specific intelligences into a structured framework, resembling a genealogical tree. It recognizes both general and specific cognitive abilities.
JEROME BRUNER
Jerome Bruner was a prominent cognitive psychologist who proposed a theory of intelligence that
emphasized the role of culture and experience in shaping human cognition. According to Bruner,
intelligence is not a fixed entity or a set of inherent abilities, but rather a dynamic process that is
shaped by the interactions between individuals and their environment.
EARL HUNT
Hunt proposed that three classes of cognitive performance are central to intellectual functioning, the first being the person's choice about the way to internally (mentally) represent a problem. He studied individual differences in the way problems are represented, the way material is encoded, the way information is transferred within working memory, and other aspects of information processing.
ARTHUR JENSEN
American psychologist Arthur R. Jensen (1923–2012) is well-known for his contentious studies on
intelligence, especially those that deal with its heritability and racial group differences. With an
estimated 60% to 80% heritability, he maintained that intelligence is heavily influenced by genetics.
Following a 1969 article in the Harvard Educational Review, he put forth a two-level theory of
intelligence: Level I (associative learning—memory, attention, rote learning) and Level II (cognitive
learning—abstract reasoning, problem-solving, symbolic thought). This work garnered a lot of
attention. Jensen asserted that while Level II skills, which are more important for academic success,
are more common among middle-class white and Asian populations, Level I skills are equally
distributed across racial and social groups. While acknowledging the possibility of cultural bias in intelligence testing, he maintained that such bias could not fully account for the group differences he reported.
We acknowledge that there are numerous ways to organize the following information (cf. Davidson
& Kemp, 2011; Esping & Plucker, 2008; Gardner, Kornhaber, & Wake, 1996; Sternberg, 1990). The
discussion of the following theories is roughly chronological, although somewhat arbitrary, and the
reader should not infer a priority based on the order in which the material is presented.
PASS MODEL
Luria’s neuropsychological model (1966, 1970, 1973) outlines intelligence in terms of three
functional units or "Blocks" in the brain. Block 1 is responsible for attention—maintaining focus and
alertness. Block 2 handles how information is processed, using simultaneous processing
(understanding information as a whole, like viewing a painting) and successive processing (analyzing
parts step by step, in sequence). Block 3 is involved in planning, decision-making, and regulation of
behavior. This model laid the theoretical foundation for the Kaufman Assessment Battery for
Children (K-ABC), which emphasized how children solve problems (sequential vs. simultaneous
processing) rather than what content they solve (verbal vs. non-verbal). Expanding on Luria’s model,
the PASS theory (Planning, Attention, Simultaneous, and Successive) was developed by Das, Naglieri,
and Kirby (1994), incorporating all three functional blocks. This theory became the basis for the
Cognitive Assessment System (CAS) developed by Naglieri and Das (1997), offering a more process-
oriented approach to understanding cognitive abilities.
THE CATTELL-HORN-CARROLL (CHC) THEORY
The Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted and applied model
in modern IQ testing. It combines Cattell and Horn’s Gf-Gc theory with Carroll’s Three-Stratum
Theory, both of which originated from Spearman’s g-factor concept. Cattell distinguished between
fluid intelligence (Gf)—the ability to solve novel problems, and crystallized intelligence (Gc)—
knowledge gained through learning and experience. Horn later expanded the model to include
additional Broad Abilities, such as visual processing (Gv), short-term memory (Gsm), long-term
retrieval (Glr), and processing speed (Gs), among others, treating them as separate but equal factors
rather than hierarchical.
Carroll, through extensive factor analysis, proposed a hierarchical model with three levels:
Stratum III: General intelligence (g)
Stratum II: Broad abilities (e.g., Gf, Gc, Gv, Gs)
Stratum I: Narrow, specific abilities (around 70 total)
The modern CHC model merges these theories into two key levels—Broad (Stratum II) and Narrow
(Stratum I) abilities—dropping an explicit general g-factor. It proposes 10 broad cognitive abilities,
but only 7 are typically measured by IQ tests (Gf, Gc, Gv, Gsm, Glr, Ga, Gs), while quantitative
knowledge (Gq) and reading/writing ability (Grw) are assessed through achievement tests, and
decision/reaction speed (Gt) is generally not tested.
The CHC theory has shaped recent major intelligence tests like the Stanford-Binet-5, KABC-II, and
Woodcock-Johnson III, shifting the focus from just a few part scores to a more nuanced view of
multiple cognitive processes. Despite its widespread acceptance, debate continues over the
importance of general intelligence (g) versus multiple intelligences in understanding cognitive
abilities.
Recent research suggests that general intelligence (g) is not a single unified cognitive mechanism but
rather emerges from the interaction of multiple underlying cognitive processes that become
interconnected through development. The three most widely studied mechanisms contributing to g
are working memory, processing speed, and explicit associative learning.
Working memory refers to the ability to hold, update, and manipulate information while resisting
distractions. Individuals with strong working memory are better at maintaining task goals and
controlling attention in the face of interference. Numerous studies have shown a strong correlation
between working memory and g, with neurological evidence pointing to overlapping brain activation
patterns in the lateral prefrontal cortex and parietal regions.
Processing speed is the rate at which basic cognitive tasks are performed. People with higher
intelligence tend to process information faster, as shown in tasks involving reaction time and
inspection time. Processing speed is considered a key component of g in both the Horn-Cattell
theory (as “Gs”) and Carroll’s three-stratum model (as “general speediness”).
Explicit associative learning involves the deliberate formation and recall of connections between
stimuli. While early studies found weak links between associative learning and intelligence, newer
research using more complex learning tasks has revealed stronger correlations, even after
accounting for working memory and processing speed.
Together, these findings indicate that g may reflect a network of interacting cognitive processes,
rather than a single unitary ability. This perspective is shaping a more dynamic and multifaceted
understanding of intelligence.
PARIETO-FRONTAL INTEGRATION THEORY (P-FIT)
The Parieto-Frontal Integration Theory (P-FIT), proposed by Jung and Haier (2007), suggests that
intelligence arises from a distributed network of brain regions, with key roles played by the parietal
and frontal lobes. After reviewing 37 neuroimaging studies, they found consistent associations
between intelligence and activity in these areas, although supporting regions span the entire brain.
P-FIT outlines four stages of information processing:
Sensory Input: Temporal and occipital lobes handle visual and auditory input.
Integration: Parietal cortex processes and integrates sensory information.
Problem Solving: Frontal lobes engage in reasoning, evaluation, and hypothesis testing, in
coordination with parietal regions.
Response Selection: The anterior cingulate inhibits incorrect or competing responses.
The white matter pathways, especially the arcuate fasciculus, are crucial for efficiently transferring
information between regions, supporting overall cognitive performance. A core idea of P-FIT is that
different individuals may activate different combinations of these regions to achieve similar levels of
intelligence, accounting for variability in cognitive strengths and weaknesses. While the theory has
been well-received, critics have called for more research using larger samples and diverse
intelligence measures. Follow-up studies (e.g., Colom, Schmithorst) have explored P-FIT in
developmental contexts and in relation to network efficiency, offering further support and
refinement. A 2009 special issue of Intelligence compiled 11 new studies extending the theory’s
reach.
MINIMAL COGNITIVE ARCHITECTURE
Michael Anderson’s (1992, 2005) theory of Minimal Cognitive Architecture offers a developmental
model that integrates both general and specific cognitive abilities, drawing on Fodor’s (1983)
distinction between central cognitive processes and modular input systems. Anderson proposes two
distinct routes for acquiring knowledge:
Route 1 involves thoughtful problem solving and is constrained by processing speed, which Anderson
identifies as the core of general intelligence (g). This route includes two independent processors:
verbal and spatial, which are normally distributed and uncorrelated. Differences in individual
intelligence, according to Anderson, stem from variations in this central processing route.
Route 2 operates through modular, domain-specific systems such as syntactic parsing, phonological
encoding, 3D perception, and theory of mind. These modules function automatically and
independently of Route 1, and are not limited by central processing speed. Though innate, such
modules can also be acquired through extensive practice and evolve over time, contributing to
cognitive development.
Anderson's model attempts to bridge general intelligence theories with Gardner’s Multiple
Intelligences, acknowledging both domain-general mechanisms (processing speed) and domain-
specific capabilities (modules). It also explains how individuals with low IQ may still excel in specific
areas and how conditions like dyslexia or autism can arise alongside average or high intelligence.
However, S.B. Kaufman (2011) criticized the model for its over-reliance on processing speed as the
sole central mechanism and its limited scope regarding Route 2. Kaufman argues that Anderson’s
model dismisses meaningful individual differences in modular processing and excludes other
domain-general learning mechanisms (like implicit learning or latent inhibition), thus narrowing the
investigation of cognitive processes involved in intelligence.
DUAL-PROCESS THEORY
The Dual-Process (DP) Theory of Human Intelligence (Davidson & Kemp, 2011; S. B. Kaufman, 2009,
2011, 2013) proposes that intelligent behavior arises from the dynamic interaction between two
types of cognitive processes: goal-directed (controlled) cognition and spontaneous (automatic)
cognition. Controlled cognition involves deliberate processes such as metacognition, self-regulation,
working memory, and planning, which are essential for tasks requiring abstract reasoning and
attentional control. In contrast, spontaneous cognition includes mind-wandering, intuition,
daydreaming, and implicit learning, contributing significantly to creativity, insight, and adaptive
functioning through unconscious and effortless mechanisms. The theory asserts that both systems
are vital to intelligence, individuals differ in their strengths across them, and no one mode is superior
—adaptiveness lies in the capacity to shift between them based on context. It also emphasizes that
intelligence is not fixed but evolves over time through passion and engagement, and that people
may reach similar intellectual outcomes via different cognitive pathways. Although early research
(e.g., Kaufman et al., 2010) indicates that implicit learning predicts intelligent behavior
independently of general intelligence (g), further empirical validation of the theory is needed.
IMPLICIT THEORIES OF INTELLIGENCE
Students' implicit beliefs about intelligence structure their inferences, judgments, and reactions to different actions and outcomes. In social and developmental psychology, an individual's implicit theory of intelligence, a construct developed by Carol Dweck and colleagues, refers to his or her fundamental underlying beliefs regarding whether or not intelligence and abilities can change.
Carol Dweck identified two different mindsets regarding intelligence beliefs:
1. Entity Theory
2. Incremental Theory
According to the Entity Theory, intelligence is a personal quality that is fixed and cannot be changed.
For entity theorists, if perceived ability to perform a task is high, the perceived possibility for
mastery is also high. In turn, if perceived ability is low, there is little perceived possibility of mastery,
often regarded as an outlook of "learned helplessness" (Park & Kim, 2015).
Entity Theorists
1. believe that, even if people can learn new things, their intelligence stays the same.
2. will likely blame their intelligence and abilities for achievement failures.
According to the Incremental Theory, on the other hand, intelligence is not fixed and can be
improved through enough effort.
Incremental Theorists
1. attribute negative outcomes to lack of effort and/or poor strategy use, factors that they are able to change.
2. are likely to act to improve the situation through greater effort.
Holding either of these theories has important consequences for people. Studies have shown that
entity theorists of intelligence react helplessly to negative outcomes. "That is, they are not only more
likely to make negative judgments about their intelligence from failures, but also more likely to show
negative affect and debilitation. In contrast, incremental theorists, who focus more on behavioral
factors (e.g., effort, problem-solving strategies) as causes of negative achievement outcomes, tend
to act on these mediators (e.g., to try harder, develop better strategies) and to continue to work
towards mastery of the task" (Dweck, Chiu, Hong, 1995, p. 268).
In their studies, Dweck, Bandura, and Leggett assessed students' theories of intelligence and found that students holding an entity theory of intelligence chose performance-goal tasks more often than those holding an incremental theory when given the option to choose between tasks representing performance goals and learning goals (cited in Dweck, Chiu, & Hong, 1995, p. 274).
CRITICAL ANALYSIS OF MULTIPLE INTELLIGENCE THEORY AND THE THEORY OF EMOTIONAL
INTELLIGENCE
Howard Gardner’s Theory of Multiple Intelligences (MI Theory), first introduced in Frames of Mind
(1983) and expanded in later works (e.g., Gardner, 2006), challenges traditional views of intelligence
by emphasizing a broader, culturally grounded definition. Gardner defines intelligence as “an ability
or set of abilities that permit an individual to solve problems or fashion products that are of
consequence in a particular cultural setting” (Ramos-Ford & Gardner, 1997), and proposes eight
distinct intelligences: linguistic, logical-mathematical, spatial, bodily-kinaesthetic, musical,
interpersonal, intrapersonal, and naturalistic. He has also explored the potential for additional
intelligences, such as spiritual and existential. Rather than relying on factor analysis, Gardner
grounded his theory in eight criteria, including brain localization, the presence of prodigies or
savants, core operations, distinct developmental paths, evolutionary roots, support from
experimental tasks and psychometric findings, and the capacity for symbolic representation. He
critiques the traditional educational focus on linguistic and logical-mathematical abilities, arguing
that this narrow emphasis marginalizes other forms of intelligence, a concern still relevant today
given the continued prioritization of standardized testing in those domains. Despite its popularity in
education, MI Theory has faced wide-ranging criticisms—philosophical, empirical, conceptual, and
cognitive. For example, Lohman (2001) contends that the theory overlooks general inductive
reasoning ability and the role of working memory, both central to fluid intelligence (gF). Additionally,
although assessment tools for MI have been developed (e.g., Gardner et al., 1998), their
psychometric validity and reliability remain under question (Plucker, 2000; Visser et al., 2006).
Nonetheless, Gardner has consistently defended his theory, asserting that many criticisms stem from
misinterpretations or misapplications of the theory in educational settings, which he argues are not
definitive evidence against its conceptual validity.
Sternberg’s Theory of Successful Intelligence (1997) proposes that success in life results from a
balanced use of analytical, creative, and practical abilities. Analytical intelligence involves problem-
solving and evaluating ideas—abilities typically measured by conventional intelligence tests. Creative
intelligence enables individuals to generate novel ideas and formulate effective solutions, while
practical intelligence allows for the application of these ideas in real-life situations. The second major
tenet of the theory emphasizes that intelligence should be understood in relation to achieving
personal goals within one’s sociocultural context, rather than solely academic success. Third,
Sternberg posits that success depends on an individual's ability to leverage their strengths while
addressing or compensating for their weaknesses. The fourth element underscores the importance
of using intelligence to adapt to, shape, or select environments—highlighting a dynamic interaction
between the person and their surroundings. Sternberg and colleagues have demonstrated the
effectiveness of educational interventions aimed at enhancing all three intelligences, and have
shown that creative and practical intelligence predict meaningful real-world outcomes, including
academic measures like SAT scores and GPA, even beyond what analytical intelligence predicts.
However, questions remain about whether these three abilities are distinct constructs or simply
different expressions of a general intelligence factor (g), as noted by critics such as Brody (2004) and
Gottfredson (2003), leaving open the debate over the theory’s structure and empirical
distinctiveness.
Emotional Intelligence
Theories of Emotional Intelligence (EI) are grounded in the idea that people vary in how well they
can perceive, understand, use, and manage emotions to support thinking and behavior (Salovey &
Mayer, 1990). Over time, various models have emerged, including "mixed models" that blend
personality traits and emotional competencies (e.g., Bar-On, 1997; Goleman, 1998; Petrides &
Furnham, 2003). This conceptual broadness has drawn criticism for reducing EI’s scientific clarity and
precision (Eysenck, 2000; Locke, 2005). In response, Mayer, Salovey, and Caruso (2008) proposed a
more focused, ability-based four-branch model of EI, which includes: (a) perceiving emotions
accurately, (b) using emotions to enhance cognition, (c) understanding emotional meanings and
patterns, and (d) managing emotions to achieve goals. These abilities are measured through the
Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), which consists of performance tasks
scored based on expert consensus. Research shows that EI, as measured by the MSCEIT, correlates
moderately with verbal intelligence and personality traits like Openness and Agreeableness, and
predicts outcomes related to interpersonal functioning, mental health, and behavior, even after
controlling for g and personality. However, criticisms persist—Brody (2004) argues that the MSCEIT
assesses emotional knowledge rather than its effective application, and its validity as a measure of
emotional ability remains debated. Some studies find only weak associations between EI and
cognitive abilities, while others suggest that EI may not consistently predict outcomes beyond what
is already explained by general intelligence and the Big Five personality traits. As with Gardner’s and
Sternberg’s models, the incremental validity of EI as a distinct and meaningful construct is still an
open empirical question.
MENTAL CHRONOMETRY - the study of the speed of mental processes as revealed by reaction time
Mental chronometry is the scientific study of the timing of mental processes, focusing on how
quickly individuals can perform cognitive tasks, and plays a key role in cognitive psychology,
neuroscience, and intelligence research. It includes measures such as reaction time (RT), choice
reaction time (CRT), inspection time (IT), and processing speed, which together offer insights into the
efficiency of information processing. Originating with Franciscus Donders in the 1860s, who
introduced subtractive logic to isolate mental operations, mental chronometry has since been used
to explore the link between processing speed and intelligence (g), with researchers like Arthur
Jensen emphasizing inspection time as a key indicator. Although higher intelligence is often
associated with faster processing, these correlations are generally modest, and critiques note
limitations such as low reliability in RT measures and the oversimplification of intelligence to speed
alone. Today, mental chronometry is used in areas ranging from cognitive neuroscience and clinical
assessment (e.g., ADHD, dementia) to human-computer interaction, offering a useful—though
incomplete—window into the workings of the mind.
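To make Donders' subtractive logic concrete, here is a minimal sketch in Python (the reaction-time values are hypothetical, invented for illustration): the duration of a mental stage is estimated by subtracting the mean RT of a simpler task from the mean RT of a task assumed to add exactly one extra processing stage.

```python
# Donders' subtractive method: estimate the duration of a mental stage by
# subtracting mean reaction time (RT) on a simple task from mean RT on a
# task assumed to add one extra stage. All RT values are hypothetical.
from statistics import mean

simple_rt = [210, 225, 198, 240, 217]   # ms: respond to any stimulus
choice_rt = [350, 372, 341, 390, 365]   # ms: discriminate stimulus, then choose response

stage_duration = mean(choice_rt) - mean(simple_rt)
print(f"Estimated discrimination-plus-choice stage: {stage_duration:.0f} ms")
```

The subtraction rests on the assumption of "pure insertion" (adding a stage leaves the other stages unchanged), an assumption that later chronometric work has itself questioned.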
UNIT 2
LEARNING
HABITUATION AND SENSITIZATION
HABITUATION
1. Initial Response: When a novel stimulus is presented to an organism, it typically elicits a response.
This response can vary depending on the nature of the stimulus and the organism's sensitivity to it.
For example, a loud noise might startle an animal, or a new smell might cause it to investigate its
surroundings.
2. Repetition: Habituation occurs through repeated exposure to the stimulus. As the organism
encounters the stimulus multiple times, it gradually becomes less responsive to it. This reduction in
response is a result of the organism's nervous system adjusting its sensitivity to the stimulus.
3. Neural Mechanisms: Habituation involves changes in neural processing within the organism's
nervous system. Initially, when the stimulus is presented, there is a strong neural response.
However, with repeated exposure, the neurons involved in processing the stimulus become less
excitable. This can occur through various mechanisms, such as decreased neurotransmitter release
or changes in synaptic strength.
4. Selective Attention: Habituation is often associated with selective attention. As the organism
becomes habituated to a stimulus, it allocates fewer cognitive resources to processing it. This allows
the organism to focus its attention on more relevant or novel stimuli in its environment.
5. Generalization and Discrimination: Habituation can also involve processes of generalization and
discrimination. Generalization occurs when the organism becomes habituated not only to the
original stimulus but also to similar stimuli. Discrimination, on the other hand, involves the organism
learning to differentiate between the habituated stimulus and new, potentially relevant stimuli.
SENSITIZATION
1. Initial Exposure: Sensitization begins with the organism's initial exposure to a stimulus, which may
be novel or aversive. This exposure triggers a physiological or behavioral response, which can range
from mild to intense depending on the nature of the stimulus.
2. Arousal of Neural Circuits: The stimulus activates neural circuits in the brain associated with the
perception and processing of sensory information. This arousal may involve the release of
neurotransmitters such as serotonin, norepinephrine, or glutamate, which play key roles in
modulating neuronal activity and synaptic transmission.
3. Associative Learning: Sensitization often involves associative learning, wherein the stimulus
becomes associated with aversive or rewarding consequences. This association strengthens the
organism's response to the stimulus, as it anticipates the potential outcomes associated with it.
4. Generalization: Sensitization may generalize to stimuli that are similar to the sensitized one,
resulting in an amplified response to a broader range of stimuli. This generalization can occur across
sensory modalities or environmental contexts, leading to heightened sensitivity in various situations.
5. Maintenance and Reversal: The sensitization response may be maintained over time through
continued exposure to the stimulus or reinforcement of the associative learning. Alternatively,
sensitization may gradually diminish or be reversed through processes such as habituation or
extinction, wherein the stimulus loses its potency or the associative link weakens over time.
Drugs that stimulate the central nervous system increase an animal's overall readiness to respond, while depressant drugs suppress reactivity.
Emotional distress can also affect responsiveness: Anxiety increases reactivity; depression
decreases responsiveness.
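One simple way to picture how these opposing processes combine is a toy simulation (an illustrative model with arbitrary parameters, not one specified in this note): the observed response is treated as a habituation factor that decays with each repetition, multiplied by a sensitization factor that an intense or arousing event can boost.

```python
# Toy model of habituation and sensitization (arbitrary parameters).
# Observed response = decaying habituation factor * sensitization factor.

def simulate(trials=10, decay=0.8, boost_trial=None, boost=1.5):
    response = 1.0        # initial response strength to the novel stimulus
    sensitization = 1.0   # multiplier raised by an arousing/aversive event
    history = []
    for t in range(1, trials + 1):
        if t == boost_trial:
            sensitization *= boost   # e.g., illness, shock, or a loud noise
        history.append(round(response * sensitization, 2))
        response *= decay            # habituation: responding wanes with repetition
    return history

print(simulate())               # pure habituation: steadily declining response
print(simulate(boost_trial=6))  # sensitizing event at trial 6 restores responding
```

The second run also illustrates dishabituation, discussed later: a strong extraneous event temporarily restores responding to a stimulus that had been habituated.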
EVOLUTIONARY THEORY
Habituation and sensitization are two fundamental forms of non-associative learning observed
across a wide range of species. Evolutionary theory helps us understand why these processes occur
and how they may have developed to enhance survival and reproductive success
1. Energy Conservation: Habituation allows organisms to conserve energy and attention for more critical stimuli. By ignoring irrelevant or non-threatening stimuli, organisms can allocate resources more efficiently.
2. Improved Efficiency: Habituation prevents sensory overload, allowing animals to focus on novel or significant changes in their environment that might indicate danger, food, or mating opportunities.
3. Rapid Response to Danger: Sensitization ensures that organisms react quickly to dangerous stimuli, which is crucial for survival in environments where threats are frequent.
4. Learning and Memory: Sensitization may aid learning and memory by reinforcing the importance of certain stimuli, ensuring that organisms remember and avoid harmful situations in the future.
INGESTIONAL NEOPHOBIA
Ingestional neophobia is the reluctance or avoidance of trying new foods. This behaviour is common
in many animals, including humans, and serves as an evolutionary protective mechanism to prevent
the ingestion of potentially harmful or toxic substances. In young children, it often manifests as a
preference for familiar foods and a resistance to eating unfamiliar ones. Over time, with repeated
exposure and positive experiences, neophobia can decrease, leading to a more varied diet.
Habituation
Domjan's (1976) study documents the habituation of ingestional neophobia. Rats received either a 2% saccharin solution or plain water. The rats drank very little saccharin solution when first
exposed to this novel flavor. However, intake of saccharin increased with each subsequent
experience. These results indicate that the habituation of the neophobic response led to the
increasing consumption of saccharin.
Sensitization
Animals can also show an increased neophobic response. Suppose that an animal is sick. Under this illness condition, the animal will exhibit increased ingestional neophobia; the sensitization process causes this greater neophobic response.
Satiation: As you eat a particular food, you experience a decrease in the pleasure and desire to
continue eating that food. This reduction in enjoyment helps signal when to stop eating, promoting
energy balance and homeostasis. Repeated consumption of the same food accelerates satiation,
reducing overall intake and helping to prevent overeating.
Deprivation: When you are deprived of certain foods, your body may increase cravings and the
desire for those foods once they become available. This can lead to sensitization, where the
response to the reintroduced food is heightened, making you more likely to overconsume. The
increased response to a food after a period of deprivation can lead to binge eating or
overconsumption, disrupting homeostasis by causing an intake of calories that exceeds the body's
immediate needs.
Homeostasis and Regulation: The body's homeostatic mechanisms aim to balance energy intake and
expenditure. Habituation helps in maintaining this balance by promoting reduced intake over time,
while sensitization (often following deprivation) can lead to periods of excessive intake that
challenge homeostatic regulation.
DISHABITUATION
Dishabituation refers to the restoration or recovery of a response that had previously been reduced
or eliminated through habituation. It occurs when a novel or strong stimulus is introduced, causing
the organism to once again respond to the habituated stimulus.
Adaptive Flexibility: Dishabituation allows organisms to remain adaptable and responsive to their
environments. By resetting the response to a previously habituated stimulus when a new or
significant event occurs, organisms can ensure that they are not overlooking potentially important
changes in their surroundings.
Enhanced Sensory Processing: It helps in recalibrating sensory processing systems, ensuring that
important stimuli can be re-evaluated in light of new information. This is particularly useful in
dynamic environments where the significance of stimuli can change rapidly.
Survival and Threat Detection: If a previously ignored stimulus suddenly becomes relevant due to a
change in context (e.g., a predator approaching), dishabituation ensures that the organism can
quickly recognize and respond to the potential threat.
Learning and Memory: Dishabituation is a form of non-associative learning that contributes to the
overall learning process. It allows organisms to update their understanding of their environment,
enhancing their ability to learn from and adapt to new situations
APLYSIA CALIFORNICA
In the simple marine mollusk Aplysia, which lacks a shell, a well-studied defensive withdrawal
response is triggered when one of its three external organs—the gill, mantle, or siphon—is touched.
This reflexive behavior, common across many animal species, can be modulated by experience
through two key learning processes: habituation and sensitization. When a weak tactile stimulus is
repeatedly applied to the siphon, Aplysia’s withdrawal response gradually weakens—an example of
habituation, where repeated exposure to a benign stimulus leads to decreased responsiveness. In
contrast, if the tail is shocked before the siphon is touched, the mollusk exhibits a stronger-than-
usual reaction, demonstrating sensitization, a heightened response to a stimulus following a noxious
or intense event. At the cellular level, habituation is linked to a decrease in neurotransmitter
release from sensory neurons to the central nervous system, often due to increased activity of
inhibitory interneurons, resulting in reduced synaptic transmission. Sensitization, on the other hand,
involves the release of neuromodulators like serotonin or dopamine, which enhance synaptic
strength by increasing neurotransmitter release and neuronal excitability. These physiological
changes support the cellular modification theory, which posits that learning produces lasting
changes in neural systems—either by strengthening existing neural circuits or forming new neural
connections—thus providing a biological basis for memory and experience-dependent behavioral
adaptation.
Richard Solomon’s Opponent-Process Theory, developed in the 1970s, offers a framework for
understanding the dynamics of emotions and motivation by proposing that emotional experiences
operate in opposing pairs. According to the theory, every emotional response (the A-Process)
triggers an opposing reaction (the B-Process) that works to restore emotional balance. The A-Process
arises quickly and peaks early, while the B-Process emerges more slowly and can eventually override
the initial emotion before both diminish back to baseline. This model is particularly useful in
explaining addiction, where the initial pleasurable effects of a drug (A-Process) fade with repeated
use, while withdrawal symptoms (B-Process) intensify, prompting continued drug use to avoid
discomfort. The theory also applies to other behaviours, such as thrill-seeking, where an initial fear
response is followed by exhilaration and relief, reinforcing the behavior. Overall, Solomon’s theory
reveals how the interplay of opposing emotions shapes complex human motivations.
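The temporal interplay Solomon describes can be sketched with a small simulation (the exponential dynamics and parameter values below are illustrative assumptions, not Solomon's formal equations): the A-Process tracks the stimulus immediately, the B-Process slowly chases and outlasts it, and the felt emotion is their difference.

```python
# Illustrative opponent-process dynamics (assumed equations/parameters).
# A-process: fast, present while the stimulus lasts. B-process: slow,
# opposes A and outlasts it. Net felt emotion = A - B.

def opponent_process(stim_duration=30, total_steps=60, b_rate=0.1):
    b = 0.0
    net_emotion = []
    for t in range(total_steps):
        a = 1.0 if t < stim_duration else 0.0  # A-process on during the stimulus
        b += b_rate * (a - b)                  # B-process slowly approaches A
        net_emotion.append(a - b)
    return net_emotion

trace = opponent_process()
print(f"peak reaction at onset: {max(trace):.2f}")
print(f"after-reaction at offset: {min(trace):.2f}")  # opposite emotional state
```

Strengthening the B-Process across repeated exposures (e.g., a larger b_rate) reproduces the addiction pattern described above: a weaker high during the stimulus and a deeper after-reaction at its offset.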
VERBAL LEARNING
Verbal learning is the process of actively memorizing new material using mental pictures,
associations, and other activities. Verbal learning was first studied by Hermann Ebbinghaus, who
used lists of nonsense syllables to test recall.
In the classical verbal learning experiment each subject learns a list of items. Each trial involves a
study phase and a test phase. Here two major testing procedures are used.
1. Anticipation method - Study of an item is given after each test; a trial consists of the sequential presentation of all items of a list for test and immediate study.
2. Study-test presentation method - On each trial, all items are first shown one at a time for study and then presented again for testing.
Methods used in research on verbal learning are listed below.
1. Paired-associate learning - The anticipation method and the study-test presentation method are also used in paired-associate learning, in which stimulus-response pairs such as Overt-7, Rural-6, and Sorry-1 are learned.
2. Serial learning - In serial learning, each item serves both as a stimulus and as a response. Items are always presented in the same order. When the subject sees one particular word exposed, he is to try to guess or anticipate what the next one will be. A special signal serves as a stimulus for the recall of the first item of the list, and each item then serves as a stimulus for the recall of the next. Examples include learning the letters of the alphabet or learning to spell a word such as "flower".
3. Free recall - Here subjects are given a list of items and are later asked to recall as many as possible. Usually from 20 to 40 items are presented, one at a time. Recall may be oral or in writing. Subjects are instructed to recall as many words as they can without regard to the order in which the items were presented; the order of presentation is randomized from trial to trial.
Murdock found that the probability of recall of individual items in a list is a function of their position
in the list. He found that items in the end of the list were recalled better (Recency effect) and those
at the beginning of the list next (Primacy effect). The items in the middle of the list were recalled the
least. These results were independent of the list size used. However, variation in terms of serial position depends upon the nature of the material and the nature of the practice (rehearsal); a toy model of this serial-position curve is sketched after this list.
4. Recognition learning - Here subjects are given a list of items and, after the study phase, are given a test sheet and asked to circle the items they were shown before. This method involves distractor items mixed in with the previously studied items.
5. Verbal discrimination learning - In this type of procedure, a series of pairs of verbal items is presented, usually visually, and the subjects are asked to learn which member of each pair is "correct", i.e., the one arbitrarily selected by the experimenter as the right one. There is little evidence regarding the relation between meaningfulness/association value and verbal discrimination learning.
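As noted under free recall above, the U-shaped serial-position curve can be pictured with a toy model (the functional forms and parameter values are illustrative inventions, not Murdock's fitted data) in which recall probability is a baseline plus a primacy component that decays from the start of the list and a somewhat stronger recency component that rises toward the end.

```python
# Toy serial-position curve: baseline + primacy component (favors early
# items) + stronger recency component (favors late items). Forms and
# parameters are illustrative, not Murdock's data.
import math

def recall_probability(position, list_length, base=0.2):
    primacy = 0.3 * math.exp(-0.5 * (position - 1))
    recency = 0.5 * math.exp(-0.7 * (list_length - position))
    return min(1.0, base + primacy + recency)

for pos in range(1, 21):
    p = recall_probability(pos, 20)
    print(f"item {pos:2d}: {'#' * int(p * 40)} {p:.2f}")
```

Printed as a bar per position, the output shows high recall at both ends of the list, with the last few items highest (recency), the first few next (primacy), and the middle lowest.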
MATERIALS
Learning items are usually words, numerals, line drawings, or arbitrary letter combinations. Learning material is usually in the form of consonant-vowel-consonant (CVC) combinations (nonsense syllables) or consonant trigrams. Ebbinghaus objected to the use of words in verbal learning experiments because he had noted that some words are much easier to remember than others, depending upon their meaning and familiarity. Even so, nonsense syllables provided only an incomplete solution to this problem, since some syllables are more nonsensical than others. The rate of verbal learning is determined by the characteristics of the learning material; for example, with words, frequency of occurrence in the natural language is related to learning speed.
INTRODUCTION
A person’s biological character also affects other types of learning. Examples include developing a
preference for coffee, forming a lifelong attachment to one’s mother, or learning to avoid an
obnoxious neighbor.
Psychologists often use simplified and artificial setups—like training rats or monkeys to press a bar
for food or presenting a buzzer before feeding cats or dogs—not because these mimic real-world
scenarios, but because they serve to uncover the general laws of learning. In operant conditioning,
bar pressing is a preferred response because it is easily acquired by many species and lacks prior
associations, making the behavior more neutral and scientifically useful. As Skinner (1938) noted,
the specific form of the behavior is less important than its function in demonstrating how
reinforcement influences actions. These controlled conditions help reveal consistent rules of
learning, such as how various reinforcers affect behavior rates—principles shown to apply both in
the lab and the real world. Similarly, in classical conditioning, the choice of stimuli like buzzers and
food is arbitrary. Psychologists assume that any stimulus capable of triggering an unconditioned
response (UCR) can be paired with a wide range of neutral stimuli to form conditioned responses
(CRs), as Pavlov demonstrated. For instance, the same buzzer that was used to condition salivation
could have been used to condition fear if paired with an aversive event like shock. The key idea is
that these basic learning processes are generalizable—applicable across species, contexts, and
types of stimuli—which is why psychologists study them in such simplified forms.
ANIMAL MISBEHAVIOR
Breland and Breland observed that in certain operant conditioning situations, animals' instinctive
food-foraging and food-handling behaviours could interfere with learned responses. They found
that when food was used as a reinforcer, the natural behaviours associated with obtaining and
handling food were sometimes elicited so strongly that they began to disrupt or replace the
operant response, a phenomenon they termed "instinctive drift." This drift occurs because the
instinctive behaviours, being consistently reinforced by food, eventually dominate the learned
behavior, leading to what they called animal misbehaviour—such as cats lingering around the food
dispenser instead of performing the trained task. Building on this, Boakes et al. (1978) argued that
such misbehaviour is better explained by Pavlovian conditioning, where environmental cues
associated with food come to elicit species-typical behaviours, rather than operant learning alone.
Further studies by Timberlake, Wahl, and King (1982) suggested a more integrative view: that both
operant and Pavlovian conditioning contribute to animal misbehaviour. They found that such
behavior arises when food is paired with natural cues that normally elicit foraging behaviours—and
when these behaviours themselves are reinforced, they can override the operant response.
Importantly, misbehaviour is not common in all operant conditioning settings; it tends to occur only
when (1) the training cues resemble natural foraging stimuli, and (2) the instinctive behaviours are
also reinforced, allowing them to become dominant.
SCHEDULE-INDUCED BEHAVIOR
B. F. Skinner (1948) described an interesting pattern of stereotyped behavior that pigeons exhibited when reinforced for key pecking on a fixed-interval (FI) schedule, which he referred to as superstitious behavior. Why do animals exhibit superstitious behavior? Staddon and Simmelhag (1971) identified
two types of behavior produced when reinforcement (e.g., food) occurs on a regular basis:
Terminal behavior occurs during the last few seconds of the interval between reinforcement
presentations, and it is reinforcement oriented.
According to Staddon and Simmelhag (1971), terminal behavior occurs in stimulus situations that are
highly predictive of the occurrence of reinforcement; that is, terminal behavior is typically emitted
just prior to reinforcement on an FI schedule. In contrast, interim behavior occurs during stimulus
conditions that have a low probability of the occurrence of reinforcement; that is, interim behavior is
observed most frequently in the period following reinforcement. When FI schedules of
reinforcement elicit high levels of interim behavior, we refer to it as schedule-induced behavior.
Schedule-Induced Polydipsia
Schedule-induced polydipsia (SIP) refers to the excessive drinking behavior observed when animals are reinforced with food on interval schedules. John Falk (1961) first discovered SIP, observing that rats would drink large amounts of water between food deliveries. It is considered a form of interim behavior
and has been replicated across many species and reinforcement schedules.
Falk (1966) and Jacquet (1972) demonstrated SIP across various interval and compound
schedules.
Pellon et al. (2011) observed individual differences in SIP: some animals (high drinkers)
drank significantly more than others and had greater dopamine activity, suggesting
biological variability in responsiveness to reinforcement.
Wheel running: Studies (Levitsky & Collier, 1968; Staddon & Ayres, 1975) found high activity
immediately after reinforcement.
Aggression: Animals will sometimes exhibit aggression toward nearby targets post-
reinforcement.
These behaviours are instinctive actions triggered by the timing and predictability of reinforcement,
not by direct training.
Riley & Wetherington (1989) proposed that SIP and similar behaviours are instinctive, elicited by
periodic reinforcement. Their resistance to being altered by flavour aversion learning is key
evidence.
In Riley et al.'s (1979) study, rats developed a saccharin aversion after pairing it with illness. But this
aversion quickly extinguished, suggesting that SIP is relatively immune to aversive learning—
supporting the idea that it's instinct-driven, not learned.
Schedule-Induced Polydipsia (SIP) has been proposed by Gilbert (1974) as an animal model for
understanding human alcoholism, suggesting that interval reinforcement schedules in daily life—
such as work breaks or pay cycles—may lead to excessive alcohol consumption. Studies with rats
support this view, showing that under such schedules, animals consume large quantities of alcohol,
achieve blood alcohol levels comparable to human alcoholics, develop tolerance and withdrawal
symptoms, prefer alcohol over sugar, and even perform operant behaviours to gain access to
alcohol. This makes SIP a compelling model for examining the behavioral and biological
underpinnings of addiction. Genetic and neurobiological factors also play a significant role; high-
drinking animals demonstrate greater dopaminergic activity in response to reinforcement and
heightened sensitivity to amphetamines, indicating that variations in dopamine system functioning
may contribute to addiction vulnerability. Moreover, the principles of SIP extend beyond alcohol to
other substances, such as cocaine, where exposure to intermittent reinforcement schedules can
foster compulsive drug-seeking, suggesting a general mechanism of susceptibility within the brain’s
reward system. Despite these parallels, key differences between animal and human addiction must
be acknowledged. Human addictive behavior is shaped by a complex interplay of cognitive,
emotional, and social influences, and factors such as volition and contextual meaning play a much
larger role than in animal models. Therefore, while SIP helps elucidate core mechanisms, it cannot
fully capture the multifaceted nature of human addiction.
Some stimuli are more likely than others to become associated with a particular UCS. Garcia and
Koelling’s (1966) study showed that a taste is more salient when preceding illness than when preceding
shock, whereas a light or tone is more salient when preceding shock than when preceding illness.
Garcia and Koelling proposed that rats have an evolutionary preparedness to associate tastes with
illness. Young animals also acquire a strong aversion after one pairing. Taste cues are very salient in
terms of their associability with illness. Although rats form flavor aversions more readily than
environmental aversions, other species do not show this pattern of stimulus salience. Birds acquire
visual aversions more rapidly than taste aversions.
Learned-Safety Theory
Proposed by James Kalat and Paul Rozin (1971), the theory suggests that while contiguity is generally
essential in Pavlovian conditioning—such as when a child touches a flame and immediately feels
pain—a specialized mechanism evolved specifically for flavour aversion learning due to its unique
survival value. This mechanism allows animals to associate the taste of a potentially toxic food with
illness even if the symptoms occur several hours after ingestion. Such an adaptation enables animals
to avoid consuming harmful substances that don't produce immediate effects. Ingestional
neophobia, or the tendency to consume only a small amount of a novel food, also plays a crucial role
in this system. This cautious behavior has adaptive significance as it minimizes the risk of ingesting
large amounts of a potentially poisonous substance, giving the animal time to assess the safety of
the food based on later physiological consequences.
The lateral and central amygdala play a crucial role in both fear conditioning and flavour aversion
learning, acting as key structures in the brain’s processing of aversive experiences. Wig, Barnes, and
Pinel (2002) demonstrated that stimulating the lateral amygdala after rats consumed a novel flavor
induced a learned aversion, indicating its involvement in associating taste with negative outcomes.
Supporting this, Tucci, Rada, and Hernandez (1998) found increased glutamate activity in the
amygdala when rats encountered a flavor previously paired with illness, highlighting its role in
encoding aversive taste memories. Yamamoto and Ueji (2011) further mapped the neural circuitry
underlying flavor aversion, showing that detection begins in the gustatory cortex, then signals flow
through the amygdala and thalamic paraventricular nuclei to the prefrontal cortex, resulting in
avoidance behavior. Additionally, Agüera and Puerto (2015) found that damage to the central
amygdala impaired flavor aversion learning, reinforcing its importance. Altogether, these findings
suggest that the lateral and central amygdala are deeply involved in mediating aversive conditioning
related to both pain and illness.
FLAVOR PREFERENCE LEARNING
Flavor preferences can be based on flavor-sweetness associations or on flavor-nutrient associations. Tomato juice is an acquired taste: some people like the flavor of tomato juice, but other people do not. People's preference for tomato juice is an example of a conditioned flavor preference. Flavor preferences can be learned rapidly (Ackroff, Dym, Yiin, & Sclafani, 2009) or be acquired over a long delay (Ackroff, Drucker, & Sclafani, 2012).
Studies: Ackroff and colleagues found that rats preferred an unsweetened flavor paired with 8%
fructose and 0.2% saccharin over one paired with only 0.2% saccharin by the second preference test.
Ackroff, Drucker, and Sclafani (2012) found that flavor preferences can develop over a 60-minute
delay when an unsweetened flavor is paired with an 8% or 16% Polycose glucose nutrient solution.
People develop flavor preferences for two main reasons: association with sweetness and association
with positive nutritional outcomes (Myers & Sclafani, 2006). Flavor-sweetness preferences occur
when a nonsweet flavor is repeatedly paired with a sweet taste. For instance, Capaldi, Hunter, and
Lyn (1997) showed that rats could develop a preference for citric acid over salt when citric acid was
paired with sucrose or saccharin. Similarly, Ackroff and Sclafani (1999) demonstrated a preference
for unsweetened grape Kool-Aid paired with saccharin over cherry Kool-Aid paired with water.
Beyond sweetness, flavor-nutrient preference conditioning involves associating flavors with
nutrient-rich substances. This type of learning helps animals, including humans, identify and select
nutrient-dense foods (Sclafani, 2001). For example, Ackroff and Sclafani (2003) found rats preferred
a grape flavor paired with a 5% ethanol nutrient solution over cherry flavor with water, suggesting
the preference stemmed from nutritional value rather than taste alone. In humans, Capaldi and
Privitera (2007) observed that college students favoured flavors linked to high-fat cream cheese
over low-fat versions, despite similar bitterness. These preferences emerge early in life; studies show
young rats and children form flavor-sweetness and flavor-nutrient associations, even with
unsweetened flavors paired with glucose or high-calorie drinks (Melcer & Alberts, 1989; Myers &
Hall, 1998; Birch et al., 1990). Moreover, flavor-nutrient preferences can arise whether nutrients are
ingested or infused, as shown by Myers, Ferris, and Sclafani (2005), who found rats preferred a
flavor paired with glucose over one paired with sucrose, confirming that nutrient value alone can
guide flavor preferences.
Research indicates that dopamine neuron activity in the nucleus accumbens is fundamental to the
conditioning of both flavor-flavor and flavor-nutrient preferences (Sclafani, Touzani, & Bodnar,
2011). Sweet tastes like sucrose and creamy textures naturally activate dopamine neurons in this
brain region as unconditioned responses. When a previously bitter flavor—such as coffee—is paired
with sugar and cream, it can come to activate these neurons as a conditioned response. Supporting
this, Touzani, Bodnar, and Sclafani (2010) demonstrated that blocking dopamine receptors in the
nucleus accumbens disrupted the development of both types of conditioned flavor associations.
Beyond forming preferences, the nucleus accumbens also regulates dietary variety. Jang et al. (2017)
showed that while rats displayed different preferences for four equally nutritious but variably sweet flavors, those with nucleus accumbens lesions overwhelmingly chose the sweetest one. This suggests
the region not only helps form flavor preferences but also prevents over-reliance on a single, highly
palatable option—supporting dietary balance.
IMPRINTING
Imprinting refers to the rapid formation of a strong attachment between a young animal and a
caregiver or significant object during a specific window of development, known as a sensitive
period. First described by Konrad Lorenz (1952) in goslings, imprinting involves the young following
the first moving object they see after hatching, often their mother—but it can also be humans or
inanimate objects. Certain traits—like movement, vocalizations, rhythmic sounds, and size—
enhance the likelihood of imprinting (Fabricius, 1951; Collias & Collias, 1956; Weidman, 1956;
Schulman et al., 1970).
Research by Harry Harlow (1971) expanded imprinting to primates, showing that infant monkeys
formed stronger attachments to soft, warm, and comforting surrogate mothers than to cold or
unresponsive ones. This parallels Ainsworth’s (1982) findings in humans, where securely attached
infants had responsive caregivers, while inattentive parenting led to distress and avoidant behavior.
The timing of imprinting is critical. Sensitive periods vary by species—from just a few hours in birds
and goats to several months in primates and humans. Though imprinting is strongest in early hours
(Jaynes, 1956), Brown (1975) found that with sufficient experience, it can occur later too.
Two major theories explain imprinting: genetic predisposition and associative learning. Moltz
(1960, 1963) proposed that imprinting involves both Pavlovian and operant conditioning—initial
comfort from familiar objects reduces fear and strengthens attachment. Supporting this, both birds
and primates experience fear reduction when reunited with familiar figures (Bateson, 1969; Harlow,
1971).
Ultimately, imprinting is not just about early attachment but about emotional regulation, safety,
and social development, forming the foundation for later relationships—even into adulthood, as
individuals may continue to seek comfort from early attachment figures when threatened or
distressed.
Maternal Bonding in Human Infants
The specific attributes of the object are important in the formation of a social attachment (Moltz, 1960, 1963).
Building on this, Konrad Lorenz (1935) and Hess (1973) proposed that imprinting is a genetically
programmed learning process. Hess introduced the idea of an innate schema—a built-in expectation
in young animals—that guides them to imprint on the most appropriate object, typically their
parent, during a sensitive period when imprinting is most effective. This evolutionary adaptation
ensures survival by promoting attachment to a caregiver early in life.
Graham (1989) added further support to the instinctive view by pointing out that unlike classically
conditioned responses, which fade without reinforcement, behaviors directed at an imprinting object
are remarkably persistent. This stability suggests imprinting represents a distinct form of learning,
one that blends biological preparedness with early environmental cues, shaping maternal bonding
and social attachment in both animals and humans.
Social attachments involve inhibiting fears and promoting attachment-related behaviors, contrasting
with the role of the lateral and central amygdala in aversive conditioning. Tottenham, Shapiro,
Telzer, and Humphreys (2012) found that the dorsal amygdala activation correlates with maternal
approach behaviors in children and adolescents. Coria-Avila et al. (2014) reported on a neural circuit
from the dorsal amygdala to the nucleus accumbens that motivates social attachment behaviors,
linked to increased dopamine activity in the nucleus accumbens during maternal attachment.
Additionally, Strathearn (2011) noted that this dopamine pathway functions more actively in secure
maternal relationships but is less active in anxious maternal relationships.
Bolles proposed that animals are not only driven by the pursuit of rewards like food or mates but
also possess instinctive mechanisms to avoid danger. These innate strategies are essential for
survival because animals often don’t have the luxury of time to learn from repeated exposure to
threats. Bolles introduced the concept of species-specific defence reactions (SSDRs), which are
automatic, evolutionarily preserved responses used to escape danger. These vary by species: rats
instinctively freeze, flee, or fight; birds fly away; and mice exhibit timid behavior. Importantly, SSDRs
are context-dependent—Bolles and Collier (1976) found that rats shocked in different-shaped boxes
responded with context-specific SSDRs (freezing in square boxes, running in rectangular ones).
Animals readily learn avoidance behaviors if those behaviors align with their SSDRs. For instance, rats easily
learn to run to avoid shock, but struggle to learn arbitrary responses like bar pressing. Bolles (1969)
demonstrated that while rats quickly learned to run in an activity wheel to avoid shock, they could
not learn to stand on their hind legs to avoid it. Bolles and Riley (1973) showed that freezing
behavior could be quickly acquired via Pavlovian conditioning and could not be reduced by
punishment, suggesting that avoidance learning is rooted in innate Pavlovian mechanisms, not
operant reinforcement.
In humans, emotional responses to threats exhibit similar SSDR-like patterns. Barbara Fredrickson’s
research found that negative emotions like fear or anger reduce cognitive flexibility and limit
behavioral responses, while positive emotions (joy, contentment) broaden a person's range of
potential actions. This suggests that positive emotional states enhance adaptive capacity, while
negative states narrow responses to instinctive, often defensive behavior. Understanding this can
help improve strategies for resilience and coping under stress.
The discovery by Olds and Milner (1954) that rats would self-stimulate certain brain regions by
pressing a bar marked the beginning of our understanding of the brain's reinforcement system. This
intracranial self-stimulation demonstrated that electrical stimulation of the brain could serve as a
powerful reinforcer across species, including rats, pigeons, dogs, primates, and even humans. The
most critical area for this effect is the medial forebrain bundle, a part of the limbic system. Larry
Stein and colleagues (1973) showed that stimulation of this area is highly reinforcing, motivates
behavior, becomes more active in the presence of reward, and is enhanced by deprivation.
Expanding on this, Wise and Rompre (1989) proposed the mesolimbic reinforcement system, which
includes two main neural pathways: the tegmentostriatal pathway (which identifies reinforcement-
related stimuli and connects to the nucleus accumbens) and the nigrostriatal pathway (which helps
store reinforcement-related experiences). Central to both pathways is dopamine, a neurotransmitter
that regulates reinforcement by connecting the ventral tegmental area with key areas like the
nucleus accumbens and prefrontal cortex. Natural rewards (like food or water) and drugs (like
amphetamine and cocaine) both trigger dopamine release, reinforcing behavior. Animals even self-
administer these drugs, suggesting the powerful role of dopamine in addictive behavior.
Moreover, opiates like morphine and heroin activate the tegmentostriatal pathway through
separate opiate receptors, but they also boost dopamine activity in the nucleus accumbens. This
dopamine-opiate interaction strengthens the link between reinforcement and addictive potential.
The Dual Receptor Theory by Koob (1992) supports this dual influence, highlighting the combined
effect of dopamine and opiate pathways in producing strong reinforcement signals.
Individual differences in reinforcement responsiveness are also observed. Some animals, like high
sucrose feeders (HSFs), show greater dopamine activity and consume more rewards (like sugar or
amphetamines) than low sucrose feeders (LSFs). This heightened mesolimbic activity may explain
variations in compulsive behavior like gambling or hypersexuality, especially in clinical contexts such
as Parkinson’s disease, where dopamine-enhancing medications trigger such behavior. Recognizing
these differences can guide treatments for addiction by targeting the mesolimbic system.
CONDITIONING
PAVLOVIAN CONDITIONING
Ivan Pavlov, originally a physiologist studying digestion, made a groundbreaking observation when
he noticed that dogs began salivating not just when food was placed in their mouths, but also when
they saw food or objects associated with food (like dishes). He theorized that this anticipatory
salivation was a learned response. Pavlov proposed that both humans and animals possess
unconditioned reflexes, which are automatic, biologically ingrained reactions to stimuli. For
example, food (an unconditioned stimulus, UCS) naturally triggers salivation (an unconditioned
response, UCR).
When a neutral stimulus (e.g., a metronome) is repeatedly paired with the UCS, it becomes a
conditioned stimulus (CS), capable of eliciting a conditioned response (CR)—in this case, salivation.
Over time, the strength of the CR increases, demonstrating associative learning. Pavlov’s studies
revealed key principles like stimulus generalization (similar stimuli to the CS also elicit the CR) and
extinction (the CR fades when the CS is no longer paired with the UCS).
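These acquisition and extinction dynamics can be made concrete with a toy simulation. The linear update rule and learning-rate value below are illustrative assumptions (Pavlov himself proposed no such equation):

def simulate(pairings, extinction_trials, rate=0.3):
    """Toy acquisition-and-extinction curve.

    Associative strength V climbs toward 1.0 on each CS-UCS pairing and
    decays toward 0.0 on each CS-alone (extinction) trial. The linear
    update rule and the rate value are illustrative assumptions only.
    """
    V = 0.0
    history = []
    for _ in range(pairings):            # acquisition: CS paired with UCS
        V += rate * (1.0 - V)
        history.append(V)
    for _ in range(extinction_trials):   # extinction: CS presented alone
        V += rate * (0.0 - V)
        history.append(V)
    return history

# CR strength rises over 10 pairings, then fades over 10 CS-alone trials.
print([round(v, 2) for v in simulate(10, 10)])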
Conditioned responses aren’t limited to salivation. Hunger itself can be conditioned. For instance, if
someone regularly encounters food in the kitchen, cues like the sight of the refrigerator or pantry
can trigger hunger. This is because these environmental cues (CSs) become associated with food
(UCS), which naturally elicits physiological responses like salivation, insulin release, and gastric
secretions (UCRs). Over time, the sight of a cupboard can trigger these responses (CRs), causing the
person to feel hungry—even if they’re not biologically deprived of food.
Insulin, in particular, plays a key role: it lowers blood glucose levels, which in turn stimulates the
sensation of hunger. Therefore, palatable foods that trigger stronger unconditioned responses (like
chocolate or pie) result in stronger conditioned responses, making the CR more intense and harder
to resist.
Motivational Power of Conditioned Cues
Experiments with rats have shown that food-associated environmental cues can override satiety. For
example, Gallagher and Holland found that rats trained to associate a tone or specific environment
with food ate even when they were full, simply due to the presence of these cues. This behavior
wasn’t limited to familiar food—it extended to novel foods as well.
Similar findings were observed in humans. In one study by Birch et al. (1989), children exposed to
audiovisual cues paired with snacks consumed more food in response to those cues later—even
when satiated. Ridley-Siegert et al. (2015) further found that visual cues linked with chocolate
increased food intake more than cues linked to chips or non-food stimuli, highlighting chocolate’s
high motivational value.
The basolateral amygdala (BLA) is central to how conditioned cues drive feeding behavior. During
conditioning, the BLA becomes activated by tone-food pairings. With continued pairings, this
activation spreads to a neural pathway that includes the medial prefrontal cortex (mPFC)—a region
involved in executive decision-making. This means the brain’s higher cognitive systems get involved
in cue-driven eating.
Additionally, the neuropeptide orexin, which regulates arousal and appetite, becomes activated
during these pairings, especially in the pathway from the amygdala to the prefrontal cortex. The
nucleus accumbens, a reward-processing area, also receives input from the BLA. Importantly,
damage to the BLA disrupts conditioned feeding—both preventing learning if the damage occurs
before conditioning and erasing the CR if it happens after training.
In humans, studies show that the amygdala activates in response to the sight or thought of
preferred foods, but not neutral ones—even when individuals are already full. This supports the idea
that food-related environmental cues can elicit motivational responses that bypass actual hunger
signals.
Conditioning of Fear
Fear conditioning involves the association of a previously neutral stimulus (CS) with an aversive
unconditioned stimulus (UCS), such as sudden turbulence in an airplane. The UCS (e.g., sharp drop)
elicits an unconditioned response (UCR), which includes both psychological distress and physiological
arousal. Over time, cues predicting the UCS, like the seatbelt light or storm clouds, become
conditioned stimuli (CSs) that trigger fear responses (CRs) even in the absence of turbulence.
The severity of the UCS (e.g., intensity of turbulence) influences the intensity of the UCR.
Additionally, repeated exposure can heighten reactivity through sensitization. Stimuli associated
with past aversive events can later elicit anticipatory fear responses (CRs), as seen in real-life
scenarios like Juliette's fear of darkness resulting from a traumatic event at night.
Early work by Bechterev (1913) and John Watson (1916) showed that pairing a neutral stimulus with
an aversive event (e.g., shock) leads to conditioned emotional responses. These findings laid the
foundation for understanding how fear is learned through Pavlovian mechanisms in both animals
and humans.
Unlike conditioned hunger (linked to the basolateral amygdala), conditioned fear primarily involves
the lateral and central amygdala. Studies in rats and mice show increased activity and neural
changes in these areas during fear conditioning. Lesions in the lateral/central amygdala impair the
acquisition and expression of fear responses to aversive stimuli, highlighting their crucial role.
Conditioning Techniques
1. Eyeblink Conditioning
Involves pairing a tone (CS) with a puff of air to the eye (UCS). Over repeated pairings, the tone alone
elicits a conditioned eyeblink response (CR). This technique is widely used in both animal and human
studies to explore associative learning and its neurological basis.
2. Taste-Aversion Conditioning
Occurs when consumption of a food (CS) is followed by illness (UCS), resulting in long-lasting
avoidance of that food (CR). Research by John Garcia demonstrated that even highly preferred
flavors (like saccharin) are avoided if followed by illness. This kind of conditioning is strong and
often occurs after a single trial, even if the illness is delayed.
EMPIRICAL OBSERVATION
Pavlov demonstrated that placing acid in a dog’s mouth causes a natural defensive response—
mouth movements and salivation—to remove the irritant. When a neutral sound is paired
repeatedly with the acid application, the dog begins to exhibit the same salivation and mouth
movement just in response to the sound. This illustrates classical conditioning, where a neutral
stimulus (CS) becomes capable of eliciting a response (CR) due to its association with an
unconditioned stimulus (US).
3. Experimental Extinction
If a CS is presented without the US repeatedly, the learned CR gradually weakens and disappears.
This is known as extinction. Since the CS is no longer reinforced by the US, it loses its power to elicit
the CR. In classical conditioning, the US acts as a reinforcer.
4. Spontaneous Recovery
After extinction, if some time passes and the CS is presented again, the CR may reappear
temporarily. This is called spontaneous recovery and shows that extinction does not completely
erase the learned association.
5. Higher-Order Conditioning
A conditioned stimulus (CS) can gain secondary reinforcing properties. For example, if a blinking
light (CS1) is paired with food (US), the dog learns to salivate to the light (CR). Then, a new stimulus
like a buzzer (CS2) can be paired with the light (without food). Eventually, the buzzer alone elicits
salivation—this is second-order conditioning. If a third stimulus (e.g., a tone) is paired with the
buzzer, and the tone also elicits a CR, it’s called third-order conditioning. This demonstrates how
conditioning can extend beyond the original US.
6. Generalization
After conditioning to a 2,000-cps tone, presenting tones of similar frequency also elicits a CR, though
with decreasing strength as similarity decreases. This is called stimulus generalization. The more
similar a new stimulus is to the original CS, the stronger the response. This concept is closely related
to Thorndike’s theory of transfer, where similar situations trigger similar responses. However, while
generalization depends on stimulus similarity, Thorndike’s spread of effect is more about proximity
of responses, not similarity.
1. Extinction
Classical conditioning principles are used in therapy by assuming that maladaptive behaviours like
smoking or drinking are learned and can therefore be unlearned. For instance, the taste of alcohol
or cigarettes (CS) becomes associated with pleasurable physiological effects (US), producing
pleasure (CR). If the CS is repeatedly presented without the US (e.g., tasting alcohol without getting
intoxicated), extinction can occur, leading to a reduction or elimination of the behavior.
2. Counterconditioning
Counterconditioning is often more effective than extinction alone. In this method, the CS (e.g.,
alcohol or cigarette taste) is paired with a new, aversive US, such as a nausea-inducing drug. Over
time, the CS comes to elicit a negative response (like nausea), which helps develop an aversion. For
example, injecting Anectine, a drug that produces frightening momentary respiratory paralysis, after
drinking changed the behavior of most participants in one study. However, such effects are often
temporary and not guaranteed to last long-term.
3. Flooding
Flooding is a treatment for phobias based on extinction. It works by forcing the person to face the
feared stimulus (CS) without escape, allowing them to learn no harm will follow (no US). This helps
extinguish the fear response. For example, a person with dog phobia must be exposed to a dog for
an extended time. Although fast-acting, flooding can produce high dropout rates and even worsen
symptoms for some, since it involves intense exposure to something the person has long feared.
4. Systematic Desensitization
Developed by Joseph Wolpe, this technique also targets phobias but in a more gradual, controlled
manner. It includes three phases, the first of which is to build an anxiety hierarchy—a ranked list of
related situations from least to most anxiety-provoking. Clients are then gradually exposed to these
situations while practicing relaxation techniques, helping to replace fear with calm responses. It is
safer and generally better tolerated than flooding.
GARCIA EFFECT
The Garcia Effect, also known as taste-aversion learning or long-delay learning, refers to the
phenomenon where animals learn to associate specific stimuli—particularly taste—with illness, even
when the negative consequence is delayed. Research has shown that species like rats, quail, and
monkeys are biologically predisposed to form such aversions due to evolutionary adaptations. For
instance, rats readily associate taste with internal discomfort, like illness, but not with external pain
such as foot shocks, making them notoriously difficult to poison. In a classic study, Garcia and
Koelling (1966) demonstrated that rats developed a strong aversion to flavoured (tasty) water after
being exposed to illness-inducing X-rays, whereas rats exposed to bright-noisy (audiovisual) water
did not show the same aversion after X-rays but did when foot shocks were used. This suggests that
internal threats are more strongly associated with taste, while external threats are linked with
audiovisual cues. Garcia’s further experiments showed that rats could associate taste with illness
even when the onset of sickness was delayed by up to 75 minutes, proving that long-delay
associations are not only possible but also advantageous for survival. Expanding on these findings,
Seligman (1970) proposed the biological preparedness continuum, which includes prepared
associations (easily formed, like taste-illness), unprepared associations (requiring more exposure),
and contraprepared associations (difficult or impossible to learn), emphasizing that evolutionary
pressures influence learning capacity. Moreover, learned aversions are not confined to taste; quail
form aversions based on both taste and visual cues, and monkeys have been shown to avoid cookies
of a specific shape after becoming ill, indicating that aversion learning can involve multiple sensory
modalities depending on species-specific adaptations. The Garcia Effect has important practical
implications: it helps explain why cancer patients often develop aversions to foods consumed before
chemotherapy, and it can be used to condition predators to avoid certain prey, proving its relevance
in both medical and ecological applications.
APPETITIVE CONDITIONING
Appetitive conditioning is a type of learning in which behaviours are strengthened because they
lead to rewarding or satisfying outcomes, making it a central mechanism in both human and animal
behavior. This form of conditioning plays a crucial role in shaping actions related to motivation,
reinforcement, and reward-seeking. The process involves a neutral stimulus becoming associated
with a positive reinforcer—such as food, praise, or affection—which in turn increases the likelihood
that the associated behavior will be repeated. As a result, organisms learn to perform certain
behaviours more frequently when those behaviours reliably predict or lead to a desirable event,
supporting the development of goal-directed actions.
B.F. Skinner emphasized that reinforcement shapes behavior, and a central concept in his theory is
contingency—the specific relationship between behavior and reinforcement. According to Skinner,
it’s the environment that determines these contingencies, and individuals must behave in specific
ways to receive reinforcement. He rejected internal explanations of reinforcement, asserting that
observable behavior and external stimuli drive learning.
In appetitive contexts, instrumental conditioning occurs when the environment limits opportunities
for reward, thus constraining responses. Conversely, operant conditioning allows the subject to
freely control response frequency and thereby the reinforcement received. Both forms investigate
how behavior is modified by reinforcement, but differ in the degree of response freedom.
Reinforcement can be primary (inherently satisfying like food) or secondary (learned, like money).
Skinner developed the shaping technique, where behaviours closer and closer to the desired
response are reinforced progressively. This method increases the speed and precision of learning a
new behavior by reinforcing successive approximations.
SCHEDULES OF REINFORCEMENT
Reinforcement can be delivered based on time (interval schedules) or the number of responses
(ratio schedules). These schedules affect the speed and pattern of learning. Different types of
schedules—fixed or variable—produce distinct response rates and patterns of behavior.
FIXED-RATIO SCHEDULES
Fixed ratio (FR) schedules require a specific number of responses for reinforcement. They produce
consistent responding, with response rates increasing as the ratio increases. A post-reinforcement
pause often follows reinforcement, especially at higher ratios.
VARIABLE-RATIO SCHEDULES
Variable ratio (VR) schedules involve an unpredictable number of responses for reinforcement. They
generate high and steady response rates with minimal pauses. VR schedules are generally more
effective than FR schedules in maintaining behavior.
FIXED-INTERVAL SCHEDULES
With fixed interval (FI) schedules, the first response after a set time is reinforced. This leads to a
scalloped response pattern—post-reinforcement pauses followed by an accelerating rate of
responding as the interval nears completion.
VARIABLE-INTERVAL SCHEDULES
Variable interval (VI) schedules have changing time intervals between reinforcements. Response
rates tend to be moderate and steady, and the scallop pattern seen in FI schedules is absent. Longer
intervals produce lower response rates.
DIFFERENTIAL REINFORCEMENT SCHEDULES
These include:
DRH (High Rate): Reinforcement given for high response rates (e.g., studying consistently for
exams).
DRL (Low Rate): Reinforcement only after slow response rates.
DRO (Other Behaviours): Reinforcement given when a specific behavior (e.g., hitting) does
not occur for a set time.
Compound Schedules
In many real-life situations, the relationship between behavior and reinforcement involves more
than one schedule, leading to what is known as a compound schedule. In these cases, two or more
reinforcement schedules are combined in sequence or simultaneously. For example, in a compound
schedule involving FR-10 and FI-1 minute, a rat must first press a lever ten times (Fixed Ratio 10),
and then wait for one minute (Fixed Interval 1 minute) after the last press before pressing again will
yield a reward. The rat must complete both requirements in the proper order to receive
reinforcement. This demonstrates that both animals and humans can adapt to complex
contingencies in reinforcement, showcasing their sensitivity to such learning environments.
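As a concrete illustration, the FR-10 plus FI-1-minute contingency can be written out as a small program. This is a minimal sketch of the rule described above, not actual operant-chamber software; the class name and timing interface are invented for the example:

class CompoundSchedule:
    """Minimal sketch of the FR-10 + FI-1-minute compound schedule:
    reinforcement requires ten presses (fixed ratio), then a response
    made at least 60 seconds after the tenth press (fixed interval)."""

    def __init__(self, ratio=10, interval=60.0):
        self.ratio = ratio
        self.interval = interval
        self.count = 0
        self.ratio_done_at = None   # time at which the FR requirement was met

    def respond(self, now):
        """Register one lever press at time `now`; return True if reinforced."""
        if self.ratio_done_at is None:
            self.count += 1
            if self.count >= self.ratio:
                self.ratio_done_at = now    # FR complete; FI clock starts
            return False
        if now - self.ratio_done_at >= self.interval:
            self.count = 0                  # both requirements met, in order
            self.ratio_done_at = None
            return True
        return False                        # pressed too early; keep waiting

# Ten quick presses, then a press made 61 seconds after the tenth is reinforced.
schedule = CompoundSchedule()
t = 0.0
for _ in range(10):
    schedule.respond(t)
    t += 1.0
print(schedule.respond(t + 60.0))   # True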
Two major factors influence the strength and speed of learning in instrumental or operant
conditioning:
1. Importance of Contiguity
For conditioning to be effective, the reward must closely follow the behavior. Delays between a
response and its reinforcement greatly impair learning. In one experiment with rats, delays of just
1.2 seconds significantly reduced conditioning. According to Perin (1943), even a 10-second delay
led to only moderate learning, while delays of 30 seconds or more completely prevented acquisition
of the bar-pressing behavior. This highlights the critical role of temporal proximity between
behavior and consequence.
2. Importance of Reward Magnitude
The size of the reward also significantly affects learning. In Crespi's (1942) study, rats receiving
larger food rewards (e.g., 64 or 256 units) learned to run faster in an alley than those receiving
smaller amounts (1 or 4 units). Similarly, Guttman (1953) found that rats reinforced with higher
concentrations of sucrose solutions learned bar pressing more quickly. These findings emphasize
that larger reinforcers enhance both the rate and strength of learning.
Previous reward history influences current learning performance. This is evident in two types of
contrast effects:
Positive Contrast: Performance improves when a small reward is followed by a large reward.
The new, larger reward appears even more valuable due to the contrast with the earlier
smaller one.
Negative Contrast: Performance drops when a large reward is followed by a small one. The
smaller reward feels disappointing in comparison.
Bower (1981) argued that a ceiling effect may explain why positive contrast effects sometimes fail to
appear — if the high reward already maximizes performance, an increase can’t elevate it further.
These contrast effects occur because expectations shaped by prior rewards alter the perceived value
of current outcomes.
An instrumental or operant response that has been learned through reinforcement can be
extinguished if reinforcement is consistently withheld. Over time, when a behavior no longer leads
to a reward, the strength of the response diminishes and eventually stops. This extinction process is
critical to understanding behavioral flexibility and learning limits.
Frustration Theory
Abram Amsel (1958) proposed that the absence of an expected reward causes frustration, which
plays a role in learning:
1. Learned Frustration: Over time, environmental cues associated with nonreward come to
trigger anticipatory frustration through classical conditioning.
2. Impact: This frustration can influence future behaviours, persistence, and the strength of
learning. It plays a key role in partial reinforcement effects and resistance to extinction.
Resistance to Extinction
1. Magnitude of Reward
According to D'Amato (1970), the impact of reward size on extinction resistance depends on the
amount of training: with limited training, larger rewards produce greater resistance to extinction,
whereas after extended training, larger rewards lead to faster extinction.
2. Consistency of Reward
Extinction occurs more slowly after partial reinforcement than after continuous reinforcement. This
is known as the Partial Reinforcement Effect (PRE). Weinstock’s study demonstrated that rats
receiving rewards on fewer trials during acquisition showed greater resistance to extinction. The
lower the percentage of rewarded trials, the stronger the persistence during extinction.
Amsel's frustration theory posits that intermittently rewarded animals learn to continue responding despite
frustration. The removal of a large reward produces greater frustration than a small one. In partially
reinforced animals, this frustration becomes a cue for continued responding, leading to slower
extinction.
Capaldi proposed that animals associate the memory of a non-rewarded trial (SN) with the
instrumental response when it's followed by a reward. During extinction, these SN cues persist,
encouraging continued responding. Animals trained with continuous rewards never experience SN,
so they do not build this association — leading to quicker extinction.
According to Flaherty (1985), the partial reinforcement effect is adaptive in natural settings:
persistence pays off when rewards arrive only intermittently, yet PRE does not promote endless
responding without result; it helps strike a balance between persistence and disengagement,
allowing flexible, goal-directed behavior in uncertain environments.
CONTINGENCY MANAGEMENT
Contingency management typically proceeds through three key stages to modify behavior effectively.
The first stage, known as the assessment stage, involves identifying both the frequency of
appropriate and inappropriate behaviours, as well as the specific situations in which they occur.
During this phase, the reinforcers maintaining the inappropriate behavior are examined, and
potential reinforcers that could support the desired (appropriate) behavior are also identified. This
provides a foundation for targeted intervention.
The second stage, called the contingency contracting stage, involves clearly specifying the
relationship between the individual’s responses and the delivery of reinforcement. This includes
determining how reinforcement will be administered and ensuring that it is contingent upon the
occurrence of appropriate behaviours. A formal contract or plan is often developed during this phase
to establish expectations and reinforce consistency.
In the final stage, the implementation stage, the planned contingencies are put into action. This
phase focuses on monitoring behavioral changes that occur during the treatment, as well as
evaluating whether these changes are maintained after formal treatment ends. The primary goal is
to ensure that the intervention produces lasting behavioral improvements through effective use of
reinforcement principles.
Premack (1959) proposed that reinforcement is relative, and that an activity can serve as a reinforcer
if it has a higher probability of occurrence than the behavior it is meant to reinforce. In his classic
study with children, he placed a pinball machine next to a candy dispenser. Some children preferred
playing pinball (“manipulators”), while others preferred eating candy (“eaters”). In the second phase
of the study, manipulators had to eat candy to access the pinball, while eaters had to play pinball to
receive candy. The results showed that both groups increased their performance of the low-
probability activity to gain access to the high-probability one, supporting Premack’s theory that more
probable behaviours can reinforce less probable ones.
Timberlake and Allison’s response deprivation theory suggests that when access to a normally
preferred activity is restricted, it becomes a powerful reinforcer, regardless of its baseline
probability. In experiments, rats increased their drinking behavior when it allowed access to a
restricted running wheel. Similarly, children increased their writing or arithmetic when access to
these activities was limited. For instance, in Evan’s case, restricting TV and computer games
increased his motivation to complete homework. The deprivation, not relative preference, drives the
reinforcing power.
AVERSIVE CONDITIONING
Aversive events can be either escaped or avoided, or in some cases, unavoidable. Escape behavior
involves terminating an unpleasant experience once it begins, such as attacking a mugger to end an
assault. Avoidance behavior involves taking action to prevent the aversive event, like avoiding going
out at night to prevent mugging. However, some aversive events, such as child abuse, are
inescapable. In such cases, learned helplessness may develop, where individuals stop trying to
escape due to repeated failure.
ESCAPE CONDITIONING
Miller (1948) demonstrated that rats could escape electric shock by turning a wheel to open a door.
Hiroto (1974) showed that college students terminated unpleasant noise by moving a finger across a
shuttle box. Even behaviours like closing one's eyes during a scary movie scene are examples of
escape responses.
Factors Influencing Escape Conditioning
1. Intensity of the Aversive Event: More intense aversive events increase the desire to escape
but may also discourage helping behavior. Piliavin et al. (1975) showed that bystanders were
less likely to help a fainting victim with a visible deformity, as the unpleasantness heightened
the cost of helping.
2. Absence of Reinforcement: Campbell and Kraeling (1953) found that rats escaped faster
from a 400-volt shock when the goal-box shock was greatly reduced. The more relief gained,
the stronger the escape learning.
3. Delayed Reinforcement: Fowler and Trapold (1962) showed that escape behavior weakened
when shock termination was delayed. Even a 3-second delay can eliminate conditioning. This
highlights the importance of immediacy in negative reinforcement.
Extinction of Escape Behavior
1. Removal of Negative Reinforcement: If the aversive event continues even after the escape
behavior, the behavior eventually stops. Fazzaro and D’Amato (1969) found that rats trained
with more trials persisted longer during extinction.
2. Absence of Aversive Events: When aversive stimuli are no longer presented, escape
behaviours diminish. However, due to anticipatory cues associated with past aversive
events, escape responses may persist until these cues no longer elicit a reaction.
AVOIDANCE CONDITIONING
Avoidance occurs when behavior prevents the onset of an aversive event. For instance, a teenage
girl may lie about needing to study to avoid going to a party with someone she dislikes; this is an
active avoidance response. Ignoring a dentist's appointment reminder due to dental anxiety is a
passive avoidance response.
1. Active Avoidance: Involves performing a behavior that prevents the aversive event from
occurring, as in the party example above.
2. Passive Avoidance: Involves withholding behavior to avoid punishment. Studies show that
rats avoid areas or behaviours previously associated with shock. For example, they may
refuse to leave a safe platform (Hines & Paolino, 1970) or avoid bar pressing if it previously
led to shock (Camp et al., 1967).
PUNISHMENT
TYPES OF PUNISHMENT
1. Positive Punishment involves the presentation of an aversive stimulus (e.g., a spanking or
an electric shock) following an undesired behavior.
2. Negative Punishment (or omission training) involves the removal of a reinforcing stimulus
after an undesired behavior. One form of negative punishment is response cost, where a
person loses access to a reinforcer (e.g., money, points, privileges). Another form is time-out
from reinforcement, where the individual is placed in an environment where no
reinforcement is available, such as being sent to a room or isolation.
EFFECTIVENESS OF PUNISHMENT
B.F. Skinner (1953) argued that punishment only temporarily suppresses behavior. In one of his
experiments, rats trained to press a bar for food were later punished by a paw slap. While this
initially reduced bar-pressing behavior, the effect was short-lived. After 30 minutes, both punished
and unpunished rats pressed the bar at similar rates. Skinner concluded that punishment suppresses
but does not eliminate behavior, and suggested using extinction instead.
However, later research (e.g., Campbell & Church, 1969) has shown that under certain conditions,
punishment can lead to long-lasting behavioral suppression.
SEVERITY OF PUNISHMENT
The effectiveness of punishment increases with its severity. Mild punishments often produce weak
effects, while stronger punishments can result in permanent suppression. In Camp, Raymond, and
Church’s (1967) study, rats received shock punishments of varying intensities after pressing a bar.
Higher intensity shocks produced greater suppression of bar-pressing. These findings were
consistent across other species, including monkeys, pigeons, and humans. Human studies also
showed that more intense punishments, such as louder noises, more strongly deterred children
from playing with punished toys.
CONSISTENCY OF PUNISHMENT
Consistency is crucial for punishment to be effective. In a study by Parke and Deur (1972), boys who
were punished with a loud buzzer every time they hit a doll quickly stopped the behavior. However,
those who were punished only intermittently did not show the same suppression. This suggests that
partial punishment leads to weaker learning, much like partial reinforcement effects in reward
learning.
DELAY OF PUNISHMENT
The timing of punishment plays a critical role. Delayed punishment is less effective than immediate
punishment. In animal studies, Camp et al. (1967) showed that rats that received immediate
punishment stopped bar-pressing more readily than those who received delayed shocks (2, 7.5, or
30 seconds later). Similarly, in classroom settings, immediate scolding was more effective in
reducing misbehaviour in children than delayed punishment. These findings emphasize the need for
swift consequences to enhance behavioral suppression.
Though punishment may reduce undesirable behavior, it carries several potential negative side
effects:
1. Pain-Induced Aggression
Punishment, especially when painful, can trigger aggression. Studies show that animals (e.g.,
monkeys, cats) may attack others or inanimate objects after being shocked. In humans, painful
events can lead to anger-driven aggression, especially when the amygdala is activated and not
effectively regulated by the prefrontal cortex. However, previous experiences with nonaggressive
conflict resolution can reduce aggression after punishment (Hokanson, 1970).
2. Modelling of Aggression
Punishment can teach aggression through modelling, particularly in children. Bandura (1971)
emphasized that behaviours can be learned by observation. Children who watch aggressive cartoons
(Steuer et al., 1971) or experience verbal punishment (Mischel & Grusec, 1966) are more likely to
imitate aggression. Moreover, correlational studies show that physically punished children are
more likely to display aggressive behavior.
3. Aversiveness of the Punisher
Punishers themselves can become aversive stimuli. In a study by Redd, Morris, and Martin (1975),
children performed tasks in front of adults giving either positive or negative feedback. Although the
punitive adult successfully kept children on task, the children preferred working with the positive
adult. This highlights the emotional and social consequences of using aversive punishers.
Punishment, when used appropriately, can be an effective method for modifying undesirable
behavior. In behavioral therapy and applied psychology, several techniques that rely on aversive
conditioning—such as flooding, response cost, and time-out—have been used to treat a wide range
of behavioral issues. These approaches are often applied when more conventional methods fail or
when rapid behavioral suppression is needed.
Two key methods used in the treatment of phobias are flooding and systematic desensitization.
Flooding requires individuals to face the feared stimulus directly and fully, preventing
escape, in order to extinguish the avoidance response. The individual learns that the feared
stimulus is not followed by a negative outcome (no UCS), which leads to the weakening of
the conditioned fear.
In contrast, systematic desensitization gradually pairs the feared stimulus with relaxation
responses, using a hierarchy of anxiety-provoking situations while maintaining a relaxed
state. It is less intense but slower than flooding.
Effectiveness of Flooding
Flooding differs from standard extinction procedures in that it eliminates the option to escape,
making exposure more intense and direct. Research has shown that flooding is effective for
eliminating avoidance behavior in both animals and humans (Baum, 1970; Malleson, 1959). Flooding
has been successfully used to treat various disorders such as OCD, panic disorder, PTSD, simple
phobias, and social anxiety.
Marks (1987) found that flooding could significantly reduce agoraphobic fear in as little as three
sessions. Furthermore, the effects of flooding are long-lasting. For example, Emmelkamp et al.
(1980) found that anxiety levels significantly dropped after flooding sessions and remained low six
months post-treatment.
Neuroscience of Flooding
Individual responses to feared stimuli vary greatly. Siegmund et al. (2011) discovered that the
effectiveness of flooding correlates with cortisol response—those with greater physiological
arousal (cortisol release) benefited more from the treatment. However, due to its intensity, many
individuals cannot tolerate flooding, making systematic desensitization a more viable alternative for
those with low stress tolerance.
Lang and Melamed (1969) presented a striking case involving a 9-month-old infant with persistent
vomiting. Standard treatments failed, so they employed a punishment-based approach using an
EMG to detect vomiting onset and applied a mild electric shock to the leg when vomiting began.
Within six sessions, the vomiting ceased. Six months later, the child maintained a healthy weight
with no recurrence—demonstrating the lasting effects of properly administered aversive
conditioning.
Response cost involves taking away a reinforcer following inappropriate behavior. In lab and real-
world settings, this method has been highly successful in reducing behaviours. Peterson and
Peterson (1968) used response cost to treat self-injurious behavior in a 6-year-old boy.
Reinforcement was withheld when the boy engaged in harmful actions, resulting in the complete
cessation of self-injury. This technique has since been used in a variety of behavioral modification
programs across ages and populations.
Time-out is another form of negative punishment that involves removing access to reinforcement
temporarily following an undesirable behavior. The person may be removed from the reinforcing
situation (e.g., a child sent to their room) or the reinforcers may be removed directly (e.g., restricting
access to social activities).
The effectiveness of time-out depends on the non-reinforcing nature of the time-out environment.
For example, if a child’s room contains toys and entertainment, it may become a reinforcing rather
than punishing setting. Solnick et al. (1977) observed that placing an autistic child in a “sterile” time-
out area for self-stimulatory behavior increased tantrums. When the researchers used physical
restraint instead, tantrums decreased rapidly, showing the importance of selecting an appropriately
non-reinforcing consequence.
Numerous studies, including those by Derenne & Baron (2001) and Everett et al. (2007), confirm the
effectiveness of time-out in suppressing inappropriate behaviours across both animals and humans.
Time-out procedures are now widely adopted in schools, homes, and clinical settings.
Stimulus Generalization
Stimulus generalization occurs when an individual responds similarly to stimuli that resemble the
original conditioned stimulus. For instance, someone who gets sick after dining at a specific
restaurant may avoid all restaurants afterward. The extent of generalization depends on the
similarity between stimuli. A student who gets sick from vodka may develop an aversion to similar
white liquors like gin, a milder dislike for wine, and little reaction to beer. The more similar the new
stimulus is to the original, the stronger the generalized response.
Generalization Gradients
Generalization gradients visually represent how strongly subjects respond to various stimuli based
on their similarity to the original stimulus (S+). A steep gradient indicates narrow generalization—
responding only to stimuli very similar to S+. A flat gradient suggests broad generalization. In
excitatory conditioning, S+ is paired with reinforcement and then compared to test stimuli. In
inhibitory conditioning, a stimulus (S–) suppresses response when paired with the absence of
reinforcement, and the gradient shows how much this inhibition generalizes.
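The shape of a gradient is easy to illustrate with a toy calculation. The Gaussian form and the width value below are assumptions chosen purely for illustration, not a claim about real data:

import math

def gradient(test_freqs, s_plus=2000.0, width=300.0):
    """Illustrative generalization gradient: response strength falls off
    with distance from the trained stimulus S+ (a 2,000-cps tone). The
    Gaussian shape and width are assumptions made for illustration."""
    return {f: round(math.exp(-((f - s_plus) ** 2) / (2 * width ** 2)), 2)
            for f in test_freqs}

# A small width gives a steep gradient (narrow generalization); a large
# width gives a flat gradient (broad generalization).
print(gradient([1400, 1700, 2000, 2300, 2600]))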
Discrimination Learning
Discrimination learning is the ability to distinguish between stimuli associated with reinforcement
(SD) and those associated with non-reinforcement (S∆). It allows individuals to behave appropriately
based on environmental cues. SD signals that reinforcement is available; S∆ indicates it is not. A
failure to discriminate can lead to errors, inefficiency, or even maladaptive behaviours. Both external
and internal stimuli (like drug states) can function as discriminative cues.
Behavioral Contrast
Behavioral contrast occurs when a change in the reinforcement conditions associated with one
stimulus alters responding to another stimulus whose own reinforcement conditions remain
unchanged. For example, responding to SD increases when reinforcement during S∆ is reduced.
This can be temporary (local contrast), driven by emotional reactions to the shift, or long-lasting
(sustained contrast), driven by anticipated reinforcement changes. The greater the disparity in
reinforcement, the greater the contrast observed.
Occasion-Setting Stimuli
Some stimuli don’t directly elicit a conditioned response (CR) but instead set the occasion for
another stimulus to produce a CR. This is called occasion setting. For instance, a light may signal that
a tone will now lead to food. The orbitofrontal cortex and hippocampus play key roles in storing and
processing these contextual relationships.
Reconciliation of Views
Schwartz and Reisberg (1991) proposed that both the absolute and relational views of discrimination
learning are valid depending on the context. In choice situations, relative properties matter (the
relational view). In generalization tests involving single stimuli, animals respond to absolute features.
1. Latent Learning
Latent learning refers to learning that occurs without any obvious reinforcement and remains hidden
until there is a reason to demonstrate it. This concept was central to Edward Tolman's theory of
learning. In the classic experiment by Tolman and Honzik (1930), three groups of rats were trained
in a maze: one group was never reinforced, one was always reinforced, and one was not reinforced
until the eleventh day. Remarkably, the third group showed a sudden improvement in performance
once reinforcement began—matching the performance of the always-reinforced group. This finding
supported the idea that learning can occur without reinforcement and only be expressed when there
is motivation. According to Tolman, learning involves forming expectations about reinforcement
rather than simply strengthening stimulus-response bonds. Relatedly, in latent extinction, an animal
is simply exposed to the now-empty goal box without performing the learned response; the response
subsequently weakens even though no standard non-reinforced trials have occurred.
2. Observational Learning
Historically, observational learning (or modelling) was thought to arise from a natural tendency to
imitate others. Early experimental attempts by Edward Thorndike (1898) and John B. Watson (1908)
failed to provide evidence for learning through observation. They concluded that learning required
direct interaction with the environment, not vicarious experience. However, Miller and Dollard
(1941) proposed that observational learning could be understood through reinforcement principles.
They viewed imitation as a form of instrumental conditioning and categorized it into three types:
Same behavior: When individuals independently learn the same response to the same
stimulus (e.g., stopping at a red light).
Copying behavior: When a person’s behavior is shaped through guided correction (e.g., an
art teacher helping a student).
Matched-dependent behavior: When one blindly mimics a model and is reinforced for doing
so.
Later, Albert Bandura expanded this view, asserting that observational learning is not always
imitation. For example, swerving to avoid a pothole after seeing another car hit it is observational
learning without imitation. Bandura emphasized the learning-performance distinction—people may
learn behaviours through observation but only perform them under the right conditions. His 1965
study provided strong evidence for this view, marking a shift toward cognitive processing of
observed information.
3. Sensory Preconditioning
Sensory preconditioning demonstrates how associations between neutral stimuli can influence later
learning. For example, if you associate your neighbour (CS2) with their dog (CS1), and later the dog
bites you (CS1 + UCS), you might develop a fear response (CR) not only to the dog but also to the
neighbour—even though the neighbour was never directly paired with the bite. In experimental
settings, Brogden (1939) showed that dogs conditioned to associate a light and buzzer responded to
the unshocked stimulus after one was paired with a shock, confirming that prior neutral associations
can transfer emotional responses. However, the CR to CS2 is usually weaker than to CS1. Although
robust in early studies, the magnitude of sensory preconditioning effects was generally small
(Kimble, 1961).
4. Insight Learning
Insight learning, introduced by Wolfgang Köhler, is a key concept in Gestalt psychology, which
emphasizes holistic processing. Gestaltists believed learning occurs by perceiving the entire problem
and reorganizing the elements of a situation until a solution emerges. In a famous experiment,
Köhler observed chimpanzees solving problems using sudden insight. For instance, the chimp Sultan
used a box to reach a banana hung out of reach, or a stick to pull a banana placed outside his cage.
This kind of learning involves an "aha!" moment rather than gradual trial-and-error. Factors
influencing insight learning include experience, intelligence, the structure of the learning situation,
prior attempts, repetition, and the ability to generalize. Insight learning is creative and purposive,
differing from the mechanical associations in behaviorist theories.
5. Blocking
The blocking effect, first identified by Kamin, occurs when prior learning about one CS (A) prevents
conditioning to a new CS (B) when they are presented together as a compound stimulus (AB)
followed by a US. For example, if a tone (A) is first paired with a shock (US) and later a tone and light
(AB) are paired with the shock, little or no learning occurs to the light (B). The organism has already
learned to expect the US after the tone, so the light adds no new predictive value. This challenges
simple associative theories and supports models that include prediction error as a key factor in
learning.
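The prediction-error idea is standardly formalized in the Rescorla-Wagner model; the sketch below is supplied as an illustration (the learning-rate and asymptote values are arbitrary), showing why the pretrained tone blocks the light:

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner update: each cue present on a trial changes in
    proportion to the shared prediction error (lam minus the summed
    prediction of all present cues)."""
    V = {"tone": 0.0, "light": 0.0}
    for cues in trials:
        error = lam - sum(V[c] for c in cues)   # prediction error
        for c in cues:
            V[c] += alpha * error
    return V

# Phase 1: tone alone predicts shock; Phase 2: tone+light compound.
trials = [("tone",)] * 20 + [("tone", "light")] * 20
print(rescorla_wagner(trials))   # tone near 1.0; light near 0.0 (blocked)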
6. Learned Helplessness
Learned helplessness, a theory developed by Martin Seligman (1975), explains how individuals
become passive in the face of uncontrollable negative events. Originally demonstrated in dogs
exposed to inescapable shocks, the phenomenon was later applied to human depression. According
to Seligman, when people repeatedly face failure or uncontrollable outcomes, they may come to
believe that nothing they do can change the situation. Over time, this leads to a sense of
helplessness, demotivation, and depressive symptoms. For example, repeated rejection from
medical schools might lead someone to believe further attempts are pointless, even if they have the
ability to succeed. Seligman framed depression as a cognitive expectation that events are
independent of one’s behavior—a belief that causes individuals to stop trying, even when success is
possible.
LEARNING THEORIES
CLARK L HULL
Clark L. Hull (1884–1952), a key figure in learning theory, earned his Ph.D. from the University of
Wisconsin and later worked at Yale. His 1943 book Principles of Behavior pioneered the use of
scientific theory to systematically study learning.
Hull used a hypothetico-deductive model, constructing behavior theory using postulates and
theorems, similar to Euclid’s geometry. Postulates weren’t directly testable, but theorems derived
from them could be tested experimentally. Success strengthened the underlying postulate; failure
led to revision or rejection.
Although Hull's 1952 version of his theory is highly complex, it is still an extension of his 1943 theory;
therefore, the best way to summarize Hull's thoughts on learning is to outline the 1943 version and
then point out the major changes that were made in 1952. Following that plan, we will first discuss
Hull's sixteen major postulates as they appeared in 1943 and then, later in the chapter, we will turn
to the major revisions Hull made in 1952.
Postulate 1: Sensing the External Environment and the Stimulus Trace (Clark L. Hull, 1943)
Hull proposed that when an organism encounters an external stimulus (S), it triggers a sensory
(afferent) neural impulse. This impulse persists for a few seconds even after the actual stimulus has
ended, forming what Hull termed a stimulus trace (s). This lingering trace is important because it
allows for associations to form even when the stimulus is no longer present. As a result, Hull revised
the classic stimulus-response (S-R) model to S-s-R, emphasizing that learning involves forming an
association between the trace (s) and the response (R). Furthermore, between the stimulus trace
and the final behavior, motor neurons (r) are activated to produce the overt response (R). Hence,
Hull's full behavioral chain is represented as:
S → s → r → R
This model reflects Hull's attempt to explain how temporal gaps between stimulus and response
can still result in learning, due to the persistence of the stimulus trace in the nervous system.
The interaction of sensory impulses (s) indicates the complexity of stimulation and, therefore, the
difficulties in predicting behavior. Behavior is seldom a function of only one stimulus. Rather, it is a
function of many stimuli converging on the organism at any given time. These many stimuli and their
related traces interact with one another and their synthesis determines behavior. We can now refine
the S-R formula further as follows:
S1, S2, S3, S4, S5 → s̄ → r → R
where s̄ represents the combined effects of the five stimuli acting on the organism at the moment.
Hull believed that the organism is born with a hierarchy of responses, unlearned behavior, that is
triggered when a need arises. For example, if a foreign object enters the eye, considerable blinking
and tear secretion may follow automatically. If the temperature varies from that which is optimal
for normal body functioning, the organism may sweat or shiver. Likewise, pain, hunger, or thirst will
trigger certain innate response patterns that have a high probability of reducing the effects of those
conditions. The term hierarchy is used in reference to these responses because more than one
reaction may occur. If the first innate response pattern does not alleviate a need, another pattern
will occur. If the second response pattern does not reduce the need, still another will occur, and so
on. If none of the innate behavior patterns is effective in reducing the need, the organism will have
to learn new response patterns.
If a stimulus leads to a response and if the response results in the satisfaction of a biological need,
the association between the stimulus and the response is strengthened. The more often the stimulus
and the response that leads to need satisfaction are paired, the stronger the relationship between
the stimulus and the response becomes. On this basic point, Hull is in complete agreement with
Thorndike's revised law of effect. Hull, however, is more specific about what constitutes a "satisfying
state of affairs." Primary reinforcement, according to Hull, must involve need satisfaction, or what
Hull called drive reduction. Postulate 4 also describes a secondary reinforcer as "a stimulus which
has been closely and consistently associated with the diminution of a need" (Hull, 1943, p.178).
Secondary reinforcement following a response will also increase the strength of the association
between that response and the stimulus with which it was contiguous.
It can also be said that the "habit" of giving that response to that stimulus gets stronger. Hull's term,
habit strength (SHR), will be explained below.
Habit Strength
Habit strength is one of Hull's most important concepts, and as stated above, it refers to the strength
of the association between a stimulus and a response. As the number of reinforced pairings between
a stimulus and a response goes up, the habit strength of that association goes up. The mathematical
formula describing the relationship between SHR and N, the number of reinforced pairings between
S and R, is:
SHR = 1 − 10^(−0.0305N)
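Taking the published constant at face value gives a feel for the negatively accelerated growth this formula describes: after 10 reinforced pairings, SHR = 1 − 10^(−0.305) ≈ 0.50; after 50 pairings, SHR = 1 − 10^(−1.525) ≈ 0.97. Early pairings thus add far more habit strength than later ones.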
Hull says that the ability of a stimulus (other than the one used during conditioning) to elicit a
conditioned response is determined by its similarity to the stimulus used during training. Thus, SHR
will generalize from one stimulus to another to the extent that the two stimuli are similar. This
postulate of stimulus generalization also indicates that prior experience will affect current learning;
that is, learning that took place under similar conditions will transfer to the new learning situation.
Hull called this process generalized habit strength (SHR). This postulate essentially describes
Thorndike's identical elements theory of the transfer of training.
Biological deficiency in the organism produces a drive (D) state and each drive is associated with
specific stimuli. Hunger pangs which accompany the hunger drive, and dry mouth, lips, and throat
which accompany the thirst drive, are examples. The existence of specific drive stimuli makes it
possible to teach an animal to behave in one way under one drive and another way under another
drive. For example, an animal can be taught to turn right in a T-maze when it is hungry and to turn
left when it is thirsty.
The likelihood of a learned response being made at any given moment is called reaction potential
(SER). Reaction potential is a function of both habit strength (SHR) and drive (D). For a learned
response to occur, SHR has to be activated by D. Drive does not direct behavior; it simply arouses
and intensifies it. Without drive, the animal would not emit a learned response even though there
had been a large number of reinforced pairings between a stimulus and a response. Thus, if an
animal has learned to press a bar in a Skinner box in order to obtain food, it would press the bar only
when it was hungry, no matter how well it was trained. The basic components of Hull's theory that
we have covered thus far can be combined into the following formula:
SER = SHR × D
Postulate 8: Responding Causes Fatigue, Which Operates Against the Elicitation of a Conditioned
Response
Responding requires work, and work results in fatigue. Fatigue eventually acts to inhibit responding.
Reactive inhibition (IR) is caused by the fatigue associated with muscular activity and is related to the
amount of work involved in performing a task. Since this form of inhibition is related to fatigue, it
automatically dissipates when the organism stops performing. This concept has been used to explain
the spontaneous recovery of a conditioned response after extinction. That is, the animal may stop
responding because of the buildup of IR. After a rest, the IR dissipates and the animal commences to
respond once again. For Hull, extinction is not only a function of nonreinforcement but is also
influenced by the buildup of reactive inhibition.
Reactive inhibition has also been used to explain the reminiscence effect, which is the improvement
of performance following the cessation of practice. The effect is explained by assuming that IR builds up
during training and operates against performance (e.g., on a pursuit-tracking task). After a rest, IR dissipates and
performance improves. Additional support for Hull's notion of IR comes from research on the
difference between massed and distributed practice. It is consistently found that when practice trials
are spaced far apart (distributed practice), performance is superior to what it is when practice trials
are close together (massed practice).
Fatigue being a negative drive state, it follows that not responding is reinforcing. Not responding
allows IR to dissipate, thereby reducing the negative drive of fatigue. The learned response of not
responding is called conditioned inhibition (SIR). Both IR and SIR operate against the elicitation of a
learned response and are therefore subtracted from reaction potential (SER). When IR and SIR are
subtracted from SER, the result is effective reaction potential (SĒR):
SĒR = SER − (IR + SIR)
Postulate 10: Factors Tending to Inhibit a Learned Response Change from Moment to Moment
According to Hull, there is an "inhibitory potentiality," which varies from moment to moment and
operates against the elicitation of a learned response. This "inhibitory potentiality" is called the
oscillation effect (SOR). The oscillation effect is the "wild card" in Hull's theory: it is his way of taking
into consideration the probabilistic nature of predictions concerning behavior. There is, he said, a
factor operating against the elicitation of a learned response, whose effect varies from moment to
moment but always operates within a certain range of values; that is, although the range of the
inhibitory factor is set, the value that may be manifested at any time could vary within that range.
The values of this inhibitory factor are assumed to be normally distributed, with middle values most
likely to occur. This oscillation effect explains why a learned response may be elicited on one trial but
not on the next. Predictions concerning behavior based on the value of SER will always be influenced
by the fluctuating values of SOR and will thus always be probabilistic in nature. The SOR must be
subtracted from effective reaction potential (SĒR), which creates momentary effective reaction
potential (SĖR). Thus, we have
SĖR = SĒR − SOR
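Putting the postulates covered so far together, the chain of quantities can be summarized in one expression (a reconstruction using this note's symbols): SĖR = (SHR × D) − (IR + SIR) − SOR. Learning (SHR) and motivation (D) push the momentary value up; fatigue (IR), conditioned inhibition (SIR), and oscillation (SOR) pull it down.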
Postulate 11: Momentary Effective Reaction Potential Must Exceed a Certain Value Before a
Learned Response Can Occur
The value that momentary effective reaction potential (SĖR) must exceed before a conditioned
response can occur is called the reaction threshold (SLR). Therefore, a learned response will be
emitted only if SĖR is greater than SLR.
Postulate 12: The Probability that a Learned Response Will Be Made Is a Combined Function of
sĒR, sOR, and sLR
In the early stages of training, that is, after only a few reinforced trials, sĒR will be very close to sLR, and therefore, because of the effects of sOR, a conditioned response will be elicited on some trials but not on others. The reason is that on some trials the value of sOR subtracted from sĒR will be large enough to reduce the resulting sĖR to a value below sLR. As training continues, subtracting sOR from sĒR will have less and less of an effect, since the value of sĒR will become much larger than the value of sLR. Even after considerable training, however, it is still possible for sOR to assume a large value, thereby preventing the occurrence of a conditioned response.
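Taken together, Postulates 8 through 12 form a small quantitative chain: sER = sHR × D; sĒR = sER − (IR + sIR); sĖR = sĒR − sOR; and a response occurs only when sĖR exceeds sLR. The following Python sketch is purely illustrative; every numeric value (habit strength, drive, inhibition, threshold, the spread of the oscillation) is a made-up assumption, since Hull specified the relationships but not these particular magnitudes:

import random

# Hypothetical values for Hull's intervening variables (illustration only)
sHR = 0.8    # habit strength after training
D   = 0.9    # drive
IR  = 0.10   # reactive inhibition (fatigue; dissipates with rest)
sIR = 0.05   # conditioned inhibition (learned non-responding)
sLR = 0.30   # reaction threshold

sER  = sHR * D              # reaction potential
sE_R = sER - (IR + sIR)     # effective reaction potential (sĒR)

# Oscillation (sOR) fluctuates from moment to moment; here it is modeled
# as the magnitude of a normal deviate, so responding stays probabilistic
# even when sĒR is well above the threshold.
trials, responses = 1000, 0
for _ in range(trials):
    sOR = abs(random.gauss(0.0, 0.15))
    momentary = sE_R - sOR       # momentary effective reaction potential (sĖR)
    if momentary > sLR:          # Postulate 11: sĖR must exceed sLR
        responses += 1
print(f"response probability ≈ {responses / trials:.2f}")

With these invented numbers sĒR = 0.57, so the response is emitted on most but not all trials; early in training, when sĒR sits just above sLR, the same oscillation produces the trial-to-trial inconsistency Postulate 12 describes.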
Postulate 13: The Greater the Value of sĖR, the Shorter Will Be the Latency Between S and R
Latency (stR) is the time between the presentation of a stimulus to the organism and its learned response. This postulate simply states that the reaction time between the onset of a stimulus and the elicitation of a learned response goes down as the value of sĖR goes up.
Postulate 14: The Value of sĖR Determines Resistance to Extinction
The value of sĖR at the end of training determines resistance to extinction, that is, how many non-reinforced responses will need to be made before extinction occurs. The greater the value of sĖR, the greater the number of nonreinforced responses that have to be made before extinction takes place. Hull used n to symbolize the number of non-reinforced trials that occurred before extinction resulted.
Postulate 15: The Amplitude of a Conditioned Response Varies Directly with sĖR
Some learned responses occur in degrees, for example, salivation or the galvanic skin response (GSR). When the conditioned response is one that can occur in degrees, its magnitude will be directly related to the size of sĖR, the momentary effective reaction potential. Hull used A to symbolize response amplitude.
Postulate 16: When Two or More Incompatible Responses Tend to Be Elicited in the Same Situation, the One with the Greatest sĖR Will Occur
MAJOR DIFFERENCES BETWEEN HULL'S 1943 AND 1952 THEORIES
1. Incentive Motivation (K)
In the 1943 version of his theory, Hull treated the magnitude of reinforcement as a learning variable: the greater the amount of reinforcement, the greater the amount of drive reduction, and thus the greater the increase in sHR. Research showed this notion to be unsatisfactory. Experiments indicated that performance was dramatically altered as the size of reinforcement was varied after learning was complete. For example, when an animal trained to run a straight runway for a small reinforcer was switched to a larger reinforcer, its running speed suddenly went up. When an animal trained on a large reinforcer was shifted to a smaller reinforcer, its running speed went down. The changes in performance following a change in magnitude of reinforcement could not be explained in terms of changes in sHR, since they were too rapid. Moreover, sHR was thought to be fairly permanent: unless one or more factors operated against sHR, it would not decrease in value. In the 1952 version, Hull therefore treated the magnitude of reinforcement as incentive motivation (K), a performance variable rather than a learning variable.
2. Stimulus-Intensity Dynamism
According to Hull, stimulus-intensity dynamism (V) is an intervening variable that varies along with
the intensity of the external stimulus (S). Stated simply, stimulus-intensity dynamism indicates that
the greater the intensity of a stimulus, the greater the probability that a learned response will be
elicited.
The habit family hierarchy simply refers to the fact that in any learning situation, any number of
responses are possible and the one that is most likely is the one that brings about reinforcement
most rapidly and with the least amount of effort. If that particular way is blocked, the animal will
prefer the next shortest route, and if that is blocked, it will go to the third route and so on.
Hull's system distinguishes three kinds of variables:
1. Independent variables, which are stimulus events systematically manipulated by the experimenter.
2. Intervening variables, which are processes thought to be taking place within the organism but are not directly observable. All the intervening variables in Hull's system are operationally defined.
3. Dependent variables, which are some aspect of behavior that is measured by the experimenter in order to determine whether the independent variables had any effect.
O. HOBART MOWRER
Hobart Mowrer (1907–1982) was born in Unionville, Missouri, and earned his Ph.D. from Johns
Hopkins University in 1932. During the 1930s, he worked at Yale University as a postdoctoral fellow
and later as an instructor, where he was significantly influenced by Clark Hull. In 1940, Mowrer
joined the Harvard School of Education, where he remained until 1948. He then moved to the
University of Illinois at Urbana, where he spent the rest of his professional career.
Mowrer's career as a learning theorist began with his efforts to solve the problem that avoidance
learning posed for Hullian theory. If an apparatus is arranged so that an organism receives an electric
shock until it performs a specified response, it will quickly learn to make that response when it is
shocked. Such a procedure is called escape conditioning, which can be summarized as: shock → response → termination of shock (drive reduction).
Escape conditioning is easily handled by Hullian theory by assuming that the response is learned
because it is followed by drive (pain) reduction. However, avoidance conditioning is not so easily
explained by Hullian theory. With avoidance conditioning, a signal, such as a light, reliably precedes
the onset of an aversive stimulus, such as an electric shock. Other than the presence of the signal
that precedes the shock, the procedure is the same as for escape conditioning. The avoidance procedure can be summarized as: signal → shock → response → termination of shock; with training, responding to the signal alone prevents the shock from occurring.
With avoidance conditioning, the organism gradually learns to make the appropriate response when
the signal light comes on, thus avoiding the shock. Furthermore, this avoidance response is
maintained almost indefinitely even though the shock itself is no longer experienced. Avoidance
conditioning posed a problem for the Hullians because it was not clear what was reinforcing the
avoidance response.
Mowrer proposed that avoidance learning involves two distinct learning processes. The first is
classical conditioning: when a neutral signal (like a light) is consistently followed by an aversive
stimulus (like an electric shock), the signal becomes a conditioned stimulus (CS) that elicits fear, a
conditioned emotional response. Mowrer referred to this as sign learning, because the organism
learns to interpret the signal as a sign of impending danger.
The second factor is instrumental or operant conditioning, which Mowrer called solution learning.
Once fear is elicited by the CS, the organism learns a specific response (like running or pressing a
lever) to terminate or avoid the fear-inducing stimulus. This behavior is negatively reinforced, not by direct shock reduction but by the reduction of fear, a conditioned emotional state. Thus, classical conditioning creates the fear, and solution learning reduces it.
In 1960, Mowrer expanded his two-factor theory to account for a wider range of emotional
responses beyond just fear. He proposed that the type of emotion elicited by a conditioned stimulus
(CS) depends on two factors: the type of unconditioned stimulus (US) it is paired with, and the
timing of that pairing—whether the CS appears before the onset or termination of the US.
Mowrer distinguished between two types of reinforcers: incremental reinforcers, which increase
drive (such as a shock), and decremental reinforcers, which reduce drive (such as food). Incremental
reinforcers are typically associated with negative emotional states, while decremental reinforcers
are linked to positive emotions.
When a CS is presented before the onset of an incremental US (e.g., shock), it elicits fear.
When a CS appears before the presentation of a decremental US (e.g., food), it elicits hope.
Through this extension, Mowrer showed that emotional conditioning was not limited to fear but
could encompass a broad spectrum of emotions, depending on how the organism interprets signals
in relation to changes in drive.
By 1960, Mowrer concluded that all learning could be understood as sign learning. External stimuli
—through their association with either positive or negative unconditioned stimuli—come to elicit
specific emotional responses. These emotional states, in turn, motivate behavior. Thus, rather than
viewing operant and classical conditioning as fundamentally separate, Mowrer proposed a unified
theory in which emotional conditioning is central to understanding behavior.
Edwin Ray Guthrie (1886–1959) served as a professor of psychology at the University of Washington
from 1914 until his retirement in 1956. His most influential work, The Psychology of Learning, was
published in 1935 and revised in 1952. Guthrie was known for his clear, simple writing style, which
avoided technical jargon and was intended to be understandable to beginners in psychology. He
illustrated his ideas with real-life anecdotes and placed a strong emphasis on practical applications
—in this way, aligning himself with the traditions of Thorndike and Skinner.
Guthrie proposed a single principle of learning, rejecting the complexity of theories like those of
Thorndike and Pavlov. His law of contiguity stated:
“A combination of stimuli which has accompanied a movement will on its recurrence tend to be
followed by that movement.”
He emphasized that reinforcement or satisfaction was unnecessary for learning to occur. In his final formulation before his death, he revised the law as: "What is being noticed becomes a signal for what is being done."
One-Trial Learning
A radical aspect of Guthrie's theory was his rejection of the law of frequency. He believed that a
stimulus-response association forms completely on the first pairing. In his view, repetition does not
strengthen learning; rather, if the same situation recurs and the same behavior happens again, it is
due to contiguity and recency, not frequency or reinforcement.
The Recency Principle
Building on contiguity and one-trial learning, Guthrie proposed the recency principle, which states
that the last response made in a specific stimulus situation is the one most likely to occur again
when those same stimuli reappear. This principle explains why people repeat the most recent
solution or behavior in familiar contexts.
Movement-Produced Stimuli
To address situations where the external stimulus and response are not closely timed, Guthrie
introduced the concept of movement-produced stimuli (MPS). These are internal stimuli generated
by the body's own movements—like muscle or joint sensations. He suggested that:
Once a response begins, each movement produces its own stimuli, which cue subsequent
movements.
This allows for the chaining of responses, enabling complex behavior to be learned as a
sequence of MPS-based associations.
Guthrie’s theory was one of the simplest yet most radical in the history of learning psychology,
emphasizing stimulus-response contiguity, immediate learning, and the role of internal body cues
in guiding complex sequences of behavior.
Nature of Reinforcement
For Guthrie, reinforcement was not a central cause of learning, as it was for Hull or Skinner. Instead,
he saw reinforcement as a mechanical process that simply preserves the stimulus-response
connection by changing the stimulating conditions to prevent unlearning. Regardless of whether the
response is correct or effective, it will be repeated when the organism next encounters the same
stimuli. This reflects his belief that learning occurs through contiguity alone, not through
reinforcement or satisfaction.
A habit, according to Guthrie, is a response that has been associated with a wide range of stimuli.
The more stimuli that elicit the response, the stronger the habit. To break a habit, one must identify
the cues that trigger the habit and replace the response with a different one in the presence of
those cues. Guthrie proposed three techniques for doing this:
Threshold Method: Gradually introduce the stimulus at low intensities that do not elicit the
unwanted response, and slowly increase it while maintaining an alternative response.
Example: Replacing a horse’s bucking by starting with a light blanket and progressing to a
saddle while keeping the horse calm.
Fatigue Method: Repeatedly elicit the undesirable response until the organism becomes too
fatigued to continue, eventually choosing a different response.
Example: Letting a horse buck until it becomes tired and no longer bucks with a saddle on.
Incompatible Response Method: Present the stimulus for the undesired behavior along with
other stimuli that evoke a mutually exclusive response.
Example: A child’s fear of a panda bear is reduced by pairing the panda with the comforting
presence of the mother.
Punishment
Guthrie rejected the idea that punishment works because of the pain it causes. Instead, he argued
that punishment is effective only if it leads to a different behavior in the presence of the same
stimuli. It works only when it causes a response that is incompatible with the punished one. If the
punishment does not change the response or occurs after the critical stimuli are gone, it can be
ineffective or even reinforce the unwanted behavior. Guthrie and Powers (1950) also advised that commands should not be given unless they can be enforced, since a command that is disobeyed without consequence effectively reinforces disobedience.
Guthrie’s views on punishment are consistent with his law of contiguity. For punishment to be effective:
1. It must produce behavior that is incompatible with the punished response.
2. It must occur in the presence of the stimuli that triggered the punished behavior.
3. If these conditions are not met, punishment fails or may even strengthen the habit.
4. The important factor is not the pain, but what the organism does as a result of punishment.
Drives
Guthrie viewed drives as sources of maintaining stimuli that keep the organism active until the goal
is reached. For instance, hunger or anxiety produces internal stimulation that persists until the drive
is satisfied. He used this to explain habits like alcohol use: if someone feels anxious and drinking
reduces that tension, the act of drinking becomes associated with tension relief, and is repeated in
similar future states.
Intentions
When a response is conditioned to a maintaining stimulus (like hunger, anxiety, or thirst), Guthrie
called it an intention. These are sequences of behavior that are repeated whenever the maintaining
stimuli return, because the internal drive lasts over time. The result is a behavior pattern that
appears goal-directed or intentional, although Guthrie explained it purely in terms of contiguity and
stimulus control.
Transfer of Training
Like Thorndike, Guthrie rejected the formal discipline theory, which claimed that learning one
subject (like Latin) improves general reasoning. He argued instead that transfer only occurs when
the situations share common stimuli. A response will transfer to a new context only if the new
context is similar to the original one. As he put it, students learn what they do, not what they read
or hear. To him, concepts like insight and understanding were unnecessary—learning was simply the
result of stimulus-response connections formed through contiguity.
Edward Chace Tolman (1886–1959), trained initially in electrochemistry at MIT, later shifted to
psychology, earning his Ph.D. from Harvard. He was dismissed from Northwestern University—likely
due to his pacifist stance during wartime—before beginning his influential career at UC Berkeley.
Tolman’s theory blended Gestalt psychology and behaviourism, favouring the study of goal-directed, purposeful behavior rather than the elemental reflexes emphasized by "muscle twitch" behaviourists like Pavlov and Watson. His work was deeply influenced by Gestalt theorist Kurt
Koffka.
Molar Behavior
Purposive Behaviourism
His system, often called purposive behaviourism, focused on behavior that appears directed toward
achieving goals. Importantly, Tolman used the term “purpose” in a descriptive, not mentalistic way.
For example, a rat in a maze acts “as if” it is seeking food, and this behavior persists until the goal is
reached. Thus, even without invoking conscious purpose, the behavior appears goal-directed.
Tolman introduced the use of intervening variables into psychological research and Hull borrowed
the idea from Tolman. Both Hull and Tolman used intervening variables in a similar way in their
work. Hull, however, developed a much more comprehensive and elaborate theory of learning than
did Tolman. Tolman, however, taking his lead from the Gestalt theorists, said that learning is
essentially a process of discovering what leads to what in the environment. The organism, through
exploration, discovers that certain events lead to certain other events or that one sign leads to
another sign. For example, we learn that when it's 5:00 P.M. (S1), dinner (S2) will soon follow. For
that reason, Tolman was called an S-S rather than an S-R theorist. Learning, for Tolman, was an ongoing process that required no motivation.
According to Tolman, what is learned is "the lay of the land"; the organism learns what is there.
Gradually it develops a picture of the environment that can be used to get around in it. Tolman
called this picture a cognitive map. Once the organism has developed a cognitive map, it can reach a
particular goal from any number of directions. The organism will, however, choose the shortest
route or the one requiring the least amount of work. This is referred to as the principle of least effort.
A notable concept Tolman introduced was vicarious trial and error. This describes the rat pausing at
a decision point and seemingly “deliberating” or scanning options. While not mentalistic in a strict
sense, this behavior suggested that the rat was processing potential outcomes before acting—
another sign of cognitive involvement in learning.
Tolman emphasized the crucial distinction between learning and performance. He believed that
organisms can acquire knowledge without immediately using it. This knowledge lies dormant until a
relevant motivation arises. Thus, learning can occur without reinforcement and only become visible
(i.e., performed) when the situation demands it.
Latent Learning
Latent learning is learning that is not immediately reflected in behavior. Tolman's experiments
showed that rats could learn the structure of a maze without being reinforced, and only display that
knowledge later when food (a motivator) was introduced. This challenged the behaviorist
assumption that reinforcement is required for learning.
In a classic study by Tolman, Ritchie, and Kalish (1946), rats were divided into two groups to test
whether they learned places or motor responses:
Response learners were reinforced for making the same turn (left or right) regardless of
starting point.
Place learners were reinforced for reaching a specific location, which required different
turns depending on the starting position.
Results supported place learning, showing that rats formed cognitive maps of spatial relationships,
again challenging the strict S-R view that behavior was just a chain of conditioned responses.
Reinforcement Expectancy
Tolman predicted that if the reinforcer were changed, behavior would be disrupted, because in reinforcement expectancy a particular reinforcer becomes part of what is expected. We learn to expect certain events to follow other events: the animal expects that if it goes to a certain place, it will find a certain reinforcer.
Environmental Variables
Unfortunately, the situation is not so simple as suggested above. Tolman thought of ΣOBO as an independent variable, since it directly influenced the dependent variable (i.e., the behavior ratio) and was under the control of the experimenter, who determined the number of training trials. In addition to ΣOBO, a number of other independent variables could have an effect on performance. Tolman suggested the following list:
M = maintenance schedule. This symbol refers to the animal's deprivation schedule, for example,
the number of hours since it has eaten.
G = appropriateness of goal object. The reinforcer must be related to the animal's current drive
state. For example, one does not reinforce a thirsty animal with food.
S = types and modes of stimuli provided. This symbol refers to the vividness of the cues or signals
available to the animal in the learning situation.
R = types of motor responses required in the learning situation, for example, running, sharp turns,
and so on.
P = pattern of succeeding and preceding maze units; the pattern of turns that needs to be made to
solve a maze as determined by the experimenter.
ΣOBO = the number of trials and their cumulative nature (see above).
It should be clear that Tolman was no longer talking only about the learning of T-mazes but the
learning of more complex mazes as well.
In addition to the independent variables described above, there are the variables that the individual
subjects bring into the experiment with them. The list of individual difference variables suggested by
Tolman is as follows (note that their initials create the acronym HATE, a somewhat strange word for
Tolman to use):
H = heredity
A = age
T = previous training
E = special endocrine, drug, or vitamin conditions
Intervening Variables
2. In instrumental conditioning, more brain structures appear to take an active role in encoding and reinforcing a learned behavior. For instance, when we learn to drive, the repetition or rehearsal of that behavior involves the perceptual and motor systems as well as the frontal lobes. As the behavior is memorized, it is managed by the basal ganglia. The process by which we learn new behaviours is also largely influenced by specific neurotransmitters, especially dopamine, which is known to reinforce or reward specific behaviours by making us feel good about them.
Memory is typically described as either short-term or long-term. Short-term memory is also called working memory and can last from several minutes to a few hours. The frontal lobes are known to play a very important role in short-term memorization, while the hippocampus is critical in consolidating information into long-term storage.
Learning and memory involve a complex network of brain regions working together to encode,
consolidate, and retrieve information.
Hippocampus – crucial for the formation of new memories; plays a central role in declarative memory, which involves the conscious recollection of facts and events.
Glutamate – the primary excitatory neurotransmitter in the brain; involved in synaptic plasticity and the formation of new memories.
At the cellular level, learning and memory involve changes in the strength and connectivity of
synapses, known as synaptic plasticity. Long-term potentiation (LTP) and long-term depression (LTD)
are two key forms of synaptic plasticity that underlie the cellular basis of learning and memory.
Neuroplasticity, the brain's ability to reorganize and modify its structure and function, plays a crucial
role in lifelong learning. Learning new information and acquiring new skills can induce structural
changes in the brain, including the formation of new neurons and synapses, as well as the
strengthening of existing connections. Neuroplasticity allows the brain to adapt to new experiences
and modify its neural networks to accommodate new knowledge and skills.
Evidence also suggests that new neurons can be produced as a result of learning activities even in adulthood, which is why additional research in this area is so critical to the future of neuroscience.
CLINICAL IMPLICATIONS
Disorders such as Alzheimer's disease, amnesia, and other cognitive impairments are characterized
by deficits in learning and memory processes. By investigating the underlying neural mechanisms,
researchers can develop targeted interventions to enhance memory function and improve cognitive
outcomes in these populations
To understand the anatomical changes that are happening in the brain as a result of learning or the
creation of memories, we need to go back to the basis of brain functioning: synaptic connections.
SYNAPTIC PLASTICITY
There are roughly 100 billion neurons in the human brain, which need to communicate effectively with one
another. This is achieved through a meeting point called a synapse, which is essentially a gap
between two neurons where neurotransmitters are released.
Synaptic plasticity is the ability of synapses to strengthen or weaken over time in response to increases or decreases in their activity. Formation of memories may require the formation of new synapses, or even the birth of new neurons. According to Donald Hebb, when a presynaptic and a postsynaptic neuron are repeatedly activated together, the synaptic connection between them will become stronger and more stable (cells that fire together, wire together – the Hebbian synapse).
This variation in synaptic strength is one of the forms of synaptic plasticity and it primarily depends
on the levels of activity between two neurons (activity-dependent process). Plastic change often
results from the alteration of the number of neurotransmitter receptors located on a synapse. There
are several underlying mechanisms that cooperate to achieve synaptic plasticity, including changes
in the quantity of neurotransmitters released into a synapse and changes in how effectively cells
respond to those neurotransmitters. Synaptic plasticity in both excitatory and inhibitory synapses
has been found to be dependent upon postsynaptic calcium release.
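Hebb's rule lends itself to a one-line update: the change in synaptic weight is proportional to the product of presynaptic and postsynaptic activity, so only co-active pairs strengthen their connection. The sketch below is a minimal illustration; the learning rate and the binary activity values are arbitrary choices, not claims about real neurons:

# Hebbian learning: "cells that fire together, wire together".
def hebbian_update(w, pre, post, lr=0.1):
    # The weight increases only when pre- and postsynaptic activity co-occur.
    return w + lr * pre * post

w = 0.2
for pre, post in [(1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre}, post={post} -> w={w:.2f}")
# Only the two co-active trials change the weight: 0.2 -> 0.3 -> 0.4.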
Synaptic plasticity can be classified according to the duration in changes to the synaptic strength
into:
1. Short-term synaptic plasticity – a change lasting from milliseconds to several minutes, with a
prompt return to normal. This type of synaptic plasticity is believed to play an important role in
transient changes in behavioral states or short-lasting forms of memory. It is mostly triggered by
short bursts of presynaptic activity. Short-term plasticity can either strengthen or weaken a synapse.
SYNAPTIC ENHANCEMENT
SYNAPTIC DEPRESSION
Synaptic fatigue or depression is usually attributed to the depletion of the readily releasable pool of vesicles.
Depression can also arise from post-synaptic processes and from feedback activation of presynaptic
receptors.
2. Long-term synaptic plasticity – a change in synaptic strength lasting from hours to days or longer. The major examples of this type of plasticity include long-term potentiation (LTP) and long-term depression (LTD).
LTP refers to the long-lasting strengthening of synaptic connections following repeated and
synchronous activation. LTP ultimately allows the pre-synaptic neuron to evoke a greater post-
synaptic response when stimulated. It has been detected throughout the brain, including in the
cerebral cortex, amygdala and cerebellum. But it was first described in the hippocampus.
● It requires strong activity in both presynaptic and postsynaptic neurons i.e. neurons which
‘fire together wire together’.
● LTP is synapse-specific. It is restricted to the synapse between two activated neurons rather
than to all synapses on a particular neuron.
MECHANISMS OF LONG-TERM POTENTIATION
The process of LTP is largely governed by two important glutamate receptors, the NMDA and AMPA receptors. At rest, magnesium ions block NMDA receptors, preventing calcium ions from entering dendritic spines (a step that is necessary to strengthen synapses between neurons), while AMPA receptors respond to released glutamate and depolarize the postsynaptic membrane, amplifying the postsynaptic potential.
Induction of LTP requires activation of glutamate NMDA receptors (NMDARs). At the resting
membrane potential, NMDARs are blocked by magnesium ions and only become permeable to
sodium, potassium and calcium ions upon post-synaptic depolarization (mediated by glutamate
AMPA receptors (AMPARs)). An increase in the post-synaptic calcium concentration initiates the
molecular processes necessary for LTP.
The pivotal role of NMDARs activation in LTP reflects the activity-dependent nature of synaptic
plasticity. NMDARs can be only activated upon simultaneous pre-synaptic release of glutamate and
postsynaptic depolarization mediated by AMPARs. This can be only achieved at high levels of
synaptic activity.
As NMDARs require two processes: the presynaptic release of glutamate and postsynaptic
depolarization to co-occur for their activation, they are often referred to as ‘coincidence detectors’.
LTD involves the weakening of synaptic connections through prolonged low-frequency stimulation. Similarly to LTP, LTD was first described in the hippocampus. However, it is
initiated by prolonged (10-15 minutes) low-frequency stimulation. This particular pattern of
stimulation creates depressed synaptic strength (reflected by a reduction in the size of excitatory
postsynaptic potentials (EPSPs)), which may last for several hours.
Importantly, LTD can erase the increase in EPSP size due to LTP and vice versa. Hence, LTP and LTD
can reversibly affect the synaptic strength guided by the patterns of synaptic activity.
Molecularly, LTD is also initiated by a calcium influx via NMDARs. However, in contrast to the large and fast increase in calcium concentration that drives LTP, LTD is promoted by a small and slow rise in calcium concentration, which leads to shrinkage of dendritic spines and decreased numbers of synaptic receptors.
UNIT 3
MOTIVATION
Motivation refers to internal forces that activate, direct, and sustain behavior. It is the reason
behind why people initiate actions, how intensely they pursue goals, and how persistently they
continue striving toward those goals. It encompasses not just behavior but also the cognitive and
emotional aspects involved in goal pursuit (Petri, 1990).
Components of Motivation
Measurement of Motivation
Since motivation is not directly observable, it is inferred from behavioral changes. It can be
considered:
An intervening variable (explaining the link between stimulus and response).
A performance variable (determining whether a learned behavior is executed or not).
Intervening Variable: Example – a rat deprived of food (stimulus) runs faster in a maze
(response) due to hunger (intervening variable).
Performance Variable: Motivation is necessary for performance; in its absence, behavior
may not occur despite prior learning.
Characteristics of Motivation
Types of Motivation
Intrinsic Motivation: Comes from within; activity is done for its own sake (e.g., enjoyment).
Extrinsic Motivation: Arises from external rewards (e.g., money, praise).
The motivational cycle is a circular process that explains how needs lead to actions and outcomes: need → drive → goal-directed behavior → goal attainment → reduction of the need.
Homeostasis
Introduced by Walter Cannon (1932), homeostasis refers to the body’s effort to maintain internal
balance. It is a self-regulating process that keeps physiological systems (e.g., blood sugar,
temperature) within a stable range despite external changes.
Physiological Homeostasis
Involves:
Psychologist C.P. Richter demonstrated that behavior compensates when physiological regulation is
impaired:
Rats adjusted fluid intake when regulators of sodium or calcium metabolism were removed.
With pituitary gland removed, rats increased water intake to avoid dehydration.
Behavioral adaptations (e.g., building nests) compensated for hormonal imbalances due to
thyroid or pituitary disruptions.
Psychological Homeostasis
Coined by Fletcher, this refers to the idea that people maintain a mental balance or psychological
equilibrium:
It may overgeneralize or fail to explain behaviours that don't serve balance (e.g., risk-taking,
suicide).
Not all motivated behavior fits into the homeostatic model—some actions pursue novelty,
stimulation, or chaos instead of equilibrium.
THEORIES OF MOTIVATION
INSTINCT THEORY
In 1972, Eibl-Eibesfeldt found that when a human recognizes another person as familiar, a universal
behavioral pattern is triggered—smiling and briefly raising the eyebrows. This is an innate social
signal that communicates recognition and reduces potential threat, facilitating safe social
interaction. This behavior occurs across cultures, suggesting it is genetically programmed and not
learned.
Evolution refers to the gradual, progressive change of organisms over time. Charles Darwin
proposed the principle of natural selection, where useful traits (including behavioral traits) are
preserved because they help individuals survive and reproduce, while disadvantageous traits
gradually disappear from the species. Thus, evolution shapes both physical traits and behaviours
that improve an organism's ability to cope with its environment.
While some behaviours are genetically determined, many others are learned through experience
and interaction with the environment. However, learned behaviours are not passed to future
generations, since genes are not altered by experience. What can be inherited is the capacity to
learn—suggesting that even learning ability must have a genetic foundation.
Definition of Instinct
Instincts are innate, goal-directed patterns of behavior that arise without learning or prior
experience. According to instinct theory, organisms are born with biologically hardwired
behaviours that help ensure survival. These instinctive behaviours are automatic responses to
specific stimuli and are not learned, but inherited through evolution.
Instinct theory suggests that all behaviours are driven by instincts rooted in our biological makeup.
This means human actions, desires, and thoughts are naturally programmed. People act in certain
ways because their genes predispose them to do so. These inborn tendencies motivate behavior
from birth and shape how individuals interact with the world.
INSTINCT THEORIES
• William James
• William McDougall
WILLIAM JAMES
James held that humans possess a large number of instincts, including rivalry, curiosity, fear, modesty, sympathy, and sociability, among others.
This classification attempted to cover both social and individual motivational factors in human behavior.
Criticism of James
Critics argued that James failed to clearly distinguish between reflex, instinct, and learned
behavior, creating conceptual confusion.
WILLIAM McDOUGALL
1. Kuo (1921)
Kuo argued that:
2. Tolman (1923)
Tolman criticized the instinct concept for being descriptive, not explanatory. Saying someone is
curious doesn’t explain their behavior—it simply labels it. He believed:
Instincts like "playfulness" or "curiosity" do not identify causes of behavior; they simply label it.
The line between instinctive and learned behavior is blurry.
Concepts suggesting all knowledge is pre-existing (as in Plato's philosophy) are unscientific and vague.
The theory confuses habits and instincts, which should be kept distinct.
ETHOLOGY
1. Consummatory Behavior
These are the final, stereotyped, largely innate acts that complete a behavioral sequence (e.g., eating, copulation).
2. Appetitive Behavior
These are searching, flexible, and modifiable responses initiated before the
consummatory act.
They are influenced by learning and help the organism find the appropriate
stimulus or goal (e.g., searching for food).
Each instinctive behavior has a source of internal energy called Action Specific Energy (ASE). The
behavior is held back by an internal mechanism called the Innate Releasing Mechanism (IRM),
which is like a lock. A Key Stimulus (or Sign Stimulus) serves as the key that "unlocks" the behavior.
These stimuli can be:
Key stimuli are often simple configurations that effectively release a behavior.
Example: Male three-spined sticklebacks show aggression to other males due to their red
bellies, which act as a key stimulus.
Sometimes, artificial stimuli can trigger a stronger response than the natural one. These are
called supernormal stimuli or superoptimal stimuli.
Example 1: Ringed plovers prefer eggs with exaggerated markings for incubation.
Example 2: Female sticklebacks prefer larger-than-normal dummy males due to their
perceived better reproductive fitness.
Fixed Action Pattern (FAP)
Behaviours released by key stimuli are termed Fixed Action Patterns. These are instinctive, species-
specific motor acts that are relatively invariant and independent of learning. They have several
distinct properties described by Moltz:
1. Stereotyped
o FAPs are mostly invariable in form, though minor variations may exist.
o Example: A graylag goose continues the egg-retrieval behavior even if the egg is
removed mid-action.
2. Independent of Immediate External Control
o Once triggered, a FAP tends to continue to completion even if the releasing stimulus is removed, as the goose example above shows.
3. Spontaneous
o FAPs may occur without an external stimulus when internal energy builds up over
time (called vacuum activity).
o The longer the behavior is suppressed, the more likely it will occur spontaneously.
4. Independent of Learning
o FAPs are innate and not modified by learning. They are hardwired and uniform
across the species.
While FAPs are fixed and pre-programmed, Taxis are unlearned but directed movements in
response to stimuli.
Example: In the goose egg-retrieval example, the side movements of the bill to keep the egg
aligned are taxis, as they are responsive to the egg’s position.
Reaction Chains
Behavior often unfolds in sequences called reaction chains, where each FAP sets the stage for the
next. This chaining is stimulus-dependent but may include gaps filled by learned behaviours. Lorenz
referred to such blending of instinctive and learned responses as instinct-conditioning intercalation.
An example is the courtship behavior of sticklebacks, where visual stimuli lead to a chain of ritualized
actions culminating in fertilization.
HEDONISM
Hedonism is the view that behavior is driven by the pursuit of pleasure and the avoidance of pain.
According to this framework, stimuli acquire motivational significance through their associations
with pleasurable or painful experiences. Philosophers like Hobbes argued that all actions are
motivated by this hedonistic principle. Spencer added an evolutionary perspective, suggesting that
pleasurable behaviours aid in survival, and therefore both pain-reducing and pleasure-seeking
behaviours became adaptive over time. This perspective aligns with Thorndike’s Law of Effect, which
states that behaviours followed by satisfying outcomes are likely to be repeated.
Troland distinguished three classes of stimulation according to their hedonic effects:
Beneception: Elicited by pleasurable stimuli (e.g., sweet tastes, erotic stimuli, pleasant
smells).
Nociception: Triggered by unpleasant or painful stimuli (e.g., bitter tastes, sharp pain,
repugnant odors).
Neutroception: Stimuli that do not evoke strong positive or negative emotions (e.g., visual
or auditory input under normal conditions).
THEORIES OF HEDONISM
3. Extensity – range of cues that can activate the motive or its resistance to extinction.
Emotion and motivation were interpreted in this context as mechanisms to account for variations in
arousal or behavioral vigor, with several physiological indicators (like skin conductance and EEG
patterns) overlapping with traditional measures of emotion, stress, and conflict. Thus, activation
theory bridges physiology and motivational-emotional processes, focusing on arousal level as a key
determinant of behavior.
HAROLD SCHLOSBERG
Activation Continuum and Emotion
Schlosberg proposed an activation continuum, ranging from sleep (lowest activation) to intense
emotions like blind rage (highest activation). Emotions of varying intensity occupy different positions
along this continuum. While emotion and activation are not identical, Schlosberg viewed emotion as
a designation of arousal level. The key challenge, then, is to quantify arousal and link it to
behavioral performance.
DONALD B. LINDSLEY
Ascending fibres of the RAS project to the cortex, facilitating wakefulness, alertness, and
attention.
Function of RAS
The RAS acts as a central control hub that modulates cortical arousal and behavioral readiness. It
facilitates sensory filtering, motor coordination, and autonomic balance, making it essential for
adaptive, goal-directed behavior. For Lindsley, while emotion might not be the central focus, the
arousal states underlying emotion are deeply tied to RAS activity.
Tolman emphasized that behavior must be studied holistically rather than being broken down into
stimulus–response chains. Unlike Hull's reductionist approach, Tolman believed that behavior is
molar—meaningful, organized, and goal-directed. He argued that behavior is not just a sequence of
muscular responses, but involves purpose and cognition.
Characteristics of Molar Behavior
1. Goal-directedness – behavior aims toward achieving specific outcomes (e.g., a hungry rat
seeks food).
2. Persistence – behavior continues until the goal is achieved.
3. Selectivity – the most efficient path to the goal is typically chosen.
Behavior is guided by expectancies and cognitive maps—internal representations of the
environment rather than simple response chains.
Tolman rejected the idea that learning is a matter of chaining S–R links. Instead, he proposed that
organisms form cognitive maps—mental representations of spatial relationships—and develop
expectancies about how behaviours will lead to goals. Learning thus involves understanding the
location of goals and the best paths to reach them, rather than memorizing specific responses.
Kurt Lewin also advocated a molar approach and introduced field theory, which proposes that
behavior results from the total forces (field) acting upon an individual at a given time. This dynamic
system reflects a balance between internal needs and environmental influences, similar to how a
kite’s flight is determined by various interacting forces.
Lewin expressed behavior (B) as a function of both the person (P) and the psychological environment (E): B = f(P, E). He highlighted that internal states (like needs and tensions) and psychological facts (such as knowledge or perception of the environment) jointly influence behavior.
Lewin described the person as composed of regions. The inner-personal region contains various
needs, both physiological (e.g., thirst) and psychological or quasi-needs (e.g., finishing a task). These
needs create tensions, which motivate behavior to restore balance.
Tension is the internal motivational force. It can be reduced either by spreading throughout the
inner regions or by being discharged through locomotion—goal-directed behavior that relieves
tension by interacting with the environment (e.g., drinking water to reduce thirst).
Hull claimed reinforcement strengthens the S–R connection only if it reduces a drive. Without a
drive, no reinforcement occurs. Learning, then, depends on drive-induced behavior followed by
drive reduction—making learning impossible without motivation.
Hull proposed a general pool of energy called generalized drive, which can be activated by multiple
needs (hunger, thirst, sex). Drive is nonspecific—it energizes behavior but doesn’t direct it. Drive
stimuli (Sd)—internal sensations like stomach contractions—provide directionality by connecting to
behaviours previously reinforced.
Drive stimuli (e.g., hunger pangs) function like external cues. They become associated with specific
actions (e.g., opening the fridge) that reduce drive. Over time, the organism learns to respond to
different Sd’s (hunger vs. thirst) based on past reinforcement—demonstrating discrimination and
learned behavior patterns.
Primary Drives
Primary drives are innate and tied to biological needs (e.g., hunger, thirst, sex). Drive strength
depends on deprivation duration. Drive and habit strength together determine reaction potential,
the likelihood of performing a behavior.
In the classic Perin–Williams experiments, rats were trained to press a bar for food under 23-hour food deprivation (high drive). Later, in extinction trials (no reinforcement), rats with more training pressed more. Rats tested under higher drive during extinction (Williams: 22 hours of deprivation) responded more than those tested under lower drive (Perin: 3 hours), supporting Hull's sER = sHR × D model.
Interpretation of Experiment 1
Findings showed:
All rats had equal training, but drive levels (hours of deprivation) varied. Higher deprivation led to
more bar presses in extinction. Even zero-deprivation rats responded slightly, which Hull attributed
to residual drives, reinforcing the idea of a generalized drive pool.
Hull added incentive motivation (K) to account for the effects of reward quality and quantity. Final formula:
sER = sHR × D × K
Performance is stronger for high-value rewards (e.g., steak vs. tasteless burger), even if drive is
constant. Incentive is learned (via classical conditioning), not innate.
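Because the formula is multiplicative, a change in any single factor scales performance directly, which is how Hull accommodated the sudden runway-speed shifts described earlier. A quick numerical illustration (all values hypothetical):

# Hull (1952): reaction potential = habit strength x drive x incentive.
def reaction_potential(sHR, D, K):
    return sHR * D * K

sHR, D = 0.8, 0.5                                          # fixed habit and drive
print(f"small reward: {reaction_potential(sHR, D, K=0.4):.2f}")   # 0.16
print(f"large reward: {reaction_potential(sHR, D, K=1.5):.2f}")   # 0.60
# Switching reward size changes performance immediately through K,
# with no change at all in the learned habit strength (sHR).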
Hull eventually revised his theory: drive reduction alone could not explain all motivated behavior. He
introduced drive stimuli reduction—reinforcement occurs when internal sensations (like hunger
pangs) are reduced, even before homeostasis is fully restored. This made his theory more flexible
but also more complex.
KENNETH SPENCE
Discrimination Learning
Spence’s discrimination learning involved animals choosing between two stimuli, with one
consistently reinforced and the other not. He proposed seven key assumptions: reinforced stimuli
increase habit strength, non-reinforced stimuli build inhibition, and both generalize to similar
stimuli. The algebraic combination of habit strength and inhibition determines approach or
avoidance. The stimulus with the highest net habit strength is chosen.
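Spence's algebraic rule is easy to sketch: reinforced trials add habit strength, nonreinforced trials add inhibition, both generalize partially to the similar stimulus, and the animal approaches whichever stimulus has the higher net strength. All the increments and the generalization fraction below are invented for illustration:

# Toy version of Spence's discrimination-learning assumptions.
H_INC, I_INC = 0.10, 0.06   # habit / inhibition gained per trial (hypothetical)
GENERALIZE = 0.5            # fraction spreading to the similar stimulus

habit = {"S+": 0.0, "S-": 0.0}
inhib = {"S+": 0.0, "S-": 0.0}

for _ in range(20):                       # 20 training trials
    habit["S+"] += H_INC                  # S+ is always reinforced
    habit["S-"] += H_INC * GENERALIZE     # habit generalizes to S-
    inhib["S-"] += I_INC                  # S- is never reinforced
    inhib["S+"] += I_INC * GENERALIZE     # inhibition generalizes to S+

net = {s: habit[s] - inhib[s] for s in habit}   # algebraic combination
print(net, "->", max(net, key=net.get))         # S+ has the higher net strength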
SELF-DETERMINATION THEORY (SDT)
Developed by Deci and Ryan, SDT emphasizes intrinsic motivation as the natural tendency to
explore, learn, and grow. Intrinsic motivation leads to greater interest, performance, and well-being.
It arises from internal sources rather than external rewards or pressures. SDT proposes three basic psychological needs:
1. Autonomy – Desire to act in line with one's own values and choices.
2. Competence – Need to feel effective and successful in one’s actions.
3. Relatedness – Need to feel connected and cared for in social relationships.
When satisfied, these needs enhance intrinsic motivation and psychological growth.
Social Environment and Motivation
SDT identifies three social environment factors:
FLOW
Flow is a psychological state of complete absorption in an activity that is challenging yet matched to
one’s skills. It is deeply engaging and intrinsically rewarding.
Characteristics of Flow
Flow Quadrants
Flow occurs when both challenge and skill are high (1:1 ratio).
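That ratio idea amounts to a two-way classification on challenge and skill. A minimal sketch using the commonly cited quadrant labels (anxiety, boredom, apathy); the 0.5 midpoint is an arbitrary illustrative cut-off:

def flow_quadrant(challenge: float, skill: float, mid: float = 0.5) -> str:
    # Classify a (challenge, skill) pair, each scaled 0-1.
    if challenge >= mid and skill >= mid:
        return "flow"       # high challenge matched by high skill
    if challenge >= mid:
        return "anxiety"    # challenge outstrips skill
    if skill >= mid:
        return "boredom"    # skill outstrips challenge
    return "apathy"         # both low

print(flow_quadrant(0.9, 0.8))  # flow
print(flow_quadrant(0.9, 0.2))  # anxiety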
REGULATORY FOCUS
1. Promotion Focus
In promotion focus, individuals strive to attain gains, growth, and self-enhancement. Success
in this domain leads to feelings of joy, while failure results in sadness or disappointment.
2. Prevention Focus
In prevention focus, individuals aim to avoid harm and maintain safety. Achieving these
goals brings relief; failing to do so results in anxiety or fear due to persistent threats.
CURIOSITY
Curiosity, an internal motivational factor, drives individuals to explore. It arises from an optimal
discrepancy between current knowledge and what could be learned, stimulated by novelty and
interest.
Types of Curiosity
There are two main types: sensory curiosity (triggered by sensory changes) and cognitive curiosity
(triggered by gaps in understanding, encouraging cognitive restructuring).
Historical Roots
Early thinkers like Cicero saw curiosity as a “passion for learning.” Psychology originally viewed it
with suspicion but later accepted its motivational significance, especially in non-homeostatic
behaviours like maze exploration in rats.
Psychoanalytic View
Freud linked curiosity to the sexual drive, particularly in early childhood, with a shift toward more
cognitive exploratory behavior under social constraints.
Blarer’s View
Blarer emphasized that curiosity is intrinsic to perception and experience, laying the foundation for
intrinsic motivation theories.
Berlyne’s Theory
Berlyne proposed that curiosity is driven by arousal discrepancies caused by novelty, complexity, and
surprise, and that exploratory behavior helps maintain optimal arousal.
Arousal Models
Fiske & Maddi distinguished arousal (bodily response) from activation (central nervous system
readiness), emphasizing medium arousal as optimal for exploration.
Cultural Variability
Though cross-cultural similarities in curiosity exist, attitudes and opportunities for exploration vary.
Zuckerman’s concept of sensation seeking captures the desire for intense and novel experiences,
even at personal risk.
Cross-Cultural Research
Berlyne’s cross-cultural work confirms that stimulus demand characteristics transcend cultural
boundaries. Nonetheless, curiosity must be understood within specific cultural contexts.
Biological Influences
Temperament influences curiosity—stable or extroverted children tend to explore more. High
anxiety inhibits exploration due to a focus on returning to homeostasis and managing survival
threats.
Self-Determination Theory
This theory emphasizes three innate needs—competence, relatedness, and autonomy—as
motivators of exploration and curiosity. Internalization of values supports persistence and goal
pursuit.
SENSATION SEEKING
Biological Correlates
Low monoamine oxidase (MAO) levels are linked to sensation seeking due to their effects on
dopamine, serotonin, and norepinephrine—neurochemicals associated with pleasure and reward.
UNIT 4
EMOTION
Emotion
Reactions consisting of subjective cognitive states, physiological reactions, and expressive behaviors.
Robert Plutchik (2003) has identified eight primary emotions. These are fear, surprise, sadness,
disgust, anger, anticipation, joy, and trust (acceptance).
Theories of emotion
James-Lange Theory: emotion is the perception of the body's physiological reaction to a stimulus (we feel afraid because we tremble).
Cannon-Bard Theory: the emotion-provoking stimulus triggers physiological arousal and the subjective emotional experience simultaneously, via the thalamus.
Schachter–Singer two-factor theory: emotion results from physiological arousal plus a cognitive label that interprets the arousal in context.
Components of emotion
Measurement of emotion
Self-report measures
Day reconstruction
Experience sampling
Real-time technique
Observational method
Development of emotions
Facial Feedback Hypothesis
The facial feedback hypothesis posits that facial expressions are not only the result of emotions but
can also influence and intensify those emotions. According to this idea, when a person smiles, even
artificially, sensory feedback from the facial muscles is sent to the brain, which then enhances the
feeling of happiness; similarly, frowning can deepen feelings of sadness or anger. This suggests that
emotions are not purely mental experiences but are shaped by physical expression. Research
supporting this includes studies where participants reported feeling happier when asked to hold a
pen between their teeth (mimicking a smile), indicating that facial movement alone can influence
emotional experience.
Joseph LeDoux proposed that the amygdala is central to the neural circuits involved in processing
fear. Sensory inputs reach the thalamus, which directs the information via two distinct pathways:
A fast, subcortical route to the amygdala allows for immediate emotional responses—
especially to threats—triggering autonomic arousal and hormonal changes.
A slower, cortical route processes the information for detailed evaluation.
This dual pathway explains how fear can be triggered unconsciously, before full cognitive
awareness, and why individuals with anxiety disorders may experience irrational fear
responses.
Charles Darwin believed that human emotional expressions are natural and not learned, especially
facial expressions. He supported this idea by watching his own children and talking to people in
remote cultures. Later, Ekman and Friesen (1971) confirmed this by showing that people from an
isolated tribe in New Guinea could recognize Western facial expressions and also made similar
expressions themselves. Studies have also found that blind and sighted children show similar facial
expressions, suggesting that these are inborn. However, it's still unclear whether other forms of
emotional communication, like tone of voice or body posture, are learned or partly innate.
RECOGNITION
We recognize other people's emotions mainly through what we see and hear—facial expressions
and tone of voice. Research shows that the right hemisphere of the brain plays a bigger role than the
left in understanding emotions, especially negative ones. For example, emotional cues are better
recognized through the left ear and left visual field, which connect to the right hemisphere. While
the left side of the brain processes the literal meaning of speech, the right side processes the
emotional tone. Studies have shown that people with right hemisphere damage can understand the
situation but struggle to judge emotions from faces or gestures. One case showed a man who
couldn’t understand words but could still detect the emotion in the tone of voice, proving that tone
and word meaning are processed separately. Damage to the right somatosensory cortex also affects
emotion recognition, as it impairs the ability to sense facial expressions. Recognition of faces and
recognition of emotional expressions involve different brain areas—damage to the visual association
cortex may cause face blindness (prosopagnosia), but not affect emotion recognition. The amygdala
plays a key role in recognizing emotional expressions, especially fear. Brain scans show higher
amygdala activity when people view fearful faces. The basal ganglia are also involved—damage here
can impair recognition of disgust, as seen in people with Huntington’s disease or OCD.
EXPRESSION
Facial expressions of emotion are mostly automatic and involuntary. Ekman and Davidson confirmed
Duchenne de Boulogne’s early finding that genuine smiles (called Duchenne smiles) involve the
contraction of muscles near the eyes, unlike fake or polite smiles. This insight even influenced acting
methods, like those of Stanislavsky. Two neurological disorders show how different brain systems
control emotional expression: in volitional facial paresis, people can’t move their facial muscles
voluntarily but can still show genuine emotional expressions; in emotional facial paresis, the reverse
happens—they can move their facial muscles voluntarily but can’t show emotions on one side of the
face. The right hemisphere of the brain plays a major role in expressing emotions. Research shows
that the left side of the face (controlled by the right hemisphere) tends to show stronger emotional
expression. People with right hemisphere damage have trouble expressing emotion both in facial
expressions and in tone of voice. Interestingly, the amygdala, though essential for recognizing
emotions, is not necessary for expressing them. For example, a woman who had her amygdala
removed could still show facial expressions even though she could no longer recognize them in
others.
SOURCES OF STRESS
COPING STYLES
STRESS
Hans Selye’s work on stress led to the development of the General Adaptation Syndrome (GAS),
which describes the body’s typical physiological response to any stressor. He observed that stress
consistently led to changes like enlargement of the adrenal cortex, shrinkage of the thymus and
lymph glands, and ulcers. GAS occurs in three stages: alarm, resistance, and exhaustion. In the
alarm stage, the body mobilizes to face the threat. In the resistance stage, it tries to cope with the
stressor. If the stress continues and coping fails, the body enters exhaustion, where physiological
resources are depleted.
Alarm Reaction
The alarm stage triggers the fight-or-flight response. The sympathetic nervous system activates key
organs and releases hormones like epinephrine and norepinephrine. Simultaneously, the HPA axis is
activated, causing the pituitary to release ACTH, which stimulates the adrenal glands to produce
cortisol. These changes speed up necessary functions and suppress non-essential ones. Symptoms
include fatigue, muscle pain, fever, headache, upset stomach, shortness of breath, and low energy.
Stage of Resistance
If the stressor continues, the body enters the resistance stage, where it adjusts to the persistent
threat. The initial sympathetic arousal reduces, but HPA activation remains high. Although the body
might appear to function normally, its ability to resist new stressors declines. Hormone levels stay
elevated, and this prolonged strain may lead to diseases of adaptation, like hypertension or ulcers.
The body's defences stay active, but symptoms of the alarm stage may fade temporarily.
Stage of Exhaustion
When stress is prolonged and resources are depleted, the exhaustion stage begins. The body
becomes vulnerable to illness as the immune system weakens. Common signs include irritability,
apathy, anxiety, and mental fatigue. Behaviourally, people may withdraw, neglect responsibilities,
and make poor decisions. Physically, they may suffer from frequent illness, overuse of medication,
and chronic fatigue. If unresolved, this stage can result in severe health consequences or even death.
Stress affects individuals of all ages—infants, children, and adults. Although sources vary by life
stage, stress can arise from within the person, family dynamics, and wider social or environmental
factors. Each source contributes uniquely to psychological and physical well-being.
Physical illness places both physical and emotional demands on individuals. Children tend to cope
better with illness than the elderly, whose immune systems weaken over time. Additionally, the
meaning and impact of illness varies by age. For adults, chronic illness often brings worry about
current and future challenges, increasing stress levels.
Stress often arises from internal motivational conflicts. These occur when individuals face
competing desires or goals. Common types of conflict include:
Approach-Approach Conflict: Choosing between two equally desirable options (e.g., two
great job offers).
Avoidance-Avoidance Conflict: Choosing between two undesirable options (e.g., studying
for a tough exam vs. failing it).
Approach-Avoidance Conflict: One option has both attractive and unattractive features (e.g.,
eating dessert vs. gaining weight).
Such conflicts are stressful because they create tension between opposing desires, and the
consequences of making the "wrong" choice may feel significant.
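As a toy illustration of these three types, the Python sketch below classifies a choice by the valence of its options. The Option structure and its fields are invented for this example; it is not a clinical tool.

```python
# Toy classifier for the three motivational conflict types. The Option
# structure and its fields are assumptions made for this illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Option:
    name: str
    attractive: bool   # has desirable features
    aversive: bool     # has undesirable features

def classify_conflict(a: Option, b: Optional[Option] = None) -> str:
    if b is None:
        # a single option with both attractive and unattractive features
        return "approach-avoidance" if a.attractive and a.aversive else "no conflict"
    if a.attractive and b.attractive:
        return "approach-approach"     # two desirable options
    if a.aversive and b.aversive:
        return "avoidance-avoidance"   # two undesirable options
    return "no clear conflict"

# Examples mirroring the text
print(classify_conflict(Option("job offer A", True, False),
                        Option("job offer B", True, False)))    # approach-approach
print(classify_conflict(Option("study for exam", False, True),
                        Option("fail the exam", False, True)))  # avoidance-avoidance
print(classify_conflict(Option("eat dessert", True, True)))     # approach-avoidance
```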
Family life is a major source of stress due to interpersonal relationships and shared responsibilities.
Financial strain, household disagreements, and conflicting goals are common stress triggers. Major
family stressors include new family members, marital conflict or divorce, and illness or death in the
family.
The arrival of a newborn can be a joyful yet stressful event. Mothers may face emotional and
physical challenges, while fathers often feel pressure to provide financially. A baby’s temperament
—whether easy-going or difficult—greatly affects parental stress. Difficult babies resist routines and
are harder to soothe, which can strain parental coping.
Pregnancy itself can be a source of stress, especially when external pressures or emotional strain
are present. High stress levels during pregnancy can negatively affect the baby’s health, potentially
leading to low birth weight or premature delivery.
Persistent marital conflict heightens physiological stress responses, such as elevated cortisol levels and blood pressure. Divorce
disrupts the lives of all family members. Children may face changes in routine, living situations, and
caregiver roles. While most families adapt over time, some children show lasting effects.
Chronic illness in a child can lead to long-term family stress, sometimes resulting in PTSD-like
symptoms. Adult illness or disability affects income, time, and emotional balance. The death of a
loved one has profound psychological effects. For children, losing a parent can be traumatic and
difficult to understand. Spouses may struggle with loss of identity, purpose, and hope, and this
emotional toll can damage long-term health.
External stressors include school, jobs, competition, and environmental demands. People in high-
pressure jobs, like healthcare, often face intense stress. Poor relationships at work or school, lack of
recognition, and job insecurity also increase stress. Children may face academic stress, while adults
with low socioeconomic status or who experience discrimination (based on race, class, or gender
identity) are at higher risk for chronic stress and its health consequences.
COPING STYLE
Coping style refers to an individual's consistent way of responding to stress. It reflects how people
typically manage challenging situations, and these tendencies can influence both short-term
reactions and long-term psychological outcomes. Major coping styles include approach versus avoidance coping, problem-focused and emotion-focused coping, emotional-approach coping, and proactive coping.
Approach coping involves directly confronting the stressor. People who use this style tend to face
problems head-on, plan solutions, seek social support, and take goal-directed action. For example,
talking to a manager about work stress or studying ahead of an exam are approach strategies. This
type of coping is usually constructive and problem-solving oriented.
In contrast, avoidance coping focuses on escaping the stressor or distracting oneself from it.
Individuals using this strategy may procrastinate, deny the issue, oversleep, or spend time on social
media to avoid confronting their emotions. While avoidance may bring short-term relief, it often
leads to greater stress over time, as the root problem remains unresolved.
Problem-focused coping aims at tackling the issue directly by taking concrete actions to change the
stressful situation. This method is particularly useful for controllable stressors, such as workplace
challenges or academic deadlines.
On the other hand, emotion-focused coping centres on managing the emotional distress that comes
with a stressful event. This is more helpful when the situation cannot be changed, such as coping
with chronic illness or loss. Effective stress management often requires flexibility—people who can
shift between these strategies depending on the situation tend to cope better overall.
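The controllability rule of thumb above can be captured in a small decision sketch. The function and labels below are assumptions made for illustration; effective coping is more flexible than any two-way rule.

```python
# Illustrative decision rule matching coping strategy to stressor
# controllability. The function name and categories are assumptions for
# this sketch, not a clinical guideline.

def suggest_coping(stressor: str, controllable: bool) -> str:
    if controllable:
        # e.g., workplace challenges, academic deadlines
        return f"problem-focused: take concrete action on '{stressor}'"
    # e.g., chronic illness, bereavement
    return f"emotion-focused: manage the distress around '{stressor}'"

print(suggest_coping("approaching exam deadline", controllable=True))
print(suggest_coping("chronic illness in the family", controllable=False))
```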
Emotional-Approach Coping
Emotional-approach coping involves deliberately acknowledging, processing, and expressing the emotions that a stressful event arouses, rather than suppressing them. Unlike avoidant emotion-focused strategies, actively working through feelings in this way has been associated with better adjustment, especially for stressors that cannot be changed.
Proactive Coping
Proactive coping refers to efforts made in advance to prevent or prepare for potential stressors. It
begins with anticipating challenges by reflecting on past experiences or upcoming responsibilities.
After identifying these potential stressors, individuals develop strategies such as saving money,
building social support, or learning new skills to strengthen their resources.
Additionally, proactive coping includes having contingency plans and adopting a positive, growth-
oriented mindset. People who frame difficulties as opportunities and remain optimistic about their
ability to handle stress are more resilient. This style fosters confidence, emotional stability, and
better long-term coping.
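For illustration, the steps above can be organized as a simple plan structure. The Python sketch below uses invented field names that merely mirror the steps described; it is not an established instrument.

```python
# Sketch of a proactive coping plan as a plain data structure. Field names
# are hypothetical and simply mirror the steps described in the text.
from dataclasses import dataclass, field

@dataclass
class ProactiveCopingPlan:
    anticipated_stressors: list = field(default_factory=list)  # anticipate challenges
    resources_to_build: list = field(default_factory=list)     # strengthen resources
    contingency_plans: list = field(default_factory=list)      # prepare for setbacks
    reframing_notes: list = field(default_factory=list)        # growth-oriented mindset

plan = ProactiveCopingPlan(
    anticipated_stressors=["final exams", "job interview"],
    resources_to_build=["save money", "join a study group"],
    contingency_plans=["backup revision schedule"],
    reframing_notes=["each interview is practice for the next"],
)
print(plan)
```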