
Model-driven Test Generation

Oleg Sokolsky

September 22, 2004


Outline and scope
• Classification of model-driven testing
• Conformance testing for communication
protocols
• Coverage-based testing
– Coverage criteria
– Coverage-based test generation
• Can we do more? (open questions)
Testing classification
• By component level
– Unit testing
– Integration testing
– System testing
• By abstraction level
– Black box
– White box
– Grey box ???
Testing classification
• By purpose
– Functional testing
– Performance testing
– Robustness testing
– Stress testing
• Who performs testing?
– Developers
– In-house QA
– Third-party
Functional testing
• An implementation can exhibit a variety of
behaviors
• For each behavior, we can tell whether it is
correct or not
• A test can be applied to the implementation
and accept or reject one or more behaviors
– The test fails if a behavior is rejected
• A test suite is a finite collection of tests
– Testing fails if any test in the suite fails
Formal methods in testing
• “Testing can never demonstrate the absence of
errors, only their presence.”
Edsger W. Dijkstra
• How can formal methods help?
• Add rigor!
– Reliably identify what should be tested
– Provide basis for test generation
– Provide basis for test execution
Model-driven testing
• Rely on a model of the system
– Different interpretations of a model
• Model is a requirement
– Black-box conformance testing
– QA or third party
• Model is a design artifact
– Grey-box unit/system testing
– QA or developers
Conformance testing
• A specification prescribes legal behaviors
• Does the implementation conform to the
specification?
– Need the notion of conformance
• Not interested in:
– How the system is implemented
– What went wrong if an error is found
– What else the system can do
Test hypothesis
• How do we relate beasts of different species?
– Implementation is a physical object
– Specification is a formal object
• Assume there is a formal model that is faithful
to implementation
– We do not know it!
• Define conformance between the model and
the specification
– Generate tests to demonstrate conformance
Conformance testing with LTS
• Requirement is specified as a labeled transition
system
• Implementation is modeled as an input-output
transition system
• Conformance relation is given by ioco
– [Tretmans96]
– Built upon earlier work on testing preorders
Historical reference
• Process equivalences:
– Trace equivalence/preorder is too coarse
– Bisimulation/simulation is too fine
• Middle ground:
– Failures equivalence in CSP
– may- and must-testing by Hennessy
– Testing preorder by de Nicola
– They are all the same!
• Right notion but hard to compute
Testing architecture
• Implementation relation
• Test generation algorithm
• Test execution engine
Input-Output Transition Systems

[Figure: a vending-machine LTS with states S0..S4: from S0, ?dime leads to S1 and ?nickel to S2; from S1, !coffee leads to S3; from S2, !tea leads to S4.]
• Inputs (from user to machine): initiative with the user; the machine cannot refuse
• Outputs (from machine to user): initiative with the machine; the user cannot refuse
• LI = { ?dime, ?nickel }, LU = { !coffee, !tea }
• LI ∩ LU = ∅, LI ∪ LU = L
Input-Output Transition Systems
• IOTS(LI, LU) ⊆ LTS(LI ∪ LU)
• An IOTS is an LTS with an input-output partition of its actions and with always-enabled inputs:
– for all states s, for all inputs ?a ∈ LI, s has a ?a-transition
[Figure: the vending machine made input-enabled by adding ?dime and ?nickel self-loops to every state.]
• LI = { ?dime, ?nickel }, LU = { !coffee, !tea }
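
As a concrete reading of the always-enabled-inputs condition, here is a minimal Python sketch (not from the slides); the dict-of-sets encoding of an LTS and all function names are assumptions of the sketch.

    # Each state maps to a set of (label, successor) pairs; labels starting
    # with '?' are inputs, '!' outputs.
    def is_input_enabled(transitions, inputs):
        """True iff every state enables every action in 'inputs'."""
        return all(inputs <= {label for (label, _) in edges}
                   for edges in transitions.values())

    # The vending machine above, made input-enabled by self-loops that
    # silently swallow coins in every state.
    LI = {"?dime", "?nickel"}
    machine = {
        "s0": {("?dime", "s1"), ("?nickel", "s2")},
        "s1": {("!coffee", "s3"), ("?dime", "s1"), ("?nickel", "s1")},
        "s2": {("!tea", "s4"), ("?dime", "s2"), ("?nickel", "s2")},
        "s3": {("?dime", "s3"), ("?nickel", "s3")},
        "s4": {("?dime", "s4"), ("?nickel", "s4")},
    }
    assert is_input_enabled(machine, LI)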
Preorders on IOTS
• An implementation i is related to a specification s by an implementation relation imp: i imp s
• i ∈ IOTS(LI, LU), s ∈ LTS(LI ∪ LU), imp ⊆ IOTS(LI, LU) × LTS(LI ∪ LU)
• Both are observed through environments e: system inputs interact with environment outputs, and vice versa
• i imp s ⟺ ∀e ∈ E . obs(e, i) ⊆ obs(e, s)
– where the environments range over IOTS(LU, LI)
Input-Output Testing Relation
• i ∈ IOTS(LI, LU), s ∈ LTS(LI ∪ LU)
• i iot s ⟺ ∀e ∈ IOTS(LU, LI) . obs(e, i) ⊆ obs(e, s)
• obs(e, p) = ( traces(e||p), qtraces(e||p) )
• qtraces(p) = { σ ∈ L* | p after σ refuses LU }
Testing preorders – a side note
• One of the reasons for using IOTS instead of LTS is that iot is computationally simpler than the conventional testing preorder
– The testing preorder requires us to compare sets of pairs (trace, refusal set)
– iot, in contrast, lets us use inclusion of weakly quiescent traces:
• inputs can never be refused by i, and outputs can never be refused by e
• hence i after σ refuses A ⟹ A = ∅ or A = LU
Representing quiescence
• Extend the IOTS with quiescence transitions, labeled δ
– yielding deterministic δ-trace automata
[Figure: the vending machine p and its δ-annotated version p_δ; every state that enables no output (before a coin is inserted and after a drink is dispensed) gains a δ self-loop.]
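
A sketch of the δ annotation in the same encoding (DELTA and add_quiescence are our names): a state is quiescent when it enables no output, and the annotated automaton records this as a δ self-loop.

    DELTA = "delta"

    def add_quiescence(transitions, outputs):
        """Copy the LTS, adding a delta self-loop to every quiescent
        state (a state with no enabled output)."""
        suspended = {}
        for state, edges in transitions.items():
            new_edges = set(edges)
            if not any(label in outputs for (label, _) in edges):
                new_edges.add((DELTA, state))
            suspended[state] = new_edges
        return suspended

    LU = {"!coffee", "!tea"}
    p_delta = add_quiescence(machine, LU)
    assert (DELTA, "s0") in p_delta["s0"]      # quiescent before any coin
    assert (DELTA, "s1") not in p_delta["s1"]  # !coffee is enabled in s1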
Conformance relation ioconf
• Finally…
i iot s ⟺ ∀σ ∈ L* . out(i after σ) ⊆ out(s after σ)
• Allow underspecification by restricting attention to the traces of s:
i ioconf s =def ∀σ ∈ traces(s) . out(i after σ) ⊆ out(s after σ)
• ioconf_F: use an arbitrary trace set F instead of traces(s)
• The conformance relation ioco additionally accounts for repetitive quiescence
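
For small finite systems the definition can be checked by brute force. A sketch continuing the conventions above; after, out, and ioconf are our names, and the finite trace set spec_traces must be supplied by the caller.

    def after(transitions, states, trace):
        """States reachable from 'states' by following 'trace'."""
        for action in trace:
            states = {t for s in states
                      for (a, t) in transitions[s] if a == action}
        return states

    def out(transitions, states, outputs):
        """Enabled outputs in any of 'states', reading quiescence as delta."""
        result = set()
        for s in states:
            enabled = {a for (a, _) in transitions[s] if a in outputs}
            result |= enabled if enabled else {DELTA}
        return result

    def ioconf(impl, i0, spec, s0, outputs, spec_traces):
        for sigma in spec_traces:
            i_states = after(impl, {i0}, sigma)
            s_states = after(spec, {s0}, sigma)
            if i_states and not out(impl, i_states, outputs) <= out(spec, s_states, outputs):
                return False
        return True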
Test cases
• A test case is a deterministic IOTS(LU, LI) with finite behaviors
– Note the reversed inputs and outputs
– No choice is allowed between outputs, or between an input and an output
• Verdict function λ : S → {fail, pass}
• Test run: i passes t =def (i||t) after σ deadlocks ⟹ λ(t after σ) = pass
[Figure: a test that supplies a dime and then observes the machine: !coffee leads to pass, !tea to fail.]
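
To make the test-run definition concrete, a small sketch in the same encoding; pairing an implementation's '!x' with the tester's '?x' by bare name is a simplification of ours, and quiescence observation is omitted.

    def passes(impl, i0, test, t0, verdict):
        """Explore all maximal runs of i||t; the test passes iff every
        deadlock is reached with verdict 'pass' at the test state."""
        stack, seen = [(i0, t0)], set()
        while stack:
            pair = stack.pop()
            if pair in seen:
                continue
            seen.add(pair)
            i, t = pair
            sync = [(i2, t2)
                    for (a, i2) in impl[i]
                    for (b, t2) in test[t]
                    if a.lstrip("?!") == b.lstrip("?!")]
            if not sync:                      # (i||t) deadlocks here
                if verdict.get(t) != "pass":
                    return False
            else:
                stack.extend(sync)
        return True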
Test generation
• A test suite Ts for a specification s is complete if:
i ioconf s ⟺ ∀t ∈ Ts . i passes t
• A test suite Ts is sound if:
i ioconf s ⟹ ∀t ∈ Ts . i passes t
• Complete test suites are usually infinite
– Aim at generating sound test suites
Test generation algorithm
• Gen( s, F )
– Choose non-deterministically:
1. t = stop and (t) = pass
a
2. t = a . Gen(’s, F after a), with s → ’s
(t) = pass
3. t   x.stop | x LU , xout ( S )   x.t x | xout ( S )
 (t )  pass if  out ( S )    F ; otherwise fail
 (stop)  fail if   F ; otherwise pass
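
A loose Python rendering of this scheme, assuming a deterministic specification without internal actions; here spec maps each state to a dict {action: successor}, F is a set of remaining traces (tuples), and the depth bound, gen's signature, and the nested-dict test representation are all artifacts of the sketch.

    import random

    def gen(spec, state, F, LU, depth):
        choice = random.choice((1, 2, 3)) if depth > 0 else 1
        if choice == 1:                              # 1. stop; verdict pass
            return {"verdict": "pass"}
        if choice == 2 and spec[state]:              # 2. prefix one action a
            a = random.choice(sorted(spec[state]))
            Fa = {t[1:] for t in F if t and t[0] == a}
            return {a: gen(spec, spec[state][a], Fa, LU, depth - 1)}
        outs = {a for a in spec[state] if a in LU}   # 3. observe all outputs
        test = {}
        for x in LU - outs:                          # unexpected output
            test[x] = {"verdict": "fail" if () in F else "pass"}
        for x in outs:                               # expected output: recurse
            Fx = {t[1:] for t in F if t and t[0] == x}
            test[x] = gen(spec, spec[state][x], Fx, LU, depth - 1)
        # observing quiescence: fine if spec is quiescent or trace left F
        test["verdict"] = "pass" if (not outs or () not in F) else "fail"
        return test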
Example
• F = { dime.coffee }
[Figure: the test generated over the δ-annotated vending machine:
– the initial state is quiescent, so observing δ there gives pass
– the test is output-enabled
– after ?dime, δ is not in F after dime, so observing quiescence gives fail
– the right output !coffee leads to pass; the wrong output !tea gives fail]
Test purposes
• Where does F come from?
• Test purposes:
– Requirements, use cases
– Automata, message sequence charts
• Test purposes represent “interesting” or
“significant” behaviors
– Define “interesting” or “significant”…
• Can we come up with test purposes
automatically?
Summary: conformance testing
• Advantages:
– Very rigorous formal foundation
– Size of the test suite is controlled by use
cases
• Disadvantages:
– How much have we learned about the system
that passed the test suite?
– Does not guarantee coverage
Coverage-based testing
• Traditional:
– Tests are derived from the implementation
structure (code)
• Model-driven:
– Cover the model instead of code
– Model should be much closer to the
implementation in structure
• Relies on coverage criteria
Coverage criteria and tests
• [HongLeeSokolskyUral02]
• Control flow:
– all-states
– all-transitions
• Data flow:
– all-defs
– all-uses
– all-inputs
– all-outputs
• A test is a linear sequence of inputs and outputs
Specifications: EFSM
• Transition systems equipped with variables
• Transitions have guards and update blocks

[Figure: a coffee-machine EFSM with states IDLE and BUSY and transitions:
t1: insert[m+x<=5] / m:=m+x
t2: coffee[m>1] / m:=m-1
t3: done
t4: display / y:=m (at IDLE)
t5: display / y:=m (at BUSY)]
Coverage criteria
• Each coverage criterion is represented by a set
of temporal logic formulas
– WCTL: a subset of CTL
• Atomic propositions p1,…,pn
• Temporal operators EX, EU, EF
• Conjunctions: at most one non-atomic conjunct
• Negation is applied only to atomic propositions
• Unrestricted disjunctions
• E.g.: EF(p1 & EFp2)
– WCTL formulas have linear witnesses
All-states coverage criterion
• Requires every state be covered at least once
• With every state s, associate EF(s & EFexit)

– EF(idle & EFexit)
– EF(busy & EFexit)
(over the coffee-machine EFSM above)
All-transitions coverage criterion
• Requires every transition be covered at least
once
• With every transition t, associate EF(t & EFexit)

– EF(t1 & EFexit)
– EF(t2 & EFexit)
– EF(t3 & EFexit)
– EF(t4 & EFexit)
– EF(t5 & EFexit)
(over the coffee-machine EFSM above)


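Both criteria translate directly into lists of formulas; a trivial sketch, with ef_cover and the name lists as our own conventions:

    states = ["idle", "busy"]
    transitions = ["t1", "t2", "t3", "t4", "t5"]

    def ef_cover(props):
        """One EF(p & EFexit) obligation per state or transition name."""
        return [f"EF({p} & EFexit)" for p in props]

    print(ef_cover(states))        # the all-states obligations
    print(ef_cover(transitions))   # the all-transitions obligations
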
Data flow: definitions and uses
• Definition: a value is assigned to a variable
• Use: a value of a variable is used in an
expression
• Variables are defined and used in transitions
• Definition-use pair: (v,t,t’)
– v is defined by t
– v is used by t’
– There is a path from t to t’ free from other
definitions of v
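
A sketch of enumerating du-pairs by graph search. The defs/uses sets follow the transition labels on these slides, but the source/target states in trans are only a guess at the figure's layout, for illustration.

    trans = {"t1": ("idle", "busy"), "t2": ("busy", "busy"),  # guessed edges
             "t3": ("busy", "idle"), "t4": ("idle", "idle"),
             "t5": ("busy", "busy")}
    defs = {"t1": {"m"}, "t2": {"m"}, "t3": set(), "t4": {"y"}, "t5": {"y"}}
    uses = {"t1": {"m", "x"}, "t2": {"m"}, "t3": set(),
            "t4": {"m"}, "t5": {"m"}}
    succ = {t: {u for u in trans if trans[u][0] == trans[t][1]}
            for t in trans}

    def du_pairs(v):
        """All (v, t, t2) with a definition-clear path from t to t2."""
        pairs = set()
        for t in (d for d in defs if v in defs[d]):
            frontier, seen = set(succ[t]), set()
            while frontier:
                t2 = frontier.pop()
                if t2 in seen:
                    continue
                seen.add(t2)
                if v in uses[t2]:            # use happens before any redef
                    pairs.add((v, t, t2))
                if v not in defs[t2]:        # keep the path definition-clear
                    frontier |= succ[t2]
        return pairs

    print(sorted(du_pairs("m")))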
Covering a du-pair
• With a du-pair (v, t, t’), associate
– EF(t & EXE[!def(v) U (t’ & EFexit)])
– def(v) : disjunction of all transitions that
define v
– E.g., for the du-pair (m, t1, t2) in the coffee-machine EFSM:
EF(t1 & EXE[!(t1 | t2) U (t2 & EFexit)])
Data-flow coverage criteria
• All-defs coverage criterion: a definition-clear path
– from every definition to some use
• All-uses coverage criterion: a definition-clear path
– from every definition to every use

All-uses obligations for variable m (coffee-machine EFSM):
– EF(t1 & EXE[!(t1 | t2) U (t1 & EFexit)])
– EF(t1 & EXE[!(t1 | t2) U (t2 & EFexit)])
– EF(t1 & EXE[!(t1 | t2) U (t4 & EFexit)])
– EF(t1 & EXE[!(t1 | t2) U (t5 & EFexit)])
– EF(t2 & EXE[!(t1 | t2) U (t1 & EFexit)])
– EF(t2 & EXE[!(t1 | t2) U (t2 & EFexit)])
– EF(t2 & EXE[!(t1 | t2) U (t4 & EFexit)])
– EF(t2 & EXE[!(t1 | t2) U (t5 & EFexit)])
Data flow chains
• Affect pair (v, t, v’, t’): the value of v used by t
affects the value of v’ defined at t’
– Either t=t’ ((v,t) directly affects (v’,t’)) or
– there is a du-pair (v’’,t,t’’) s.t. (v,t) directly affects
(v’’,t) and (v’’,t’’) affects (v’,t’)
– E.g., in the coffee-machine EFSM: (x, t1) directly affects (m, t1), and (x, t1) affects (y, t5)
Test Generation
[Figure: a model checker takes a system model and a logic formula, and returns true or false together with a witness or counterexample.]
[Figure: for test generation, the model checker takes the system model and the set of logic formulas derived from a coverage criterion, and produces a set of witnesses.]
Test Generation
• Generating a witness for a formula
– Cost: the length of a witness
– A minimal-cost witness for a formula
• Existing model checkers generate a minimal-cost
witness by breadth-first search of state space

[Figure: a witness for a formula E[φ U ψ], found as a shortest path through the state space.]
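
A sketch of BFS-based witness construction for the slides' EF(p & EFexit) pattern: a shortest path to the nearest p-state, then a shortest continuation to an exit state. The greedy two-leg search mirrors per-operator BFS in a model checker, though it need not be globally minimal.

    from collections import deque

    def bfs(graph, start, goal):
        """Shortest path from 'start' to any state satisfying 'goal'."""
        parent, queue = {start: None}, deque([start])
        while queue:
            s = queue.popleft()
            if goal(s):
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            for t in graph.get(s, ()):
                if t not in parent:
                    parent[t] = s
                    queue.append(t)
        return None

    def witness_ef(graph, init, p, is_exit):
        head = bfs(graph, init, p)
        if head is None:
            return None
        tail = bfs(graph, head[-1], is_exit)
        return None if tail is None else head + tail[1:]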
Test Generation
• Costs
– The total length of witnesses or
– The number of witnesses
• Both optimization problems are NP-hard
[Figure: a set of formulas E[φ U ψ] and their witnesses over the same state space.]
Coverage for distributed systems
• What if our system is a collection of
components?
• Possible solutions:
– Generate tests for each component
• Clearly unsatisfactory; does not test integration
– Generate tests from the product of
component models
• Too many redundant tests
• Non-determinism is another problem
Example
• Producer-consumer with acknowledgements

[Figure: a producer-consumer chain of three components: S accepts send? and forwards t_fwd!; R accepts t_fwd?, delivers recv!, and sends t_bck!; A accepts t_bck? and emits ack!.]
Covering product transition system
• Linear tests bring trouble: the test send?.ack!.recv! may fail if the system chooses a different path
• Tests that differ only in interleavings of independent events need not be tested separately
– e.g., send?.ack!.recv! and send?.recv!.ack!
• State explosion in the test suite!
[Figure: the product transition system after send?, with recv! and ack! interleaved in both orders.]
Partial orders for test generation
• Use event structures instead of transition
systems [Heninger97]
• Test generation covers the event structure
• Allows natural generation of distributed
testers
Prime event structures (PES)
• A set of events E
– Events are occurrences of actions
• A causality relation ≤ ⊆ E × E
– A partial order
• A conflict relation # ⊆ E × E
– Irreflexive and symmetric
• A labeling function l : E → A
• Finite causes: { e' | e' ≤ e } is finite
• Conflict inheritance: e # e' and e' ≤ e'' implies e # e''
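
A small sketch of a PES as data, with the conflict-inheritance axiom enforced by closure; the class and method names are ours.

    class PES:
        def __init__(self, events, le, conflict, label):
            self.events, self.le, self.label = set(events), set(le), label
            self.conflict = set(conflict) | {(b, a) for (a, b) in conflict}

        def causes(self, e):
            """The set { e2 | e2 <= e }, required to be finite."""
            return {x for (x, y) in self.le if y == e}

        def inherit_conflicts(self):
            """Close # under inheritance: e # f and f <= g gives e # g."""
            changed = True
            while changed:
                changed = False
                for (e, f) in list(self.conflict):
                    for (x, g) in self.le:
                        if x == f and e != g and (e, g) not in self.conflict:
                            self.conflict |= {(e, g), (g, e)}
                            changed = True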
Producer-consumer PES
• The structure is infinite
– Only the initial part is shown
• Causally unrelated, non-conflicting events can occur together
• Behaviors eventually start repeating
– So we can stop with a finite structure
[Figure: the initial part of the producer-consumer PES: start events for S, R, and A, followed by the causally ordered events send?, t_fwd!, t_fwd?, t_bck!, t_bck?, recv!, ack!.]
Test generation with PES
• Project the PES onto observable actions, propagating conflicts
• Every path in the PES should be covered
• Tests consist of distributed testers with coordination messages between them
– Coordination messages are inserted when there is a causal edge between locations
[Figure: the projected PES with testers at S, R, and A and coordination messages along the causal edges; the observable events send?, recv!, and ack! remain.]
Summary: coverage-based testing
• Advantages:
– Exercise the specification to the desired
degree
– Does not rely on test purpose selection
• Disadvantages:
– Large and unstructured test suites
– If the specification is an overapproximation,
tests may be infeasible
Generation of test purposes
• Recent work: [HenningerLuUral-03]
• Construct PES
• Generate MSC (message sequence charts) to
cover PES
• Use MSC as test purposes in ioco-based test
generation
Controllability of testing
• Conformance testing may not provide enough guarantees
– With branching tests, the test purpose behavior may be avoided
– What if I never see ack?
[Figure: the producer-consumer system from before; the branch that produces ack! may never be taken.]
• Problem: inherent uncertainty in the system
How to contain uncertainty?
• Avoidance (no need to increase control)
– During testing, compute confidence measure
• E.g., accumulate coverage
– Stop at the desired confidence level
• Prevention (add more control)
– Use instrumentation to resolve uncertainty
– What to instrument?
• Use model for guidance
• Anyone needs a project to work on?
Test generation tools
• TorX
– Based on ioco
– On-the-fly test generation and execution
– Symbolic testing (data parameterization)
– LOTOS, Promela, …
– http://fmt.cs.utwente.nl/tools/torx/

• TGV
– Based on symbolic ioconf
– LOTOS, SDL, UML
– http://www.irisa.fr/pampa/VALIDATION/TGV/TGV.html
