Data Flow Testing
Data-flow testing applies to almost all software, but I wouldn't use it for low-level testing of
software that has a lot of essential control flows. Some natural applications include:
Object Oriented Software: OOP is a data-flow paradigm, which is why data-flow graphs are so often part of the
OOP development methodology. If you use data-flow testing for this purpose, you must assume that the objects
themselves have been properly tested at a lower level, so that you can trust them and replace each one with a node.
You then concentrate on whether the right objects are invoked, the right messages passed, and so on.
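A minimal sketch of this idea in Python: each already-trusted object becomes a node, and we check only that the right objects are invoked with the right messages. The class and method names (Checkout, charge, record) are hypothetical, invented for illustration.

```python
# Treat collaborating objects as trusted nodes and test only the
# messages that flow between them. Names here are hypothetical.
from unittest.mock import Mock

class Checkout:
    def __init__(self, gateway, ledger):
        self.gateway = gateway
        self.ledger = ledger

    def purchase(self, amount):
        # The data flow under test: amount -> gateway -> ledger
        receipt = self.gateway.charge(amount)
        self.ledger.record(receipt)
        return receipt

# Replace each lower-level object with a mock "node"...
gateway = Mock()
gateway.charge.return_value = "receipt-42"
ledger = Mock()

Checkout(gateway, ledger).purchase(100)

# ...and assert the right objects were invoked with the right messages.
gateway.charge.assert_called_once_with(100)
ledger.record.assert_called_once_with("receipt-42")
```

The point is that the internals of the gateway and ledger are assumed correct; only the flow of data between nodes is exercised.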
Integration Testing: The main issue in integration is (and should be) not whether the integrated components work (that
should have been proved during unit testing), but whether they are connected correctly and communicate with one another
correctly. Each component is modeled by a node, and the data-flow graph can be the program's call tree. That's nice
because we have tools to display call trees. The bad news is that the usual call tree isn't enough. If there are global
variables, you must include the data flows through them. And because some calls and intercomponent
communications may be dynamic, the static call tree determined by the compiler/linker won't tell you the whole
story - but it's a start.
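One way to picture augmenting the call tree is a sketch like the following, where def-use edges through globals are added to the call edges. The component and variable names are hypothetical.

```python
# A static call tree misses data flows through global variables,
# so we add those edges explicitly. All names are illustrative.
call_tree = {
    "main":    ["parse", "compute", "report"],
    "parse":   [],
    "compute": [],
    "report":  [],
}

# (definer, global variable, user): parse defines 'config' that
# compute uses; compute defines 'result' that report uses. These
# edges never appear in the compiler's call tree.
global_flows = [("parse", "config", "compute"),
                ("compute", "result", "report")]

def data_flow_edges(call_tree, global_flows):
    """Call edges plus def->use edges through global variables."""
    edges = {(caller, callee)
             for caller, callees in call_tree.items()
             for callee in callees}
    edges |= {(definer, user) for definer, _var, user in global_flows}
    return edges

edges = data_flow_edges(call_tree, global_flows)
# ("parse", "compute") is present only because of the global 'config'.
```

Dynamic calls would still be missing from a model like this, which is why the static picture is only a start.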
Spreadsheets: Spreadsheets are about as close to a popular pure data-flow language as we have. Don't disdain
spreadsheets as not being real programming. They are very real programming to people who create complicated
business applications using spreadsheets but have very few tools and techniques for verifying them. Also, if you
buy ready-made spreadsheets and plan to entrust a hunk of your business to them, some kind of testing might be in
order.
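To see why spreadsheets are pure data flow, consider this sketch: each cell is a node whose value depends only on the values of its input cells, so evaluating an output cell exercises a slice of the data-flow graph. The cell names and formulas are invented for illustration.

```python
# Each cell is a node; formulas reference other cells through `get`.
cells = {
    "A1": lambda get: 100,                    # unit price
    "A2": lambda get: 3,                      # quantity
    "B1": lambda get: get("A1") * get("A2"),  # subtotal
    "B2": lambda get: get("B1") * 1.08,       # subtotal with 8% tax
}

def evaluate(name, cells):
    """Evaluate a cell by recursively evaluating its input cells."""
    return cells[name](lambda dep: evaluate(dep, cells))

# Verifying every output cell covers the whole data-flow graph.
assert evaluate("B1", cells) == 300
assert abs(evaluate("B2", cells) - 324.0) < 1e-9
```

There is no control flow here at all: the order of evaluation is dictated entirely by data dependencies, which is exactly the situation data-flow testing models.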
Bug Assumptions:
All the bugs that will succumb to control-flow testing will succumb to data-flow testing because it includes
control-flow testing as a subset. Because we are avoiding unessential control flows within data-flow models, we are
assuming that the programmers have learned to rid themselves of simple control-flow bugs. This biases our bug
assumption towards data bugs, such as: initial and default values, duplication and aliases, overloading, wrong item,
wrong type, bad pointers, and data-flow anomalies (e.g., closing a file before opening it).
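Data-flow anomalies are often described as suspicious sequences of actions on a variable: d (define), u (use), k (kill). A small sketch of an anomaly scanner over such action strings, with the encoding and anomaly labels chosen for illustration:

```python
# Anomalous adjacent action pairs on one variable.
SUSPICIOUS_PAIRS = {"dd": "redefined without intervening use",
                    "dk": "defined then killed without use",
                    "ku": "used after kill"}
# Anomalous first actions (nothing defined yet).
SUSPICIOUS_FIRST = {"u": "used before definition",
                    "k": "killed before definition "
                         "(e.g., closing a file that was never opened)"}

def anomalies(actions):
    """Scan a variable's action string (e.g., 'dduk') for anomalies."""
    found = []
    if actions and actions[0] in SUSPICIOUS_FIRST:
        found.append((actions[0], SUSPICIOUS_FIRST[actions[0]]))
    for a, b in zip(actions, actions[1:]):
        if a + b in SUSPICIOUS_PAIRS:
            found.append((a + b, SUSPICIOUS_PAIRS[a + b]))
    return found

anomalies("duk")  # define, use, kill: the normal lifecycle, no anomalies
anomalies("kd")   # kill before define: the close-before-open case
```

A real tool would derive these action sequences from program paths, but the classification of the sequences is the same.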
Limitations and Caveats:
Data-flow testing won't be any better than your model. And it isn't going to work if you don't do the work.
You still won't find missing requirements.
You're likelier to find gratuitous features, but only if you make sure to verify every output.
Data-flow testing is likelier to reveal bugs at higher levels of integration.
Data-flow testing might lose effectiveness when software and test design are done by the same person, but less
so than for control-flow testing, because data-flow testing and control-flow testing are such different paradigms that
the paradigm shift alone is likely to bring new outlooks even when both are done by the same person.
You still might be blind to coincidental correctness, but the possibilities thereof are easier to spot.
Your tests are no better than your oracle.
Data-flow testing won't buy you much if you don't find ways to verify those intermediate nodes.