Software Testing
Test stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what is programmed in for the test.[1]
Test stubs are mainly used in the top-down approach to incremental testing. A stub is a small program that stands in for a module and returns the kind of output the actual product/software would. Because it is invoked by the code under test, a test stub is also known as a 'called' function.
Example
Example
Consider a software program which queries a database to obtain the total price of all products stored in the database. However, the query is slow and consumes a large amount of system resources, which reduces the number of test runs per day. In addition, the tests need to be conducted on values larger than what is currently in the database. The method (or call) used to perform this is get_total(). For testing purposes, the source code in get_total() could be temporarily replaced with a simple statement which returns a specific value. This would be a test stub.
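As a minimal sketch of this idea (assuming a hypothetical PriceService class whose get_total() would normally run the slow database query), the stub below simply returns a canned value large enough to exercise the surrounding logic:

import unittest

class PriceService:
    """Production class: get_total() would normally run a slow database query."""
    def get_total(self):
        raise NotImplementedError("real implementation queries the database")

class PriceServiceStub(PriceService):
    """Test stub: returns a canned answer instead of touching the database."""
    def get_total(self):
        return 1_000_000  # simulate a total larger than anything in the real database

def format_invoice_total(service):
    # Code under test: formats whatever total the service reports.
    return "Total: ${:,}".format(service.get_total())

class InvoiceTest(unittest.TestCase):
    def test_total_formatting_with_large_value(self):
        stub = PriceServiceStub()
        self.assertEqual(format_invoice_total(stub), "Total: $1,000,000")

if __name__ == "__main__":
    unittest.main()

Because the stub never touches the database, the test runs quickly and can use totals far larger than anything currently stored.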
There are several testing frameworks available and there is software that can generate test stubs based on existing
source code and testing requirements.
External links
http://xunitpatterns.com/Test%20Stub.html [2]
References
[1] Fowler, Martin (2007), Mocks Aren't Stubs (Online) (http://martinfowler.com/articles/mocksArentStubs.html#TheDifferenceBetweenMocksAndStubs)
[2] http://xunitpatterns.com/Test%20Stub.html
Testware
Generally speaking, testware is a subset of software with a special purpose, namely software testing, especially test automation. Automation testware, for example, is designed to be executed on automation frameworks. Testware is an umbrella term for all utilities and application software that together serve to test a software package but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.
It includes artifacts produced during the test process that are required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environments, and any additional software or utilities used in testing.[1]
Testware is produced by both verification and validation testing methods. Like software, testware includes code and binaries as well as test cases, test plans, test reports, and so on. Testware should be placed under the control of a configuration management system, saved, and faithfully maintained.
Compared to general software, testware is special because it has:
1. a different purpose
2. different metrics for quality, and
3. different users
Different methods should therefore be adopted when developing testware than when developing general software. In a narrow sense, testware is also referred to as test tools.[2]
References
[1] Fewster, M.; Graham, D. (1999), Software Test Automation: Effective Use of Test Execution Tools, Addison-Wesley, ISBN 0-201-33140-3
[2] http://www.homeoftester.com/articles/what_is_testware.htm
Test automation framework
A test automation framework is a set of assumptions, concepts and tools that provide support for automated software testing. The main advantage of such a framework is its low maintenance cost. If any test case changes, then only the test case file needs to be updated; the driver script and startup script remain the same. Ideally, there is no need to update the scripts when the application changes.
Choosing the right framework/scripting technique helps keep costs low. The costs associated with test scripting are due to development and maintenance effort, and the scripting approach used during test automation affects those costs.
Various framework/scripting techniques are generally used:
1. Linear (procedural code, possibly generated by tools such as record-and-playback tools)
2. Structured (uses control structures - typically if-else, switch, for, and while statements)
3. Data-driven (data is persisted outside of tests in a database, spreadsheet, or other mechanism)
4. Keyword-driven
5. Hybrid (two or more of the patterns above are used)
The testing framework is responsible for:[1]
1. defining the format in which to express expectations
2. creating a mechanism to hook into or drive the application under test
3. executing the tests
4. reporting results
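For illustration only (not the API of any particular framework), the following Python sketch shows those four responsibilities in miniature: expectations are expressed as plain assert statements, the tests hook into a hypothetical add() function under test, a runner executes them, and the results are reported at the end.

# Hypothetical application code under test.
def add(a, b):
    return a + b

# Expectations expressed as plain functions containing assert statements.
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-2, -3) == -5

def run_tests(tests):
    """Execute each test, catching assertion failures, then report results."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError as exc:
            results[test.__name__] = "FAIL: {}".format(exc)
    for name, outcome in results.items():
        print(name, outcome)
    return results

if __name__ == "__main__":
    run_tests([test_add_positive, test_add_negative])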
Test automation interface
Test automation interfaces are platforms that provide a single workspace for incorporating multiple testing tools and frameworks for system/integration testing of the application under test. The goal of a test automation interface is to simplify the process of mapping tests to business criteria without coding getting in the way of the process. Test automation interfaces are expected to improve the efficiency and flexibility of maintaining test scripts.[2]
Test Automation Interface Model
A test automation interface consists of the following core modules:
Interface Engine
Interface Environment
Object Repository
Interface engine
Interface engines are built on top of the interface environment. An interface engine consists of a parser and a test runner. The parser parses the object files coming from the object repository into the test-specific scripting language, and the test runner executes the test scripts using a test harness.[2]
Interface environment
The interface environment consists of a product/project library and a framework library. The framework library has modules related to the overall test suite, while the product/project library has modules specific to the application under test.[2]
Object repository
An object repository is a collection of UI/application object data recorded by the testing tool while exploring the application under test.[2]
References
[1] "Selenium Meet-Up 4/20/2010 Elisabeth Hendrickson on Robot Framework 1of2" (http:/ / www. youtube. com/ watch?v=qf2i-xQ3LoY). .
Retrieved 2010-09-26.
[2] "Conquest: Interface for Test Automation Design" (http:/ / www. qualitycow. com/ Docs/ ConquestInterface. pdf). . Retrieved 2011-12-11.
Hayes, Linda G., "Automated Testing Handbook", Software Testing Institute, 2nd Edition, March 2004
Kaner, Cem, " Architectures of Test Automation (http:/ / www. kaner. com/ pdfs/ testarch. pdf)", August 2000
Data-driven testing
Data-driven testing (DDT) is a term used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs, as well as a process in which test environment settings and control are not hard-coded. In the simplest form, the tester supplies the inputs from a row in the table and expects the outputs that occur in the same row. The table typically contains values that correspond to boundary or partition input spaces. In the control methodology, test configuration is "read" from a database.
Introduction
In the testing of software or programs, several methodologies are available for implementing this kind of testing. These methods co-exist because they differ in the effort required to create and subsequently maintain them. The advantage of data-driven testing is the ease of adding additional inputs to the table when new partitions are discovered or added to the product or system under test. The cost aspect makes DDT cheap for automation but expensive for manual testing. Data-driven testing is sometimes confused with table-driven testing, a closely related but distinct approach.
Methodology Overview
Data-driven testing is the creation of test scripts to run together with their related data sets in a framework. The framework provides re-usable test logic to reduce maintenance and improve test coverage. Input and result (test criteria) data values can be stored in one or more central data sources or databases; the actual format and organisation can be implementation specific.
The data comprises variables used for both input values and output verification values. In advanced (mature) automation environments, data can be harvested from a running system using a purpose-built custom tool or sniffer; the DDT framework then performs playback of the harvested data, producing a powerful automated regression testing tool. Navigation through the program, reading of the data sources, and logging of test status and information are all coded in the test script.
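As a purely illustrative sketch of a data-driven test in Python, the script below keeps the test logic in one place and reads input and expected-output values row by row from a CSV file; the add() function, file name, and column names are assumptions, not part of any specific framework.

import csv
import unittest

def add(a, b):
    # Hypothetical function under test.
    return a + b

def load_rows(path):
    """Read test data: each row supplies the inputs and the expected output."""
    with open(path, newline="") as handle:
        return [(int(r["a"]), int(r["b"]), int(r["expected"]))
                for r in csv.DictReader(handle)]

class DataDrivenAddTest(unittest.TestCase):
    def test_add_from_table(self):
        for a, b, expected in load_rows("add_cases.csv"):
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)

# add_cases.csv might contain:
# a,b,expected
# 1,2,3
# -5,5,0

if __name__ == "__main__":
    unittest.main()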
Data Driven
Anything that has the potential to change (also called "variability", and including elements such as environment, end points, test data, and locations) is separated out from the test logic (scripts) and moved into an 'external asset'. This can be a configuration file or test dataset. The logic executed in the script is dictated by the data values.
Keyword-driven testing is similar, except that the test case is contained in the set of data values rather than embedded or "hard-coded" in the test script itself. The script is simply a "driver" (or delivery mechanism) for the data that is held in the data source.
The data sources used for data-driven testing can include:
datapools
ODBC sources
CSV files
Excel files
DAO objects
ADO objects
See also
Control table
Keyword-driven testing
Test Automation Framework
Test-Driven Development
Hybrid Automation Framework
Meta Data Driven Testing
Modularity-driven testing
Hybrid testing
Model-based testing
Modularity-driven testing
Modularity-driven testing is a term used in the testing of software.
Test Script Modularity Framework
The test script modularity framework requires the creation of small, independent scripts that represent modules,
sections, and functions of the application-under-test. These small scripts are then used in a hierarchical fashion to
construct larger tests, realizing a particular test case.
Of all the frameworks, this one should be the simplest to grasp and master. It is a well-known programming strategy
to build an abstraction layer in front of a component to hide the component from the rest of the application. This
insulates the application from modifications in the component and provides modularity in the application design. The
test script modularity framework applies this principle of abstraction or encapsulation in order to improve the
maintainability and scalability of automated test suites.
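A minimal sketch of the idea, assuming a hypothetical login page: each small function drives one section of the application under test, and a larger test case is then composed from them. The FakeApp class stands in for whatever driver object a real test tool would provide.

class FakeApp:
    """Minimal stand-in for the driver object a real test tool would provide."""
    def __init__(self):
        self.location = None
        self.fields = {}
        self.texts = {}
    def navigate(self, path):
        self.location = path
    def fill(self, field, value):
        self.fields[field] = value
    def click(self, button):
        # Simulate the application reacting to a successful login.
        if button == "login" and self.fields.get("password") == "secret":
            self.texts["welcome-banner"] = "Welcome, " + self.fields["username"]
    def text(self, element):
        return self.texts.get(element, "")

# Small, independent scripts, each driving one section of the application under test.
def open_login_page(app):
    app.navigate("/login")

def enter_credentials(app, username, password):
    app.fill("username", username)
    app.fill("password", password)

def submit_login(app):
    app.click("login")

# A larger test case constructed hierarchically from the small scripts above.
def test_successful_login(app):
    open_login_page(app)
    enter_credentials(app, "alice", "secret")
    submit_login(app)
    assert app.text("welcome-banner") == "Welcome, alice"

if __name__ == "__main__":
    test_successful_login(FakeApp())
    print("test_successful_login passed")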
Keyword-driven testing
Keyword-driven testing, also known as table-driven testing or action-word testing, is a software testing
methodology for automated testing that separates the test creation process into two distinct stages: a Planning Stage,
and an Implementation Stage.
Overview
Although keyword testing can be used for manual testing, it is a technique particularly well suited to automated testing.[1] The advantages for automated tests are the reusability and therefore ease of maintenance of tests that have been created at a high level of abstraction.
Methodology
The keyword-driven testing methodology divides test creation into two stages:
Planning Stage
Implementation Stage
Definition
A keyword in its simplest form is an aggregation of one or more atomic test steps.
Planning Stage
Preparing the test resources and testing tools.
Examples of keywords
A simple keyword (one action on one object), e.g. entering a username into a textfield.
Object                 Action      Data
Textfield (username)   Enter text  <username>
A more complex keyword (a combination of test steps into a meaningful unit), e.g. logging in.
Object                 Action      Data
Textfield (domain)     Enter text  <domain>
Textfield (username)   Enter text  <username>
Textfield (password)   Enter text  <password>
Button (login)         Click       One left click
Implementation Stage
The implementation stage differs depending on the tool or framework. Often, automation engineers implement a framework that provides keywords like check and enter.[1] Testers or test designers (who do not need to know how to program) write test cases based on the keywords defined in the planning stage that have been implemented by the engineers. The test is executed using a driver that reads the keywords and executes the corresponding code.
Other methodologies use an all-in-one implementation stage. Instead of separating the tasks of test design and test engineering, the test design is the test automation. Keywords, such as edit or check, are created using tools in which the necessary code has already been written. This removes the necessity for extra engineers in the test process, because the implementation for the keywords is already a part of the tool. Examples include GUIdancer and QTP.
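A minimal, purely illustrative sketch of such a driver in Python: keyword implementations are registered in a table, and the driver reads (keyword, arguments) rows, shown hard-coded here rather than loaded from a spreadsheet, and executes the corresponding code.

# Keyword implementations written by automation engineers.
def enter_text(field, value):
    print("Entering '{}' into {}".format(value, field))

def click(button):
    print("Clicking {}".format(button))

def check(field, expected):
    # In a real framework this would read the UI and assert on it.
    print("Checking that {} shows {}".format(field, expected))

KEYWORDS = {"Enter text": enter_text, "Click": click, "Check": check}

# A test case expressed as keyword rows (in practice read from a table or spreadsheet).
LOGIN_TEST = [
    ("Enter text", ("username", "<username>")),
    ("Enter text", ("password", "<password>")),
    ("Click", ("login",)),
    ("Check", ("welcome-banner", "Welcome")),
]

def run(test_case):
    """Driver: look up each keyword and execute the corresponding code."""
    for keyword, args in test_case:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run(LOGIN_TEST)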
Pros
1. Maintenance is low in the long run:
   1. Test cases are concise
   2. Test cases are readable for the stakeholders
   3. Test cases are easy to modify
   4. New test cases can reuse existing keywords more easily
2. Keyword re-use across multiple test cases
3. Not dependent on a specific tool or programming language
4. Division of labor:
   1. Test case construction needs stronger domain expertise and lesser tool/programming skills
   2. Keyword implementation requires stronger tool/programming skill, with relatively lower domain skill
5. Abstraction of layers
Cons
1. Longer time to market (as compared to manual testing or the record-and-replay technique)
2. Moderately high initial learning curve
References
[1] Danny R. Faught, Keyword-Driven Testing, Sticky Minds (http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=COL&ObjectId=8186)
External links
1. Hans Buwalda, Key Success Factors for Keyword Driven Testing (http://www.logigear.com/resources/articles-presentations-templates/389--key-success-factors-for-keyword-driven-testing.html)
2. SAFS (Software Automation Framework Support) (http://safsdev.sourceforge.net)
3. Test automation frameworks (http://safsdev.sourceforge.net/DataDrivenTestAutomationFrameworks.htm)
4. Automation Framework - gFast: generic Framework for Automated Software Testing - QTP Framework (http://www.slideshare.net/heydaysoft/g-fast-presentation/)
5. Robot Framework Open Source Test Automation Framework (http://robotframework.org)
Hybrid testing
Overview
The hybrid test automation framework is what most frameworks evolve into over time and multiple projects. The most successful automation frameworks generally accommodate both keyword-driven testing and data-driven testing. This allows data-driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword-driven architecture. The framework utilities can make the data-driven scripts more compact and less prone to failure than they otherwise would have been. The utilities can also facilitate the gradual and manageable conversion of existing scripts to keyword-driven equivalents when and where that appears desirable. On the other hand, the framework can use scripts to perform some tasks that might be too difficult to re-implement in a pure keyword-driven approach, or where the keyword-driven capabilities are not yet in place.
The Framework
The framework is defined by the Core Data Driven Engine, the Component Functions, and the Support Libraries. While the Support Libraries provide generic routines useful even outside the context of a keyword-driven framework, the core engine and Component Functions are highly dependent on the existence of all three elements. Test execution starts with the LAUNCH TEST (1) script. This script invokes the Core Data Driven Engine by providing one or more high-level test tables to CycleDriver (2). CycleDriver processes these test tables, invoking SuiteDriver (3) for each intermediate-level test table it encounters. SuiteDriver processes these intermediate-level tables, invoking StepDriver (4) for each low-level test table it encounters. As StepDriver processes these low-level tables, it attempts to keep the application in sync with the test. When StepDriver encounters a low-level command for a specific component, it determines what type of component is involved and invokes the corresponding Component Function (5) module to handle the task.
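A highly simplified sketch of that driver hierarchy in Python, reusing the CycleDriver/SuiteDriver/StepDriver names from the description above; the table contents and the component-function registry are invented purely for illustration.

# Component Functions, keyed by component type.
def do_textfield(action, data):
    print("Textfield:", action, data)

def do_button(action, data):
    print("Button:", action, data)

COMPONENT_FUNCTIONS = {"Textfield": do_textfield, "Button": do_button}

def step_driver(low_level_table):
    """Process low-level steps, dispatching each to the matching Component Function."""
    for component_type, action, data in low_level_table:
        COMPONENT_FUNCTIONS[component_type](action, data)

def suite_driver(intermediate_table):
    """Process an intermediate-level table: each entry is a low-level table."""
    for low_level_table in intermediate_table:
        step_driver(low_level_table)

def cycle_driver(high_level_tables):
    """Core engine entry point: process one or more high-level test tables."""
    for intermediate_table in high_level_tables:
        suite_driver(intermediate_table)

if __name__ == "__main__":  # plays the role of the LAUNCH TEST script
    login_steps = [("Textfield", "Enter text", "alice"),
                   ("Button", "Click", "login")]
    cycle_driver([[login_steps]])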
Lightweight software test automation
Lightweight software test automation is the process of creating and using relatively short and simple computer
programs, called lightweight test harnesses, designed to test a software system. Lightweight test automation
harnesses are not tied to a particular programming language but are most often implemented with the Java, Perl,
Visual Basic .NET, and C# programming languages. Lightweight test automation harnesses are generally four pages
of source code or less, and are generally written in four hours or less. Lightweight test automation is often associated
with Agile software development methodology.
The three major alternatives to the use of lightweight software test automation are commercial test automation
frameworks, Open Source test automation frameworks, and heavyweight test automation. The primary disadvantage
of lightweight test automation is manageability. Because lightweight automation is relatively quick and easy to
implement, a test effort can be overwhelmed with harness programs, test case data files, test result files, and so on.
However, lightweight test automation has significant advantages. Compared with commercial frameworks,
lightweight automation is less expensive in initial cost and is more flexible. Compared with Open Source
frameworks, lightweight automation is more stable because there are fewer updates and external dependencies.
Compared with heavyweight test automation, lightweight automation is quicker to implement and modify.
Lightweight test automation is generally used to complement, not replace, these alternative approaches.
Lightweight test automation is most useful for regression testing, where the intention is to verify that new source
code added to the system under test has not created any new software failures. Lightweight test automation may be
used for other areas of software testing such as performance testing, stress testing, load testing, security testing, code
coverage analysis, mutation testing, and so on. The most widely published proponent of the use of lightweight
software test automation is Dr. James D. McCaffrey.
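For illustration only, a lightweight regression harness in this spirit can be little more than a short script that feeds input/expected pairs to the system under test and appends pass/fail results to a log; the function under test, the case list, and the log file name below are all assumptions.

import datetime

def system_under_test(x):
    # Stand-in for the code being regression tested.
    return x * x

# Test case data: (case id, input, expected output).
CASES = [("case001", 2, 4), ("case002", -3, 9), ("case003", 0, 0)]

def run_harness(cases, log_path="regression_log.txt"):
    """Run every case, count failures, and append results to a log file."""
    failures = 0
    with open(log_path, "a") as log:
        log.write("Run started {}\n".format(datetime.datetime.now()))
        for case_id, given, expected in cases:
            actual = system_under_test(given)
            verdict = "PASS" if actual == expected else "FAIL"
            if verdict == "FAIL":
                failures += 1
            log.write("{}: input={} expected={} actual={} {}\n".format(
                case_id, given, expected, actual, verdict))
        log.write("Run finished: {} failure(s)\n".format(failures))
    return failures

if __name__ == "__main__":
    raise SystemExit(run_harness(CASES))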
References
Definition and characteristics of lightweight software test automation in: McCaffrey, James D., ".NET Test Automation Recipes", Apress Publishing, 2006. ISBN 1-59059-663-3.
Discussion of lightweight test automation versus manual testing in: Patton, Ron, "Software Testing, 2nd ed.", Sams Publishing, 2006. ISBN 0-672-32798-8.
An example of lightweight software test automation for .NET applications: "Lightweight UI Test Automation with .NET", MSDN Magazine, January 2005 (Vol. 20, No. 1). See http://msdn2.microsoft.com/en-us/magazine/cc163864.aspx.
A demonstration of lightweight software test automation applied to stress testing: "Stress Testing", MSDN Magazine, May 2006 (Vol. 21, No. 6). See http://msdn2.microsoft.com/en-us/magazine/cc163613.aspx.
A discussion of lightweight software test automation for performance testing: "Web App Diagnostics: Lightweight Automated Performance Analysis", asp.netPRO Magazine, August 2005 (Vol. 4, No. 8).
An example of lightweight software test automation for Web applications: "Lightweight UI Test Automation for ASP.NET Web Applications", MSDN Magazine, April 2005 (Vol. 20, No. 4). See http://msdn2.microsoft.com/en-us/magazine/cc163814.aspx.
A technique for mutation testing using lightweight software test automation: "Mutant Power: Create a Simple Mutation Testing System with the .NET Framework", MSDN Magazine, April 2006 (Vol. 21, No. 5). See http://msdn2.microsoft.com/en-us/magazine/cc163619.aspx.
An investigation of lightweight software test automation in a scripting environment: "Lightweight Testing with Windows PowerShell", MSDN Magazine, May 2007 (Vol. 22, No. 5). See http://msdn2.microsoft.com/en-us/magazine/cc163430.aspx.
Testing process
Software testing controversies
There is considerable variety among software testing writers and consultants about what constitutes responsible software testing. Members of the "context-driven" school of testing[1] believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. In addition, prominent members of the community consider much of the writing about software testing to be doctrine, mythology, and folklore. Some contend that this belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the Food and Drug Administration who promote them. The context-driven school's retort is that Lessons Learned in Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that not all software testing occurs in a regulated environment and that practices appropriate for such environments would be ruinously expensive, unnecessary, and inappropriate for other contexts; and that in any case the FDA generally promotes the principle of the least burdensome approach.
Some of the major controversies include:
Agile vs. traditional
Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner.[2] Instead of assuming that testers have full access to source code and complete specifications, these writers, including Kaner and James Bach, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) has popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.
However, saying that "maturity models" like CMM gained ground against or opposing Agile testing may not be
right. Agile movement is a 'way of working', while CMM is a process improvement idea.
Another point of view must also be considered: the operational culture of an organization. While it may be true that testers must be able to work in a world of uncertainty, it is also true that their flexibility must have direction. In many cases test cultures are self-directed, and as a result fruitless or unproductive results can ensue. Furthermore, providing positive evidence of defects may either indicate that you have found the tip of a much larger problem, or that you have exhausted all possibilities. A framework is a test of testing: it provides a boundary that can measure (validate) the capacity of the work. Both sides have argued, and will continue to argue, the virtues of their approaches. The proof, however, is in each and every assessment of delivery quality. It does little good to test systematically if you are too narrowly focused. On the other hand, finding a bunch of errors is not an indicator that agile methods were the driving force; you may simply have stumbled upon an obviously poor piece of work.
Exploratory vs. scripted
Exploratory testing means simultaneous test design and test execution with an emphasis on learning. Scripted testing
means that learning and test design happen prior to test execution, and quite often the learning has to be done again
during test execution. Exploratory testing is very common, but in most writing and training about testing it is barely
mentioned and generally misunderstood. Some writers consider it a primary and essential practice. Structured
exploratory testing is a compromise when the testers are familiar with the software. A vague test plan, known as a
test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual
testers to choose the method and steps of testing.
There are two main disadvantages associated with a primarily exploratory testing approach. The first is that there is
no opportunity to prevent defects, which can happen when the designing of tests in advance serves as a form of
structured static testing that often reveals problems in system requirements and design. The second is that, even with
test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing
approach is difficult. For this reason, a blended approach of scripted and exploratory testing is often used to reap the
benefits while mitigating each approach's disadvantages.
Manual vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[3] Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with
automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which
a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an
application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in
software. The success of automated software testing depends on complete and comprehensive test planning.
Software development strategies such as test-driven development are highly compatible with the idea of devoting a
large part of an organization's testing resources to automated testing. Many large software organizations perform
automated testing. Some have developed their own automated testing environments specifically for internal
development, and not for resale.
Software design vs. software implementation
Ideally, software testers should not be limited to testing the software implementation, but should also test the software design. With this assumption, the role and involvement of testers will change dramatically. In such an environment, the test cycle will change too. To test software design, testers would review requirement and design specifications together with the designer and programmer, potentially helping to identify bugs earlier in software development.
Who watches the watchmen?
One principle in software testing is summed up by the classical Latin question posed by Juvenal: Quis custodiet ipsos custodes? (Who watches the watchmen?), or is alternatively referred to informally as the "Heisenbug" concept (a common misconception that confuses Heisenberg's uncertainty principle with the observer effect). The idea is that any form of observation is also an interaction, and that the act of testing can also affect that which is being tested.
In practical terms the test engineer is testing software (and sometimes hardware or firmware) with other software
(and hardware and firmware). The process can fail in ways that are not the result of defects in the target but rather
result from defects in (or indeed intended features of) the testing tool.
There are metrics being developed to measure the effectiveness of testing. One method is analyzing code coverage (this is highly controversial): everyone can at least agree on which areas are not being covered at all and try to improve coverage in those areas.
Bugs can also be placed into code on purpose, and the number of bugs that have not been found can be predicted
based on the percentage of intentionally placed bugs that were found. The problem is that it assumes that the
intentional bugs are the same type of bug as the unintentional ones.
Finally, there is the analysis of historical find-rates. By measuring how many bugs are found and comparing them to
predicted numbers (based on past experience with similar projects), certain assumptions regarding the effectiveness
of testing can be made. While not an absolute measurement of quality, if a project is halfway complete and there
have been no defects found, then changes may be needed to the procedures being employed by QA.
References
[1] context-driven-testing.com (http://www.context-driven-testing.com)
[2] Kaner, Cem; Jack Falk; Hung Quoc Nguyen (1993). Testing Computer Software (Third ed.). John Wiley and Sons. ISBN 1-85032-908-7.
[3] An example is Mark Fewster, Dorothy Graham: Software Test Automation. Addison-Wesley, 1999, ISBN 0-201-33140-3
Test-driven development
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. Kent Beck, who is credited with having developed or 'rediscovered' the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[1]
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999,[2] but more recently has created more general interest in its own right.[3]
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[4]
Requirements
In test-driven development, a developer creates automated unit tests that define code requirements and then immediately writes the code itself. The tests contain assertions that are either true or false. Passing the tests confirms correct behavior as developers evolve and refactor the code. Developers often use testing frameworks, such as xUnit, to create and automatically run sets of test cases.
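As a minimal sketch using Python's unittest (one member of the xUnit family), the test class below is written first and its assertions define the required behavior of a hypothetical fizzbuzz() function; the implementation shown is just enough to make those assertions pass.

import unittest

def fizzbuzz(n):
    # Minimal implementation written to make the test below pass.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Written before the implementation; each assertion defines a requirement.
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_other_numbers_pass_through(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()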
Test-driven development cycle
A graphical representation of the development cycle, using a basic flowchart
The following sequence is based on the book Test-Driven Development by Example.[1]
Add a test
In test-driven development, each new
feature begins with writing a test. This
test must inevitably fail because it is
written before the feature has been
implemented. (If it does not fail, then
either the proposed new feature
already exists or the test is defective.)
To write a test, the developer must
clearly understand the feature's
specification and requirements. The
developer can accomplish this through
use cases and user stories that cover the requirements and exception conditions. This could also imply a variant, or
modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests
after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but
important difference.
Run all tests and see if the new one fails
This validates that the test harness is working correctly and that the new test does not mistakenly pass without
requiring any new code. This step also tests the test itself, in the negative: it rules out the possibility that the new test
will always pass, and therefore be worthless. The new test should also fail for the expected reason. This increases confidence (although it does not entirely guarantee it) that the test is testing the right thing, and will pass only in intended cases.
Write some code
The next step is to write some code that will cause the test to pass. The new code written at this stage will not be
perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve
and hone it.
It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality
should be predicted and 'allowed for' at any stage.
Run the automated tests and see them succeed
If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a
good point from which to begin the final step of the cycle.
Refactor code
Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that code refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to removing any duplication between the test code and the production code (for example, magic numbers or strings that were repeated in both in order to make the test pass in step 3).
Repeat
Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints. When using external libraries it is important not to make increments that are so small as to be effectively merely testing the library itself,[3] unless there is some reason to believe that the library is buggy or is not sufficiently feature-complete to serve all the needs of the main program being written.
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You ain't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can be cleaner and clearer than is often achieved by other methods.[1] In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept (such as a design pattern), tests are written that will generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.
Write the tests first. The tests should be written before the functionality that is being tested. This has been claimed to have two benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than worrying about it later. It also ensures that tests for every feature will be written. When writing feature-first code, there is a tendency by developers and development organisations to push the developer on to the next feature, neglecting testing entirely. The first test might not even compile at first, because all of the classes and methods it requires may not yet exist. Nevertheless, that first test functions as an executable specification.[5]
First fail the test cases. The idea is to ensure that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has been coined the "test-driven development mantra", known as red/green/refactor, where red means fail and green means pass.
Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring.
Receiving the expected test results at each stage reinforces the programmer's mental model of the code, boosts
confidence and increases productivity.
Advanced practices of test-driven development can lead to Acceptance Test-Driven Development (ATDD), where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process.[6] This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target
to satisfy, the acceptance tests, which keeps them continuously focused on what the customer really wants from that
user story.
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive.[7] Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.[8]
Programmers using pure TDD on new ("greenfield") projects report that they only rarely feel the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.[9]
Test-driven development offers more than just simple validation of correctness, but can also drive the design of a
program. By focusing on the test cases first, one must imagine how the functionality will be used by clients (in the
first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit
is complementary to Design by Contract as it approaches code through test cases rather than through mathematical
assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the
task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered
initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development
ensures in this way that all written code is covered by at least one test. This gives the programming team, and
subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time is typically shorter.[10] Large numbers of tests help to limit the number of defects in the code.
The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them
from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and
tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the
methodology requires that the developers think of the software in terms of small units that can be written and tested
independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner
interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code
because this pattern requires that the code be written so that modules can be switched easily between mock versions
for unit testing and "real" versions for deployment.
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code
path. For example, in order for a TDD developer to add an else branch to an existing if statement, the developer
would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from
TDD tend to be very thorough: they will detect any unexpected changes in the code's behaviour. This detects
problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
Shortcomings
Test-driven development is difficult to use in situations where full functional tests are required to determine
success or failure. Examples of these are user interfaces, programs that work with databases, and some that
depend on specific network configurations. TDD encourages developers to put the minimum amount of code into
such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the
outside world.
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.[11]
Unit tests created in a test-driven development environment are typically created by the developer who will also
write the code that is being tested. The tests may therefore share the same blind spots with the code: If, for
example, a developer does not realize that certain input parameters must be checked, most likely neither the test
nor the code will verify these input parameters. If the developer misinterprets the requirements specification for
the module being developed, both the tests and the code will be wrong.
The high number of passing unit tests may bring a false sense of security, resulting in fewer additional software
testing activities, such as integration testing and compliance testing.
The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. This is especially the case with fragile tests.[12] There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later
date. Therefore these original tests become increasingly precious as time goes by. If a poor architecture, a poor
design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that
they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the
test coverage.
Code visibility
Test suite code clearly has to be able to access the code it is testing. On the other hand, normal design criteria such as
information hiding, encapsulation and the separation of concerns should not be compromised. Therefore unit test
code for TDD is usually written within the same project or module as the code being tested.
In object oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access fields that are marked private.[13] Alternatively, an inner class can be used to hold the unit tests so they will have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
It is important that such testing hacks do not remain in the production code. In C and other languages, compiler
directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other
test-related code to prevent them being compiled into the released code. This then means that the released code is not
exactly the same as that which is unit tested. The regular running of fewer but more comprehensive, end-to-end,
integration tests on the final release build can then ensure (among other things) that no production code exists that
subtly relies on aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is
wise to test private methods and data anyway. Some argue that private members are a mere implementation detail
that may change, and should be allowed to do so without breaking numbers of tests. Thus it should be sufficient to
test any class through its public interface or through its subclass interface, which some languages call the "protected"
interface.[14] Others say that crucial aspects of functionality may be implemented in private methods, and that developing this while testing it indirectly via the public interface only obscures the issue: unit testing is about testing the smallest unit of functionality possible.[15][16]
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests
and a simple module may have only ten. The tests used for TDD should never cross process boundaries in a
program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage
developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests
into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear
where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable, and more reusable code.[17] Two steps are necessary:
1. Whenever external access is going to be needed in the final design, an interface should be defined that describes
the access that will be available. See the dependency inversion principle for a discussion of the benefits of doing
this regardless of TDD.
2. The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock. Fake objects need do little more than add a message such as "Person object saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example, if the person's name and other data are not as expected.
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by
always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so
that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid,
incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in
TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always
return 1. Fake or mock implementations are examples of dependency injection.
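A brief sketch of the two kinds of implementation described above, using Python and a hypothetical PersonStore interface: the fake merely records a trace message that a test assertion inspects afterwards, while the mock carries its own assertion and can fail the test directly.

import unittest

class PersonStore:
    """Interface describing the external access the code under test needs."""
    def save(self, person):
        raise NotImplementedError

class FakePersonStore(PersonStore):
    """Fake: records a trace message that a test can assert against afterwards."""
    def __init__(self):
        self.trace = []
    def save(self, person):
        self.trace.append("Person object saved: {}".format(person["name"]))

class MockPersonStore(PersonStore):
    """Mock: carries its own assertion and fails the test if the data is wrong."""
    def __init__(self, testcase, expected_name):
        self.testcase, self.expected_name = testcase, expected_name
    def save(self, person):
        self.testcase.assertEqual(person["name"], self.expected_name)

def register_person(store, name):
    # Code under test: builds a person record and saves it through the interface.
    store.save({"name": name})

class RegistrationTest(unittest.TestCase):
    def test_with_fake(self):
        fake = FakePersonStore()
        register_person(fake, "Alice")
        self.assertIn("Person object saved: Alice", fake.trace)

    def test_with_mock(self):
        register_person(MockPersonStore(self, "Bob"), "Bob")

if __name__ == "__main__":
    unittest.main()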
A corollary of such dependency injection is that the actual database or other external-access code is never tested by
the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven
code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite
separate from the TDD unit tests. There will be fewer of them, and they need to be run less often than the unit tests.
They can nonetheless be implemented using the same testing framework, such as xUnit.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of
the initial and final state of the files or database, even if any test fails. This is often achieved using some combination
of the following techniques:
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures where available.
Database transactions where a transaction atomically includes perhaps a write, a read and a matching delete
operation.
Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run.
This may be automated using a framework such as Ant or NAnt or a continuous integration system such as
CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant
where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before
detailed diagnosis can be performed.
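As an illustrative sketch of the first technique (a framework-provided set-up/tear-down pair), the following integration-style test uses Python's unittest with an in-memory SQLite database, creating a known initial state before each test and discarding it afterwards, even when a test fails.

import sqlite3
import unittest

class PersonTableIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Bring the store to a known initial state before every test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE person (name TEXT)")
        self.conn.execute("INSERT INTO person VALUES ('Alice')")
        self.conn.commit()

    def tearDown(self):
        # Runs even if the test fails, leaving no state behind.
        self.conn.close()

    def test_insert_and_count(self):
        self.conn.execute("INSERT INTO person VALUES ('Bob')")
        count = self.conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
        self.assertEqual(count, 2)

if __name__ == "__main__":
    unittest.main()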
References
[1] Beck, K. Test-Driven Development by Example, Addison Wesley, 2003.
[2] Lee Copeland (December 2001). "Extreme Programming" (http://www.computerworld.com/softwaretopics/software/appdev/story/0,10801,66192,00.html). Computerworld. Retrieved January 11, 2011.
[3] Newkirk, JW and Vorontsov, AA. Test-Driven Development in Microsoft .NET, Microsoft Press, 2004.
[4] Feathers, M. Working Effectively with Legacy Code, Prentice Hall, 2004.
[5] "Agile Test Driven Development" (http://www.agilesherpa.org/agile_coach/engineering_practices/test_driven_development/). Agile Sherpa. 2010-08-03. Retrieved 2012-08-14.
[6] Koskela, L. Test Driven: TDD and Acceptance TDD for Java Developers, Manning Publications, 2007.
[7] Erdogmus, Hakan; Morisio, Torchiano. "On the Effectiveness of Test-first Approach to Programming" (http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=shwart&index=an&req=5763742&lang=en). Proceedings of the IEEE Transactions on Software Engineering, 31(1). January 2005. (NRC 47445). Retrieved 2008-01-14. "We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive."
[8] Proffitt, Jacob. "TDD Proven Effective! Or is it?" (http://theruntime.com/blogs/jacob/archive/2008/01/22/tdd-proven-effective-or-is-it.aspx). Retrieved 2008-02-21. "So TDD's relationship to quality is problematic at best. Its relationship to productivity is more interesting. I hope there's a follow-up study because the productivity numbers simply don't add up very well to me. There is an undeniable correlation between productivity and the number of tests, but that correlation is actually stronger in the non-TDD group (which had a single outlier compared to roughly half of the TDD group being outside the 95% band)."
[9] Llopis, Noel (20 February 2005). "Stepping Through the Looking Glass: Test-Driven Game Development (Part 1)" (http://gamesfromwithin.com/stepping-through-the-looking-glass-test-driven-game-development-part-1). Games from Within. Retrieved 2007-11-01. "Comparing [TDD] to the non-test-driven development approach, you're replacing all the mental checking and debugger stepping with code that verifies that your program does exactly what you intended it to do."
[10] Müller, Matthias M.; Padberg, Frank. "About the Return on Investment of Test-Driven Development" (http://www.ipd.kit.edu/KarHPFn/papers/edser03.pdf) (PDF). Universität Karlsruhe, Germany. p. 6. Retrieved 2012-06-14.
[11] Loughran, Steve (November 6, 2006). "Testing" (http://people.apache.org/~stevel/slides/testing.pdf) (PDF). HP Laboratories. Retrieved 2009-08-12.
[12] "Fragile Tests" (http://xunitpatterns.com/Fragile%20Test.html).
[13] Burton, Ross (2003-11-12). "Subverting Java Access Protection for Unit Testing" (http://www.onjava.com/pub/a/onjava/2003/11/12/reflection.html). O'Reilly Media, Inc. Retrieved 2009-08-12.
[14] van Rossum, Guido; Warsaw, Barry (5 July 2001). "PEP 8 -- Style Guide for Python Code" (http://www.python.org/dev/peps/pep-0008/). Python Software Foundation. Retrieved 6 May 2012.
[15] Newkirk, James (7 June 2004). "Testing Private Methods/Member Variables - Should you or shouldn't you" (http://blogs.msdn.com/jamesnewkirk/archive/2004/06/07/150361.aspx). Microsoft Corporation. Retrieved 2009-08-12.
[16] Stall, Tim (1 Mar 2005). "How to Test Private and Protected methods in .NET" (http://www.codeproject.com/KB/cs/testnonpublicmembers.aspx). CodeProject. Retrieved 2009-08-12.
[17] Fowler, Martin (1999). Refactoring: Improving the Design of Existing Code. Boston: Addison Wesley Longman, Inc. ISBN 0-201-48567-2.
External links
TestDrivenDevelopment on WikiWikiWeb
Test or spec? Test and spec? Test from spec! (http://www.eiffel.com/general/monthly_column/2004/september.html), by Bertrand Meyer (September 2004)
Microsoft Visual Studio Team Test from a TDD approach (http://msdn.microsoft.com/en-us/library/ms379625(VS.80).aspx)
Write Maintainable Unit Tests That Will Save You Time And Tears (http://msdn.microsoft.com/en-us/magazine/cc163665.aspx)
Improving Application Quality Using Test-Driven Development (TDD) (http://www.methodsandtools.com/archive/archive.php?id=20)
Agile testing
Agile testing is a software testing practice that follows the principles of agile software development. Agile testing
involves all members of a cross-functional agile team, with special expertise contributed by testers, to ensure
delivering the business value desired by the customer at frequent intervals, working at a sustainable pace.
Specification by example, also known as acceptance test-driven development, is used to capture examples of desired
and undesired behavior and guide coding.
Overview
Agile development recognizes that testing is not a separate phase, but an integral part of software development,
along with coding. Agile teams use a "whole-team" approach to "baking quality in" to the software product. Testers
on agile teams lend their expertise in eliciting examples of desired behavior from customers, collaborating with the
development team to turn those into executable specifications that guide coding. Testing and coding are done
incrementally and iteratively, building up each feature until it provides enough value to release to production. Agile
testing covers all types of testing. The Agile Testing Quadrants provide a helpful taxonomy to help teams identify
and plan the testing needed.
Further reading
Lisa Crispin, Janet Gregory (2009). Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley. ISBN 0-321-53446-8.
Adzic, Gojko (2011). Specification by Example: How Successful Teams Deliver the Right Software. Manning. ISBN 978-1-61729-008-4.
Ambler, Scott (2010). "Agile Testing and Quality Strategies: Discipline over Rhetoric" (http://www.ambysoft.com/essays/agileTesting.html). Retrieved 2010-07-15.
References
Pettichord, Bret (2002-11-11). "Agile Testing What is it? Can it work?" (http://www.sasqag.org/pastmeetings/AgileTesting20021121.pdf). Retrieved 2011-01-10.
Hendrickson, Elisabeth (2008-08-11). "Agile Testing, Nine Principles and Six Concrete Practices for Testing on Agile Teams" (http://testobsessed.com/wp-content/uploads/2011/04/AgileTestingOverview.pdf). Retrieved 2011-04-26.
Crispin, Lisa (2003-03-21). "XP Testing Without XP: Taking Advantage of Agile Testing Practices" (http://www.methodsandtools.com/archive/archive.php?id=2). Retrieved 2009-06-11.
Bug bash
In software development, a bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people put aside their regular day-to-day duties and pound on the product to get as many eyes on the product as possible.[1]
A bug bash is a tool used as part of a test management approach. A bug bash is usually declared in advance to the team. The test management team sends out the scope and assigns testers as a resource to assist in setup and also to collect bugs. Test management might combine this with a small token prize for good bugs found and/or a small social (drinks) at the end of the bug bash. Another bug bash prize has been the chance to throw a pie at members of the test management team.
References
[1] Ron Patton (2001). Software Testing. Sams. ISBN 0-672-31983-7.
Pair Testing
Pair testing is a software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one tester and a developer or business analyst, or between two testers, with both participants taking turns at driving the keyboard.
Description
Pair testing is closely related to pair programming and to the exploratory testing of agile software development, in which two team members sit together to test the software application. It helps both members learn more about the application. Testing continuously in a pair also helps narrow down the root cause of a problem: the developer can find out which portion of the source code is affected by the bug, which in turn helps to build solid test cases and to narrow the problem down faster the next time.
Benefits and drawbacks
The developer can learn more about the software application by exploring it with the tester, and the tester can learn more about the software application by exploring it with the developer.
Fewer participants are required for testing, and the root cause of important bugs can be analyzed very easily. The tester can very easily verify the initial status of a bug fix with the developer, and the practice encourages developers to come up with good testing scenarios on their own.
Pair testing is not applicable to scripted testing, where all the test cases are already written and one simply has to run the scripts, and it does not help in evaluating an issue and its impact in that context.
Usage
Pair testing is most applicable where the requirements and specifications are not very clear and the team is new and needs to learn the application's behavior quickly.
It follows the same principles as pair programming; the two team members should be at the same level.
External links
Pair testing with developer [1]
ISTQB Official website [2]
References
[1] http://www.testingreflections.com/node/view/272
[2] http://www.istqb.org/
Manual testing
Compare with Test automation.
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most of the application's features to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.
Overview
A key step in the process of software engineering is testing the software for correct behavior prior to release to end
users.
For small scale engineering efforts (including prototypes), exploratory testing may be sufficient. With this informal
approach, the tester does not follow any rigorous testing procedure, but rather explores the user interface of the
application using as many of its features as possible, using information gained in prior tests to intuitively derive
additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester,
because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is that it gives an intuitive insight into how it feels to use the application.
Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to
maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and
generally involves the following steps:[1]
1. Choose a high level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record the results.
4. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.
A rigorous test case based approach is often traditional for large software engineering projects that follow a Waterfall model.[2] However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test case based testing.[3]
Testing can be through black-, white- or grey-box testing. In white-box testing the tester is concerned with the
execution of the statements through the source code. In black-box testing the software is run to check for the defects
and is less concerned with how the processing of the input is done. Black-box testers do not have access to the
source code. Grey-box testing is concerned with running the software while having an understanding of the source
code and algorithms.
Static and dynamic testing approach may also be used. Dynamic testing involves running the software. Static testing
includes verifying requirements, syntax of code and any other activities that do not include actually running the code
of the program.
Testing can be further divided into functional and non-functional testing. In functional testing the tester checks the calculations, the links on a page, or any other field for which, given an input, a particular output is expected.
Non-functional testing includes testing performance, compatibility and fitness of the system under test, its security
and usability among other things.
Stages
There are several stages. They are:[4]
Unit Testing
This initial stage of testing is normally carried out by the developer who wrote the code, and sometimes by a peer, using white box testing techniques.
Integration Testing
This stage is carried out in two modes: as a complete package or as an increment to the earlier package. Most of the time black box testing techniques are used; however, a combination of black and white box testing is sometimes also used in this stage.
Software Testing
After the integration has been tested, a software tester (who may be a manual tester or an automation engineer) performs software testing on the complete software build. This software testing consists of two types of testing:
1. Functional testing: checks whether the SUT (software under test) works as per the functional part of the Software Requirement Specification [SRS = FRS + NFRS (Non-Functional Requirements Specification)]. This is performed using black box testing techniques such as BVA (boundary value analysis), ECP (equivalence class partitioning), decision tables and orthogonal arrays, and covers four kinds of front-end testing (GUI, control flow, input domain, output or manipulation) and one kind of back-end testing, i.e. database testing.
2. Non-functional testing (also called system testing or characteristics testing): checks whether the SUT works as per the NFRS, which covers characteristics of the software to be developed such as usability, compatibility, configuration, inter-system sharing, performance and security.
System Testing
In this stage the software is tested from all possible dimensions for all intended purposes and platforms. Black box testing techniques are normally used in this stage.
User Acceptance Testing
This testing stage is carried out in order to get customer sign-off on the finished product. A 'pass' in this stage also indicates that the customer has accepted the software and that it is ready for use.
Release or Deployment Testing
The onsite team will go to the customer site to install the system in the customer's configured environment and will check the following points:
1. Whether the setup executable (e.g. SetUp.exe) runs.
2. Whether the installation screens are easy to follow.
3. How much space the system occupies on the hard disk.
4. Whether the system is completely removed when the user opts to uninstall it.
Comparison to Automated Testing
Test automation may be able to reduce or eliminate the cost of actual testing. A computer can follow a rote
sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning.
However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the
type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual
approach. In addition, some testing tools present a very large amount of data, potentially creating a time consuming
task of interpreting the results.
Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large
numbers of users (performance testing and load testing) is typically simulated in software rather than performed in
practice.
Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There
are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of
keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same
way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a
subsequent release. An automatic regression test may also be fooled if the program output varies significantly.
References
[1] ANSI/IEEE 829-1983, IEEE Standard for Software Test Documentation
[2] Craig, Rick David; Stefan P. Jaskiel (2002). Systematic Software Testing. Artech House. p.7. ISBN1-58053-508-9.
[3] Itkonen, Juha; Mika V. Mäntylä and Casper Lassenius (2007). "Defect Detection Efficiency: Test Case Based vs. Exploratory Testing" (http://www.soberit.hut.fi/jitkonen/Publications/Itkonen_Mntyl_Lassenius_2007_ESEM.pdf). First International Symposium on Empirical Software Engineering and Measurement. Retrieved January 17, 2009.
[4] "Testing in Stages Software Testing|Automation Testing|Interview Faqs||Manual Testing Q&A" (http:/ / softwaretestinginterviewfaqs.
wordpress.com/ category/ testing-in-stages/ ). Softwaretestinginterviewfaqs.wordpress.com. May 30, 2009. . Retrieved July 18, 2012.
Regression testing
Regression testing is any type of software testing that seeks to uncover new software bugs, or regressions, in
existing functional and non-functional areas of a system after changes, such as enhancements, patches or
configuration changes, have been made to them.
The intent of regression testing is to ensure that a change, such as a bugfix, did not introduce new faults.[1] One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.[2]
Common methods of regression testing include rerunning previously run tests and checking whether program
behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be used to test a
system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a
particular change.
Background
Experience has shown that as software is fixed, emergence of new and/or reemergence of old faults is quite common.
Sometimes reemergence occurs because a fix gets lost through poor revision control practices (or simple human
error in revision control). Often, a fix for a problem will be "fragile" in that it fixes the problem in the narrow case
where it was first observed but not in more general cases which may arise over the lifetime of the software.
Frequently, a fix for a problem in one area inadvertently causes a software bug in another area. Finally, often when
some feature is redesigned, some of the same mistakes that were made in the original implementation of the feature
are made in the redesign.
Therefore, in most software development situations it is considered good coding practice that when a bug is located
and fixed, a test that exposes the bug is recorded and regularly retested after subsequent changes to the program.[3]
Although this may be done through manual testing procedures using programming techniques, it is often done using automated testing tools.[4] Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to automatically re-run all regression tests at specified intervals and report any failures (which could imply a regression or an out-of-date test).[5] Common strategies are to run such a system after every successful compile (for small projects), every night,
or once a week. Those strategies can be automated by an external tool, such as BuildBot, Tinderbox, Hudson,
Jenkins, TeamCity or Bamboo.
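As a hedged illustration of this practice, the sketch below (plain Python in the pytest style, with an invented parse_price helper that is not taken from any real project) shows a regression test recorded when a bug was fixed and re-run after every subsequent change.

# Minimal regression-test sketch. parse_price() is a hypothetical helper
# that previously raised ValueError on inputs containing a thousands separator.
def parse_price(text):
    # Convert a price string such as '1,299.00' to a float.
    return float(text.replace(",", ""))

def test_parse_price_regression_thousands_separator():
    # Recorded when the bug was fixed; guards against the fault re-emerging.
    assert parse_price("1,299.00") == 1299.00

def test_parse_price_plain_value_still_works():
    # Companion case verifying that the fix did not break existing behavior.
    assert parse_price("42.50") == 42.50

# A test runner such as pytest would normally discover these; calling them
# directly keeps the sketch self-contained.
test_parse_price_regression_thousands_separator()
test_parse_price_plain_value_still_works()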
Regression testing is an integral part of the extreme programming software development method. In this method,
design documents are replaced by extensive, repeatable, and automated testing of the entire software package
throughout each stage of the software development cycle.
In the corporate world, regression testing has traditionally been performed by a software quality assurance team after
the development team has completed work. However, defects found at this stage are the most costly to fix. This
problem is being addressed by the rise of unit testing. Although developers have always written test cases as part of
the development cycle, these test cases have generally been either functional tests or unit tests that verify only
intended outcomes. Developer testing compels a developer to focus on unit testing and to include both positive and
negative test cases.[6]
Uses
Regression testing can be used not only for testing the correctness of a program, but often also for tracking the
quality of its output.[7] For instance, in the design of a compiler, regression testing could track the code size,
simulation time and compilation time of the test suite cases.
"Also as a consequence of the introduction of new bugs, program maintenance requires far more system
testing per statement written than any other programming. Theoretically, after each fix one must run the entire
batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way.
In practice, such regression testing must indeed approximate this theoretical idea, and it is very costly."
Fred Brooks, The Mythical Man Month, p. 122
Regression tests can be broadly categorized as functional tests or unit tests. Functional tests exercise the complete
program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Both functional
testing tools and unit testing tools tend to be third-party products that are not part of the compiler suite, and both tend
to be automated. A functional test may be a scripted series of program inputs, possibly even involving an automated
mechanism for controlling mouse movements and clicks. A unit test may be a set of separate functions within the
code itself, or a driver layer that links to the code without altering the code being tested.
References
[1] Myers, Glenford (2004). The Art of Software Testing. Wiley. ISBN978-0-471-46912-4.
[2] Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p.386. ISBN978-0-615-23372-7.
[3] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p.73. ISBN0-470-04212-5.
[4] Automate Regression Tests When Feasible (http://safari.oreilly.com/0201794292/ch08lev1sec4), Automated Testing: Selected Best Practices, Elfriede Dustin, Safari Books Online
[5] daVeiga, Nada (February 2008). "Change Code Without Fear: Utilize a Regression Safety Net" (http://www.ddj.com/development-tools/206105233;jsessionid=2HN1TRYZ4JGVAQSNDLRSKH0CJUNN2JVN). Dr. Dobb's Journal.
[6] Dudney, Bill (2004-12-08). "Developer Testing Is 'In': An interview with Alberto Savoia and Kent Beck" (http://www.sys-con.com/read/47359.htm). Retrieved 2007-11-29.
[7] Kolawa, Adam. "Regression Testing, Programmer to Programmer" (http://www.wrox.com/WileyCDA/Section/id-291252.html). Wrox.
External links
Microsoft regression testing recommendations (http://msdn.microsoft.com/en-us/library/aa292167(VS.71).aspx)
Ad hoc testing
Ad hoc testing is a commonly used term for software testing performed without planning and documentation (but
can be applied to early scientific experimental studies).
The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is the least formal test
method. As such, it has been criticized because it is not structured and hence defects found using this method may be
harder to reproduce (since there are no written test cases). However, the strength of ad hoc testing is that important
defects can be found quickly.
It is performed by improvisation: the tester seeks to find bugs by any means that seem appropriate. Ad hoc testing
can be seen as a light version of error guessing, which itself is a light version of exploratory testing.
References
Exploratory Testing Explained [1]
Context-Driven School of testing [2]
References
[1] http://www.satisfice.com/articles/et-article.pdf
[2] http://www.context-driven-testing.com/
Sanity testing
A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can
possibly be true. It is a simple check to see if the produced material is rational (that the material's creator was
thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results,
not to catch every possible error. A rule of thumb may be used to perform the test. The advantage of a sanity test,
over performing a complete or rigorous test, is speed.
In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of digits of
the result is divisible by 9 is a sanity test - it will not catch every multiplication error, however it's a quick and simple
method to discover many possible errors.
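As a small illustration of that rule, the following Python sketch (written for this article, not taken from any source) checks a claimed multiple of 9 by summing its digits; failing the check proves an error, while passing it does not prove correctness.

def digit_sum(n):
    # Sum of the decimal digits of n.
    return sum(int(d) for d in str(abs(n)))

def sane_multiple_of_nine(claimed_result):
    # Divisibility rule for 9: any true multiple of 9 has a digit sum
    # that is itself divisible by 9.
    return digit_sum(claimed_result) % 9 == 0

print(sane_multiple_of_nine(9 * 487))  # True: 4383 has digit sum 18
print(sane_multiple_of_nine(4382))     # False: digit sum 17, so not a multiple of 9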
In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system,
calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is
often prior to a more exhaustive round of testing.
Mathematical
A sanity test can refer to various orders of magnitude and other simple rule-of-thumb devices applied to cross-check
mathematical calculations. For example:
If one were to attempt to square 738 and calculated 53,874, a quick sanity check could show that this result cannot be true. Consider that 700 < 738, yet 700² = 7² × 100² = 490,000 > 53,874. Since squaring positive numbers preserves their inequality, the result cannot be true, and so the calculated result is incorrect. The correct answer, 738² = 544,644, is more than 10 times higher than 53,874, and so the result had been off by an order of magnitude.
In multiplication, 918 × 155 is not 142,135 since 918 is divisible by three but 142,135 is not (digits add up to 16, not a multiple of three). Also, the product must end in the same digit as the product of the end digits, 8 × 5 = 40, but 142,135 does not end in "0" like "40", while the correct answer does: 918 × 155 = 142,290. An even quicker check is that the product of even and odd numbers is even, whereas 142,135 is odd.
When talking about quantities in physics, the power output of a car cannot be 700 kJ since that is a unit of energy,
not power (energy per unit time). See dimensional analysis.
Software development
In software development, the sanity test (a form of software testing which offers "quick, broad, and shallow testing"[1]) determines whether it is reasonable to proceed with further testing.
Software sanity tests are commonly conflated with smoke tests.[2] A smoke test determines whether it is possible to
continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program
launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or
an input button). If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test
exercises the smallest subset of application functions needed to determine whether the application logic is generally
functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is
not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and
effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies
run sanity tests and unit tests on an automated build as part of their development process.[3]
Sanity testing may be a tool used while manually debugging software. An overall piece of software likely involves
multiple subsystems between the input and the output. When the overall system is not working as expected, a sanity
test can be used to make the decision on what to test next. If one subsystem is not giving the expected result, the
other subsystems can be eliminated from further investigation until the problem with this one is solved.
The Hello world program is often used as a sanity test for a development environment. If Hello World fails to
compile or execute, the supporting environment likely has a configuration problem. If it works, the problem being
diagnosed likely lies in the real application being diagnosed.
Another, possibly more common usage of 'sanity test' is to denote checks which are performed within program code,
usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more
complicated the routine, the more important it is that its response be checked. The trivial case is checking that a file open, write, or close did not fail, a sanity check that is often ignored by programmers. But more complex items can also be sanity-checked for various reasons.
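A minimal sketch of such in-code sanity checks is shown below; the function name, limits and file name are invented for illustration and are not part of any real API.

def average_of_positive(values):
    # Sanity check on the arguments: the routine only makes sense for a
    # non-empty list of positive numbers.
    if not values:
        raise ValueError("sanity check failed: empty input")
    if any(v <= 0 for v in values):
        raise ValueError("sanity check failed: non-positive value in input")
    result = sum(values) / len(values)
    # Sanity check on the return value: an average must lie between the
    # smallest and largest input, so anything else signals a defect.
    assert min(values) <= result <= max(values), "implausible average"
    return result

# The trivial file-handling case: check that the open/write did not fail
# instead of silently assuming success.
try:
    with open("report.txt", "w") as f:
        f.write("ok\n")
except OSError as err:
    print("sanity check failed while writing report.txt:", err)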
Examples of this include bank account management systems which check that withdrawals are sane in not requesting more than the account contains, and that deposits or purchases are sane in fitting in with patterns established by historical data: large deposits may be more closely scrutinized for accuracy, large purchase transactions may be double-checked with the card holder for validity against fraud, ATM withdrawals in foreign locations never before visited by the card holder might be cleared with the card holder, and so on. These are "runtime" sanity checks, as opposed to the "development" sanity checks mentioned above.
References
[1] M. A. Fecko and C. M. Lott, "Lessons learned from automating tests for an operations support system" (http://www.chris-lott.org/work/pubs/2002-spe.pdf), Software: Practice and Experience, v. 32, October 2002.
[2] Erik van Veenendaal (ed.), Standard glossary of terms used in Software Testing (http://www.istqb.org/downloads/glossary-1.1.pdf), International Software Testing Qualification Board.
[3] Hassan, A. E. and Zhang, K. 2006. Using Decision Trees to Predict the Certification Result of a Build (http://portal.acm.org/citation.cfm?id=1169218.1169318&coll=&dl=ACM&type=series&idx=SERIES10803&part=series&WantType=Proceedings&title=ASE#). In Proceedings of the 21st IEEE/ACM International Conference on Automated Software Engineering (September 18-22, 2006). Automated Software Engineering. IEEE Computer Society, Washington, DC, 189-198.
Integration testing
Integration testing (sometimes called Integration and Testing, abbreviated "I&T") is the phase in software testing in
which individual software modules are combined and tested as a group. It occurs after unit testing and before
validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger
aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the
integrated system ready for system testing.
Purpose
The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major
design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using
Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated
usage of shared data areas and inter-process communication is tested and individual subsystems are exercised
through their input interface. Test cases are constructed to test that all components within assemblages interact
correctly, for example across procedure calls or process activations, and this is done after testing individual modules,
i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a
verified base which is then used to support the integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up.
Big Bang
In this approach, all or most of the developed modules are coupled together to form a complete software system or
major part of the system and then used for integration testing. The Big Bang method is very effective for saving time
in the integration testing process. However, if the test cases and their results are not recorded properly, the entire
integration process will be more complicated and may prevent the testing team from achieving the goal of integration
testing.
A type of Big Bang Integration testing is called Usage Model testing. Usage Model Testing can be used in both
software and hardware integration testing. The basis behind this type of integration testing is to run user-like
workloads in integrated user-like environments. In doing the testing in this manner, the environment is proofed,
while the individual components are proofed indirectly through their use. Usage Model testing takes an optimistic
approach to testing, because it expects to have few problems with the individual components. The strategy relies
heavily on the component developers to do the isolated unit testing for their product. The goal of the strategy is to
avoid redoing the testing done by the developers, and instead flesh out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can be more efficient and provides
better test coverage than traditional focused functional integration testing. To be more efficient and accurate, care
must be used in defining the user-like workloads for creating realistic scenarios in exercising the environment. This
gives confidence that the integrated environment will work as expected for the target customers.
Top-down and Bottom-up
Bottom Up Testing is an approach to integrated testing where the lowest level components are tested first, then used
to facilitate the testing of higher level components. The process is repeated until the component at the top of the
hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration
testing of lower level integrated modules, the next level of modules will be formed and can be used for integration
testing. This approach is helpful only when all or most of the modules of the same development level are ready. This
method also helps to determine the levels of software developed and makes it easier to report testing progress in the
form of a percentage.
Top Down Testing is an approach to integrated testing where the top integrated modules are tested and the branch of
the module is tested step by step until the end of the related module.
Sandwich Testing is an approach to combine top down testing with bottom up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to
find a missing branch link.
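As a rough sketch of the bottom-up idea, the Python example below first exercises a low-level helper on its own and then reuses the already verified helper while testing the higher-level routine built on top of it; all names are invented for illustration.

# Low-level component: tested first in a bottom-up integration.
def line_total(price, quantity):
    return price * quantity

def test_line_total():
    assert line_total(2.50, 4) == 10.00

# Higher-level component that builds on the verified low-level one.
def invoice_total(lines):
    return sum(line_total(price, qty) for price, qty in lines)

def test_invoice_total_uses_verified_building_block():
    assert invoice_total([(2.50, 4), (1.00, 3)]) == 13.00

test_line_total()
test_invoice_total_uses_verified_building_block()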
Limitations
Any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items,
will generally not be tested.
System testing
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements. System testing falls within the scope of black box testing, and
as such, should require no knowledge of the inner design of the code or logic.[1]
As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed
integration testing and also the software system itself integrated with any applicable hardware system(s). The
purpose of integration testing is to detect any inconsistencies between the software units that are integrated together
(called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of
testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.
Testing the whole system
System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS)
and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behaviour
and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in
the software/hardware requirements specification(s).
Types of tests to include in system testing
The following examples are different types of testing that should be considered during System testing:
Graphical user interface testing
Usability testing
Software performance testing
Compatibility testing
Exception handling
Load testing
Volume testing
Stress testing
Security testing
Scalability testing
Sanity testing
Smoke testing
Exploratory testing
Ad hoc testing
Regression testing
Installation testing
Maintenance testing
Recovery testing and failover testing.
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Although different testing organizations may prescribe different tests as part of System testing, this list serves as a
general framework or foundation to begin with.
References
[1] IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries; IEEE; New York, NY.; 1990.
Black, Rex (2002). Managing the Testing Process (2nd ed.). Wiley Publishing. ISBN 0-471-22398-0.
System integration testing
In the context of software systems and software engineering, system integration testing (SIT) is a testing process
that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each
have already passed system testing, SIT proceeds to test their required interactions. Following this, the deliverables
are passed on to acceptance testing.
Introduction
SIT is part of the software testing life cycle for collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round, and software providers usually run a pre-SIT round before consumers run their SIT test cases.
For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then it integrates the new application layer and the new database layer with the customer's existing application and database layers. After the integration completes, users use the new (extended) part of the integrated application to update data, while continuing to use the old (pre-existing) part. A process should exist to exchange data imports and exports between the two data layers, and this data exchange process should keep both systems up-to-date. The purpose of system integration testing is to make sure that these systems are successfully integrated and kept up-to-date by exchanging data with each other.
There may be more parties in the integration, for example the customer (consumer) can have their own customers;
there may be also multiple providers.
Data driven method
This is a simple method which can be performed with minimal use of software testing tools: exchange some data imports and data exports, and then investigate the behavior of each data field within each individual layer. There are three main states of data flow after the software collaboration is done.
Data state within the integration layer
The integration layer can be middleware or web service(s) which act as a medium for data imports and data exports. Perform some data imports and exports and check the following steps.
1. Cross-check the data properties within the integration layer against the technical/business specification documents.
- If a web service is involved in the integration layer, the WSDL and XSD can be checked against the web service request for the cross-check.
- If middleware is involved in the integration layer, data mappings can be checked against the middleware logs for the cross-check.
2. Execute some unit tests. Cross-check the data mappings (data positions, declarations) and requests (character length, data types) against the technical specifications.
3. Investigate the server logs/middleware logs for troubleshooting.
(Reading knowledge of WSDL, XSD, DTD, XML, and EDI might be required for this)
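One way to automate part of step 1 is sketched below, validating a captured web service response against its XSD with the lxml library; the file names are placeholders and the snippet is an illustrative sketch rather than a prescribed procedure.

from lxml import etree

# Placeholder files: a captured web service response and the XSD referenced
# by the service's WSDL.
schema = etree.XMLSchema(etree.parse("service_schema.xsd"))
response = etree.parse("captured_response.xml")

if schema.validate(response):
    print("Response matches the agreed XSD.")
else:
    # error_log lists each field that violates the specification.
    for error in schema.error_log:
        print("Mismatch:", error.message)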
Data state within the database layer
1. First check whether all the data have been committed to the database layer from the integration layer.
2. Then check the data properties against the table and column properties in the relevant technical/business specification documents.
3. Check the data validations/constraints against the business specification documents.
4. If there is any processing of data within the database layer, check the stored procedures against the relevant specifications.
5. Investigate the server logs for troubleshooting.
(Knowledge in SQL and reading knowledge in stored procedures might be required for this)
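A small sketch of step 1 follows, comparing the identifiers exported through the integration layer with the rows that actually reached the database; the table, column and file names are invented, and sqlite3 merely stands in for whatever database is really used.

import csv
import sqlite3

# Records exported through the integration layer (placeholder CSV file).
with open("exported_orders.csv", newline="") as f:
    exported_ids = {row["order_id"] for row in csv.DictReader(f)}

# Rows that actually arrived in the database layer (placeholder table name).
conn = sqlite3.connect("customer_solution.db")
committed_ids = {str(order_id) for (order_id,) in
                 conn.execute("SELECT order_id FROM orders")}
conn.close()

missing = exported_ids - committed_ids
print("All exported records committed." if not missing
      else "Records missing from the database layer: %s" % sorted(missing))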
Data state within the Application layer
There is not that much to do within the application layer when performing system integration testing.
1. Mark all the fields from the business requirement documents which should be visible in the UI.
2. Create a data map from database fields to application fields and check whether the necessary fields are visible in the UI.
3. Check the data properties with some positive and negative test cases.
There are many combinations of data imports and exports which can be performed within the time period available for system integration testing; the best combinations have to be selected to fit the limited time, and some of the above steps have to be repeated in order to test those combinations.
Acceptance testing
Acceptance testing of an aircraft catapult
In engineering and its various
subdisciplines, acceptance testing is a test
conducted to determine if the requirements
of a specification or contract are met. It may
involve chemical tests, physical tests, or
performance tests.
In systems engineering it may involve
black-box testing performed on a system
(for example: a piece of software, lots of
manufactured mechanical parts, or batches
of chemical products) prior to its delivery.[1]
Software developers often distinguish
acceptance testing by the system provider
from acceptance testing by the customer (the
user or client) prior to accepting transfer of ownership. In the case of software, acceptance testing performed by the
customer is known as user acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance)
testing.
A smoke test is used as an acceptance test prior to introducing a build to the main testing process.
Overview
Testing generally involves running a suite of tests on the completed system. Each individual test, known as a case,
exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass
or fail, or boolean, outcome. There is generally no degree of success or failure. The test environment is usually
designed to be identical, or as close as possible, to the anticipated user's environment, including extremes of such.
These test cases must each be accompanied by test case input data or a formal description of the operational activities (or both) to be performed, intended to thoroughly exercise the specific case, and a formal description of the expected results.
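To make the shape of such a case concrete, here is a small table-driven sketch in Python: each case carries its input data and a formal expected result, and the run yields a plain pass or fail outcome. The discount rule and figures are invented for illustration.

# Hypothetical function under acceptance test.
def discounted_total(amount, customer_type):
    # Assumed business rule: loyal customers receive a 10% discount.
    return round(amount * 0.9, 2) if customer_type == "loyal" else amount

# Each acceptance case: input data plus the expected result.
cases = [
    {"amount": 100.00, "customer_type": "loyal", "expected": 90.00},
    {"amount": 100.00, "customer_type": "new", "expected": 100.00},
]

for case in cases:
    actual = discounted_total(case["amount"], case["customer_type"])
    outcome = "pass" if actual == case["expected"] else "fail"
    print(case, "->", actual, "(%s)" % outcome)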
Acceptance Tests/Criteria (in Agile Software Development) are usually created by business customers and expressed
in a business domain language. These are high-level tests to test the completeness of a user story or stories 'played'
during any sprint/iteration. These tests are created ideally through collaboration between business customers,
business analysts, testers and developers, however the business customers (product owners) are the primary owners
of these tests. As the user stories pass their acceptance criteria, the business owners can be sure of the fact that the
developers are progressing in the right direction about how the application was envisaged to work and so it's
essential that these tests include both business logic tests as well as UI validation elements (if need be).
Acceptance test cards are ideally created during sprint planning or iteration planning meeting, before development
begins so that the developers have a clear idea of what to develop. Sometimes (due to bad planning!) acceptance
tests may span multiple stories (that are not implemented in the same sprint) and there are different ways to test them
out during actual sprints. One popular technique is to mock external interfaces or data to mimic other stories which
might not be played out during an iteration (as those stories may have been relatively lower business priority). A user
story is not considered complete until the acceptance tests have passed.
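The mocking technique mentioned above can be sketched with Python's standard unittest.mock; the exchange-rate service here is an invented stand-in for a story that has not yet been played.

from unittest.mock import Mock

# Hypothetical story under test: pricing an order in a foreign currency.
def price_in_currency(amount_usd, rate_service):
    return round(amount_usd * rate_service.get_rate("USD", "EUR"), 2)

# The real rate service belongs to a later story, so it is mocked here with
# a canned conversion rate.
rate_service = Mock()
rate_service.get_rate.return_value = 0.80

assert price_in_currency(50.00, rate_service) == 40.00
rate_service.get_rate.assert_called_once_with("USD", "EUR")
print("Acceptance check passed against the mocked external interface.")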
Process
The acceptance test suite is run against the supplied input data or using an acceptance test script to direct the testers.
Then the results obtained are compared with the expected results. If there is a correct match for every case, the test
suite is said to pass. If not, the system may either be rejected or accepted on conditions previously agreed between
the sponsor and the manufacturer.
The objective is to provide confidence that the delivered system meets the business requirements of both sponsors
and users. The acceptance phase may also act as the final quality gateway, where any quality defects not previously
detected may be uncovered.
A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional
(contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the
contract (previously agreed between sponsor and manufacturer), and deliver final payment.
User acceptance testing
User Acceptance Testing (UAT) is a process to obtain confirmation that a system meets mutually agreed-upon
requirements. A Subject Matter Expert (SME), preferably the owner or client of the object under test, provides such
confirmation after trial or review. In software development, UAT is one of the final stages of a project and often
occurs before a client or customer accepts the new system.
Users of the system perform these tests, which developers derive from the client's contract or the user requirements
specification.
Test designers draw up formal tests and devise a range of severity levels. Ideally the designer of the user acceptance
tests should not be the creator of the formal integration and system test cases for the same system. The UAT acts as a
final verification of the required business function and proper functioning of the system, emulating real-world usage
conditions on behalf of the paying client or a specific large customer. If the software works as intended and without
issues during normal use, one can reasonably extrapolate the same level of stability in production.
User tests, which are usually performed by clients or end-users, do not normally focus on identifying simple
problems such as spelling errors and cosmetic problems, nor showstopper defects, such as software crashes; testers
and developers previously identify and fix these issues during earlier unit testing, integration testing, and system
testing phases.
The results of these tests give confidence to the clients as to how the system will perform in production. There may
also be legal or contractual requirements for acceptance of the system.
Acceptance testing in Extreme Programming
Acceptance testing is a term used in agile software development methodologies, particularly Extreme Programming,
referring to the functional testing of a user story by the software development team during the implementation phase.
The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or
many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black box system
tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying
the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority.
Acceptance tests are also used as regression tests prior to a production release. A user story is not considered
complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each
iteration or the development team will report zero progress.[2]
Types of acceptance testing
Typical types of acceptance testing include the following:
User acceptance testing
This may include factory acceptance testing, i.e. the testing done by factory users before the factory is moved
to its own site, after which site acceptance testing may be performed by the users at the site.
Operational Acceptance Testing (OAT)
Also known as operational readiness testing, this refers to the checking done to a system to ensure that
processes and procedures are in place to allow the system to be used and maintained. This may include checks
done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures,
and security procedures.
Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract,
before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets
governmental, legal and safety standards.
Alpha and beta testing
Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff,
before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a
group of customers who use the system at their own locations and provide feedback, before the system is
released to other customers. The latter is often called field testing.
List of development to production (testing) environments
Development Environment
Development Testing Environment
Testing Environment
Development Integration Testing
Development System Testing
System Integration Testing
User Acceptance Testing
Production Environment
List of acceptance-testing frameworks
Cucumber, a BDD acceptance test framework
Fabasoft app.test for automated acceptance tests
FitNesse, a fork of Fit
Framework for Integrated Test (Fit)
iMacros
ItsNat Java Ajax web framework with built-in, server based, functional web testing capabilities.
Ranorex
Robot Framework
Selenium
Test Automation FX
Watir
References
[1] Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing.
Hoboken, NJ: Wiley. ISBN0-470-40415-9.
[2] Don Wells. "Acceptance Tests" (http://www.extremeprogramming.org/rules/functionaltests.html). Extremeprogramming.org. Retrieved 2011-09-20.
External links
Acceptance Test Engineering Guide (http://testingguidance.codeplex.com) by Microsoft patterns & practices (http://msdn.com/practices)
Article Using Customer Tests to Drive Development (http://www.methodsandtools.com/archive/archive.php?id=23) from Methods & Tools (http://www.methodsandtools.com/)
Article Acceptance TDD Explained (http://www.methodsandtools.com/archive/archive.php?id=72) from Methods & Tools (http://www.methodsandtools.com/)
Risk-based testing
Risk-based testing (RBT) is a type of software testing that prioritizes the tests of features and functions based on the
risk of their failure - a function of their importance and likelihood or impact of failure.[1][2][3][4] In theory, since there
is an infinite number of possible tests, any set of tests must be a subset of all possible tests. Test techniques such as
boundary value analysis and state transition testing aim to find the areas most likely to be defective.
Assessing risks
The changes between two releases or versions are key to assessing risk. Evaluating critical business modules is a
first step in prioritizing tests, but it does not include the notion of evolutionary risk. This is then expanded using two
methods: change-based testing and regression testing.
Change-based testing allows test teams to assess changes made in a release and then prioritize tests towards
modified modules.
Regression testing ensures that a change, such as a bug fix, did not introduce new faults into the software under
test. One of the main reasons for regression testing is to determine whether a change in one part of the software
affects other parts of the software.
These two methods permit test teams to prioritize tests based on risk, change and criticality of business modules.
Certain technologies can make this kind of test strategy very easy to set-up and to maintain with software changes.
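A minimal sketch of such prioritization is shown below: each test carries an assumed likelihood-of-failure and impact rating, and tests are ordered by the resulting risk score. The 1-5 scale and test names are invented for illustration.

# Candidate tests with assumed ratings on a 1-5 scale.
tests = [
    {"name": "payment_rounding", "likelihood": 4, "impact": 5},
    {"name": "profile_avatar", "likelihood": 2, "impact": 1},
    {"name": "login_lockout", "likelihood": 3, "impact": 4},
]

# Risk taken as likelihood multiplied by impact; higher risk is tested first.
for test in sorted(tests, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    print(test["name"], "risk =", test["likelihood"] * test["impact"])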
Types of Risks
Risk can be identified as the probability that an undetected software bug may have a negative impact on the user of a
system.[5]
The methods assess risks along a variety of dimensions:
Business or Operational
High use of a subsystem, function or feature
Criticality of a subsystem, function or feature, including the cost of failure
Technical
Geographic distribution of development team
Complexity of a subsystem or function
External
Sponsor or executive preference
Regulatory requirements
E-Business Failure-Mode Related [6]
Static content defects
Web page integration defects
Functional behavior-related failure
Service (Availability and Performance) related failure
Usability and Accessibility-related failure
Security vulnerability
Large Scale Integration failure
References
[1] Gerrard, Paul; Thompson, Neil (2002). Risk Based E-Business Testing. Artech House Publishers. ISBN1-58053-314-0.
[2] Bach, J. The Challenge of Good Enough Software (http://www.satisfice.com/articles/gooden2.pdf) (1995)
[3] Bach, J. and Kaner, C. Exploratory and Risk Based Testing (http://www.testingeducation.org/a/nature.pdf) (2004)
[4] Mika Lehto (October 25, 2011). "The concept of risk-based testing and its advantages and disadvantages" (https://www.ictstandard.org/article/2011-10-25/concept-risk-based-testing-and-its-advantages-and-disadvantages). Ictstandard.org. Retrieved 2012-03-01.
[5] Stephane Besson (2012-01-03). "Article info: A Strategy for Risk-Based Testing" (http://www.stickyminds.com/s.asp?F=S7566_ART_2). Stickyminds.com. Retrieved 2012-03-01.
[6] Gerrard, Paul and Thompson, Neil, Risk-Based Testing E-Business (http://www.riskbasedtesting.com) (2002)
Software testing outsourcing
Software testing outsourcing is software testing carried out by an additionally engaged company or a group of people not directly involved in the process of software development. Contemporary testing outsourcing is an independent IT field, the so-called Software Testing & Quality Assurance.
Software testing is an essential phase of software development, but it is definitely not the core activity of most companies. Outsourcing enables a company to concentrate on its core activities while external software testing experts handle the independent validation work. This offers many tangible business benefits, including independent assessment leading to enhanced delivery confidence, reduced time to market, lower infrastructure investment, predictable software quality, de-risking of deadlines and more time to focus on designing better solutions. Today, stress, performance and security testing are the most demanded types of software testing outsourcing.
At present, five main options for software testing outsourcing are available, depending on the problems detected in software development:
full outsourcing of the whole palette of software testing & quality assurance operations
realization of complex testing with high resource consumption
prompt enlargement of the company's resources with external testing experts
support of existing software products by testing new releases
independent quality audit.
The availability of effective channels of communication and information sharing is one of the core aspects that guarantee high-quality testing, while being at the same time the main obstacle for outsourcing. With such channels in place, software testing outsourcing can cut the number of software defects by a factor of 3 to 30, depending on the quality of the legacy system.
Top established global outsourcing cities
According to Tholons Global Services - Top 50,[1] in 2009, the Top Established and Emerging Global Outsourcing Cities in the Testing function were:
1. Chennai, India
2. Cebu City, Philippines
3. Shanghai, China
4. Beijing, China
5. Kraków, Poland
6. Ho Chi Minh City, Vietnam
Top Emerging Global Outsourcing Cities
1. Chennai
2. Bucharest
3. São Paulo
4. Cairo
Cities were benchmarked against six categories: skills and scalability, savings, business environment, operational environment, business risk and non-business environment.
Vietnam Outsourcing
Vietnam has become a major player in software outsourcing. Tholons Global Services' annual report highlights Ho Chi Minh City's ability to competitively meet client nations' needs in scale and capacity. Its rapidly maturing business environment has caught the eye of international investors aware of the country's stability in political and labor conditions, its increasing number of English speakers and its high service-level maturity.[2]
California-based companies such as Global CyberSoft Inc. and LogiGear Corporation are optimistic about Vietnam's ability to meet their global offshoring industry requirements. Despite the 2008-2009 financial crisis, both companies expect to fulfill their projected goals. LogiGear has addressed a shortage of highly qualified software technicians for its testing and automation services but remains confident that professionals are available to increase its staff in anticipation of the US recovery.[2]
Argentina Outsourcing
Argentina's software industry has experienced exponential growth in the last decade, positioning itself as one of the strategic economic activities in the country. Argentina has a very talented pool of technically savvy and well educated people with a great command of the English language. The country also shows a number of advantages: because Argentina is just one hour ahead of North America's east coast, communication takes place in real time. Moreover, Argentina's internet culture and industry is one of the most progressive in the world: 60% broadband access, Facebook penetration that ranks 3rd worldwide, and the highest penetration of smartphones in Latin America (24%).[3] Perhaps one of the most surprising facts is that the percentage that the internet contributes to Argentina's Gross National Product (2.2%) ranks 10th in the world.[4]
Fostered by the results of a blooming industry, a new software-related activity is starting to mature in the country:
testing outsourcing. At first, developing companies would absorb this business opportunity within their testing
departments. However, a considerable number of start-up Quality Assurance companies have emerged in order to
profit from this market gap.
References
[1] Tholons Global Services report 2009 (http://www.itida.gov.eg/Documents/Tholons_study.pdf), Top Established and Emerging Global Outsourcing Cities
[2] LogiGear, PC World Viet Nam, Jan 2011 (http://www.logigear.com/in-the-news/974-software-outsourcing-recovery-and-development.html)
[3] New Media Trend Watch: http://www.newmediatrendwatch.com/markets-by-country/11-long-haul/35-argentina
[4] Infobae.com: http://www.infobae.com/notas/645695-Internet-aportara-us24700-millones-al-PBI-de-la-Argentina-en-2016.html
Tester driven development
Tester-driven development is an anti-pattern in software development. It should not be confused with test-driven
development. It refers to any software development project where the software testing phase is too long. The testing
phase is so long that the requirements may change radically during software testing. New or changed requirements
often appear as bug reports. Bug tracking software usually lacks support for handling requirements. As a result of
this nobody really knows what the system requirements are.
Projects that are developed using this anti-pattern often suffer from being extremely late. Another common problem
is poor code quality.
Common causes for projects ending up being run this way are often:
The testing phase started too early;
Incomplete requirements;
Inexperienced testers;
Inexperienced developers;
Poor project management.
Things get worse when the testers realise that they don't know what the requirements are and therefore don't know
how to test any particular code changes. The onus then falls on the developers of individual changes to write their
own test cases and they are happy to do so because their own tests normally pass and their performance
measurements improve. Project leaders are also delighted by the rapid reduction in the number of open change
requests.
Test effort
In software development, test effort refers to the expenses for tests that are still to come. There is a relation with test costs and failure costs (direct, indirect, costs for fault correction). Some factors which influence test effort are: maturity of the software development process, quality and testability of the test object, test infrastructure, skills of staff members, quality goals and test strategy.
Methods for estimation of the test effort
Analysing all factors is difficult, because most of the factors influence each other. The following approaches can be used for the estimation: top-down estimation and bottom-up estimation. The top-down techniques are formula based and relative to the expenses for development: Function Point Analysis (FPA) and Test Point Analysis (TPA), amongst others. Bottom-up techniques are based on detailed information and often involve experts. The following techniques belong here: Work Breakdown Structure (WBS) and Wide Band Delphi (WBD).
We can also use the following techniques for estimating the test effort:
Conversion of software size into person hours of effort directly using a conversion factor. For example, we assign
2 person hours of testing effort per one Function Point of software size or 4 person hours of testing effort per one
use case point or 3 person hours of testing effort per one Software Size Unit
Conversion of software size into testing project size such as Test Points or Software Test Units using a conversion
factor and then convert testing project size into effort
Compute testing project size using Test Points or Software Test Units. The methodology for deriving the testing project size in Test Points is not well documented; however, a methodology for deriving Software Test Units is defined in a paper by Murali
We can also derive software testing project size and effort using Delphi Technique or Analogy Based Estimation
technique.
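As a hedged illustration of the conversion-factor approach listed above, the sketch below turns a software size in function points into a testing effort estimate; the factor of 2 person-hours per function point is taken from the example in the text and is not a general recommendation.

def estimate_test_effort(function_points, hours_per_function_point=2.0):
    # Direct conversion of software size into person-hours of testing effort.
    return function_points * hours_per_function_point

size_fp = 350  # assumed software size in function points
effort = estimate_test_effort(size_fp)
print("%d FP -> %.0f person-hours (about %.0f person-days at 8 hours/day)"
      % (size_fp, effort, effort / 8))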
Test efforts from literature
In the literature, test efforts relative to total costs are between 20% and 70%. These values depend, among other things, on project-specific conditions. Looking at the test effort in the individual phases of the test process, it is unevenly distributed, with about 40% each for test specification and test execution.
References
Andreas Spillner, Tilo Linz, Hans Schäfer (2006). Software Testing Foundations - A Study Guide for the Certified Tester Exam - Foundation Level - ISTQB compliant, 1st print. dpunkt.verlag GmbH, Heidelberg, Germany. ISBN 3-89864-363-8.
Erik van Veenendaal (ed. and co-author): The Testing Practitioner. 3rd edition. UTN Publishers, CN Den Bosch, The Netherlands, 2005, ISBN 90-72194-65-9.
Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal (2005). Certified Tester - Foundation Level Syllabus - Version 2005, International Software Testing Qualifications Board (ISTQB), Möhrendorf, Germany. (PDF; 0.424 MB [1]).
Andreas Spillner, Tilo Linz, Thomas Roßner, Mario Winter: Praxiswissen Softwaretest - Testmanagement: Aus- und Weiterbildung zum Certified Tester: Advanced Level nach ISTQB-Standard. 1st edition. dpunkt.verlag GmbH, Heidelberg 2006, ISBN 3-89864-275-5.
External links
Wide Band Delphi [2]
Test Effort Estimation [3]
References
[1] http://www.istqb.org/downloads/syllabi/SyllabusFoundation2005.pdf
[2] http://tech.willeke.com/Programing/Guidelines/GL-10.htm
[3] http://www.chemuturi.com/Test%20Effort%20Estimation.pdf
Testing artefacts
IEEE 829
IEEE Software Document Definitions
SQAP - Software Quality Assurance Plan (IEEE 730)
SCMP - Software Configuration Management Plan (IEEE 828)
STD - Software Test Documentation (IEEE 829)
SRS - Software Requirements Specification (IEEE 830)
SVVP - Software Validation & Verification Plan (IEEE 1012)
SDD - Software Design Description (IEEE 1016)
SPMP - Software Project Management Plan (IEEE 1058)
IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE
standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage
potentially producing its own separate type of document. The standard specifies the format of these documents but
does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for
these documents. These are a matter of judgment outside the purview of the standard. The documents are:
Test Plan: a management planning document that shows:
How the testing will be done (including SUT (system under test) configurations).
Who will do it
What will be tested
How long it will take (although this may vary, depending upon resource availability).
What the test coverage will be, i.e. what quality level is required
Test Design Specification: detailing test conditions and the expected results as well as test pass criteria.
Test Case Specification: specifying the test data for use in running the test conditions identified in the Test
Design Specification
Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps
that need to be followed
Test Item Transmittal Report: reporting on when tested software components have progressed from one
stage of testing to the next
Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or
failed
Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other
information intended to throw light on why a test has failed. This document is deliberately named as an
incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can
occur for a number of reasons other than a fault in the system. These include the expected results being wrong,
the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation
could be made. The report consists of all details of the incident such as actual and expected results, when it
failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an
assessment of the impact of an incident upon testing.
Test Summary Report: A management report providing any important information uncovered by the tests
accomplished, and including assessments of the quality of the testing effort, the quality of the software system
under test, and statistics derived from Incident Reports. The report also records what testing was done and how
long it took, in order to improve any future test planning. This final document is used to indicate whether the
software system under test is fit for purpose according to whether or not it has met acceptance criteria defined
by project stakeholders.
Relationship with other standards
Other standards that may be referred to when documenting according to IEEE 829 include:
IEEE 1008, a standard for unit testing
IEEE 1012, a standard for Software Verification and Validation
IEEE 1028, a standard for software inspections
IEEE 1044, a standard for the classification of software anomalies
IEEE 1044-1, a guide to the classification of software anomalies
IEEE 830, a guide for developing system requirements specifications
IEEE 730, a standard for software quality assurance plans
IEEE 1061, a standard for software quality metrics and methodology
IEEE 12207, a standard for software life cycle processes and life cycle data
BS 7925-1, a vocabulary of terms used in software testing
BS 7925-2, a standard for software component testing
Use of IEEE 829
The standard forms part of the training syllabus of the ISEB Foundation and Practitioner Certificates in Software
Testing promoted by the British Computer Society. ISTQB, following the formation of its own syllabus based on
ISEB's and Germany's ASQF syllabi, also adopted IEEE 829 as the reference standard for software and system test
documentation.
Revisions
The latest revision to IEEE 829, known as IEEE 829-2008,[1] was published on 18 July 2008 and has superseded the 1998 version.
External links
BS 7925-2 [2], Standard for Software Component Testing
[3] - IEEE Std 829-1998 (from IEEE)
[4] - IEEE Std 829-2008 (from IEEE)
References
[1] http://ieeexplore.ieee.org/Xplore/login.jsp?url=/ielD/4459216/4459217/04459218.pdf?arnumber=4459218
[2] http://www.ruleworks.co.uk/testguide/BS7925-2.htm
[3] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=741968&isnumber=16010
[4] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4578383&isnumber=4578382
Test strategy
Compare with Test plan.
A test strategy is an outline that describes the testing approach of the software development cycle. It is created to
inform project managers, testers, and developers about some key issues of the testing process. This includes the
testing objective, methods of testing new functions, total time and resources required for the project, and the testing
environment.
Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which types of test
are to be performed, and which entry and exit criteria apply. They are created based on development design
documents. System design documents are primarily used and occasionally, conceptual design documents may be
referred to. Design documents describe the functionality of the software to be enabled in the upcoming release. For
every stage of development design, a corresponding test strategy should be created to test the new feature sets.
Test Levels
The test strategy describes the test level to be performed. There are primarily three levels of testing: unit testing,
integration testing, and system testing. In most software development organizations, the developers are responsible
for unit testing. Individual testers or test teams are responsible for integration and system testing.
Roles and Responsibilities
The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly defined at a project level in this section. Names need not be associated with each role, but every role has to be very clearly defined.
Testing strategies should be reviewed by the developers. They should also be reviewed by test leads for all levels of
testing to make sure the coverage is complete yet not overlapping. Both the testing manager and the development
managers should approve the test strategy before testing can begin.
Environment Requirements
Environment requirements are an important part of the test strategy. They describe what operating systems are used for testing and clearly state the necessary OS patch levels and security updates required. For example, a certain test plan may require Windows XP Service Pack 3 to be installed as a prerequisite for testing.
Testing Tools
There are two methods used in executing test cases: manual and automated. Depending on the nature of the testing, it
is usually the case that a combination of manual and automated testing is the best testing method.
Risks and Mitigation
Any risks that will affect the testing process must be listed along with the mitigation. By documenting a risk, its
occurrence can be anticipated well ahead of time. Proactive action may be taken to prevent it from occurring, or to
mitigate its damage. Sample risks are dependency of completion of coding done by sub-contractors, or capability of
testing tools.
Test Schedule
A test plan should make an estimation of how long it will take to complete the testing phase. There are many
requirements to complete testing phases. First, testers have to execute all test cases at least once. Furthermore, if a
defect was found, the developers will need to fix the problem. The testers should then re-test the failed test case until
it is functioning correctly. Last but not least, the testers need to conduct regression testing towards the end of the
cycle to make sure the developers did not accidentally break parts of the software while fixing another part. This can
occur on test cases that were previously functioning properly.
The test schedule should also document the number of testers available for testing. If possible, assign test cases to
each tester.
It is often difficult to make an accurate approximation of the test schedule since the testing phase involves many
uncertainties. Planners should take into account the extra time needed to accommodate contingent issues. One way to
make this approximation is to look at the time needed by the previous releases of the software. If the software is new,
multiplying the initial testing schedule approximation by two is a good way to start.
Regression Test Approach
When a particular problem is identified, the program will be debugged and a fix will be applied. To make sure that the fix works, the program will be re-tested against that criterion. Regression tests will make sure that one fix does not create other problems in that program or in any other interface. So, a set of related test cases may have to be repeated to make sure that nothing else is affected by a particular fix. How this is to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one unit, all unit test cases for that unit are repeated, to achieve a higher level of quality.
Test Groups
From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is one functional group and anything related to report generation is another. In the same way, we have to identify the test groups based on the functionality aspect.
Test Priorities
Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated and may also be mapped to the test groups.
Test Status Collections and Reporting
When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know where the project stands, the inputs from the individual testers must come to the test leader. This will include which test cases have been executed, how long it took, how many test cases passed, how many failed, and how many are not executable. How often the project collects the status must also be clearly stated; some projects collect the status on a daily or weekly basis.
Test Records Maintenance
When the test cases are executed, we need to keep track of the execution details, such as when each test case was executed, who executed it, how long it took, and what the result was. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories. The naming convention for the documents and files must also be mentioned.
Requirements traceability matrix
Ideally, the software must completely satisfy the set of requirements. From design, each requirement must be
addressed in every single document in the software process. The documents include the HLD, LLD, source codes,
unit test cases, integration test cases and the system test cases. In a requirements traceability matrix, the rows will
have the requirements. The columns represent each document. Intersecting cells are marked when a document
addresses a particular requirement with information related to the requirement ID in the document. Ideally, if every
requirement is addressed in every single document, all the individual cells have valid section ids or names filled in.
Then we know that every requirement is addressed. If any cells are empty, it represents that a requirement has not
been correctly addressed.
Test Summary
Senior management may like to have a test summary on a weekly or monthly basis. If the project is very critical, they may need it even on a daily basis. This section must address what kind of test summary reports will be produced for senior management, along with the frequency.
The test strategy must give a clear vision of what the testing team will do for the whole project for its entire duration. This document will or may also be presented to the client, if needed. The person who prepares this document must be functionally strong in the product domain and have very good experience, as this is the document that is going to drive the entire team for the testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.
References
Ammann, Paul and Offutt, Jeff. Introduction to Software Testing. New York: Cambridge University Press, 2008.
Bach, James (1999). "Test Strategy" [1]. Retrieved October 31, 2011.
Dasso, Aristides. Verification, Validation and Testing in Software Engineering. Hershey, PA: Idea Group Pub., 2007.
References
[1] http://www.satisfice.com/presentations/strategy.pdf
Test plan
A test plan is a document detailing a systematic approach to testing a system such as a machine or software. The
plan typically contains a detailed understanding of what the eventual workflow will be.
Test plans
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design
specifications and other requirements. A test plan is usually prepared by or with significant input from Test
Engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may
include one or more of the following:
Design Verification or Compliance test - to be performed during the development or approval stages of the
product, typically on a small sample of units.
Manufacturing or Production test - to be performed during preparation or assembly of the product in an ongoing
manner for purposes of performance verification and quality control.
Acceptance or Commissioning test - to be performed at the time of delivery or installation of the product.
Service and Repair test - to be performed as required over the service life of the product.
Regression test - to be performed on an existing operational product, to verify that existing functionality didn't get
broken when other aspects of the environment are changed (e.g., upgrading the platform on which an existing
application runs).
A complex system may have a high level test plan to address the overall requirements and supporting test plans to
address the design details of subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are three
major elements that should be described in the test plan: Test Coverage, Test Methods, and Test Responsibilities.
These are also used in a formal test strategy.
Test coverage
Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test
Coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes,
where each requirement or specification of the design ideally will have one or more corresponding means of
verification. Test coverage for different product life stages may overlap, but will not necessarily be exactly the same
for all stages. For example, some requirements may be verified during Design Verification test, but not repeated
during Acceptance test. Test coverage also feeds back into the design process, since the product may have to be
designed to allow test access (see Design For Test).
Test methods
Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by
standards, regulatory agencies, or contractual agreement, or may have to be created new. Test methods also specify
test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify
hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test
procedures that are documented separately.
Test responsibilities
Test responsibilities state which organizations will perform the test methods at each stage of the product life. This allows test organizations to plan, acquire or develop test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan
should be a record or report of the verification of all design specifications and requirements as agreed upon by all
parties.
IEEE 829 test plan structure
IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that
specifies the form of a set of documents for use in defined stages of software testing, each stage potentially
producing its own separate type of document.[1]
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension criteria and resumption requirements
Test deliverables
Testing tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
There are also other IEEE documents that suggest what should be contained in a test plan:
829-1983 IEEE Standard for Software Test Documentation (superseded by 829-1998) [2]
829-1998 IEEE Standard for Software Test Documentation (superseded by 829-2008) [3]
1008-1987 IEEE Standard for Software Unit Testing [4]
1012-2004 IEEE Standard for Software Verification & Validation Plans [5]
1059-1993 IEEE Guide for Software Verification & Validation Plans (withdrawn) [6]
References
[1] "IEEE Standard 829-2008" (http:/ / ieeexplore.ieee. org/ xpl/ freeabs_all. jsp?arnumber=4578383). Ieeexplore.ieee.org. 2008-07-18.
doi:10.1109/IEEESTD.2008.4578383. . Retrieved 2011-10-31.
[2] "IEEE Standard 829-1983" (http:/ / ieeexplore.ieee. org/ xpl/ freeabs_all. jsp?arnumber=573169). Ieeexplore.ieee.org.
doi:10.1109/IEEESTD.1983.81615. . Retrieved 2011-10-31.
[3] "IEEE Standard 829-1998" (http:/ / ieeexplore.ieee. org/ stamp/ stamp. jsp?tp=& arnumber=741968& isnumber=16010). Ieeexplore.ieee.org.
. Retrieved 2011-10-31.
[4] "IEEE Standard 1008-1987" (http:/ / ieeexplore. ieee.org/ xpl/ freeabs_all. jsp?arnumber=27763). Ieeexplore.ieee.org.
doi:10.1109/IEEESTD.1986.81001. . Retrieved 2011-10-31.
[5] "IEEE Standard 1012-2004" (http:/ / ieeexplore. ieee.org/ xpl/ freeabs_all. jsp?arnumber=1488512). Ieeexplore.ieee.org.
doi:10.1109/IEEESTD.2005.96278. . Retrieved 2011-10-31.
[6] "IEEE Standard 1059-1993" (http:/ / ieeexplore. ieee.org/ xpl/ freeabs_all. jsp?arnumber=838043). Ieeexplore.ieee.org.
doi:10.1109/IEEESTD.1994.121430. . Retrieved 2011-10-31.
External links
Public domain RUP test plan template at Sourceforge (http://jdbv.sourceforge.net/RUP.html) (templates are currently inaccessible but sample documents can be seen here: DBV Samples (http://jdbv.sourceforge.net/Documentation.html))
Test plans and test cases (http://www.stellman-greene.com/testplan)
Traceability matrix
A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship, in order to determine the completeness of the relationship. It is often used to correlate high-level requirements (these often consist of marketing requirements) and detailed requirements of the software product with the matching parts of high-level design, detailed design, test plan, and test cases.
A requirements traceability matrix may be used to check whether the current project requirements are being met, and to help in the creation of a Request for Proposal, various deliverable documents, and project plan tasks.[1]
Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists; it must be determined whether one must be made. Large values imply that the relationship is too complex and should be simplified.
To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both
backward traceability and forward traceability. In other words, when an item is changed in one baselined document,
it's easy to see what needs to be changed in the other.
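A small sketch of this bookkeeping in Python is shown below. The requirement and test case identifiers are invented for illustration; the point is simply how marks are recorded and how the row and column totals expose uncovered requirements.

# Build a simple traceability matrix and flag rows/columns with zero marks.
requirements = ["REQ1 UC 1.1", "REQ1 UC 1.2", "REQ1 TECH 1.1"]   # left column
test_cases = ["1.1.1", "1.1.2", "1.2.1"]                          # top row

# Marked cells: (requirement, test case) pairs that are related.
marks = {
    ("REQ1 UC 1.1", "1.1.1"),
    ("REQ1 UC 1.1", "1.1.2"),
    ("REQ1 UC 1.2", "1.1.2"),
}

def row_total(requirement):
    return sum(1 for tc in test_cases if (requirement, tc) in marks)

def column_total(test_case):
    return sum(1 for req in requirements if (req, test_case) in marks)

for req in requirements:
    total = row_total(req)
    if total == 0:
        print(req + ": no covering test case - decide whether one must be made")
    else:
        print(req + ": covered by " + str(total) + " test case(s)")

for tc in test_cases:
    if column_total(tc) == 0:
        print("Test case " + tc + " is not traced to any requirement")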
Sample traceability matrix
Requirement identifiers (columns): REQ1 UC 1.1, REQ1 UC 1.2, REQ1 UC 1.3, REQ1 UC 2.1, REQ1 UC 2.2, REQ1 UC 2.3.1, REQ1 UC 2.3.2, REQ1 UC 2.3.3, REQ1 UC 2.4, REQ1 UC 3.1, REQ1 UC 3.2, REQ1 TECH 1.1, REQ1 TECH 1.2, REQ1 TECH 1.3
Test cases: 321 in total; per requirement: 3, 2, 3, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1
Tested implicitly: 77
Each row lists one test case, the number of requirements it verifies (Reqs Tested), and an "x" in the column of each requirement it covers: 1.1.1 (1), 1.1.2 (2), 1.1.3 (2), 1.1.4 (1), 1.1.5 (2), 1.1.6 (1), 1.1.7 (1), 1.2.1 (2), 1.2.2 (2), 1.2.3 (2), 1.3.1 (1), 1.3.2 (1), 1.3.3 (1), 1.3.4 (1), 1.3.5 (1), etc., down to 5.6.2 (1).
References
[1] Carlos, Tom (2008-10-21). Requirements Traceability Matrix - RTM. PM Hut, 21 October 2008. Retrieved on 2009-10-17 from http://www.pmhut.com/requirements-traceability-matrix-rtm.
External links
Bidirectional Requirements Traceability (http://www.compaid.com/caiinternet/ezine/westfall-bidirectional.pdf) by Linda Westfall
Requirements Traceability (http://www.projectperfect.com.au/info_requirements_traceability.php) by Neville Turbit
StickyMinds article: Traceability Matrix (http://www.stickyminds.com/r.asp?F=DART_6051) by Karthikeyan V
Why Software Requirements Traceability Remains a Challenge (http://www.crosstalkonline.org/storage/issue-archives/2009/200907/200907-Kannenberg.pdf) by Andrew Kannenberg and Dr. Hossein Saiedian
Test case
A test case in software engineering is a set of conditions or variables under which a tester will determine whether an
application or software system is working correctly or not. The mechanism for determining whether a software
program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a
requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a
software program or system is considered sufficiently scrutinized to be released. Test cases are often referred to as
test scripts, particularly when written. Written test cases are usually collected into test suites.
Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each
requirement: one positive test and one negative test. If a requirement has sub-requirements, each sub-requirement
must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done
using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the
preparation required to ensure that the test can be conducted.
A formal written test-case is characterized by a known input and by an expected output, which is worked out before
the test is executed. The known input should test a precondition and the expected output should test a postcondition.
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal
operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities
and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These
scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or
they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex,
and easy to evaluate. They are usually different from test cases in that test cases are single steps, while scenarios cover a number of steps.
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour, functionality, or features of an application. An expected result or expected outcome is usually given.
Additional information that may be included:
test case ID
test case description
test step or order of execution number
related requirement(s)
depth
test category
author
check boxes for whether the test is automatable and has been automated.
Expected Result and Actual Result.
Additional fields that may be included and completed when the tests are executed:
pass/fail
remarks
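The fields above can be pictured as a simple record. The sketch below is one hypothetical way to represent such a record in Python; real test management tools define their own schemas, so none of these field names should be read as standard.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCaseRecord:
    # Fields typically filled in when the test case is written.
    test_case_id: str
    description: str
    execution_order: int
    related_requirements: List[str] = field(default_factory=list)
    category: str = ""
    author: str = ""
    automatable: bool = False
    automated: bool = False
    expected_result: str = ""
    # Fields completed when the test is executed.
    actual_result: Optional[str] = None
    passed: Optional[bool] = None
    remarks: str = ""

example = TestCaseRecord(
    test_case_id="TC-042",
    description="Login succeeds with a valid user name and password",
    execution_order=1,
    related_requirements=["REQ-AUTH-1"],
    expected_result="User is taken to the dashboard page",
)
print(example.test_case_id, example.passed)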
Larger test cases may also contain prerequisite states or steps, and descriptions.
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common repository.
In a database system, you may also be able to see past test results and who generated the results and the system
configuration used to generate those results. These past results would usually be stored in a separate table.
Test suites often also contain
Test summary
Configuration
Besides a description of the functionality to be tested, and the preparation required to ensure that the test can be
conducted, the most time consuming part in the test case is creating the tests and modifying them when the system
changes.
Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product. The first test is taken as the baseline for subsequent test and product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or
clients of the system to ensure the developed system meets the requirements specified or the contract. User
acceptance tests are differentiated by the inclusion of happy path or positive test cases to the almost complete
exclusion of negative test cases.
External links
Writing Software Security Test Cases - Putting security test cases into your test plan (http://www.qasec.com/cycle/securitytestcases.shtml) by Robert Auger
How to write test cases (http://templateforfree.com/how-to-write-test-cases/) by Oan Bota
Software Test Case Engineering (http://www.stickyminds.com/s.asp?F=S15689_ART_2) by Ajay Bhagwat
Test data
Test Data is data which has been specifically identified for use in tests, typically of a computer program.
Some data may be used in a confirmatory way, typically to verify that a given set of input to a given function
produces some expected result. Other data may be used in order to challenge the ability of the program to respond to
unusual, extreme, exceptional, or unexpected input.
Test data may be produced in a focused or systematic way (as is typically the case in domain testing), or by using
other, less-focused approaches (as is typically the case in high-volume randomized automated tests). Test data may
be produced by the tester, or by a program or function that aids the tester. Test data may be recorded for re-use, or
used once and then forgotten.
Domain testing is a family of test techniques that focus on the test data. This might include identifying common or
critical inputs, representatives of a particular equivalence class model, values that might appear at the boundaries
between one equivalence class and another, outrageous values that should be rejected by the program, combinations
of inputs, or inputs that might drive the product towards a particular set of outputs.
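As a small illustration of the focused approach, the sketch below generates test data for a hypothetical input field that accepts integers from 1 to 100: representatives of the valid equivalence class, the boundary values, and outrageous values that the program should reject. The range and the chosen values are assumptions for the example.

# Focused (domain-testing style) test data for an integer field accepting 1..100.
LOWER, UPPER = 1, 100

def boundary_test_data(lower, upper):
    # Valid representatives: both boundaries, their neighbours, and a mid value.
    valid = [lower, lower + 1, (lower + upper) // 2, upper - 1, upper]
    # Values just outside the domain plus outrageous values that should be rejected.
    invalid = sorted({lower - 1, upper + 1, -1, 10**9})
    return valid, invalid

valid_inputs, invalid_inputs = boundary_test_data(LOWER, UPPER)
print("Should be accepted:", valid_inputs)
print("Should be rejected:", invalid_inputs)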
Test Data Generation
Software testing is an important part of the software development life cycle today. It is labor intensive and accounts for nearly half of the cost of system development. Hence, it is desirable that parts of testing be automated. An important problem in testing is that of generating quality test data, and this is seen as an important step in reducing the cost of software testing. Hence, test data generation is an important part of software testing.
References
"The evaluation of program-based software test data adequacy criteria"
[1]
, E. J. Weyuker, Communications of the
ACM (abstract and references)
References
[1] http:/ / portal. acm. org/ citation. cfm?id=62963
Test suite
In software development, a test suite, less commonly known as a validation suite, is a collection of test cases that are
intended to be used to test a software program to show that it has some specified set of behaviours. A test suite often
contains detailed instructions or goals for each collection of test cases and information on the system configuration to
be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test scenario.
Types
Occasionally, test suites are used to group similar test cases together. A system might have a smoke test suite that
consists only of smoke tests or a test suite for some specific functionality in the system. It may also contain all tests
and signify if a test should be used as a smoke test or for some specific functionality.
In model-based testing, one distinguishes between abstract test suites, which are collections of abstract test cases derived from a high-level model of the system under test, and executable test suites, which are derived from abstract test suites by providing the concrete, lower-level details needed to execute the suite by a program.[1] An abstract test
suite cannot be directly used on the actual system under test (SUT) because abstract test cases remain at a high
abstraction level and lack concrete details about the SUT and its environment. An executable test suite works on a
sufficiently detailed level to correctly communicate with the SUT and a test harness is usually present to interface
the executable test suite with the SUT.
A test suite for a primality testing subroutine might consist of a list of numbers and their primality (prime or
composite), along with a testing subroutine. The testing subroutine would supply each number in the list to the
primality tester, and verify that the result of each test is correct.
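A direct rendering of that example in Python might look like this. The is_prime() implementation is only a stand-in for whatever primality tester is under test; the suite itself is the list of numbers paired with their known primality plus the subroutine that checks each verdict.

def is_prime(n):
    # Stand-in primality tester (trial division) acting as the unit under test.
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 1
    return True

# The test suite data: (number, expected primality) pairs.
PRIMALITY_SUITE = [(2, True), (3, True), (4, False), (17, True),
                   (18, False), (97, True), (100, False)]

def run_primality_suite(candidate, suite):
    # Supply each number to the primality tester and collect wrong answers.
    return [number for number, expected in suite if candidate(number) != expected]

if __name__ == "__main__":
    failures = run_primality_suite(is_prime, PRIMALITY_SUITE)
    print("All tests passed" if not failures else "Failed for: %s" % failures)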
References
[1] Hakim Kahlouche, César Viho, and Massimo Zendri, "An Industrial Experiment in Automatic Generation of Executable Test Suites for a Cache Coherency Protocol" (http://cadp.inria.fr/vasy/publications/Kahlouche-Viho-Zendri-98.html), Proc. International Workshop on Testing of Communicating Systems (IWTCS'98), Tomsk, Russia, September 1998.
Test script
A test script in software testing is a set of instructions that will be performed on the system under test to test that the
system functions as expected.
There are various means for executing test scripts.
Manual testing. These are more commonly called test cases.
Automated testing
Short program written in a programming language used to test part of the functionality of a software system.
Test scripts written as a short program can either be written using a special automated functional GUI test tool
(such as HP QuickTest Professional, Borland SilkTest, and Rational Robot) or in a well-known programming
language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Powershell, Python, or Ruby).
Extensively parameterized short programs a.k.a. Data-driven testing
Reusable steps created in a table a.k.a. keyword-driven or table-driven testing.
These last two types are also done in manual testing.
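As an illustration of the last two styles, the sketch below is a small data-driven test script in Python: the checking logic is written once and driven by a table of inputs and expected outputs. The discounted_price() function and its data table are invented for the example.

def discounted_price(price, percent_off):
    # Unit under test, made up for the example.
    return round(price * (1 - percent_off / 100.0), 2)

# Each row of the data table is one test: (price, percent off, expected result).
TEST_DATA = [
    (100.00, 0, 100.00),
    (80.00, 25, 60.00),
    (100.00, 50, 50.00),
    (0.00, 10, 0.00),
]

def run_script():
    failures = 0
    for price, percent_off, expected in TEST_DATA:
        actual = discounted_price(price, percent_off)
        status = "PASS" if actual == expected else "FAIL"
        failures += status == "FAIL"
        print("%s: discounted_price(%s, %s) = %s, expected %s"
              % (status, price, percent_off, actual, expected))
    return failures

if __name__ == "__main__":
    raise SystemExit(run_script())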
The major advantage of automated testing is that tests may be executed continuously without the need for human intervention. Another advantage over manual testing is that automated tests are faster and easily repeatable. Thus, it is worth considering automating tests if they are to be executed several times, for example as part of regression testing.
Disadvantages of automated testing are that automated tests can, like any piece of software, be poorly written or simply break during playback. They also can only examine what they have been programmed to examine. Since most systems are designed with human interaction in mind, it is good practice that a human tests the system at some point. A trained manual tester can notice that the system under test is misbehaving without being prompted or directed, whereas automated tests can only examine what they have been programmed to examine. Therefore, when used in regression testing, manual testers can find new bugs while ensuring that old bugs do not reappear, while an automated test can only ensure the latter. That is why mixing automated and manual testing can give very good results: automating what needs to be tested often and can easily be checked by a machine, and using manual testing for test design (adding new cases to the automated test suite) and for exploratory testing.
One shouldn't fall into the trap of spending more time automating a test than it would take to simply execute it
manually, unless it is planned to be executed several times.
Test harness
In software testing, a test harness[1] or automated test framework is a collection of software and test data
configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It
has two main parts: the test execution engine and the test script repository.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and
compare the results to the desired value. The test harness is a hook to the developed code, which can be tested using
an automation framework.
A test harness should allow specific tests to run (this helps in optimising), orchestrate a runtime environment, and
provide a capability to analyse results.
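A bare-bones illustration of those two parts in Python is sketched below: a tiny script repository (a list of cases) and a test execution engine that calls the unit under test with supplied parameters, prints the outcome, and compares it with the desired value. The multiply() function is a stand-in unit under test.

def multiply(a, b):
    # Stand-in program unit exercised by the harness.
    return a * b

# Test script repository: (function, arguments, desired value) triples.
TEST_SCRIPTS = [
    (multiply, (3, 4), 12),
    (multiply, (0, 7), 0),
    (multiply, (-2, 5), -10),
]

def execution_engine(scripts):
    # Run every script, report each result, and return overall success.
    all_passed = True
    for function, args, desired in scripts:
        actual = function(*args)
        passed = actual == desired
        all_passed = all_passed and passed
        print("%s%s -> %s (expected %s): %s"
              % (function.__name__, args, actual, desired,
                 "pass" if passed else "fail"))
    return all_passed

if __name__ == "__main__":
    execution_engine(TEST_SCRIPTS)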
The typical objectives of a test harness are to:
Automate the testing process.
Execute test suites of test cases.
Generate associated test reports.
A test harness may provide some of the following benefits:
Increased productivity due to automation of the testing process.
Increased probability that regression testing will occur.
Increased quality of software components and application.
Ensure that subsequent test runs are exact duplicates of previous ones.
Testing can occur at times that the office is not staffed (i.e. at night)
A test script may include conditions and/or uses that are otherwise difficult to simulate (load, for example)
An alternative definition of a test harness is software constructed to facilitate integration testing. Whereas test stubs are typically components of the application under development and are replaced by working components as the application is developed (top-down design), test harnesses are external to the application being tested and simulate services or functionality not available in a test environment. For example, if you're building an application that needs to interface with an application on a mainframe computer but none is available during development, a test harness may be built to use as a substitute. A test harness may be part of a project deliverable. It is kept outside of the application source code and may be reused on multiple projects. Because a test harness simulates application functionality, it has no knowledge of test suites, test cases or test reports. Those things are provided by a testing framework and associated automated testing tools.
Notes
[1] It is unclear who coined this term and when. It seems to have first appeared in the early 1990s.
Static testing
Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic
testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is
primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of
testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs
are also used.
From the black box testing point of view, static testing involves reviewing requirements and specifications. This is
done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of
Verification and Validation.
Even static testing can be automated. A static testing test suite consists of programs to be analyzed by an interpreter or a compiler that asserts each program's syntactic validity.
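For example, a minimal automated static check in Python could use the standard library parser to assert syntactic validity without ever executing the code under review; the source string here is invented for the example.

import ast

SOURCE_UNDER_REVIEW = """
def total_price(prices):
    return sum(prices)
"""

def is_syntactically_valid(source):
    # Parse only; the code is never executed, which is the essence of static testing.
    try:
        ast.parse(source)
        return True
    except SyntaxError as error:
        print("Syntax error at line %s: %s" % (error.lineno, error.msg))
        return False

if __name__ == "__main__":
    print("Valid" if is_syntactically_valid(SOURCE_UNDER_REVIEW) else "Invalid")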
Bugs discovered at this stage of development are less expensive to fix than later in the development cycle.
The people involved in static testing are application developers and testers.
Sources
Kaner, Cem; Nguyen, Hung Q; Falk, Jack (1988). Testing Computer Software (Second ed.). Boston: Thomson
Computer Press. ISBN 0-47135-846-0.
Static Testing C++ Code: A utility to check library usability [1]
References
[1] http://www.ddj.com/cpp/205801074
Software review
Software peer reviews are conducted by the author of the work product, or by one or more colleagues of the author, to evaluate the technical content and/or quality of the work.[1]
Software management reviews are conducted by management representatives to evaluate the status of work done
and to make decisions regarding downstream activities.
Software audit reviews are conducted by personnel external to the software project, to evaluate compliance with
specifications, standards, contractual agreements, or other criteria.
Different types of reviews
Code review is systematic examination (often as peer review) of computer source code.
Pair programming is a type of code review where two persons develop code together at the same workstation.
Inspection is a very formal type of peer review where the reviewers are following a well-defined process to find
defects.
Walkthrough is a form of peer review where the author leads members of the development team and other
interested parties through a software product and the participants ask questions and make comments about
defects.
Technical review is a form of peer review in which a team of qualified personnel examines the suitability of the
software product for its intended use and identifies discrepancies from specifications and standards.
Formal versus informal reviews
"Formality" identifies the degree to which an activity is governed by agreed (written) rules. Software review
processes exist across a spectrum of formality, with relatively unstructured activities such as "buddy checking"
towards one end of the spectrum, and more formal approaches such as walkthroughs, technical reviews, and software
inspections, at the other. IEEE Std. 1028-1997 defines formal structures, roles, and processes for each of the last
three ("formal peer reviews"), together with software audits.
[2]
Research studies tend to support the conclusion that formal reviews greatly outperform informal reviews in
cost-effectiveness. Informal reviews may often be unnecessarily expensive (because of time-wasting through lack of
focus), and frequently provide a sense of security which is quite unjustified by the relatively small number of real
defects found and repaired.
IEEE 1028 generic process for formal reviews
IEEE Std 1028 defines a common set of activities for "formal" reviews (with some variations, especially for software
audit). The sequence of activities is largely based on the software inspection process originally developed at IBM by
Michael Fagan.[3]
Differing types of review may apply this structure with varying degrees of rigour, but all activities
are mandatory for inspection:
0. [Entry evaluation]: The Review Leader uses a standard checklist of entry criteria to ensure that optimum
conditions exist for a successful review.
1. Management preparation: Responsible management ensure that the review will be appropriately resourced
with staff, time, materials, and tools, and will be conducted according to policies, standards, or other relevant
criteria.
2. Planning the review: The Review Leader identifies or confirms the objectives of the review, organises a team
of Reviewers, and ensures that the team is equipped with all necessary resources for conducting the review.
Software review
217
3. Overview of review procedures: The Review Leader, or some other qualified person, ensures (at a meeting if
necessary) that all Reviewers understand the review goals, the review procedures, the materials available to them,
and the procedures for conducting the review.
4. [Individual] Preparation: The Reviewers individually prepare for group examination of the work under
review, by examining it carefully for anomalies (potential defects), the nature of which will vary with the type of
review and its goals.
5. [Group] Examination: The Reviewers meet at a planned time to pool the results of their preparation activity
and arrive at a consensus regarding the status of the document (or activity) being reviewed.
6. Rework/follow-up: The Author of the work product (or other assigned person) undertakes whatever actions
are necessary to repair defects or otherwise satisfy the requirements agreed to at the Examination meeting. The
Review Leader verifies that all action items are closed.
7. [Exit evaluation]: The Review Leader verifies that all activities necessary for successful review have been
accomplished, and that all outputs appropriate to the type of review have been finalised.
Value of reviews
The most obvious value of software reviews (especially formal reviews) is that they can identify issues earlier and
more cheaply than they would be identified by testing or by field use (the defect detection process). The cost to find
and fix a defect by a well-conducted review may be one or two orders of magnitude less than when the same defect
is found by test execution or in the field.
A second, but ultimately more important, value of software reviews is that they can be used to train technical authors
in the development of extremely low-defect documents, and also to identify and remove process inadequacies that
encourage defects (the defect prevention process).
This is particularly the case for peer reviews if they are conducted early and often, on samples of work, rather than
waiting until the work has been completed. Early and frequent reviews of small work samples can identify
systematic errors in the Author's work processes, which can be corrected before further faulty work is done. This
improvement in Author skills can dramatically reduce the time it takes to develop a high-quality technical document,
and dramatically decrease the error-rate in using the document in downstream processes.
As a general principle, the earlier a technical document is produced, the greater will be the impact of its defects on
any downstream activities and their work products. Accordingly, greatest value will accrue from early reviews of
documents such as marketing plans, contracts, project plans and schedules, and requirements specifications.
Researchers and practitioners have shown the effectiveness of the reviewing process in finding bugs and security issues.[4]
References
[1] Wiegers, Karl E. (2001). Peer Reviews in Software: A Practical Guide (http://books.google.com/books?id=d7BQAAAAMAAJ&pgis=1). Addison-Wesley. p. 14. ISBN 0-201-73485-0.
[2] IEEE Std. 1028-1997, "IEEE Standard for Software Reviews", clause 3.5
[3] Fagan, Michael E: "Design and Code Inspections to Reduce Errors in Program Development", IBM Systems Journal, Vol. 15, No. 3, 1976; "Inspecting Software Designs and Code", Datamation, October 1977; "Advances In Software Inspections", IEEE Transactions in Software Engineering, Vol. 12, No. 7, July 1986
[4] Charles P. Pfleeger, Shari Lawrence Pfleeger. Security in Computing. Fourth edition. ISBN 0-13-239077-9
Software peer review
In software development, peer review is a type of software review in which a work product (document, code, or
other) is examined by its author and one or more colleagues, in order to evaluate its technical content and quality.
Purpose
The purpose of a peer review is to provide "a disciplined engineering practice for detecting and correcting defects in
software artifacts, and preventing their leakage into field operations" according to the Capability Maturity Model.
When performed as part of each Software development process activity, peer reviews identify problems that can be
fixed early in the lifecycle.[1] That is to say, a peer review that identifies a requirements problem during the
Requirements analysis activity is cheaper and easier to fix than during the Software architecture or Software testing
activities.
The National Software Quality Experiment,[2] evaluating the effectiveness of peer reviews, finds "a favorable return on investment for software inspections; savings exceeds costs by 4 to 1". To state it another way, it is four times more costly, on average, to identify and fix a software problem later.
Distinction from other types of software review
Peer reviews are distinct from management reviews, which are conducted by management representatives rather than
by colleagues, and for management and control purposes rather than for technical evaluation. They are also distinct
from software audit reviews, which are conducted by personnel external to the project, to evaluate compliance with
specifications, standards, contractual agreements, or other criteria.
Review processes
Peer review processes exist across a spectrum of formality, with relatively unstructured activities such as "buddy
checking" towards one end of the spectrum, and more formal approaches such as walkthroughs, technical peer
reviews, and software inspections, at the other. The IEEE defines formal structures, roles, and processes for each of
the last three.[3]
Management representatives are typically not involved in the conduct of a peer review except when included because
of specific technical expertise or when the work product under review is a management-level document. This is
especially true of line managers of other participants in the review.
Processes for formal peer reviews, such as software inspections, define specific roles for each participant, quantify stages with entry/exit criteria, and capture software metrics on the peer review process.
"Open source" reviews
In the free / open source community, something like peer review has taken place in the engineering and evaluation of
computer software. In this context, the rationale for peer review has its equivalent in Linus's law, often phrased:
"Given enough eyeballs, all bugs are shallow", meaning "If there are enough reviewers, all problems are easy to
solve." Eric S. Raymond has written influentially about peer review in software development.[4]
References
[1] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 261. ISBN 0-470-04212-5.
[2] National Software Quality Experiment Resources and Results (http://members.aol.com/ONeillDon/nsqe-results.html)
[3] IEEE Std. 1028-2008, "IEEE Standard for Software Reviews and Audits" (http://ieeexplore.ieee.org/servlet/opac?punumber=4601582)
[4] Eric S. Raymond. The Cathedral and the Bazaar.
Software audit review
A software audit review, or software audit, is a type of software review in which one or more auditors who are not
members of the software development organization conduct "An independent examination of a software product,
software process, or set of software processes to assess compliance with specifications, standards, contractual
agreements, or other criteria".[1]
"Software product" mostly, but not exclusively, refers to some kind of technical document. IEEE Std. 1028 offers a
list of 32 "examples of software products subject to audit", including documentary products such as various sorts of
plan, contracts, specifications, designs, procedures, standards, and reports, but also non-documentary products such
as data, test data, and deliverable media.
Software audits are distinct from software peer reviews and software management reviews in that they are conducted
by personnel external to, and independent of, the software development organization, and are concerned with
compliance of products or processes, rather than with their technical content, technical quality, or managerial
implications.
The term "software audit review" is adopted here to designate the form of software audit described in IEEE Std.
1028.
Objectives and participants
"The purpose of a software audit is to provide an independent evaluation of conformance of software products and
processes to applicable regulations, standards, guidelines, plans, and procedures".[2] The following roles are
recommended:
The Initiator (who might be a manager in the audited organization, a customer or user representative of the
audited organization, or a third party), decides upon the need for an audit, establishes its purpose and scope,
specifies the evaluation criteria, identifies the audit personnel, decides what follow-up actions will be required,
and distributes the audit report.
The Lead Auditor (who must be someone "free from bias and influence that could reduce his ability to make
independent, objective evaluations") is responsible for administrative tasks such as preparing the audit plan and
assembling and managing the audit team, and for ensuring that the audit meets its objectives.
The Recorder documents anomalies, action items, decisions, and recommendations made by the audit team.
The Auditors (who must be, like the Lead Auditor, free from bias) examine products defined in the audit plan,
document their observations, and recommend corrective actions. (There may be only a single auditor.)
The Audited Organization provides a liaison to the auditors, and provides all information requested by the
auditors. When the audit is completed, the audited organization should implement corrective actions and
recommendations.
Tools
Parts of a software audit could be performed using static analysis tools that analyze application code and score its conformance with standards, guidelines, and best practices. Among the tools for static code analysis, some cover a very large spectrum from code to architecture review, and could be used for benchmarking.
References
[1] IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clause 3.2
[2] IEEE Std. 1028-1997, clause 8.1
Software technical review
A software technical review is a form of peer review in which "a team of qualified personnel ... examines the
suitability of the software product for its intended use and identifies discrepancies from specifications and standards.
Technical reviews may also provide recommendations of alternatives and examination of various alternatives" (IEEE
Std. 1028-1997, IEEE Standard for Software Reviews, clause 3.7).[1]
"Software product" normally refers to some kind of technical document. This might be a software design document
or program source code, but use cases, business process definitions, test case specifications, and a variety of other
technical documentation, may also be subject to technical review.
Technical review differs from software walkthroughs in its specific focus on the technical quality of the product
reviewed. It differs from software inspection in its ability to suggest direct alterations to the product reviewed, and
its lack of a direct focus on training and process improvement.
The term formal technical review is sometimes used to mean a software inspection.
Objectives and participants
The purpose of a technical review is to arrive at a technically superior version of the work product reviewed, whether
by correction of defects or by recommendation or introduction of alternative approaches. While the latter aspect may
offer facilities that software inspection lacks, there may be a penalty in time lost to technical discussions or disputes
which may be beyond the capacity of some participants.
IEEE 1028 recommends the inclusion of participants to fill the following roles:
The Decision Maker (the person for whom the technical review is conducted) determines if the review objectives
have been met.
The Review Leader is responsible for performing administrative tasks relative to the review, ensuring orderly
conduct, and ensuring that the review meets its objectives.
The Recorder documents anomalies, action items, decisions, and recommendations made by the review team.
Technical staff are active participants in the review and evaluation of the software product.
Management staff may participate for the purpose of identifying issues that require management resolution.
Customer or user representatives may fill roles determined by the Review Leader prior to the review.
A single participant may fill more than one role, as appropriate.
Process
A formal technical review will follow a series of activities similar to that specified in clause 5 of IEEE 1028,
essentially summarised in the article on software review.
References
[1] "The Software Technical Review Process" (http:/ / www. sei. cmu. edu/ reports/ 88cm003. pdf). .
Management review
A Software management review is a management study into a project's status and allocation of resources. It is
different from both a software engineering peer review, which evaluates the technical quality of software products,
and a software audit, which is an externally conducted audit into a project's compliance to specifications, contractual
agreements, and other criteria.
Process
A management review can be an informal process, but generally requires a formal structure and rules of conduct,
such as those advocated in the IEEE standard, which are:[1]
1. Evaluate entry
2. Management preparation
3. Plan the structure of the review
4. Overview of review procedures
5. [Individual] Preparation
6. [Group] Examination
7. Rework/follow-up
8. [Exit evaluation]
Definition
In software engineering, a management review is defined by the IEEE as:
A systematic evaluation of a software acquisition, supply, development, operation, or maintenance
process performed by or on behalf of management ... [and conducted] to monitor progress, determine the
status of plans and schedules, confirm requirements and their system allocation, or evaluate the
effectiveness of management approaches used to achieve fitness for purpose. Management reviews
support decisions about corrective actions, changes in the allocation of resources, or changes to the
scope of the project.
Management reviews are carried out by, or on behalf of, the management personnel having direct
responsibility for the system. Management reviews identify consistency with and deviations from plans,
or adequacies and inadequacies of management procedures. This examination may require more than
one meeting. The examination need not address all aspects of the product."[2]
References
[1] IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clauses "Entry criteria"; 4.5, "Procedures"; 4.6, "Exit criteria"
[2] IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clauses 3.4, 4.1
Software inspection
Inspection in software engineering refers to peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection might also be referred to as a Fagan inspection after Michael Fagan, the creator of a very popular software inspection process.
Introduction
An inspection is one of the most common sorts of review practices found in software projects. The goal of the
inspection is for all of the inspectors to reach consensus on a work product and approve it for use in the project.
Commonly inspected work products include software requirements specifications and test plans. In an inspection, a
work product is selected for review and a team is gathered for an inspection meeting to review the work product. A
moderator is chosen to moderate the meeting. Each inspector prepares for the meeting by reading the work product
and noting each defect. The goal of the inspection is to identify defects. In an inspection, a defect is any part of the
work product that will keep an inspector from approving it. For example, if the team is inspecting a software
requirements specification, each defect will be text in the document which an inspector disagrees with.
The process
The inspection process was developed by Michael Fagan in the mid-1970s and has since been extended and modified.
The process should have entry criteria that determine if the inspection process is ready to begin. This prevents
unfinished work products from entering the inspection process. The entry criteria might be a checklist including
items such as "The document has been spell-checked".
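As a minimal sketch of the idea, the entry criteria could be represented as a simple checklist that must be fully satisfied before the inspection is allowed to start; the criteria names and the check itself are illustrative, not prescribed by any standard or tool:

# Illustrative entry-criteria checklist gating admission to an inspection.
ENTRY_CRITERIA = {
    "document_spell_checked": True,
    "author_self_review_done": True,
    "line_numbers_present": False,   # not yet satisfied in this example
}

def ready_for_inspection(criteria: dict) -> bool:
    # The inspection may begin only when every entry criterion is met.
    return all(criteria.values())

unmet = [name for name, met in ENTRY_CRITERIA.items() if not met]
print(ready_for_inspection(ENTRY_CRITERIA), unmet)   # False ['line_numbers_present']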
The stages in the inspection process are: Planning, Overview meeting, Preparation, Inspection meeting, Rework and Follow-up. The Preparation, Inspection meeting and Rework stages might be iterated.
Planning: The inspection is planned by the moderator.
Overview meeting: The author describes the background of the work product.
Preparation: Each inspector examines the work product to identify possible defects.
Inspection meeting: During this meeting the reader reads through the work product, part by part, and the inspectors point out defects in each part.
Rework: The author makes changes to the work product according to the action plans from the inspection
meeting.
Follow-up: The changes by the author are checked to make sure everything is correct.
The process is ended by the moderator when it satisfies some predefined exit criteria.
Inspection roles
During an inspection the following roles are used.
Author: The person who created the work product being inspected.
Moderator: This is the leader of the inspection. The moderator plans the inspection and coordinates it.
Reader: The person reading through the documents, one item at a time. The other inspectors then point out
defects.
Recorder/Scribe: The person that documents the defects that are found during the inspection.
Inspector: The person that examines the work product to identify possible defects.
Related inspection types
Code review
A code review can be done as a special kind of inspection in which the team examines a sample of code and fixes
any defects in it. In a code review, a defect is a block of code which does not properly implement its requirements,
which does not function as the programmer intended, or which is not incorrect but could be improved (for example,
it could be made more readable or its performance could be improved). In addition to helping teams find and fix bugs, code reviews are useful both for cross-training programmers on the code being reviewed and for helping junior developers learn new programming techniques.
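For illustration only, here is a hypothetical defect of the "not incorrect but could be improved" kind that a reviewer might raise, together with a suggested rewrite; the function and its rewrite are invented examples, not drawn from any particular project:

# Before review: correct, but a reviewer flags it as hard to read and
# needlessly quadratic in the length of the input.
def has_duplicates_original(items):
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

# After review: same result for hashable items, clearer and linear time.
def has_duplicates_reviewed(items):
    return len(set(items)) != len(items)

print(has_duplicates_original([1, 2, 3, 2]), has_duplicates_reviewed([1, 2, 3, 2]))   # True True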
Peer Reviews
Peer reviews are considered an industry best-practice for detecting software defects early and learning about
software artifacts. Peer Reviews are composed of software walkthroughs and software inspections and are integral to
software product engineering activities. A collection of coordinated knowledge, skills, and behaviors facilitates the
best possible practice of Peer Reviews. The elements of Peer Reviews include the structured review process, product checklists that set a standard of excellence, defined roles for participants, and the associated forms and reports.
Software inspections are the most rigorous form of Peer Reviews and fully utilize these elements in detecting
defects. Software walkthroughs draw selectively upon the elements in assisting the producer to obtain the deepest
understanding of an artifact and reaching a consensus among participants. Measured results reveal that Peer Reviews
produce an attractive return on investment obtained through accelerated learning and early defect detection. For best
results, Peer Reviews are rolled out within an organization through a defined program of preparing a policy and
procedure, training practitioners and managers, defining measurements and populating a database structure, and
sustaining the roll out infrastructure.
External links
Review and inspection practices [1]
Article Software Inspections [2] by Ron Radice
Comparison of different inspection and review techniques [3]
References
[1] http:/ / www. stellman-greene. com/ reviews
[2] http:/ / www. methodsandtools. com/ archive/ archive.php?id=29
[3] http:/ / www. the-software-experts. de/ e_dta-sw-test-inspection. htm
Fagan inspection
Fagan inspection refers to a structured process for finding defects in development documents such as programming code, specifications, and designs during various phases of the software development process. It is named after Michael Fagan, who is credited with being the inventor of formal software inspections.
Definition
Fagan Inspection is a group review method used to evaluate output of a given process.
Fagan Inspection defines a process as a certain activity with pre-specified entry and exit criteria. In every activity or operation for which entry and exit criteria are specified, Fagan Inspections can be used to validate whether the output of the process complies with the exit criteria specified for the process.
Examples of activities for which Fagan Inspection can be used are:
Requirement specification
Software/Information System architecture (for example DYA)
Programming (for example for iterations in XP or DSDM)
Software testing (for example when creating test scripts)
Usage
The software development process is a typical application of Fagan Inspection: it is a series of operations that delivers a certain end product and consists of operations such as requirements definition, design and coding, up to testing and maintenance. Because the cost of remedying a defect can be 10-100 times lower in the early operations than in the maintenance phase, it is essential to find defects as close to their point of insertion as possible. This is done by inspecting the output of each operation and comparing it to the output requirements, or exit criteria, of that operation.
Criteria
Entry criteria are the criteria or requirements which must be met to enter a specific process.[1] For example, for Fagan inspections the high-level and low-level documents must comply with specific entry criteria before they can be used in a formal inspection process.
Exit criteria are the criteria or requirements which must be met to complete a specific process. For example, for Fagan inspections the low-level document must comply with specific exit criteria (as specified in the high-level document) before the development process can be taken to the next phase.
The exit-criteria are specified in a high-level document, which is then used as the standard to compare the operation
result (low-level document) to during the inspections. Deviations of the low-level document from the requirements
specified in the high-level document are called defects and can be categorized in Major Defects and Minor Defects.
Defects
According to M.E. Fagan, "a defect is an instance in which a requirement is not satisfied."[1]
In the process of software inspection, the defects which are found are categorized in two categories: major and minor defects (often many more categories are used). Defects that consist of incorrect or missing functionality or specifications are classified as major defects: the software will not function correctly while these defects remain unresolved.
In contrast to major defects, minor defects do not threaten the correct functioning of the software; they are mostly small errors such as spelling mistakes in documents or cosmetic issues such as the incorrect positioning of controls in a program interface.
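A small sketch of how an inspection's defect log might record this categorization; the field names and entries are purely illustrative:

# Illustrative defect records from an inspection, categorized as major or minor.
defect_log = [
    {"id": 1, "severity": "major", "description": "login requirement missing from the specification"},
    {"id": 2, "severity": "minor", "description": "spelling mistake in section 3.2"},
]

major_defects = [d for d in defect_log if d["severity"] == "major"]
print(f"{len(major_defects)} major defect(s) must be resolved before the software can function correctly")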
Typical operations
In a typical Fagan inspection the inspection process consists of the following operations:[1]
Planning
Preparation of materials
Arranging of participants
Arranging of meeting place
Overview
Group education of participants on the materials under review
Assignment of roles
Preparation
The participants review the item to be inspected and supporting material to prepare for the meeting, noting any questions or possible defects
The participants prepare their roles
Inspection meeting
Actual finding of defects
Rework
Rework is the step in software inspection in which the defects found during the inspection meeting are
resolved by the author, designer or programmer. On the basis of the list of defects the low-level document is
corrected until the requirements in the high-level document are met.
Follow-up
In the follow-up phase of software inspections all defects found in the inspection meeting should be corrected
(as they have been fixed in the rework phase). The moderator is responsible for verifying that this is indeed the case: that all defects are fixed and that no new defects have been introduced while fixing the initial ones. It is crucial that all defects are corrected, as the cost of fixing them in a later phase of the project will be 10 to 100 times higher than the current cost.
Fagan inspection basic model
Follow-up
In the follow-up phase of a Fagan Inspection, defects fixed in the rework phase should be verified. The moderator is
usually responsible for verifying rework. Sometimes fixed work can be accepted without being verified, such as
when the defect was trivial. In non-trivial cases, a full re-inspection is performed by the inspection team (not only the
moderator).
If verification fails, go back to the rework process.
Roles
The participants of the inspection process are normally just members of the team that is performing the project. The
participants fulfill different roles within the inspection process:[2][3]
Author/Designer/Coder: the person who wrote the low-level document
Reader: paraphrases the document
Reviewer: reviews the document from a testing standpoint
Moderator: responsible for the inspection session, functions as a coach
Benefits and results
By using inspections the number of errors in the final product can decrease significantly, creating a higher-quality product. Over time the team can even learn to avoid errors, as the inspection sessions give them insight into the most frequently made errors in both design and coding, allowing errors to be avoided at the root of their occurrence. By continuously improving the inspection process these insights can be exploited even further [Fagan, 1986].
Together with the qualitative benefits mentioned above, major "cost improvements" can be reached, as the avoidance and earlier detection of errors reduce the amount of resources needed for debugging in later phases of the project. In practice, very positive results have been reported by large corporations such as IBM, indicating that 80-90% of defects can be found and savings in resources of up to 25% can be reached [Fagan, 1986].
Improvements
Although the Fagan Inspection method has proved to be very effective, improvements have been suggested by multiple researchers. Genuchten, for example, has researched the use of an Electronic Meeting System (EMS) to improve the productivity of the meetings, with positive results [Genuchten, 1997].
Other researchers propose the usage of software that keeps a database of detected errors and automatically scans
program code for these common errors [Doolan,1992]. This again should result in improved productivity.
Example
In the diagram a very simple example is given of an inspection process in which a two-line piece of code is inspected on the basis of a high-level document with a single requirement.
The high-level document for this project specifies that, in all software code produced, variables must be declared strongly typed. On the basis of this requirement the low-level document is checked for defects. A defect is found on line 1, as a variable is not declared strongly typed. The defect is then reported in the list of defects found and categorized according to the categorization specified in the high-level document.
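For illustration, a minimal sketch of what the inspected two-line fragment and the resulting defect list might look like, using Python type annotations to stand in for "declared strongly typed"; the code, the defect category and the log format are assumptions, not taken from the original example:

# Low-level document: a two-line code fragment submitted for inspection.
total = 0          # line 1: variable not declared with an explicit type -> defect
count: int = 10    # line 2: explicitly typed -> complies with the high-level requirement

# Defect list produced by the inspection, categorized per the high-level document
# (the "minor" classification here is only an example).
defect_list = [
    {"line": 1, "category": "minor", "description": "variable 'total' not declared strongly typed"},
]
print(defect_list)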
References
[1] Fagan, M.E., Advances in Software Inspections, July 1986, IEEE Transactions on Software Engineering, Vol. SE-12, No. 7, Page 744-751
(http:/ / www. mfagan.com/ pdfs/ aisi1986.pdf)
[2] Fagan, M.E. (1976). "Design and Code inspections to reduce errors in program development". IBM Systems Journal 15 (3): pp. 182-211.
(http:/ / www. mfagan.com/ pdfs/ ibmfagan.pdf)
[3] Eickelmann, Nancy S, Ruffolo, Francesca, Baik, Jongmoon, Anant, A, 2003 An Empirical Study of Modifying the Fagan Inspection Process
and the Resulting Main Effects and Interaction Effects Among Defects Found, Effort Required, Rate of Preparation and Inspection, Number of
Team Members and Product 1st Pass Quality, Proceedings of the 27th Annual NASA Goddard/IEEE Software Engineering Workshop
1. [Laitenberger, 1999] Laitenberger, O., DeBaud, J.M., 1999, An encompassing life cycle centric survey of software inspection, Journal of Systems and Software 50 (2000), Page 5-31
2. [So, 1995] So, S., Lim, Y., Cha, S.D., Kwon, Y.J., 1995, An Empirical Study on Software Error Detection: Voting, Instrumentation, and Fagan Inspection, Proceedings of the 1995 Asia Pacific Software Engineering Conference (APSEC '95), Page 345-351
3. [Doolan, 1992] Doolan, E.P., 1992, Experience with Fagan's Inspection Method, Software: Practice and Experience, (February 1992) Vol. 22(2), Page 173-182
4. [Genuchten, 1997] Genuchten, M., Cornelissen, W., Van Dijk, C., 1997, Supporting Inspections with an Electronic Meeting System, Journal of Management Information Systems, Winter 1997-98/Volume 14, No. 3, Page 165-179
Software walkthrough
In software engineering, a walkthrough or walk-through is a form of software peer review "in which a designer or
programmer leads members of the development team and other interested parties through a software product, and the
participants ask questions and make comments about possible errors, violation of development standards, and other
problems"
[1]
.
"Software product" normally refers to some kind of technical document. As indicated by the IEEE definition, this
might be a software design document or program source code, but use cases, business process definitions, test case
specifications, and a variety of other technical documentation may also be walked through.
A walkthrough differs from software technical reviews in its openness of structure and its objective of
familiarization. It differs from software inspection in its ability to suggest direct alterations to the product reviewed,
its lack of a direct focus on training and process improvement, and its omission of process and product measurement.
Process
A walkthrough may be quite informal, or may follow the process detailed in IEEE 1028 and outlined in the article on
software reviews.
Objectives and participants
In general, a walkthrough has one or two broad objectives: to gain feedback about the technical quality or content of
the document; and/or to familiarize the audience with the content.
A walkthrough is normally organized and directed by the author of the technical document. Any combination of
interested or technically qualified personnel (from within or outside the project) may be included as seems
appropriate.
IEEE 1028[1] recommends three specialist roles in a walkthrough:
The Author, who presents the software product in a step-by-step manner at the walk-through meeting, and is
probably responsible for completing most action items;
The Walkthrough Leader, who conducts the walkthrough, handles administrative tasks, and ensures orderly
conduct (and who is often the Author); and
The Recorder, who notes all anomalies (potential defects), decisions, and action items identified during the
walkthrough meetings.
References
[1] IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clause 3.8
Code review
Code review is systematic examination (often known as peer review) of computer source code. It is intended to find
and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the
developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal
inspections.
[1]
Introduction
Code reviews can often find and remove common vulnerabilities such as format string exploits, race conditions,
memory leaks and buffer overflows, thereby improving software security. Online software repositories based on
Subversion (with Redmine or Trac), Mercurial, Git or others allow groups of individuals to collaboratively review
code. Additionally, specific tools for collaborative code review can facilitate the code review process.
Automated code reviewing software lessens the developer's task of reviewing large chunks of code by systematically checking source code for known vulnerabilities.
of the embedded software engineers surveyed currently use automated tools for peer code review and 23.7% expect
to use them within 2 years.
[2]
Capers Jones' ongoing analysis of over 12,000 software development projects showed that the latent defect discovery
rate of formal inspection is in the 60-65% range. For informal inspection, the figure is less than 50%. The latent
defect discovery rate for most forms of testing is about 30%.[3]
Typical code review rates are about 150 lines of code per hour. Inspecting and reviewing more than a few hundred
lines of code per hour for critical software (such as safety-critical embedded software) may be too fast to find errors.[4][5] Industry data indicates that code reviews can accomplish at most an 85% defect removal rate, with an average rate of about 65%.[6]
The types of defects detected in code reviews have also been studied. Based on empirical evidence, it seems that up to 75% of code review defects affect software evolvability rather than functionality, making code reviews an excellent tool for software companies with long product or system life cycles.[7][8]
Types
Code review practices fall into three main categories: pair programming, formal code review and lightweight code
review.
[1]
Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants
and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend
a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are
extremely thorough and have been proven effective at finding defects in the code under review.
Lightweight code review typically requires less overhead than formal code inspections, though it can be equally
effective when done properly. Lightweight reviews are often conducted as part of the normal development process:
Over-the-shoulder: One developer looks over the author's shoulder as the latter walks through the code.
Email pass-around: The source code management system emails code to reviewers automatically after a check-in is made.
Pair programming: Two authors develop code together at the same workstation, as is common in Extreme Programming.
Tool-assisted code review: Authors and reviewers use specialized tools designed for peer code review.
Some of these may also be labeled a "Walkthrough" (informal) or "Critique" (fast and informal).
Many teams that eschew traditional, formal code review use one of the above forms of lightweight review as part of
their normal development process. A code review case study published in the book Best Kept Secrets of Peer Code
Review found that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more
cost-effective.
Criticism
Historically, formal code reviews have required a considerable investment in preparation for the review event and
execution time.
Use of code analysis tools can support this activity, especially tools that work in the IDE, as they provide direct feedback to developers on coding-standard compliance.
References
[1] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http:/ / www. wiley. com/
WileyCDA/ WileyTitle/ productCd-0470042125.html). Wiley-IEEE Computer Society Press. p.260. ISBN0-470-04212-5. .
[2] VDC Research (2012-02-01). "Automated Defect Prevention for Embedded Software Quality" (http:/ / alm. parasoft. com/
embedded-software-vdc-report/ ). VDC Research. . Retrieved 2012-04-10.
[3] Jones, Capers; Christof, Ebert (April 2009). "Embedded Software: Facts, Figures, and Future" (http:/ / doi. ieeecomputersociety. org/ 10.
1109/ MC. 2009. 118). IEEE Computer Society. . Retrieved 2010-10-05.
[4] Ganssle, Jack (February 2010). "A Guide to Code Inspections" (http:/ / www. ganssle. com/ inspections. pdf). The Ganssle Group. . Retrieved
2010-10-05.
[5] Kemerer, C.F.; Paulk, M.C. (July-Aug. 2009). "The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on
PSP Data" (http:/ / ieeexplore. ieee. org/ xpls/ abs_all. jsp?arnumber=4815279& tag=1). IEEE Transactions on Software Engineering. .
Retrieved 2012-03-21.
[6] Jones, Capers (June 2008). "Measuring Defect Potentials and Defect Removal Efficiency" (http:/ / www. stsc. hill. af. mil/ crosstalk/ 2008/
06/ 0806jones. html). Crosstalk, The Journal of Defense Software Engineering. . Retrieved 2010-10-05.
[7] Mantyla, M.V.; Lassenius, C (May-June 2009). "What Types of Defects Are Really Discovered in Code Reviews?" (http:/ / lib. tkk. fi/ Diss/
2009/ isbn9789512298570/ article5. pdf). IEEE Transactions on Software Engineering. . Retrieved 2012-03-21.
[8] Siy, H.; Votta, L. (May-June 2001). "Does the Modern Code Inspection Have Value?" (http:/ / ieeexplore. ieee. org/ xpls/ abs_all.
jsp?arnumber=972741). IEEE Proc. International Conference of Software Maintenance. . Retrieved 2012-03-21.
Notes
Jason Cohen (2006). Best Kept Secrets of Peer Code Review (Modern Approach. Practical Advice.).
Smartbearsoftware.com. ISBN1-59916-067-6.
External links
AgileReview - Code Review Software (http:/ / www. agilereview. org)
"A Guide to Code Inspections" (Jack G. Ganssle) (http:/ / www. ganssle. com/ inspections. pdf)
Best Practices for Peer Code Review (http:/ / smartbear. com/ docs/ BestPracticesForPeerCodeReview. pdf) white
paper
Article Four Ways to a Practical Code Review (http:/ / www. methodsandtools. com/ archive/ archive. php?id=66)
Lightweight Tool Support for Effective Code Reviews (http:/ / www. atlassian. com/ software/ crucible/ learn/
codereviewwhitepaper. pdf) white paper
Security Code Review FAQs (http:/ / www. ouncelabs. com/ resources/ code-review-faq. asp)
Security code review guidelines (http:/ / www. homeport. org/ ~adam/ review. html)
Automated code review
Automated code review software checks source code for compliance with a predefined set of rules or best practices.
The use of analytical methods to inspect and review source code to detect bugs has been a standard development
practice. This process can be accomplished both manually and in an automated fashion.[1] With automation, software
tools provide assistance with the code review and inspection process. The review program or tool typically displays a
list of warnings (violations of programming standards). A review program can also provide an automated or a
programmer-assisted way to correct the issues found.
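As a minimal sketch of what such an automated check can look like, the following fragment scans source text against two invented rules and prints warnings; the rules, rule names and sample input are illustrative and not taken from any real tool:

import re

# Hypothetical coding rules an automated review tool might enforce.
RULES = [
    ("long-line", re.compile(r"^.{80,}$"), "line exceeds 79 characters"),
    ("no-eval", re.compile(r"\beval\("), "use of eval() is discouraged"),
]

def review(source: str) -> list:
    # Return a list of warnings (rule violations) for the given source text.
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern, message in RULES:
            if pattern.search(line):
                warnings.append(f"line {lineno}: [{rule_id}] {message}")
    return warnings

sample = "x = eval(user_input)\ny = 1\n"
for warning in review(sample):
    print(warning)   # line 1: [no-eval] use of eval() is discouraged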
Some static code analysis tools can be used to assist with automated code review. They do not compare favorably to manual reviews; however, they can be run faster and more efficiently. These tools also encapsulate deep knowledge of the underlying rules and semantics required to perform this type of analysis, so that the human code reviewer does not need the same level of expertise as an expert human auditor.[1] Many integrated development environments also provide basic automated code review functionality. For example, the Eclipse[2] and Microsoft Visual Studio[3] IDEs support a variety of plugins that facilitate code review.
Besides static code analysis tools, there are also tools that analyze and visualize software structures and help humans to better understand them. Such systems are geared more towards analysis because they typically do not contain a predefined set of rules to check software against. Some of these tools (e.g. Imagix 4D, Resharper, SonarJ, Sotoarc, Structure101, ACTool[4]) allow one to define target architectures and to enforce that the target architecture's constraints are not violated by the actual software implementation.
References
[1] Gomes, Ivo; Morgado, Pedro; Gomes, Tiago; Moreira, Rodrigo (2009). "An overview of the Static Code Analysis approach in Software
Development" (http:/ / paginas.fe.up.pt/ ~ei05021/ TQSO - An overview on the Static Code Analysis approach in Software Development.
pdf). Universidade do Porto. . Retrieved 2010-10-03.
[2] "Collaborative Code Review Tool Development" (http:/ / marketplace. eclipse. org/ content/ collaborative-code-review-tool).
www.eclipse.org. . Retrieved 2010-10-13.
[3] "Code Review Plug-in for Visual Studio 2008, ReviewPal" (http:/ / www. codeproject. com/ KB/ work/ ReviewPal. aspx).
www.codeproject.com. . Retrieved 2010-10-13.
[4] Architecture Consistency plugin for Eclipse (http:/ / sourceforge. net/ projects/ actool/ )
Static code analysis
Static program analysis (also static code analysis or SCA) is the analysis of computer software that is performed
without actually executing programs built from that software (analysis performed on executing programs is known as
dynamic analysis).
[1]
In most cases the analysis is performed on some version of the source code; in other cases, on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding, program comprehension, or code review.
Rationale
The sophistication of the analysis performed by tools varies from those that only consider the behavior of individual
statements and declarations, to those that include the complete source code of a program in their analysis. Uses of the
information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal
methods that mathematically prove properties about a given program (e.g., its behavior matches that of its
specification).
Software metrics and reverse engineering can be described as forms of static analysis. Deriving software metrics and
static analysis are increasingly deployed together, especially in the creation of embedded systems, by defining so-called software quality objectives.[2]
A growing commercial use of static analysis is in the verification of properties of software used in safety-critical
computer systems and locating potentially vulnerable code.[3] For example, the following industries have identified
the use of static code analysis as a means of improving the quality of increasingly sophisticated and complex
software:
1. Medical software: The U.S. Food and Drug Administration (FDA) has identified the use of static analysis for
medical devices.
[4]
2. Nuclear software: In the UK the Health and Safety Executive recommends the use of static analysis on Reactor
Protection Systems.
[5]
A recent study by VDC Research reports that 28.7% of the embedded software engineers surveyed currently use
static analysis tools and 39.7% expect to use them within 2 years.[6]
In the application security industry, the name Static Application Security Testing (SAST) is also used.
Formal methods
Formal methods is the term applied to the analysis of software (and computer hardware) whose results are obtained
purely through the use of rigorous mathematical methods. The mathematical techniques used include denotational
semantics, axiomatic semantics, operational semantics, and abstract interpretation.
By a straightforward reduction to the halting problem it is possible to prove that (for any Turing complete language)
finding all possible run-time errors in an arbitrary program (or more generally any kind of violation of a specification
on the final result of a program) is undecidable: there is no mechanical method that can always answer truthfully
whether a given program may or may not exhibit runtime errors. This result dates from the works of Church, Gödel
and Turing in the 1930s (see the halting problem and Rice's theorem). As with many undecidable questions, one can
still attempt to give useful approximate solutions.
Some of the implementation techniques of formal static analysis include:
Model checking considers systems that have finite state or may be reduced to finite state by abstraction;
Data-flow analysis is a lattice-based technique for gathering information about the possible set of values;
Abstract interpretation models the effect that every statement has on the state of an abstract machine (i.e., it
'executes' the software based on the mathematical properties of each statement and declaration). This abstract
machine over-approximates the behaviours of the system: the abstract system is thus made simpler to analyze, at
the expense of incompleteness (not every property true of the original system is true of the abstract system). If
properly done, though, abstract interpretation is sound (every property true of the abstract system can be mapped
to a true property of the original system).[7] The Frama-C value analysis plugin and Polyspace rely heavily on abstract interpretation (a small illustrative sketch of the idea follows this list).
Use of assertions in program code as first suggested by Hoare logic. There is tool support for some programming
languages (e.g., the SPARK programming language (a subset of Ada) and the Java Modeling Language JML
using ESC/Java and ESC/Java2, Frama-c WP (weakest precondition) plugin for the C language extended with
ACSL (ANSI/ISO C Specification Language) ).
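As promised above, here is a minimal, purely illustrative sketch of the abstract interpretation idea: integer values are abstracted to their sign and a few straight-line assignments are "executed" over that abstract domain, over-approximating the concrete behaviour (none of this is taken from Frama-C, Polyspace or any other cited tool):

# Abstract domain: the sign of an integer value; "T" (top) means "could be anything".
NEG, ZERO, POS, TOP = "-", "0", "+", "T"

def sign_of(n: int) -> str:
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abstract_add(a: str, b: str) -> str:
    # Addition over the sign domain; imprecise cases go to TOP (the over-approximation).
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b and a in (POS, NEG):
        return a          # (+) + (+) = (+), (-) + (-) = (-)
    return TOP            # e.g. (+) + (-) could have any sign

# 'Execute' a tiny straight-line program abstractly: x = 3; y = -2; z = x + x; w = x + y
env = {}
env["x"] = sign_of(3)
env["y"] = sign_of(-2)
env["z"] = abstract_add(env["x"], env["x"])   # provably positive
env["w"] = abstract_add(env["x"], env["y"])   # sign unknown -> TOP
print(env)   # {'x': '+', 'y': '-', 'z': '+', 'w': 'T'}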
References
[1] Wichmann, B. A., A. A. Canning, D. L. Clutterbuck, L. A. Winsbarrow, N. J. Ward, and D. W. R. Marsh. Industrial Perspective on Static Analysis. Software Engineering Journal Mar. 1995: 69-75. http:/ / www. ida. liu. se/ ~TDDC90/ papers/ industrial95. pdf
[2] Software Quality Objectives for Source Code. Proceedings Embedded Real Time Software and Systems 2010 Conference, ERTS2, Toulouse,
France: Patrick Briand, Martin Brochet, Thierry Cambois, Emmanuel Coutenceau, Olivier Guetta, Daniel Mainberte, Frederic Mondot, Patrick
Munier, Loic Noury, Philippe Spozio, Frederic Retailleau http:/ / www. erts2010. org/ Site/ 0ANDGY78/ Fichier/
PAPIERS%20ERTS%202010/ ERTS2010_0035_final. pdf
[3] Improving Software Security with Precise Static and Runtime Analysis, Benjamin Livshits, section 7.3 "Static Techniques for Security,"
Stanford doctoral thesis, 2006. http:/ / research.microsoft. com/ en-us/ um/ people/ livshits/ papers/ pdf/ thesis. pdf
[4] FDA (2010-09-08). "Infusion Pump Software Safety Research at FDA" (http:/ / www. fda. gov/ MedicalDevices/
ProductsandMedicalProcedures/ GeneralHospitalDevicesandSupplies/ InfusionPumps/ ucm202511. htm). Food and Drug Administration. .
Retrieved 2010-09-09.
[5] Computer based safety systems - technical guidance for assessing software aspects of digital computer based protection systems, http:/ /
www.hse.gov.uk/ foi/ internalops/ nsd/ tech_asst_guides/ tast046app1. htm
[6] VDC Research (2012-02-01). "Automated Defect Prevention for Embedded Software Quality" (http:/ / alm. parasoft. com/
embedded-software-vdc-report/ ). VDC Research. . Retrieved 2012-04-10.
[7] Jones, Paul (2010-02-09). "A Formal Methods-based verification approach to medical device software analysis" (http:/ / embeddeddsp.
embedded.com/ design/ opensource/ 222700533). Embedded Systems Design. . Retrieved 2010-09-09.
Bibliography
Syllabus and readings (http:/ / www. stanford. edu/ class/ cs295/ ) for Alex Aiken's (http:/ / theory. stanford. edu/ ~aiken/ ) Stanford CS295 course.
Nathaniel Ayewah, David Hovemeyer, J. David Morgenthaler, John Penix, William Pugh, " Using Static Analysis
to Find Bugs (http:/ / www2. computer. org/ portal/ web/ csdl/ doi/ 10. 1109/ MS. 2008. 130)," IEEE Software,
vol. 25, no. 5, pp. 22-29, Sep./Oct. 2008, doi:10.1109/MS.2008.130
Brian Chess, Jacob West (Fortify Software) (2007). Secure Programming with Static Analysis. Addison-Wesley.
ISBN978-0-321-42477-8.
Flemming Nielson, Hanne R. Nielson, Chris Hankin (1999, corrected 2004). Principles of Program Analysis.
Springer. ISBN978-3-540-65410-0.
"Abstract interpretation and static analysis," (http:/ / santos. cis. ksu. edu/ schmidt/ Escuela03/ home. html)
International Winter School on Semantics and Applications 2003, by David A. Schmidt (http:/ / people. cis. ksu.
edu/ ~schmidt/ )
External links
The SAMATE Project (http:/ / samate. nist. gov), a resource for Automated Static Analysis tools
Integrate static analysis into a software development process (http:/ / www. embedded. com/ shared/
printableArticle. jhtml?articleID=193500830)
Code Quality Improvement - Coding standards conformance checking (DDJ) (http:/ / www. ddj. com/ dept/
debug/ 189401916)
Episode 59: Static Code Analysis (http:/ / www. se-radio. net/ index. php?post_id=220531) Interview (Podcast) at
Software Engineering Radio
Implementing Automated Governance for Coding Standards (http:/ / www. infoq. com/ articles/
governance-coding-standards) Explains why and how to integrate static code analysis into the build process
What is Static Code Analysis? explanation in Hebrew (http:/ / eswlab. com/ info. asp?cid=637)
.NET Static Analysis (InfoQ) (http:/ / www. infoq. com/ articles/ dotnet-static-analysis)
List of tools for static code analysis
This is a list of tools for static code analysis.
Historical
Lint The original static code analyzer of C code.
NuMega Code Review now part of Micro Focus DevPartner suite.
By language
Multi-language
Axivion Bauhaus Suite A tool for Ada, C, C++, C#, and Java code that comprises various analyses such as
architecture checking, interface analyses, and clone detection.
Black Duck Suite Analyze the composition of software source code and binary files, search for reusable code,
manage open source and third-party code approval, honor the legal obligations associated with mixed-origin code,
and monitor related security vulnerabilities.
BugScout Detects security flaws in Java, PHP, ASP and C# web applications.
CAST Application Intelligence Platform Detailed, audience-specific dashboards to measure quality and
productivity. 30+ languages, C/C++, Java, .NET, Oracle, PeopleSoft, SAP, Siebel, Spring, Struts, Hibernate and
all major databases.
ChecKing Integrated software quality portal that allows managing the quality of all phases of software
development. It includes static code analyzers for Java, JSP, Javascript, HTML, XML, .NET (C#, ASP.NET,
VB.NET, etc.), PL/SQL, embedded SQL, SAP ABAP IV, Natural/Adabas, C/C++, Cobol, JCL, PowerBuilder.
Coverity Static Analysis (formerly Coverity Prevent) Identifies security vulnerabilities and code defects in C,
C++, C# and Java code. Complements Coverity Dynamic Code Analysis and Architecture Analysis.
DevPartner Code Review. Offered by Micro Focus. Static metrics and bug pattern detection for C#, VB.NET, and
ASP.NET languages. Plugin to Visual Studio. Customized parsers provide extension through regular expressions
and tailored rulesets.
DMS Software Reengineering Toolkit Supports custom analysis of C, C++, C#, Java, COBOL, PHP,
VisualBasic and many other languages. Also COTS tools for clone analysis, dead code analysis, and style
checking.
Compuware DevEnterprise Analysis of COBOL, PL/I, JCL, CICS, DB2, IMS and others.
GrammaTech CodeSonar Analyzes C, C++.
HP Fortify Source Code Analyzer Helps developers identify software security vulnerabilities in C/C++, Java,
JSP, .NET, ASP.NET, ColdFusion, classic ASP, PHP, Visual Basic 6, VBScript, JavaScript, PL/SQL, T-SQL,
Python and COBOL and configuration files.
IBM Rational AppScan Source Edition Analyzes source code to identify security vulnerabilities while
integrating security testing with software development processes and systems. Supports C/C++, .NET, Java, JSP,
JavaScript, ColdFusion, Classic ASP, PHP, Perl, VisualBasic 6, PL/SQL, T-SQL, and COBOL
Imagix 4D Identifies problems in variable use, task interaction and concurrency, especially in embedded
applications, as part of an overall system for understanding, improving and documenting C, C++ and Java code.
Intel - Intel Parallel Studio XE: Contains a Static Security Analysis (SSA) feature; supports C/C++ and Fortran
JustCode Visual Studio code analysis and refactoring productivity tool by Telerik for C#, VB.NET, XAML,
ASP.NET, JavaScript, HTML, XML, CSS, Razor, WinRT and Metro apps
Klocwork Insight Provides security vulnerability, defect detection, architectural and build-over-build trend
analysis for C, C++, C#, Java.
LDRA Testbed A software analysis and testing tool suite for C, C++, Ada83, Ada95 and Assembler (Intel,
Freescale, Texas Instruments).
MALPAS A software static analysis toolset for a variety of languages including Ada, C, Pascal and Assembler (Intel, PowerPC and Motorola). Used primarily for safety-critical applications in the nuclear and aerospace industries.
Micro Focus (formerly Relativity Technologies) Modernization Workbench Parsers included for C/C++,
COBOL (multiple variants including IBM, Unisys, MF, ICL, Tandem), Java, PL/I, Natural (inc. ADABAS),
Visual Basic, RPG, and other legacy languages; Extensible SDK to support 3rd party parsers. Supports automated
metrics (including function points), business rule mining, componentisation and SOA analysis. Rich ad hoc
diagramming, AST search & reporting.
Moose Moose started as a software analysis platform with many tools to manipulate, assess or visualize
software. It can evolve to a more generic data analysis platform. Supported languages are C/C++, Java, Smalltalk,
.NET, more may be added.
Parasoft Analyzes Java (Jtest), JSP, C, C++ (C++test), .NET (C#, ASP.NET, VB.NET, etc.) using .TEST,
WSDL, XML, HTML, CSS, JavaScript, VBScript/ASP, and configuration files for security,
[1]
compliance,
[2]
and
defect prevention.
Copy/Paste Detector (CPD) PMDs duplicate code detection for (e.g.) Java, JSP, C, C++, ColdFusion and PHP
code.
Polyspace Uses abstract interpretation to detect and prove the absence of certain run time errors in source code
for C, C++, and Ada
ProjectCodeMeter
[3]
Warns on code quality issues such as insufficient commenting or complex code structure.
Counts code metrics, gives cost & time estimations. Analyzes C, C++, C#, J#, Java, PHP, Objective-C,
JavaScript, UnrealEngine script, ActionScript, DigitalMars D.
Protecode Analyzes the composition of software source code and binary files, searches for open source and
third party code and their associated licensing obligations. Can also detect security vulnerabilities.
Rational Software Analyzer Supports Java, C, C++, others available via extensions
ResourceMiner Architecture down to details multipurpose analysis and metrics, develop own rules for
masschange and generator development. Supports 30+ legacy and modern languages and all major databases.
Semmle - supports Java, C, C++, C#.
SofCheck Inspector Static detection of logic errors, race conditions, and redundant code for Ada and Java;
automatically extracts pre/postconditions from code.
Sonar A continuous inspection engine to manage the technical debt: unit tests, complexity, duplication, design,
comments, coding standards and potential problems. Supports languages: ABAP, C, Cobol, C#, Flex, Forms,
Groovy, Java, JavaScript, Natural, PHP, PL/SQL, Visual Basic 6, Web, XML, Python.
Sotoarc/Sotograph Architecture and quality in-depth analysis and monitoring for C, C++, C#, Java
SPARROW - SPARROW is a static analysis tool that understands the semantics of C/C++ and Java code based
on static analysis theory by automatically detecting fatal errors such as memory leaks and buffer overrun
Syhunt Sandcat Detects security flaws in PHP, Classic ASP and ASP.NET web applications.
Understand Analyzes Ada, C, C++, C#, COBOL, CSS, Delphi, Fortran, HTML, Java, JavaScript, Jovial,
Pascal, PHP, PL/M, Python, VHDL, and XML reverse engineering of source, code navigation, and metrics
tool.
Veracode Finds security flaws in application binaries and bytecode without requiring source. Supported
languages include C, C++, .NET (C#, C++/CLI, VB.NET, ASP.NET), Java, JSP, ColdFusion, PHP, Ruby on
Rails, and Objective-C, including mobile applications on the Windows Mobile, BlackBerry, Android, and iOS
platforms.
Visual Studio Team System Analyzes C++ and C# source code. Only available in Team Suite and Development Edition.
Yasca Yet Another Source Code Analyzer, a plugin-based framework to scan arbitrary file types, with plugins
for C/C++, Java, JavaScript, ASP, PHP, HTML/CSS, ColdFusion, COBOL, and other file types. It integrates with
other scanners, including FindBugs, PMD, and Pixy.
.NET
FxCop Free static analysis for Microsoft .NET programs that compile to CIL. Standalone and integrated in
some Microsoft Visual Studio editions; by Microsoft.
Gendarme Open-source (MIT License) equivalent to FxCop created by the Mono project. Extensible
rule-based tool to find problems in .NET applications and libraries, especially those containing code in ECMA
CIL format.
StyleCop Analyzes C# source code to enforce a set of style and consistency rules. It can be run from inside of
Microsoft Visual Studio or integrated into an MSBuild project. Free download from Microsoft.
CodeIt.Right Combines static code analysis and automatic refactoring to best practices which allows
automatically correct code errors and violations; supports C# and VB.NET.
CodeRush A plugin for Visual Studio, it addresses a multitude of shortcomings with the popular IDE.
Including alerting users to violations of best practices by using static code analysis.
Parasoft dotTEST A static analysis, unit testing, and code review plugin for Visual Studio; works with
languages for Microsoft .NET Framework and .NET Compact Framework, including C#, VB.NET, ASP.NET and
Managed C++.
JustCode Add-on for Visual Studio 2005/2008/2010 by Telerik for real-time, system-wide code analysis for
C#, VB.NET, ASP.NET, XAML, JavaScript, HTML, Razor, CSS and multi-language systems.
NDepend Simplifies managing a complex .NET code base by analyzing and visualizing code dependencies, by
defining design rules, by doing impact analysis, and by comparing different versions of the code. Integrates into
Visual Studio.
ReSharper Add-on for Visual Studio 2003/2005/2008/2010 from the creators of IntelliJ IDEA, which also does
static code analysis of C#.
Kalistick Mixing from the Cloud: static code analysis with best practice tips and collaborative tools for Agile
teams.
ActionScript
Apparat A language manipulation and optimization framework consisting of intermediate representations for
ActionScript.
Ada
AdaControl - A tool to control occurrences of various entities or programming patterns in Ada code, used for
checking coding standards, enforcement of safety related rules, and support for various manual inspections.
AdaCore CodePeer Automated code review and bug finder for Ada programs that uses control-flow,
data-flow, and other advanced static analysis techniques.
LDRA Testbed A software analysis and testing tool suite for Ada83/95.
Polyspace Uses abstract interpretation to detect and prove the absence of certain run time errors in source code
SofCheck Inspector Static detection of logic errors, race conditions, and redundant code for Ada; automatically
extracts pre/postconditions from code.
C/C++
Astrée Exhaustive search for runtime errors and assertion violations by abstract interpretation; tailored towards critical code (avionics)
BLAST (Berkeley Lazy Abstraction Software verification Tool) A software model checker for C programs
based on lazy abstraction.
Cppcheck Open-source tool that checks for several types of errors, including use of STL.
cpplint - An open-source tool that checks for compliance with Google's style guide for C++ coding
Clang A compiler that includes a static analyzer.
Coccinelle Source code pattern matching and transformation
Eclipse (software) An IDE that includes a static code analyzer (CODAN
[4]
).
Flawfinder - simple static analysis tool for C/C++ programs to find potential security vulnerabilities
Frama-C A static analysis framework for C.
FlexeLint A multiplatform version of PC-Lint.
Green Hills Software DoubleCheck A software analysis tool for C/C++.
Intel - Intel Parallel Studio XE: has static security analysis (SSA) feature.
Lint The original static code analyzer for C.
LDRA Testbed A software analysis and testing tool suite for C/C++.
Monoidics INFER A sound tool for C/C++ based on Separation Logic.
Parasoft C/C++test A C/C++ tool that does static analysis, unit testing, code review, and runtime error
detection; plugins available for Visual Studio and Eclipse-based IDEs.
PC-Lint A software analysis tool for C/C++.
Polyspace Uses abstract interpretation to detect and prove the absence of certain run time errors in source code
PVS-Studio A software analysis tool for C/C++.
QA-C (and QA-C++) Deep static analysis of C/C++ for quality assurance and guideline enforcement.
Red Lizard's Goanna Static analysis of C/C++ for command line, Eclipse and Visual Studio.
SLAM project a project of Microsoft Research for checking that software satisfies critical behavioral
properties of the interfaces it uses.
Sparse A tool designed to find faults in the Linux kernel.
Splint An open source evolved version of Lint, for C.
Java
AgileJ StructureViews Reverse engineered Java class diagrams with an emphasis on filtering
Checkstyle Besides some static code analysis, it can be used to show violations of a configured coding
standard.
FindBugs An open-source static bytecode analyzer for Java (based on Jakarta BCEL) from the University of
Maryland.
Hammurapi Versatile code review program; free for non-commercial use.
PMD A static ruleset based Java source code analyzer that identifies potential problems.
Soot A language manipulation and optimization framework consisting of intermediate languages for Java.
Squale A platform to manage software quality (also available for other languages, using commercial analysis
tools though).
Jtest Testing and static code analysis product by Parasoft.
LDRA Testbed A software analysis and testing tool suite for Java.
SemmleCode Object oriented code queries for static program analysis.
SonarJ Monitors conformance of code to intended architecture, also computes a wide range of software
metrics.
Kalistick A Cloud-based platform to manage and optimize code quality for Agile teams with DevOps spirit
JavaScript
Closure Compiler JavaScript optimizer that rewrites code to be faster and smaller, and checks use of native
JavaScript functions.
JSLint JavaScript syntax checker and validator.
JSHint A community driven fork of JSLint.
Objective-C
Clang The free Clang project includes a static analyzer. As of version 3.2, this analyzer is included in
Xcode.
[5]
Perl
Perl::Critic - A tool to help enforce common best practices for programming in Perl. Most best practices are based
on Damian Conway's Perl Best Practices book.
PerlTidy - Program that acts as a syntax checker and tester/enforcer for coding practices in Perl.
Padre - An IDE for Perl that also provides static code analysis to check for common beginner errors.
Python
Pychecker A Python source code checking tool.
Pylint Static code analyzer for the Python language.
Formal methods tools
Tools that use a formal methods approach to static analysis (e.g., using static program assertions):
ESC/Java and ESC/Java2 Based on Java Modeling Language, an enriched version of Java.
MALPAS A formal methods tool that uses directed graphs and regular algebra to prove that software under analysis correctly meets its mathematical specification.
Polyspace Uses abstract interpretation, a formal methods based technique,[6] to detect and prove the absence of certain run time errors in source code for C/C++ and Ada
SofCheck Inspector Statically determines and documents pre- and postconditions for Java methods; statically
checks preconditions at all call sites; also supports Ada.
SPARK Toolset including the SPARK Examiner Based on the SPARK language, a subset of Ada.
References
[1] Parasoft Application Security Solution (http:/ / www.parasoft. com/ jsp/ solutions/ application_security_solution. jsp?itemId=322)
[2] Parasoft Compliance Solution (http:/ / www. parasoft.com/ jsp/ solutions/ compliance. jsp?itemId=339)
[3] Project Code Meter site (http:/ / www.projectcodemeter. com)
[4] http:/ / wiki. eclipse.org/ CDT/ designs/ StaticAnalysis
[5] "Static Analysis in Xcode" (http:/ / developer.apple. com/ mac/ library/ featuredarticles/ StaticAnalysis/ index. html). Apple. . Retrieved
2009-09-03.
[6] Cousot, Patrick (2007). "The Role of Abstract Interpretation in Formal Methods" (http:/ / ieeexplore. ieee. org/ Xplore/ login. jsp?url=http:/ /
ieeexplore. ieee. org/ iel5/ 4343908/ 4343909/ 04343930. pdf?arnumber=4343930& authDecision=-203). IEEE International Conference on
Software Engineering and Formal Methods. . Retrieved 2010-11-08.
External links
Java Static Checkers (http:/ / www. dmoz. org/ Computers/ Programming/ Languages/ Java/ Development_Tools/
Performance_and_Testing/ Static_Checkers/ ) at the Open Directory Project
List of Java static code analysis plugins for Eclipse (http:/ / www. eclipseplugincentral. com/
Web_Links-index-req-viewcatlink-cid-14-orderby-rating. html)
List of static source code analysis tools for C (http:/ / www. spinroot. com/ static/ )
List of static source code analysis tools (https:/ / www. cert. org/ secure-coding/ tools. html) at CERT
SAMATE-Source Code Security Analyzers (http:/ / samate. nist. gov/ index. php/
Source_Code_Security_Analyzers. html)
SATE - Static Analysis Tool Exposition (http:/ / samate. nist. gov/ SATE. html)
A Comparison of Bug Finding Tools for Java (http:/ / www. cs. umd. edu/ ~jfoster/ papers/ issre04. pdf), by
Nick Rutar, Christian Almazan, and Jeff Foster, University of Maryland. Compares Bandera, ESC/Java 2,
FindBugs, JLint, and PMD.
Mini-review of Java Bug Finders (http:/ / www. oreillynet. com/ digitalmedia/ blog/ 2004/ 03/
minireview_of_java_bug_finders. html), by Rick Jelliffe, O'Reilly Media.
Parallel Lint (http:/ / www. ddj. com/ 218000153), by Andrey Karpov
Integrate static analysis into a software development process (http:/ / www. embedded. com/ shared/
printableArticle. jhtml?articleID=193500830) Explains how one goes about integrating static analysis into a
software development process
GUI testing and review
GUI software testing
In software engineering, graphical user interface testing is the process of testing a product's graphical user
interface to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.
Test Case Generation
To generate a good set of test cases, the test designers must be certain that their suite covers all the functionality of the system and must also be sure that the suite fully exercises the GUI itself. The difficulty in accomplishing this task is twofold: one has to deal with domain size, and one has to deal with sequences. In addition, the tester faces further difficulty when regression testing is required.
The size problem can be easily illustrated. Unlike a CLI (command line interface) system, a GUI has many
operations that need to be tested. A relatively small program such as Microsoft WordPad has 325 possible GUI
operations.
[1]
In a large program, the number of operations can easily be an order of magnitude larger.
The second problem is the sequencing problem. Some functionality of the system may only be accomplished by
following some complex sequence of GUI events. For example, to open a file a user may have to click on the File
Menu and then select the Open operation, and then use a dialog box to specify the file name, and then focus the
application on the newly opened window. Obviously, increasing the number of possible operations increases the
sequencing problem exponentially. This can become a serious issue when the tester is creating test cases manually.
Regression testing becomes a problem with GUIs as well. This is because the GUI may change significantly across
versions of the application, even though the underlying application may not. A test designed to follow a certain path
through the GUI may not be able to follow that path since a button, menu item, or dialog may have changed location
or appearance.
These issues have driven the GUI testing problem domain towards automation. Many different techniques have been
proposed to automatically generate test suites that are complete and that simulate user behavior.
Most of the techniques used to test GUIs attempt to build on techniques previously used to test CLI (Command Line
Interface) programs. However, most of these have scaling problems when they are applied to GUIs. For example,
Finite State Machine-based modeling
[2][3]
in which a system is modeled as a finite state machine and a program is used to generate test cases that exercise all
states, can work well on a system that has a limited number of states but may become overly complex and unwieldy
for a GUI (see also model-based testing).
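To illustrate the finite-state-machine idea, the following minimal Python sketch (the screens, events, and transition
table are invented for illustration and are not taken from the cited work) performs a breadth-first traversal of a small
GUI model and emits, for every reachable state, one event sequence that reaches it, which is the essence of generating
test cases that exercise all states.

from collections import deque

# A hypothetical GUI modeled as a finite state machine:
# states are screens, transitions are user events.
transitions = {
    "main":        {"click_file_menu": "file_menu"},
    "file_menu":   {"click_open": "open_dialog", "press_escape": "main"},
    "open_dialog": {"type_name_and_ok": "document", "click_cancel": "main"},
    "document":    {"close_window": "main"},
}

def generate_state_covering_tests(start="main"):
    """Breadth-first search that returns, for every reachable state,
    the shortest event sequence leading to it from the start state."""
    tests = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for event, target in transitions.get(state, {}).items():
            if target not in tests:
                tests[target] = tests[state] + [event]
                queue.append(target)
    return tests

if __name__ == "__main__":
    for state, events in generate_state_covering_tests().items():
        print(f"{state}: {events}")

On a real application the transition table would be orders of magnitude larger, which is exactly the scaling problem
noted above.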
Planning and artificial intelligence
A novel approach to test suite generation, adapted from a CLI technique
[4]
involves using a planning system.
[5]
Planning is a well-studied technique from the artificial intelligence (AI) domain that attempts to solve problems that
involve four parameters:
an initial state,
a goal state,
a set of operators, and
a set of objects to operate on.
Planning systems determine a path from the initial state to the goal state by using the operators. An extremely simple
planning problem would be one with two words and a single operator, "change a letter", that allows one letter in a
word to be changed to another letter; the goal of the problem would be to change one word into the other.
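As a concrete illustration of this simplest case, the hedged sketch below uses breadth-first search as a stand-in for a
planner: the initial and goal states are two words, the single operator is "change a letter", and the returned sequence
of words is the plan. The word list is invented for the example.

from collections import deque
import string

def change_letter_plan(start, goal, dictionary):
    """Breadth-first search standing in for a planner: the single operator
    changes one letter; the returned word sequence is the plan."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for letter in string.ascii_lowercase:
                candidate = word[:i] + letter + word[i + 1:]
                if candidate in dictionary and candidate not in seen:
                    seen.add(candidate)
                    frontier.append(path + [candidate])
    return None  # no plan exists

if __name__ == "__main__":
    words = {"cold", "cord", "card", "ward", "warm"}
    print(change_letter_plan("cold", "warm", words))
    # ['cold', 'cord', 'card', 'ward', 'warm']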
For GUI testing, the problem is a bit more complex. In
[1]
the authors used a planner called IPP
[6]
to demonstrate this
technique. The method used is straightforward. First, the system's UI is analyzed to determine what
operations are possible. These operations become the operators used in the planning problem. Next, an initial system
state is determined. Then a goal state is chosen that the tester feels would exercise the system adequately. Lastly, the
planning system is used to determine a path from the initial state to the goal state; this path becomes the test plan.
Using a planner to generate the test cases has some specific advantages over manual generation. A planning system,
by its very nature, generates solutions to planning problems in a way that is very beneficial to the tester:
1. The plans are always valid. What this means is that the output of the system can be one of two things, a valid and
correct plan that uses the operators to attain the goal state or no plan at all. This is beneficial because much time
can be wasted when manually creating a test suite due to invalid test cases that the tester thought would work but
didn't.
2. A planning system pays attention to order. Often to test a certain function, the test case must be complex and
follow a path through the GUI where the operations are performed in a specific order. When done manually, this
can lead to errors and also can be quite difficult and time consuming to do.
3. Finally, and most importantly, a planning system is goal oriented. What this means and what makes this fact so
important is that the tester is focusing test suite generation on what is most important, testing the functionality of
the system.
When manually creating a test suite, the tester is more focused on how to test a function (i.e. the specific path
through the GUI). By using a planning system, the path is taken care of and the tester can focus on what function to
test. An additional benefit of this is that a planning system is not restricted in any way when generating the path and
may often find a path that was never anticipated by the tester. This problem is a very important one to combat.
[7]
Another interesting method of generating GUI test cases uses the theory that good GUI test coverage can be attained
by simulating a novice user. One can speculate that an expert user of a system will follow a very direct and
predictable path through a GUI and a novice user would follow a more random path. The theory therefore is that if
we used an expert to test the GUI, many possible system states would never be achieved. A novice user, however,
would follow a much more varied, meandering and unexpected path to achieve the same goal, so it is therefore more
desirable to create test suites that simulate novice usage because they will test more.
The difficulty lies in generating test suites that simulate novice system usage. Using Genetic algorithms is one
proposed way to solve this problem.
[7]
Novice paths through the system are not random paths. First, a novice user
will learn over time and generally won't make the same mistakes repeatedly, and, secondly, a novice user is not
analogous to a group of monkeys trying to type Hamlet, but someone who is following a plan and probably has some
domain or system knowledge.
Genetic algorithms work as follows: a set of genes are created randomly and then are subjected to some task. The
genes that complete the task best are kept and the ones that don't are discarded. The process is again repeated with
the surviving genes being replicated and the rest of the set filled in with more random genes. Eventually one gene (or
a small set of genes if there is some threshold set) will be the only gene in the set and is naturally the best fit for the
given problem.
For the purposes of GUI testing, the method works as follows. Each gene is essentially a list of random integer
values of some fixed length. Each of these genes represents a path through the GUI. For example, for a given tree of
widgets, the first value in the gene (each value is called an allele) would select the widget to operate on, the
following alleles would then fill in input to the widget depending on the number of possible inputs to the widget (for
example, a pull-down list box would have one input: the selected list value). The success of the genes is scored by
a criterion that rewards the best novice behavior.
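A rough sketch of this encoding is shown below; the widget tree, gene length, and fitness function are all invented
placeholders (the real scoring criterion in the cited work rewards novice-like behavior rather than mere path variety).

import random

# Hypothetical widget tree: each widget accepts a number of possible inputs.
widgets = [
    {"name": "file_menu",   "inputs": 3},   # e.g. Open, Save, Exit
    {"name": "font_list",   "inputs": 10},  # pull-down list: one selected value
    {"name": "zoom_slider", "inputs": 5},
]

GENE_LENGTH = 6  # alternating widget-selector and input-selector alleles

def random_gene():
    """A gene is a fixed-length list of integers decoded against the widget tree."""
    return [random.randint(0, 1000) for _ in range(GENE_LENGTH)]

def decode(gene):
    """Decode allele pairs into (widget, input) steps, i.e. a path through the GUI."""
    path = []
    for i in range(0, len(gene), 2):
        widget = widgets[gene[i] % len(widgets)]
        chosen_input = gene[i + 1] % widget["inputs"]
        path.append((widget["name"], chosen_input))
    return path

def fitness(gene):
    """Placeholder score; a real criterion would reward novice-like behavior."""
    return len(set(decode(gene)))  # here: simply prefer paths that vary more

def evolve(population_size=20, generations=30):
    population = [random_gene() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [random_gene() for _ in range(population_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(decode(best))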
The system to do this testing described in
[7]
can be extended to any windowing system but is described using the X Window System. The X Window System
provides functionality (via XServer and the editor's protocol) to dynamically
send GUI input to and get GUI output from the program without directly using the GUI. For example, one can call
XSendEvent() to simulate a click on a pull-down menu, and so forth. This system allows researchers to automate the
gene creation and testing so that, for any given application under test, a set of novice user test cases can be created.
Running the test cases
At first, strategies were migrated and adapted from CLI testing. A popular method used in the CLI environment is
capture/playback, in which the system screen is captured as a bitmapped graphic at various times during system
testing. Capturing allows the tester to play back the testing process and compare the screens produced at the output
phase of the test with the expected screens. This validation can be automated, since the screens will be identical if
the case passes and different if the case fails.
Using capture/playback worked quite well in the CLI world but there are significant problems when one tries to
implement it on a GUI-based system.
[8]
The most obvious problem one finds is that the screen in a GUI system may
look different while the state of the underlying system is the same, making automated validation extremely difficult.
This is because a GUI allows graphical objects to vary in appearance and placement on the screen. Fonts may be
different, window colors or sizes may vary but the system output is basically the same. This would be obvious to a
user, but not obvious to an automated validation system.
To combat this and other problems, testers have gone under the hood and collected GUI interaction data from the
underlying windowing system.
[9]
By capturing the window events into logs, the interactions with the system are now in a format that is decoupled
from the appearance of the GUI; only the event streams are captured. Some filtering of the event streams is
necessary, since the streams of events are usually very detailed and most events aren't directly relevant to the
problem. This approach can be made easier by using, for example, an MVC architecture and making the view (i.e.
the GUI) as simple as possible while the model and the controller hold all the logic.
Another approach is to use the software's built-in assistive technology, to use an HTML interface or a three-tier
architecture that makes it also possible to better separate the user interface from the rest of the application.
Another way to run tests on a GUI is to build a driver into the GUI so that commands or events can be sent to the
software from another program.
[7]
This method of directly sending events to and receiving events from a system is
highly desirable when testing, since the input and output testing can be fully automated and user error is eliminated.
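The sketch below illustrates the driver idea in the abstract. The application class, event names, and state query are
hypothetical and do not correspond to any real windowing API; the point is that the test sends events directly to the
application and validates its reported state rather than comparing screen images.

class WordProcessorUnderTest:
    """Hypothetical application exposing a test-driver interface."""
    def __init__(self):
        self.open_files = []
        self.events = []
        self.pending_dialog = None

    def send_event(self, event, **args):
        # Events arrive here exactly as they would from the real GUI toolkit.
        self.events.append((event, args))
        if event == "menu_select" and args.get("item") == "Open":
            self.pending_dialog = "open_file"
        elif event == "dialog_ok" and self.pending_dialog == "open_file":
            self.open_files.append(args["filename"])
            self.pending_dialog = None

    def state(self):
        return {"open_files": list(self.open_files)}

def test_open_file():
    app = WordProcessorUnderTest()
    app.send_event("menu_select", menu="File", item="Open")
    app.send_event("dialog_ok", filename="report.txt")
    # Validation against the event/state stream, not against screen bitmaps.
    assert app.state() == {"open_files": ["report.txt"]}

if __name__ == "__main__":
    test_open_file()
    print("open-file scenario passed")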
References
[1] Atif M. Memon, M.E. Pollack and M.L. Soffa. Using a Goal-driven Approach to Generate Test Cases for GUIs. ICSE '99 Proceedings of the
21st International Conference on Software Engineering.
[2] J.M. Clarke. Automated test generation from a Behavioral Model. In Proceedings of Pacific Northwest Software Quality Conference. IEEE
Press, May 1998.
[3] S. Esmelioglu and L. Apfelbaum. Automated Test generation, execution and reporting. In Proceedings of Pacific Northwest Software Quality
Conference. IEEE Press, October 1997.
[4] A. Howe, A. von Mayrhauser and R.T. Mraz. Test case generation as an AI planning problem. Automated Software Engineering, 4:77-106,
1997.
[5] Hierarchical GUI Test Case Generation Using Automated Planning by Atif M. Memon, Martha E. Pollack, and Mary Lou Soffa. IEEE
Trans. Softw. Eng., vol. 27, no. 2, 2001, pp. 144-155, IEEE Press.
[6] J. Koehler, B. Nebel, J. Hoffman and Y. Dimopoulos. Extending planning graphs to an ADL subset. Lecture Notes in Computer Science,
1348:273, 1997.
[7] D.J. Kasik and H.G. George. Toward automatic generation of novice user test scripts. In M.J. Tauber, V. Bellotti, R. Jeffries, J.D. Mackinlay,
and J. Nielsen, editors, Proceedings of the Conference on Human Factors in Computing Systems: Common Ground, pages 244-251, New
York, 13-18 April 1996, ACM Press. (http:/ / www. sigchi. org/ chi96/ proceedings/ papers/ Kasik/ djk_txt. htm)
[8] L.R. Kepple. The black art of GUI testing. Dr. Dobb's Journal of Software Tools, 19(2):40, Feb. 1994.
[9] M.L. Hammontree, J.J. Hendrickson and B.W. Hensley. Integrated data capture and analysis tools for research and testing on graphical user
interfaces. In P. Bauersfeld, J. Bennett and G. Lynch, editors, Proceedings of the Conference on Human Factors in Computing Systems, pages
431-432, New York, NY, USA, May 1992. ACM Press.
Usability testing
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users.
This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.
[1]
This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface
without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of
products that commonly benefit from usability testing are foods, consumer products, web sites or web applications,
computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific
object or set of objects, whereas general human-computer interaction studies attempt to formulate universal
principles.
History of usability testing
Henry Dreyfuss in the late 1940s contracted to design the state rooms for the twin ocean liners "Independence" and
"Constitution." He built eight prototype staterooms and installed them in a warehouse. He then brought in a series of
travelers to "live" in the rooms for a short time, bringing with them all items they would normally take when
cruising. His people were able to discover over time, for example, if there was space for large steamer trunks, if light
switches needed to be added beside the beds to prevent injury, etc., before hundreds of state rooms had been built
into the ship.
[2]
A Xerox Palo Alto Research Center (PARC) employee wrote that PARC used extensive usability testing in creating
the Xerox Star, introduced in 1981.
[3]
The Inside Intuit book says (page 22, 1984), "... in the first instance of the Usability Testing that later became
standard industry practice, LeFevre recruited people off the streets... and timed their Kwik-Chek (Quicken) usage
with a stopwatch. After every test... programmers worked to improve the program."
[4]
Scott Cook, Intuit co-founder, said, "... we did usability testing in 1984, five years before anyone else... there's a very
big difference between doing it and having marketing people doing it as part of their... design... a very big difference
between doing it and having it be the core of what engineers focus on."
[5]
Goals of usability testing
Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors
and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four
areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or
control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.
Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For example,
find something to buy, create a new account, and order the item.)
Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right
information?)
Recall -- How much does the person remember afterwards or after periods of non-use?
Emotional response -- How does the person feel about the tasks completed? Is the person confident, stressed?
Would the user recommend this system to a friend?
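As a hedged illustration of how these four measures might be summarized for one round of testing (the session
fields and values below are invented), a simple aggregation could look like this:

from statistics import mean

# Illustrative raw observations from one round of usability test sessions.
sessions = [
    {"task_seconds": 140, "steps": 9,  "errors": 1, "fatal_errors": 0, "recall_score": 0.8, "satisfaction": 4},
    {"task_seconds": 210, "steps": 14, "errors": 3, "fatal_errors": 1, "recall_score": 0.6, "satisfaction": 2},
    {"task_seconds": 95,  "steps": 7,  "errors": 0, "fatal_errors": 0, "recall_score": 0.9, "satisfaction": 5},
]

def summarize(sessions):
    """Aggregate efficiency, accuracy, recall and emotional response for a round,
    so later rounds can be compared against this baseline."""
    return {
        "efficiency_mean_seconds": mean(s["task_seconds"] for s in sessions),
        "efficiency_mean_steps":   mean(s["steps"] for s in sessions),
        "accuracy_mean_errors":    mean(s["errors"] for s in sessions),
        "accuracy_fatal_errors":   sum(s["fatal_errors"] for s in sessions),
        "recall_mean_score":       mean(s["recall_score"] for s in sessions),
        "emotional_mean_rating":   mean(s["satisfaction"] for s in sessions),
    }

if __name__ == "__main__":
    for metric, value in summarize(sessions).items():
        print(f"{metric}: {value:.2f}")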
To assess the usability of the system under usability testing, quantitative and/or qualitative Usability goals (also
called usability requirements
[6]
) have to be defined beforehand.
[7][6][8]
If the results of the usability testing meet the usability goals, the system can be considered usable for the end-users
whose representatives have tested it.
What usability testing is not
Simply gathering opinions on an object or document is market research or qualitative research rather than usability
testing. Usability testing usually involves systematic observation under controlled conditions to determine how well
people can use the product.
[9]
However, often both qualitative and usability testing are used in combination, to better
understand users' motivations/perceptions, in addition to their actions.
Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching
people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy,
the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the
parts and materials, they are asked to put the toy together. Instruction phrasing, illustration quality, and the toy's
design all affect the assembly process.
Methods
Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a
list of tasks using the product being tested while observers watch and take notes. Several other test instruments such
as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on
the product being tested. For example, to test the attachment function of an e-mail program, a scenario would
describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task.
The aim is to observe how people function in a realistic manner, so that developers can see problem areas, and what
people like. Techniques popularly used to gather data during a usability test include think aloud protocol,
Co-discovery Learning and eye tracking.
Hallway testing
Hallway testing (or Hall Intercept Testing) is a general methodology of usability testing. Rather than using an
in-house, trained group of testers, just five to six random people are brought in to test the product or service. The
name of the technique refers to the fact that the testers should be random people who pass by in the hallway.
[10]
Hallway testing is particularly effective in the early stages of a new design when the designers are looking for "brick
walls," problems so serious that users simply cannot advance. Anyone of normal intelligence other than designers
and engineers can be used at this point. (Both designers and engineers immediately turn from being test subjects into
being "expert reviewers." They are often too close to the project, so they already know how to accomplish the task,
thereby missing ambiguities and false paths.)
Remote Usability Testing
In a scenario where usability evaluators, developers and prospective users are located in different countries and time
zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical
perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators
separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's
other tasks and technology, can be either synchronous or asynchronous. Synchronous usability testing methodologies
involve video conferencing or employ remote application sharing tools such as WebEx. The former involves real
time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user
working separately.
[11]
Asynchronous methodologies include automatic collection of users' click streams, user logs of critical incidents that
occur while interacting with the application and subjective feedback on the interface by users.
[12]
Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platforms allow clicks and
task times to be captured. Hence, for many large companies this makes it possible to understand why visitors behave
as they do when visiting a website or mobile site. Additionally, this style of user testing provides an opportunity to
segment feedback by demographic, attitudinal and behavioural type. The tests are carried out in the user's own
environment (rather than in a lab), helping further simulate real-life scenario testing. This approach also provides a
vehicle to easily solicit feedback from users in remote areas quickly and with lower organisational overheads.
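As a rough, hypothetical illustration of the asynchronous style (the logger class, event format, and field names are
invented), a participant-side recorder might capture clicks, task context, and critical-incident reports for later upload
and analysis:

import json
import time

class AsyncUsabilityLogger:
    """Illustrative client-side recorder for an asynchronous remote test:
    captures clicks, task context and free-text incident reports."""
    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.records = []

    def log_click(self, element, task):
        self.records.append({"type": "click", "element": element,
                             "task": task, "timestamp": time.time()})

    def log_incident(self, description, task):
        self.records.append({"type": "critical_incident", "text": description,
                             "task": task, "timestamp": time.time()})

    def export(self):
        # Uploaded later to the evaluators, who analyse it in their own time zone.
        return json.dumps({"participant": self.participant_id,
                           "events": self.records}, indent=2)

if __name__ == "__main__":
    log = AsyncUsabilityLogger("P-07")
    log.log_click("search_box", task="find a product")
    log.log_incident("Could not find the checkout button", task="buy the product")
    print(log.export())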
Numerous tools are available to address the needs of both these approaches. WebEx and Go-to-meeting are the most
commonly used technologies to conduct a synchronous remote usability test.
[13]
However, synchronous remote
testing may lack the immediacy and sense of presence desired to support a collaborative testing process. Moreover,
managing inter-personal dynamics across cultural and linguistic barriers may require approaches sensitive to the
cultures involved. Other disadvantages include having reduced control over the testing environment and the
distractions and interruptions experienced by the participants in their native environment.
[14]
One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.
[15]
Expert review
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing
in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the
usability of a product.
Automated expert review
Similar to expert reviews, automated expert reviews provide usability testing but through the use of programs
given rules for good design and heuristics. Though automated reviews might not provide as much detail and insight
as reviews from people, they can be finished more quickly and more consistently. The idea of creating surrogate
users for usability testing is an ambitious direction for the Artificial Intelligence community.
How many users to test?
In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using
numerous small usability tests, typically with only five test subjects each, at various stages of the development
process. His argument is that, once it is found that two or three people are totally confused by the home page, little is
gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of
resources. The best results come from testing no more than five users and running as many small tests as you can
afford."
[10]
Nielsen subsequently published his research and coined the term heuristic evaluation.
The claim of "Five users is enough" was later described by a mathematical model
[16]
which states for the proportion
of uncovered problems U
where p is the probability of one subject identifying a specific problem and n the number of subjects (or test
sessions). This model shows up as an asymptotic graph towards the number of real existing problems (see figure
below).
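A minimal computation of this model, assuming the per-subject detection probability p = 0.31 that is often quoted in
the usability literature, shows why returns diminish quickly after a handful of users:

def proportion_found(p, n):
    """Proportion of usability problems uncovered after n subjects, assuming each
    subject independently finds a given problem with probability p."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.31  # assumed detection probability, often quoted in the literature
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> {proportion_found(p, n):.0%} of problems found")
    # About 84% of problems are found by 5 users under this assumption.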
In later research, Nielsen's claim has been questioned with both empirical evidence
[17]
and more advanced
mathematical models.
[18]
Two key challenges to this assertion are:
1. Since usability is related to the specific set of users, such a small sample size is unlikely to be representative of
the total population so the data from such a small sample is more likely to reflect the sample group than the
population they may represent
2. Not every usability problem is equally easy to detect. Hard-to-detect problems slow down the overall process, and
under these circumstances the progress of the process is much shallower than predicted by the Nielsen/Landauer
formula.
[19]
It is worth noting that Nielsen does not advocate stopping after a single test with five users; his point is that testing
with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better
use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice
per week during the entire development cycle, using three to five test subjects per round, and with the results
delivered within 24 hours to the designers. The number of users actually tested over the course of the project can
thus easily reach 50 to 100 people.
In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks,
almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects
across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any
design, from the first to the last, while naive users and self-identified power users both failed repeatedly.
[20]
Later on,
as the design smooths out, users should be recruited from the target population.
When the method is applied to a sufficient number of people over the course of a project, the objections raised above
become addressed: The sample size ceases to be small and usability problems that arise with only occasional users
are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen
again because they are immediately eliminated, while the parts that appear successful are tested over and over. While
it's true that the initial problems in the design may be tested by only five users, when the method is properly applied,
the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.
References
[1] Nielsen, J. (1994). Usability Engineering, Academic Press Inc, p. 165
[2] NN/G Usability Week 2011 Conference "Interaction Design" Manual, Bruce Tognazzini, Nielsen Norman Group, 2011
[3] http:/ / interactions.acm. org/ content/ XV/ baecker. pdf
[4] http:/ / books. google. com/ books?id=lRs_4U43UcEC& printsec=frontcover&
sig=ACfU3U1xvA7-f80TP9Zqt9wkB9adVAqZ4g#PPA22,M1
[5] http:/ / news.zdnet. co. uk/ itmanagement/ 0,1000000308,2065537,00. htm
[6] International Organization for Standardization. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive
systems (Rep N9241-210). 2010, International Organization for Standardization
[7] Nielsen, Usability Engineering, 1994
[8] Mayhew. The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. London, Academic Press; 1999
[9] http:/ / jerz.setonhill. edu/ design/ usability/ intro.htm
[10] http:/ / www.useit. com/ alertbox/ 20000319. html
[11] Andreasen, Morten Sieker; Nielsen, Henrik Villemann; Schrøder, Simon Ormholt; Stage, Jan (2007). "What happened to remote usability
testing?". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '07. p. 1405. doi:10.1145/1240624.1240838.
ISBN 9781595935939.
[12] Dray, Susan; Siegel, David (2004). "Remote possibilities?". Interactions 11 (2): 10. doi:10.1145/971258.971264.
[13] http:/ / www.boxesandarrows. com/ view/ remote_online_usability_testing_why_how_and_when_to_use_it
[14] Dray, Susan; Siegel, David (March 2004). "Remote possibilities?: international usability testing at a distance". Interactions 11 (2): 10-17.
doi:10.1145/971258.971264.
[15] Chalil Madathil, Kapil; Joel S. Greenstein (May 2011). "Synchronous remote usability testing: a new approach facilitated by virtual worlds".
Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems. CHI '11: 2225-2234. doi:10.1145/1978942.1979267.
ISBN 9781450302289.
[16] Virzi, R.A., Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? Human Factors, 1992. 34(4): p. 457-468.
[17] http:/ / citeseer.ist.psu.edu/ spool01testing. html
[18] Caulton, D.A., Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 2001. 20(1): p. 1-7
[19] Schmettow, Heterogeneity in the Usability Evaluation Process. In: M. England, D. & Beale, R. (ed.), Proceedings of the HCI 2008, British
Computing Society, 2008, 1, 89-98
[20] Bruce Tognazzini. "Maximizing Windows" (http:/ / www. asktog. com/ columns/ 000maxscrns. html).
External links
Usability.gov (http:/ / www. usability. gov/ )
A Brief History of the Magic Number 5 in Usability Testing (http:/ / www. measuringusability. com/ blog/
five-history. php)
Think aloud protocol
Think-aloud protocol (or think-aloud protocols, or TAP; also talk-aloud protocol) is a method used to gather data
in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading,
writing, translation research, decision making and process tracing). The think-aloud method was introduced in the
usability field by Clayton Lewis
[1]
while he was at IBM, and is explained in Task-Centered User Interface Design:
A Practical Introduction by C. Lewis and J. Rieman.
[2]
The method was developed based on the techniques of
protocol analysis by Ericsson and Simon.
[3][4][5]
Think aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Users are
asked to say whatever they are looking at, thinking, doing, and feeling, as they go about their task. This enables
observers to see first-hand the process of task completion (rather than only its final product). Observers at such a test
are asked to objectively take notes of everything that users say, without attempting to interpret their actions and
words. Test sessions are often audio and video recorded so that developers can go back and refer to what participants
did, and how they reacted. The purpose of this method is to make explicit what is implicitly present in subjects who
are able to perform a specific task.
A related but slightly different data-gathering method is the talk-aloud protocol. This involves participants only
describing their action but not giving explanations. This method is thought to be more objective in that participants
merely report how they go about completing a task rather than interpreting or justifying their actions (see the
standard works by Ericsson & Simon).
As Kuusela and Paul
[6]
state, the think-aloud protocol can be divided into two different experimental procedures: the first is the concurrent
think-aloud protocol, collected during the decision task; the second is the retrospective think-aloud protocol,
gathered after the decision task.
References
[1] Lewis, C. H. (1982). Using the "Thinking Aloud" Method In Cognitive Interface Design (Technical report). RC-9265.
[2] http:/ / grouplab.cpsc. ucalgary. ca/ saul/ hci_topics/ tcsd-book/ chap-1_v-1. html Task-Centered User Interface Design: A Practical
Introduction, by Clayton Lewis and John Rieman.
[3] Ericsson, K., & Simon, H. (May 1980). "Verbal reports as data". Psychological Review 87 (3): 215-251. doi:10.1037/0033-295X.87.3.215.
[4] Ericsson, K., & Simon, H. (1987). "Verbal reports on thinking". In C. Faerch & G. Kasper (eds.). Introspection in Second Language
Research. Clevedon, Avon: Multilingual Matters. pp. 24-54.
[5] Ericsson, K., & Simon, H. (1993). Protocol Analysis: Verbal Reports as Data (2nd ed.). Boston: MIT Press. ISBN 0-262-05029-3.
[6] Kuusela, H., & Paul, P. (2000). "A comparison of concurrent and retrospective verbal protocol analysis". American Journal of Psychology
(University of Illinois Press) 113 (3): 387-404. doi:10.2307/1423365. JSTOR 1423365. PMID 10997234.
Usability inspection
Usability inspection is the name for a set of methods where an evaluator inspects a user interface. This is in contrast
to usability testing where the usability of the interface is evaluated by testing it on real users. Usability inspections
can generally be used early in the development process by evaluating prototypes or specifications for the system that
can't be tested on users. Usability inspection methods are generally considered to be cheaper to implement than
testing on users.
[1]
Usability inspection methods include:
Cognitive walkthrough (task-specific)
Heuristic evaluation (holistic)
Pluralistic walkthrough
References
[1] Nielsen, Jakob. Usability Inspection Methods. New York, NY: John Wiley and Sons, 1994
External links
Summary of Usability Inspection Methods (http:/ / www. useit. com/ papers/ heuristic/ inspection_summary.
html)
Cognitive walkthrough
The cognitive walkthrough method is a usability inspection method used to identify usability issues in a piece of
software or web site, focusing on how easy it is for new users to accomplish tasks with the system. Whereas
cognitive walkthrough is task-specific, heuristic evaluation takes a holistic view to catch problems not caught by
this and other usability inspection methods. The method is rooted in the notion that users typically prefer to learn a
system by using it to accomplish tasks, rather than, for example, studying a manual. The method is prized for its
ability to generate results quickly with low cost, especially when compared to usability testing, as well as the ability
to apply the method early in the design phases, before coding has even begun.
Introduction
A cognitive walkthrough starts with a task analysis that specifies the sequence of steps or actions required by a user
to accomplish a task, and the system responses to those actions. The designers and developers of the software then
walk through the steps as a group, asking themselves a set of questions at each step. Data is gathered during the
walkthrough, and afterwards a report of potential issues is compiled. Finally the software is redesigned to address the
issues identified.
The effectiveness of methods such as cognitive walkthroughs is hard to measure in applied settings, as there is very
limited opportunity for controlled experiments while developing software. Typically measurements involve
comparing the number of usability problems found by applying different methods. However, Gray and Salzman
called into question the validity of those studies in their dramatic 1998 paper "Damaged Merchandise",
demonstrating how very difficult it is to measure the effectiveness of usability inspection methods. The consensus in
the usability community is that the cognitive walkthrough method works well in a variety of settings and
applications.
Walking through the tasks
After the task analysis has been made, the participants perform the walkthrough by asking themselves a set of
questions for each subtask. Typically four questions are asked
[1]
:
Will the user try to achieve the effect that the subtask has? Does the user understand that this subtask is
needed to reach the user's goal?
Will the user notice that the correct action is available? E.g. is the button visible?
Will the user understand that the wanted subtask can be achieved by the action? E.g. the right button is
visible but the user does not understand the text and will therefore not click on it.
Does the user get feedback? Will the user know that they have done the right thing after performing the action?
By answering these questions for each subtask, usability problems will be noticed.
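One lightweight way to capture the outcome is to record the four answers for each subtask and flag any negative
answer as a potential usability problem. The sketch below is illustrative only; the subtask names and report format
are invented.

QUESTIONS = [
    "Will the user try to achieve the effect of the subtask?",
    "Will the user notice that the correct action is available?",
    "Will the user understand that the subtask is achieved by the action?",
    "Does the user get feedback after performing the action?",
]

def record_subtask(subtask, answers, notes=""):
    """answers is a list of four booleans, one per question above."""
    problems = [QUESTIONS[i] for i, ok in enumerate(answers) if not ok]
    return {"subtask": subtask, "problems": problems, "notes": notes}

if __name__ == "__main__":
    report = [
        record_subtask("Open the File menu", [True, True, True, True]),
        record_subtask("Attach a document", [True, False, True, True],
                       notes="Paper-clip icon not recognised as 'attach'"),
    ]
    for entry in report:
        status = "OK" if not entry["problems"] else f"ISSUES: {entry['problems']}"
        print(f"{entry['subtask']}: {status}")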
Common mistakes
In teaching people to use the walkthrough method, Lewis & Rieman have found that there are two common
misunderstandings
[2]
:
1. The evaluator doesn't know how to perform the task themselves, so they stumble through the interface trying to
discover the correct sequence of actions -- and then they evaluate the stumbling process. (The user should identify
and perform the optimal action sequence.)
2. The walkthrough does not test real users on the system. The walkthrough will often identify many more problems
than you would find with a single, unique user in a single test session.
History
The method was developed in the early nineties by Wharton, et al., and reached a large usability audience when it
was published as a chapter in Jakob Nielsen's seminal book on usability, "Usability Inspection Methods." The
Wharton, et al. method required asking four questions at each step, along with extensive documentation of the
analysis. In 2000 there was a resurgence in interest in the method in response to a CHI paper by Spencer who
described modifications to the method to make it effective in a real software development setting. Spencer's
streamlined method required asking only two questions at each step, and involved creating less documentation.
Spencer's paper followed the example set by Rowley, et al. who described the modifications to the method that they
made based on their experience applying the methods in their 1992 CHI paper "The Cognitive Jogthrough".
References
[1] C. Wharton et al. "The cognitive walkthrough method: a practitioner's guide" in J. Nielsen & R. Mack "Usability Inspection Methods" pp.
105-140.
[2] http:/ / hcibib.org/ tcuid/ chap-4.html#4-1
Further reading
Blackmon, M. H., Polson, P.G., Muneo, K. & Lewis, C. (2002) Cognitive Walkthrough for the Web. CHI 2002
vol. 4 no. 1 pp. 463-470
Blackmon, M. H., Polson, P.G., Kitajima, M. (2003) Repairing Usability Problems Identified by the Cognitive
Walkthrough for the Web. CHI 2003 (http:/ / idisk. mac. com/ mkitajima-Public/ english/ papers-e/
LSA-Handbook-Ch18. pdf) pp. 497-504.
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-computer interaction (3rd ed.). Harlow, England:
Pearson Education Limited. p. 321.
Gabrielli, S., Mirabella, V., Kimani, S., Catarci, T. (2005) Supporting Cognitive Walkthrough with Video Data: A
Mobile Learning Evaluation Study. MobileHCI '05 pp. 77-82.
Goillau, P., Woodward, V., Kelly, C. & Banks, G. (1998) Evaluation of virtual prototypes for air traffic control -
the MACAW technique. In, M. Hanson (Ed.) Contemporary Ergonomics 1998.
Good, N. S. & Krekelberg, A. (2003) Usability and Privacy: a study of KaZaA P2P file-sharing CHI 2003 Vol.5
no. 1 pp. 137-144.
Gray, W. & Salzman, M. (1998). Damaged merchandise? A review of experiments that compare usability
evaluation methods, Human-Computer Interaction vol.13 no.3, 203-61.
Gray, W.D. & Salzman, M.C. (1998) Repairing Damaged Merchandise: A rejoinder. Human-Computer
Interaction vol. 13 no. 3 pp. 325-335.
Hornbaek, K. & Frokjaer, E. (2005) Comparing Usability Problems and Redesign Proposal as Input to Practical
Systems Development CHI 2005 391-400.
Jeffries, R. Miller, J. R. Wharton, C. Uyeda, K. M. (1991) User Interface Evaluation in the Real World: A
comparison of Four Techniques. Conference on Human Factors in Computing Systems pp. 119-124
Lewis, C. Polson, P, Wharton, C. & Rieman, J. (1990) Testing a Walkthrough Methodology for Theory-Based
Design of Walk-Up-and-Use Interfaces. CHI '90 Proceedings pp. 235-242.
Mahatody, Thomas / Sagar, Mouldi / Kolski, Christophe (2010). State of the Art on the Cognitive Walkthrough
Method, Its Variants and Evolutions, International Journal of Human-Computer Interaction, 2, 8 741-785.
Rowley, David E., and Rhoades, David G (1992). The Cognitive Jogthrough: A Fast-Paced User Interface
Evaluation Procedure. Proceedings of CHI '92, 389-395.
Sears, A. (1998) The Effect of Task Description Detail on Evaluator Performance with Cognitive Walkthroughs
CHI 1998 pp. 259-260.
Spencer, R. (2000) The Streamlined Cognitive Walkthrough Method, Working Around Social Constraints
Encountered in a Software Development Company. CHI 2000 vol. 2 issue 1 pp. 353-359.
Wharton, C. Bradford, J. Jeffries, J. Franzke, M. Applying Cognitive Walkthroughs to more Complex User
Interfaces: Experiences, Issues and Recommendations. CHI '92 pp. 381-388.
External links
Cognitive Walkthrough (http:/ / www. pages. drexel. edu/ ~zwz22/ CognWalk. htm)
Heuristic evaluation
A heuristic evaluation is a discount usability inspection method for computer software that helps to identify
usability problems in the user interface (UI) design. It specifically involves evaluators examining the interface and
judging its compliance with recognized usability principles (the "heuristics"). These evaluation methods are now
widely taught and practiced in the New Media sector, where UIs are often designed in a short space of time on a
budget that may restrict the amount of money available to provide for other types of interface testing.
Introduction
The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces.
Usability consultant Jakob Nielsen developed this method on the basis of several years of experience in teaching and
consulting about usability engineering.
Heuristic evaluations are one of the most informal methods
[1]
of usability inspection in the field of human-computer
interaction. There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the
same aspects of user interface design.
Quite often, usability problems that are discovered are categorized, often on a numeric scale, according to their
estimated impact on user performance or acceptance. The heuristic evaluation is frequently conducted in the context
of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to
be compatible with the intended users' needs and preferences.
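In practice, the findings of such an evaluation are often collected as simple records that pair the violated heuristic
with a severity rating. The sketch below is a hypothetical illustration; the 0-4 scale and the field names are
assumptions, not a standardized format.

from dataclasses import dataclass, field
from typing import List

SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "usability catastrophe"}

@dataclass
class Finding:
    location: str          # screen or use case where the problem was seen
    heuristic: str         # which usability principle is violated
    description: str
    severity: int          # 0-4, estimated impact on the user

@dataclass
class HeuristicEvaluation:
    evaluator: str
    findings: List[Finding] = field(default_factory=list)

    def worst_first(self):
        return sorted(self.findings, key=lambda f: f.severity, reverse=True)

if __name__ == "__main__":
    review = HeuristicEvaluation("Evaluator A")
    review.findings.append(Finding("Checkout page", "Visibility of system status",
                                   "No progress indicator while payment is processed", 3))
    review.findings.append(Finding("Settings dialog", "Consistency and standards",
                                   "'OK' and 'Apply' buttons swap position between tabs", 2))
    for f in review.worst_first():
        print(f"[{SEVERITY[f.severity]}] {f.location}: {f.description}")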
The simplicity of heuristic evaluation is beneficial at the early stages of design. This usability inspection method
does not require user testing which can be burdensome due to the need for users, a place to test them and a payment
for their time. Heuristic evaluation requires only one expert, reducing the complexity and expended time for
evaluation. Most heuristic evaluations can be accomplished in a matter of days. The time required varies with the
size of the artifact, its complexity, the purpose of the review, the nature of the usability issues that arise in the
review, and the competence of the reviewers. Using heuristic evaluation prior to user testing will reduce the number
and severity of design errors discovered by users. Although heuristic evaluation can uncover many major usability
issues in a short period of time, a criticism that is often leveled is that results are highly influenced by the knowledge
of the expert reviewer(s). This one-sided review repeatedly yields different results from software performance testing,
each type of testing uncovering a different set of problems.
Nielsen's heuristics
Jakob Nielsen's heuristics are probably the most-used usability heuristics for user interface design. Nielsen
developed the heuristics based on work together with Rolf Molich in 1990.
[1][2]
The final set of heuristics that is still used today was released by Nielsen in 1994.
[3]
The heuristics as published in Nielsen's book Usability
Engineering are as follows
[4]
Visibility of system status:
The system should always keep users informed about what is going on, through appropriate feedback within
reasonable time.
Match between system and the real world:
The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than
system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom:
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the
unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards:
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow
platform conventions.
Error prevention:
Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
Either eliminate error-prone conditions or check for them and present users with a confirmation option before they
commit to the action.
Recognition rather than recall:
Minimize the user's memory load by making objects, actions, and options visible. The user should not have to
remember information from one part of the dialogue to another. Instructions for use of the system should be visible
or easily retrievable whenever appropriate.
Flexibility and efficiency of use:
Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the
system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design:
Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a
dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors:
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively
suggest a solution.
Help and documentation:
Even though it is better if the system can be used without documentation, it may be necessary to provide help and
documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be
carried out, and not be too large.
Gerhardt-Powals' cognitive engineering principles
Although Nielsen is considered the expert and field leader in heuristics, Jill Gerhardt-Powals
[5]
also developed a set
of cognitive principles for enhancing computer performance.
[6]
These heuristics, or principles, are similar to
Nielsen's heuristics but take a more holistic approach to evaluation. Gerhardt-Powals' principles
[7]
are listed below.
Automate unwanted workload:
free cognitive resources for high-level tasks.
eliminate mental calculations, estimations, comparisons, and unnecessary thinking.
Reduce uncertainty:
display data in a manner that is clear and obvious.
Fuse data:
reduce cognitive load by bringing together lower level data into a higher-level summation.
Present new information with meaningful aids to interpretation:
use a familiar framework, making it easier to absorb.
use everyday terms, metaphors, etc.
Use names that are conceptually related to function:
Context-dependent.
Attempt to improve recall and recognition.
Group data in consistently meaningful ways to decrease search time.
Limit data-driven tasks:
Reduce the time spent assimilating raw data.
Make appropriate use of color and graphics.
Include in the displays only that information needed by the user at a given time.
Provide multiple coding of data when appropriate.
Practice judicious redundancy.
Weinschenk and Barker classification
Susan Weinschenk and Dean Barker created a categorization of heuristics and guidelines by several major providers
into the following twenty types:
[8]
1. User Control: heuristics that check whether the user has enough control of the interface.
2. Human Limitations: the design takes into account human limitations, cognitive and sensorial, to avoid
overloading them.
3. Modal Integrity: the interface uses the most suitable modality for each task: auditory, visual, or
motor/kinesthetic.
4. Accommodation: the design is adequate to fulfill the needs and behaviour of each targeted user group.
5. Linguistic Clarity: the language used to communicate is efficient and adequate to the audience.
6. Aesthetic Integrity: the design is visually attractive and tailored to appeal to the target population.
7. Simplicity: the design will not use unnecessary complexity.
8. Predictability: users will be able to form a mental model of how the system will behave in response to actions.
9. Interpretation: there are codified rules that try to guess the user intentions and anticipate the actions needed.
10. Accuracy: There are no errors, i.e. the result of user actions correspond to their goals.
11. Technical Clarity: the concepts represented in the interface have the highest possible correspondence to the
domain they are modeling.
12. Flexibility: the design can be adjusted to the needs and behaviour of each particular user.
13. Fulfillment: the user experience is adequate.
14. Cultural Propriety: user's cultural and social expectations are met.
15. Suitable Tempo: the pace at which users works with the system is adequate.
16. Consistency: different parts of the system have the same style, so that there are no different ways to represent the
same information or behavior.
17. User Support: the design will support learning and provide the required assistance to usage.
18. Precision: the steps and results of a task will be what the user wants.
19. Forgiveness: the user will be able to recover to an adequate state after an error.
20. Responsiveness: the interface provides enough feedback information about the system status and the task
completion.
References
[1] Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI'90 Conf. (Seattle, WA, 1-5 April), 249-256
[2] Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue, Communications of the ACM 33, 3 (March), 338-348
[3] Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods, John Wiley & Sons, New York,
NY
[4] Nielsen, Jakob (1994). Usability Engineering. San Diego: Academic Press. pp. 115-148. ISBN 0-12-518406-9.
[5] http:/ / loki. stockton.edu/ ~gerhardj/
[6] Gerhardt-Powals, Jill (http:/ / loki.stockton.edu/ ~gerhardj/ ) (1996). "Cognitive engineering principles for enhancing human - computer
performance". International Journal of Human-Computer Interaction 8 (2): 189-211.
[7] Heuristic Evaluation - Usability Methods - What is a heuristic evaluation? (http:/ / usability. gov/ methods/ test_refine/ heuristic.
html#WhatisaHeuristicEvaluation) Usability.gov
[8] Jeff Sauro. "What's the difference between a Heuristic Evaluation and a Cognitive Walkthrough?" (http:/ / www. measuringusability. com/
blog/ he-cw. php). MeasuringUsability.com.
Further reading
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-computer interaction (3rd ed.). Harlow, England:
Pearson Education Limited. p. 324
Gerhardt-Powals, Jill (1996). Cognitive Engineering Principles for Enhancing Human-Computer Performance.
International Journal of Human-Computer Interaction, 8(2), 189-211
Hvannberg, E., Law, E., & Lárusdóttir, M. (2007) Heuristic Evaluation: Comparing Ways of Finding and
Reporting Usability Problems, Interacting with Computers, 19 (2), 225-240
Nielsen, J. and Mack, R.L. (eds) (1994). Usability Inspection Methods, John Wiley & Sons Inc
External links
Jakob Nielsen's introduction to Heuristic Evaluation (http:/ / www. useit. com/ papers/ heuristic/ ) - Including
fundamental points, methodologies and benefits.
Alternate First Principles (Tognazzini) (http:/ / www. asktog. com/ basics/ firstPrinciples. html) - Including Jakob
Nielsen's ten rules of thumb
Heuristic Evaluation at Usability.gov (http:/ / www. usability. gov/ methods/ test_refine/ heuristic. html)
Heuristic Evaluation in the RKBExplorer (http:/ / www. rkbexplorer. com/ explorer/ #display=mechanism-{http:/
/ resex. rkbexplorer. com/ id/ resilience-mechanism-4331d919})
Pluralistic walkthrough
The Pluralistic Walkthrough (also called a Participatory Design Review, User-Centered Walkthrough,
Storyboarding, Table-Topping, or Group Walkthrough) is a usability inspection method used to identify usability
issues in a piece of software or website in an effort to create a maximally usable human-computer interface. The
method centers around using a group of users, developers and usability professionals to step through a task scenario,
discussing usability issues associated with dialog elements involved in the scenario steps. The group of experts used
is asked to assume the role of typical users in the testing. The method is prized for its ability to be utilized at the
earliest design stages, enabling the resolution of usability issues quickly and early in the design process. The method
also allows for the detection of a greater number of usability problems to be found at one time due to the interaction
of multiple types of participants (users, developers and usability professionals). This type of usability inspection
method has the additional objective of increasing developers' sensitivity to users' concerns about the product design.
Procedure
Walkthrough Team
A walkthrough team must be assembled prior to the pluralistic walkthrough. Three types of participants are included
in the walkthrough: representative users, product developers and human factors (usability) engineers/professionals.
Users should be representative of the target audience, and are considered the primary participants in the usability
evaluation. Product developers answer questions about design and suggest solutions to interface problems users have
encountered. Human factors professionals usually serve as the facilitators and are also there to provide feedback on
the design as well as recommend design improvements. The role of the facilitator is to guide users through tasks and
facilitate collaboration between users and developers. It is best to avoid having a product developer assume the role
of facilitator, as they can become defensive in response to criticism of their product.
Materials
The following materials are needed to conduct a pluralistic walkthrough:
Room large enough to accommodate approximately 6-10 users, 6-10 developers and 2-3 usability engineers
Printed screen-shots (paper prototypes) put together in packets in the same order that the screens would be
displayed when users were carrying out the specific tasks. This includes hard copy panels of screens, dialog
boxes, menus, etc. presented in order.
Hard copy of the task scenario for each participant. There are several scenarios defined in this document complete
with the data to be manipulated for the task. Each participant receives a package that enables him or her to write a
response (i.e. the action to take on that panel) directly onto the page. The task descriptions for the participant are
short direct statements.
Writing utensils for marking up screen shots and filling out documentation and questionnaires.
Participants are given written instructions and rules at the beginning of the walkthrough session. The rules indicate to
all participants (users, designers, usability engineers) to:
Assume the role of the user
Write on the panels the actions they would take in pursuing the task at hand
Write any additional comments about the task
Not flip ahead to other panels until they are told to
Hold discussion on each panel until the facilitator decides to move on
Tasks
Pluralistic walkthroughs are group activities that require the following steps be followed:
1. Participants are presented with the instructions and the ground rules mentioned above. The task description and
scenario package are also distributed.
2. Next, a product expert (usually a product developer) gives a brief overview of key product concepts and interface
features. This overview serves the purpose of stimulating the participants to envision the ultimate final product
(software or website), so that the participants would gain the same knowledge and expectations of the ultimate
product that product end users are assumed to have.
3. The usability testing then begins. The scenarios are presented to the panel of participants and they are asked to
write down the sequence of actions they would take in attempting to complete the specified task (i.e. moving from
one screen to another). They do this individually without conferring amongst each other.
4. Once everyone has written down their actions independently, the participants discuss the actions that they
suggested for that task. They also discuss potential usability problems. The order of communication is usually
such that the representative users go first so that they are not influenced by the other panel members and are not
deterred from speaking.
5. After the users have finished, the usability experts present their findings to the group. The developers often
explain their rationale behind their design. It is imperative that the developers assume an attitude of welcoming
comments that are intended to improve the usability of their product.
6. The walkthrough facilitator presents the correct answer if the discussion is off course and clarifies any unclear
situations.
7. After each task, the participants are given a brief questionnaire regarding the usability of the interface they have
just evaluated.
8. Then the panel moves on to the next task and round of screens. This process continues until all the scenarios have
been evaluated.
Throughout this process, usability problems are identified and classified for future action. The presence of the
various types of participants in the group allows for a potential synergy to develop that often leads to creative and
collaborative solutions. This allows for a focus on user-centered perspective while also considering the engineering
constraints of practical system design.
Characteristics of Pluralistic Walkthrough
Other types of usability inspection methods include: Cognitive Walkthroughs, Interviews, Focus Groups, Remote
Testing and Think Aloud Protocol. Pluralistic Walkthroughs share some of the same characteristics with these other
traditional walkthroughs, especially with cognitive walkthroughs, but there are some defining characteristics
(Nielsen, 1994):
The main modification, with respect to usability walkthroughs, was to include three types of participants:
representative users, product developers, and human factors (usability) professionals.
Hard-copy screens (panels) are presented in the same order in which they would appear online. A task scenario is
defined, and participants confront the screens in a linear path, through a series of user interface panels, just as they
would during the successful conduct of the specified task online, as the site/software is currently designed.
Participants are all asked to assume the role of the user for whatever user population is being tested. Thus, the
developers and the usability professionals are supposed to try to put themselves in the place of the users when
making written responses.
The participants write down the action they would take in pursuing the designated task online, before any further
discussion is made. Participants are asked to write their responses in as much detail as possible down to the
keystroke or other input action level. These written responses allow for some production of quantitative data on
user actions that can be of value.
It is only after all participants have written the actions they would take that discussion would begin. The
representative users offer their discussion first and discuss each scenario step. Only after the users have exhausted
their suggestions do the usability experts and product developers offer their opinions.
Benefits and Limitations
Benefits
There are several benefits that make the pluralistic usability walkthrough a valuable tool.
Early, systematic look at a new product, gaining performance and satisfaction data from users before costly design strategies have been implemented.
Strong focus on user-centered design in task analysis, leading to more problems being identified at an earlier point in development. This shortens the iterative test-redesign cycle by using immediate feedback and discussion of design problems and possible solutions while users are present.
Synergistic redesign arising from the group process involving users, developers and usability engineers. Discussion of the identified problems in a multidisciplinary team spawns creative, usable and quick solutions.
Valuable quantitative and qualitative data generated through users' actions documented by their written responses.
Product developers at the session gain appreciation for common user problems, frustrations or concerns regarding the product design, and become more sensitive to users' concerns.
Limitations
There are several limitations to the pluralistic usability walkthrough that affect its usage.
The walkthrough can only progress as quickly as the slowest person on each panel. Because the walkthrough is a group exercise, discussing a task or screen as a group requires waiting until all participants have written down their responses to the scenario, and the session can feel laborious if the pace is too slow.
A fairly large group of users, developers and usability experts has to be assembled at the same time, so scheduling can be a problem.
Not all possible actions can be simulated on hard copy; only one viable path of interest is selected per scenario. This precludes participants from browsing and exploring, behaviors that often lead to additional learning about the user interface.
Product developers might not feel comfortable hearing criticism about their designs.
Only a limited number of scenarios (i.e. paths through the interface) can be explored due to time constraints.
Only a limited number of recommendations can be discussed due to time constraints.
Further reading
Dix, A., Finlay, J., Abowd, G. D., and Beale, R. Human-Computer Interaction (3rd ed.). Harlow, England: Pearson Education Limited, 2004.
Nielsen, Jakob. Usability Inspection Methods. New York, NY: John Wiley and Sons, 1994.
Preece, J., Rogers, Y., and Sharp, H. Interaction Design. New York, NY: John Wiley and Sons, 2002.
Bias, Randolph G., "The Pluralistic Usability Walkthrough: Coordinated Empathies," in Nielsen, Jakob, and Mack, R., eds., Usability Inspection Methods. New York, NY: John Wiley and Sons, 1994.
External links
List of Usability Evaluation Methods and Techniques [1]
Pluralistic Usability Walkthrough [2]
References
[1] http://www.usabilityhome.com/FramedLi.htm?PlurWalk.htm
[2] http://www.usabilitybok.org/methods/p2049
Comparison of usability evaluation methods
The following methods are compared by evaluation method type, applicable development stages, description, advantages and disadvantages.

Evaluation method: Think aloud protocol
Method type: Testing
Applicable stages: Design, coding, testing and release of application
Description: Participants in testing express their thoughts on the application while executing set tasks.
Advantages: Less expensive; results are close to what is experienced by users.
Disadvantages: The environment is not natural to the user.

Evaluation method: Remote usability testing
Method type: Testing
Applicable stages: Design, coding, testing and release of application
Description: The experimenter does not directly observe the users while they use the application, though activity may be recorded for subsequent viewing.
Advantages: Efficiency, effectiveness and satisfaction, the three usability issues, are covered.
Disadvantages: Additional software is necessary to observe the participants from a distance.

Evaluation method: Focus groups
Method type: Inquiry
Applicable stages: Testing and release of application
Description: A moderator guides a discussion with a group of users of the application.
Advantages: If done before prototypes are developed, can save money; produces a lot of useful ideas from the users themselves; can improve customer relations.
Disadvantages: The environment is not natural to the user and may provide inaccurate results; the data collected tends to have low validity due to the unstructured nature of the discussion.

Evaluation method: Interviews
Method type: Inquiry
Applicable stages: Design, coding, testing and release of application
Description: The users are interviewed to find out about their experience and expectations.
Advantages: Good at obtaining detailed information; few participants are needed; can improve customer relations.
Disadvantages: Cannot be conducted remotely; does not address the usability issue of efficiency.

Evaluation method: Cognitive walkthrough
Method type: Inspection
Applicable stages: Design, coding, testing and release of application
Description: A team of evaluators walks through the application, discussing usability issues through the use of a paper prototype or a working prototype.
Advantages: Good at refining requirements; does not require a fully functional prototype.
Disadvantages: Does not address user satisfaction or efficiency; the designer may not behave as the average user when using the application.

Evaluation method: Pluralistic walkthrough
Method type: Inspection
Applicable stages: Design
Description: A team of users, usability engineers and product developers review the usability of the paper prototype of the application.
Advantages: Usability issues are resolved faster; a greater number of usability problems can be found at one time.
Disadvantages: Does not address the usability issue of efficiency.
Source: Genise, Pauline. Usability Evaluation: Methods and Techniques, Version 2.0, August 28, 2002. University of Texas.
Article Sources and Contributors
261
Article Sources and Contributors
Software testing Source: http://en.wikipedia.org/w/index.php?oldid=509426689 Contributors: 0612, 144.132.75.xxx, 152.98.195.xxx, 166.46.99.xxx, 192.193.196.xxx, 212.153.190.xxx,
28bytes, 2D, 2mcm, 62.163.16.xxx, A Man In Black, A R King, A.R., A5b, AGK, Abdull, AbsolutDan, Academic Challenger, Acather96, Ad88110, Adam Hauner, Addihockey10, Ag2402,
Agopinath, Ahoerstemeier, Ahy1, Aitias, Akamad, Akhiladi007, AlMac, AlanUS, Alappuzhakaran, Albanaco, Albertnetymk, Aleek vivk, Alhenry2006, AliaksandrAA, AliveFreeHappy, Allan
McInnes, Allens, Allstarecho, Alphius, Alvestrand, Amire80, Amty4all, Andonic, Andre Engels, Andreas Kaufmann, Andres, Andrew Gray, Andrewcmcardle, Andygreeny, Ankit Maity, Ankurj,
Anna Frodesiak, Anna88banana, Annepetersen, Anon5791, Anonymous Dissident, Anonymous anonymous, Anonymous editor, Anorthup, Anthonares, Anwar saadat, Aphstein, Apparition11,
Aravindan Shanmugasundaram, ArmadilloFromHell, Arno La Murette, Ash, Ashdurbat, Avoided, Barunbiswas, Bavinothkumar, Baxtersmalls, Bazzargh, Beland, Bentogoa, Betterusername,
Bex84, Bigtwilkins, Bigwyrm, Bilbo1507, Bindu Laxminarayan, Bkil, Blair Bonnett, Blake8086, Bluerasberry, Bobdanny, Bobisthebest, Bobo192, Bonadea, Bornhj, Bovineone, Boxplot, Bpluss,
Breno, Brequinda, Brion VIBBER, BruceRuxton, Brunodeschenes.qc, Bryan Derksen, Bsdlogical, Burakseren, Buxbaum666, Calton, Cangoroo11, CanisRufus, Canterbury Tail, Canterj,
CardinalDan, Carlos.l.sanchez, CattleGirl, CemKaner, Certellus, Certes, Cgvak, Chairboy, Chaiths, Chamolinaresh, Chaser, Cheetal heyk, ChiLlBeserker, Chowbok, Chris Pickett, ChrisB,
ChrisSteinbach, ChristianEdwardGruber, Chrzastek, Cjhawk22, Claygate, Closedmouth, Cometstyles, Conan, Contributor124, Conversion script, CopperMurdoch, Corruptcopper, Cpl Syx,
Cptchipjew, Craigwb, Cvcby, Cybercobra, CyborgTosser, DARTH SIDIOUS 2, DMacks, DRogers, DVdm, Dacoutts, DaisyMLL, Dakart, Dalric, Danhash, Danimal, Davewild, David.alex.lamb,
Dazzla, Dbelhumeur02, Dcarrion, Declan Kavanagh, DeltaQuad, Denisarona, Deogratias5, Der Falke, DerHexer, Derek farn, Dev1240, Dicklyon, Diego.pamio, Digitalfunda, Discospinster,
Dnddnd80, Downsize43, Dravecky, Drewster1829, Drivermadness, Drxim, DryCleanOnly, Dvansant, Dvyost, E2eamon, ELinguist, ESkog, Ea8f93wala, Ebde, Ed Poor, Edward Z. Yang,
Electiontechnology, ElfriedeDustin, Ellenaz, Enumera, Enviroboy, Epim, Epolk, Eptin, Ericholmstrom, Erkan Yilmaz, ErkinBatu, Esoltas, Eumolpo, Excirial, Exert, Falcon8765, FalconL,
Faught, Faye dimarco, Fayenatic london, Felix Wiemann, Filadifei, Flavioxavier, Forlornturtle, FrankCostanza, Fredrik, FreplySpang, Furrykef, G0gogcsc300, GABaker, Gail, Gar3t, Gary King,
Gary Kirk, Gdavidp, Gdo01, GeoTe, Georgie Canadian, Geosak, Giggy, Gil mo, Gogo Dodo, Goldom, Gonchibolso12, Gorson78, GraemeL, Graham87, GregorB, Gsmgm, Guehene, Gurchzilla,
GururajOaksys, Guybrush1979, Hadal, Halovivek, Halsteadk, HamburgerRadio, Harald Hansen, Havlatm, Haza-w, Hdt83, Headbomb, Helix84, Hemnath18, Henri662, Hghyux, Honey88foru,
Hooperbloob, Hsingh77, Hu12, Hubschrauber729, Huge Bananas, Hutch1989r15, I dream of horses, IJA, IceManBrazil, Ignasiokambale, ImALion, Imroy, Incnis Mrsi, Indon, Infrogmation,
Intray, Inwind, J.delanoy, JASpencer, JPFitzmaurice, Ja 62, JacobBramley, Jake Wartenberg, Jakew, Jarble, Jeff G., Jehochman, Jenny MacKinnon, JesseHogan, JimD, Jjamison, Jluedem, Jm266,
Jmax-, Jmckey, Jobin RV, JoeSmack, John S Eden, Johndci, Johnny.cache, Johnuniq, JonJosephA, Joneskoo, JosephDonahue, Josheisenberg, Joshymit, Joyous!, Jsled, Jstastny, Jtowler,
Juliancolton, JuneGloom07, Jwoodger, Kalkundri, KamikazeArchon, Kanenas, Kdakin, Keithklain, KellyHass, Kelstrup, Kevin, Kgf0, Khalid hassani, Kingpin13, Kingpomba, Kitdaddio, Kku,
KnowledgeOfSelf, Kompere, Konstable, Kothiwal, Krashlandon, Kuru, Lagrange613, LeaveSleaves, Lee Daniel Crocker, Leomcbride, Leszek Jaczuk, Leujohn, Listmeister, Little Mountain 5,
Lomn, Losaltosboy, Lotje, Lowellian, Lradrama, Lumpish Scholar, M Johnson, MER-C, MPerel, Mabdul, Madhero88, Madvin, Mailtoramkumar, Manekari, ManojPhilipMathen, Mark Renier,
Materialscientist, MattGiuca, Matthew Stannard, MaxHund, MaxSem, Mazi, Mblumber, Mburdis, Mdd, MelbourneStar, Mentifisto, Menzogna, MertyWiki, Metagraph, Mfactor,
Mhaitham.shammaa, Michael B. Trausch, Michael Bernstein, MichaelBolton, Michecksz, Michig, Mike Doughney, MikeDogma, Miker@sundialservices.com, Mikethegreen, Millermk, Misza13,
Mitch Ames, Miterdale, Mmgreiner, Moa3333, Mpilaeten, Mpradeep, Mr Minchin, MrJones, MrOllie, Mrh30, Msm, Mtoxcv, Munaz, Mxn, N8mills, NAHID, Nambika.marian, Nanobug,
Neokamek, Netra Nahar, Newbie59, Nibblus, Nick Hickman, Nigholith, Nimowy, Nine smith, Nksp07, Noah Salzman, Noq, Notinasnaid, Nuno Tavares, OBloodyHell, Oashi, Ocee, Oddity-,
Ohnoitsjamie, Oicumayberight, Okal Otieno, Oliver1234, Omicronpersei8, Orange Suede Sofa, Orphan Wiki, Ospalh, Otis80hobson, PL290, Paranomia, Pascal.Tesson, Pashute, Paudelp, Paul
August, Paul.h, Pcb21, Peashy, Pepsi12, PhilHibbs, Philip Trueman, PhilipO, PhilippeAntras, Phoe6, Piano non troppo, Piast93, Pieleric, Pine, Pinecar, Pinethicket, Plainplow, Pmberry,
Pointillist, Pomoxis, Poulpy, Pplolpp, Prari, Praveen.karri, Priya4212, Promoa1, Psychade, Puraniksameer, Puzzlefan123asdfas, Pysuresh, QTCaptain, Qaiassist, Qatutor, Qazwsxedcrfvtgbyhn,
Qwyrxian, RA0808, RHaworth, Radagast83, Rahuljaitley82, Rajesh mathur, RameshaLB, Randhirreddy, Raspalchima, Ravialluru, Raynald, RedWolf, RekishiEJ, Remi0o, ReneS, Retired
username, Rex black, Rgoodermote, Rhobite, Riagu, Rich Farmbrough, Richard Harvey, RitigalaJayasena, Rje, Rjwilmsi, Rlsheehan, Rmattson, Rmstein, Robbie098, Robert Merkel, Robinson
weijman, Rockynook, Ronhjones, Ronwarshawsky, Ronz, Roscelese, Rowlye, Rp, Rror, Rschwieb, Ruptan, Rwwww, Ryoga Godai, S.K., SD5, SJP, SP-KP, SURIV, Sachipra, Sachxn, Sam
Hocevar, Samansouri, Sankshah, Sapphic, Sardanaphalus, Sasquatch525, SatishKumarB, ScaledLizard, ScottSteiner, Scottri, Sega381, Selket, Senatum, Serge Toper, Sergeyl1984, Shadowcheets,
Shahidna23, Shanes, Shepmaster, Shimeru, Shishirhegde, Shiv sangwan, Shoejar, Shubo mu, Shze, Silverbullet234, Sitush, Skalra7, Skyqa, Slowbro, Smack, Smurrayinchester, Snowolf,
Softtest123, Softwareqa, Softwaretest1, Softwaretesting1001, Softwaretesting101, Softwrite, Solde, Somdeb Chakraborty, Someguy1221, Sooner Dave, SpaceFlight89, Spadoink, SpigotMap,
Spitfire, Srikant.sharma, Srittau, Staceyeschneider, Stansult, StaticGull, Stephen Gilbert, Stephenb, Steveozone, Stickee, Storm Rider, Strmore, SunSw0rd, Superbeecat, SwirlBoy39, Sxm20,
Sylvainmarquis, T4tarzan, TCL India, Tagro82, Tdjones74021, Techsmith, Tedickey, Tejas81, Terrillja, Testersupdate, Testingexpert, Testingfan, Testinggeek, Testmaster2010, ThaddeusB, The
Anome, The Thing That Should Not Be, The prophet wizard of the crayon cake, Thehelpfulone, TheyCallMeHeartbreaker, ThomasO1989, ThomasOwens, Thread-union, Thv, Tipeli, Tippers,
Tmaufer, Tobias Bergemann, Toddst1, Tommy2010, Tonym88, Tprosser, Trusilver, Ttam, Tulkolahten, Tusharpandya, TutterMouse, Uktim63, Uncle G, Unforgettableid, Useight, Utcursch,
Uzma Gamal, VMS Mosaic, Valenciano, Vaniac, Vasywriter, Venkatreddyc, Venu6132000, Verloren, VernoWhitney, Versageek, Vijay.ram.pm, Vijaythormothe, Vishwas008, Vsoid, W.D.,
W2qasource, Walter Grlitz, Wavelength, Wbm1058, Wifione, WikHead, Wiki alf, WikiWilliamP, Wikieditor06, Will Beback Auto, Willsmith, Winchelsea, Wlievens, Wombat77, Wwmbes,
Yamamoto Ichiro, Yesyoubee, Yngupta, Yosri, Yuckfoo, ZenerV, Zephyrjs, ZhonghuaDragon2, ZooFari, Zurishaddai, 2200 anonymous edits
Black-box testing Source: http://en.wikipedia.org/w/index.php?oldid=508853837 Contributors: A bit iffy, A'bad group, AKGhetto, Aervanath, Ag2402, AndreniW, Andrewpmk, Ash,
Asparagus, Avi260192, Benito78, Betterusername, Blake-, CWY2190, Caesura, Canterbury Tail, Chris Pickett, Chrys, ClementSeveillac, Colinky, Courcelles, DRogers, DanDoughty,
Daveydweeb, Deb, Discospinster, DividedByNegativeZero, Docboat, DylanW, Ebde, Electiontechnology, Epim, Erkan Yilmaz, ErkinBatu, Fluzwup, Frap, Gayathri nambiar, Geeoharee,
Haymaker, Hooperbloob, Hu12, Hugh.glaser, Ian Pitchford, Ileshko, Isnow, JimVC3, Jmabel, Jondel, Karl Naylor, Kgf0, Khym Chanur, Kuru, LOL, Lahiru k, Lambchop, Liao, Mark.murphy,
Mathieu, Michael Hardy, Michig, Mpilaeten, Mr Minchin, MrOllie, NEUrOO, NawlinWiki, Nitinqai, Notinasnaid, Nschoot, OlEnglish, Otheus, PAS, PerformanceTester, Picaroon, Pinecar, Poor
Yorick, Pradameinhoff, Radiojon, Retiono Virginian, Rich Farmbrough, Rstens, Rsutherland, Rwwww, S.K., Sergei, Shadowjams, Shijaz, Solar Police, Solde, Subversive.sound, SuperMidget,
Tedickey, TheyCallMeHeartbreaker, Thumperward, Tobias Bergemann, Toddst1, UnitedStatesian, WJBscribe, Walter Grlitz, Xaosflux, Zephyrjs, 226 anonymous edits
Exploratory testing Source: http://en.wikipedia.org/w/index.php?oldid=492750755 Contributors: Alai, BUPHAGUS55, Bender235, Chris Pickett, DRogers, Decltype, Doab, Dougher, Elopio,
Epim, Erkan Yilmaz, Fiftyquid, GoingBatty, IQDave, Imageforward, Jeff.fry, JnRouvignac, Kgf0, Lakeworks, Leomcbride, Morrillonline, Mpilaeten, Oashi, Pinecar, Quercus basaseachicensis,
Shadowjams, SiriusDG, Softtest123, SudoGhost, Testingfan, TheParanoidOne, Toddst1, Vegaswikian, VilleAine, Walter Grlitz, Whylom, 54 anonymous edits
San Francisco depot Source: http://en.wikipedia.org/w/index.php?oldid=476848835 Contributors: Andreas Kaufmann, Auntof6, Centrx, DRogers, EagleFan, Fabrictramp, PigFlu Oink,
Pinecar, Walter Grlitz, 2 anonymous edits
Session-based testing Source: http://en.wikipedia.org/w/index.php?oldid=483248118 Contributors: Alai, Bjosman, Chris Pickett, DRogers, DavidMJam, Engpharmer, Jeff.fry, JenKilmer,
JulesH, Pinecar, Walter Grlitz, WikHead, 14 anonymous edits
Scenario testing Source: http://en.wikipedia.org/w/index.php?oldid=491628482 Contributors: Abdull, Alai, Bobo192, Brandon, Chris Pickett, Cindamuse, Epim, Hu12, Karbinski, Kingpin13,
Kuru, Nimmalik77, Pas007, Pinecar, Ronz, Sainianu088, Shepard, Tikiwont, Walter Grlitz, , 22 anonymous edits
Equivalence partitioning Source: http://en.wikipedia.org/w/index.php?oldid=500676985 Contributors: Attilios, AvicAWB, Blaisorblade, DRogers, Dougher, Ebde, Erechtheus, Frank1101,
HobbyWriter, HossMo, Ianr44, Ingenhut, JennyRad, Jerry4100, Jj137, Jtowler, Kjtobo, Martinkeesen, Mbrann747, Michig, Mirokado, Nmondal, Pinecar, Rakesh82, Retired username, Robinson
weijman, SCEhardt, Stephan Leeds, Sunithasiri, Tedickey, Throw it in the Fire, Vasinov, Walter Grlitz, Wisgary, Zoz, 37 anonymous edits
Boundary-value analysis Source: http://en.wikipedia.org/w/index.php?oldid=500312597 Contributors: Ahoerstemeier, Andreas Kaufmann, AndreniW, Attilios, Benito78, Ccady, DRogers,
Duggpm, Ebde, Eumolpo, Freek Verkerk, Ianr44, IceManBrazil, Jtowler, Krishjugal, Linuxbabu, Michaeldunn123, Mirokado, Nmondal, Pinecar, Psiphiorg, Radiojon, Retired username,
Robinson weijman, Ruchir1102, Sesh, Sophus Bie, Stemburn, Stemonitis, Sunithasiri, Velella, Walter Grlitz, Wisgary, Zoz, 63 anonymous edits
All-pairs testing Source: http://en.wikipedia.org/w/index.php?oldid=500255206 Contributors: Ash, Ashwin palaparthi, Bookworm271, Brandon, Capricorn42, Chris Pickett, Cmdrjameson,
Drivermadness, Erkan Yilmaz, Faye dimarco, Jeremy Reeder, Kjtobo, LuisCavalheiro, MER-C, Melcombe, MrOllie, Nmondal, Pinecar, Qwfp, Raghu1234, Rajushalem, Regancy42, Rexrange,
Rstens, RussBlau, SteveLoughran, Tassedethe, Walter Grlitz, 54 anonymous edits
Fuzz testing Source: http://en.wikipedia.org/w/index.php?oldid=507605931 Contributors: Andypdavis, Aphstein, Ari.takanen, Autarch, Blashyrk, Bovlb, ChrisRuvolo, David Gerard, Dcoetzee,
Derek farn, Dirkbb, Doradus, Edward, Emurphy42, Enric Naval, ErrantX, Fluffernutter, FlyingToaster, Furrykef, GregAsche, Guy Harris, Gwern, Haakon, HaeB, Hooperbloob, Hu12,
Informationh0b0, Irishguy, Jim.henderson, JonHarder, Jruderman, Jvase, Kgfleischmann, Kku, Leonard G., Letdorf, Lionaneesh, Malvineous, Manuel.oriol, Marqueed, Martinmeyer,
Marudubshinki, McGeddon, Mezzaluna, MikeEddington, Monty845, Mpeisenbr, MrOllie, Nandhp, Neale Monks, Neelix, Niri.M, Pedro Victor Alves Silvestre, Pinecar, Posix memalign,
Povman, Rcsprinter123, Ronz, Sadeq, Softtest123, Starofale, Stephanakib, Stevehughes, SwissPokey, T0pgear09, The Anome, The Cunctator, Tmaufer, Tremilux, User At Work, Victor Stinner,
Walter Grlitz, Yurymik, Zarkthehackeralliance, Zippy, Zirconscot, 145 anonymous edits
Cause-effect graph Source: http://en.wikipedia.org/w/index.php?oldid=469661912 Contributors: Andreas Kaufmann, Bilbo1507, DRogers, Michael Hardy, Nbarth, OllieFury, Pgr94, Rjwilmsi,
The Anome, Tony1, Wleizero, 3 anonymous edits
Article Sources and Contributors
262
Model-based testing Source: http://en.wikipedia.org/w/index.php?oldid=509568628 Contributors: Adivalea, Alvin Seville, Anthony.faucogney, Antti.huima, Arjayay, Atester, Bluemoose,
Bobo192, Click23, Drilnoth, Ehheh, Eldad.palachi, FlashSheridan, Gaius Cornelius, Garganti, Hooperbloob, Jluedem, Jtowler, Jzander, Kku, MDE, Mark Renier, MarkUtting, Mattisse, Mdd,
Michael Hardy, Micskeiz, Mirko.conrad, Mjchonoles, MrOllie, Pinecar, Richard R White, S.K., Sdorrance, Smartesting, Solde, Suka, Tatzelworm, Tedickey, Test-tools, That Guy, From That
Show!, TheParanoidOne, Thv, Vonkje, Vrenator, Williamglasby, Yan Kuligin, Yxl01, 113 anonymous edits
Web testing Source: http://en.wikipedia.org/w/index.php?oldid=508478860 Contributors: 5nizza, Andreas Kaufmann, Andy Dingley, Cbuckley, Ctcdiddy, Danielcornell, Darth Panda,
Dhiraj1984, DthomasJL, Emumt, Erwin33, Harshadsamant, In.Che., JASpencer, JamesBWatson, Jetfreeman, JimHolmesOH, Jwoodger, KarlDubost, Komper, MER-C, Macrofiend, Nara Sangaa,
Narayanraman, P199, Pinecar, Rchandra, Runnerweb, SEWilco, Softtest123, Tawaregs08.it, Testgeek, Thadius856, TubularWorld, Walter Grlitz, Woella, 41 anonymous edits
Installation testing Source: http://en.wikipedia.org/w/index.php?oldid=465801587 Contributors: April kathleen, Aranel, Catrope, CultureDrone, Hooperbloob, Matthew Stannard,
MichaelDeady, Mr.sqa, Paulbulman, Pinecar, Telestylo, Thardas, TheParanoidOne, WhatamIdoing, 15 anonymous edits
White-box testing Source: http://en.wikipedia.org/w/index.php?oldid=502932643 Contributors: Ag2402, Aillema, AnOddName, Andreas Kaufmann, Arthena, Bobogoobo, CSZero, Caesura,
Chris Pickett, Chrys, Closedmouth, Culix, DRogers, DanDoughty, DeadEyeArrow, Deb, Denisarona, Dupz, Ebde, Erkan Yilmaz, Err0neous, Faught, Furrykef, Gaur1982, Hooperbloob, Hu12,
Hyad, Hyenaste, Isnow, Ixfd64, JStewart, JYolkowski, Jacksprat, Johntex, Jpalm 98, Juanmamb, Kanigan, Kasukurthi.vrc, Kuru, Lfstevens, Mark.murphy, Mathieu, MaxDel, Menthaxpiperita,
Mentifisto, Mezod, Michaeldunn123, Michig, Moeron, Mpilaeten, Mr Minchin, MrOllie, Noisy, Noot al-ghoubain, Nvrijn, Old Moonraker, PankajPeriwal, Philip Trueman, Pinecar, Pluke,
Pradameinhoff, Prari, Pushparaj k, Qxz, Radiojon, Ravialluru, Rsutherland, S.K., Solde, Suffusion of Yellow, Sushiflinger, Sven Manguard, Svick, Tedickey, The Rambling Man, Thumperward,
Toddst1, Velella, Walter Grlitz, Yadyn, Yilloslime, 136 anonymous edits
Code coverage Source: http://en.wikipedia.org/w/index.php?oldid=509397598 Contributors: 194.237.150.xxx, Abdull, Abednigo, Ad88110, Agasta, Aislingdonnelly, Aitias, Aivosto,
AliveFreeHappy, Alksub, Allen Moore, Alonergan76, Altenmann, Andreas Kaufmann, Andresmlinar, Anorthup, Attilios, Auteurs, Beetstra, BenFrantzDale, Billinghurst, Bingbangbong,
BlackMamba, Blacklily, Blaxthos, Centic, Chester Markel, Cindamuse, Conversion script, Coombes358, Coveragemeter, DagErlingSmrgrav, Damian Yerrick, Derek farn, Didgeedoo,
Digantorama, Dr ecksk, Ebelular, Erkan Yilmaz, Faulknerck2, FredCassidy, Gaudol, Ghettoblaster, Gibber blot, Greensburger, HaeB, Henri662, Hertzsprung, Hob Gadling, Hooperbloob, Hqb,
Hunghuuhoang, Ianb1469, Infofred, JASpencer, JJMax, Jamelan, JavaTenor, Jdpipe, Jerryobject, Jkeen, Johannes Simon, JorisvS, Jtheires, Julias.shaw, JustAnotherJoe, Kdakin, Ken Gallager,
Kku, Kurykh, LDRA, LouScheffer, M4gnum0n, MER-C, Materialscientist, Mati22081979, Matt Crypto, MehrdadAfshari, Millerlyte87, Miracleworker5263, Mittgaurav, Mj1000, MrOllie,
MywikiaccountSA, Nat hillary, NawlinWiki, NickHodges, Nigelj, Nin1975, Nintendude64, Nixeagle, Ntalamai, Parasoft-pl, Penumbra2000, Phatom87, Picapica, Pinecar, Ptrb, QARon,
Quamrana, Quinntaylor, Quux, RedWolf, Roadbiker53, Rob amos, Robert Merkel, Rpapo, RuggeroB, Rwwww, Scubamunki, Sdesalas, Sebastian.Dietrich, Sferik, SimonKagstrom, Smharr4,
Snow78124, Snoyes, Stoilkov, Suruena, Taibah U, Technoparkcorp, Test-tools, Testcocoon, Thumperward, Tiagofassoni, TutterMouse, U2perkunas, Veralift, Walter Grlitz, Walterkelly-dms,
WimdeValk, Witten rules, Wlievens, Wmwmurray, X746e, 251 anonymous edits
Modified Condition/Decision Coverage Source: http://en.wikipedia.org/w/index.php?oldid=465647767 Contributors: Andreas Kaufmann, Crazypete101, Freek Verkerk, Jabraham mw,
Markiewp, Pindakaas, Tony1, Tsunhimtse, Vardhanw, 20 anonymous edits
Fault injection Source: http://en.wikipedia.org/w/index.php?oldid=486043190 Contributors: Andreas Kaufmann, Ari.takanen, Auntof6, BrianPatBeyond, CapitalR, Chowbok, CyborgTosser,
DaGizza, DatabACE, Firealwaysworks, Foobiker, GoingBatty, Jeff G., Joriki, Paff1, Paul.Dan.Marinescu, Piano non troppo, RHaworth, SteveLoughran, Suruena, Tmaufer, Tony1, WillDo, 29
anonymous edits
Bebugging Source: http://en.wikipedia.org/w/index.php?oldid=409176429 Contributors: Andreas Kaufmann, Dawynn, Erkan Yilmaz, Foobiker, Jchaw, Kaihsu, O keyes, 6 anonymous edits
Mutation testing Source: http://en.wikipedia.org/w/index.php?oldid=507651311 Contributors: Andreas Kaufmann, Antonielly, Ari.takanen, Brilesbp, Davidmus, Derek farn, Dogaroon, El
Pantera, Felixwikihudson, Fuhghettaboutit, GiuseppeDiGuglielmo, Htmlapps, Jarfil, Jeffoffutt, JonHarder, LFaraone, Martpol, Mycroft.Holmes, Pieleric, Pinecar, Quuxplusone, Rohansahgal,
Sae1962, Usrnme h8er, Walter Grlitz, Wikid77, Yuejia, 63 anonymous edits
Non-functional testing Source: http://en.wikipedia.org/w/index.php?oldid=472260365 Contributors: Addere, Burakseren, Dima1, JaGa, Kumar74, Mikethegreen, Ontist, Open2universe,
P.srikanta, Pinecar, Walter Grlitz, 6 anonymous edits
Software performance testing Source: http://en.wikipedia.org/w/index.php?oldid=508112055 Contributors: AMbroodEY, Abhasingh.02, AbsolutDan, Alex Vinokur, Andreas Kaufmann,
Andy Dingley, Apodelko, Argyriou, Armadillo-eleven, Bbryson, Bourgeoisspy, Brian.a.wilson, Burakseren, CaroleHenson, Cit helper, Ckoenigsberg, Coroberti, D6, David Johnson,
Davidschmelzer, Deicool, Dhiraj1984, Dwvisser, Edepriest, Eitanklein75, Filadifei, Freek Verkerk, Ghewgill, Gnowor, Grotendeels Onschadelijk, GururajOaksys, Gururajs, HenryJames141,
Hooperbloob, Hu12, Ianmolynz, Iulus Ascanius, J.delanoy, JaGa, Jdlow1, Jeremy Visser, Jewbacca, Jncraton, KAtremer, Kbustin00, Ken g6, KnowledgeOfSelf, M4gnum0n, MER-C,
Maimai009, Matt Crypto, Matthew Stannard, Michig, MrOllie, Mrmatiko, Msadler, Muhandes, Mywikicontribs, Nono64, Notinasnaid, Noveltywh, Ocaasi, Oliver Lineham, Optakeover, Pinecar,
Pnieloud, Pratheepraj, R'n'B, Ravialluru, Raysecurity, Rjwilmsi, Robert Merkel, Ronz, Rsbarber, Rstens, Rwalker, SchreiberBike, Sebastian.Dietrich, ShelfSkewed, Shimser, Shirtwaist,
Shoeofdeath, SimonP, Softlogica, Solstan, SunSw0rd, Swtechwr, Timgurto, Veinor, Versageek, Vrenator, Wahab80, Walter Grlitz, Weregerbil, Wilsonmar, Wizzard, Wktsugue, Woohookitty,
Wselph, 272 anonymous edits
Stress testing Source: http://en.wikipedia.org/w/index.php?oldid=499936178 Contributors: Brian R Hunter, Con-struct, CyborgTosser, Hu12, Ndanielm, Pinecar, Tobias Bergemann, Trevj, 14
anonymous edits
Load testing Source: http://en.wikipedia.org/w/index.php?oldid=506380776 Contributors: 2001:470:36:61C:495E:8A6B:9FDB:D12C, 5nizza, AbsolutDan, AnonymousDDoS, Archdog99,
ArrowmanCoder, BD2412, Bbryson, Beland, Belmond, Bernard2, CanadianLinuxUser, Crossdader, Ctcdiddy, Czei, DanielaSZTBM, Daonguyen95, Derby-ridgeback, Dhiraj1984, El Tonerino,
Emumt, Ettrig, Faught, Ff1959, Gadaloo, Gail, Gaius Cornelius, Gbegic, Gene Nygaard, Gordon McKeown, Gururajs, Hooperbloob, Hu12, Icairns, In.Che., Informationh0b0, JHunterJ, JaGa,
Jo.witte, Joe knepley, Jpg, Jpo, Jruuska, Ken g6, LinguistAtLarge, M4gnum0n, MER-C, Magioladitis, Manzee, Merrill77, Michig, NameIsRon, Nimowy, Nurg, PerformanceTester, Philip2001,
Photodeus, Pinecar, Pushtotest, Radagast83, Ravialluru, Rklawton, Rlonn, Rlsheehan, Robert.maclean, Ronwarshawsky, Rstens, S.K., Scoops, ScottMasonPrice, Shadowjams, Shadriner,
Shashi1212, Shilpagpt, Shinhan, SireenOMari, SpigotMap, Swtechwr, Testgeek, Tusharpandya, Veinor, VernoWhitney, Wahab80, Walter Grlitz, Whitejay251, Wilsonmar, Woohookitty,
Wrp103, 178 anonymous edits
Volume testing Source: http://en.wikipedia.org/w/index.php?oldid=458978799 Contributors: Closedmouth, EagleFan, Faught, Kumar74, Octahedron80, Pinecar, Terry1944, Thingg, Thru the
night, Walter Grlitz, 9 anonymous edits
Scalability testing Source: http://en.wikipedia.org/w/index.php?oldid=503882511 Contributors: Beland, ChrisGualtieri, GregorB, JaGa, Kumar74, Malcolma, Methylgrace, Mo ainm, Pinecar,
Velella, 11 anonymous edits
Compatibility testing Source: http://en.wikipedia.org/w/index.php?oldid=508314108 Contributors: Alison9, Arkitus, BPositive, DexDor, Iain99, Jimj wpg, Kumar74, Mean as custard, Neelov,
Pinecar, RekishiEJ, Rwwww, Suvarna 25, Thine Antique Pen, 9 anonymous edits
Portability testing Source: http://en.wikipedia.org/w/index.php?oldid=415815752 Contributors: Andreas Kaufmann, Biscuittin, Cmdrjameson, Nibblus, OSborn, Pharos, Tapir Terrific, The
Public Voice, 2 anonymous edits
Security testing Source: http://en.wikipedia.org/w/index.php?oldid=504256770 Contributors: Aaravind, Andreas Kaufmann, Bigtimepeace, Brookie, Bwpach, ConCompS, DanielPharos, David
Stubley, Dxwell, Ecram, Epbr123, Gardener60, Gavenko a, Glane23, ImperatorExercitus, Ixim dschaefer, JonHarder, Joneskoo, Kinu, Lotje, MichaelBillington, Pinecar, Pinethicket,
Ravi.alluru@applabs.com, Shadowjams, Softwaretest1, Someguy1221, Spitfire, Stenaught, ThisIsAce, Uncle Milty, WereSpielChequers, 114 anonymous edits
Attack patterns Source: http://en.wikipedia.org/w/index.php?oldid=503859029 Contributors: Bachrach44, Bender235, Bobbyquine, DouglasHeld, Dudecon, Enauspeaker, Falcon Kirtaran,
FrankTobia, Friedfish, Hooperbloob, Jkelly, Manionc, Natalie Erin, Nono64, Od Mishehu, R00m c, Retired username, Rich257, RockyH, Smokizzy, 3 anonymous edits
Pseudolocalization Source: http://en.wikipedia.org/w/index.php?oldid=504299213 Contributors: A:-)Brunu, Andy Dingley, Arithmandar, ArthurDenture, Autoterm, Bdjcomic, CyborgTosser,
Dawn Bard, Gavrant, Gnter Lissner, Josh Parris, Khazar, Kutulu, Kznf, Mboverload, Miker@sundialservices.com, Nlhenk, Pinecar, Pnm, Svick, Thumperward, Traveler78, Vipinhari, 10
anonymous edits
Recovery testing Source: http://en.wikipedia.org/w/index.php?oldid=410383530 Contributors: .digamma, DH85868993, Elipongo, Habam, LAAFan, Leandromartinez, Nikolay Shtabel,
Pinecar, Rich257, Rjwilmsi, Vikramsharma13, 15 anonymous edits
Soak testing Source: http://en.wikipedia.org/w/index.php?oldid=508738294 Contributors: A1r, DanielPharos, JPFitzmaurice, JnRouvignac, Mdd4696, Midlandstoday, P mohanavan, Pinecar,
Vasywriter, Walter Grlitz, 12 anonymous edits
Article Sources and Contributors
263
Characterization test Source: http://en.wikipedia.org/w/index.php?oldid=445835207 Contributors: Alberto Savoia, Andreas Kaufmann, BrianOfRugby, Colonies Chris, David Edgar,
Dbenbenn, GabrielSjoberg, JLaTondre, Jjamison, Jkl, Mathiastck, PhilippeAntras, Pinecar, Robofish, Swtechwr, Ulner, 12 anonymous edits
Unit testing Source: http://en.wikipedia.org/w/index.php?oldid=506301241 Contributors: .digamma, Ahc, Ahoerstemeier, AliveFreeHappy, Allan McInnes, Allen Moore, Alumd, Anderbubble,
Andreas Kaufmann, Andy Dingley, Angadn, Anorthup, Ardonik, Asavoia, Attilios, Autarch, Bakersg13, Bdijkstra, BenFrantzDale, Brian Geppert, CanisRufus, Canterbury Tail, Chris Pickett,
ChristianEdwardGruber, ChuckEsterbrook, Ciaran.lyons, Clausen, Colonies Chris, Corvi, Craigwb, DRogers, DanMS, Denisarona, Derbeth, Dflam, Dillard421, Discospinster, Dmulter,
Earlypsychosis, Edaelon, Edward Z. Yang, Eewild, El T, Elilo, Evil saltine, Excirial, FlashSheridan, FrankTobia, Fredrik, Furrykef, GTBacchus, Garionh, Gggggdxn, Goswamivijay,
Guille.hoardings, Haakon, Hanacy, Hari Surendran, Hayne, Hfastedge, Hooperbloob, Hsingh77, Hypersonic12, Ibbn, Influent1, J.delanoy, JamesBWatson, Jjamison, Joeggi, Jogloran, Jonhanson,
Jpalm 98, Kamots, KaragouniS, Karl Dickman, Kku, Konman72, Kuru, Leomcbride, Longhorn72, Looxix, Mark.summerfield, Martin Majlis, Martinig, MaxHund, MaxSem, Mcsee, Mheusser,
Mhhanley, Michig, MickeyWiki, Miker@sundialservices.com, Mild Bill Hiccup, Mortense, Mr. Disguise, MrOllie, Mtomczak, Nat hillary, Nate Silva, Nbryant, Neilc, Nick Lewis CNH,
Notinasnaid, Ohnoitsjamie, Ojan53, OmriSegal, Ottawa4ever, PGWG, Pablasso, Paling Alchemist, Pantosys, Paul August, Paulocheque, Pcb21, Pinecar, Pmerson, Radagast3, RainbowOfLight,
Ravialluru, Ravindrat, RenniePet, Rich Farmbrough, Richardkmiller, Rjnienaber, Rjwilmsi, Rogerborg, Rookkey, RoyOsherove, Ryans.ryu, S.K., S3000, SAE1962, Saalam123, ScottyWZ,
Shyam 48, SimonTrew, Sketch051, Skunkboy74, Sligocki, Smalljim, So God created Manchester, Solde, Sozin, Ssd, Sspiro, Stephenb, SteveLoughran, Stumps, Sujith.srao, Svick, Swtechwr,
Sybersnake, TFriesen, Themillofkeytone, Thv, Timo Honkasalo, Tlroche, Tobias Bergemann, Toddst1, Tony Morris, Tyler Oderkirk, Unittester123, User77764, VMS Mosaic, Veghead, Verec,
Vishnava, Vrenator, Walter Grlitz, Willem-Paul, Winhunter, Wmahan, Zed toocool, 499 anonymous edits
Self-testing code Source: http://en.wikipedia.org/w/index.php?oldid=326302253 Contributors: Andreas Kaufmann, Ed Poor, GregorB, Malcolma, Rich Farmbrough, Spoon!, 2 anonymous edits
Test fixture Source: http://en.wikipedia.org/w/index.php?oldid=502584310 Contributors: 2A01:E34:EF01:44E0:141E:79A2:CD19:30A5, Andreas Kaufmann, Brambleclawx, Heathd,
Humanoc, Ingeniero-aleman, Jeodesic, Martarius, Patricio Paez, Pkgx, RCHenningsgard, Ripounet, Rlsheehan, Rohieb, Silencer1981, Tabletop, WHonekamp, Walter Grlitz, Wernight,
ZacParkplatz, 16 anonymous edits
Method stub Source: http://en.wikipedia.org/w/index.php?oldid=509557387 Contributors: Andreas Kaufmann, Antonielly, Bhadani, Bratch, Can't sleep, clown will eat me, Cander0000,
Ceyockey, Dasoman, Deep Alexander, Dicklyon, Drbreznjev, Ermey, Extvia, Ggoddard, Hollih, IguanaScales, Itai, Joaopaulo1511, Kku, MBisanz, Mange01, Mark Renier, Michig, Mityaha,
Perey, Pinecar, Radagast83, Rich Farmbrough, RitigalaJayasena, Rrburke, S.K., Sae1962, Segv11, Sj, Thisarticleisastub, Vary, Walter Grlitz, 34 anonymous edits
Mock object Source: http://en.wikipedia.org/w/index.php?oldid=500986464 Contributors: 16x9, A. B., ABF, AN(Ger), Acather96, Allanlewis, Allen Moore, Andreas Kaufmann, Andy Dingley,
Antonielly, Ataru, Autarch, Babomb, BenWilliamson, Blaxthos, Charles Matthews, Ciphers, ClinkingDog, CodeCaster, Colcas, Cst17, Cybercobra, DHGarrette, Dcamp314, Derbeth, Dhoerl,
Edward Z. Yang, Elilo, Ellissound, Eric Le Bigot, Frap, Ghettoblaster, Hanavy, HangingCurve, Hooperbloob, Hu12, IceManBrazil, JamesShore, Jprg1966, Kc8tpz, Khalid hassani, Kku,
Le-sens-commun, Lmajano, Lotje, Mange01, Marchaos, Martarius, Martinig, Marx Gomes, MaxSem, Mkarlesky, NickHodges, Nigelj, Nrabinowitz, Paul Foxworthy, Pecaperopeli, Philip
Trueman, Pinecar, R'n'B, Redeagle688, Repentsinner, Rodrigez, RoyOsherove, Rstandefer, Scerj, Simonwacker, SkyWalker, SlubGlub, Spurrymoses, Stephan Leeds, SteveLoughran, TEB728,
Thumperward, Tobias Bergemann, Tomrbj, Whitehawk julie, WikiPuppies, 148 anonymous edits
Lazy systematic unit testing Source: http://en.wikipedia.org/w/index.php?oldid=198751809 Contributors: AJHSimons, Andreas Kaufmann, RHaworth
Test Anything Protocol Source: http://en.wikipedia.org/w/index.php?oldid=502718451 Contributors: Andreas Kaufmann, AndyArmstrong, BrotherE, Brunodepaulak, Frap, Gaurav, Jarble,
Justatheory, Mindmatrix, Myfreeweb, Pinecar, RJHerrick, Schwern, Shlomif, Shunpiker, Tarchannen, Thr4wn, Wrelwser43, Ysth, 32 anonymous edits
xUnit Source: http://en.wikipedia.org/w/index.php?oldid=494942844 Contributors: Ahoerstemeier, Andreas Kaufmann, BurntSky, C.horsdal, Caesura, Chris Pickett, Damian Yerrick, Dvib,
FlashSheridan, Furrykef, Green caterpillar, Jpalm 98, Kenyon, Khatru2, Kku, Kleb, Kranix, Lasombra, LilHelpa, Lucienve, MBisanz, Mat i, MaxSem, MindSpringer, Mortense, MrOllie, Nate
Silva, Ori Peleg, Pagrashtak, Patrikj, Pengo, PhilippeAntras, Pinecar, Qef, RedWolf, Rhphillips, RudaMoura, Schwern, SebastianBergmann, Simonwacker, Slakr, Srittau, Tlroche, Uzume,
Woohookitty, 72 anonymous edits
List of unit testing frameworks Source: http://en.wikipedia.org/w/index.php?oldid=509787499 Contributors: A-Evgeniy, AJHSimons, Abdull, Akadruid, Alan0098, AliveFreeHappy, Alumd,
Andreas Kaufmann, AndreasBWagner, AndreasMangold, Andrey86, Andy Dingley, Anorthup, Antonylees, Arjayay, Arjenmarkus, Artem M. Pelenitsyn, Asashour, Asimjalis, Ates Goral,
Autarch, Avantika789, Avi.kaye, BP, Banus, Basvodde, Bdcon, Bdicroce, Bdijkstra, Beetstra, Berny68, Bigwhite.cn, Billyoneal, Boemmels, Brandf, BrotherE, Burschik, Bvenners, C1vineoflife,
Calrfa Wn, Chompx, Chris Pickett, Chris the speller, ChronoKinetic, Ckrahe, Clements, Codefly, CompSciStud4U, Cpunit root, Cromlech666, Cruftcraft, Cybjit, D3j409, Dalepres, Damieng,
DaoKaioshin, Darac, Daruuin, DataWraith, David smallfield, Decatur-en, Dennislloydjr, Diego Moya, Dlindqui, Donald Hosek, DrMiller, Duffbeerforme, Duthen, EagleFan, Ebar7207, Edward,
Eeera, Ellissound, Eoinwoods, Erkan Yilmaz, Figureouturself, Fltoledo, FredericTorres, Furrykef, Fuzlyssa, GabiS, Gaurav, Generalov.sergey, Ggeldenhuys, Gpremer, GregoryCrosswhite,
Grincho, Grshiplett, Gurdiga, Haprog, Harrigan, Harryboyles, Hboutemy, Hlopetz, Holger.krekel, Huntc, Ian-blumel, IceManBrazil, Icseaturtles, Ilya78, Imsky, JLaTondre, James Hugard,
JavaCS, Jdpipe, Jens Ldemann, Jeremy.collins, Jevon, Jim Kring, Jluedem, Joelittlejohn, John of Reading, Johnuniq, Jokes Free4Me, JoshDuffMan, Jrosdahl, Justatheory, Jvoegele, Jwgrenning,
KAtremer, Kenguest, Kiranthorat, Kku, Kleb, Kristofer Karlsson, Kwiki, LDRA, Lcorneliussen, Legalize, Leomcbride, Loopology, M4gnum0n, MMSequeira, Madgarm, Maine3002, Mandarax,
Marclevel3, Mark Renier, Markvp, Martin Moene, Mdkorhon, MebSter, MeekMark, Mengmeng, Metalim, MiguelMunoz, MikeSchinkel, Mindmatrix, Mitmacher313, Mj1000, Mkarlesky,
Morder, Mortense, NagyLoutre, Neilvandyke, Nereocystis, Nick Number, Nimowy, Nirocr, Nlu, Norrby, Northgrove, ObjexxWiki, Oestape, Ospalh, Paddy3118, Pagrashtak, Papeschr,
PensiveCoder, Pentapus, Pesto, Pgr94, Philippe.beaudoin, Phoe6, Pinecar, Praseodymium, Prekageo, Ptrb, QARon, R'n'B, RalfHandl, RandalSchwartz, Ravidgemole, Rawoke, Rcunit, Rhphillips,
Rjollos, Rmkeeble, Rnagrodzki, Robkam, Roguer, Ropata, Rsiman, Ryadav, SF007, SHIMODA Hiroshi, Saalam123, Sarvilive, Schwern, Sellerbracke, Senfo, Sgould, Shabbychef, Shadriner,
Siffert, Simeonfs, Simoneau, Simonscarfe, SirGeek CSP, Skiwi, Slhynju, Squares, Stassats, Stenyak, SteveLoughran, SummerWithMorons, Sutirthadatta, Swtechwr, Sydevelopments, Sylvestre,
Tabletop, Tadpole9, Tarvaina, Tassedethe, TempestSA, Ten0s, ThevikasIN, ThomasAagaardJensen, Thv, Tobias.trelle, TobyFernsler, Tognopop, Torsknod, Traviscj, Uniwalk, Updatehelper,
User77764, Uzume, Vassilvk, Vcmpk, Vibhuti.amit, Virtualblackfox, Walter Grlitz, Wdevauld, Weitzman, Wernight, Whart222, Wickorama, Winterst, Wodka, X!, Yince, Yipdw, Yukoba,
Yurik, Zanhsieh, Zootm, - , 621 anonymous edits
SUnit Source: http://en.wikipedia.org/w/index.php?oldid=491700857 Contributors: Andreas Kaufmann, Chris Pickett, D6, Diegof79, Djmckee1, Frank Shearar, HenryHayes, Hooperbloob,
Jerryobject, Mcsee, Nigosh, Olekva, TheParanoidOne, 7 anonymous edits
JUnit Source: http://en.wikipedia.org/w/index.php?oldid=504450574 Contributors: 194.237.150.xxx, Abelson, AliveFreeHappy, Andmatt, Andreas Kaufmann, Andy Dingley, Anomen,
Antonielly, Artaxiad, Ashwinikvp, Ausir, B3t, BeauMartinez, Biyer, Bluerasberry, Byj2000, Cat Parade, Cmdrjameson, Conversion script, DONOVAN, DaoKaioshin, Darc, Darth Panda, Doug
Bell, Dsaff, Duplicity, East718, Epbr123, Eptin, Esminis, Eye of slink, Faisal.akeel, Frap, Frecklefoot, Free Software Knight, Frogging101, Ftiercel, Funkymanas, Furrykef, Ghostkadost, Gioto,
Gracenotes, Green caterpillar, Grendelkhan, Harrisony, Hervegirod, Hooperbloob, Ilya, Iosif, J0506, JLaTondre, Jerryobject, Jpalm 98, KellyCoinGuy, Kenguest, Kenji Toyama, Kent Beck, Kleb,
KuwarOnline, M4gnum0n, MER-C, Mahmutuludag, Manish85dave, Mark Renier, Matt Crypto, Mdediana, MrOllie, Nate Silva, Nigelj, Ntalamai, Ohnoitsjamie, POajdbhf, PaulHurleyuk,
Paulsharpe, Pbb, Pcap, Plasmafire, Poulpy, Pseudomonas, Quinntaylor, Randomalious, Raztus, RedWolf, Resurgent insurgent, Rich Farmbrough, RossPatterson, SF007, Salvan, Sandipk singh,
Science4sail, Silvestre Zabala, SirGeek CSP, Softtest123, Softwaresavant, Stypex, TakuyaMurata, TerraFrost, Thumperward, Tikiwont, Tlroche, Tobias.trelle, Torc2, Tumbarumba, Tweisbach,
UkPaolo, VOGELLA, Vina, Vlad, WiseWoman, Yamla, 129 anonymous edits
CppUnit Source: http://en.wikipedia.org/w/index.php?oldid=501373416 Contributors: Amenel, Andreas Kaufmann, Anthony Appleyard, Arranna, Conrad Braam, DSParillo, DrMiller, Frap,
GoldenMedian, Ike-bana, Lews Therin, Martin Rizzo, Mecanismo, Mgfz, Rjwilmsi, Sysuphos, TheParanoidOne, Thumperward, Tobias Bergemann, WereSpielChequers, Yanxiaowen, 22
anonymous edits
Test::More Source: http://en.wikipedia.org/w/index.php?oldid=411565404 Contributors: Dawynn, Mindmatrix, Pjf, Schwern, Tassedethe, Unforgiven24
NUnit Source: http://en.wikipedia.org/w/index.php?oldid=496282949 Contributors: Abelson, Andreas Kaufmann, B0sh, Brianpeiris, CodeWonk, Cwbrandsma, Djmckee1, Gfinzer, Gypwage,
Hadal, Hooperbloob, Hosamaly, Ike-bana, Jacosi, Jerryobject, Kellyselden, Largoplazo, Magioladitis, Mattousai, MaxSem, MicahElliott, NiccciN, Nigosh, NinjaCross, PaddyMcDonald, Pinecar,
Pnewhook, RHaworth, Raztus, RedWolf, Reidhoch, Rodasmith, S.K., SamuelTheGhost, Sj, StefanPapp, Superm401, Sydevelopments, Thv, Tobias Bergemann, Toomuchsalt, Ulrich.b, Valodzka,
Whpq, Zsinj, 54 anonymous edits
NUnitAsp Source: http://en.wikipedia.org/w/index.php?oldid=467100274 Contributors: Andreas Kaufmann, Djmckee1, Edward, GatoRaider, Hooperbloob, Root4(one), SummerWithMorons
csUnit Source: http://en.wikipedia.org/w/index.php?oldid=467099984 Contributors: Andreas Kaufmann, Djmckee1, Free Software Knight, Jerryobject, MaxSem, Mengmeng, Stuartyeates, 2
anonymous edits
HtmlUnit Source: http://en.wikipedia.org/w/index.php?oldid=483544146 Contributors: Agentq314, Andreas Kaufmann, Asashour, DARTH SIDIOUS 2, Edward, Frap, Jj137, KAtremer,
Lkesteloot, Mabdul, Mguillem, Nigelj, Tobias Bergemann, Zwilson14, 39 anonymous edits
Test automation Source: http://en.wikipedia.org/w/index.php?oldid=509579476 Contributors: 5nizza, 83nj1, 9th3mpt, ADobey, Abdull, Akr7577, Alaattinoz, Aleksd, AliveFreeHappy, Ameya
barve, Amitkaria2k, Ancheta Wis, AndrewN, Andy Dingley, Ankurj, Anupam naik, Apparition11, Asashour, Ash, Auntof6, Bbryson, Beland, Benjamin Geiger, Bhagat.Abhijeet, Bigtwilkins,
Article Sources and Contributors
264
Bihco, Caltas, Carioca, Checkshirt, Chills42, Chrisbepost, Christina thi, CindyJokinen, CodeWonk, Crazycomputers, DARTH SIDIOUS 2, DRAGON BOOSTER, DRogers, Dbelhumeur02,
DivineAlpha, Dreftymac, Drivermadness, Eaowens, Edustin, EdwardMiller, Egivoni, ElfriedeDustin, Elipongo, Enoch the red, Excirial, Faris747, Faye dimarco, Ferpectionist, FlashSheridan,
Flopsy Mopsy and Cottonmouth, Florian Huber, Fumitol, G0gogcsc300, Gaggarwal2000, Gbegic, Gherget, Gibs2001, Gmacgregor, Goutham, Grafen, Gtucker78, Harobed, Hatch68, Helix84,
HenryJames141, Hesa, Heydaysoft, Hooperbloob, Hswiki, Hu12, In.Che., Ixfd64, JASpencer, JamesBWatson, Jaxtester, Jay-Sebastos, Jkoprax, Jluedem, Johndunham, Johnuniq, Jpg, Krishnaegs,
Kumarsameer, Kuru, Ldimaggi, Leomcbride, M4gnum0n, MC10, MER-C, Marasmusine, Mark Kilby, Marudubshinki, Matthewedwards, Mdanrel, Megaride, MendipBlue, Michael Bernstein,
Michecksz, Mikaelfries, Morrillonline, Mortense, Mr.scavenger, MrOllie, Nara Sangaa, Nima.shahhosini, Nimowy, Notinasnaid, O.Koslowski, Octoferret, Ohnoitsjamie, OracleDBGuru,
Palmirotheking, PeterBizz, Pfhjvb0, Pomoxis, Prakash Nadkarni, ProfessionalTST, Qatutor, Qlabs impetus, Qtpautomation, Qwyrxian, R'n'B, RHaworth, Radagast83, Radiant!, Radiostationary,
Raghublr, Rapd56, Raymondlafourchette, Rich Farmbrough, RichardHoultz, Rickjpelleg, Rjwilmsi, Robertvan1, Robinson Weijman, Ryadav, Ryepie, SSmithNY, Sbono, ScottSteiner, Seaphoto,
Shankar.sathiamurthi, Shijuraj, Shlomif, SoCalSuperEagle, Softwaretest1, Srideep TestPlant, Ssingaraju, SteveLoughran, Ststeinbauer, Suna bocha, Sundaramkumar, Swtechwr, Testautomator,
Thv, Ttrevers, Tumaka, Tushar291081, Vadimka, Veledan, Versageek, Vogelt, Waikh, Walter Grlitz, Webbbbbbber, Winmacro, Woella, WordSurd, Worksoft-wayne, Wrp103, Xadhix, Yan
Kuligin, ZachGT, Zorgon7, Zulfikaralib, , 361 anonymous edits
Test bench Source: http://en.wikipedia.org/w/index.php?oldid=508483435 Contributors: Abdull, Ali65, Amitgusain, Arch dude, Briancarlton, Dolovis, E2eamon, FreplySpang, J. Sparrow, Joe
Decker, Ktr101, Pinecar, Remotelysensed, Rich Farmbrough, Singamayya, Testbench, Tgruwell, 13 anonymous edits
Test execution engine Source: http://en.wikipedia.org/w/index.php?oldid=503663527 Contributors: Abdull, Ali65, Andreas Kaufmann, Cander0000, ChildofMidnight, Fabrictramp, Grafen,
Rontaih, Roshan220195, Walter Grlitz, 4 anonymous edits
Test stubs Source: http://en.wikipedia.org/w/index.php?oldid=497396843 Contributors: Andreas Kaufmann, Chiefhuggybear, Christianvinter, Deb, Meridith K, Thisarticleisastub, Tomrbj, 4
anonymous edits
Testware Source: http://en.wikipedia.org/w/index.php?oldid=458763704 Contributors: Andreas Kaufmann, Assadmalik, Avalon, Gzkn, Northamerica1000, Robofish, SteveLoughran, Wireless
friend, ZhonghuaDragon, 7 anonymous edits
Test automation framework Source: http://en.wikipedia.org/w/index.php?oldid=503712596 Contributors: Abdull, Aby74, Adethya, Akr7577, Al95521, AliveFreeHappy, Anandgopalkri, Andy
Dingley, Anshooarora, Apparition11, Beland, ChrisGualtieri, Chrisbepost, Closedmouth, Drpaule, Excirial, Flopsy Mopsy and Cottonmouth, Gibs2001, Giraffedata, Heydaysoft, Homfri,
Iridescent, JamesBWatson, Jonathan Webley, Ktr101, LedgendGamer, Mitch Ames, Mountk2, Nalinnew, Oziransky, Paul dexxus, Peneh, PeterBizz, Pinecar, Qlabs impetus, RHaworth,
Regancy42, Rsciaccio, Sachxn, Sbasan, SerejkaVS, Slon02, SteveLoughran, Vishwas008, Walter Grlitz, West.andrew.g, 57 anonymous edits
Data-driven testing Source: http://en.wikipedia.org/w/index.php?oldid=506394520 Contributors: 2Alen, Amorymeltzer, Andreas Kaufmann, ChrisGualtieri, Cornellrockey, EdGl, Fabrictramp,
Lockley, MrOllie, Mrinmayee.p, Phanisrikar, Pinecar, Rajwiki, Rjwilmsi, Rwwww, SAE1962, Sbono, Sean.co.za, Zaphodikus, 34 anonymous edits
Modularity-driven testing Source: http://en.wikipedia.org/w/index.php?oldid=497319192 Contributors: Avalon, Minnaert, Phanisrikar, Pinecar, Ron Ritzman, Walter Grlitz, 5 anonymous
edits
Keyword-driven testing Source: http://en.wikipedia.org/w/index.php?oldid=508404646 Contributors: 5nizza, Conortodd, Culudamar, Download, Erkan Yilmaz, Heydaysoft, Hooperbloob, Jeff
seattle, Jessewgibbs, Jonathan Webley, Jonathon Wright, Jtowler, Ken g6, Lowmagnet, Maguschen, MarkCTest, MrOllie, Phanisrikar, Pinecar, Rjwilmsi, Rwwww, SAE1962, Scraimer,
Sean.co.za, Sparrowman980, Swtesterinca, Tobias.trelle, Ukkuru, Ultimus, Yun-Yuuzhan (lost password), Zoobeerhall, 69 anonymous edits
Hybrid testing Source: http://en.wikipedia.org/w/index.php?oldid=479258850 Contributors: Bunnyhop11, Horologium, MrOllie, Vishwas008, 6 anonymous edits
Lightweight software test automation Source: http://en.wikipedia.org/w/index.php?oldid=491059651 Contributors: Colonies Chris, Greenrd, JamesDmccaffrey, John Vandenberg,
OracleDBGuru, Pnm, Rjwilmsi, Torc2, Tutterz, Verbal, 9 anonymous edits
Software testing controversies Source: http://en.wikipedia.org/w/index.php?oldid=492750279 Contributors: Andreas Kaufmann, Derelictfrog, JASpencer, PigFlu Oink, Pinecar, RHaworth,
Softtest123, Testingfan, Walter Grlitz, 5 anonymous edits
Test-driven development Source: http://en.wikipedia.org/w/index.php?oldid=508445968 Contributors: 1sraghavan, 2001:470:1F0B:448:221:5AFF:FE21:1022, Abdull, Achorny,
AliveFreeHappy, Alksentrs, Anorthup, AnthonySteele, Antonielly, Asgeirn, Astaines, Attilios, Autarch, AutumnSnow, Bcwhite, Beland, Blutfink, CFMWiki1, Calrfa Wn, Canterbury Tail,
Chris Pickett, Closedmouth, Craig Stuntz, CraigTreptow, DHGarrette, Dally Horton, Damian Yerrick, David-Sarah Hopwood, Deuxpi, Dhdblues, Dougluce, Download, Downsize43, Droob,
Dtmilano, Dugosz, Ed Poor, Edaelon, Ehheh, Electriccatfish2, Emurphy42, Enochlau, Eurleif, Excirial, Falcn42, Faught, Fbeppler, Fre0n, Furrykef, Gakrivas, Gary King, Geometry.steve, Gigi
fire, Gishu Pillai, Gmcrews, Gogo Dodo, Hadal, Hagai Cibulski, Hariharan wiki, Heirpixel, Hzhbcl, JDBravo, JLaTondre, JacobProffitt, Jglynn43, Jleedev, Jonb ee, Jonkpa, Jpalm 98, Jrvz,
Kbdank71, Kellen`, KellyCoinGuy, Kevin Rector, Khalid hassani, Kristjan Wager, Krzyk2, Kvdveer, LeaveSleaves, Lenin1991, Lumberjake, Madduck, Mark Renier, Martial75, Martinig,
Mathiasl26, MaxSem, Mberteig, Mboverload, Mckoss, Mdd, MeUser42, MelbourneStar, Mhhanley, Michael miceli, Michig, Middayexpress, Mkarlesky, Mkksingha, Mnorbury, Mortense,
Mosquitopsu, Mossd, Mr2001, MrOllie, Nigelj, Nohat, Notnoisy, Nuggetboy, O.Koslowski, Ojcit, Oligomous, On5deu, Parklandspanaway, Patrickdepinguin, Pengo, PhilipR, Phlip2005, Pinecar,
PradeepArya1109, R. S. Shaw, Radak, Raghunathan.george, RickBeton, RoyOsherove, Rulesdoc, SAE1962, Sam Hocevar, Samwashburn3, San chako, Sanchom, SchreiberBike, SethTisue,
Shadowjams, SharShar, Shenme, Shyam 48, SimonP, St.General, Stemcd, SteveLoughran, Sullivan.t, Supreme Deliciousness, Sverdrup, Svick, Swasden, Szwejkc, TakuyaMurata, Tedickey,
Themacboy, Thumperward, Tobias Bergemann, Topping, Trum123, Underpants, V6Zi34, Virgiltrasca, WLU, Walter Grlitz, Waratah, Wikid77, Xagronaut, - , 434
anonymous edits
Agile testing Source: http://en.wikipedia.org/w/index.php?oldid=508737746 Contributors: AGK, Agiletesting, Alanbly, Athought, Chowbok, Eewild, Ehendrickson, Ericholmstrom,
GoingBatty, Gurch, Hemnath18, Henri662, Icaruspassion, Janetgregoryca, Johnuniq, LilHelpa, Lisacrispin, Luiscolorado, M2Ys4U, Manistar, MarkCTest, MathMaven, Mdd, Okevin, ParaTom,
Patrickegan, Pinecar, Pnm, Podge82, Random name, Sardanaphalus, ScottWAmbler, Vaibhav.nimbalkar, Vertium, Walter Grlitz, Webrew, Weimont, Zonafan39, 81 anonymous edits
Bug bash Source: http://en.wikipedia.org/w/index.php?oldid=492197997 Contributors: Andreas Kaufmann, Archippus, BD2412, Cander0000, DragonflySixtyseven, Freek Verkerk,
MisterHand, Pinecar, Retired username, Rich Farmbrough, Thumperward, 1 anonymous edits
Pair Testing Source: http://en.wikipedia.org/w/index.php?oldid=494706610 Contributors: Andreas Kaufmann, Bjosman, Cmr08, Jafeluv, LilHelpa, MrOllie, Neonleif, Prasantam, Tabletop,
Tony1, Universal Cereal Bus, Woohookitty, 9 anonymous edits
Manual testing Source: http://en.wikipedia.org/w/index.php?oldid=505115483 Contributors: ArielGold, Ashish.aggrawal17, DARTH SIDIOUS 2, Donperk, Eewild, Hairhorn, Iridescent,
Kgarima, L Kensington, Meetusingh, Morrillonline, Nath1991, OlEnglish, Orenburg1, Pinecar, Pinethicket, Predatoraction, Rwxrwxrwx, Saurabha5, Softwrite, Somdeb Chakraborty,
SwisterTwister, Tumaka, Walter Grlitz, Woohookitty, 84 anonymous edits
Regression testing Source: http://en.wikipedia.org/w/index.php?oldid=494762096 Contributors: 7, Abdull, Abhinavvaid, Ahsan.nabi.khan, Alan ffm, AliveFreeHappy, Amire80, Andrew
Eisenberg, Anorthup, Antonielly, Baccyak4H, Benefactor123, Boongoman, Brenda Kenyon, Cabalamat, Carlos.l.sanchez, Cdunn2001, Chris Pickett, DRogers, Dacian.epure, Dee Jay Randall,
Designatevoid, Doug.hoffman, Eewild, Elsendero, Emj, Enti342, Estyler, Forlornturtle, G0gogcsc300, Gregbard, Hadal, Hector224, Henri662, Herve272, HongPong, Hooperbloob, Iiiren, Jacob
grace, Jwoodger, Kamarou, Kesla, Kmincey, L Kensington, Labalius, LandruBek, Luckydrink1, MER-C, Marijn, Mariotto2009, Materialscientist, Matthew Stannard, Maxwellb, Menzogna,
Michaelas10, Michig, MickeyWiki, Mike Rosoft, MikeLynch, Msillil, NameIsRon, Neilc, Neurolysis, Noq, Philipchiappini, Pinecar, Qatutor, Qfissler, Ravialluru, Robert Merkel, Rsavenkov,
Ryans.ryu, S3000, Scoops, Snarius, Spock of Vulcan, SqueakBox, Srittau, Strait, Svick, Swtechwr, Throwaway85, Thv, Tobias Bergemann, Tobias Hoevekamp, Toon05, Urhixidur, Walter
Grlitz, Will Beback Auto, Wlievens, Zhenqinli, Zvn, 201 anonymous edits
Ad hoc testing Source: http://en.wikipedia.org/w/index.php?oldid=501690463 Contributors: DRogers, Epim, Erkan Yilmaz, Faught, IQDave, Josh Parris, Lhb1239, Ottawa4ever, Pankajkittu,
Pinecar, Pmod, Robinson weijman, Sharkanana, Sj, Solde, Walter Grlitz, Yunshui, 17 anonymous edits
Sanity testing Source: http://en.wikipedia.org/w/index.php?oldid=506326121 Contributors: Accelerometer, Andrey86, Andycjp, Arjayay, Auric, BenFrantzDale, Chillum, Chris Pickett,
Closedmouth, D4g0thur, Dysprosia, Fittysix, Fullstop, Gorank4, Haus, Histrion, Itai, JForget, Kaimiddleton, Karada, Kingpin13, LeaW, Lechatjaune, Lee Daniel Crocker, Martinwguy, Matma
Rex, Melchoir, Mikewalk, Mild Bill Hiccup, Mmckmg, NeilFraser, Nunh-huh, Oboler, PierreAbbat, Pinecar, Pinethicket, Polluks, R'n'B, Ricardol, Rrburke, Saberwyn, Sietse Snel, SimonTrew,
Strait, Stratadrake, UlrichAAB, Verloren, Viriditas, Walter Grlitz, Webinfoonline, Wikid77, 100 anonymous edits
Integration testing Source: http://en.wikipedia.org/w/index.php?oldid=506507449 Contributors: 2002:82ec:b30a:badf:203:baff:fe81:7565, Abdull, Addshore, Amire80, Arunka, Arzach,
Cbenedetto, Cellovergara, ChristianEdwardGruber, Cmungall, DRogers, DataSurfer, Discospinster, Ehabmehedi, Faradayplank, Furrykef, Gggh, Gilliam, GreatWhiteNortherner, Hooperbloob,
J.delanoy, Jewbacca, Jiang, Jtowler, Kmerenkov, Krashlandon, Lordfaust, Marek69, Mheusser, Michael Rawdon, Michael miceli, Michig, Myhister, Notinasnaid, Onebyone, Paul August,
Pegship, Pinecar, Qaddosh, Ravedave, Ravindrat, SRCHFD, SkyWalker, Solde, Spokeninsanskrit, Steven Zhang, Svick, TheRanger, Thv, Walter Grlitz, Wyldtwyst, Zhenqinli, 154 anonymous
Article Sources and Contributors
265
edits
System testing Source: http://en.wikipedia.org/w/index.php?oldid=506381000 Contributors: A bit iffy, Abdull, AliveFreeHappy, Aman sn17, Anant vyas2002, AndreChou, Argon233, Ash,
Beland, Bex84, Bftsg, BiT, Bobo192, Ccompton, ChristianEdwardGruber, Closedmouth, DRogers, Downsize43, Freek Verkerk, GeorgeStepanek, Gilliam, Harveysburger, Hooperbloob, Ian
Dalziel, Jewbacca, Kingpin13, Kubigula, Lauwerens, Manway, Michig, Morning277, Mpilaeten, Myhister, NickBush24, Philip Trueman, Pinecar, RCHenningsgard, RainbowOfLight, Ravialluru,
Ronz, Solde, Ssweeting, Suffusion of Yellow, SusanLarson, Thv, Tmopkisn, Vishwas008, Vmahi9, Walter Grlitz, Wchkwok, Woohookitty, Zhenqinli, 151 anonymous edits
System integration testing Source: http://en.wikipedia.org/w/index.php?oldid=495453620 Contributors: Aliasgarshakir, Andreas Kaufmann, Andrewmillen, Anna Lincoln, AvicAWB, Barbzie,
Bearcat, Charithk, DRogers, Fat pig73, Flup, Gaius Cornelius, JeromeJerome, Jpbowen, Kku, Kubanczyk, Mawcs, Mikethegreen, Myasuda, Panchitaville, Pinecar, Radagast83, Rich Farmbrough,
Rwwww, Walter Görlitz, 36 anonymous edits
Acceptance testing Source: http://en.wikipedia.org/w/index.php?oldid=509553464 Contributors: Ace of Spades, Alphajuliet, Amire80, Amitg47, Apparition11, Ascnder, Bournejc, Caesura,
Caltas, CapitalR, Carse, Chris Pickett, Claudio figueiredo, CloudNine, Conversion script, DRogers, DVD R W, Dahcalan, Daniel.r.bell, Davidbatet, Dhollm, Divyadeepsharma, Djmckee1,
Dlevy-telerik, Eco30, Eloquence, Emilybache, Enochlau, F, GTBacchus, GraemeL, Granburguesa, Gwernol, HadanMarv, Halovivek, Hooperbloob, Hu12, Hutcher, Hyad, Infrablue,
Jamestochter, Jemtreadwell, Jgladding, JimJavascript, Jmarranz, Jpp, Kaitanen, Kekir, Ksnow, Liftoph, Lotje, MartinDK, MeijdenB, Meise, Melizg, Michael Hardy, Midnightcomm, Mifter, Mike
Rosoft, Mjemmeson, Mortense, Mpilaeten, Muhandes, Myhister, Myroslav, Newbie59, Normxxx, Old Moonraker, Olson.sr, PKT, Panzi, Pearle, PeterBrooks, Phamti, Pill, Pine, Pinecar, Qem,
RHaworth, RJFerret, Riki, Rlsheehan, Rodasmith, Salimchami, Shirulashem, Swpb, TheAMmollusc, Timmy12, Timo Honkasalo, Toddst1, Viridae, Walter Görlitz, Well-rested, Whaa?, William
Avery, Winterst, Woohookitty, 168 anonymous edits
Risk-based testing Source: http://en.wikipedia.org/w/index.php?oldid=502779013 Contributors: Andreas Kaufmann, Belgarath7000, DRogers, Deb, Gilliam, Henri662, Herve272, Hu12,
IQDave, Jim1138, Lorezsky, MSGJ, Noq, Paulgerrard, Ronhjones, Ronz, Tdjones74021, VestaLabs, Walter Görlitz, 20 anonymous edits
Software testing outsourcing Source: http://en.wikipedia.org/w/index.php?oldid=504440172 Contributors: Algebraist, Anujgupta2 979, ChrisGualtieri, Dawn Bard, Discospinster, Elagatis,
Gonarg90, Hu12, JaneStewart123, Kirk Hilliard, Lolawrites, MelbourneStar, NewbieIT, Piano non troppo, Pinecar, Pratheepraj, Promoa1, Robofish, TastyPoutine, Tedickey, Tesstty, Tom1492,
Woohookitty, 19 anonymous edits
Tester driven development Source: http://en.wikipedia.org/w/index.php?oldid=460931859 Contributors: Arneeiri, BirgitteSB, Chris Pickett, Fram, Gdavidp, Int19h, Josh Parris, Mdd, Pinecar,
Sardanaphalus, Smjg, Tony1, 11 anonymous edits
Test effort Source: http://en.wikipedia.org/w/index.php?oldid=496553341 Contributors: Chemuturi, Chris the speller, Contributor124, DCDuring, Downsize43, Erkan Yilmaz, Furrykef,
Helodia, Lakeworks, Lockley, Mr pand, Notinasnaid, Pinecar, Ronz, 12 anonymous edits
IEEE 829 Source: http://en.wikipedia.org/w/index.php?oldid=500923833 Contributors: 1exec1, A.R., Antariksawan, CesarB, Das.steinchen, Donmillion, Firefox13, Fredrik, GABaker,
Ghalloun, Grendelkhan, Haakon, Inukjuak, J.delanoy, Korath, Malindrom, Matthew Stannard, Methylgrace, Nasa-verve, Paulgerrard, Pinecar, Pmberry, RapPayne, Robertvan1, Shizhao, Utuado,
Walter Görlitz, 38 anonymous edits
Test strategy Source: http://en.wikipedia.org/w/index.php?oldid=503207289 Contributors: AlexWolfx, Ankitamor, Autoerrant, Avalon, BartJandeLeuw, Christopher Lamothe, D6, Denisarona,
Downsize43, Fabrictramp, Freek Verkerk, HarlandQPitt, Henri662, Jayaramg, John of Reading, Liheng300, LogoX, M4gnum0n, Malcolma, Mandarhambir, Mboverload, Michael Devore,
Minhaj21, Pinecar, RHaworth, Rpyle731, Santhoshmars, Shepard, Walter Görlitz, 77 anonymous edits
Test plan Source: http://en.wikipedia.org/w/index.php?oldid=504821395 Contributors: -Ril-, Aaronbrick, Aecis, Alynna Kasmira, AndrewStellman, AuburnPilot, Bashnya25, Charles
Matthews, Craigwb, Dave6, Downsize43, Drable, E Wing, Foobaz, Freek Verkerk, Grantmidnight, Hennessey, Patrick, Hongooi, Icbkr, Ismarc, Jaganathcfs, Jason Quinn, Jeff3000, Jgorse,
Jlao04, Ken tabor, Kindx, Kitdaddio, LogoX, M4gnum0n, MarkSweep, Matthew Stannard, Mellissa.mcconnell, Michig, Mk*, Moonbeachx, NHSavage, NSR, Niceguyedc, OllieFury,
Omicronpersei8, OndraK, Oriwall, Padma vgp, Pedro, Pine, Pinecar, RJFJR, RL0919, Randhirreddy, Rlsheehan, Roshanoinam, Rror, SWAdair, Schmiteye, Scope creep, Shadowjams, SimonP,
Stephenb, Tgeairn, The Thing That Should Not Be, Theopolisme, Thunderwing, Thv, Uncle Dick, Wacko, Waggers, Walter Görlitz, Yparedes, 335 anonymous edits
Traceability matrix Source: http://en.wikipedia.org/w/index.php?oldid=506386309 Contributors: AGK, Ahoerstemeier, Andreas Kaufmann, Charles Matthews, ChrisGualtieri, Craigwbrown,
DRogers, Dgw, Discospinster, Donmillion, Excirial, Fry-kun, Furrykef, Graham87, Gurch, IPSOS, Kuru, Mdd, MrOllie, Pamar, Pinecar, Pravinparmarce, Rettetast, Ronz, Sardanaphalus,
Shambhaviroy, Thebluemanager, Timneu22, Walter Görlitz, WikiTome, , 86 anonymous edits
Test case Source: http://en.wikipedia.org/w/index.php?oldid=508587144 Contributors: AGK, AliveFreeHappy, Allstarecho, Chris Pickett, ColBatGuano, Cst17, DarkBlueSeid, DarkFalls, Darth
Panda, Eastlaw, Epbr123, Flavioxavier, Freek Verkerk, Furrykef, Gothmog.es, Hooperbloob, Iggy402, Iondiode, Jtowler, Jwh335, Jwoodger, Kevinmon, LeaveSleaves, Lenoxus, MadGuy7023,
Magioladitis, Maniacs29, MaxHund, Mdd, Merutak, Mo ainm, Mr Adequate, MrOllie, Nibblus, Niri.M, Nmthompson, Pavel Zubkov, Peter7723, Pilaf, Pinecar, PrimeObjects, RJFJR,
RainbowOfLight, RayAYang, Renu gautam, Sardanaphalus, Sciurin, Sean D Martin, Shervinafshar, Srikaaa123, Suruena, System21, Thejesh.cg, Thorncrag, Thv, Tomaxer, Travelbird, Velella,
Vikasbucha, Vrenator, Walter Görlitz, Wernight, Yennth, Zack wadghiri, 190 anonymous edits
Test data Source: http://en.wikipedia.org/w/index.php?oldid=482013544 Contributors: AlexandrDmitri, Alvestrand, Craigwb, Fg2, Gakiwate, JASpencer, Nnesbit, Onorem, Pinecar, Qwfp,
Stephenb, Uncle G, 14 anonymous edits
Test suite Source: http://en.wikipedia.org/w/index.php?oldid=509101088 Contributors: A-hiro, Abdull, Abhirajan12, Alai, Andreas Kaufmann, CapitalR, Denispir, Derek farn, FreplySpang,
JzG, KGasso, Kenneth Burgener, Lakeworks, Liao, Martpol, Newman.x, Pinecar, Stephenwanjau, Tomjenkins52, Unixtastic, VasilievVV, Vasywriter, Walter Görlitz, 31 anonymous edits
Test script Source: http://en.wikipedia.org/w/index.php?oldid=396053008 Contributors: Alai, Eewild, Falterion, Freek Verkerk, Hooperbloob, JLaTondre, JnRouvignac, Jruuska, Jwoodger,
Michig, PaulMEdwards, Pfhjvb0, Pinecar, RJFJR, Rchandra, Redrocket, Sean.co.za, Sujaikareik, Teiresias, Thv, Ub, Walter Görlitz, 28 anonymous edits
Test harness Source: http://en.wikipedia.org/w/index.php?oldid=494651880 Contributors: Abdull, Ali65, Allen Moore, Avalon, Brainwavz, Caesura, Caknuck, Calrfa Wn, ChrisGualtieri,
DenisYurkin, Downtown dan seattle, Dugrocker, Furrykef, Greenrd, Kgaughan, Ktr101, Pinecar, SQAT, Tony Sidaway, Urhixidur, Wlievens, 45 anonymous edits
Static testing Source: http://en.wikipedia.org/w/index.php?oldid=483717655 Contributors: Aldur42, Amberved, Andreas Kaufmann, Avenue X at Cicero, Bearcat, Carlo.milanesi, Chris Pickett,
Epim, Erkan Yilmaz, Iflapp, Iq9, Jim1138, Kauczuk, Kothiwal, Nla128, Pinecar, Railwayfan2005, Rnsanchez, Robert Skyhawk, Ruud Koot, Sripradha, TiMike, Walter Görlitz, Yaris678, 34
anonymous edits
Software review Source: http://en.wikipedia.org/w/index.php?oldid=490605938 Contributors: A Nobody, AliveFreeHappy, Andreas Kaufmann, Audriusa, Bovineone, Colonel Warden, Danno
uk, David Biddulph, Dima1, Donmillion, Gail, Irfibwp, Jschnur, Karada, Madjidi, Matchups, Mitatur, Rcsprinter123, Rolf acker, Tassedethe, William M. Connolley, Woohookitty, XLerate, 38
anonymous edits
Software peer review Source: http://en.wikipedia.org/w/index.php?oldid=500479139 Contributors: AliveFreeHappy, Andreas Kaufmann, Anonymous101, Bovineone, Danno uk, Donmillion,
Ed Brey, Ed Poor, Gronky, Karada, Kezz90, Kjenks, Lauri.pirttiaho, MarkKozel, Michael Hardy, Sdornan, Zakahori, 12 anonymous edits
Software audit review Source: http://en.wikipedia.org/w/index.php?oldid=416400975 Contributors: Andreas Kaufmann, Donmillion, JaGa, Katharineamy, Kralizec!, Romain Jouvet,
Tregoweth, Woohookitty, Zro, 6 anonymous edits
Software technical review Source: http://en.wikipedia.org/w/index.php?oldid=491024605 Contributors: Andreas Kaufmann, Donmillion, Edward, Gnewf, Sarahj2107, 5 anonymous edits
Management review Source: http://en.wikipedia.org/w/index.php?oldid=505482640 Contributors: Donmillion, Vasywriter
Software inspection Source: http://en.wikipedia.org/w/index.php?oldid=486404635 Contributors: A.R., Alvarogili, Andreas Kaufmann, AndrewStellman, Anujasp, Arminius, AutumnSnow,
BigMikeW, Bigbluefish, Bovlb, David Biddulph, Ebde, Ft1, Fuzheado, ISTB351, Ivan Pozdeev, JohnDavidson, Kku, Michaelbusch, Mtilli, Nmcou, Occono, PeterNuernberg, Rmallins, Seaphoto,
Secdio, Stephenb, SteveLoughran, Vivio Testarossa, Wik, 63 anonymous edits
Fagan inspection Source: http://en.wikipedia.org/w/index.php?oldid=506107759 Contributors: Altenmann, Arthena, Ash, Attilios, Bigbluefish, Can't sleep, clown will eat me, ChrisG,
Courcelles, Drbreznjev, Epeefleche, Gaff, Gaius Cornelius, Gimmetrow, Hockeyc, Icarusgeek, Iwearavolcomhat, JIP, Kezz90, MacGyverMagic, Mjevans, Mkjadhav, Nick Number, Okok,
Pedro.haruo, Slightsmile, Tagishsimon, Talkaboutquality, Tassedethe, The Font, The Letter J, Zerodamage, Zundark, 43 anonymous edits
Software walkthrough Source: http://en.wikipedia.org/w/index.php?oldid=455795839 Contributors: Andreas Kaufmann, DanielPharos, Donmillion, Gnewf, Jherm, Jocoder, Karafias, Ken g6,
MathsPoetry, OriolBonjochGassol, Reyk, Stuartyeates, 12 anonymous edits
Code review Source: http://en.wikipedia.org/w/index.php?oldid=506619278 Contributors: 5nizza, Adange, Aivosto, AlcherBlack, AliveFreeHappy, Alla tedesca, Andreas Kaufmann, Argaen,
Bevo, BlueNovember, Brucevdk, Bunyk, Cander0000, CanisRufus, ChipX86, Craigwb, DanielVale, Derek farn, Digsav, DoctorCaligari, Dwheeler, Ed Poor, Enigmasoldier, Flamurai, Fnegroni,
Furrykef, Gbolton, Gioto, Hooperbloob, Intgr, J.delanoy, Jabraham mw, Jamelan, Jesselong, Khalid hassani, Kirian, Kispa, Lauciusa, Madjidi, Martinig, Matchups, MattOConnor,
MattiasAndersson, MrOllie, Mratzloff, Msabramo, Mutilin, NateEag, Nevware, Oneiros, Pcb21, Pchap10k, Project2501a, Pvlasov, Rajeshd, Ronz, Rrobason, Ryguasu, Salix alba, Scottb1978,
Sh41pedia, Smartbear, Srice13, StefanVanDerWalt, Steleki, Stephenb, Stevietheman, Sverdrup, Swtechwr, Talkaboutquality, Themfromspace, ThurnerRupert, Tlaresch, Tom-, TyA, Ynhockey,
114 anonymous edits
Automated code review Source: http://en.wikipedia.org/w/index.php?oldid=506353422 Contributors: Aivosto, AliveFreeHappy, Amoore, Andreas Kaufmann, Closedmouth, Download, Elliot
Shank, Fehnker, Gaudol, HelloAnnyong, IO Device, JLaTondre, Jabraham mw, JnRouvignac, John Vandenberg, Jxramos, Leolaursen, Lmerwin, Mellery, Nacx08, NathanoNL, OtherMichael,
Pgr94, Ptrb, Pvlasov, RedWolf, Rwwww, Swtechwr, ThaddeusB, Tracerbee, Wknight94, 28 anonymous edits
Code reviewing software Source: http://en.wikipedia.org/w/index.php?oldid=394379382 Contributors: Aivosto, AliveFreeHappy, Amoore, Andreas Kaufmann, Closedmouth, Download, Elliot
Shank, Fehnker, Gaudol, HelloAnnyong, IO Device, JLaTondre, Jabraham mw, JnRouvignac, John Vandenberg, Jxramos, Leolaursen, Lmerwin, Mellery, Nacx08, NathanoNL, OtherMichael,
Pgr94, Ptrb, Pvlasov, RedWolf, Rwwww, Swtechwr, ThaddeusB, Tracerbee, Wknight94, 28 anonymous edits
Static code analysis Source: http://en.wikipedia.org/w/index.php?oldid=410764388 Contributors: 212.153.190.xxx, A5b, Ablighnicta, Ahoerstemeier, Alex, AliveFreeHappy, Andareed,
Andreas Kaufmann, Antonielly, Anujgoyal, Berrinam, CaliforniaAliBaba, Conversion script, Creando, Crowfeather, Cryptic, DatabACE, David.Monniaux, Dbelhumeur02, Dekisugi, Derek farn,
Diego Moya, Ebde, Ed Brey, Erkan Yilmaz, Fderepas, Ferengi, FlashSheridan, Gadfium, Goffrie, GraemeL, Graham87, Ground Zero, Hoco24, Ixfd64, JForget, Jabraham mw, JacobTrue,
Jan1nad, Jisunjang, JoelSherrill, JohnGDrever, Jpbowen, Jschlosser, Julesd, Kazvorpal, Kravietz, Ks0stm, Kskyj, Leibniz, Lgirvin, Marudubshinki, Mike Van Emmerik, Mutilin, Peter M Gerdes,
Pinecar, Ptrb, Qwertyus, Renox, Rjwilmsi, Rpm, Ruud Koot, Rwwww, Sashakir, Schwallex, Shadowjams, StaticCast, Sttaft, Suruena, Swtechwr, TUF-KAT, Ted Longstaffe, Thumperward, Thv,
Tinus74, Tjarrett, Tregoweth, Villeez, Vina, Vkuncak, Vp, Wbm1058, Wlievens, Wolfch, Yonkie, 123 anonymous edits
List of tools for static code analysis Source: http://en.wikipedia.org/w/index.php?oldid=509485999 Contributors: 2A02:788:13:20:286A:AA0C:CB90:D91C, 70x7plus1, A.zitzewitz,
AManWithNoPlan, Ablighnicta, Achituv, Adarw, Aetheling, Aivosto, Albert688, Alex, Alexcenthousiast, Alexius08, Alextelea, AliveFreeHappy, Alumd, Amette, Amire80, Amoore, Andreas
Kaufmann, AndrewHowse, Angusmclellan, Apcman, Ariefwn, Armadillo-eleven, Asim, Athaenara, Atif.hussain, Avraham, Azrael Nightwalker, BB-Froggy, Baijum81, Bakotat, Bantoo12,
Bchess, Bdoserror, Bellingard, Benneman, Benrick, Bensonwu, Bernhard.kaindl, Bgi, Bjcosta, Bknittel, Bkuhn, Bnmike, Borishollas, Breakpoint, Camwik75, Caoilte.guiry, Capi x,
Catamorphism, Cate, Cgisquet, Checkshirt, Chick Bowen, Claytoncarney, Collinpark, Cpparchitect, Cryptic, CxQL, Dash, DatabACE, David Gerard, David wild2, David.Monniaux,
Dbelhumeur02, Dclucas, Dekisugi, Demarant, Derek farn, Devpitcher, Diego Moya, Dinis.Cruz, Diomidis Spinellis, Disavian, Dmkean, Dmooney, Dmulter, Dnozay, DomQ,
Donaldsbell@yahoo.com, Douglaska, Dpnew, Drdeee, Drpaule, Dtgriscom, Dvice null, Dwheeler, Ed Brey, Ehajiyev, Elliot Shank, Epierrel, Esdev, Ettl.martin, Exatex, Excirial, Faganp,
Falcon9x5, Felmon, FergusBolger, Fewaffles, Fishoak, Flamingcyanide, FlashSheridan, Fowlay, Frap, Freddy.mallet, FutureDomain, Fwaldman, G b hall, Gahs, Gaius Cornelius, GarenParham,
Gaudol, Gbickford, Gesslein, Giggy, Gogege, Gotofritz, Grauenwolf, Guillem.Bernat, Gwandoya, Haakon, Hello484, HelloAnnyong, Henk Poley, HillGyuri, Hooperbloob, Hsardin, Hyd danmar,
Iceberg1414, Imeshev, Imology, InaToncheva, InaTonchevaToncheva, Irishguy, Issam lahlali, Istoyanov, Iulian.serbanoiu, JLaTondre, Jabraham mw, Jamieayre, Javier.salado, Jayabra17, Jayjg,
Jcuk 2007, Jdabney, Jeff Song, Jehiah, Jeodesic, Jerryobject, Jersyko, Jessethompson, Jisunjang, JnRouvignac, Joebeone, John of Reading, JohnGDrever, Jopa fan, Jpbowen, Jredwards,
Jschlosser, Jsub, JzG, Kengell, Kenguest, Kent SofCheck, Kfhiejf6, Kgnazdowsky, Khozman, Klausjansen, Kravietz, Krischik, Krlooney, Kskyj, LDRA, Lajmon, Lalb, Libouban, LilHelpa,
Linehanjt, Llib xoc, Lmerwin, Malcolma, Mandrikov, MarkusLitz, Martarius, MartinMarcher, Matsgd, Mcculley, Mdjohns5, Mike Van Emmerik, Mikeblas, Minhyuk.kwon, Mj1000, Mmernex,
Monathan, Moonwolf14, Mrlongleg, Mrwojo, Msmithers6, N5iln, Nandorjozsef, Nandotamu, Nbougalis, Neerajsangal, NewSkool, Newtang, Nick Number, Nickj, Nico.anquetil, Nixeagle,
Northgrove, Notopia, O2user, Oorang, Optimyth, Orangemike, PSeibert, PSmacchia, Parasoft-pl, Parikshit Narkhede, PaulEremeeff, Pauljansen42, Paulwells, Pausch, Pavel Vozenilek, Pdohara,
Perrella, Petdance, Pfunk1410, Phatom87, Piano non troppo, Pinecar, Pitkelevo, Pizzutillo, Pkortve, Pkuczynski, Pmjtoca, Pmollins, Pokeypokes, Prasanna vps, PraveenNet, Psychonaut, Pth81,
Ptrb, Pvlasov, Qu3a, RHaworth, RMatthias, Rainco, Rajah9, Ralthor, Rdbuckley, Rhuuck, Rich Farmbrough, Richsz, RickScott, Rodolfo Borges, Romgerale, Rosen, Rpapo, Rpelisse, Rrtuckwell,
Rssh, Runehalfdan, Runtime, Ruud Koot, Sachrist, Sadovnikov, Sander123, Sashakir, Schwallex, Scovetta, Serge Baranovsky, Sffubs, ShelfSkewed, Shiva.rock, Siva77, Skilner, Skrik69,
Solodon, Sourceanalysis, Sreich, StanContributor, Staniuk, StaticCast, Stephen.gorton, Sttaft, Stubb, Swtechwr, Tabletop, Taed, Tasc, Tddcodemaster, Tedickey, Test-tools, The.gaboo,
Timekeeper77, Tjarrett, Tkvavle, Tlegall, Tlownie, Tomtheeditor, Tonygrout, Toutoune25, Traal, Tracerbee, Tradsud, Tregoweth, Uncopy, Vaucouleur, Velizar.vesselinov, Venkatreddyc,
Verdatum, Verilog, Vfeditor, Vor4, Vp, Vrenator, Wakusei, Wdfarmer, Wegra, Weregerbil, Wesnerm, Wickorama, Wiki jmeno, Wikieditoroftoday, Wikimaf, Woohookitty, Wws, Xodlop, Xoloz,
Yansky, Ydegraw, Yoderj, Ysangkok, Zfalconz, 802 anonymous edits
GUI software testing Source: http://en.wikipedia.org/w/index.php?oldid=413588113 Contributors: 10metreh, Alexius08, AliveFreeHappy, Andreas Kaufmann, Anupam, Chaser, Cmbay,
Craigwb, Dreftymac, Dru of Id, Equatin, Gururajs, Hardburn, Hesa, Hu12, Imroy, Jeff G., JnRouvignac, Josephtate, Jruuska, Jwoodger, Ken g6, Liberatus, MER-C, Mcristinel, Mdjohns5, Mild
Bill Hiccup, O.Koslowski, Paul6feet1, Pinecar, Pnm, Rdancer, Rich Farmbrough, Rjwilmsi, Rockfang, Ronz, SAE1962, SiriusDG, Staceyeschneider, SteveLoughran, Steven Zhang,
Unforgettableid, Wahab80, Wakusei, Walter Görlitz, 59 anonymous edits
Usability testing Source: http://en.wikipedia.org/w/index.php?oldid=508578162 Contributors: 137.28.191.xxx, A Quest For Knowledge, Aapo Laitinen, Al Tereego, Alan Pascoe, Alvin-cs,
Antariksawan, Arthena, Azrael81, Bihco, Bkillam, Bkyzer, Brandon, Breakthru10technologies, Bretclement, ChrisJMoor, Christopher Agnew, Cjohansen, Ckatz, Conversion script, Crnica,
DXBari, Dennis G. Jerz, Dickohead, Diego Moya, Dobrien, DrJohnBrooke, Dvandersluis, EagleFan, Farreaching, Fredcondo, Geoffsauer, Gmarinp, Gokusandwich, GraemeL, Gubbernet,
Gumoz, Headbomb, Hede2000, Hooperbloob, Hstetter, JDBravo, JaGa, Jean-Frédéric, Jetuusp, Jhouckwh, Jmike80, Jtcedinburgh, Karl smith, Kolyma, Kuru, Lakeworks, Leonard^Bloom,
LizardWizard, Malross, Mandalaz, Manika, MaxHund, Mchalil, Miamichic, Michael Hardy, MichaelMcGuffin, MikeBlockQuickBooksCPA, Millahnna, Mindmatrix, Omegatron, Pavel
Vozenilek, Pghimire, Philipumd, Pigsonthewing, Pindakaas, Pinecar, QualMod, Ravialluru, Researcher1999, Rich Farmbrough, Rlsheehan, Ronz, Rossami, Schmettow, Shadowjams, Siddhi,
Spalding, Tamarkot, Technopat, Tobias Bergemann, Toghome, Tomhab, Toomuchwork, TwoMartiniTuesday, UsabilityCDSS, Vmahi9, Wikinstone, Wikitonic, Willem-Paul, Woohookitty,
Wwheeler, Yettie0711, ZeroOne, 140 anonymous edits
Think aloud protocol Source: http://en.wikipedia.org/w/index.php?oldid=494121725 Contributors: Akamad, Angela, Aranel, Calebjc, Crnica, DXBari, Delldot, Diego Moya, Dragice, Hetar,
Icairns, Jammycaketin, Khalid hassani, Manika, Ms2ger, Nuggetboy, Ofol, Ohnoitsjamie, PeregrineAY, Pinecar, Robin S, Robksw, Ronz, Sae1962, Schultem, Shanes, Shevek57, Simone.borsci,
Suruena, TIY, Technopat, Tillwe, Wik, Zojiji, Zunk, 26 anonymous edits
Usability inspection Source: http://en.wikipedia.org/w/index.php?oldid=382541633 Contributors: Andreas Kaufmann, Diego Moya, Lakeworks, 2 anonymous edits
Cognitive walkthrough Source: http://en.wikipedia.org/w/index.php?oldid=498045938 Contributors: American Eagle, Andreas Kaufmann, Avillia, Beta m, DXBari, David Eppstein, Diego
Moya, Elusive Pete, Firsfron, FrancoisJordaan, Gene Nygaard, Karada, Kevin B12, Lakeworks, Macdorman, Masran Silvaris, Moephan, Naerii, Quale, Rdrozd, Rich Farmbrough, SimonB1212,
Spalding, Srbauer, SupperTina, Tassedethe, Vacarme, Wavelength, Xionbox, 35 anonymous edits
Heuristic evaluation Source: http://en.wikipedia.org/w/index.php?oldid=498325831 Contributors: 0403554d, Andreas Kaufmann, Angela, Art LaPella, Bigpinkthing, Catgut, Clayoquot,
DXBari, DamienT, Delldot, Diego Moya, Edward, Felix Folio Secundus, Fredcondo, Fyhuang, Hugh.glaser, JamesBWatson, Jonmmorgan, JulesH, Karada, KatieUM, Khazar, Kjtobo,
Lakeworks, Luiscarlosrubino, Mrmatiko, PhilippWeissenbacher, RichardF, Rjwilmsi, Ronz, SMasters, Subversive, Turadg, Verne Equinox, Wikip rhyre, Woohookitty, Zeppomedio, 53
anonymous edits
Pluralistic walkthrough Source: http://en.wikipedia.org/w/index.php?oldid=504055506 Contributors: Andreas Kaufmann, ChrisGualtieri, Diego Moya, Lakeworks, Minnaert, RHaworth, Team
Estonia, 4 anonymous edits
Comparison of usability evaluation methods Source: http://en.wikipedia.org/w/index.php?oldid=483891612 Contributors: Andreala, Diego Moya, Eastlaw, Jtcedinburgh, Lakeworks,
RHaworth, Ronz, Simone.borsci, 5 anonymous edits
Image Sources, Licenses and Contributors
File:Blackbox.svg Source: http://en.wikipedia.org/w/index.php?title=File:Blackbox.svg License: Public Domain Contributors: Original uploader was Frap at en.wikipedia
File:ECP.png Source: http://en.wikipedia.org/w/index.php?title=File:ECP.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Nmondal
Image:mbt-overview.png Source: http://en.wikipedia.org/w/index.php?title=File:Mbt-overview.png License: Public Domain Contributors: Antti.huima, Monkeybait
Image:mbt-process-example.png Source: http://en.wikipedia.org/w/index.php?title=File:Mbt-process-example.png License: Public Domain Contributors: Antti.huima, Monkeybait
File:Three point flexural test.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Three_point_flexural_test.jpg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0
Contributors: Cjp24
File:US Navy 070409-N-3038W-002 Aviation Structural Mechanic 3rd Class Rene Tovar adjusts a connection point on a fixture hydraulic supply servo cylinder test station in the
hydraulics shop aboard the Nimitz-class aircraft carrier U.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:US_Navy_070409-N-3038W-002_Aviation_Structural_Mechanic_3rd_Class_Rene_Tovar_adjusts_a_connection_point_on_a_fixture_hydraulic_supply_servo_cylinder_test_station_in_the_hydraulics_shop_aboard_the_Nimitz-class_aircraft_carrier_U.jpg
License: Public Domain Contributors: -
File:2009-0709-earthquake.jpg Source: http://en.wikipedia.org/w/index.php?title=File:2009-0709-earthquake.jpg License: Public Domain Contributors: Photo Credit: Colorado State
University
File:US Navy 070804-N-1745W-122 A Sailor assigned to Aircraft Intermediate Maintenance Department (AIMD) tests an aircraft jet engine for defects while performing Jet Engine
Test Instrumentation, (JETI) Certification-Engine Runs.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:US_Navy_070804-N-1745W-122_A_Sailor_assigned_to_Aircraft_Intermediate_Maintenance_Department_(AIMD)_tests_an_aircraft_jet_engine_for_defects_while_performing_Jet_Engine_Test_Instrumentation,_(JETI)_Certification-Engine_Runs.jpg
License: Public Domain Contributors: -
File:TH11-50kN-pincer-grip.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TH11-50kN-pincer-grip.jpg License: Creative Commons Attribution 3.0 Contributors:
Ingeniero-aleman
File:THS527-50.jpg Source: http://en.wikipedia.org/w/index.php?title=File:THS527-50.jpg License: Creative Commons Attribution 3.0 Contributors: Ingeniero-aleman
File:TH-screw-grips.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TH-screw-grips.jpg License: GNU Free Documentation License Contributors: Ingeniero-aleman
File:THS766-5.jpg Source: http://en.wikipedia.org/w/index.php?title=File:THS766-5.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
File:THS314-2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:THS314-2.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
File:THS13k-02-200N.jpg Source: http://en.wikipedia.org/w/index.php?title=File:THS13k-02-200N.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
File:Temperaturkammer-spannzeug THS321-250-5.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Temperaturkammer-spannzeug_THS321-250-5.jpg License: Creative
Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
File:TH149 .jpg Source: http://en.wikipedia.org/w/index.php?title=File:TH149_.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
File:THS137-4-fr.jpg Source: http://en.wikipedia.org/w/index.php?title=File:THS137-4-fr.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ingeniero-aleman
http://www.grip.de
File:Biegevorrichtung TH165.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Biegevorrichtung_TH165.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Ingeniero-aleman
File:Abzugsvorrichtung TH50+SW .jpg Source: http://en.wikipedia.org/w/index.php?title=File:Abzugsvorrichtung_TH50+SW_.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: Ingeniero-aleman
Image:NUnit GUI.png Source: http://en.wikipedia.org/w/index.php?title=File:NUnit_GUI.png License: unknown Contributors: MaxSem
Image:CsUnit2.5Gui.png Source: http://en.wikipedia.org/w/index.php?title=File:CsUnit2.5Gui.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Manfred Lange
File:Test Automation Interface.png Source: http://en.wikipedia.org/w/index.php?title=File:Test_Automation_Interface.png License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Anandgopalkri
Image:Test-driven development.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Test-driven_development.PNG License: Creative Commons Attribution-Sharealike 3.0
Contributors: Excirial (Contact me, Contribs)
File:US Navy 090407-N-4669J-042 Sailors assigned to the air department of the aircraft carrier USS George H.W. Bush (CVN 77) test the ship's catapult systems during acceptance
trials.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:US_Navy_090407-N-4669J-042_Sailors_assigned_to_the_air_department_of_the_aircraft_carrier_USS_George_H.W._Bush_(CVN_77)_test_the_ship's_catapult_systems_during_acceptance_trials.jpg
License: Public Domain Contributors: -
Image:Fagan Inspection Simple flow.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fagan_Inspection_Simple_flow.svg License: Creative Commons Zero Contributors: Bignose
Image:Virzis Formula.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Virzis_Formula.PNG License: Public Domain Contributors: Original uploader was Schmettow at
en.wikipedia. Later version(s) were uploaded by NickVeys at en.wikipedia.
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/