Test-Driven Development Wikipedia Collection

This collection of Wikipedia articles covers test automation and test-driven development. Test automation uses software to automate manual software testing processes. There are two main approaches: code-driven testing, which exercises application programming interfaces, and graphical user interface testing, which simulates user interactions. Automated testing can improve reliability through better code coverage and more frequent testing, and can reduce costs compared to manual testing. Frameworks provide reusable components to simplify test automation. Popular automated testing tools include JUnit, Selenium, and HP QuickTest Professional.

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.

PDF generated at: Sat, 25 Sep 2010 01:05:13 UTC


Test-Driven Development
@ Wikipedia
Contents
Articles
Test automation 1
Test-driven development 5
Behavior Driven Development 11
Acceptance test 18
Integration testing 22
Unit testing 24
Code refactoring 29
Test case 33
xUnit 35
Test stubs 37
Mock object 38
Separation of concerns 42
Dependency injection 45
Dependency inversion principle 51
Assertion (computing) 52
References
Article Sources and Contributors 56
Image Sources, Licenses and Contributors 58
Article Licenses
License 59
Test automation
Compare with Manual testing.
Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.[1] Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
Overview
Although manual testing may find many defects in a software application, it is a laborious and time-consuming process, and it may not be effective in finding certain classes of defects. Test automation is the process of writing a computer program to do testing that would otherwise need to be done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often the most cost-effective method for software products that have a long maintenance life, because even minor patches over the lifetime of the application can cause features that were working at an earlier point in time to break.
There are two general approaches to test automation:
Code-driven testing. The public (usually) interfaces to classes, modules, or libraries are tested with a variety of input arguments to validate that the results returned are correct.
Graphical user interface testing. A testing framework generates user interface events such as keystrokes and mouse clicks, and observes the resulting changes in the user interface, to validate that the observable behavior of the program is correct.
Test automation tools can be expensive, and automation is usually employed in combination with manual testing. Test automation can be made cost-effective in the longer term, especially when used repeatedly in regression testing.
One way to generate test cases automatically is model-based testing, in which a model of the system is used for test case generation; research continues into a variety of alternative methodologies for doing so.
What to automate, when to automate, and even whether automation is really needed are crucial decisions that the testing (or development) team must make. Selecting the correct features of the product for automation largely determines the success of the effort. Automating unstable features or features that are undergoing change should be avoided.[2]
Code-driven testing
A growing trend in software development is the use of testing frameworks such as the xUnit frameworks (for
example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code
are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to
verify that the program runs as expected.
Code-driven test automation is a key feature of agile software development, where it is known as test-driven development (TDD). Unit tests are written to define the functionality before the code is written. Only when all tests pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration. It is considered more reliable because the code coverage is better, and because the tests are run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix them. Finally, code refactoring is safer: transforming the code into a simpler form with less duplication, but equivalent behavior, is much less likely to introduce new defects.
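A minimal code-driven test can be sketched with Python's standard unittest module, one of the xUnit family of frameworks mentioned above. The function and its behavior are illustrative assumptions, not taken from the article:

```python
import unittest

def apply_discount(price, percent):
    """Code under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test calls the public interface with chosen inputs
    # and asserts on the returned result.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with "python -m unittest <file>" to execute the test cases; a failing assertion pinpoints the input that exposed the defect.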
Graphical User Interface (GUI) testing
Many test automation tools provide record-and-playback features that allow users to interactively record user actions and replay them any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development, and it can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities.
A variation on this type of tool is for testing of web sites. Here, the "interface" is the web page. This type of tool also requires little or no software development. However, such a framework uses entirely different techniques, because it is reading HTML instead of observing window events.
Another variation is scriptless test automation that does not use record and playback, but instead builds a model of
the application under test and then enables the tester to create test cases by simply editing in test parameters and
conditions. This requires no scripting skills, but has all the power and flexibility of a scripted approach. Test-case
maintenance is easy, as there is no code to maintain and as the application under test changes the software objects
can simply be re-learned or added. It can be applied to any GUI-based software application.
What to test
Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem
detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily
automating tests in an end-to-end fashion.
Common requirements to keep in mind when thinking of test automation include:
Platform and OS independence
Data driven capability (Input Data, Output Data, Meta Data)
Customizable Reporting (DB Access, crystal reports)
Easy debugging and logging
Version control friendly - minimal binary files
Extensible & Customizable (Open APIs to be able to integrate with other tools)
Common Driver (For example, in the Java development ecosystem, that means Ant or Maven and the popular
IDEs). This enables tests to integrate with the developers' workflows.
Support unattended test runs for integration with build processes and batch runs. Continuous Integration servers
require this.
Email Notifications (automated notification on failure or threshold levels). This may be the test runner or tooling
that executes it.
Support distributed execution environment (distributed test bed)
Distributed application support (distributed SUT)
Framework approach in automation
A framework is an integrated system that sets the rules of automation for a specific product. The system integrates the function libraries, test data sources, object details and various reusable modules. These components act as small building blocks that are assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort.
There are various types of frameworks, categorized by the automation component they leverage:
1. Data-driven testing
2. Modularity-driven testing
3. Keyword-driven testing
4. Hybrid testing
5. Model-based testing
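The data-driven approach from the list above can be sketched as a single test routine fed from a table of inputs and expected outputs. The data table and the add function are illustrative assumptions, not part of any particular framework:

```python
import unittest

# Data source: each row is (input_a, input_b, expected_result).
# In a real data-driven framework this table might come from a
# spreadsheet, CSV file, or database rather than being inline.
TEST_DATA = [
    (1, 2, 3),
    (-5, 5, 0),
    (0.5, 0.25, 0.75),
]

def add(a, b):
    """Code under test."""
    return a + b

class DataDrivenAddTest(unittest.TestCase):
    def test_add_from_table(self):
        # One test routine drives all rows; subTest reports
        # each failing row individually.
        for a, b, expected in TEST_DATA:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)
```

Adding coverage then means adding rows of data, not writing new test code, which is the point of the data-driven style.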
Popular Test Automation Tools
Tool Name | Company Name | Latest Version
HP QuickTest Professional | HP | 10.5
IBM Rational Functional Tester | IBM Rational | 8.1.0.3
Parasoft SOAtest | Parasoft | 9.0
Rational Robot | IBM Rational | 2001
Selenium | Open source | 1.0.6
SilkTest | Micro Focus | 2009
TestComplete | SmartBear Software | 8.0
TestPartner | Micro Focus | 6.3
Watir | Open source | 1.6.5
See also
List of GUI testing tools
Software testing
System testing
Test automation framework
Unit test
References
[1] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 74. ISBN 0470042125.
[2] Brian Marick. "When Should a Test Be Automated?" (http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2010). StickyMinds.com. Retrieved 2009-08-20.
Elfriede Dustin, et al.: Automated Software Testing. Addison-Wesley, 1999, ISBN 0-20143-287-0.
Elfriede Dustin, et al.: Implementing Automated Software Testing. Addison-Wesley, ISBN-10 0321580516, ISBN-13 978-0321580511.
Mark Fewster & Dorothy Graham (1999). Software Test Automation. ACM Press/Addison-Wesley. ISBN 978-0201331400.
Roman Savenkov: How to Become a Software Tester. Roman Savenkov Consulting, 2008, ISBN 978-0-615-23372-7.
Hong Zhu et al. (2008). AST '08: Proceedings of the 3rd International Workshop on Automation of Software Test (http://portal.acm.org/citation.cfm?id=1370042#). ACM Press. ISBN 978-1-60558-030-2.
External links
Automation Myths (http://www.benchmarkqa.com/pdf/papers_automation_myths.pdf) by M. N. Alam
Generating Test Cases Automatically (http://www.osc-es.de/media/pdf/dSPACENEWS2007-3_TargetLink_EmbeddedTester_en_701.pdf)
Practical Experience in Automated Testing (http://www.methodsandtools.com/archive/archive.php?id=33)
Test Automation: Delivering Business Value (http://www.applabs.com/internal/app_whitepaper_test_automation_delivering_business_value_1v00.pdf)
Test Automation Snake Oil (http://www.satisfice.com/articles/test_automation_snake_oil.pdf) by James Bach
When Should a Test Be Automated? (http://www.stickyminds.com/r.asp?F=DART_2010) by Brian Marick
Why Automation Projects Fail (http://martproservice.com/Why_Software_Projects_Fail.pdf) by Art Beall
Guidelines for Test Automation Framework (http://info.allianceglobalservices.com/Portals/30827/docs/test automation framework and guidelines.pdf)
Advanced Test Automation (http://www.testars.com/docs/5GTA.pdf)
Test-driven development
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards. Kent Beck, who is credited with having developed or 'rediscovered' the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[1]
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999,[2] but more recently has created more general interest in its own right.[3]
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[4]
Requirements
Test-driven development requires developers to create automated unit tests that define code requirements
(immediately) before writing the code itself. The tests contain assertions that are either true or false. Passing the tests
confirms correct behavior as developers evolve and refactor the code. Developers often use testing frameworks, such
as xUnit, to create and automatically run sets of test cases.
Test-driven development cycle
A graphical representation of the development cycle, using a basic flowchart (figure not reproduced in this text version).
The following sequence is based on the book Test-Driven Development by Example.[1]
1. Add a test
In test-driven development, each new
feature begins with writing a test. This
test must inevitably fail because it is
written before the feature has been
implemented. (If it does not fail, then
either the proposed "new" feature
already exists or the test is defective.)
To write a test, the developer must
clearly understand the feature's
specification and requirements. The
developer can accomplish this through
use cases and user stories that cover the requirements and exception conditions. This could also imply a variant, or
modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests
after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but
important difference.
2. Run all tests and see if the new one fails
This validates that the test harness is working correctly and that the new test does not mistakenly pass without
requiring any new code. This step also tests the test itself, in the negative: it rules out the possibility that the new test
will always pass, and therefore be worthless. The new test should also fail for the expected reason. This increases
confidence (although it does not entirely guarantee) that it is testing the right thing, and will pass only in intended
cases.
3. Write some code
The next step is to write some code that will cause the test to pass. The new code written at this stage will not be
perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve
and hone it.
It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality
should be predicted and 'allowed for' at any stage.
4. Run the automated tests and see them succeed
If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a
good point from which to begin the final step of the cycle.
5. Refactor code
Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that
refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of
any software design. In this case, however, it also applies to removing any duplication between the test code and the
production code - for example magic numbers or strings that were repeated in both, in order to make the test pass
in step 3.
Repeat
Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints. When using external libraries it is important not to make increments that are so small as to be effectively merely testing the library itself,[3] unless there is some reason to believe that the library is buggy or is not sufficiently feature-complete to serve all the needs of the main program being written.
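The whole cycle can be sketched in Python's standard unittest framework. The fizzbuzz example and the test names are illustrative assumptions chosen for brevity, not taken from the article; the comments map the code to the numbered steps above:

```python
import unittest

# Steps 1-2 (red): these tests were written first and initially
# failed, because fizzbuzz did not yet exist.
# Step 3 (green): the simplest passing code was written.
# Step 5 (refactor): the code below is the cleaned-up form; an
# earlier "green" version might have hard-coded the test inputs.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")
```

Re-running "python -m unittest <file>" after each small edit (step 4) confirms that the refactoring has not broken any behavior.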
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You ain't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can be cleaner and clearer than is often achieved by other methods.[1] In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept (such as a design pattern), tests are written that will generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first, but it allows the developer to focus only on what is important.
Write the tests first. The tests should be written before the functionality that is being tested. This has been claimed
to have two benefits. It helps ensure that the application is written for testability, as the developers must consider
how to test the application from the outset, rather than worrying about it later. It also ensures that tests for every
feature will be written. When writing feature-first code, there is a tendency by developers and the development
organisations to push the developer onto the next feature, neglecting testing entirely.
First fail the test cases. The idea is to ensure that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has been coined the "test-driven development mantra", known as red/green/refactor, where red means fail and green means pass.
Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring.
Receiving the expected test results at each stage reinforces the programmer's mental model of the code, boosts
confidence and increases productivity.
Advanced practices of test-driven development can lead to acceptance test-driven development (ATDD), where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process.[5] This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target to satisfy, the acceptance tests, which keeps them continuously focused on what the customer really wants from that user story.
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive.[6] Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.[7]
Programmers using pure TDD on new ("greenfield") projects report they only rarely feel the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.[8][9]
Test-driven development offers more than just simple validation of correctness; it can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality will be used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract, as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the
task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered
initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development
ensures in this way that all written code is covered by at least one test. This gives the programming team, and
subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time is typically shorter.[10]
Large numbers of tests help to limit the number of defects in the code.
The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them
from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and
tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the
methodology requires that the developers think of the software in terms of small units that can be written and tested
independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner
interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code
because this pattern requires that the code be written so that modules can be switched easily between mock versions
for unit testing and "real" versions for deployment.
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code
path. For example, in order for a TDD developer to add an else branch to an existing if statement, the developer
would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from
TDD tend to be very thorough: they will detect any unexpected changes in the code's behaviour. This detects
problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
Vulnerabilities
Test-driven development is difficult to use in situations where full functional tests are required to determine
success or failure. Examples of these are user interfaces, programs that work with databases, and some that
depend on specific network configurations. TDD encourages developers to put the minimum amount of code into
such modules and to maximise the logic that is in testable library code, using fakes and mocks to represent the
outside world.
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.[11]
The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones
that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. There is
a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not
be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings,
and this should be a goal during the 'Refactor' phase described above.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later
date. Therefore these original tests become increasingly precious as time goes by. If a poor architecture, a poor
design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that
they are individually fixed. Merely deleting, disabling or rashly altering them can lead to un-detectable holes in
the test coverage.
Unexpected gaps in test coverage may exist or occur for a number of reasons. Perhaps one or more developers in
a team was not so committed to the TDD strategy and did not write tests properly, perhaps some sets of tests have
been invalidated, deleted or disabled accidentally or on purpose during later work. If this happens, the confidence
that a large set of TDD tests lend to further fixes and refactorings will actually be misplaced. Alterations may be
made that result in no test failures when in fact bugs are being introduced and remaining undetected.
Unit tests created in a test-driven development environment are typically created by the developer who will also
write the code that is being tested. The tests may therefore share the same blind spots with the code: If, for
example, a developer does not realize that certain input parameters must be checked, most likely neither the test
nor the code will verify these input parameters. If the developer misinterprets the requirements specification for
the module being developed, both the tests and the code will be wrong.
The high number of passing unit tests may bring a false sense of security, resulting in fewer additional QA
activities, such as integration testing and compliance testing.
Code Visibility
Test-suite code clearly has to be able to access the code it is testing. On the other hand normal design criteria such as
information hiding, encapsulation and the separation of concerns should not be compromised. Therefore unit test
code for TDD is usually written within the same project or module as the code being tested.
In object-oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access fields that are marked private.[12] Alternatively, an inner class can be used to hold the unit tests so they will have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
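As a rough Python analogue of the reflection techniques described above, a test can reach past name mangling to inspect a "private" attribute. The Account class and its fields are hypothetical, invented for illustration:

```python
import unittest

class Account:
    def __init__(self, opening_balance):
        # Double underscore makes the field "private" via name mangling.
        self.__balance = opening_balance

    def deposit(self, amount):
        self.__balance += amount

    def report(self):
        return f"balance={self.__balance}"

class AccountTest(unittest.TestCase):
    def test_deposit_updates_private_balance(self):
        acct = Account(100)
        acct.deposit(25)
        # Bypassing name mangling, analogous to using reflection in
        # Java to read a private field. Such access ties the test to
        # implementation details, illustrating the trade-off debated below.
        self.assertEqual(acct._Account__balance, 125)

    def test_public_interface_only(self):
        acct = Account(100)
        acct.deposit(25)
        # The alternative: assert only through the public interface.
        self.assertEqual(acct.report(), "balance=125")
```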
It is important that such testing hacks do not remain in the production code. In C# and other languages, compiler
directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other test-related
code to prevent them being compiled into the released code. This then means that the released code is not exactly the
same as that which is unit tested. The regular running of fewer but more comprehensive, end-to-end, integration tests
on the final release build can then ensure (among other things) that no production code exists that subtly relies on
aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private and protected methods and data anyway. Some argue that it should be sufficient to test any class through its public interface, as the private members are a mere implementation detail that may change, and should be allowed to do so without breaking numbers of tests. Others say that crucial aspects of functionality may be implemented in private methods, and that developing this while testing it indirectly via the public interface only obscures the issue: unit testing is about testing the smallest unit of functionality possible.[13][14]
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. Whether a module of code has hundreds of unit tests
or only five is irrelevant. A test suite for use in TDD should never cross process boundaries in a program, let alone
network connections. Doing so introduces delays that make tests run slowly and discourage developers from running
the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If
one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause
of the failure.
When code under development relies on a database, a Web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code.[15] Two steps are necessary:
1. Whenever external access is going to be needed in the final design, an interface should be defined that describes
the access that will be available. See the dependency inversion principle for a discussion of the benefits of doing
this regardless of TDD.
2. The interface should be implemented in two ways, one of which really accesses the external process, and the
other of which is a fake or mock. Fake objects need do little more than add a message such as "Person object
saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in
that they themselves contain test assertions that can make the test fail, for example, if the person's name and other
data are not as expected. Fake and mock object methods that return data, ostensibly from a data store or user, can
help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into
predefined fault modes so that error-handling routines can be developed and reliably tested. Fake services other
than data stores may also be useful in TDD: Fake encryption services may not, in fact, encrypt the data passed;
fake random number services may always return 1. Fake or mock implementations are examples of dependency
injection.
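The two steps above can be sketched as follows: an interface for external access, a fake implementation that logs activity (mirroring the "Person object saved" example), and a class under test that receives its dependency by injection. All names here are illustrative assumptions:

```python
import unittest

class PersonStore:
    """Step 1: an interface describing the external access needed."""
    def save(self, person):
        raise NotImplementedError

class FakePersonStore(PersonStore):
    """Step 2: the fake does little more than record a trace message
    that a test assertion can later check."""
    def __init__(self):
        self.log = []

    def save(self, person):
        self.log.append(f"Person object saved: {person}")

class Registrar:
    """Code under test. The store is injected rather than constructed
    internally, so tests never touch a real database."""
    def __init__(self, store):
        self.store = store

    def register(self, name):
        self.store.save(name)

class RegistrarTest(unittest.TestCase):
    def test_registration_saves_person(self):
        fake = FakePersonStore()
        Registrar(fake).register("Ada")
        # The assertion lives in the test; a mock object would carry
        # such expectations inside itself instead.
        self.assertEqual(fake.log, ["Person object saved: Ada"])
```

A production implementation of PersonStore would really access the database, and would be exercised only by the separate integration tests discussed below.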
A corollary of such dependency injection is that the actual database or other external-access code is never tested by
the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven
code with the "real" implementations of the interfaces discussed above. These tests are quite separate from the TDD
unit tests, and are really integration tests. There will be fewer of them, and they need to be run less often than the
unit tests. They can nonetheless be implemented using the same testing framework, such as xUnit.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of
the initial and final state of the files or database, even if any test fails. This is often achieved using some combination
of the following techniques:
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures where available.
Database transactions where a transaction atomically includes perhaps a write, a read and a matching delete
operation.
Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run.
This may be automated using a framework such as Ant or NAnt or a continuous integration system such as
CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant
where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before
detailed diagnosis can be performed.
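Two of the techniques above, the TearDown method and initialising to a clean state before each test, can be sketched with unittest and an in-memory SQLite database. The table and test data are illustrative assumptions:

```python
import sqlite3
import unittest

class PersistentStoreTest(unittest.TestCase):
    def setUp(self):
        # Initialise the database to a known clean state before each
        # test, rather than relying only on cleanup afterwards.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE person (name TEXT)")

    def tearDown(self):
        # Runs even if the test body fails, restoring the environment.
        self.db.close()

    def test_insert_is_visible(self):
        self.db.execute("INSERT INTO person VALUES ('Ada')")
        rows = self.db.execute("SELECT name FROM person").fetchall()
        self.assertEqual(rows, [("Ada",)])
```

Because each test gets a fresh database in setUp, a failing test leaves its final state available for diagnosis without corrupting the next test run.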
Frameworks such as Moq, jMock, NMock, EasyMock, Typemock, jMockit, Unitils, Mockito, Mockachino,
PowerMock or Rhino Mocks exist to make the process of creating and using complex mock objects easier.
See also
Behavior driven development
Design by contract
List of software development philosophies
List of unit testing frameworks
Mock object
Software testing
Test case
Unit testing
External links
TestDrivenDevelopment on WikiWikiWeb
Test or spec? Test and spec? Test from spec! [16], by Bertrand Meyer (September 2004)
Microsoft Visual Studio Team Test from a TDD approach [17]
Write Maintainable Unit Tests That Will Save You Time And Tears [18]
Improving Application Quality Using Test-Driven Development (TDD) [19]
References
[1] Beck, K. Test-Driven Development by Example, Addison Wesley, 2003
[2] "Extreme Programming", Computerworld (online), December 2001, webpage: Computerworld-appdev-92 (http://www.computerworld.com/softwaretopics/software/appdev/story/0,10801,66192,00.html).
[3] Newkirk, JW and Vorontsov, AA. Test-Driven Development in Microsoft .NET, Microsoft Press, 2004.
[4] Feathers, M. Working Effectively with Legacy Code, Prentice Hall, 2004
[5] Koskela, L. "Test Driven: TDD and Acceptance TDD for Java Developers", Manning Publications, 2007
[6] Erdogmus, Hakan; Morisio, Torchiano. "On the Effectiveness of Test-first Approach to Programming" (http://iit-iti.nrc-cnrc.gc.ca/publications/nrc-47445_e.html). Proceedings of the IEEE Transactions on Software Engineering, 31(1). January 2005. (NRC 47445). Retrieved 2008-01-14. "We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive."
[7] Proffitt, Jacob. "TDD Proven Effective! Or is it?" (http://theruntime.com/blogs/jacob/archive/2008/01/22/tdd-proven-effective-or-is-it.aspx). Retrieved 2008-02-21. "So TDD's relationship to quality is problematic at best. Its relationship to productivity is more interesting. I hope there's a follow-up study because the productivity numbers simply don't add up very well to me. There is an undeniable correlation between productivity and the number of tests, but that correlation is actually stronger in the non-TDD group (which had a single outlier compared to roughly half of the TDD group being outside the 95% band)."
[8] Clark, Mike. "Test-Driven Development with JUnit Workshop" (http://clarkware.com/courses/TDDWithJUnit.html). Clarkware Consulting, Inc. Retrieved 2007-11-01. "In fact, test-driven development actually helps you meet your deadlines by eliminating debugging time, minimizing design speculation and re-work, and reducing the cost and fear of changing working code."
[9] Llopis, Noel (20 February 2005). "Stepping Through the Looking Glass: Test-Driven Game Development (Part 1)" (http://www.gamesfromwithin.com/articles/0502/000073.html). Games from Within. Retrieved 2007-11-01. "Comparing [TDD] to the non-test-driven development approach, you're replacing all the mental checking and debugger stepping with code that verifies that your program does exactly what you intended it to do."
[10] Müller, Matthias M.; Padberg, Frank. "About the Return on Investment of Test-Driven Development" (http://www.ipd.uka.de/mitarbeiter/muellerm/publications/edser03.pdf) (PDF). Universität Karlsruhe, Germany. pp. 6. Retrieved 2007-11-01.
[11] Loughran, Steve (6 November 2006). "Testing" (http://people.apache.org/~stevel/slides/testing.pdf) (PDF). HP Laboratories. Retrieved 2009-08-12.
[12] Burton, Ross (2003-11-12). "Subverting Java Access Protection for Unit Testing" (http://www.onjava.com/pub/a/onjava/2003/11/12/reflection.html). O'Reilly Media, Inc. Retrieved 2009-08-12.
[13] Newkirk, James (7 June 2004). "Testing Private Methods/Member Variables - Should you or shouldn't you" (http://blogs.msdn.com/jamesnewkirk/archive/2004/06/07/150361.aspx). Microsoft Corporation. Retrieved 2009-08-12.
[14] Stall, Tim (1 March 2005). "How to Test Private and Protected methods in .NET" (http://www.codeproject.com/KB/cs/testnonpublicmembers.aspx). CodeProject. Retrieved 2009-08-12.
[15] Fowler, Martin (1999). Refactoring - Improving the Design of Existing Code. Boston: Addison Wesley Longman, Inc. ISBN 0-201-48567-2.
[16] http://www.eiffel.com/general/monthly_column/2004/september.html
[17] http://msdn.microsoft.com/en-us/library/ms379625(VS.80).aspx
[18] http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
[19] http://www.methodsandtools.com/archive/archive.php?id=20
Behavior Driven Development
Behavior driven development (or BDD) is an agile software development technique that encourages collaboration between developers, QA and non-technical or business participants in a software project. It was originally named in 2003 by Dan North [1] as a response to Test Driven Development, including Acceptance Test or Customer Test Driven Development practices as found in Extreme Programming. It has evolved over the last few years. [2]
On the "Agile Specifications, BDD and Testing eXchange" in November 2009 in London, Dan North
[3]
gave the
following definition of BDD:
BDD is a second-generation, outside-in, pull-based, multiple-stakeholder, multiple-scale,
high-automation, agile methodology. It describes a cycle of interactions with well-defined outputs,
resulting in the delivery of working, tested software that matters.
BDD focuses on obtaining a clear understanding of desired software behaviour through discussion with stakeholders.
It extends TDD by writing test cases in a natural language that non-programmers can read. Behavior-driven
developers use their native language in combination with the ubiquitous language of domain driven design to
describe the purpose and benefit of their code. This allows the developers to focus on why the code should be
created, rather than the technical details, and minimizes translation between the technical language in which the code
is written and the domain language spoken by the business, users, stakeholders, project management, etc.
Dan North created the first ever BDD framework, JBehave [1], followed by a story-level BDD framework for Ruby called RBehave [4] which was later integrated into the RSpec project [5]. He also worked with David Chelimsky, Aslak Hellesøy and others to develop RSpec and also to write "The RSpec Book: Behaviour Driven Development with RSpec, Cucumber, and Friends". The first story-based framework in RSpec was later replaced by Cucumber [6], mainly developed by Aslak Hellesøy.
In 2008, Chris Matts, who was involved in the first discussions around BDD, came up with the idea of Feature Injection [7], allowing BDD to cover the analysis space and provide a full treatment of the software lifecycle from vision through to code and release.
BDD practices
The practices of BDD include:
Establishing the goals of different stakeholders required for a vision to be implemented
Drawing out features which will achieve those goals using feature injection
Involving stakeholders in the implementation process through outside-in software development
Using examples to describe the behavior of the application, or of units of code
Automating those examples to provide quick feedback and regression testing
Using 'should' when describing the behavior of software to help clarify responsibility and allow the software's
functionality to be questioned
Using 'ensure' when describing responsibilities of software to differentiate outcomes in the scope of the code in
question from side-effects of other elements of code.
Using mocks to stand-in for collaborating modules of code which have not yet been written
Feature injection
A company may have several visions which will deliver value to the business, usually by making money, saving
money or protecting money. Once a vision is identified by a group as being the best vision for the conditions, they
will need additional help to make the vision a success.
The primary stakeholders who have identified the vision then bring in incidental stakeholders. Each stakeholder
defines the goals they need to achieve in order for the vision to be successful. For example, a legal department might
ask for certain regulatory requirements to be met. The head of marketing might want to engage the community who
will be using the software. A security expert will make sure that the software won't be vulnerable to SQL injection
attacks.
From these goals, broad themes or feature sets are defined which will achieve them; for instance, "allow users to
rank contributions" or "audit transactions".
From these themes, user features and the first details of the user interface can be established.
Outside-in
BDD is driven by business value [8]; that is, the benefit to the business which accrues once the application is in production. The only way in which this benefit can be realized is through the user interface(s) to the application, usually (but not always) a GUI.
In the same way, each piece of code, starting with the UI, can be considered a stakeholder of the other modules of
code which it uses. Each element of code provides some aspect of behavior which, in collaboration with the other
elements, provides the application behavior.
The first piece of production code that BDD developers implement is the UI. Developers can then benefit from quick
feedback as to whether the UI looks and behaves appropriately. Through code, and using principles of good design
and refactoring, developers discover collaborators of the UI, and of every unit of code thereafter. This helps them
adhere to the principle of YAGNI, since each piece of production code is required either by the business, or by
another piece of code already written.
Application examples
The requirements of a retail application might be, "Refunded or replaced items should be returned to stock."
In BDD, a developer or QA might clarify the requirements by breaking this down into specific examples, e.g.
Scenario 1: Refunded items should be returned to stock
Given a customer buys a black jumper
and I have three black jumpers left in stock
when he returns the jumper for a refund
then I should have four black jumpers in stock
Scenario 2: Replaced items should be returned to stock
Given that a customer buys a blue garment
and I have two blue garments in stock
and three black garments in stock
when he returns the garment for a replacement in black
then I should have three blue garments in stock
and two black garments in stock
Each scenario is an exemplar, designed to illustrate a specific aspect of behavior of the application.
When discussing the scenarios, participants question whether the outcomes described always result from those events occurring in the given context. This can help to uncover further scenarios which clarify the requirements. [9]
For instance, a domain expert noticing that refunded items are not always returned to stock might reword the
requirements as "Refunded or replaced items should be returned to stock unless faulty."
This in turn helps participants to pin down the scope of requirements, which leads to better estimates of how long
those requirements will take to implement.
The words Given, When and Then are often used to help drive out the scenarios, but are not mandated.
These scenarios can also be automated, if an appropriate tool exists to allow automation at the UI level. If no such tool exists then it may be possible to automate at the next level in, i.e. if an MVC design pattern has been used, at the level of the Controller.
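Below the UI level, Scenario 1 above might be automated directly against the domain code. The sketch below uses Python and a hypothetical Stock class (invented for illustration; BDD does not mandate any of these names), mapping the Given/When/Then steps onto test code:

```python
class Stock:
    """Hypothetical domain object holding item counts."""
    def __init__(self, counts):
        self.counts = dict(counts)

    def sell(self, item):
        self.counts[item] -= 1

    def accept_return(self, item):
        self.counts[item] += 1

def scenario_refunded_items_are_returned_to_stock():
    # Given a customer buys a black jumper
    # and I have three black jumpers left in stock
    stock = Stock({"black jumper": 4})
    stock.sell("black jumper")
    assert stock.counts["black jumper"] == 3
    # When he returns the jumper for a refund
    stock.accept_return("black jumper")
    # Then I should have four black jumpers in stock
    assert stock.counts["black jumper"] == 4

scenario_refunded_items_are_returned_to_stock()
print("Scenario 1 passed")
```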
Unit-level examples and behavior
The same principles of examples, using contexts, events and outcomes are used to drive development at a unit level.
For instance, the following examples describe an aspect of behavior of a list:
Example 1: New lists are empty
Given a new list
Then the list should be empty.
Example 2: Lists with things in them are not empty.
Given a new list
When we add an object
Then the list should not be empty.
Both these examples are required to describe the behavior of the list.isEmpty() method, and to derive the benefit of the method. These examples are usually automated using TDD frameworks. In BDD these examples are often encapsulated in a single method, with the name of the method being a complete description of the behavior. Both examples are required for the code to be valuable, and encapsulating them in this way makes it easy to question, remove or change the behaviour.
For instance, using Java and JUnit 4, the above examples might become:
public class ListTest {
    @Test
    public void shouldKnowWhetherItIsEmpty() {
        List list1 = new List();
        assertTrue(list1.isEmpty());
        List list2 = new List();
        list2.add(new Object());
        assertFalse(list2.isEmpty());
    }
}
Other practitioners, particularly in the Ruby community, prefer to split these into two separate examples, based on separate contexts for when the list is empty or has items in it. This technique is based on Dave Astels' practice, "One assertion per test". [10]
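In that style, the combined list example splits into two test classes, one per context. This is sketched here with Python's built-in list and standard-library unittest rather than a Ruby tool:

```python
import unittest

class NewListBehaviour(unittest.TestCase):
    # Context: a freshly created list.
    def setUp(self):
        self.items = []

    def test_should_be_empty(self):
        self.assertEqual(len(self.items), 0)

class ListWithAnItemBehaviour(unittest.TestCase):
    # Context: a list to which one object has been added.
    def setUp(self):
        self.items = [object()]

    def test_should_not_be_empty(self):
        self.assertNotEqual(len(self.items), 0)

loader = unittest.defaultTestLoader
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(NewListBehaviour),
    loader.loadTestsFromTestCase(ListWithAnItemBehaviour),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test now makes a single assertion in a single context, so a failure points directly at the behaviour that broke.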
Sometimes the difference between the context, events and outcomes is made more explicit. For instance:
public class WindowControlBehavior {
    @Test
    public void shouldCloseWindows() {
        // Given
        WindowControl control = new WindowControl("My AFrame");
        AFrame frame = new AFrame();
        // When
        control.closeWindow();
        // Then
        ensureThat(!frame.isShowing());
    }
}
However the example is phrased, the effect describes the behavior of the code in question. For instance, from the
examples above one can derive:
List should know when it is empty
WindowControl should close windows
The description is intended to be useful if the test fails, and to provide documentation of the code's behavior. Once
the examples have been written they are then run and the code implemented to make them work in the same way as
TDD. The examples then become part of the suite of regression tests.
Using mocks
BDD proponents claim that the use of "should" and "ensureThat" in BDD examples encourages developers to
question whether the responsibilities they're assigning to their classes are appropriate, or whether they can be
delegated or moved to another class entirely. Practitioners use an object which is simpler than the collaborating code,
and provides the same interface but more predictable behavior. This is injected into the code which needs it, and
examples of that code's behavior are written using this object instead of the production version.
These objects can either be created by hand, or created using a mocking framework such as Mockito, Moq, NMock,
Rhino Mocks, JMock or EasyMock.
Questioning responsibilities in this way, and using mocks to fulfill the required roles of collaborating classes,
encourages the use of Role-based Interfaces. It also helps to keep the classes small and loosely coupled.
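A hand-written stand-in can be as simple as the following Python sketch, which adapts the names from the WindowControl example above (all the classes here are invented for illustration). Because the collaborator is injected through the constructor, the fake and a production frame are interchangeable:

```python
class FakeFrame:
    """Hand-rolled stand-in: same interface as a real GUI frame,
    but simpler and with predictable behaviour."""
    def __init__(self):
        self.showing = True

    def dispose(self):
        self.showing = False

    def is_showing(self):
        return self.showing

class WindowControl:
    # Code under test: the frame collaborator is injected, not created here,
    # which is what makes substituting a fake possible.
    def __init__(self, frame):
        self._frame = frame

    def close_window(self):
        self._frame.dispose()

frame = FakeFrame()
WindowControl(frame).close_window()
print(frame.is_showing())  # False
```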
Tools
JBee [11] - Java
ASSpec [12] - ActionScript 3
Aero [13] - PHP 5
Aubergine [14] - .NET
BDoc [15] - Extracting documentation from unit tests, supporting behavior driven development
BDD in Python [16] via the core module doctest [17]
Bumblebee [18] - Extract documentation from JUnit tests with support for adding text, code-snippets, screenshots and more. Puts focus on the end-user.
beanSpec [19] - Java
CppSpec [20] - C++
cfSpec [21] - ColdFusion
CSpec [22] - C
dSpec [23] - Delphi
Concordion [24] - a Java automated testing tool for BDD that uses plain English to describe behaviors.
Cucumber [6] - Plain text + Ruby. Works against Java, .NET, Ruby, Flex or any web application via Watir or Selenium.
easyb [25] - Groovy/Java
EasySpec [26] - Groovy, usable in Java. Developer also working on Perception [27], a tool for doing Context/Specification reporting for many different tools.
FitNesse [28] - Java, .NET, C++, Delphi, Python, Ruby, Smalltalk, Perl. Now supports BDD directly with plain text tables and scenarios.
Freshen [29] - Python - clone of the Cucumber BDD framework
GivWenZen [30] - Java and FitNesse
GivWenZen for Flex and ActionScript3 [31] - Flex cousin of Java GivWenZen
GSpec [32] - Groovy
Igloo [33] - C++
Instinct [34] - Java
Jasmine [35] - JavaScript - framework-independent BDD with easy CI integration
JavaStubs [36] - Java - BDD framework supporting partial-mocking/method stubbing
JBehave [37] - Java
JDave [38] - Java
JFXtras Test [39] - JavaFX
JSpec [40] - JavaScript - BDD framework independent, async support, multiple reporters (terminal, dom, server, console, etc.), Rhino support, over 50 matchers and much more
JSSpec [41] - JavaScript
Morelia viridis [42] - Cucumber clone for Python
MSpec [43] - .NET
NBehave [44] - .NET
NSpec [45] - .NET
NUnit [46] - A TDD framework in .NET which can be used for BDD examples and scenarios
ObjectiveMatchy [47] - iPhone - A Matcher System for iPhone development.
Pyccuracy [48] - Behavior-driven framework in Python.
Pyhistorian [49] - General purpose BDD Story Runner in Python (internal DSL, not plain-text)
PyCukes [50] - Cucumber-like BDD tool built on top of Pyhistorian
Robot Framework [51] - Generic keyword-driven test automation framework for acceptance level testing and acceptance test-driven development (ATDD) written in Python
RSpec [52] - Ruby
Spock [53] - Spock is a testing and specification framework for Java and Groovy
SpecFlow [54] - SpecFlow is inspired by Cucumber and the community around it. Binding business requirements to .NET code
screw-unit [55] - JavaScript
ScalaTest [56] - Scala
specs [57] - Scala
spec-cpp [58] - C++
Spectacular [59] - Open source BDD and ATDD tool incorporating several types of tests in a single document; introduces Executable Use Cases
Specter [60] - Another implementation of BDD framework in .NET with focus on specification readability
StoryQ [61] - .NET 3.5, can be integrated with NUnit to provide both specification readability and testing
tspec [62] - Groovy/Java (Thai syntax)
Twist [63] - Commercial Eclipse-based tool for creating executable specifications
Vows [64] - JavaScript
XSpec [65] - XPath, XSLT and XQuery
External links
Dan North's article introducing BDD [66]
Introduction to Behavior Driven Development [67]
Behavior Driven Development Using Ruby (Part 1) [68]
Behavior-Driven Development Using Ruby (Part 2) [69]
In pursuit of code quality: Adventures in behavior-driven development, by Andrew Glover [70]
Behavior Driven Database Development, by Pramodkumar Sadalage [71]
The RSpec Book: Behaviour Driven Development with RSpec, Cucumber, and Friends [72]
Good Test, Better Code - From Unit Testing to Behavior-Driven Development [73]
BDD in practice [74]
BDD section in the Software Development Tools Directory [75]
References
[1] D. North, Introducing Behaviour Driven Development (http://dannorth.net/introducing-bdd)
[2] D. North, comments, The RSpec Book - Question about Chapter 11: Writing software that matters (http://forums.pragprog.com/forums/95/topics/3035)
[3] Dan North: How to sell BDD to the business (http://skillsmatter.com/podcast/java-jee/how-to-sell-bdd-to-the-business)
[4] D. North, Introducing RBehave (http://dannorth.net/2007/06/introducing-rbehave)
[5] S. Miller, InfoQ: RSpec incorporates RBehave (http://www.infoq.com/news/2007/10/RSpec-incorporates-RBehave)
[6] http://cukes.info/
[7] Chris Matts, Feature Injection (http://picasaweb.google.co.uk/chris.matts/FeatureInjection#)
[8] E. Keogh, BDD - TDD done well? (http://lizkeogh.com/2007/06/13/bdd-tdd-done-well/)
[9] D. North, What's in a Story (http://dannorth.net/whats-in-a-story)
[10] D. Astels, One assertion per test (http://techblog.daveastels.com/tag/bdd/)
[11] http://sites.google.com/site/jbeetest/
[12] http://www.gointeractive.se/articles/asspec.html
[13] http://code.google.com/p/aero-php/
[14] http://wiki.github.com/ToJans/Aubergine/
[15] http://bdoc.googlecode.com
[16] http://blog.ianbicking.org/behavior-driven-programming.html
[17] http://python.org/doc/current/lib/module-doctest.html
[18] http://agical.com/bumblebee/bumblebee_doc.html
[19] http://sourceforge.net/projects/beanspec
[20] http://www.laughingpanda.org/projects/cppspec
[21] http://cfspec.riaforge.org/
[22] http://github.com/visionmedia/cspec
[23] http://sourceforge.net/projects/dspec/
[24] http://www.concordion.org
[25] http://www.easyb.org/
[26] http://code.google.com/p/easyspec/
[27] http://code.google.com/p/perception/
[28] http://www.fitnesse.org
[29] http://github.com/rlisagor/freshen
[30] http://code.google.com/p/givwenzen/
[31] http://bitbucket.org/loomis/givwenzen-flex
[32] http://groovy.codehaus.org/Using+GSpec+with+Groovy
[33] http://igloo-testing.org/
[34] http://code.google.com/p/instinct/
[35] http://pivotal.github.com/jasmine
[36] http://javastubs.sourceforge.net/
[37] http://jbehave.org/
[38] http://www.jdave.org/
[39] http://code.google.com/p/jfxtras/
[40] http://jspec.info
[41] http://jania.pe.kr/aw/moin.cgi/JSSpec
[42] http://c2.com/cgi/wiki?MoreliaViridis
[43] http://github.com/machine/machine.specifications
[44] http://nbehave.googlecode.com/
[45] http://nspec.tigris.org/
[46] http://www.nunit.org/index.php?p=home
[47] http://github.com/mhennemeyer/objectivematchy
[48] http://www.pyccuracy.org/
[49] http://github.com/hugobr/pyhistorian
[50] http://github.com/hugobr/pycukes
[51] http://code.google.com/p/robotframework/
[52] http://rspec.info/
[53] http://code.google.com/p/spock/
[54] http://specflow.org/
[55] http://github.com/nkallen/screw-unit
[56] http://www.scalatest.org/
[57] http://code.google.com/p/specs/
[58] http://deanberris.com/spec-cpp
[59] http://spectacular.googlecode.com/
[60] http://specter.sf.net/
[61] http://www.codeplex.com/StoryQ/
[62] http://github.com/chanwit/tspec/tree/master
[63] http://studios.thoughtworks.com/twist
[64] http://vowsjs.org/
[65] http://xspec.googlecode.com/
[66] http://dannorth.net/introducing-bdd
[67] http://behavior-driven.org/
[68] http://www.oreillynet.com/pub/a/ruby/2007/08/09/behavior-driven-development-using-ruby-part-1.html
[69] http://www.oreillynet.com/pub/a/ruby/2007/08/30/behavior-driven-development-using-ruby-part-2.html
[70] http://www.ibm.com/developerworks/java/library/j-cq09187/index.html
[71] http://www.methodsandtools.com/archive/archive.php?id=78
[72] http://www.pragprog.com/titles/achbd/the-rspec-book
[73] http://www.tvagile.com/2009/08/13/good-test-better-code-from-unit-testing-to-behavior-driven-development/
[74] http://humanmatters.tumblr.com/post/393569612/whyistartedusingbehaviordrivendevelopmentatwork
[75] http://www.softdevtools.com/modules/weblinks/viewcat.php?cid=128
Acceptance test
In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. [1] It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing.
In software development, acceptance testing by the system provider is often distinguished from acceptance testing
by the customer (the user or client) prior to accepting transfer of ownership. In such environments, acceptance
testing performed by the customer is known as user acceptance testing (UAT). This is also known as end-user
testing, site (acceptance) testing, or field (acceptance) testing.
A smoke test is used as an acceptance test prior to introducing a build to the main testing process.
Overview
Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known
as a case, exercises a particular operating condition of the user's environment or feature of the system, and will result
in a pass or fail, or boolean, outcome. There is generally no degree of success or failure. The test environment is
usually designed to be identical, or as close as possible, to the anticipated user's environment, including extremes of
such. These test cases must each be accompanied by test case input data or a formal description of the operational activities (or both) to be performed, intended to thoroughly exercise the specific case, and a formal description of the expected results.
Acceptance Tests/Criteria (in Agile Software Development) are usually created by business customers and expressed in a business domain language. These are high-level tests of the completeness of a user story or stories 'played' during any sprint/iteration. These tests are ideally created through collaboration between business customers, business analysts, testers and developers; however, the business customers (product owners) are the primary owners of these tests. As user stories pass their acceptance criteria, the business owners can be sure that the developers are progressing in the right direction, i.e. that the application works as it was envisaged to work, so it is essential that these tests include both business logic tests and UI validation elements (if need be).
Acceptance test cards are ideally created during the sprint or iteration planning meeting, before development begins, so that the developers have a clear idea of what to develop. Sometimes, due to poor planning, acceptance tests may span multiple stories (that are not implemented in the same sprint), and there are different ways to test them out during actual sprints. One popular technique is to mock external interfaces or data to mimic other stories which might not be played out during an iteration (as those stories may have been relatively lower business priority). A user story is not considered complete until the acceptance tests have passed.
Process
The acceptance test suite is run against the supplied input data or using an acceptance test script to direct the testers.
Then the results obtained are compared with the expected results. If there is a correct match for every case, the test
suite is said to pass. If not, the system may either be rejected or accepted on conditions previously agreed between
the sponsor and the manufacturer.
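A data-driven sketch of that comparison step in Python (the function under test and the cases are invented for illustration; the point is only the rule that every case must match for the suite to pass):

```python
def price_after_discount(price, percent):
    # Hypothetical system under acceptance test.
    return round(price * (100 - percent) / 100, 2)

# Each case: description, input data, expected result.
cases = [
    ("10% off 100.00", (100.00, 10), 90.00),
    ("no discount",    (50.00, 0),   50.00),
    ("25% off 8.00",   (8.00, 25),   6.00),
]

failures = [(desc, expected, price_after_discount(*args))
            for desc, args, expected in cases
            if price_after_discount(*args) != expected]

# The suite passes only if there is a correct match for every case.
suite_passed = not failures
print(suite_passed)  # True
```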
The objective is to provide confidence that the delivered system meets the business requirements of both sponsors
and users. The acceptance phase may also act as the final quality gateway, where any quality defects not previously
detected may be uncovered.
A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional
(contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the
contract (previously agreed between sponsor and manufacturer), and deliver final payment.
User acceptance testing
User Acceptance Testing (UAT) is a process to obtain confirmation by a Subject Matter Expert (SME), preferably
the owner or client of the object under test, through trial or review, that a system meets mutually agreed-upon
requirements. In software development, UAT is one of the final stages of a project and often occurs before a client or
customer accepts the new system.
Users of the system perform these tests, which developers derive from the client's contract or the user requirements
specification.
Test designers draw up formal tests and devise a range of severity levels. It is preferable that the designer of the user acceptance tests not be the creator of the formal integration and system test cases for the same system; however, there are some situations where this cannot be avoided. The UAT acts as a final verification of the required business
function and proper functioning of the system, emulating real-world usage conditions on behalf of the paying client
or a specific large customer. If the software works as intended and without issues during normal use, one can
reasonably infer the same level of stability in production. These tests, which are usually performed by clients or
end-users, are not usually focused on identifying simple problems such as spelling errors and cosmetic problems, nor
show stopper defects, such as software crashes; testers and developers previously identify and fix these issues during
earlier unit testing, integration testing, and system testing phases.
The results of these tests give confidence to the clients as to how the system will perform in production. There may
also be legal or contractual requirement for acceptance of the system.
Q-UAT - Quantified User Acceptance Testing
Quantified User Acceptance Testing (Q-UAT or, more simply, the Quantified Approach) is a revised Business
Acceptance Testing process which aims to provide a smarter and faster alternative to the traditional UAT phase.
Depth-testing is carried out against Business Requirements only at specific planned points in the application or service under test. A reliance on better-quality code delivery from the Development/Build phase is assumed, and a complete understanding of the appropriate Business Process is a prerequisite. This methodology, if carried out correctly, results in a quick turnaround against plan; a decreased number of test scenarios which are more complex and wider in breadth than traditional UAT; and ultimately an equivalent confidence level attained via a shorter delivery window, allowing products and changes to be brought to market more quickly.
The Approach is based on a 'gated' 3-dimensional model, the key concepts of which are:
Linear Testing (LT, the 1st dimension)
Recursive Testing (RT, the 2nd dimension)
Adaptive Testing (AT, the 3rd dimension).
The four 'gates' which conjoin and support the 3-dimensional model act as quality safeguards and include
contemporary testing concepts such as:
Internal Consistency Checks (ICS)
Major Systems/Services Checks (MSC)
Realtime/Reactive Regression (RTR).
The Quantified Approach was shaped by the former "guerilla" method of Acceptance Testing which was itself a
response to testing phases which proved too costly to be sustainable for many small/medium-scale projects.
Acceptance testing in Extreme Programming
Acceptance testing is a term used in agile software development methodologies, particularly Extreme Programming,
referring to the functional testing of a user story by the software development team during the implementation phase.
The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or
many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black box system
tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying
the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority.
Acceptance tests are also used as regression tests prior to a production release. A user story is not considered
complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration or the development team will report zero progress. [2]
Types of acceptance testing
Typical types of acceptance testing include the following:
User acceptance testing
This may include factory acceptance testing, i.e. the testing done by factory users before the factory is moved
to its own site, after which site acceptance testing may be performed by the users at the site.
Operational acceptance testing
Also known as operational readiness testing, this refers to the checking done to a system to ensure that
processes and procedures are in place to allow the system to be used and maintained. This may include checks
done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures,
and security procedures.
Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract,
before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets
governmental, legal and safety standards.
Alpha and beta testing
Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff,
before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a
group of customers who use the system at their own locations and provide feedback, before the system is
released to other customers. The latter is often called "field testing".
List of development to production (testing) environments
DEV, Development Environment [1]
DTE, Development Testing Environment
QA, Quality Assurance (Testing Environment) [2]
DIT, Development Integration Testing
DST, Development System Testing
SIT, System Integration Testing
UAT, User Acceptance Testing [3]
OAT, Operations Acceptance Testing
PROD, Production Environment [4]
[1-4] Usual development environment stages in medium sized development projects.
List of Acceptance Testing Frameworks
Framework for Integrated Test (Fit)
FitNesse, a fork of Fit
ItsNat Java AJAX web framework with built-in, server based, functional web testing capabilities.
Selenium (software)
iMacros
Ranorex
Watir
Test Automation FX
See also
Black box testing
Development stage
Dynamic testing
Software testing
Test-driven development
White box testing
Unit testing
System testing
External links
Acceptance Test Engineering Guide
[3]
by Microsoft patterns & practices
[4]
Article Using Customer Tests to Drive Development
[5]
from Methods & Tools
[6]
Article Acceptance TDD Explained
[7]
from Methods & Tools
[6]
ITIL Definition - Release Acceptance (a sub process of Release Management) -- The Activity responsible for
testing a Release, and its implementation and Back-out Plans, to ensure they meet the agreed Business and IT
Operations Requirements.
References
[1] Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 0470404159.
[2] Acceptance Tests (http://www.extremeprogramming.org/rules/functionaltests.html)
[3] http://testingguidance.codeplex.com
[4] http://msdn.com/practices
[5] http://www.methodsandtools.com/archive/archive.php?id=23
[6] http://www.methodsandtools.com/
[7] http://www.methodsandtools.com/archive/archive.php?id=72
Integration testing
Integration testing (sometimes called Integration and Testing, abbreviated "I&T") is the phase in software testing in
which individual software modules are combined and tested as a group. It occurs after unit testing and before system
testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates,
applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system
ready for system testing.
Purpose
The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major
design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using
black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated
usage of shared data areas and inter-process communication is tested and individual subsystems are exercised
through their input interface. Test cases are constructed to test that all components within assemblages interact
correctly, for example across procedure calls or process activations, and this is done after testing individual modules,
i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a
verified base which is then used to support the integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up.
Big Bang
In this approach, all or most of the developed modules are coupled together to form a complete software system or
major part of the system and then used for integration testing. The Big Bang method is very effective for saving time
in the integration testing process. However, if the test cases and their results are not recorded properly, the entire
integration process will be more complicated and may prevent the testing team from achieving the goal of integration
testing.
A type of Big Bang Integration testing is called Usage Model testing. Usage Model testing can be used in both
software and hardware integration testing. The basis behind this type of integration testing is to run user-like
workloads in integrated user-like environments. In doing the testing in this manner, the environment is proofed,
while the individual components are proofed indirectly through their use. Usage Model testing takes an optimistic
approach to testing, because it expects to have few problems with the individual components. The strategy relies
heavily on the component developers to do the isolated unit testing for their product. The goal of the strategy is to
avoid redoing the testing done by the developers, and instead flesh out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can be more efficient and provides
better test coverage than traditional focused functional integration testing. To be more efficient and accurate, care
must be used in defining the user-like workloads for creating realistic scenarios in exercising the environment. This
gives confidence that the integrated environment will work as expected for the target customers.
Top-down and Bottom-up
Bottom Up Testing is an approach to integrated testing where the lowest level components are tested first, then used
to facilitate the testing of higher level components. The process is repeated until the component at the top of the
hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration
testing of lower level integrated modules, the next level of modules will be formed and can be used for integration
testing. This approach is helpful only when all or most of the modules of the same development level are ready. This
method also helps to determine the levels of software developed and makes it easier to report testing progress in the
form of a percentage.
Top Down Testing is an approach to integrated testing where the top integrated modules are tested and the branch of
the module is tested step by step until the end of the related module.
Sandwich Testing is an approach to combine top down testing with bottom up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to
find a missing branch link.
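To make the two directions concrete, here is a minimal sketch (the module names are invented for illustration): in top-down integration, a lower-level module that is not yet finished is replaced by a stub with canned answers, while in bottom-up integration a temporary driver exercises the completed low-level module before the next level is attached.

```java
// Invented example: a high-level ReportGenerator depends on a low-level
// TaxCalculator. Programming against an interface lets integration proceed
// in either direction.
interface TaxCalculator {
    int taxCents(int grossCents);
}

// Real low-level module (tested first under a bottom-up approach).
class FlatTaxCalculator implements TaxCalculator {
    public int taxCents(int grossCents) { return grossCents / 5; } // flat 20%
}

// Stub standing in for the calculator during top-down integration,
// before the real implementation exists.
class TaxCalculatorStub implements TaxCalculator {
    public int taxCents(int grossCents) { return 0; } // canned answer
}

// High-level module whose interactions are under test.
class ReportGenerator {
    private final TaxCalculator calc;
    ReportGenerator(TaxCalculator calc) { this.calc = calc; }
    String report(int grossCents) {
        return "net=" + (grossCents - calc.taxCents(grossCents));
    }
}

public class IntegrationSketch {
    public static void main(String[] args) {
        // Top-down: exercise ReportGenerator against the stub.
        System.out.println(new ReportGenerator(new TaxCalculatorStub()).report(1000));
        // Bottom-up: main acts as a driver for the verified low-level module,
        // which then supports integration of the level above it.
        System.out.println(new ReportGenerator(new FlatTaxCalculator()).report(1000));
    }
}
```

In sandwich testing, both of these would run at once: stubs below the meeting layer and drivers above it.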
Limitations
Any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items,
will generally not be tested.
See also
Design predicates
Software testing
System testing
Unit testing
Unit testing
In computer programming, unit testing is a method by which individual units of source code are tested to determine
if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be
an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects,[1] fakes and test
harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software
developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being
very manual (pencil and paper) to being formalized as part of build automation.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[2] A unit
test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits. Unit
tests find problems early in the development cycle.
Facilitates change
Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly
(i.e., regression testing). The procedure is to write test cases for all functions and methods so that whenever a change
causes a fault, it can be quickly identified and fixed.
Readily-available unit tests make it easy for the programmer to check whether a piece of code is still working
properly.
In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests will
continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon
established development practices and unit test coverage, up-to-the-second accuracy can be maintained.
Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach.
By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.
An elaborate hierarchy of unit tests does not equal integration testing. Integration testing cannot be fully automated
and thus still relies heavily on human testers.
Documentation
Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is
provided by a unit and how to use it can look at the unit tests to gain a basic understanding of the unit API.
Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate
appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test
case, in and of itself, documents these critical characteristics, although many software development environments do
not rely solely upon code to document the product in development.
On the other hand, ordinary narrative documentation is more susceptible to drifting from the implementation of the
program and will thus become outdated (e.g., design changes, feature creep, relaxed practices in keeping documents
up-to-date).
Design
When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit
test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java
example will help illustrate this point.
Here is a test class that specifies a number of elements of the implementation. First, that there must be an interface
called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that
the Adder interface should have a method called add, with two integer parameters, which returns another integer. It
also specifies the behaviour of this method for a small range of values.
public class TestAdder {
    public void testSum() {
        Adder adder = new AdderImpl();
        assert(adder.add(1, 1) == 2);
        assert(adder.add(1, 2) == 3);
        assert(adder.add(2, 2) == 4);
        assert(adder.add(0, 0) == 0);
        assert(adder.add(-1, -2) == -3);
        assert(adder.add(-1, 1) == 0);
        assert(adder.add(1234, 988) == 2222);
    }
}
In this case the unit test, having been written first, acts as a design document specifying the form and behaviour of a
desired solution, but not the implementation details, which are left for the programmer. Following the "do the
simplest thing that could possibly work" practice, the easiest solution that will make the test pass is shown below.
interface Adder {
    int add(int a, int b);
}

class AdderImpl implements Adder {
    public int add(int a, int b) {
        return a + b;
    }
}
Unlike other diagram-based design methods, using a unit-test as a design has one significant advantage. The design
document (the unit-test itself) can be used to verify that the implementation adheres to the design. With the unit-test
design method, the tests will never pass if the developer does not implement the solution according to the design.
It is true that unit testing lacks some of the accessibility of a diagram, but UML diagrams are now easily generated
for most modern languages by free tools (usually available as extensions to IDEs). Free tools, like those based on the
xUnit framework, outsource to another system the graphical rendering of a view for human consumption.
Separation of interface from implementation
Because some classes may have references to other classes, testing a class can frequently spill over into testing
another class. A common example of this is classes that depend on a database: in order to test the class, the tester
often writes code that interacts with the database. This is a mistake, because a unit test should usually not go outside
of its own class boundary, and especially should not cross such process/network boundaries because this can
introduce unacceptable performance problems to the unit test-suite. Crossing such unit boundaries turns unit tests
into integration tests, and when test cases fail, makes it less clear which component is causing the failure. See also
Fakes, mocks and integration tests.
Instead, the software developer should create an abstract interface around the database queries, and then implement
that interface with their own mock object. By abstracting this necessary attachment from the code (temporarily
reducing the net effective coupling), the independent unit can be more thoroughly tested than may have been
previously achieved. This results in a higher quality unit that is also more maintainable.
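A minimal sketch of this idea (the repository interface and all names here are invented for illustration): the class under test depends on an abstract interface, and the unit test supplies a hand-written mock in place of a real database, so the test never crosses a process or network boundary.

```java
// Invented example: UserGreeter would normally look names up in a database.
// Extracting the query behind an interface keeps the unit test in-process.
interface UserRepository {
    String findName(int userId);
}

class UserGreeter {
    private final UserRepository repo;

    UserGreeter(UserRepository repo) { this.repo = repo; }

    String greet(int userId) {
        String name = repo.findName(userId);
        return name == null ? "Hello, guest" : "Hello, " + name;
    }
}

public class UserGreeterTest {
    // Hand-written mock: canned data, no database, no network.
    static class MockUserRepository implements UserRepository {
        public String findName(int userId) {
            return userId == 7 ? "Ada" : null;
        }
    }

    public static void main(String[] args) {
        UserGreeter greeter = new UserGreeter(new MockUserRepository());
        if (!greeter.greet(7).equals("Hello, Ada")) throw new AssertionError("known user");
        if (!greeter.greet(99).equals("Hello, guest")) throw new AssertionError("unknown user");
        System.out.println("UserGreeter tests: PASS");
    }
}
```

Because the coupling to the database is reduced to one small interface, the greeting logic can be tested exhaustively in isolation, and a separate integration test can later verify the real database-backed implementation.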
Unit testing limitations
Testing cannot be expected to catch every error in the program: it is impossible to evaluate every execution path in
all but the most trivial programs. The same is true for unit testing. Additionally, unit testing by definition only tests
the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors
(such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing
should be done in conjunction with other software testing activities. Like all forms of software testing, unit tests can
only show the presence of errors; they cannot show the absence of errors.
Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two
tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written,
programmers often need 3 to 5 lines of test code.[3] This obviously takes time and its investment may not be worth
the effort. There are also many problems that cannot easily be tested at all - for example those that are
nondeterministic or involve multiple threads. In addition, writing code for a unit test is as likely to be at least as
buggy as the code it is testing. Fred Brooks in The Mythical Man-Month quotes: "Never take two chronometers to sea.
Always take one or three." Meaning: if two chronometers contradict, how do you know which one is correct?
To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development
process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes
that have been made to the source code of this or any other unit in the software. Use of a version control system is
essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software
can provide a list of the source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and
addressed immediately.[4] If such a process is not implemented and ingrained into the team's workflow, the
application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of
the test suite.
Applications
Extreme Programming
Unit testing is the cornerstone of Extreme Programming, which relies on an automated unit testing framework. This
automated unit testing framework can be either third party, e.g., xUnit, or created within the development group.
Extreme Programming uses the creation of unit tests for test-driven development. The developer writes a unit test
that exposes either a software requirement or a defect. This test will fail because either the requirement isn't
implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the
simplest code to make the test, along with other tests, pass.
Most code in a system is unit tested, but not necessarily all paths through the code. Extreme Programming mandates
a "test everything that can possibly break" strategy, over the traditional "test every execution path" method. This
leads developers to develop fewer tests than classical methods, but this isn't really a problem, more a restatement of
fact, as classical methods have rarely ever been followed methodically enough for all execution paths to have been
thoroughly tested. Extreme Programming simply recognizes that testing is rarely exhaustive (because it is often too
expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited
resources.
Crucially, the test code is considered a first class project artifact in that it is maintained at the same quality as the
implementation code, with all duplication removed. Developers release unit testing code to the code repository in
conjunction with the code it tests. Extreme Programming's thorough unit testing allows the benefits mentioned
above, such as simpler and more confident code development and refactoring, simplified code integration, accurate
documentation, and more modular designs. These unit tests are also constantly run as a form of regression test.
Techniques
Unit testing is commonly automated, but may still be performed manually. The IEEE does not favor one over the
other.[5] A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the
objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this, and
enables the many benefits listed in this article. Conversely, if not planned carefully, a careless manual unit test case
may execute as an integration test case that involves many software components, and thus preclude the achievement
of most if not all of the goals established for unit testing.
Under the automated approach, to fully realize the effect of isolation, the unit or code body subjected to the unit test
is executed within a framework outside of its natural environment, that is, outside of the product or calling context
for which it was originally created. Testing in an isolated manner has the benefit of revealing unnecessary
dependencies between the code being tested and other units or data spaces in the product. These dependencies can
then be eliminated.
Using an automation framework, the developer codes criteria into the test to verify the correctness of the unit.
During execution of the test cases, the framework logs those that fail any criterion. Many frameworks will also
automatically flag and report in a summary these failed test cases. Depending upon the severity of a failure, the
framework may halt subsequent testing.
As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code
bodies. This practice promotes healthy habits in software development. Design patterns, unit testing, and refactoring
often work together so that the best solution may emerge.
Unit testing frameworks
Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite.
They help simplify the process of unit testing, having been developed for a wide variety of languages.
It is generally possible to perform unit testing without the support of a specific framework by writing client code that
exercises the units under test and uses assertions, exception handling, or other control flow mechanisms to signal
failure. Unit testing without a framework is valuable in that there is a barrier to entry for the adoption of unit testing;
having scant unit tests is hardly better than having none at all, whereas once a framework is in place, adding unit
tests becomes relatively easy.[6] In some frameworks many advanced unit test features are missing or must be
hand-coded.
Language-level unit testing support
Some programming languages support unit testing directly. Their grammar allows the direct declaration of unit tests
without importing a library (whether third party or standard). Additionally, the boolean conditions of the unit tests
can be expressed in the same syntax as boolean expressions used in non-unit test code, such as what is used for if
and while statements.
Languages that directly support unit testing include:
Cobra
D
See also
Characterization test
Design predicates
Extreme Programming
Integration testing
List of unit testing frameworks
Regression testing
Software archaeology
Software testing
Test case
Test-driven development
xUnit - a family of unit testing frameworks.
External links
The evolution of Unit Testing Syntax and Semantics [7]
Unit Testing Guidelines from GeoSoft [8]
Test Driven Development (Ward Cunningham's Wiki) [9]
Unit Testing 101 for the Non-Programmer [10]
Step-by-Step Guide to JPA-Enabled Unit Testing (Java EE) [11]
References
[1] Fowler, Martin (2007-01-02). "Mocks aren't Stubs" (http://martinfowler.com/articles/mocksArentStubs.html). Retrieved 2008-04-01.
[2] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 75. ISBN 0470042125.
[3] Cramblitt, Bob (2007-09-20). "Alberto Savoia sings the praises of software testing" (http://searchsoftwarequality.techtarget.com/originalContent/0,289142,sid92_gci1273161,00.html). Retrieved 2007-11-29.
[4] daVeiga, Nada (2008-02-06). "Change Code Without Fear: Utilize a regression safety net" (http://www.ddj.com/development-tools/206105233). Retrieved 2008-02-08.
[5] IEEE Standards Board, "IEEE Standard for Software Unit Testing: An American National Standard, ANSI/IEEE Std 1008-1987" (http://iteso.mx/~pgutierrez/calidad/Estandares/IEEE 1008.pdf) in IEEE Standards: Software Engineering, Volume Two: Process Standards, 1999 Edition, published by The Institute of Electrical and Electronics Engineers, Inc. Software Engineering Technical Committee of the IEEE Computer Society.
[6] Bullseye Testing Technology (2006-2008). "Intermediate Coverage Goals" (http://www.bullseye.com/coverage.html#intermediate). Retrieved 24 March 2009.
[7] http://weblogs.asp.net/rosherove/archive/2008/01/17/the-evolution-of-unit-testing-and-syntax.aspx
[8] http://geosoft.no/development/unittesting.html
[9] http://c2.com/cgi/wiki?TestDrivenDevelopment
[10] http://www.saravanansubramanian.com/Saravanan/Articles_On_Software/Entries/2010/1/19_Unit_Testing_101_For_Non-Programmers.html
[11] http://www.sizovpoint.com/2010/01/step-by-step-guide-to-jpa-enabled-unit.html
Code refactoring
Code refactoring is the process of changing a computer program's source code without modifying its external
functional behavior in order to improve some of the nonfunctional attributes of the software. Advantages include
improved code readability and reduced complexity to improve the maintainability of the source code, as well as a
more expressive internal architecture or object model to improve extensibility.
"By continuously improving the design of code, we make it easier and easier to work with. This is in sharp contrast to what typically happens:
little refactoring and a great deal of attention paid to expediently adding new features. If you get into the hygienic habit of refactoring
continuously, you'll find that it is easier to extend and maintain code."
-- Joshua Kerievsky, Refactoring to Patterns[1]
Overview
Refactoring is usually motivated by noticing a code smell.[2] For example, the method at hand may be very long, or it
may be a near duplicate of another nearby method. Once recognized, such problems can be addressed by refactoring
the source code, or transforming it into a new form that behaves the same as before but that no longer "smells". For a
long routine, extract one or more smaller subroutines. Or for duplicate routines, remove the duplication and utilize
one shared function in their place. Failure to perform refactoring can result in accumulating technical debt.
There are two general categories of benefits to the activity of refactoring.
1. Maintainability. It is easier to fix bugs because the source code is easy to read and the intent of its author is easy
to grasp.[3] This might be achieved by reducing large monolithic routines into a set of individually concise,
well-named, single-purpose methods. It might be achieved by moving a method to a more appropriate class, or by
removing misleading comments.
2. Extensibility. It is easier to extend the capabilities of the application if it uses recognizable design patterns, and it
provides some flexibility where none before may have existed.[1]
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests should demonstrate in a
few seconds that the behavior of the module is correct. The process is then an iterative cycle of making a small
program transformation, testing it to ensure correctness, and making another small transformation. If at any point a
test fails, you undo your last small change and try again in a different way. Through many small steps the program
moves from where it was to where you want it to be. Proponents of extreme programming and other agile
methodologies describe this activity as an integral part of the software development cycle.
List of refactoring techniques
Here are some examples of code refactorings. A longer list can be found in Fowler's Refactoring book[2] and on Fowler's
Refactoring Website.[4]
Techniques that allow for more abstraction
Encapsulate Field - force code to access the field with getter and setter methods
Generalize Type - create more general types to allow for more code sharing
Replace type-checking code with State/Strategy[5]
Replace conditional with polymorphism[6]
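As a small illustration of the first technique above (the Point class is invented here), Encapsulate Field makes a public field private and routes all access through a getter and setter, giving the class a seam where validation or a change of representation can later be added without touching callers:

```java
// Before Encapsulate Field: callers reach into the object directly.
class PointBefore {
    public int x;
}

// After Encapsulate Field: the field is private and reached only through
// accessor methods, so its representation can change behind them.
class Point {
    private int x;

    public int getX() { return x; }

    public void setX(int x) { this.x = x; }
}

public class EncapsulateFieldDemo {
    public static void main(String[] args) {
        Point p = new Point();
        p.setX(3); // callers now use the setter instead of assigning the field
        System.out.println(p.getX());
    }
}
```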
Techniques for breaking code apart into more logical pieces
Extract Method, to turn part of a larger method into a new method. By breaking down code in smaller pieces, it
is more easily understandable. This is also applicable to functions.
Extract Class moves part of the code from an existing class into a new class.
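A before-and-after sketch of Extract Method (the invoice example is invented for illustration): a detail buried inside a longer method is pulled out into small, well-named helpers, while the observable behavior stays identical.

```java
// Invented invoice example illustrating Extract Method.
class Invoice {
    private final String customer;
    private final int amountCents;

    Invoice(String customer, int amountCents) {
        this.customer = customer;
        this.amountCents = amountCents;
    }

    // Before: one method mixes the banner with the details.
    String printOwingBefore() {
        StringBuilder out = new StringBuilder();
        out.append("*** Customer Owes ***\n");                                // banner
        out.append(customer).append(": ").append(amountCents).append("\n");   // details
        return out.toString();
    }

    // After Extract Method: each extracted piece has a single, named purpose.
    String printOwing() {
        return printBanner() + printDetails();
    }

    private String printBanner() { return "*** Customer Owes ***\n"; }

    private String printDetails() { return customer + ": " + amountCents + "\n"; }
}

public class ExtractMethodDemo {
    public static void main(String[] args) {
        Invoice inv = new Invoice("alice", 1200);
        // The refactoring must not change external behavior:
        if (!inv.printOwing().equals(inv.printOwingBefore())) throw new AssertionError();
        System.out.print(inv.printOwing());
    }
}
```

The equality check in main is exactly the kind of regression test the surrounding text calls for: it demonstrates that the transformation preserved behavior.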
Techniques for improving names and location of code
Move Method or Move Field - move to a more appropriate Class or source file
Rename Method or Rename Field - changing the name into a new one that better reveals its purpose
Pull Up - in OOP, move to a superclass
Push Down - in OOP, move to a subclass
Hardware refactoring
While the term refactoring originally referred exclusively to refactoring of software code, in recent years code
written in hardware description languages (HDLs) has also been refactored. The term hardware refactoring is used
as a shorthand term for refactoring of code in hardware description languages. Since HDLs are not considered to be
programming languages by most hardware engineers,[7] hardware refactoring is to be considered a separate field
from traditional code refactoring.
Automated refactoring of analog hardware descriptions (in VHDL-AMS) has been proposed by Zeng and Huss.[8] In
their approach, refactoring preserves the simulated behavior of a hardware design. The non-functional measurement
that improves is that refactored code can be processed by standard synthesis tools, while the original code cannot.
Refactoring of digital HDLs, albeit manual refactoring, has also been investigated by Synopsys fellow Mike
Keating.[9] [10] His target is to make complex systems easier to understand, which increases the designers'
productivity.
In the summer of 2008, there was an intense discussion about refactoring of VHDL code on the
news://comp.lang.vhdl newsgroup.[11] The discussion revolved around a specific manual refactoring performed by
one engineer, and the question to whether or not automated tools for such refactoring exist.
As of late 2009, Sigasi is offering automated tool support for VHDL refactoring.[12]
History
Although refactoring code has been done informally for years, William Opdyke's 1992 Ph.D. dissertation[13] is the
first known paper to specifically examine refactoring,[14] although all the theory and machinery have long been
available as program transformation systems. All of these resources provide a catalog of common methods for
refactoring; a refactoring method has a description of how to apply the method and indicators for when you should
(or should not) apply the method.
Martin Fowler's book Refactoring: Improving the Design of Existing Code[2] is the canonical reference.
The first known use of the term "refactoring" in the published literature was in a September, 1990 article by William
F. Opdyke and Ralph E. Johnson.[15] Opdyke's Ph.D. thesis,[13] published in 1992, also used this term.[14]
The term "factoring" has been used in the Forth community since at least the early 1980s. Chapter Six of Leo
Brodie's book Thinking Forth (1984) is dedicated to the subject.
In extreme programming, the Extract Method refactoring technique has essentially the same meaning as factoring in
Forth: to break down a "word" (or function) into smaller, more easily maintained functions.
Automated code refactoring
Many software editors and IDEs have automated refactoring support. Here is a list of a few of these editors, or
so-called refactoring browsers.
IntelliJ IDEA (for Java)
Eclipse's Java Development Toolkit (JDT)
NetBeans (for Java)
Embarcadero Delphi
Visual Studio (for .NET)
JustCode (addon for Visual Studio)
ReSharper (addon for Visual Studio)
Coderush (addon for Visual Studio)
Visual Assist (addon for Visual Studio with refactoring support for VB, VB.NET, C# and C++)
DMS Software Reengineering Toolkit (Implements large-scale refactoring for C, C++, C#, COBOL, Java, PHP
and other languages)
Photran a Fortran plugin for the Eclipse IDE
SharpSort addin for Visual Studio 2008
Sigasi HDT (for VHDL)
XCode
Smalltalk Refactoring Browser (for Smalltalk)
See also
Code review
Design pattern (computer science)
Obfuscated code
Peer review
Prefactoring
Rewrite (programming)
Separation of concerns
Test-driven development
Unit testing
Code Factoring
Redesign (software)
Further reading
Fowler, Martin (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley. ISBN 0-201-48567-2.
Wake, William C. (2003). Refactoring Workbook. Addison-Wesley. ISBN 0-321-10929-5.
Mens, Tom and Tourwé, Tom (2004) A Survey of Software Refactoring[16], IEEE Transactions on Software Engineering, February 2004 (vol. 30 no. 2), pp. 126-139
Feathers, Michael C (2004). Working Effectively with Legacy Code. Prentice Hall. ISBN 0-13-117705-2.
Kerievsky, Joshua (2004). Refactoring To Patterns. Addison-Wesley. ISBN 0-321-21335-1.
Arsenovski, Danijel (2008). Professional Refactoring in Visual Basic. Wrox. ISBN 0-47-017979-1.
Arsenovski, Danijel (2009). Professional Refactoring in C# and ASP.NET. Wrox. ISBN 978-0470434529.
Ritchie, Peter (2010). Refactoring with Visual Studio 2010. Packt. ISBN 978-1849680103.
External links
What Is Refactoring? [17] (c2.com article)
Martin Fowler's homepage about refactoring [18]
Aspect-Oriented Refactoring [19] by Ramnivas Laddad
A Survey of Software Refactoring [20] by Tom Mens and Tom Tourwé
Refactoring Java Code [21]
Refactoring To Patterns Catalog [22]
Extract Boolean Variable from Conditional [23] (a refactoring pattern not listed in the above catalog)
Test-Driven Development With Refactoring [24]
Revisiting Fowler's Video Store: Refactoring Code, Refining Abstractions [25]
References
[1] Kerievsky, Joshua (2004). Refactoring to Patterns. Addison-Wesley.
[2] Fowler, Martin (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley.
[3] Martin, Robert (2009). Clean Code. Prentice Hall.
[4] Refactoring techniques in Fowler's refactoring website (http://www.refactoring.com/catalog/index.html)
[5] Replace type-checking code with State/Strategy (http://www.refactoring.com/catalog/replaceTypeCodeWithStateStrategy.html)
[6] Replace conditional with polymorphism (http://www.refactoring.com/catalog/replaceConditionalWithPolymorphism.html)
[7] Hardware description languages#HDL and programming languages
[8] Kaiping Zeng, Sorin A. Huss, "Architecture refinements by code refactoring of behavioral VHDL-AMS models". ISCAS 2006
[9] M. Keating: "Complexity, Abstraction, and the Challenges of Designing Complex Systems", in DAC'08 tutorial (http://www.dac.com/events/eventdetails.aspx?id=77-130) "Bridging a Verification Gap: C++ to RTL for Practical Design"
[10] M. Keating, P. Bricaud: Reuse Methodology Manual for System-on-a-Chip Designs, Kluwer Academic Publishers, 1999.
[11] http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.vhdl/2008-06/msg00173.html
[12] http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=222001855
[13] Opdyke, William F. (June 1992) (compressed PostScript). Refactoring Object-Oriented Frameworks (ftp://st.cs.uiuc.edu/pub/papers/refactoring/opdyke-thesis.ps.Z). Ph.D. thesis. University of Illinois at Urbana-Champaign. Retrieved 2008-02-12.
[14] Martin Fowler, "MF Bliki: EtymologyOfRefactoring" (http://martinfowler.com/bliki/EtymologyOfRefactoring.html)
[15] Opdyke, William F.; Johnson, Ralph E. (September 1990). "Refactoring: An Aid in Designing Application Frameworks and Evolving Object-Oriented Systems". Proceedings of the Symposium on Object Oriented Programming Emphasizing Practical Applications (SOOPPA). ACM.
[16] http://doi.ieeecomputersociety.org/10.1109/TSE.2004.1265817
[17] http://c2.com/cgi/wiki?WhatIsRefactoring
[18] http://www.refactoring.com/
[19] http://www.theserverside.com/articles/article.tss?l=AspectOrientedRefactoringPart1
[20] http://csdl.computer.org/comp/trans/ts/2004/02/e2toc.htm
[21] http://www.methodsandtools.com/archive/archive.php?id=4
[22] http://industriallogic.com/xp/refactoring/catalog.html
[23] http://www.industriallogic.com/papers/extractboolean.html
[24] http://www.testingtv.com/2009/09/24/test-driven-development-with-refactoring/
[25] http://blog.symprise.net/2009/04/revisiting-fowlers-video-store-refactoring-code-reengineering-abstractions/
Test case
A test case in software engineering is a set of conditions or variables under which a tester will determine whether an
application or software system is working correctly or not. The mechanism for determining whether a software
program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a
requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a
software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly
when written. Written test cases are usually collected into test suites.
Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each
requirement: one positive test and one negative test, unless a requirement has sub-requirements. In that situation,
each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the
test is frequently done using a traceability matrix. Written test cases should include a description of the functionality
to be tested, and the preparation required to ensure that the test can be conducted.
A formal written test-case is characterized by a known input and by an expected output, which is worked out before
the test is executed. The known input should test a precondition and the expected output should test a postcondition.
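As an illustration, a positive and a negative test case for a hypothetical discount requirement might look as follows in Python's unittest; the function, names and values here are invented for the example:

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical function under test: apply a fractional discount."""
    if price < 0 or not 0 <= rate <= 1:
        raise ValueError("invalid input")
    return round(price * (1 - rate), 2)

class DiscountTestCase(unittest.TestCase):
    def test_valid_discount(self):
        # Positive test: a known input whose expected output was
        # worked out before the test is executed.
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_invalid_rate(self):
        # Negative test for the same requirement: the precondition
        # is violated, so an error is the expected outcome.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)
```

Together the two methods give the requirement its minimum of one positive and one negative test case.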
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal
operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities
and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These
scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or
they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex,
and easy to evaluate. They are usually different from test cases in that test cases are single steps while scenarios
cover a number of steps.
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour or features of an application. An expected result or expected outcome is usually given.
Additional information that may be included:
test case ID
test case description
test step or order of execution number
related requirement(s)
depth
test category
author
check boxes for whether the test is automatable and has been automated.
Additional fields that may be included and completed when the tests are executed:
pass/fail
remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common repository.
In a database system, you may also be able to see past test results and who generated the results and the system
configuration used to generate those results. These past results would usually be stored in a separate table.
Test suites often also contain
Test summary
Configuration
Besides a description of the functionality to be tested, and the preparation required to ensure that the test can be
conducted, the most time consuming part in the test case is creating the tests and modifying them when the system
changes.
Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product. The first test is taken as the baseline for subsequent test/product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or
clients of the system to ensure the developed system meets the requirements specified or the contract. User
acceptance tests are differentiated by the inclusion of happy path or positive test cases to the almost complete
exclusion of negative test cases.
External links
Writing Software Security Test Cases - Putting security test cases into your test plan [1] by Robert Auger
References
[1] http://www.qasec.com/cycle/securitytestcases.shtml
xUnit
Various code-driven testing frameworks have come to be known collectively as xUnit. These frameworks allow
testing of different elements (units) of software, such as functions and classes. The main advantage of xUnit
frameworks is that they provide an automated solution with no need to write the same tests many times, and no need
to remember what should be the result of each test. Such frameworks are based on a design by Kent Beck, originally
implemented for Smalltalk as SUnit. Erich Gamma ported SUnit to Java, creating JUnit, and from there the framework was ported to further languages, e.g. CppUnit (for C++) and NUnit (for .NET). These frameworks are collectively referred to as xUnit and are usually free, open-source software. They are now available for many programming languages and
development platforms.
xUnit architecture
All xUnit frameworks share the following basic component architecture, with some varied implementation details.
Test case
This is the most elemental class. All unit tests are inherited from it.
Test fixtures
A test fixture (also known as a test context) is the set of preconditions or state needed to run a test. The developer
should set up a known good state before the tests, and after the tests return to the original state.
Test suites
A test suite is a set of tests that all share the same fixture. The order of the tests should not matter.
Test execution
The execution of an individual unit test proceeds as follows:
setup();    /* First, we should prepare our 'world' to make an isolated
               environment for testing */
...
/* Body of test - here we make all the tests */
...
teardown(); /* In the end, whether we succeed or fail, we should clean up
               our 'world' so as not to disturb other tests or code */
The setup() and teardown() methods serve to initialize and clean up test fixtures.
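This lifecycle can be sketched with Python's unittest, itself an xUnit-family framework; the file-based fixture is an invented example:

```python
import os
import tempfile
import unittest

class FileFixtureTest(unittest.TestCase):
    def setUp(self):
        # setup(): prepare an isolated 'world' - a fresh scratch file
        # is created before every individual test.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def test_write_then_read(self):
        # Body of the test, run against the known-good fixture state.
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

    def tearDown(self):
        # teardown(): return to the original state, whether the test
        # body succeeded or failed, so other tests are not disturbed.
        os.remove(self.path)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FileFixtureTest))
```

Because the fixture is rebuilt in setUp() for each test, the tests remain order-independent, as a suite requires.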
Assertions
An assertion is a function or macro that verifies the behavior (or the state) of the unit under test. Failure of an
assertion typically throws an exception, aborting the execution of the current test.
xUnit Frameworks
Many xUnit frameworks exist for various programming languages and development platforms.
List of unit testing frameworks
xUnit Extensions
Extensions are available to extend xUnit frameworks with additional specialized functionality. Examples of such extensions include XMLUnit [1], DbUnit [2], HtmlUnit and HttpUnit.
See also
Unit testing in general:
Unit testing
Software testing
Programming approach to unit testing:
Test-driven development
Extreme programming
External links
Kent Beck's original testing framework paper [3]
Other list of various unit testing frameworks [4]
OpenSourceTesting.org lists many unit testing frameworks, performance testing tools and other tools programmers/developers may find useful [5]
Test automation patterns for writing tests/specs in xUnit [6]
Martin Fowler on the background of xUnit [7]
References
[1] http://xmlunit.sourceforge.net/
[2] http://www.dbunit.org/
[3] http://www.xprogramming.com/testfram.htm
[4] http://www.xprogramming.com/software.htm
[5] http://opensourcetesting.org/
[6] http://xunitpatterns.com/
[7] http://www.martinfowler.com/bliki/Xunit.html
Test stubs
In computer science, test stubs are programs which simulate the behaviors of software components (or modules) that
are the dependent modules of the module being tested.
"Test stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test." [1]
Test stubs are mainly used in the top-down approach to incremental testing. A stub is a software program which acts in place of a module and produces the output that the actual product/software would.
Example
Consider a software program which queries a database to obtain the sum price total of all products stored in the
database. However, the query is slow and consumes a large number of system resources. This reduces the number of
test runs per day. Secondly, the tests need to be conducted on values larger than what is currently in the database.
The method (or call) used to perform this is get_total(). For testing purposes, the source code in get_total() could be
temporarily replaced with a simple statement which returned a specific value. This would be a test stub.
There are several testing frameworks available and there is software that can generate test stubs based on existing
source code and testing requirements.
External links
http://xunitpatterns.com/Test%20Stub.html [2]
See also
Software testing
Test Double
References
[1] Fowler, Martin (2007), Mocks Aren't Stubs (Online) (http://martinfowler.com/articles/mocksArentStubs.html#TheDifferenceBetweenMocksAndStubs)
[2] http://xunitpatterns.com/Test%20Stub.html
Mock object
In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in
controlled ways. A computer programmer typically creates a mock object to test the behavior of some other object,
in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in
vehicle impacts.
Reasons for use
In a unit test, mock objects can simulate the behavior of complex, real (non-mock) objects and are therefore useful
when a real object is impractical or impossible to incorporate into a unit test. If an object has any of the following
characteristics, it may be useful to use a mock object in its place:
supplies non-deterministic results (e.g. the current time or the current temperature);
has states that are difficult to create or reproduce (e.g. a network error);
is slow (e.g. a complete database, which would have to be initialized before the test);
does not yet exist or may change behavior;
would have to include information and methods exclusively for testing purposes (and not for its actual task).
For example, an alarm clock program which causes a bell to ring at a certain time might get the current time from the
outside world. To test this, the test must wait until the alarm time to know whether it has rung the bell correctly. If a
mock object is used in place of the real object, it can be programmed to provide the bell-ringing time (whether it is
actually that time or not) so that the alarm clock program can be tested in isolation.
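The alarm-clock scenario can be sketched as follows; the class and method names are invented for illustration:

```python
import datetime

class MockClock:
    """Mock of the outside-world time source; the test dictates the time."""
    def __init__(self, fixed_now):
        self.fixed_now = fixed_now

    def now(self):
        # Returns the programmed time, whether it is actually that time or not.
        return self.fixed_now

class AlarmClock:
    """System under test: rings the bell once the alarm time is reached."""
    def __init__(self, clock, alarm_time):
        self.clock = clock
        self.alarm_time = alarm_time
        self.bell_rung = False

    def check(self):
        if self.clock.now() >= self.alarm_time:
            self.bell_rung = True

# Tested in isolation: no need to wait until 07:00 actually arrives.
alarm = AlarmClock(MockClock(datetime.time(7, 0)), datetime.time(7, 0))
alarm.check()
assert alarm.bell_rung
```

Swapping in MockClock(datetime.time(6, 59)) would let the same test verify that the bell stays silent before the alarm time.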
Technical details
Mock objects have the same interface as the real objects they mimic, allowing a client object to remain unaware of
whether it is using a real object or a mock object. Many available mock object frameworks allow the programmer to
specify which, and in what order, methods will be invoked on a mock object and what parameters will be passed to
them, as well as what values will be returned. Thus, the behavior of a complex object such as a network socket can
be mimicked by a mock object, allowing the programmer to discover whether the object being tested responds
appropriately to the wide variety of states such objects may be in.
Mocks, fakes and stubs
Some authors [1] draw a distinction between fake and mock objects. Fakes are the simpler of the two, simply implementing the same interface as the object that they represent and returning pre-arranged responses. Thus a fake object merely provides a set of method stubs.
In the book The Art of Unit Testing [2], mocks are described as fake objects that help decide whether a test failed or passed by verifying whether an interaction with an object occurred. Everything else is defined as a stub. In that book, "fakes" are anything that is not real; based on their usage, they are either stubs or mocks.
Mock objects in this sense do a little more: their method implementations contain assertions of their own. This
means that a true mock, in this sense, will examine the context of each call, perhaps checking the order in which its
methods are called, perhaps performing tests on the data passed into the method calls as arguments.
Setting expectations
Consider an example where an authorization sub-system has been mocked. The mock object implements an isUserAllowed(task : Task) : boolean [3] method to match that in the real authorization class. Many advantages
follow if it also exposes an isAllowed : boolean property, which is not present in the real class. This allows test code
easily to set the expectation that a user will, or will not, be granted permission in the next call and therefore readily
to test the behavior of the rest of the system in either case.
Similarly, a mock-only setting could ensure that subsequent calls to the sub-system will cause it to throw an
exception, or hang without responding, or return null etc. Thus it is possible to develop and test client behaviors for
all realistic fault conditions in back-end sub-systems as well as for their expected responses. Without such a simple
and flexible mock system, testing each of these situations may be too laborious for them to be given proper
consideration.
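A minimal sketch of such expectation setting; the class and attribute names are invented, and only is_user_allowed mirrors the (hypothetical) real interface:

```python
class MockAuthorizer:
    """Mock of the authorization sub-system. The 'allowed' and 'fail_mode'
    switches exist purely so tests can set expectations; they are not
    present in the real class."""
    def __init__(self):
        self.allowed = True       # mock-only: next call grants permission
        self.fail_mode = False    # mock-only: simulate a broken back-end

    def is_user_allowed(self, task):
        if self.fail_mode:
            raise ConnectionError("authorization back-end unreachable")
        return self.allowed

def run_task(task, auth):
    """Client code under test: must behave sensibly in all three cases."""
    try:
        return "done" if auth.is_user_allowed(task) else "denied"
    except ConnectionError:
        return "error"

auth = MockAuthorizer()
assert run_task("report", auth) == "done"    # permission granted
auth.allowed = False
assert run_task("report", auth) == "denied"  # permission refused
auth.fail_mode = True
assert run_task("report", auth) == "error"   # back-end fault condition
```

Each fault condition takes one line to set up, which is what makes testing all of them practical.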
Writing log strings
A mock database object's save(person : Person) method may not contain much (if any) implementation code. It
might or might not check the existence and perhaps the validity of the Person object passed in for saving (see fake
vs. mock discussion above), but beyond that there might be no other implementation.
This is a missed opportunity. The mock method could add an entry to a public log string. The entry need be no more than "Person saved", [4] or it may include some details from the person object instance, such as a name or ID. If the
test code also checks the final contents of the log string after various series of operations involving the mock
database then it is possible to verify that in each case exactly the expected number of database saves have been
performed. This can find otherwise invisible performance-sapping bugs, for example, where a developer, nervous of
losing data, has coded repeated calls to save() where just one would have sufficed.
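A sketch of such a log-keeping mock; the names are invented for illustration:

```python
class MockDatabase:
    """Mock database whose save() method appends to a public log."""
    def __init__(self):
        self.log = []

    def save(self, person):
        # Hypothetical interface matching the real database object;
        # each entry lets the test count saves after the fact.
        self.log.append("Person saved: %s" % person)

def register_person(person, db):
    """Code under test. A nervous implementation that called save()
    twice here would show up as an extra log entry."""
    db.save(person)

db = MockDatabase()
register_person("Ada", db)
# Verify that exactly the expected number of database saves occurred.
assert db.log == ["Person saved: Ada"]
```

Checking the log's final contents, rather than just that save() was reachable, is what exposes the repeated-call performance bug described above.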
Use in test-driven development
Programmers working with the test-driven development (TDD) method make use of mock objects when writing
software. Mock objects meet the interface requirements of, and stand in for, more complex real ones; thus they allow
programmers to write and unit-test functionality in one area without actually calling complex underlying or
collaborating classes. [5] Using mock objects allows developers to focus their tests on the behavior of the system
under test (SUT) without worrying about its dependencies. For example, testing a complex algorithm based on
multiple objects being in particular states can be clearly expressed using mock objects in place of real objects.
Apart from complexity issues and the benefits gained from this separation of concerns, there are practical speed
issues involved. Developing a realistic piece of software using TDD may easily involve several hundred unit tests. If
many of these induce communication with databases, web services and other out-of-process or networked systems,
then the suite of unit tests will quickly become too slow to be run regularly. This in turn leads to bad habits and a
reluctance by the developer to maintain the basic tenets of TDD.
When mock objects are replaced by real ones then the end-to-end functionality will need further testing. These will
be integration tests rather than unit tests.
Limitations
The use of mock objects can closely couple the unit tests to the actual implementation of the code that is being
tested. For example, many mock object frameworks allow the developer to specify the order of and number of times
that the methods on a mock object are invoked; subsequent refactoring of the code that is being tested could
therefore cause the test to fail even though the method still obeys the contract of the previous implementation. This
illustrates that unit tests should test a method's external behavior rather than its internal implementation. Over-use of
mock objects as part of a suite of unit tests can result in a dramatic increase in the amount of maintenance that needs
to be performed on the tests themselves during system evolution as refactoring takes place. The improper
maintenance of such tests during evolution could allow bugs to be missed that would otherwise be caught by unit
tests that use instances of real classes. Conversely, simply mocking one method might require far less configuration
than setting up an entire real class and therefore reduce maintenance needs.
Mock objects have to accurately model the behavior of the object they are mocking, which can be difficult to achieve
if the object being mocked comes from another developer or project or if it hasn't even been written yet. If the
behavior isn't modeled correctly then the unit tests may register a pass even though a failure would occur at run time
under the same conditions that the unit test is exercising, thus rendering the unit test inaccurate. [6]
See also
Abstract method
Dummy code
List of mock object frameworks
Hamcrest
Method stub
Test Double
External links
Tim Mackinnon (8 September 2009). "A Brief History of Mock Objects" [7]. Mockobjects.com.
Mocks vs stubs by Roy Osherove [8]
Changing terms from mocking to isolation frameworks [9]
A poll on mocking frameworks usage in .NET [10]
Interaction Testing with the Typemock Isolator mocking framework [11]
Great Java mock frameworks comparison article: Java mock framework comparison [12]
Test Doubles [13]: a section of a book on unit testing patterns.
All about mock objects! Portal concerning mock objects [14]
Mock Roles, not Objects [15], a paper on the technique that was presented at OOPSLA 2004.
Using mock objects for complex unit tests [16] IBM developerWorks
Unit testing with mock objects [17] IBM developerWorks
Using Mock Objects with Test Driven Development [18]
Mock Object Patterns at Hillside [19] Mock Object Design Patterns
Mocks Aren't Stubs [20] (Martin Fowler) Article about developing tests with mock objects. Identifies and compares the "classical" and "mockist" schools of testing. Touches on points about the impact on design and maintenance.
Mocking the Embedded World [21] Paper and sample project concerned with adapting mocking and Presenter First for embedded software development.
Surviving Mock Abuse [17] Pitfalls of overuse of mocks and advice for avoiding them
Overly Mocked [22] Words of advice for using mocks
Don't mock infrastructure [23]
Responsibility Driven Design with Mock Objects [24]
Mock framework for Microsoft Dynamics AX 2009 [25]
Interaction Based Testing with Rhino Mocks [26]
Unit Testing with Mock Objects via MockBox [27]
References
[1] Feathers, Michael (2005). "Sensing and separation". Working Effectively with Legacy Code. NJ: Prentice Hall. p. 23 et seq. ISBN 0-13-117705-2.
[2] Osherove, Roy (2009). "Interaction testing with mock objects et seq". The Art of Unit Testing. Manning. ISBN 978-1933988276.
[3] These examples use a nomenclature that is similar to that used in Unified Modeling Language
[4] Beck, Kent (2003). Test-Driven Development By Example. Boston: Addison-Wesley. pp. 146-7. ISBN 0-321-14653-0.
[5] Beck, Kent (2003). Test-Driven Development By Example. Boston: Addison-Wesley. pp. 144-5. ISBN 0-321-14653-0.
[6] "Approaches to Mocking" (http://www.onjava.com/pub/a/onjava/2004/02/11/mocks.html#Approaches), OnJava.com | O'Reilly Media
[7] http://www.mockobjects.com/2009/09/brief-history-of-mock-objects.html
[8] http://weblogs.asp.net/rosherove/archive/2007/09/16/mocks-and-stubs-the-difference-is-in-the-flow-of-information.aspx
[9] http://devlicio.us/blogs/derik_whittaker/archive/2008/12/09/changing-terms-from-mocking-framework-to-isolation-framework.aspx
[10] http://weblogs.asp.net/rosherove/archive/2009/09/30/poll-which-mocking-isolation-framework-do-you-use.aspx
[11] http://typemock.org/getting-started-step-1-set/
[12] http://www.sizovpoint.com/2009/03/java-mock-frameworks-comparison.html
[13] http://xunitpatterns.com/Test%20Double.html
[14] http://www.mockobjects.com
[15] http://www.jmock.org/oopsla2004.pdf
[16] http://www-128.ibm.com/developerworks/rational/library/oct06/pollice/index.html
[17] http://www.ibm.com/developerworks/library/j-mocktest.html
[18] http://www.theserverside.com/tt/articles/article.tss?l=JMockTestDrivenDev
[19] http://hillside.net/plop/plop2003/Papers/Brown-mock-objects.pdf
[20] http://martinfowler.com/articles/mocksArentStubs.html
[21] http://www.atomicobject.com/pages/Embedded+Software#MockingEmbeddedWorld
[22] http://fishbowl.pastiche.org/2003/12/16/overly_mocked/
[23] http://www.harukizaemon.com/2003/11/don-mock-infrastructure.html
[24] http://www.methodsandtools.com/archive/archive.php?id=90
[25] http://axmocks.codeplex.com/
[26] http://www.testingtv.com/2009/08/28/interaction-based-testing-with-rhino-mocks/
[27] http://blog.coldbox.org/post.cfm/unit-testing-with-mock-objects-amp-mockbox
Separation of concerns
In computer science, separation of concerns (SoC) is the process of separating a computer program into distinct
features that overlap in functionality as little as possible. A concern is any piece of interest or focus in a program.
Typically, concerns are synonymous with features or behaviors. Progress towards SoC is traditionally achieved
through modularity of programming and encapsulation (or "transparency" of operation), with the help of information
hiding. Layered designs in information systems are also often based on separation of concerns (e.g., presentation
layer, business logic layer, data access layer, database layer).
Implementation
All programming paradigms aid developers in the process of improving SoC. For example, object-oriented
programming languages such as Delphi, C++, Java, and C# can separate concerns into objects, and a design pattern
like MVC can separate content from presentation and data-processing (model) from content. Service-oriented design
can separate concerns into services. Procedural programming languages such as C and Pascal can separate concerns
into procedures. Aspect-oriented programming languages can separate concerns into aspects and objects.
Separation of concerns is an important design principle in many other areas as well, such as urban planning,
architecture and information design. The goal is to design systems so that functions can be optimized independently
of other functions, so that failure of one function does not cause other functions to fail, and in general to make it
easier to understand, design and manage complex interdependent systems. Common examples include using
corridors to connect rooms rather than having rooms open directly into each other, and keeping the stove on one
circuit and the lights on another.
Origin
The term separation of concerns was probably coined by Edsger W. Dijkstra in his 1974 paper "On the role of scientific thought" [1].
Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is
willing to study in depth an aspect of one's subject matter in isolation for the sake of its own
consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know
that a program must be correct and we can study it from that viewpoint only; we also know that it should
be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask
ourselves whether, and if so: why, the program is desirable. But nothing is gained --on the contrary!-- by
tackling these various aspects simultaneously. It is what I sometimes have called 'the separation of
concerns', which, even if not perfectly possible, is yet the only available technique for effective
ordering of one's thoughts, that I know of. This is what I mean by "focusing one's attention upon some
aspect": it does not mean ignoring the other aspects, it is just doing justice to the fact that from this
aspect's point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.
Fifteen years later, it was evident the term separation of concerns was becoming an accepted idea. In 1989, Chris Reade wrote a book titled Elements of Functional Programming [2] that describes separation of concerns:
The programmer is having to do several things at the same time, namely,
1. describe what is to be computed;
2. organise the computation sequencing into small steps;
3. organise memory management during the computation.
Reade continues to say,
Ideally, the programmer should be able to concentrate on the first of the three tasks (describing what is to be
computed) without being distracted by the other two, more administrative, tasks. Clearly, administration is
important but by separating it from the main task we are likely to get more reliable results and we can ease the
programming problem by automating much of the administration.
The separation of concerns has other advantages as well. For example, program proving becomes much more
feasible when details of sequencing and memory management are absent from the program. Furthermore,
descriptions of what is to be computed should be free of such detailed step-by-step descriptions of how to do it
if they are to be evaluated with different machine architectures. Sequences of small changes to a data object
held in a store may be an inappropriate description of how to compute something when a highly parallel
machine is being used with thousands of processors distributed throughout the machine and local rather than
global storage facilities.
Automating the administrative aspects means that the language implementor has to deal with them, but he/she
has far more opportunity to make use of very different computation mechanisms with different machine
architectures.
Examples
Separation of concerns is crucial to the design of the Internet. In the Internet Protocol Suite great efforts have been
made to separate concerns into well-defined layers. This allows protocol designers to focus on the concerns in one
layer, and ignore the other layers. The Application Layer protocol SMTP, for example, is concerned about all the
details of conducting an email session over a reliable transport service (usually TCP), but not the least concerned
about how the transport service makes that service reliable. Similarly, TCP is not concerned about the routing of data
packets, which is handled at the Internet Layer.
HyperText Markup Language (HTML) and cascading style sheets (CSS) are languages intended to separate style
from content. Where HTML elements define the abstract structure of a document, CSS directives are interpreted by
the web browser to render those elements in visual form. In practice, one must sometimes alter HTML in order to
obtain the desired result with CSS, in part because style and content are not completely orthogonalized by any
existing browser implementation of CSS, and in part because CSS does not allow one to remap the document tree.
This particular problem can be avoided by using XML instead of HTML and XSLT instead of CSS, since XSLT does allow remapping the XML tree in arbitrary ways.
Subject-oriented programming allows separate concerns to be addressed as separate software constructs, each on an
equal footing with the others. Each concern provides its own class-structure into which the objects in common are
organized, and contributes state and methods to the composite result where they cut across one another.
Correspondence rules describe how the classes and methods in the various concerns are related to each other at
points where they interact, allowing composite behavior for a method to be derived from several concerns.
Multi-dimensional Separation of Concerns allows the analysis and composition of concerns to be manipulated as a
multi-dimensional "matrix" in which each concern provides a dimension in which different points of choice are
enumerated, with the cells of the matrix occupied by the appropriate software artifacts.
Aspect-oriented programming allows cross-cutting concerns to be addressed as secondary concerns. For example,
most programs require some form of security and logging. Security and logging are often secondary concerns,
whereas the primary concern is often on accomplishing business goals.
Most project organization tasks are seen as secondary tasks. For example, build automation is an approach to
automating the process of compiling source code into binary code. The primary goals in build automation are
reducing the risk of human error and saving time.
Separation of concerns
See also
Abstraction principle (programming)
Aspect-oriented software development
Concern (computer science)
Core concern
Cross-cutting concern
Holism
Modular design
Modular programming
Separation of presentation and content
Coupling (computer science)
External references
The Art of Separation of Concerns [3]
Multi-Dimensional Separation of Concerns [4]
TAOSAD [5]
Tutorial and Workshop on Aspect-Oriented Programming and Separation of Concerns [6]
References
[1] Dijkstra, Edsger W. (1982). "On the role of scientific thought" (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EWD447.html). In Dijkstra, Edsger W., Selected Writings on Computing: A Personal Perspective. New York, NY, USA: Springer-Verlag New York, Inc. pp. 60-66. ISBN 0-387-90652-5.
[2] Reade, Chris (1989). Elements of Functional Programming. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc. 600 pages. ISBN 0-201-12915-9.
[3] http://www.aspiringcraftsman.com/2008/01/art-of-separation-of-concerns/
[4] http://www.research.ibm.com/hyperspace/
[5] http://trese.cs.utwente.nl/taosad/separation_of_concerns.htm
[6] http://www.comp.lancs.ac.uk/computing/users/marash/aopws2001/
Dependency injection
Dependency injection (DI) in object-oriented computer programming is a design pattern with a core principle of
separating behavior from dependency resolution. In other words: a technique for decoupling highly dependent
software components.
Within a software application, a consumer component that depends on another service component is highly
coupled if the consumer must create an instance of the service component itself, since all the details of the
creation of the service then need to be explicit in the consumer component. Dependency injection works by adding an
external dependency to the consumer component; that is, by externally injecting a reference to the service component
into the consumer component.
Dependency injection is a specific form of inversion of control where the concern being inverted is the process of
obtaining the needed dependency. The term was first coined by Martin Fowler to describe the mechanism more
clearly.[1]
Basics
Without the concept of dependency injection, a consumer component that needs a particular service in order to
accomplish a certain task depends not only on the interface of the service but also on the details of a particular
implementation of the service. The consumer component would be responsible for handling the life-cycle of that service:
creating an instance, opening and closing streams, disposing of unneeded objects, and so on.
Using the concept of dependency injection, however, the life-cycle of a service is handled by a dependency provider
rather than the consumer. The dependency provider is a third component that links the consumer component and the
service component. The consumer would thus only need a reference to an implementation of the service that it
needed in order to accomplish the necessary task.
Such a pattern involves at least three elements: a dependent consumer, its service dependencies and an injector
(sometimes referred to as a provider or container). The dependent is a consumer component that needs to
accomplish a task in a computer program. In order to do so, it needs the help of various services (the dependencies)
that execute certain sub-tasks. The provider is the component that is able to compose the dependent and its
dependencies so that they are ready to be used, while also managing these objects' life-cycles. The provider may be
implemented, for example, as a service locator, an abstract factory, a factory method or a more complex abstraction
such as a framework.
The following is an example. A car (the consumer) depends upon an engine (the dependency) in order to move. The
car's engine is made by an automaker (the dependency provider). The car does not know how to install an engine into
itself, but it needs an engine in order to move. The automaker installs an engine into the car and the car utilizes the
engine to move.
When the concept of dependency injection is used, it decouples high-level modules from low-level services. The
result is called the dependency inversion principle.
Code illustration using Java
Using the car/engine example above mentioned, the following Java examples show how coupled dependencies
(manually-injected dependencies) and framework-injected dependencies are typically staged.
public interface Engine {
    public float getEngineRPM();
    public void setFuelConsumptionRate(float flowInGallonsPerMinute);
}

public interface Car {
    public float getSpeedInMPH();
    public void setPedalPressure(float pedalPressureInPounds);
}
Highly coupled dependency
The following shows a common arrangement with no dependency injection applied:
public class DefaultEngineImpl implements Engine {
    private float engineRPM = 0;

    public float getEngineRPM() {
        return engineRPM;
    }

    public void setFuelConsumptionRate(float flowInGallonsPerMinute) {
        engineRPM = ...;
    }
}
public class DefaultCarImpl implements Car {
    private Engine engine = new DefaultEngineImpl();

    public float getSpeedInMPH() {
        return engine.getEngineRPM() * ...;
    }

    public void setPedalPressure(float pedalPressureInPounds) {
        engine.setFuelConsumptionRate(...);
    }
}
public class MyApplication {
    public static void main(String[] args) {
        Car car = new DefaultCarImpl();
        car.setPedalPressure(5);
        float speed = car.getSpeedInMPH();
    }
}
In the above example, the Car class creates an instance of an Engine implementation in order to perform operations
on the car. Hence, it is considered highly coupled because it couples a car directly with a particular engine
implementation.
In cases where the DefaultEngineImpl dependency is managed outside the scope of the Car class, the Car class
should not instantiate the DefaultEngineImpl dependency itself. Instead, that dependency is injected externally.
Manually-injected dependency
Refactoring the above example to use manual injection:
public class DefaultCarImpl implements Car {
    private Engine engine;

    public DefaultCarImpl(Engine engineImpl) {
        engine = engineImpl;
    }

    public float getSpeedInMPH() {
        return engine.getEngineRPM() * ...;
    }

    public void setPedalPressure(float pedalPressureInPounds) {
        engine.setFuelConsumptionRate(...);
    }
}

public class CarFactory {
    public static Car buildCar() {
        return new DefaultCarImpl(new DefaultEngineImpl());
    }
}

public class MyApplication {
    public static void main(String[] args) {
        Car car = CarFactory.buildCar();
        car.setPedalPressure(5);
        float speed = car.getSpeedInMPH();
    }
}
In the example above, the CarFactory class assembles a car and an engine together by injecting a particular engine
implementation into a car. This moves the dependency management from the Car class into the CarFactory class. As
a consequence, if the Car needed to be assembled with a different Engine implementation, the Car code would not be
changed.
In a more realistic software application, this may happen if a new version of a base application is constructed with a
different service implementation. Using factories, only the service code and the Factory code would need to be
modified, but not the code of the multiple users of the service.
However, this still may not be enough abstraction for some applications, since in a realistic application there would
be multiple Factory classes to create and update.
Framework-managed dependency injection
There are several frameworks available that automate dependency management by delegating it to a container,
which is typically configured with XML or metadata definitions. Refactoring the
above example to use an external XML-definition framework:
<service-point id="CarBuilderService">
    <invoke-factory>
        <construct class="Car">
            <service>DefaultCarImpl</service>
            <service>DefaultEngineImpl</service>
        </construct>
    </invoke-factory>
</service-point>

/** Implementation not shown **/
public class MyApplication {
    public static void main(String[] args) {
        Service service = (Service) DependencyManager.get("CarBuilderService");
        Car car = (Car) service.getService(Car.class);
        car.setPedalPressure(5);
        float speed = car.getSpeedInMPH();
    }
}
In the above example, a dependency injection service is used to retrieve a CarBuilderService service. When a Car is
requested, the service returns an appropriate implementation for both the car and its engine.
As there are many ways to implement dependency injection, only a small subset of examples is shown here.
Dependencies can be registered, bound, located, externally injected, and so on, by many different means; hence,
moving dependency management from one module to another can be accomplished in many ways. However, there
should be a definite reason for moving a dependency away from the object that needs it, because doing so can
complicate the code hierarchy to such an extent that its usage appears "magical". For example, suppose a Web
container is initialized with an association between two dependencies, and a user of one of those
dependencies is unaware of the association. Such a user cannot detect the linkage between the dependencies
and may therefore cause serious problems by using one of them.
Benefits and drawbacks
One benefit of using the dependency injection approach is the reduction of boilerplate code in the application objects,
since all work to initialize or set up dependencies is handled by a provider component.[2]
Another benefit is that it offers configuration flexibility because alternative implementations of a given service can
be used without recompiling code. This is useful in unit testing because it is easy to inject a fake implementation of a
service into the object being tested by changing the configuration file.
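This testing benefit can be sketched in Java. All class names below (FakeEngine, TestableCar, the multipliers) are illustrative inventions, not part of any real framework; the hand-written fake stands in for a real engine implementation, and the test injects it through the constructor:

```java
// Interface repeated here so the sketch is self-contained.
interface Engine {
    float getEngineRPM();
    void setFuelConsumptionRate(float flowInGallonsPerMinute);
}

// A hand-written fake: returns a fixed RPM and records the last call,
// so a unit test needs no real engine at all.
class FakeEngine implements Engine {
    float lastFlowRate = -1;
    public float getEngineRPM() { return 3000; }
    public void setFuelConsumptionRate(float flow) { lastFlowRate = flow; }
}

// A constructor-injected car, in the style of the refactored example above.
class TestableCar {
    private final Engine engine;
    TestableCar(Engine engine) { this.engine = engine; }
    float getSpeedInMPH() { return engine.getEngineRPM() / 100; }
    void setPedalPressure(float pounds) { engine.setFuelConsumptionRate(pounds * 0.5f); }
}

public class CarTest {
    public static void main(String[] args) {
        FakeEngine fake = new FakeEngine();
        TestableCar car = new TestableCar(fake);  // inject the fake
        car.setPedalPressure(10);
        System.out.println(car.getSpeedInMPH());  // 30.0
        System.out.println(fake.lastFlowRate);    // 5.0
    }
}
```

Because the car never names a concrete engine class, the test verifies the car's behaviour entirely through the fake, with no configuration file or framework required.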
One drawback is that excessive or inappropriate use of dependency injection can make applications more
complicated, harder to understand and more difficult to modify. Code that uses dependency injection can seem
magical to some developers, since instantiation and initialization of objects is handled completely separately from
the code that uses it. This separation can also result in problems that are hard to diagnose. Additionally, some
dependency injection frameworks maintain verbose configuration files, requiring that a developer understand the
configuration as well as the code in order to change it.
Another drawback is that some IDEs might not be able to accurately analyze or refactor code when the configuration is
"invisible" to them. Some IDEs mitigate this problem by providing explicit support for various frameworks.
Additionally, some frameworks provide configuration using the programming language itself, allowing refactoring
directly. Other frameworks, such as the Grok web framework, introspect the code and use convention over
configuration as an alternative form of deducing configuration information. For example, if a Model and View class
were in the same module, then an instance of the View will be created with the appropriate Model instance passed
into the constructor.
Criticisms
A criticism of dependency injection is that it is simply a re-branding of existing object-oriented design concepts. The
examples typically cited (including the one above) simply show how to fix bad code, not a new programming
paradigm. Offering constructors and/or setter methods that take interfaces, relieving the implementing class from
having to choose an implementation, is an idea that was rooted in object-oriented programming long before Martin
Fowler's article or the creation of any of the recent frameworks that champion it.
Types
Fowler identifies three ways in which an object can get a reference to an external module, according to the pattern
used to provide the dependency:[3]
Type 1 or interface injection, in which the exported module provides an interface that its users must implement in
order to get the dependencies at runtime.
Type 2 or setter injection, in which the dependent module exposes a setter method that the framework uses to
inject the dependency.
Type 3 or constructor injection, in which the dependencies are provided through the class constructor.
It is possible for other frameworks to have other types of injection, beyond those presented above.[4]
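As an illustration of Type 2, the following is a minimal sketch of setter injection. The MessageService and Notifier names are invented for the example, and the injector's role, normally played by a framework, is performed by hand in main:

```java
interface MessageService {
    String send(String text);
}

class EmailService implements MessageService {
    public String send(String text) { return "email: " + text; }
}

// Type 2 (setter) injection: the dependent class exposes a setter
// through which an injector supplies the dependency at runtime.
class Notifier {
    private MessageService service;

    // the injection point
    public void setMessageService(MessageService service) {
        this.service = service;
    }

    public String dispatch(String text) {
        return service.send(text);
    }
}

public class SetterInjectionDemo {
    public static void main(String[] args) {
        Notifier notifier = new Notifier();
        notifier.setMessageService(new EmailService()); // injector's role, done by hand here
        System.out.println(notifier.dispatch("hello")); // email: hello
    }
}
```

Unlike constructor injection, the dependency can be replaced after construction, at the cost of the object being temporarily incomplete before the setter is called.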
See also
Plug-in (computing)
Strategy pattern
Architecture description language
Further reading
A beginners guide to Dependency Injection [5]
Simplifying Dependency Injection [6], article from April 14th, 2010
What is Dependency Injection? [7], an alternative explanation, by Jakob Jenkov
Dependency Injection & Testable Objects: Designing loosely coupled and testable objects [8], by Jeremy Weiskotten; Dr. Dobb's Journal, May 2006
Design Patterns: Dependency Injection [9], MSDN Magazine, September 2005
Writing More Testable Code with Dependency Injection [10], Developer.com, October 2006
Domain Specific Modeling (DSM) in IOC frameworks [11]
The Rich Engineering Heritage Behind Dependency Injection [12], by Andrew McVeigh; a detailed history of dependency injection
P of EAA: Plugin [13]
Dependency Injection in Ruby [14]
Dependency Injection in Scala [15]
References
[1] http://martinfowler.com/articles/injection.html#InversionOfControl
[2] http://jcp.org/en/jsr/detail?id=330
[3] http://www.martinfowler.com/articles/injection.html#FormsOfDependencyInjection
[4] http://yan.codehaus.org/Dependency+Injection+Types
[5] http://www.theserverside.com/tt/articles/article.tss?l=IOCBeginners
[6] http://blog.architexa.com/2010/04/simplifying-dependency-injection/
[7] http://tutorials.jenkov.com/dependency-injection/index.html
[8] http://www.ddj.com/185300375
[9] http://msdn.microsoft.com/msdnmag/issues/05/09/DesignPatterns/default.aspx
[10] http://www.developer.com/net/net/article.php/3636501
[11] http://www.pocomatic.com/docs/whitepapers/dsm
[12] http://www.javalobby.org/articles/di-heritage/
[13] http://martinfowler.com/eaaCatalog/plugin.html
[14] http://onestepback.org/index.cgi/Tech/Ruby/DependencyInjectionInRuby.rdoc
[15] http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di.html
References
Dependency inversion principle
In object-oriented programming, the dependency inversion principle refers to a specific form of decoupling in which
conventional dependency relationships, established from high-level, policy-setting modules to low-level, dependency
modules, are inverted (i.e., reversed) for the purpose of rendering high-level modules independent of the low-level
module implementation details. The principle states:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.
Description
In conventional application architecture, lower-level components are designed to be consumed by higher-level
components which enable increasingly complex systems to be built. In this composition, higher-level components
depend directly upon lower-level components to achieve some task. This dependency upon lower-level components
limits the reuse opportunities of the higher-level components.
The goal of the dependency inversion principle is to decouple high-level components from low-level components
such that reuse with different low-level component implementations becomes possible. This is facilitated by the
separation of high-level components and low-level components into separate packages/libraries, where interfaces
defining the behavior/services required by the high-level component are owned by, and exist within the high-level
component's package. The implementation of the high-level component's interface by the low level component
requires that the low-level component package depend upon the high-level component for compilation, thus
inverting the conventional dependency relationship. Various patterns such as Plugin, Service Locator, or
Dependency Injection are then employed to facilitate the run-time provisioning of the chosen low-level component
implementation to the high-level component.
Applying the dependency inversion principle can also be seen as applying the Adapter pattern, i.e. the high-level
class defines its own adapter interface which is the abstraction that the high-level class depends on. The adaptee
implementation also depends on the adapter interface abstraction (of course, since it implements its interface) while
it can be implemented by using code from within its own low-level module. The high-level has no dependency to the
low-level module since it only uses the low-level indirectly through the adapter interface by invoking polymorphic
methods to the interface which are implemented by the adaptee and its low-level module.
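This inversion of ownership can be sketched as follows. All names here are illustrative: the high-level module owns both its policy class and the ReportStorage abstraction it needs, while the low-level implementation depends on that abstraction rather than the other way around:

```java
// High-level module: owns the policy code AND the abstraction it requires.
// In a real project these would live in the high-level package/library.
interface ReportStorage {               // abstraction defined by the high level
    void save(String report);
}

class ReportGenerator {                 // high-level, policy-setting code
    private final ReportStorage storage;
    ReportGenerator(ReportStorage storage) { this.storage = storage; }
    void run() { storage.save("quarterly report"); }
}

// Low-level module: implements the high-level abstraction, inverting
// the conventional direction of the compile-time dependency.
class InMemoryStorage implements ReportStorage {
    final java.util.List<String> saved = new java.util.ArrayList<>();
    public void save(String report) { saved.add(report); }
}

public class DipDemo {
    public static void main(String[] args) {
        InMemoryStorage storage = new InMemoryStorage();
        // Run-time provisioning of the low-level component, here via
        // plain constructor injection.
        new ReportGenerator(storage).run();
        System.out.println(storage.saved);  // [quarterly report]
    }
}
```

ReportGenerator compiles without any reference to InMemoryStorage, so a database-backed or file-backed implementation could be substituted without touching the high-level module.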
History
The dependency inversion principle was postulated by Robert C. Martin and described in several publications,
including the paper Object Oriented Design Quality Metrics: an analysis of dependencies [1], an article appearing in
the C++ Report in May 1996 entitled The Dependency Inversion Principle [2], and the books Agile Software
Development, Principles, Patterns, and Practices and Agile Principles, Patterns, and Practices in C#.
See also
SOLID
Inversion of Control
Interface
Dependency Injection
Service locator pattern
Plugin
Adapter pattern
External links
Object Oriented Design Quality Metrics: an analysis of dependencies, Robert C. Martin, C++ Report, Sept/Oct 1995 [3]
The Dependency Inversion Principle, Robert C. Martin, C++ Report, May 1996 [4]
Examining the Dependency Inversion Principle, Derek Greer [5]
References
[1] Object Oriented Design Quality Metrics: an analysis of dependencies, Robert C. Martin, C++ Report, Sept/Oct 1995 (http://www.objectmentor.com/resources/articles/oodmetrc.pdf)
[2] The Dependency Inversion Principle, Robert C. Martin, C++ Report, May 1996 (http://www.objectmentor.com/publications/dip.pdf)
[3] http://www.objectmentor.com/resources/articles/oodmetrc.pdf
[4] http://www.objectmentor.com/publications/dip.pdf
[5] http://www.ctrl-shift-b.com/2008/12/examining-dependency-inversion.html
Assertion (computing)
In computer programming, an assertion is a predicate (for example a true-false statement) placed in a program to
indicate that the developer thinks that the predicate is always true at that place.
For example, the following code contains two assertions:
x := 5;
{x > 0}
x := x + 1
{x > 1}
The assertions are x > 0 and x > 1, and they are indeed true at the indicated points during execution.
Programmers can use assertions to help specify programs and to reason about program correctness. For example, a
precondition - an assertion placed at the beginning of a section of code - determines the set of states under which
the programmer expects the code to execute. A postcondition - placed at the end - describes the expected state at
the end of execution.
The example above uses the notation for including assertions used by C.A.R. Hoare in his 1969 paper.[1] That
notation cannot be used in existing mainstream programming languages. However, programmers can include
unchecked assertions using the comment feature of their programming language. For example, in C:
x = 5;
// {x > 0}
x = x + 1;
// {x > 1}
The braces included in the comment help distinguish this use of a comment from other uses.
Several modern programming languages include checked assertions - statements that are checked at runtime or
sometimes statically. If an assertion evaluates to false at run-time, an assertion failure results, which typically causes
execution to abort. This draws attention to the location at which the logical inconsistency is detected and can be
preferable to the behaviour that would otherwise result.
The use of assertions helps the programmer design, develop, and reason about a program.
Usage
In languages such as Eiffel, assertions form part of the design process, and in others, such as C and Java, they are
used only to check assumptions at runtime. In both cases, they can be checked for validity at runtime but can usually
also be suppressed.
Assertions in design by contract
Assertions can function as a form of documentation: they can describe the state the code expects to find before it
runs (its preconditions), and the state the code expects to result in when it is finished running (postconditions); they
can also specify invariants of a class. Eiffel integrates such assertions into the language and automatically extracts
them to document the class. This forms an important part of the method of design by contract.
This approach is also useful in languages that do not explicitly support it: the advantage of using assertion statements
rather than assertions in comments is that the program can check the assertions every time it runs; if an
assertion no longer holds, an error can be reported. This prevents the code from getting out of sync with the
assertions (a problem that can occur with comments).
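In a language such as Java that lacks Eiffel's built-in contracts, the same intent can be approximated with assert statements at a method's boundaries. The Account class below is an illustrative sketch, not a standard idiom:

```java
public class Account {
    private long balance = 0;

    // Precondition: amount is positive.
    // Postcondition: balance has increased by exactly amount.
    // Class invariant (checked after every operation): balance is never negative.
    void deposit(long amount) {
        assert amount > 0 : "precondition: amount must be positive";
        long oldBalance = balance;
        balance += amount;
        assert balance == oldBalance + amount : "postcondition violated";
        assert balance >= 0 : "class invariant violated";
    }

    long getBalance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        System.out.println(a.getBalance()); // 100
    }
}
```

Unlike Eiffel, these checks are not inherited by subclasses or extracted into documentation automatically, and they run only when assertions are enabled (java -ea), but they keep the contract in executable form next to the code.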
Assertions for run-time checking
An assertion may be used to verify that an assumption made by the programmer during the implementation of the
program remains valid when the program is executed. For example, consider the following Java code:
int total = countNumberOfUsers();

if (total % 2 == 0) {
    // total is even
} else {
    // total is odd
    assert (total % 2 == 1);
}
In Java, % is the remainder operator (or modulus) - if its first operand is negative, the result can also be negative.
Here, the programmer has assumed that total is non-negative, so that the remainder of a division with 2 will always
be 0 or 1. The assertion makes this assumption explicit - if countNumberOfUsers does return a negative value, the
program may have a bug.
A major advantage of this technique is that when an error does occur it is detected immediately and directly, rather
than later through its often obscure side-effects. Since an assertion failure usually reports the code location, one can
often pin-point the error without further debugging.
Assertions are also sometimes placed at points the execution is not supposed to reach. For example, assertions could
be placed at the default clause of the switch statement in languages such as C, C++, and Java. Any case which the
programmer does not handle intentionally will raise an error and the program will abort rather than silently
continuing in an erroneous state.
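For example, the supposedly unreachable default clause can be guarded like this in Java (the enum and its values are invented for the illustration; throwing AssertionError directly is used here so the guard also satisfies the compiler's definite-return analysis):

```java
enum Direction { NORTH, SOUTH }

public class SwitchGuard {
    // Returns a heading in degrees; the default clause should be unreachable
    // as long as every enum constant is handled above it.
    static int heading(Direction d) {
        switch (d) {
            case NORTH: return 0;
            case SOUTH: return 180;
            default:
                // Reaching here means a new constant was added to Direction
                // but not handled here: fail loudly instead of continuing.
                throw new AssertionError("unhandled direction: " + d);
        }
    }

    public static void main(String[] args) {
        System.out.println(heading(Direction.NORTH)); // 0
        System.out.println(heading(Direction.SOUTH)); // 180
    }
}
```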
In Java, assertions have been a part of the language since version 1.4. Assertion failures result in raising an
AssertionError when the program is run with the appropriate flags, without which the assert statements are ignored.
In C, they are added on by the standard header assert.h defining assert (assertion) as a macro that signals an error in
the case of failure, usually terminating the program. In standard C++ the header cassert is required instead. However,
some C++ libraries still have the assert.h available.
The danger of assertions is that they may cause side effects either by changing memory data or by changing thread
timing. Assertions should be implemented carefully so they cause no side effects on program code.
Assertion constructs in a language allow for easy test-driven development (TDD) without the use of a third-party
library.
Assertions during the development cycle
During the development cycle, the programmer will typically run the program with assertions enabled. When an
assertion failure occurs, the programmer is immediately notified of the problem. Many assertion implementations
will also halt the program's execution - this is useful, since if the program continued to run after an assertion
violation occurred, it might corrupt its state and make the cause of the problem more difficult to locate. Using the
information provided by the assertion failure (such as the location of the failure and perhaps a stack trace, or even the
full program state if the environment supports core dumps or if the program is running in a debugger), the
programmer can usually fix the problem. Thus assertions provide a very powerful tool in debugging.
Static assertions
Assertions that are checked at compile time are called static assertions. They should always be well-commented.
Static assertions are particularly useful in compile-time template metaprogramming, but can also be used in low-level
languages such as C by introducing illegal code if (and only if) the assertion fails. For example, in C a static assertion can
be implemented like this:
#define COMPILE_TIME_ASSERT(pred) switch(0){case 0:case pred:;}

COMPILE_TIME_ASSERT( BOOLEAN CONDITION );
If the (BOOLEAN CONDITION) part evaluates to false then the above code will not compile because the compiler
will not allow two case labels with the same constant. The boolean expression must be a compile-time constant
value, for example (sizeof(int)==4) would be a valid expression in that context.
Another popular, but inferior,[2] way of implementing assertions in C is:
static char const static_assertion[ (BOOLEAN CONDITION) ? 1 : -1 ] = {'!'};
If the (BOOLEAN CONDITION) part evaluates to false then the above code will not compile because arrays may
not have a negative length. If in fact the compiler allows a negative length then the initialization byte (the '!' part)
should cause even such over-lenient compilers to complain. The boolean expression must be a compile-time constant
value, for example (sizeof(int)==4) would be a valid expression in that context.
Disabling assertions
Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are
often enabled during development and disabled during final testing and on release to the customer. Not checking
assertions avoids the cost of evaluating the assertions while, assuming the assertions are free of side effects, still
producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can
mean that a program that would have aborted will continue to run. This is sometimes preferable.
Some languages, including C/C++, completely remove assertions at compile time using the preprocessor. Java
requires an option to be passed to the run-time engine in order to enable assertions. Absent the option, assertions are
bypassed, but they always remain in the code unless optimised away by a JIT compiler at run-time or excluded by an
if(false) condition at compile time, thus they need not have a run-time space or time cost in Java either.
Programmers can always build checks into their code that are always active by bypassing or manipulating the
language's normal assertion-checking mechanisms.
Comparison with error handling
It is worth distinguishing assertions from routine error-handling. Assertions should be used to document logically
impossible situations and discover programming errors - if the impossible occurs, then something fundamental is
clearly wrong. This is distinct from error handling: most error conditions are possible, although some may be
extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is unwise:
assertions do not allow for recovery from errors; an assertion failure will normally halt the program's execution
abruptly. Assertions also do not display a user-friendly error message.
Consider the following example of using an assertion to handle an error:
int *ptr = malloc(sizeof(int) * 10);
assert(ptr);
// use ptr
...
Here, the programmer is aware that malloc will return a NULL pointer if memory is not allocated. This is possible:
the operating system does not guarantee that every call to malloc will succeed. If an out of memory error occurs the
program will immediately abort. Without the assertion, the program would continue running until ptr was
dereferenced, and possibly longer, depending on the specific hardware being used. So long as assertions are not
disabled, an immediate exit is assured. But if a graceful failure is desired, the program has to handle the failure. For
example, a server may have multiple clients, or may hold resources that will not be released cleanly, or it may have
uncommitted changes to write to a datastore. In such cases it is better to fail a single transaction than to abort
abruptly.
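The same contrast can be sketched in Java: a missing file is a possible, recoverable condition, so it is handled and the program continues with a usable value rather than asserting that the read "cannot" fail (readOrDefault is an invented helper name for the illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class GracefulFailure {
    // A missing or unreadable file is an expected error condition:
    // catch it and recover, instead of using an assertion.
    static String readOrDefault(Path path, String fallback) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // recover: fall back to a usable value and keep running
            return fallback;
        }
    }

    public static void main(String[] args) {
        // prints the fallback text when the file does not exist
        System.out.println(readOrDefault(Path.of("no-such-file.txt"), "default contents"));
    }
}
```

An assertion would be appropriate only for conditions the programmer believes impossible, such as the helper returning null, not for I/O that routinely fails.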
See also
Assertion definition language
Design by contract
Exception handling
Hoare logic
Static code analysis
Java Modeling Language
External links
The benefits of programming with assertions [3], by Philip Guo (Stanford University), 2008.
Java:
Programming With Assertions in Java [4]
Technical Article "Using Assertions" [5]
References
[1] C.A.R. Hoare, An axiomatic basis for computer programming (http://lambda-the-ultimate.org/node/1912), Communications of the ACM, 1969.
[2] Jon Jagger, Compile Time Assertions in C (http://www.jaggersoft.com/pubs/CVu11_3.html), 1999.
[3] http://www.stanford.edu/~pgbovine/programming-with-asserts.htm
[4] http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html
[5] http://java.sun.com/developer/JDCTechTips/2002/tt0409.html
Article Sources and Contributors
56
Article Sources and Contributors
Test automation Source: http://en.wikipedia.org/w/index.php?oldid=386749411 Contributors: 5nizza, 83nj1, ADobey, Abdull, Akr7577, AliveFreeHappy, Ameya barve, Ancheta Wis, Ankurj,
Anupam naik, Apparition11, Asashour, Ash, Bbryson, Benjamin Geiger, Bhagat.Abhijeet, Bigtwilkins, Caltas, Carioca, Checkshirt, Chrisbepost, CodeWonk, DARTH SIDIOUS 2, DRogers,
Dbelhumeur02, DivineAlpha, Dreftymac, Eaowens, EdwardMiller, Egivoni, ElfriedeDustin, Elipongo, Enoch the red, Excirial, Faris747, Ferpectionist, FlashSheridan, Flopsy Mopsy and
Cottonmouth, Florian Huber, Fumitol, Gaggarwal2000, Gherget, Gibs2001, Gmacgregor, Goutham, Grafen, Harobed, Hatch68, Helix84, Hesa, Heydaysoft, Hooperbloob, Hswiki, Hu12,
JASpencer, JamesBWatson, Johnuniq, Jpg, Kumarsameer, Kuru, Ldimaggi, M4gnum0n, MC10, MER-C, Marasmusine, Mark Kilby, Marudubshinki, Matthewedwards, Michael Bernstein,
Morrillonline, MrOllie, Nimowy, Notinasnaid, Octoferret, Ohnoitsjamie, OracleDBGuru, Pfhjvb0, ProfessionalTST, Qatutor, Qlabs impetus, Qtpautomation, Qwyrxian, R'n'B, RHaworth,
Radagast83, Radiant!, Radiostationary, Raghublr, Rich Farmbrough, RichardHoultz, Rickjpelleg, Rjwilmsi, Robertvan1, Robinson Weijman, Ryadav, Ryepie, SSmithNY, Sbono, Shijuraj,
Shlomif, Softwaretest1, Srideep TestPlant, Ssingaraju, SteveLoughran, Sundaramkumar, Swtechwr, Thv, Ttrevers, Tumaka, Tushar291081, Vadimka, Veledan, Versageek, Walter Grlitz,
Webbbbbbber, Winmacro, Wrp103, Yan Kuligin, ZachGT, Zorgon7, 238 anonymous edits
Test-driven development Source: http://en.wikipedia.org/w/index.php?oldid=386490917 Contributors: 1sraghavan, Achorny, AliveFreeHappy, Alksentrs, Anorthup, AnthonySteele,
Antonielly, Asgeirn, Astaines, Attilios, Autarch, AutumnSnow, Bcwhite, CFMWiki1, Calrfa Wn, Canterbury Tail, Chris Pickett, Closedmouth, Craig Stuntz, DHGarrette, Dally Horton,
David-Sarah Hopwood, Deuxpi, Dhdblues, Dougluce, Download, Downsize43, Dtmilano, Dlugosz, Ed Poor, Edaelon, Ehheh, Emurphy42, Enochlau, Eurleif, Excirial, Faught, Furrykef,
Gakrivas, Gary King, Geometry.steve, Gigi fire, Gishu Pillai, Gmcrews, Gogo Dodo, Hadal, Hagai Cibulski, Hariharan wiki, Heirpixel, Hzhbcl, JDBravo, JLaTondre, JacobProffitt, Jglynn43,
Jleedev, Jonb ee, Jonkpa, Jpalm 98, Jrvz, Kbdank71, Kellen`, KellyCoinGuy, Kevin Rector, Khalid hassani, Kristjan Wager, Krzyk2, Kvdveer, LeaveSleaves, Lenin1991, Lumberjake, Madduck,
Mark Renier, Martial75, Martinig, MaxSem, Mberteig, Mboverload, Mckoss, Mdd, MeUser42, Michig, Middayexpress, Mkarlesky, Mkksingha, Mnorbury, Mortense, Mosquitopsu, Mr2001,
MrOllie, Nigelj, Nohat, Notnoisy, Nuggetboy, Ojcit, Oligomous, On5deu, Parklandspanaway, Patrickdepinguin, Pengo, PhilipR, Phlip2005, Pinecar, PradeepArya1109, R. S. Shaw, Radak,
RickBeton, RoyOsherove, Rulesdoc, SAE1962, Sam Hocevar, Samwashburn3, San chako, Sanchom, SethTisue, SharShar, Shenme, Shyam 48, SimonP, St.General, Stemcd, SteveLoughran,
Sullivan.t, Sverdrup, Svick, Swasden, Szwejkc, TakuyaMurata, Tedickey, Themacboy, Thumperward, Tobias Bergemann, Topping, Trum123, Underpants, V6Zi34, Virgiltrasca, WLU, Walter Görlitz,
Waratah, Wikid77, Xagronaut, Onekcanp Kpannyk, 371 anonymous edits
Behavior Driven Development Source: http://en.wikipedia.org/w/index.php?oldid=386140439 Contributors: 16x9, Adamleggett, Alexott, Ashsearle, Aslak.hellesoy, BenAveling, Brolund,
CLW, Choas, Colonies Chris, DanNorth, Davemarshall04, David Monaghan, Dcazzulino, Diego Moya, Doekman, Dols, Eleusis, Erkan Yilmaz, Espo, FGeorges, Featheredwings, Ghettoblaster,
Giardante, Greenrd, GregorB, Haakon, Hugobr, Humanmatters, Huttarl, Ianspence, Ignu, JLaTondre, Jania902, Jbandi, Johnmarkos, Johnwyles, Jutame, Kelly Martin, KellyCoinGuy, Kevin
Rector, Lenin1991, Lennarth, MaxSem, Mhennemeyer, Mortense, Ncrause, Oleganza, Paulmarrington, Philippe, Rettetast, Rick Jelliffe, Rjray, Rjwilmsi, Rodrigez, Ryanmcilmoyl, Secret9, Some
standardized rigour, SteveDonie, Steveonjava, Timhaughton, Tomjadams, TonyBallu, Vinunlr, Wesley, Weswilliams, Where next Columbus?, Yamaguchi先生, Yukoba, 135 anonymous edits
Acceptance test Source: http://en.wikipedia.org/w/index.php?oldid=92170638 Contributors: Alphajuliet, Amire80, Apparition11, Ascnder, Bournejc, Caesura, Caltas, CapitalR, Carse, Chris
Pickett, Claudio figueiredo, CloudNine, Conversion script, DRogers, DVD R W, Dahcalan, Davidbatet, Dhollm, Divyadeepsharma, Djmckee1, Eloquence, Emilybache, Enochlau, GTBacchus,
GraemeL, Granburguesa, Gwernol, Halovivek, Hooperbloob, Hu12, Hutcher, Hyad, Jamestochter, Jemtreadwell, Jgladding, JimJavascript, Jmarranz, Jpp, Kaitanen, Ksnow, Liftoph, MartinDK,
MeijdenB, Meise, Michael Hardy, Midnightcomm, Mifter, Mjemmeson, Mpilaeten, Muhandes, Myhister, Newbie59, Normxxx, Old Moonraker, Olson.sr, Panzi, Pearle, PeterBrooks, Pill,
Pinecar, Qem, RHaworth, RJFerret, Riki, Rodasmith, Samuel Tan, Shirulashem, Timmy12, Timo Honkasalo, Toddst1, Viridae, Walter Görlitz, Whaa?, Wikipe-tan, William Avery, Winterst, 126
anonymous edits
Integration testing Source: http://en.wikipedia.org/w/index.php?oldid=384531896 Contributors: 2002:82ec:b30a:badf:203:baff:fe81:7565, Abdull, Addshore, Amire80, Arunka, Arzach,
Cbenedetto, Cellovergara, ChristianEdwardGruber, DRogers, DataSurfer, Ehabmehedi, Faradayplank, Furrykef, Gggh, Gilliam, GreatWhiteNortherner, Hooperbloob, J.delanoy, Jewbacca, Jiang,
Jtowler, Kmerenkov, Lordfaust, Mheusser, Michael Rawdon, Michig, Myhister, Notinasnaid, Onebyone, Paul August, Pegship, Pinecar, Qaddosh, Ravedave, Ravindrat, SRCHFD, SkyWalker,
Solde, Spokeninsanskrit, Steven Zhang, Svick, TheRanger, Thv, Walter Görlitz, Wyldtwyst, Zhenqinli, 114 anonymous edits
Unit testing Source: http://en.wikipedia.org/w/index.php?oldid=384755615 Contributors: .digamma, Ahc, Ahoerstemeier, AliveFreeHappy, Allan McInnes, Allen Moore, Anderbubble, Andreas
Kaufmann, Andy Dingley, Anorthup, Ardonik, Asavoia, Attilios, Autarch, Bakersg13, Bdijkstra, BenFrantzDale, Brian Geppert, CanisRufus, Canterbury Tail, Chris Pickett,
ChristianEdwardGruber, ChuckEsterbrook, Clausen, Colonies Chris, Corvi, Craigwb, DRogers, DanMS, Derbeth, Discospinster, Dmulter, Earlypsychosis, Edaelon, Edward Z. Yang, Eewild, El
T, Elilo, Evil saltine, Excirial, FlashSheridan, FrankTobia, Fredrik, Furrykef, GTBacchus, Goswamivijay, Guille.hoardings, Haakon, Hanacy, Hari Surendran, Hayne, Hfastedge, Hooperbloob,
Hsingh77, Hypersonic12, Ibbn, Influent1, J.delanoy, Jjamison, Joeggi, Jogloran, Jonhanson, Jpalm 98, Kamots, KaragouniS, Karl Dickman, Kku, Konman72, Kuru, Longhorn72, Looxix, Martin
Majlis, Martinig, MaxHund, MaxSem, Mcsee, Mheusser, Michig, MickeyWiki, Miker@sundialservices.com, Mortense, Mr. Disguise, MrOllie, Mtomczak, Nate Silva, Nbryant, Neilc,
Notinasnaid, Ohnoitsjamie, OmriSegal, Ottawa4ever, PGWG, Pablasso, Paling Alchemist, Pantosys, Paul August, Paulocheque, Pcb21, Pinecar, Pmerson, Radagast3, RainbowOfLight,
Ravialluru, Ravindrat, RenniePet, Richardkmiller, Rjwilmsi, Rogerborg, Rookkey, RoyOsherove, Ryans.ryu, S.K., S3000, SAE1962, Shyam 48, SimonTrew, Sketch051, Sligocki, Smalljim,
Solde, Sozin, Ssd, Sspiro, Stephenb, SteveLoughran, Stumps, Svick, Swtechwr, Sybersnake, TFriesen, Themillofkeytone, Thv, Timo Honkasalo, Tlroche, Tobias Bergemann, Toddst1, Tony
Morris, Tyler Oderkirk, Unittester123, User77764, VMS Mosaic, Veghead, Vishnava, Walter Görlitz, Winhunter, Wmahan, Zed toocool, 409 anonymous edits
Code refactoring Source: http://en.wikipedia.org/w/index.php?oldid=385875196 Contributors: 6birc, Aaronbrick, Ace Coder, Akumiszcza, Alansohn, Anderswiki, Andre Engels, Antonielly,
Appljax123, ArmixZ, Arnad, Atroche, Beland, BenFrantzDale, BenLiyanage, Benjaminevans82, Beno1000, Benoit.dechateauvieux, Bevo, Bjorn Elenfors, Bobblehead, Brockert, Brusselsshrek,
BullRangifer, CLAES, Callidior, Cander0000, Charles Merriam, Connelly, Conversion script, DMacks, Danh, Davidfstr, Dcouzin, Dishayloo, Dnas, Dreftymac, Dwheeler, Ed Poor, Edward,
Elonka, Ermey, Fanghong, Finlay McWalter, FlinkBaum, FlowRate, FlyHigh, FrankTobia, Frecklefoot, Fred Bradstadt, Fredrik, Furrykef, Gandalfgeek, H, HCJK, Hede2000, Hzhbcl, IanOsgood,
Ilya, InaTonchevaToncheva, Inquam, Intgr, J.delanoy, Jaeger48917, Jasonfrye, Jerryobject, JohnOwens, JonHarder, JonathonReinhart, Joyous!, JulesH, Jwoodger, Kewlito, Khalid hassani,
Korpo, Kyellan, Kylemew, LOL, Lenoxus, Ligulem, LinguistAtLarge, Luk, Marcinjeske, Mark Renier, Martinig, Masonb986, Maximaximax, Mdd, Michig, Mintleaf, Mipadi, Morrillonline,
MrJones, Mskeel, Mwanner, Neilc, Nitromaster101, Norm mit, OMouse, Olathe, Oliver, Oliver55, On5deu, P.taylor@dotcomsoftwaresolutions.co.uk, Paddyslacker, Paul August, Pdemb,
Peteforsyth, Peterl, Phoenix80, Polyparadigm, Project2501a, RJFJR, Ravinder.kadiyan, Raymondwinn, Regregex, Rjwilmsi, Rodasmith, Rugops, S.K., STarry, Samw, Seajay, Shawn wiki,
Sigmundpetersen, Spokeninsanskrit, Spoon!, Stephan Leclercq, Steven Zhang, Sverdrup, SymlynX, TakuyaMurata, Tarinth, Teohaik, That Guy, From That Show!, Thr3ddy, Tobias Bergemann,
Tobias Hoevekamp, Tommens, Tony Sidaway, Topaz, Tranzid, Voice of All, Warren, Wx8, Xompanthy, Xyb, YordanGeorgiev, Ziroby, 192 anonymous edits
Test case Source: http://en.wikipedia.org/w/index.php?oldid=385354940 Contributors: AliveFreeHappy, Allstarecho, Chris Pickett, ColBatGuano, Cst17, DarkBlueSeid, DarkFalls, Darth
Panda, Eastlaw, Flavioxavier, Freek Verkerk, Furrykef, Gothmog.es, Hooperbloob, Iggy402, Iondiode, Jtowler, Jwh335, Jwoodger, LeaveSleaves, Lenoxus, Magioladitis, Maniacs29, MaxHund,
Mdd, Merutak, Mr Adequate, MrOllie, Nibblus, Niri.M, Nmthompson, Pavel Zubkov, Peter7723, Pilaf, Pinecar, PrimeObjects, RJFJR, RainbowOfLight, RayAYang, Renu gautam,
Sardanaphalus, Sciurin, Sean D Martin, Shervinafshar, Suruena, System21, Thejesh.cg, Thorncrag, Thv, Tomaxer, Travelbird, Vikasbucha, Walter Görlitz, Wernight, Yennth, 157 anonymous
edits
xUnit Source: http://en.wikipedia.org/w/index.php?oldid=374074736 Contributors: Ahoerstemeier, Andreas Kaufmann, BurntSky, Caesura, Chris Pickett, Damian Yerrick, Dvib, FlashSheridan,
Furrykef, Green caterpillar, Jpalm 98, Kenyon, Khatru2, Kku, Kleb, Lasombra, LilHelpa, MBisanz, Mat i, MaxSem, MrOllie, Nate Silva, Ori Peleg, Pagrashtak, Patrikj, Pengo, PhilippeAntras,
Pinecar, Qef, RedWolf, Rhphillips, RudaMoura, Schwern, SebastianBergmann, Simonwacker, Slakr, Srittau, Tlroche, Uzume, 62 anonymous edits
Test stubs Source: http://en.wikipedia.org/w/index.php?oldid=357487595 Contributors: Andreas Kaufmann, Chiefhuggybear, Christianvinter, Meridith K, Thisarticleisastub, Tomrbj, 1
anonymous edits
Mock object Source: http://en.wikipedia.org/w/index.php?oldid=384569564 Contributors: 16x9, A. B., ABF, AN(Ger), Acather96, Allanlewis, Allen Moore, Andreas Kaufmann, Andy Dingley,
Antonielly, Ataru, Autarch, Babomb, BenWilliamson, Blueboy96, Charles Matthews, Ciphers, ClinkingDog, CodeCaster, Colcas, Cybercobra, DHGarrette, Dcamp314, Derbeth, Dhoerl, Edward
Z. Yang, Elilo, Ellissound, Eric Le Bigot, Ghettoblaster, Hooperbloob, IceManBrazil, JamesShore, Kc8tpz, Khalid hassani, Kku, Le-sens-commun, Lmajano, Lotje, Mange01, Marchaos,
Martinig, MaxSem, Mkarlesky, Nigelj, Nrabinowitz, Paul Foxworthy, Pecaperopeli, Philip Trueman, Pinecar, R'n'B, Redeagle688, Rodrigez, RoyOsherove, Rstandefer, Simonwacker,
SkyWalker, SlubGlub, Spurrymoses, Stephan Leeds, SteveLoughran, TEB728, Thumperward, Tobias Bergemann, Tomrbj, Whitehawk julie, 131 anonymous edits
Separation of concerns Source: http://en.wikipedia.org/w/index.php?oldid=376875337 Contributors: Agl1, Andrew Eisenberg, Assentt, Badhmaprakash, BenFrantzDale, Blaisorblade, Bob
Badour, Bobblehead, Bunnyhop11, CSProfBill, CharlesC, Chris the speller, Colonies Chris, Dcoetzee, Derekgreer, Dreftymac, Edongliu, Eric119, GTBacchus, Gudeldar, Gurch, Gwalla, Hrvoje
Simic, Jacobolus, Jsnx, Karada, Karch, Korpo, MER-C, Macquigg, Maria C Mosak, Maurice Carbonaro, Mdd, Mgreenbe, Mh29255, Michael Hardy, Minghong, Mobius131186, Mountain,
Natkeeran, Nbarth, Nuplex, Paul August, Pcap, Pierino23, Pmerson, RekishiEJ, Remuel, RobinK, Se16teddy, Shawn wiki, Silsor, SixSix, Softtest123, StephenWeber, Steven Forth, Stimpy77,
Takaczapka, TedHusted, Tide rolls, Winston365, Woohookitty, 54 anonymous edits
Dependency injection Source: http://en.wikipedia.org/w/index.php?oldid=385903801 Contributors: 1ForTheMoney, Andreas Kaufmann, Andrew c, AndyGavin, Angus Lepper, Aparraga,
Avegas, Beland, Bender2112, Bernd vdB, Brian428, Btx40, CH3374H, Chadh, ChristianEdwardGruber, Chuck Adams, Clement.escoffier, Cybercobra, DEfusion, Daniel.Cardenas, Datoine,
Derekgreer, DexM, Diego Moya, Donsez, Doradus, DotNetGuy, Doublecompile, Ebyabe, Ehn, Ekameleon, Elpecek, Emrysk, Errandir, FatalError, FlashSheridan, Franl, Fredrik, Gregturn,
HighKing, Jadriman, Jelaplan, Jjdawson7, Jjenkov2, Joshuatbrown, Julesd, KenFehling, Keredson, KevinTeague, Kevinb9n, Khalid hassani, Kicolobo, Kjin101, Kjkolb, Kku, Kolbasz, Lastcraft,
LedgendGamer, Lumingz, Mathieu, Mathrick, MatisseEnzer, Mfurr, Mike Fikes, Mistercupcake, Mongbei, Mortense, MrOllie, MySchizoBuddy, Neilc, Nelson, Neurolysis, Nkohari, Nosbig,
Oneiros, PabloStraub, Pandrija, Peter lawrey, Peter.c.grant, PeterProvost, Piano non troppo, RL0919, Ramiromagalhaes, RedWolf, RickBeton, Rodbeck, Rpawson, Ru1fdo, Sae1962, SetaLyas,
Sgrundsoe, Shangri67, Sikon, Sjc, SpaceFlight89, The Wild Falcon, Torabli, TreyHarris, Wikidrone, Ye-thorn, Zeflasher, 209 anonymous edits
Dependency inversion principle Source: http://en.wikipedia.org/w/index.php?oldid=386129452 Contributors: Andreas Kaufmann, Barkeep, Blaisorblade, Bmhm, Bpfurtado, Brettright,
Davewho2, Derekgreer, Diego Moya, Dlugosz, Kjkolb, Ligulem, Lpsiphi, Oneiros, RitigalaJayasena, 10 anonymous edits
Assertion (computing) Source: http://en.wikipedia.org/w/index.php?oldid=386761772 Contributors: Abdull, Ahoerstemeier, Alansohn, Alexs, BenAveling, Beneluxboy, Bkkbrad, Bluemoose,
Btx40, DavidCary, DavidGries, Dekart, Doug Bell, Dysprosia, Echoray, Excirial, FatBastardInk, Fragglet, Furrykef, Fuzzie, GSlicer, Galaxiaad, Gang65, Gennaro Prota, Goplat, Greenrd,
Gspbeetle, Hooperbloob, JMBattista, Jerazol, Jgrahn, John Vandenberg, Jpbowen, Kku, Leibniz, LittleDan, MaxSem, Michael Devore, Mipadi, Mudx77, Neilc, NeonMerlin, Oliphaunt, Orderud,
PJTraill, Peu, Prgrmr@wrk, Raise exception, Ripper234, Runefrost, Ruud Koot, Sam Van Kooten, Sdorrance, Stephanebeguin, TakuyaMurata, Tobias Bergemann, Wapcaplet, Wikibob, Winterst,
Wlievens, Zigger, Zron, +-113, 63 anonymous edits
Image Sources, Licenses and Contributors
Image:Test-driven development.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Test-driven_development.PNG License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Excirial
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/