MTPDF3 - Software Construction and Testing

SYSTEM INTEGRATION

& ARCHITECTURE 2
MODULE 3
SOFTWARE CONSTRUCTION
AND TESTING
• Software construction in the context of software development
• How software is constructed
• Good qualities of software
• Test plans, test cases, and test execution
• Application of software quality based on internal and external factors
• Software that meets the standard
SUBTOPIC 1
SOFTWARE CONSTRUCTION
SOFTWARE CONSTRUCTION is a fundamental act of software engineering: the construction of working, meaningful software through a combination of coding, validation, and testing (unit testing) by a programmer.
SOFTWARE CONSTRUCTION
Construction Activities

Figure 3-1
Construction activities (detailed design, coding and debugging, unit testing, integration, and integration testing) are shown inside the gray circle; the surrounding activities are problem definition, requirements development, construction planning, software architecture, system testing, and corrective maintenance.
DETAILED TASKS INVOLVED IN CONSTRUCTION
• Verifying that the groundwork has been laid so that construction can proceed successfully
• Determining how your code will be tested
• Designing and writing classes and routines
• Creating and naming variables and named constants
• Selecting control structures and organizing blocks of statements
• Unit testing, integration testing, and debugging your own code
• Reviewing other team members' low-level designs and code and having them review yours
• Polishing code by carefully formatting and commenting it
• Integrating software components that were created separately
• Tuning code to make it smaller and faster
WHY IS SOFTWARE CONSTRUCTION IMPORTANT?
Some Reasons
• Construction is a large part of software development
• Construction is the central activity in software development
• With a focus on construction, the individual programmer's productivity can improve enormously
• Construction's product, the source code, is often the only accurate description of the software
• Construction is the only activity that's guaranteed to be done
SOFTWARE CONSTRUCTION STRATEGIES
• TOP-DOWN: high-level to low-level; user interface to detail logic
• BOTTOM-UP: the reverse of top-down
• MIDDLE-OUT: some of both

QUALITY AND CONSTRUCTION

GOAL
• The goal of software construction is to build a product that satisfies the quality requirements
• "Good enough software," not excellent software!
IT’S ALL ABOUT QUALITY
• How do you ensure that the software
• does what it should?
• does it in the correct way?
• is robust?
• is reliable?
• is easy to use?
• is easy to change?
• is easy to correct?
• is easy to test?
WHAT IS SOFTWARE QUALITY ?

Formal Definitions
• The totality of features and characteristics of a product or
service that bear on its ability to satisfy stated or implied
needs. (ISO 8402: 1986, 3.1)
• The degree to which a system, component, or process
meets specified requirements. (IEEE)
• A product that satisfies the stakeholders' needs (Compliant Product + Good Quality + Delivery Within Budget/Schedule)
SOFTWARE QUALITY
• Software has both external and internal quality characteristics. External characteristics are characteristics that a user of the software product is aware of, including those in the table below.
QUALITY IS A COLLECTION OF "…ILITIES"
reliability: the ability to operate error free
reusability: the ability to use parts of the software to solve other software problems
extendibility: the ability to have enhancement changes made easily
understandability: the ability to understand the software readily, in order to change/fix it (also called maintainability)
efficiency: the speed and compactness of the software
usability: the ability to use the software easily
testability: the ability to construct and execute test cases easily
portability: the ability to move the software easily from one environment to another
functionality: what the product does

Table 3.1 Quality and its description

QUALITY AND SOFTWARE CONSTRUCTION
• Reliability [correctness + robustness]
It should be easier to build software that functions correctly, and easier to guarantee what it does.
Run and test the software as often as possible.
• Reusability [modifiability + extendibility]
We should build less software!
Software should be easier to modify.
Redesign and improve the source code as often as possible.
• Functionality [+ usability]
Ensure that the software does what the user expects and does it in an easy-to-use way.
Build software as early as possible and give it to the user as often as possible.
TESTING QUALITY
Quality means “conformance to requirements”
✓The best testers can only catch defects that are contrary
to specification.
✓Testing does not make the software perfect.
✓If an organization does not have good requirements engineering practices, then it will be very hard to deliver software that fulfills the users' needs, because the product team does not really know what those needs are.
SOFTWARE TESTING
• The goal of software testing is to discover defects in
software, not to show that none are present.
• That is, software testing cannot prove that software is
correct (meets its specifications) for any realistic
system.
• The best defense against residual software errors is
proper design and coding practice.
SOFTWARE TESTING
• The goal of the testing process is to uncover faults
left by the developers.
• The goal of the development process is to enable
the developers to produce a satisfactory version of
the software within budget and on schedule.
• Thus, the two goals are somewhat in opposition. Balancing them is one of the goals of project management.
SOFTWARE TESTING
• Automated software testing can be the following:
- Simple shell commands
- Test harnesses
- Or large as CASE (Computer aided software
engineering)
• Software testing can be open source or commercial
software testing tools
THREE ESSENTIALS OF SYSTEMATIC TESTING
• Test Scripts
• Test Harnesses
• Test Plans

• These three essentials are presented in increasing order of complexity.
THREE ESSENTIALS OF SYSTEMATIC TESTING: Test Script
• A test script is nothing more than a simple, elegant way of getting data into an existing program unit.
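As a concrete illustration, here is a minimal test-script sketch in Python. The command name wordcount and the expected outputs are hypothetical; the point is only the pattern of feeding data into an existing program unit and checking what comes back.

```python
# Minimal test-script sketch. "wordcount" is a hypothetical command-line
# program that reads text on standard input and prints a word count.
import subprocess

# Each case pairs an input string with the output we expect to see.
CASES = [
    ("hello world", "2"),
    ("", "0"),
]

for text, expected in CASES:
    # Feed the input to the existing program unit and capture its output.
    result = subprocess.run(
        ["wordcount"], input=text, capture_output=True, text=True
    )
    actual = result.stdout.strip()
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: input={text!r} expected={expected!r} got={actual!r}")
```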
THREE ESSENTIALS OF SYSTEMATIC TESTING: Test Harness
• A test harness provides a set of input values for the
arguments to a function and produces a systematic
way of saving and examining the function’s output on
these inputs.
• This is a typical feature of many modern IDEs and
nearly all CASE tools.
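A minimal sketch of the idea in Python follows. The function under test, discount(), and its input sets are hypothetical; the harness simply drives the function over a set of argument values and saves the outputs so they can be examined later.

```python
# Minimal test-harness sketch: run a function over a set of input values
# and save the outputs so they can be examined systematically.
import csv

def discount(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1.0 - rate), 2)

# Sets of argument values for the function under test.
INPUTS = [(100.0, 0.10), (59.99, 0.25), (0.0, 0.50)]

with open("harness_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["price", "rate", "output"])  # column headers
    for price, rate in INPUTS:
        writer.writerow([price, rate, discount(price, rate)])
```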
THREE ESSENTIALS OF SYSTEMATIC TESTING: Test Plan
• A test plan is more than a collection of test cases and
a database to contain the results of these test cases.
• It is a strategy for systematically examining the
software to detect faults, then analyzing these faults
and determining an appropriate set of actions.
TEST PLANS
The goal of test planning is to establish the list of tasks which, if performed,
will identify all the requirements that have not been met in the software.
The main work product is the test plan.
✓The test plan documents the overall approach to the test. In many
ways, the test plan serves as a summary of the test activities that will
be performed.
✓It shows how the tests will be organized, and outlines all of the
testers’ needs which must be met in order to properly carry out the
test.
✓The test plan should be inspected by members of the engineering
team and senior managers.
TEST PLAN OUTLINE
ELEMENTS THAT CAN AFFECT A TEST PLAN
• There should be a precise list of requirements that must be
tested
• The test plan must be consistent with the goals of the
organization
• After the essential requirements are tested, the next set of test
cases should consider the cases that most users are likely to
want.
• Next is to test the features that are very unlikely to be
encountered in practice
• The test plan should specify the operational environment
• The plan must allocate enough time for testing
• The plan must have adequate hardware and software resources
for testing
• There must be a sufficient amount of free computing-cycle resources available for testing
• The plan must address the available tools that can be used for
testing data analysis
• The plan must take account of any existing standards and
practices manual
• The plan must consider the type of software (OO, procedural,
mixed)
TEST PLAN
• The important thing is that a realistic plan must be
developed and followed during the software’s
testing. The plan may be influenced by the
organization’s typical software practices.
TEST AUTOMATION
Test automation is a practice in which testers employ a software tool
to reduce or eliminate repetitive tasks.
▪ This can save the testers a lot of time if many iterations of testing
will be required.
▪ It costs a lot to develop and maintain automated test suites, so it
is generally not worth developing them for tests that will execute
only a few times.
TEST CASES
A test case is a description of a specific interaction
that a tester will have in order to test a single
behavior of the software.
TEST CASES
• A typical test case is laid out in a table, and includes:
- A unique name and number
- A requirement which this test case is exercising
- Preconditions which describe the state of the software before the test case (often a previous test case that must always be run before the current test case)
- Steps that describe the specific steps which make up the interaction
- Expected Results which describe the expected state of the software after the test case is executed
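One way to see these fields together is to sketch them as a small data structure. The class below and its sample values are hypothetical, not a prescribed format; they simply mirror the layout described above.

```python
# Sketch of the test-case fields above as a Python dataclass.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                 # unique name and number
    requirement: str          # requirement this test case is exercising
    preconditions: str        # state of the software before the test case
    steps: list[str] = field(default_factory=list)
    expected_results: list[str] = field(default_factory=list)

# Hypothetical sample values.
tc = TestCase(
    name="TC-47: case-insensitive search and replace",
    requirement="REQ-12: replace preserves the case of the original word",
    preconditions="Document from TC-46 is open in the editor",
    steps=[
        "Open search and replace",
        "Enter 'server' as the search term and 'Host' as the replacement",
        "Turn case sensitivity off and execute the search",
    ],
    expected_results=["Every lowercase 'server' is replaced with lowercase 'host'"],
)
```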
TEST CASES
•Test cases must be repeatable.
•Good test cases are data-specific and
describe each interaction necessary to repeat
the test exactly.
TEST CASES – GOOD EXAMPLE

Table 3.2 Good example of a test case


TEST CASES – BAD EXAMPLE
Steps:
1. Bring up search and replace.
2. Enter a lowercase word from the document in the search term field.
3. Enter a mixed-case word in the replacement field.
4. Verify that case sensitivity is not turned on and execute the search.
Expected Results:
1. Verify that the lowercase word has been replaced with the mixed-case term in lowercase.
Table 3.3 Bad example of a test case
SMOKE TESTS
• A smoke test is a subset of the test cases that is typically
representative of the overall test plan.
• Smoke tests are good for verifying proper deployment or other
non-invasive changes.
• They are also useful for verifying a build is ready to send to test.
• Smoke tests are not a substitute for actual functional testing.
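One common way to carve a smoke-test subset out of a larger suite is a custom marker in a test framework. The sketch below uses pytest; the marker name smoke is our own convention (it would be registered in pytest.ini), and the tests themselves are placeholders.

```python
# Run only the smoke subset with:  pytest -m smoke
import pytest

@pytest.mark.smoke
def test_application_starts():
    # Placeholder for a quick, non-invasive check that the build runs at all.
    assert True

def test_full_search_and_replace_behavior():
    # Part of the full functional suite, not the smoke subset.
    assert "ham server".replace("server", "host") == "ham host"
```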
TEST EXECUTION
The software testers begin executing the test plan after the programmers deliver
the alpha build, or a build that they feel is feature complete.
▪ The alpha should be of high quality—the programmers should feel that it is ready
for release, and as good as they can get it.
TEST EXECUTION
There are typically several iterations of test execution.
▪ The first iteration focuses on new functionality that has been added since
the last round of testing.
▪ A regression test is a test designed to make sure that a change to one
area of the software has not caused any other part of the software which
had previously passed its tests to stop working.
▪ Regression testing usually involves executing all test cases which have
previously been executed.
▪ There are typically at least two regression tests for any software project.
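A minimal sketch of the regression idea follows, assuming a hypothetical registry of previously executed test cases: every recorded case is re-run, and anything that used to pass but now fails is flagged.

```python
# Hypothetical registry: test case id -> whether it passed last time.
previously_passed = {"TC-1": True, "TC-2": True, "TC-3": True}

def run_test(test_id: str) -> bool:
    """Placeholder for actually executing the recorded test case."""
    return test_id != "TC-2"  # simulate a regression introduced in TC-2

for test_id, passed_before in previously_passed.items():
    passed_now = run_test(test_id)
    if passed_before and not passed_now:
        print(f"REGRESSION: {test_id} previously passed and now fails")
```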
DEFECT TRACKING
The defect tracking system is a program that testers use to record and track
defects. It routes each defect between testers, developers, the project
manager and others, following a workflow designed to ensure that the defect
is verified and repaired.
▪ Every defect encountered in the test run is recorded and entered into a
defect tracking system so that it can be prioritized.
DEFECT TRACKING
• The defect workflow should track the interaction between the testers
who find the defect and the programmers who fix it. It should ensure
that every defect can be properly prioritized and reviewed by all of the
stakeholders to determine whether or not it should be repaired. This process of review and prioritization is referred to as triage.
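The workflow can be pictured as a small state machine of the kind a tracking tool enforces. The states and transitions below are illustrative, not those of any particular tool.

```python
# Illustrative defect-workflow state machine.
from enum import Enum, auto

class DefectState(Enum):
    NEW = auto()
    TRIAGED = auto()
    ASSIGNED = auto()
    FIXED = auto()
    VERIFIED = auto()
    CLOSED = auto()

# Legal transitions; triage may also close a defect it decides not to repair.
ALLOWED = {
    DefectState.NEW: {DefectState.TRIAGED},
    DefectState.TRIAGED: {DefectState.ASSIGNED, DefectState.CLOSED},
    DefectState.ASSIGNED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.VERIFIED, DefectState.ASSIGNED},  # reopen
    DefectState.VERIFIED: {DefectState.CLOSED},
    DefectState.CLOSED: set(),
}

def move(state: DefectState, new_state: DefectState) -> DefectState:
    """Advance a defect, rejecting transitions the workflow does not allow."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state
```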
TEST ENVIRONMENT AND PERFORMANCE TESTING

The project manager should ask questions regarding desired performance as early as the vision and scope document:
– How many users?
– Concurrency? Peak times?
– Hardware? OS? Security?
– Updates and maintenance?
TEST EXECUTION
When is testing complete?
– No defects found
– Or defects meet acceptance criteria outlined in test plan

Table 3.4 Acceptance criteria from a test plan


TEST ENVIRONMENT AND PERFORMANCE TESTING

Adequate performance testing will usually require a large investment in duplicate hardware and automated performance evaluation tools.
– ALL hardware should match (routers, firewalls, load balancers)
– If the organization cannot afford this expense, it should not be developing the software and should seek another solution.
SUBTOPIC 2
SOFTWARE TESTING STRATEGIES
• A software testing strategy integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software.

• A testing strategy must always incorporate test planning, test case design, test execution, and the resultant data collection and evaluation.
Generic characteristics of all software testing strategies:
1. Conduct effective technical reviews
2. Testing begins at the component level and works "outward" toward the integration of the entire system
3. Different testing techniques are used for different software engineering approaches
Generic characteristics of all software testing strategies:
4. Testing is conducted by the developer. For large projects ➔ independent test group
5. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
VERIFICATION AND VALIDATION
• Software testing is often referred to as verification and validation (V&V).
• Verification refers to the set of activities that ensure that software correctly implements a specific function.
• It asks the question: "Are we building the product right?"
VERIFICATION AND VALIDATION
• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
• It asks the question: "Are we building the right product?"
• From the point of view of the builder, testing can
be considered (psychologically) destructive.
• So the builder treads lightly, designing and
executing tests that will demonstrate that the
program works, rather than uncovering errors.
• Unfortunately, errors will be present. And, if the
software engineer doesn’t find them, the customer
will!
From this discussion, a number of misconceptions can be erroneously inferred:
1. That the developer of software should not do any testing at all;
2. That the software should be "tossed over the wall" to strangers who will test it mercilessly;
3. That testers get involved with the project only when the testing steps are about to begin.
SOFTWARE TESTING STRATEGY

Figure 3.2 Spiral software process


1. A strategy for software testing moves outward along the spiral.
2. Unit testing begins at the vortex of the spiral and concentrates on
each unit of the software as implemented in the source code.
3. Testing progresses by moving outward along the spiral to integration
testing, where the focus is on the design and the construction of the
software architecture.
4. Validation testing is next encountered, where requirements
established as part of software requirement analysis are validated
against the software that has been constructed.
5. Finally, at system testing, where the software and other system
elements are tested as a whole.
SOFTWARE TESTING STRATEGY
From a procedural point of view, testing is a series of four steps that are implemented sequentially:
1. Unit tests: focuses on each module and makes heavy use of
white box testing
2. Integration tests: focuses on the design and construction of
software architecture; black box testing is most prevalent with
limited white box testing.
3. High-order tests: validation and system tests, which make use of black box testing exclusively.

Figure 3.3 Sequential steps in testing


• Unit testing focuses verification effort on the smallest unit
of software design - the module.

• Using the detailed design description as a guide, important


control paths are tested to uncover errors within the
boundary of the module

• The unit test is always white box-oriented


UNIT TESTING

Figure 3.4 Unit Test


1. Because a module is not a stand-alone program, driver and/or stub software must be developed for each unit test.
2. A driver is nothing more than a "main program" that accepts test case data, passes such data to the module (to be tested), and prints the relevant results.
3. Stubs serve to replace modules that are subordinate to (called by) the module to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do nominal data manipulation, prints verification of entry, and returns.
4. Drivers and stubs also represent overhead.
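The sketch below shows a driver and a stub in Python. The module under test, compute_invoice(), and its subordinate, fetch_tax_rate(), are hypothetical; the subordinate call is passed in as a parameter so the stub can stand in for the missing module.

```python
def fetch_tax_rate_stub(region: str) -> float:
    """Stub: honors the subordinate module's interface, prints verification
    of entry, and returns nominal data."""
    print(f"stub entered with region={region!r}")
    return 0.10  # fixed nominal value

def compute_invoice(subtotal: float, region: str, fetch_tax_rate) -> float:
    """Hypothetical module under test; it calls a subordinate module."""
    return round(subtotal * (1.0 + fetch_tax_rate(region)), 2)

def driver() -> None:
    """Driver: a 'main program' that accepts test case data, passes it to
    the module under test, and prints the relevant results."""
    for subtotal, region, expected in [(100.0, "PH", 110.0), (0.0, "PH", 0.0)]:
        actual = compute_invoice(subtotal, region, fetch_tax_rate_stub)
        print(f"{'PASS' if actual == expected else 'FAIL'}: got {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```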
UNIT TESTING PROCEDURES

Figure 3.5 Unit Test Environment


• Integration testing: a technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.

Objective: combine unit-tested modules and build a program structure that has been dictated by design.
Two types: top-down integration; bottom-up integration.
• Top-down integration testing is an incremental approach
to construction of the software architecture.

• Modules are integrated by moving downward through the


hierarchy
INTEGRATION PROCESS
1. The main control module is used as a test driver and stubs are
substituted for all modules directly subordinate to the main control
module
2. Subordinate stubs are replaced one at a time with actual modules
3. Tests are conducted as each module is integrated
4. On the completion of each set of tests, another stub is replaced with
the real module
5. Regression testing (i.e., conducting all or some of the previous tests)
may be conducted to ensure that new errors have not been
introduced
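A minimal sketch of steps 1 and 2 of this process follows, assuming a hypothetical main control module main_report() with two subordinates: the first run uses stubs everywhere, then one stub is replaced with the real module and the tests are run again.

```python
def load_data_stub():
    print("load_data stub entered")
    return [1, 2, 3]  # nominal data

def format_report_stub(rows):
    print("format_report stub entered")
    return f"{len(rows)} rows"

def load_data_real():
    """The actual subordinate module, integrated in step 2."""
    return list(range(10))

def main_report(load_data, format_report) -> str:
    """Main control module; subordinates are passed in so stubs can be
    replaced one at a time as integration proceeds."""
    return format_report(load_data())

print(main_report(load_data_stub, format_report_stub))  # step 1: all stubs
print(main_report(load_data_real, format_report_stub))  # step 2: one real module
```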
TOP-DOWN TESTING
For the program structure below, the following test cases may be derived if top-down integration is conducted:

• Test case 1: Modules A and B are integrated
• Test case 2: Modules A, B, and C are integrated
• Test case 3: Modules A, B, C, and D are integrated (etc.)

Figure 3.6 Integration Testing


• There is a major problem in top-down integration: inadequate testing
at upper levels when data flows at low levels in the hierarchy are
required
Solutions to the above problem
1. Delay many tests until stubs are replaced with actual modules; but this can
lead to difficulties in determining the cause of errors and tends to violate the
highly constrained nature of the top-down approach
2. Develop stubs that perform limited functions that simulate the actual
module; but this can lead to significant overhead
3. Perform bottom-up integration
1. Low-level modules are combined into clusters (sometimes
called builds) that perform a specific software subfunction
2. A driver (a control program for testing) is written to
coordinate test case input and output
3. The cluster is tested
4. Drivers are removed and clusters are combined moving
upward in the program structure
BOTTOM-UP TESTING
Test case 1: Modules E and F are integrated
Test case 2: Modules E, F, and G are integrated
Test case 3: Modules E, F, G, and H are integrated
Test case 4: Modules E, F, G, H, and C are integrated (etc.)
Drivers are used throughout.
• Validation testing: ensuring that software functions in a manner that can be reasonably expected by the customer.
• Achieved through a series of black box tests that demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
• A series of acceptance tests (including both alpha and beta testing) are conducted with the end users.
Alpha testing
1. Is conducted at the developer's site by a customer
2. The developer would supervise
3. Is conducted in a controlled environment

Beta testing
1. Is conducted at one or more customer sites by the end user of the
software
2. The developer is generally not present
3. Is conducted in a "live" environment
• Ultimately, software is only one component of a
larger computer-based system.
• Hence, once software is incorporated with other
system elements (e.g. new hardware, information),
a series of system integration and validation tests
are conducted.
• System testing is a series of different tests whose
primary purpose is to fully exercise the computer-
based system.
• Although each system test has a different purpose,
all work to verify that all system elements have
been properly integrated and perform allocated
functions.
• Recovery testing forces software to fail in a variety of ways and verifies that recovery is properly performed.
• If recovery is automatic, re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness.
• If recovery is manual, the mean time to repair is evaluated to determine whether it is within acceptable limits.
•Security testing attempts to verify that protection
mechanisms built into a system will in fact protect it from
improper penetration.

•Particularly important to a computer-based system that


manages sensitive information or is capable of causing
actions that can improperly harm (or benefit) individuals
when targeted.
• Stress testing is designed to confront programs with abnormal situations where an unusual quantity, frequency, or volume of resources is demanded.
• A variation is called sensitivity testing: it attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
• Performance testing seeks to test the run-time performance of software within the context of an integrated system.
• Extra instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis.
• Use of instrumentation can uncover situations that lead to degradation and possible system failure.
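A minimal instrumentation sketch in Python: a decorator that times an operation and logs the interval as it occurs. The operation handle_request() is a hypothetical stand-in for real work in an integrated system.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def timed(fn):
    """Log how long each call to fn takes."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.3f ms", fn.__name__, elapsed_ms)
    return wrapper

@timed
def handle_request():
    time.sleep(0.02)  # stand-in for real work

handle_request()
```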
