
Software Engineering

SOFTWARE TESTING
Development testing
Development testing includes all testing activities that are carried out by the team developing
the system. The tester of the software is usually the programmer who developed that
software, although this is not always the case. Some development processes use
programmer/tester pairs (Cusumano and Selby, 1998) where each programmer has an
associated tester who develops tests and assists with the testing process. For critical systems,
a more formal process may be used, with a separate testing group within the development
team. They are responsible for developing tests and maintaining detailed records of test
results.

Software Testing is an activity performed to identify errors so that they can be removed to obtain a product of greater quality. Software testing is required to assure and maintain the quality of the software; it represents the ultimate review of specification, design, and coding.
There are different levels of testing:

1. Unit Testing: where individual program units or object classes are tested. Unit
testing should focus on testing the functionality of objects or methods.
2. Component Testing: where several individual units are integrated to create
composite components. Component testing should focus on testing component
interfaces.
3. System Testing: where some or all of the components in a system are integrated and
the system is tested as a whole. System testing should focus on testing component
interactions.

1. Unit Testing
“Unit testing is a type of software testing that focuses on individual units or
components of a software system”. The purpose of unit testing is to validate that each
unit of the software works as intended and meets the requirements. Unit testing is typically
performed by developers, and it is performed early in the development process before the
code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that
new code does not break existing functionality. Unit tests are designed to validate the
smallest possible unit of code, such as a function or a method, and test it in isolation from
the rest of the system. This allows developers to quickly identify and fix any issues early
in the development process, improving the overall quality of the software and reducing the
time required for later testing.
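
As a concrete illustration (added here, not part of the original notes), the following is a minimal sketch of a unit test written with Python's built-in unittest module; the discount_price function and its behaviour are hypothetical. One test exercises normal operation and one exercises an abnormal input.

```python
import unittest

# Hypothetical unit under test: applies a percentage discount to a price.
def discount_price(price, percent):
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_normal_discount(self):
        # Normal operation: 10% off 200 should give 180.
        self.assertEqual(discount_price(200, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        # Abnormal input: a discount above 100% must be rejected, not crash.
        with self.assertRaises(ValueError):
            discount_price(200, 150)

if __name__ == "__main__":
    unittest.main()
```
Because such tests are automated, they can be re-run every time the code changes.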

Objective of Unit Testing:


The objective of Unit Testing is:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers understand the code base and enable them to make changes
quickly.
6. To help with code reuse.

Unit Testing Techniques:


There are three types of unit testing techniques. They are:
1. Black Box Testing: This technique covers unit tests for the input, user interface, and output parts, without reference to the internal structure of the unit.
2. White Box Testing: This technique tests the functional behaviour of the unit by supplying inputs and checking the outputs, taking the internal design structure and code of the modules into account.
3. Gray Box Testing: This technique executes the relevant test cases, test methods, and test functions, and analyses the code behaviour of the modules with partial knowledge of their internals.
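
The sketch below is an added illustration (the classify_triangle function is hypothetical) contrasting the first two techniques: the black-box test is derived purely from the specified inputs and outputs, while the white-box test is chosen by reading the code and targeting a specific internal branch.

```python
import unittest

# Hypothetical unit under test: classifies a triangle by its side lengths.
def classify_triangle(a, b, c):
    if min(a, b, c) <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class BlackBoxTests(unittest.TestCase):
    # Derived from the specification alone: inputs and expected outputs.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

class WhiteBoxTests(unittest.TestCase):
    # Derived from the code: exercises the 'invalid' branch that guards the
    # triangle inequality, which a specification-only tester might overlook.
    def test_triangle_inequality_branch(self):
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

if __name__ == "__main__":
    unittest.main()
```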

Advantages of unit testing:


 Early detection of problems in the development cycle
 Reduced cost
 Test-driven development
 More frequent releases
 Enables easier code refactoring
 Detects changes which may break a design contract
 Reduced uncertainty
 Documentation of system behaviour

Disadvantages of unit testing


 Time Consuming
 Increased Code Complexity
 False Sense of Security
 Maintenance Challenges
 Limitations on Test Coverage
Choosing unit test cases
Testing is expensive and time consuming, so it is important that you choose effective unit test
cases. Effectiveness, in this case, means two things:
1. The test cases should show that, when used as expected, the component that you are testing
does what it is supposed to do.
2. If there are defects in the component, these should be revealed by test cases.

We should write two kinds of test cases. The first of these should reflect “normal operation
of a program and should show that the component works”. For example, if you are testing a
component that creates and initializes a new patient record, then your test case should show
that the record exists in a database and that its fields have been set as specified. The other
kind of test case should be based on testing experience of where common problems arise. It
should use “abnormal inputs to check that these are properly processed and do not crash
the component”.

There are two possible strategies here that can be effective in helping you choose test cases.
These are:
1. Partition testing, where you identify groups of inputs that have common characteristics
and should be processed in the same way (valid inputs). You should choose tests from within
each of these groups; a partition-testing sketch is given after this list.
2. Guideline-based testing, where you use testing guidelines to choose test cases. These
guidelines reflect previous experience of the kinds of errors that programmers often make
when developing components (invalid inputs).
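
To illustrate the first strategy, here is a minimal partition-testing sketch (added here, not part of the original notes); the grade function and its 0–100 mark range are assumed for the example. One representative input is chosen from each valid and invalid partition.

```python
import unittest

# Hypothetical unit under test: grades a mark in the range 0-100.
def grade(mark):
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 40 else "fail"

class PartitionTests(unittest.TestCase):
    # One representative test per input partition.
    def test_failing_partition(self):       # valid partition: 0-39
        self.assertEqual(grade(20), "fail")

    def test_passing_partition(self):       # valid partition: 40-100
        self.assertEqual(grade(70), "pass")

    def test_below_range_partition(self):   # invalid partition: below 0
        with self.assertRaises(ValueError):
            grade(-5)

    def test_above_range_partition(self):   # invalid partition: above 100
        with self.assertRaises(ValueError):
            grade(101)

if __name__ == "__main__":
    unittest.main()
```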

Some of the most general guidelines suggested for guideline-based testing are listed below (a short test sketch follows the list):


 Choose inputs that force the system to generate all error messages;
 Design inputs that cause input buffers to overflow;
 Repeat the same input or series of inputs numerous times;
 Force invalid outputs to be generated;
 Force computation results to be too large or too small.
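
As a hedged illustration of guideline-based testing, the sketch below applies three of these guidelines to a hypothetical parse_age function (the function, its error messages, and its limits are assumptions made for the example, not part of the original notes).

```python
import unittest

# Hypothetical unit under test: parses an age string into an integer.
def parse_age(text):
    if not text.isdigit():
        raise ValueError("age must be a whole number")
    value = int(text)
    if value > 150:
        raise ValueError("age is unrealistically large")
    return value

class GuidelineBasedTests(unittest.TestCase):
    def test_input_that_forces_an_error_message(self):
        # Guideline: choose inputs that force the system to generate error messages.
        with self.assertRaises(ValueError):
            parse_age("abc")

    def test_computation_result_too_large(self):
        # Guideline: force computation results (here, the parsed value) to be too large.
        with self.assertRaises(ValueError):
            parse_age("9" * 100)

    def test_repeated_input(self):
        # Guideline: repeat the same input numerous times and expect stable behaviour.
        for _ in range(1000):
            self.assertEqual(parse_age("42"), 42)

if __name__ == "__main__":
    unittest.main()
```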

COMPONENT TESTING
Component testing is done after unit testing. In this type of testing, test objects such as modules, classes, objects, and programs are tested independently as components, without being integrated with other components. This testing is done by the development team.

Assume a software application consists of five components. Each component is tested independently by the tester as part of the development cycle, before integration testing is performed on it. This helps save time by finding bugs at a very early stage in the cycle. Test structure tools or debugging tools are used for this type of testing, as it is performed by programmers on the code they have written themselves, with the support of an IDE. Defects detected during component testing are fixed as soon as they are found, without formal records being maintained.
Component testing has an important role in finding issues. Before proceeding with integration testing, component testing is performed in order to ensure that each component of the application is working correctly and as per the requirements.
Objective of Component Testing:
 To verify the input and output behaviour of the system.
 To check the usability of each component.
 To test the user comprehensibility of the software.
 To test the state of each component of the system.

Component Testing Process:

 Requirement Analysis:
The user requirements related to each component are analysed.
 Test Planning:
Tests are planned according to the analysis of the user requirements.
 Test Specification:
In this phase it is specified which test cases must be run and which should be skipped.
 Test Execution:
Once the test cases have been specified according to the user requirements, they are executed.
 Test Recording:
Test recording is keeping a record of the defects that are detected.
 Test Verification:
Test verification is the process of determining whether the product meets its specification.
 Completion:
This is the last phase of the testing process, in which the results are analysed.
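
A minimal sketch of a component test follows (added here; BillingComponent, its tax_service collaborator, and the rate used are hypothetical). The point it illustrates is that the component is exercised through its interface in isolation, with the collaborating component replaced by a stub, before integration testing takes place.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: billing depends on a separate tax-rate
# component and talks to it only through its interface.
class BillingComponent:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net_amount, country):
        rate = self.tax_service.rate_for(country)
        return round(net_amount * (1 + rate), 2)

class BillingComponentTest(unittest.TestCase):
    def test_total_uses_the_tax_rate_interface(self):
        # The real tax component is replaced by a stub, so the billing
        # component is tested independently of the rest of the system.
        tax_stub = Mock()
        tax_stub.rate_for.return_value = 0.20
        billing = BillingComponent(tax_stub)
        self.assertEqual(billing.total(100.0, "IN"), 120.0)
        tax_stub.rate_for.assert_called_once_with("IN")

if __name__ == "__main__":
    unittest.main()
```
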
Advantages of component testing
1. It finds the defects in the module and verifies the functioning of the software.
2. It helps in faster delivery.
3. More reliable systems because previously tested components are used.
4. It leads to re-usability of software which provides a lot of benefits.
5. Reduces the development cycle time.
6. Helps in reducing the project cost.
7. Leads to a significant increase in productivity.

Disadvantages of component testing


1. Less control over the evolution of the system.
2. There is a need to compromise the requirements.
SYSTEM TESTING
System testing assesses the complete functionality and performance of a fully integrated software system; it is also known as end-to-end testing.

System Testing is carried out on the whole system in the context of the system requirement specifications, the functional requirement specifications, or both. System testing tests the design and behaviour of the system and also the expectations of the customer. It is performed to test the system beyond the bounds mentioned in the software requirements specification (SRS). System Testing is performed by a testing team that is independent of the development team, which helps to assess the quality of the system impartially. It covers both functional and non-functional testing. System Testing is a black-box testing technique, and it is performed after integration testing and before acceptance testing.

System Testing Process: System Testing is performed in the following steps:


 Test Environment Setup: Create a testing environment for better quality testing.
 Create Test Case: Generate test cases for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the test cases and the test data have been generated, the test cases are executed.
 Defect Reporting: Defects detected in the system are reported.
 Regression Testing: Carried out to check for side effects introduced during the testing process.
 Log Defects: The detected defects are logged and then fixed.
 Retest: If a test is not successful, it is performed again.
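
The following is a rough sketch of what an automated system test might look like for a web-based system (added here; the base URL, the /patients endpoints, and the response fields are hypothetical, and the fully integrated system is assumed to be already deployed and running).

```python
import json
import unittest
import urllib.request

BASE_URL = "http://localhost:8000"  # hypothetical address of the deployed system

class PatientRecordSystemTest(unittest.TestCase):
    def test_create_and_fetch_patient_record(self):
        # Step 1: create a record through the system's public API.
        payload = json.dumps({"name": "A. Patient"}).encode()
        request = urllib.request.Request(
            BASE_URL + "/patients", data=payload,
            headers={"Content-Type": "application/json"}, method="POST")
        with urllib.request.urlopen(request) as response:
            self.assertEqual(response.status, 201)
            created = json.load(response)

        # Step 2: read the record back, which exercises the interaction between
        # the web layer, the business logic, and the database as a whole.
        with urllib.request.urlopen(BASE_URL + f"/patients/{created['id']}") as response:
            fetched = json.load(response)
            self.assertEqual(fetched["name"], "A. Patient")

if __name__ == "__main__":
    unittest.main()
```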

Types of System Testing:


 Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability and reliability of the software product
or application.
 Load Testing: Load Testing is a type of software testing which is carried out to determine the behaviour of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check the robustness of the system under varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which is carried out to check the performance of a software application or system in terms of its capability to scale the number of user requests up or down.
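
As a rough illustration only, the sketch below simulates concurrent user load against a hypothetical endpoint and reports response times; in practice, dedicated tools such as JMeter or Locust are normally used for performance, load, and stress testing.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"  # hypothetical endpoint of the system under test

def timed_request(_):
    # Measure how long one request takes while the system is under load.
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=50) as pool:            # 50 simulated users
        durations = list(pool.map(timed_request, range(500)))   # 500 requests in total
    print(f"median response time: {statistics.median(durations):.3f}s")
    print(f"worst response time:  {max(durations):.3f}s")
```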

Test Driven Development (TDD)


Test-driven development (TDD) is an approach to program development in which you interleave testing and code development (Beck, 2002; Jeffries and Melnik, 2007). Essentially, you develop the code
incrementally, along with a test for that increment. You don’t move on to the next increment
until the code that you have developed passes its test. Test-driven development was
introduced as part of agile methods such as Extreme Programming. However, it can also be
used in plan-driven development processes.

Fig: Test Driven Development

The fundamental TDD process is shown in above Figure. The steps in the process are as
follows:
1. You start by identifying the increment of functionality that is required. This should
normally be small and implementable in a few lines of code.
2. You write a test for this functionality and implement this as an automated test. This
means that the test can be executed and will report whether or not it has passed or
failed.
3. You then run the test, along with all other tests that have been implemented. Initially,
you have not implemented the functionality so the new test will fail. This is deliberate
as it shows that the test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve
refactoring existing code to improve it and adding new code to what’s already there.
5. Once all tests run successfully, you move on to implementing the next chunk of
functionality.
The following sequence of steps is generally followed:
1. Add a test – Write a test case that describes the function completely. In order to make
the test cases the developer must understand the features and requirements using user
stories and use cases.
2. Run all the test cases and make sure that the new test case fails.
3. Write the code that passes the test case
4. Run the test cases
5. Refactor code – This is done to remove duplication of code.
6. Repeat the above-mentioned steps for each new piece of functionality.
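
A minimal sketch of one TDD increment is shown below (the ShoppingCart example is hypothetical, added here): the tests are written first and would initially fail because ShoppingCart does not yet exist (red); just enough code is then added to make them pass (green); afterwards the code can be refactored with the tests acting as a safety net.

```python
import unittest

# Red: the tests for the next small increment are written first. At this point
# the ShoppingCart class below would not yet exist and the tests would fail.
class TestShoppingCart(unittest.TestCase):
    def test_empty_cart_total_is_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add(price=30)
        cart.add(price=20)
        self.assertEqual(cart.total(), 50)

# Green: just enough code is implemented to make the tests pass.
# (Refactoring can follow, with the tests re-run after every change.)
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

if __name__ == "__main__":
    unittest.main()
```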

Motto of TDD:

1. Red – Create a test case and make it fail


2. Green – Make the test case pass by any means.
3. Refactor – Change the code to remove duplication/redundancy.

Benefits of TDD:
 Unit tests provide constant feedback about the functions.
 The quality of the design increases, which further helps in proper maintenance.
 Test-driven development acts as a safety net against bugs.
 TDD ensures that your application actually meets the requirements defined for it.
 TDD has a very short development lifecycle.

Release testing
Release testing is the process of testing a particular release of a system that is intended for use outside
of the development team. Normally, the system release is for customers and users. In a
complex project, however, the release could be for other teams that are developing related
systems. For software products, the release could be for product management who then
prepare it for sale.
There are two important distinctions between release testing and system testing during the
development process:
1. A separate team that has not been involved in the system development should be
responsible for release testing.
2. System testing by the development team should focus on discovering bugs in the system
(defect testing). The objective of release testing is to check that the system meets its
requirements and is good enough for external use (validation testing).
The primary goal of the release testing process is to convince the supplier of the
system that it is good enough for use. If so, it can be released as a product or delivered to the
customer. Release testing, therefore, has to show that the system delivers its specified
functionality, performance, and dependability, and that it does not fail during normal use. It
should take into account all of the system requirements, not just the requirements of the end-
users of the system.
Release testing is usually a black-box testing process where tests are derived from the
system specification. The system is treated as a black box whose behaviour can only be
determined by studying its inputs and the related outputs. Another name for this is ‘functional
testing’, so-called because the tester is only concerned with functionality and not the
implementation of the software.

USER TESTING
User or customer testing is a stage in the testing process in which users or customers provide
input and advice on system testing. This may involve formally testing a system that has been
commissioned from an external supplier, or could be an informal process where users
experiment with a new software product to see if they like it and that it does what they need.
User testing is essential, even when comprehensive system and release testing have been
carried out. The reason for this is that influences from the user’s working environment have a
major effect on the reliability, performance, usability, and robustness of a system.
It is practically impossible for a system developer to replicate the system’s working
environment, as tests in the developer’s environment are inevitably artificial. For example, a
system that is intended for use in a hospital is used in a clinical environment where other
things are going on, such as patient emergencies, conversations with relatives, etc. These all
affect the use of a system, but developers cannot include them in their testing environment.

There are three different types of user testing:

1. Alpha testing, where users of the software work with the development team to test the
software at the developer’s site.
Or
“Alpha Testing is a type of software testing performed to identify bugs before releasing the
software product to the real users or public”.
2. Beta testing, where a release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system developers.
Or
“Beta testing is the process of testing a software product or service in a real-world
environment before its official release”.
3. Acceptance testing, where customers test a system to decide whether or not it is ready to
be accepted from the system developers and deployed in the customer environment.

ACCEPTANCE TESTING
Acceptance testing is an inherent part of custom systems development. It takes place
after release testing. It involves a customer formally testing a system to decide whether or not
it should be accepted from the system developer. Acceptance implies that payment should be
made for the system.

Fig: The acceptance testing process

There are six stages in the acceptance testing process, as shown in above Figure. They are:
1. Define acceptance criteria This stage should, ideally, take place early in the process
before the contract for the system is signed. The acceptance criteria should be part of the
system contract and be agreed between the customer and the developer. In practice, however,
it can be difficult to define criteria so early in the process. Detailed requirements may not be
available and there may be significant requirements change during the development process.
2. Plan acceptance testing This involves deciding on the resources, time, and budget for
acceptance testing and establishing a testing schedule. The acceptance test plan should also
discuss the required coverage of the requirements and the order in which system features are
tested. It should define risks to the testing process, such as system crashes and inadequate
performance, and discuss how these risks can be mitigated.
3. Derive acceptance tests Once acceptance criteria have been established, tests have to be
designed to check whether or not a system is acceptable. Acceptance tests should aim to test
both the functional and non-functional characteristics (e.g., performance) of the system. They
should, ideally, provide complete coverage of the system requirements. In practice, it is
difficult to establish completely objective acceptance criteria. There is often scope for
argument about whether or not a test shows that a criterion has definitely been met.
4. Run acceptance tests The agreed acceptance tests are executed on the system. Ideally, this
should take place in the actual environment where the system will be used, but this may be
disruptive and impractical. Therefore, a user testing environment may have to be set up to run
these tests. It is difficult to automate this process as part of the acceptance tests may involve
testing the interactions between end-users and the system. Some training of end-users may be
required.
5. Negotiate test results It is very unlikely that all of the defined acceptance tests will pass
and that there will be no problems with the system. If this is the case, then acceptance testing
is complete and the system can be handed over. More commonly, some problems will be
discovered. In such cases, the developer and the customer have to negotiate to decide if the
system is good enough to be put into use. They must also agree on the developer’s response
to identified problems.
6. Reject/accept system This stage involves a meeting between the developers and the
customer to decide on whether or not the system should be accepted. If the system is not good
enough for use, then further development is required to fix the identified problems. Once
complete, the acceptance testing phase is repeated.
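
As a hedged illustration of how an agreed acceptance criterion might be turned into an automated check, the sketch below expresses a hypothetical criterion for a hypothetical LibrarySystem as a test; in practice many acceptance tests also involve manual interaction between end-users and the system, which is hard to automate.

```python
import unittest

# Hypothetical facade over the delivered system, used as a customer would use it.
class LibrarySystem:
    def __init__(self):
        self._loans = {}

    def borrow(self, member, book):
        self._loans.setdefault(member, set()).add(book)

    def loans_of(self, member):
        return self._loans.get(member, set())

class AcceptanceCriterion3(unittest.TestCase):
    """Agreed criterion (hypothetical): a registered member can borrow a book
    and the loan is visible on their account immediately afterwards."""

    def test_member_can_borrow_and_see_loan(self):
        system = LibrarySystem()
        system.borrow("member-42", "ISBN-0137035152")                    # given / when
        self.assertIn("ISBN-0137035152", system.loans_of("member-42"))   # then

if __name__ == "__main__":
    unittest.main()
```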
