Home Work-2: Subject Name
Submitted By-
Jivtesh Singh Ahuja
Section – d3803
Evaluator comments:
Q1. Why does knowing how the software works influence how and what you
should test?
ANS:
The whole testing process depends on how the software works according to the requirement
specification given by the user, so the developed software has to pass through both the
validation process and the verification process of testing. Testing is planned on the basis
of the software's requirements and its overall working.
If you test only by running the software without seeing the code, i.e. by black-box testing,
you won't know whether your test cases adequately cover all parts of the software: which
data types it is able to handle, what the different boundary conditions are, how to test
them, and how control moves from one module of the software to another.
So the various testing techniques should be applied as appropriate to the software, for example:
Volume testing
Domain Testing
Scenario testing
Regression Testing
User Acceptance Testing
Alpha Testing
Beta Testing
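The difference between testing blind and testing with knowledge of the requirements can be sketched in a few lines. The discount() function and its age boundaries below are invented for illustration: knowing the specified boundaries (under 12, 65 and over) tells the tester exactly which values to probe, instead of guessing with random inputs.

```python
def discount(age):
    """Hypothetical spec: 50% discount for under-12s and for 65 and over."""
    if age < 12 or age >= 65:
        return 0.5
    return 0.0

# Boundary testing: values cluster just below and at each boundary,
# because that is where off-by-one mistakes (e.g. < vs <=) hide.
boundary_cases = {11: 0.5, 12: 0.0, 64: 0.0, 65: 0.5}
for age, expected in boundary_cases.items():
    assert discount(age) == expected, f"boundary failure at age {age}"
print("all boundary cases pass")
```

A tester who does not know the specification has no way to pick 11, 12, 64, and 65 as the interesting values; one who does can cover the risky inputs with four cases.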
Q2. What is the biggest problem of White-Box testing either Static or
Dynamic?
White box testing (Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box
Testing or Structural Testing) traditionally refers to the use of program source code as a test
basis, that is, as the basis for designing tests and test cases. White-box testing usually involves
tracing possible execution paths through the code and working out what input values would force
the execution of those paths. White-box testing, on its own, cannot identify problems caused by
mismatches between the actual requirements or specification and the code as implemented, but it
can help identify some types of design weaknesses in the code. Its biggest problems are:
1. Test cases are difficult to design without clear functional specifications.
2. It is difficult to identify tricky inputs if the test cases are not developed from the specifications.
3. It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and
difficult.
4. Since knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry
out this type of testing, which increases the cost.
5. It is nearly impossible to examine every bit of code to find hidden errors, which may cause the
application to fail.
6. Very few white-box tests can be done without modifying the program, changing values to force
different execution paths, or generating a full range of inputs to test a particular function.
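The path-tracing idea described above can be sketched as follows. The grade() function is a made-up example: by reading its source, the tester enumerates the execution paths and then works out one input value that forces each path.

```python
def grade(score):
    if score < 0 or score > 100:   # path A: invalid input
        return "invalid"
    if score >= 50:                # path B: passing score
        return "pass"
    return "fail"                  # path C: failing score

# White-box test design: one input per path, chosen by inspecting
# the conditions in the source code rather than the specification.
path_inputs = {"invalid": -1, "pass": 75, "fail": 30}
for expected, score in path_inputs.items():
    assert grade(score) == expected, f"path for {expected!r} not taken"
print("every path exercised")
```

Note the limitation mentioned in the text: if the specification required, say, a "distinction" grade above 90, this path-based approach would never notice the missing branch, because the tests are derived from the code itself.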
Ans:
No, we can’t guarantee that a software product will never have a configuration problem,
because new software may be developed and tested on a particular hardware basis and run
correctly only on that. We can’t guarantee that every user will have the hardware
configuration on which the software was successfully tested; some may have an even newer
configuration. It is not possible for users to update to every new piece of hardware that
comes on the market, and the software may not be compatible with it. Therefore it is never
possible to guarantee that software will never have a configuration problem.
Ans:
Consider a ymail account with:
User Id=Jivtesh_ahuja@ymail.com , Password=jivteshahuja
Test cases:
1) User Id=NULL , Password=NULL
E.g.
User Id= , Password=
Result-- prompt message “please enter user id and password”
2) User ID=Not Valid (containing neither ‘.’ nor ‘@’) , Password= valid
E.g.
User Id= Jivtesh_ahujaymailcom , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”
3) User ID=Not Valid (containing ‘.’ but not ‘@’) , Password= valid
E.g.
User Id= Jivtesh_ahujaymail.com , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”
4) User ID=Not Valid (containing ‘@’ but not ‘.’) , Password= valid
E.g.
User Id= Jivtesh_ahuja@ymailcom , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”
5) User ID=Valid (containing both ‘.’ and ‘@’) , Password= less than 6 characters
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jiv
Result-- prompt message “Too small to be a password”
6) User ID=Valid (containing both ‘.’ and ‘@’) , Password= greater than 12
characters
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jivteshahuja12345678
Result-- prompt message “Too long to be a password”
7) User ID=Valid (containing both ‘.’ and ‘@’) , Password= valid (12 characters)
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jivteshahuja
Result-- prompt message “valid user id and password” Logging in…
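A minimal sketch of the validation logic these test cases imply is shown below. The message strings and the 6–12 character password rule are taken from the cases above; the function name check_login and the exact order of checks are assumptions for illustration.

```python
def check_login(user_id, password):
    """Return the prompt message the test cases above expect."""
    if not user_id or not password:
        return "please enter user id and password"
    if "@" not in user_id or "." not in user_id:
        return "please enter valid user id or password"
    if len(password) < 6:
        return "Too small to be a password"
    if len(password) > 12:
        return "Too long to be a password"
    return "valid user id and password"

# The test cases above, replayed against the sketch:
assert check_login("", "") == "please enter user id and password"
assert check_login("Jivtesh_ahujaymailcom", "jivteshahuja") == \
    "please enter valid user id or password"
assert check_login("Jivtesh_ahuja@ymail.com", "jiv") == \
    "Too small to be a password"
assert check_login("Jivtesh_ahuja@ymail.com", "jivteshahuja12345678") == \
    "Too long to be a password"
assert check_login("Jivtesh_ahuja@ymail.com", "jivteshahuja") == \
    "valid user id and password"
print("all login test cases pass")
```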
Q5. Explain the key elements involved in formal reviews?
Ans:
There are four essential elements to a formal review:
• Identify problems.
The goal of the review is to find problems with the software: not just items that are
wrong, but missing items as well. All criticism should be directed at the design or
code, not the person who created it. Participants shouldn't take any criticism
personally. Leave your egos, emotions, and sensitive feelings at the door.
• Follow rules.
A fixed set of rules should be followed. They may set the amount of code to be
reviewed (usually a couple hundred lines), how much time will be spent (a couple
hours), what can be commented on, and so on. This is important so that the
participants know what their roles are and what they should expect. It helps the review
run more smoothly.
• Prepare.
Each participant is expected to prepare for and contribute to the review. Depending on
the type of review, participants may have different roles. They need to know what
their duties and responsibilities are and be ready to actively fulfill them at the review.
Most of the problems found through the review process are found during preparation,
not at the actual review.
• Write a report.
The review group must produce a written report summarizing the results of the review
and make that report available to the rest of the product development team. It's
imperative that others are told the results of the meeting: how many problems were
found, where they were found, and so on.
Part – B
Q6. Is it acceptable to release a software product that has configuration bugs?
Ans:
No, it is not a proper approach to release software that has configuration bugs,
although you will probably never be able to fix all of them. As in all testing, the process is
risk based: you and your team will need to decide what you can fix and what you can't.
Leaving in an obscure bug that appears only with a rare piece of hardware is an easy
decision. Others won't be as easy.
For example:
In 1994 Disney released its first multimedia CD-ROM game for children,
THE LION KING ANIMATED STORYBOOK. Its sales were huge, in the
millions. But there was a bug: the game did not support many customers'
computer configurations.
Ans:
Region or country: is a possibility as some hardware devices such as DVD players only
work with DVDs in their geographic region like a software which is developed by a
country organization which is not globally spread doesn’t know what type of hardware are
being used in the rest of the world. Another might be consumer or business. Some
hardware is specific to one, but not the other. Think of others that might apply to your
software.
The cost of software is directly related to the time spent testing it: the greater the testing
time, the greater the cost or budget. If a proper amount of testing is not done, the number
of bugs will keep increasing; if excessive testing is done, the cost will increase
exponentially. If the software does not work on the current hardware, i.e. it fails
configuration testing, the project manager must find the types of hardware on which the
software will work. To do this, he or she has to draw up alternatives on the basis of
brand, cost, model, etc., and choose the best among them.
Q8. What are the different levels of testing and the goals of different levels?
For each level which testing approach is more suitable?
• ACCEPTANCE TESTING
Testing to verify a product meets customer specified requirements. A customer usually does
this type of testing on a product that is developed externally.
• COMPATIBILITY TESTING
Testing to ensure compatibility of an application or Web site with different browsers, OSs,
and hardware platforms. Compatibility testing can be performed manually or can be driven by
an automated functional or regression test suite.
• CONFORMANCE TESTING
Verifying implementation conformance to industry standards. Producing tests for the behavior
of an implementation to be sure it provides the portability, interoperability, and/or
compatibility a standard defines.
• FUNCTIONAL TESTING
Validating an application or Web site conforms to its specifications and correctly performs all
its required functions. This entails a series of tests which perform a feature by feature
validation of behavior, using a wide range of normal and erroneous input data. This can
involve testing of the product's user interface, APIs, database management, security,
installation, networking, etc. Functional testing can be performed on an automated or manual basis using
black box or white box methodologies.
• INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically code
modules, individual applications, client and server applications on a network, etc. Integration
Testing follows unit testing and precedes system testing.
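The definition above can be sketched with two tiny modules that are combined and tested as a group. The parse_csv_line()/format_row() pair is invented for illustration; each could pass its own unit tests, and the integration test checks that the output of one is acceptable input for the other.

```python
def parse_csv_line(line):
    """Module 1: split a comma-separated line into trimmed fields."""
    return [field.strip() for field in line.split(",")]

def format_row(fields):
    """Module 2: render a list of fields as a display row."""
    return " | ".join(fields)

# Integration test: the two modules are exercised together, so a
# mismatch (e.g. module 1 leaving whitespace that module 2 renders)
# is caught here even if each module's unit tests pass.
row = parse_csv_line(" a, b ,c ")
assert format_row(row) == "a | b | c"
print("integration test passed")
```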
• LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.
• PERFORMANCE TESTING
Performance testing can be applied to understand your application or WWW site's scalability,
or to benchmark the performance in an environment of third party products such as servers
and middleware for potential purchase. This sort of testing is particularly useful to identify
performance bottlenecks in high use applications. Performance testing generally involves an
automated test suite as this allows easy simulation of a variety of normal, peak, and
exceptional load conditions.
• REGRESSION TESTING
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation
of each new release of a product or Web site. Such testing ensures reported product defects
have been corrected for each new release and that no new quality problems were introduced in
the maintenance process. Though regression testing can be performed manually an automated
test suite is often used to reduce the time and resources needed to perform the required testing.
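A minimal regression-suite sketch using Python's standard unittest module is shown below. The slugify() function and the previously fixed defect its second test guards are hypothetical; the point is that the same suite is rerun unchanged on every new release, so a reintroduced defect is caught immediately.

```python
import unittest

def slugify(title):
    """Function under test: turn a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

class RegressionSuite(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_leading_spaces_defect(self):
        # Guards a (hypothetical) previously fixed defect: leading
        # whitespace used to leak into the slug.
        self.assertEqual(slugify("  Hello"), "hello")

# Rerun the whole suite on every release candidate.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite))
print("regression suite passed:", result.wasSuccessful())
```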
• SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without bothering
with finer details. Originated in the hardware testing practice of turning on a new piece of
hardware for the first time and considering it a success if it does not catch on fire.
• STRESS TESTING
Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. A graceful degradation under
load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed
using the same process as Performance Testing but employing a very high level of simulated
load.
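The stress-testing idea above (increase the load until the component fails, then record where and how it fails) can be sketched with a naive recursive function standing in for the system under test; the recursion limit and load values are arbitrary choices for the sketch.

```python
import sys

def recursive_sum(n):
    """Naive stand-in for the component under stress."""
    return 0 if n == 0 else n + recursive_sum(n - 1)

# Cap the "resource" so the failure point is reachable quickly.
sys.setrecursionlimit(3000)

load = 100
failed_at = None
while load <= 100_000:
    try:
        recursive_sum(load)      # apply the current load level
    except RecursionError:
        failed_at = load         # record where the component broke
        break
    load *= 2                    # double the load and try again

print("component failed at load:", failed_at)
```

Here the failure is graceful (a caught exception rather than a crash), which is the desired outcome the text describes; a real stress test would also record how the system degrades as the load approaches the failure point.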
• SYSTEM TESTING
Testing conducted on a complete, integrated system to evaluate the system's compliance with
its specified requirements. System testing falls within the scope of black box testing, and as
such, should require no knowledge of the inner design of the code or logic.
• UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the
behavior of components of a product to ensure their correct behavior prior to system
integration.
ANS:
Verification is a Quality control process that is used to evaluate whether or not a product,
service, or system complies with regulations, specifications, or conditions imposed at the
start of a development phase. Verification can be in development, scale-up, or production.
This is often an internal process.
It is sometimes said that verification can be expressed by the query "Are you building the
thing right?" and validation by "Are you building the right thing?" "Building the right
thing" refers back to the user's needs, while "building it right" checks that the
specifications are correctly implemented by the system. In some contexts, it is required to
have written requirements for both as well as formal procedures or protocols for
determining compliance.
Q10. In a code review checklist there are some items as given below; categorize
them. Does the code follow the coding conventions of the organization?
4. Has the use of similar-looking operators (e.g. & vs. && or = vs. == in C) been checked?