Software Testing Interview Questions
By
Shivprasad Koirala
Sham Sheikh
(Covering SQA, CMMI, Six Sigma, software automation and software estimation)
• Sample resumes which can help you in making your resume better.
• The testcomplete512demo setup is the installer for TestComplete, a software
automation tool. In the automation chapter we use this tool for explanation. You can
get the latest update for this automated testing tool from
http://www.automatedqa.com/downloads/testcomplete/index.asp
• Estimation TPA sheet
• Free Estimation PDF book.
• As the name suggests, “Software Testing Interview Questions” is mainly meant
for freshers as well as professionals who are searching for opportunities in the
software testing field.
• Other than software testing basics this book also covers software process
interview questions (i.e. Six Sigma and CMMI), which are important from the SQA
point of view. SQA is a position which every tester finally wants to reach at a
senior level.
• This book also goes one step further and covers the software estimation area
with topics like function points and TPA analysis.
• Automation testing is one of the most frequently visited sections during a software
testing interview. It is covered in this book using the TestComplete software.
• From the senior-level point of view, metrics form an important aspect of the
software testing interview. A complete chapter is dedicated to them, covering
the most used metrics like DRE, spoilage, phage, defect density and so on.
• During a software testing interview the quality of a tester is judged by the various
testing techniques he is aware of. This book dedicates a complete chapter to
them, covering complicated concepts like boundary value analysis (BVA),
equivalence, exploratory, random/monkey, pair-wise and orthogonal testing, and
decision tables.
• The best part of the book is that, other than the software testing aspect, it also dwells
on non-technical aspects like resume making, salary negotiation and points to be
remembered during interviews (why do you want to leave the organization?, where
do you see yourself after 3 years? and so on).
• With the book comes an accompanying CD which has the TestComplete software
testing tool setup, a free software estimation PDF, an interview rating sheet and
lots more.
• On the CD we have provided an interview rating sheet which you can use to
judge for yourself to what extent you are ready for software testing interviews.
• On the CD we have also provided a sample resume which can give you a quick
kick start in preparing your own resume.
Dedication
Shivprasad Koirala
This book is dedicated to my kids Sanjana and Simran, whose dad's play time has been
stolen and given to this book. I am thankful to my wife for constantly encouraging me,
and also to BPB Publications for giving a newcomer a platform to perform. On top of
all, thanks to the two old eyes, my mom and dad, for always blessing me. I am blessed
to have Raju as my brother, who always keeps my momentum moving on. I am grateful to
Bhavnesh Asar, who initially conceptualized the idea; I believe concept thinking is more
important than execution. Thanks to Shaam for all the effort. It was his tireless three
months of continuous writing through which we finally made it. Tons of thanks to my
reviewers, whose feedback was an essential tool for improving my writing capabilities.
Sham Sheikh
The credit for being a part of this book goes to three persons: my mummy and daddy, for
everything they did and sacrificed for my sake, because of which I am here, and
Mr. Shiv, because of whom I have learnt and understood how to share knowledge through
books. Thanks to him from the bottom of my heart for giving me an opportunity to
participate in this book. Above all, thanks to the almighty LORD sitting right on top of all
of us, for showing his presence and showering his blessings and love in the form of these
three persons. In case you want to get in contact with me please email
ahteshamax@hotmail.com.
The author works in a big multinational company and has good experience in the software
industry. He presently works as a project lead and in the past has led projects in banking,
travel and financial sectors. But on top of all, I am a simple developer like all you guys out
there doing an 8-hour job. Writing is something I do extra and I love doing it. No one is
perfect and the same holds true for me. So if there is anything you want to comment on or
suggest, or any typo / grammar mistakes or technical mistakes regarding the book you want
to point out, you can mail me at shiv_koirala@yahoo.com. Believe me, guys, your harsh
words will be received with love and treated with the topmost priority. Without all you guys
I am not an author. Writing an interview question book is really a great deal of
responsibility. I have tried to cover the maximum number of questions for the topic because
I always think that leaving out one silly question could cost someone a job. But the huge
natural variation in interviews is difficult to cover in this small book. So if you have come
across questions during an interview which are not addressed in this book, do mail them to
shiv_koirala@yahoo.com. Who knows, that question could save some other guy's
job.
Organization Hierarchy
It’s very important during an interview to be clear about what position you are targeting.
Depending on the position you are targeting, the interviewer shoots questions at you.
For example, if you are looking for a project manager testing position you will be asked
around 20% technical questions and 80% management questions.
Note: - In small scale software houses and mid scale software companies
there are chances that they expect a PM to be very technical. But
in big software houses the situation is very different;
interviews are conducted according to position.... unless the
interviewer changes the rules.
So the employer is looking for a suitable candidate, and the candidate looks for a better
career. Normally in interviews the employer is very clear about what type of candidate he is
looking for. But 90% of the time the candidate is not clear about the position he is looking for.
How many times has it happened to you that you have given a whole interview and
when you mention the position you are looking for... pat comes the answer, “we do not
have any requirements for this position”? So be clear about the position right from
when you start the interview.
There are basically two streams in a software organization: one is the projects team and
the other is the quality team. Projects teams are those who execute the project, and the quality
team comprises testers and SQA. The quality team looks after the quality part of the
project. The director and CTO are at the top of the company hierarchy. The main role of the
director is finance, and he is the owner of all happenings in the company… he is the final
guy who earns the profit. The CTO’s (Chief Technical Officer) main role is to have a bird's
eye view of technical as well as management operations happening in the project. He is closely
involved with the director in reporting, budgeting and people management across the
organization. The program manager comes below the CTO and is mostly involved in
interacting with the project managers to look into the higher-level day-to-day operations in
every project. The program manager normally does not interact directly with the developers
or testers (serious scenarios are an exception); he only takes daily reports and
metrics from the project managers and checks the health of the project. Now let's understand
both the project and quality team hierarchies step by step. Following are the years of
experience expected according to position for the projects team:-
• Junior engineers are mostly freshers and work under software engineers.
• Software engineers have around 1 to 2 years of experience. The interviewer expects
software engineers to be technically at a medium level.
• Senior software engineers have around 2 to 4 years of experience. The interviewer
expects them to be technically very strong.
• Project leads should handle the majority of the technical aspects of a project and should
have around 4 to 8 years of experience. They are also the indirect architects of the project.
The interviewer expects them to be technically strong and decent in terms of architecture.
The interviewer also expects them to have people management skills.
• Project managers are expected to be around 40% technically strong and should
have experience of 10 years plus. But they are interviewed more from the aspects
of project management, client interaction, people management, proposal
preparation etc. So now judge where you stand, and where you want to go..........
The quality hierarchy, for various reasons, comes under the project manager of the
project. Let's start from the bottom of the hierarchy:-
• Junior tester: - Junior testers normally have 1 to 2 years of experience or are
freshers. Their main task is executing test procedures prepared by seniors.
• Senior tester: - Senior testers mainly have 2 to 4 years of experience and are
expected to know how to prepare test plans. They are also aware of various
metrics and testing techniques which help them to communicate the project
health. But the main expectation from a senior tester is that he should
independently prepare test plans by looking at the requirement document.
• Team lead tester: - The team lead tester mainly helps and guides the senior testers.
They are primarily involved in decision making about the testing strategy,
what kind of automation tool will be used etc. They also act as a bridge between the
project manager and the other team members. Not to mention they have a lot of say
in your appraisal :-).
• Project manager tester: - The main job of the project test manager is to collect
accurate metrics and report them to the project manager of the project. He is
also responsible for interacting with SQA to give updates about the quality of the
project. They are also involved from the requirement phase.
• SQA: - If you are starting your career as a tester then you will surely aim to
become an SQA lead sometime down the line, or when you are quite senior. The
main job of SQA is to define the quality standards of the organization and to
ensure that every project follows the quality process.
Even before the interviewer meets you he will first meet your resume. The interviewer
looking at your resume is almost 20% of the interview happening without you knowing it. I
was always a bad guy when it came to resume preparation. But when I looked at my
friends' resumes they were gorgeous. Now that I am writing a series of books on interviews I
thought this would be a good point to put in. You can happily skip it if you are confident
about your resume. There is no hard and fast rule that you have to follow the same pattern,
but just see that every item on this checklist is attended to.
• Use plain text when you are sending resumes through email. For instance, you send
your resume using Microsoft Word, and what if the interviewer is using Linux? He
will never be able to read your resume. You can not be sure version-wise either: you
send your resume in Word 2000 and the guy has Word 97… ouch.
• Attach a covering letter; it really impresses and makes you look traditionally
formal. Yes, even if you are sending your CV through email, send a covering
letter.
Below is the checklist of content you should have in your resume:-
• Start with an objective or summary, for instance:-
o Working as a senior tester for more than 4 years. Implemented testing
automation in projects.
o Followed the industry's best practices and adhered to and implemented
processes which enhanced the quality of technical delivery.
o Pledge to deliver the best technical solutions to the industry.
• Specify your core strengths at the start of the resume, by which the interviewer
can make a quick decision about whether you are eligible for the position.
For example: -
For instance, if you are looking for senior positions specify it explicitly: ‘looking for this
job profile’. Any kind of certification like CSTE etc. can be made visible in this
section.
• Once you have briefly specified your goals and what you have done, it's time to
specify what type of technology you have worked with. For instance BVA,
Automated QA, process (Six Sigma, CMMI), TPA analysis etc.
• After that you can make a run-through of your experience company-wise, that is,
what companies you have worked with, with year / month of joining and year / month
of leaving. This will give the interviewer an overview of what type of companies you
have associated yourself with. Now it's time to mention all the projects you have
worked on till now. It is best to go in descending order, that is, start from your current
project and go backwards.
• Project name / client name (it's sometimes unethical to mention the client's name; I
leave it to the readers).
• Number of team members.
• Time span of the project.
• Tools, languages and technology used to complete the project.
• Brief summary of the project. Senior people who have huge experience will tend
to inflate their CV by putting in a summary for every project. It is best for them to
just put descriptions of the first three projects in descending order, and the rest they
can say verbally during the interview. I have seen CVs above 15 pages… I doubt who
can read them.
• Finally come your education and personal details.
• If you are trying for onsite, do not forget to mention your passport number.
• Some guys tend to make their CV large and huge. I think the optimal size should
be not more than 4 to 5 pages.
• Do not mention your salary in your CV. You can talk about it during the interview
with HR or the interviewer.
• When you are writing the summary for a project, make it effective by using verbs
like "managed a team of 5 members", "architected the project from start to finish" etc.
It carries huge weight.
• This is essential, very essential: take 4 to 5 Xerox copies of your resume; you will
need them now and then.
• Just in case, take at least 2 passport photos with you. You may escape without them,
but many times you will need them.
• Carry all your current office documents, especially your salary slips and joining
letter.
Salary Negotiation
Ok, that's what we all do it for: MONEY… not everyone, but still money means a lot. This
is probably the weakest area for techno-savvy guys. They are not good negotiators. I have
seen so many guys who at the first instance will smile and say “NEGOTIABLE SIR”.
So here are some points:-
• Do a study of what the salary trend is. For instance, have some kind of baseline.
For example, what is the salary trend by number of years of experience? Discuss
this with your friends.
• Do not mention your expected salary on the resume.
• Let the employer make the salary offer first. Try to delay the salary discussion till
the end.
• If they ask what you expect, come up with a figure a little toward the higher end and
say negotiable. Remember, never say negotiable on the figure you have actually aimed
for; HR guys will always bring it down. So negotiate on AIMED SALARY + something
extra.
• The normal trend is that they look at your current salary and add a little to it so that
they can pull you in. Do your homework: my salary is this much and I expect this
much, so whatever happens I will not go below this.
• Do not be harsh during salary negotiations.
• It's good to aim high (for instance, I want 1 billion dollars a month), but at the same
time be realistic.
• Some companies have hidden costs attached to the salary; clarify them rather than be
surprised by the first salary package.
• Many companies add extra performance compensation to your basic salary, which
can be surprising at times. So get a detailed breakdown. It is best to discuss in-hand
salary rather than NET or CTC.
• Talk with the employer about the frequency with which hikes happen.
• Take everything in writing, go back to your house and have a look once with a
cool head: is the offer worth more than what your current employer is giving?
• Do not forget that once you have a job offer in hand you can come back to your
current employer for negotiation.
• Remember, the worst part is cribbing after joining the company that your
colleague is getting more. So be careful during the interview, or be sporting and be a
better negotiator in the next interview.
• One very important thing is that the best negotiation ground is not the new
company where you are going but the old company which you are leaving. So
once you have an offer in hand, get back to your old employer, show them the
offer and then make your next move. It's my experience that negotiating with the
old employer is easier than with the new one…. Frankly, if approached properly, rarely
will anyone say no, as you have spent quite an amount of time with them. Just do
not be aggressive or egoistic about having an offer in hand.
• On top of all, sometimes some things are worth more than money: JOB
SATISFACTION. So whatever you negotiate, if you think you can get the JOB
SATISFACTION aspect on higher grounds, go for it. I think it's worth more than
money.
Salary Structure
Below is the salary structure for a tester. This card rate is from an Indian perspective only.
We have two sections below, one for small and mid-scale companies and the other from an
MNC perspective. All the amounts below are CTC per month (Cost to Company).
On the CD we have provided an interview rating Excel sheet. This sheet will help you by
providing insight into how ready you really are for software testing, JAVA, .NET or
SQL Server interviews. In the sheet we have nine sections:-
• Guidelines
• JAVA
• Java results
• .NET
• .NET Results
• SQL Server
• SQL Server results
• Software testing
• Software testing results
The guidelines sheet defines the guidelines for the rating. For every question you can
give a rating from zero to five. Ratings are given using the following guidelines:-
• 0-You have no idea about the question.
• 1-You know only the definition.
• 2-You know the concept but don't have in-depth knowledge of the subject.
• 3-You know the concept and have partial in-depth knowledge of it.
• 4-You know the concept and have in depth knowledge about the subject.
• 5- You are an expert and no one can touch you in this area.
The remaining eight sections are questions and results. For instance, we have the Software
testing section and the Software testing results section. The Software testing section takes
in the rating inputs for every question and the Software testing results section shows the
output. The same holds true for .NET, JAVA and SQL Server.
For every question you need to select a rating. So go through every question and see how
good you are. Definitely you do not have anyone to govern you, but finally you have to
clear the interview, so be fair and know your results beforehand.
Figure: - Overall rating values
The above figure shows how you have performed in every section and the overall rating.
• One of the first questions asked during an interview is “Can you say something about
yourself?”
• Can you describe yourself and what you have achieved till now?
• Why do you want to leave the current company?
• Where do you see yourself after three years?
• What are your positive and negative points?
• How much do you rate yourself in software testing on a scale of one to ten?
• Are you looking for onsite opportunities? (Be careful; do not show your
desperation for abroad journeys.)
• Why have you changed so many jobs? (Prepare a decent answer; do not blame
companies and individuals for your frequent changes.)
• Never talk for more than 1 minute straight during an interview.
• Have you worked with software automation?
• Do not mention client names in your resume. If asked, say that it's confidential, which
brings forward qualities like honesty.
• When you make your resume keep your recent projects at the top.
• Find out what the employer is looking for (automation or manual testers) by
asking him questions at the start of the interview, or best, before going to the
interview.
• Can you give a brief about your family background?
• As you are a fresher, do you think you can really do this job?
• Have you heard about our company? Can you say five points about our company?
(Just read at least once about the company you are going to.)
• Can you describe the best project you have worked on?
• Do you work on Saturdays and Sundays?
• Which is the biggest team size you have worked with?
• Can you describe the current project you have worked on?
• How much time will you need to join our organization? What's the notice period for
your current company?
• What certifications have you cleared, CSTE etc.?
• Do you have passport size photos, last year's mark sheet, previous companies'
employment letters, last month's salary slip, passport and other necessary
documents?
• What is the most important thing that motivates you?
• Why do you want to leave the previous organization?
• Which type of job gives you the greatest satisfaction?
• What is the type of environment you are looking for?
• Do you have experience in software testing project management?
• Do you like to work in a team or as an individual?
• Describe the best project manager you have worked with.
• Why should I hire you?
• Have you ever been fired or forced to resign?
• Can you explain some important points that you have learnt from your past
project experiences?
• Have you gone through some unsuccessful projects? If yes, can you explain why
the project failed?
• Will you be comfortable with a location shift? If you have personal problems, say
no right at the first stage.... or else within two months you will have to read my book
again.
• Do you work late nights? The best answer: if there is a project deadline, yes. Do not
show that it's your culture to work during nights.
• Any special achievements in your life till now? Tell about the project which you
have done best in your career.
• Any plans of opening your own software company? Beware, do not start pouring out
your Bill Gates dream to him.....it can create a wrong impression.
If you can read English, you can read this book....kidding. There are some legends which
will make your reading more effective. Every question has a simple tag which marks the
rating of the question.
These ratings are given by the author and can vary according to companies and individuals.
(B) Basic Questions: - Basic grade means that according to the interviewer it's a
fundamental question and should be answered. Example: Can you explain unit testing?
Guys stumbling on this question will rarely pass software testing interviews.
(I) Intermediate Questions: - These are mid-level questions and you will be expected to
answer them if you are looking for a decent position in the company.
(A) Advanced Questions: - These are advanced level questions which are expected when
the company is looking for a specialist in the field.
(P) Psyche Questions: - These levels of questions do not judge anything about a candidate;
I rather see them as an attitude problem of the interviewer.
Note: - While reading you may come across sections marked as “Note”,
which highlight special points of that section. One piece of advice: do
not read this book from start to finish. Read the index and see which
sections you are targeting and revise those.
(B) Can you explain PDCA cycle and where does testing fit?
Software testing is an important part of the software development process. In normal
software development the below four steps are important; together they are also referred
to in short as the PDCA cycle.
Plan: - Define your goal and the plan for how you will achieve that goal.
Do / Execute: - Depending on the plan and strategy decided during the plan stage, we
execute accordingly in this phase.
Check: - Check / test to make sure that we are moving according to plan and are
getting the desired results.
Act: - If any issues turn up during the check cycle, then take appropriate action
accordingly and revise your plan again.
So now to answer our question of where testing fits in…. you guessed it right, the check
part of the cycle. So developers and other stakeholders of the project do the “plan and
build”, while testers do the check part of the cycle.
(B) What is the difference between white box, black box and gray box
testing?
Black box testing is a testing strategy which is based solely on the requirements and
specifications. Black box testing requires no knowledge of the internal paths, structure, or
implementation of the software under test.
White box testing is a testing strategy which is based on the internal paths, code structure,
and implementation of the software under test. White box testing generally requires
detailed programming skills.
There is one more type of testing called gray box testing. In this we look into the "box"
under test just long enough to understand how it has been implemented. Then we close up
the box and use our knowledge to choose more effective black box tests.
The below figure shows how both types of testers view an accounting application during
testing. The black box tester views it in terms of a pure accounting application,
while in white box testing the tester knows about the internal structure of the
application. In most scenarios white box testing is done by developers, as they know the
internals of the application. In black box testing we check the overall functionality of the
application, while in white box testing we do code reviews, view the architecture, remove
bad code practices and do component level testing.
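To make the difference concrete, below is a minimal sketch in Python (the discount function and its figures are invented for illustration, not taken from this book). The black box tests check only the stated specification, while the white box tests use knowledge of the internal branches.

# Hypothetical function under test: orders of 1000 or more get a 10% discount.
def discount(total):
    if total < 0:
        raise ValueError("total cannot be negative")
    if total >= 1000:  # internal branch a white box tester would target
        return total * 0.10
    return 0.0

# Black box view: test purely against the specification, no knowledge of internals.
assert discount(1500) == 150.0
assert discount(500) == 0.0

# White box view: knowing the code, also exercise the boundary and error branches.
assert discount(1000) == 100.0  # boundary of the internal >= 1000 branch
try:
    discount(-1)
except ValueError:
    pass  # error branch executed and tested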
Wrong: - The requirements have been implemented incorrectly. This defect is a variance
from the given specification.
Missing: - There is a requirement given by the customer that has not been done. This is a
variance from specification, an indication that the specification was not implemented or
that a requirement of the customer was not noted properly.
Extra: - A requirement incorporated into the product that was not given by the end
customer. This is always a variance from specifications, but may be an attribute desired
by the user of the product. However, it is considered a defect because it's a variance from
the existing requirements.
A risk is a condition that can result in a loss. In many scenarios risk can only be
controlled, not eliminated completely. A defect normally converts to a risk. For instance,
let's say you are developing an accounting application and you have done a wrong tax
calculation. There is a huge possibility this will lead to the risk of the company running at a
loss. But if this defect is controlled then we can either remove this risk completely or
minimize it. The below diagram shows how a defect gets converted to a risk and how, with
proper testing, it can be controlled.
No, an increase in testing does not always mean good for the product, company or the
project. In real test scenarios, out of 100% of test plans only 20% are critical from the
business angle. Running those critical test plans will assure that the testing is proper,
rather than running the full 100% of test plans again and again. Below is the graph which
explains the impact of under-testing and over-testing. If you under-test a system your
number of defects will increase, but on the contrary if you over-test a system your cost of
testing will increase. Even if your defects come down, your cost of testing has shot
up.
Note: - This question will normally be asked to test whether you can
independently set up a testing department. Many companies still think of
testing as secondary. That's where a good testing manager should show
the importance of testing. Bringing in the attitude of testing in
companies which never had a formal testing department is a huge
challenge, because it's not about bringing in a new process but about
changing the mentality.
Below are the important steps to define a testing policy in general. But they can change
according to how you implement them in your organization. Let's understand in detail the
below steps for implementing a testing policy in an organization.
Definition: - The first thing any organization needs to do is define one unique definition
for testing within the organization, so that everyone is of the same mindset.
How to achieve: - How are we going to achieve our objective? Is there going to be a
testing committee? Will there be compulsory test plans which need to be executed, etc.?
Evaluate: - After testing is implemented in a project, how do we evaluate it? Are we
going to derive metrics of defects per phase, per programmer, etc.? Finally, it's
important to let everyone know how testing has added value to the project.
Standards: - Finally, what are the standards we want to achieve by testing? For instance,
we can define that more than 20 defects per KLOC will be considered below
standard, and code review should be done for such code.
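As a small worked example of such a standard, the sketch below computes defect density in defects per KLOC. The line and defect counts are hypothetical.

# Hypothetical numbers: 5,800 lines of code with 131 defects found.
lines_of_code = 5800
defects_found = 131

kloc = lines_of_code / 1000            # thousands of lines of code
defect_density = defects_found / kloc  # defects per KLOC

print(f"Defect density: {defect_density:.1f} defects/KLOC")
# 131 / 5.8 is roughly 22.6 defects/KLOC, which breaches the 20 defects/KLOC
# standard above, so this code would be flagged for review.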
Testing after code and build is a traditional approach, and many companies have
improved on it. Testing should occur in conjunction with each phase, as shown in
the below figure.
Figure: - Modern way of testing
In the requirement phase we can verify whether the requirements meet the customer's needs.
During design we can check whether the design document covers the entire requirement.
In this stage we can also generate rough functional data. We can also review the design
document from the architecture and correctness perspectives. In the build and execution
phase we can execute unit test cases and generate structural and functional data. And
finally comes the testing phase as it was in the traditional way, i.e. run the system test
cases and see if the system works according to the standards. During installation we need
to check if the software is compatible with the system. Finally, during the maintenance
phase, when any fixes are made, we can retest the fixes and perform regression testing.
The design phase is more error-prone than the execution phase. One of the most frequent
defects which occur during design is that it does not cover the complete requirements of
the customer. Second, wrong or bad architecture and technical decisions make the next
phase, execution, more prone to defects. Because the design phase drives the
execution phase it's the most critical phase to test. The testing of the design phase can be
done by good reviews. On an average, 60% of defects occur during the design phase and
40% during the execution phase.
Figure: - Phase wise defect percentage
(B) What kind of inputs do we need from the end user to start proper
testing?
The product has to be finally used by the user. So he is the most important person, as he
has a higher interest in the project than anyone else. From the user we need the following
data, as shown in the below figure:-
• The first important thing is the acceptance test plan from the end user. The
acceptance test defines all the tests which the product has to pass so that it can
go into production.
• The requirement document from the customer. In normal scenarios the customer
never writes a formal document unless he is really mature. But we need to document
it, and the customer should sign off saying yes, this is what he wants.
• The customer should also define the risky sections of the project. For instance, in a
normal accounting project, if a voucher entry screen does not work that will stop
the accounting functionality completely. But if reports are not derived, the
accounting department can live with it for some time. The customer is the right
person to say which section will hit him the most. With this feedback the testers
can prepare a proper test plan for those areas and test them thoroughly.
• The customer should also provide proper data for testing. Feeding proper data
during testing is the most important thing. In many scenarios testers key in wrong
data and expect results which are of no interest to the customer.
Latent defect is an existing defect that has not yet caused a failure just because the exact
set of conditions has never been met.
Masked defect is an existing defect that hasn't yet caused a failure, just because another
defect has prevented that part of the code from being executed.
The below scenario flow explains a latent defect practically. The application has a facility
to print an invoice either by a laser printer or by a dot matrix printer (DMP). In order to
achieve this, the application first searches for a laser printer. If it finds a laser printer it uses
the laser printer and prints. If it does not find a laser printer, the application searches for a
DMP. If the application finds a DMP, it prints using the DMP, or else an error is thrown.
Now, for whatever good reason, for this application till now there was never a case where a
laser printer was not present. So the application never got tested for the DMP. That means
the exact conditions were never met for the DMP. This is called a latent defect.
Now the same application has two defects: one defect is in the DMP search and the other
defect is in the DMP print. But because the search for the DMP fails, the DMP print defect
is never detected. So the DMP print defect is a masked defect.
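The printer flow above can be sketched in code; the function names below are invented. The point is that the DMP branch never executes while a laser printer is always found (latent defect), and the broken DMP print routine hides behind the broken DMP search (masked defect).

# A sketch of the invoice printing flow described above.
def find_laser_printer():
    return "LASER-1"  # so far a laser printer has always been present

def find_dmp():
    # Defect 1 (latent until exposed): the DMP search is broken, but it
    # never runs while a laser printer is always found.
    raise RuntimeError("DMP search defect")

def print_on_dmp(invoice):
    # Defect 2 (masked): this routine is also broken, but because
    # find_dmp() fails first, this defect is never reached or detected.
    raise RuntimeError("DMP print defect")

def print_invoice(invoice):
    laser = find_laser_printer()
    if laser:
        return f"printed {invoice} on {laser}"
    dmp = find_dmp()  # latent path: its exact conditions have never been met
    return print_on_dmp(invoice)

print(print_invoice("INV-42"))  # always takes the laser path, hiding both defects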
If a defect is known at the initial stage then it should be removed during that stage / phase
itself rather than at some later stage. It's a recorded fact that a defect deferred to
later phases proves more costly. The below figure shows how a defect gets costlier as the
phases move ahead. A defect, if identified and removed during the requirement and design
phases, is the most cost effective, while a defect removed during maintenance is 20 times
costlier than in the requirement and design phases. For instance, if a defect is identified
during requirement and design we only need to change documentation, but if it is identified
during maintenance we not only need to fix the defect but also change our test plans, do
regression testing and change all documentation. This is why a defect should
be identified / removed in the earlier phases, and the testing department should be involved
right from the requirement phase and not after the execution phase.
In real scenarios projects are not made of one workbench but of many connected
workbenches. A workbench gives you a way of organized thinking to perform any kind of
task with proper testing. You can visualize every software phase as a workbench with
execute and check steps. The most important point to note is that if we visualize any task
as a workbench, by default we have the check part in the task. The below figure shows how
every software phase can be visualized with the concept of a workbench. Let us understand
the workbench concept in a detailed fashion:-
Requirement phase work bench: - Input is the customer’s requirement, we execute the
task of writing a requirement document, we check if the requirement document addresses
all the customer needs and the output is the requirement document.
Design phase work bench: - Input is the requirement document, we execute the task of
preparing a technical document, review / check is done to see if the design document is
technically correct and addresses all the requirements mentioned in the requirement
document and output is a technical document.
Execution phase work bench: - This is the actual execution of the project. Input is the
technical document; execution is nothing but implementation / coding according to the
technical document and output of this phase is the implementation / source code.
Testing phase work bench: - This is the testing phase of the project. Input is the source
code which needs to be tested; execution is executing the test case and output is the test
results.
Deployment phase work bench: - This is the deployment phase. There are two inputs
for this phase: one is the source code which needs to be deployed, and the other is the test
results on which the deployment is dependent. The output of this phase is that the customer
gets the product, which he can now start using.
Maintenance phase work bench: - Input to this phase is the deployment results,
execution is implementing change request from the end customer, check part is nothing
but running regression testing after every change request implementation and output is a
new release after every change request execution.
Alpha and Beta testing mean different things to different people. Alpha testing is the
acceptance testing done at the development site. Some organizations have a slightly
different visualization of alpha testing: they consider alpha testing to be testing conducted
on an early, unstable version of the software. On the contrary, beta testing is acceptance
testing conducted at the customer end. In short, the difference between beta testing
and alpha testing is the location where the tests are done.
(B) Can you explain how one defect leads to other defects?
Defect cascading is when one defect is caused by another defect, so one defect triggers
another defect. For instance, in the accounting application below there is one defect which
leads to negative taxation. The negative taxation defect affects the ledger, which in
turn affects four other modules.
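A minimal sketch of this cascading, assuming a hypothetical tax routine with a sign error: the wrong tax figure flows into the ledger and corrupts every figure computed from it.

def calculate_tax(amount):
    return amount * -0.1  # defect: should be amount * 0.1

def post_to_ledger(ledger, amount):
    tax = calculate_tax(amount)
    ledger["tax"] += tax                # the negative tax corrupts the ledger...
    ledger["payable"] += amount + tax   # ...and every module reading from it

ledger = {"tax": 0.0, "payable": 0.0}
post_to_ledger(ledger, 100.0)
print(ledger)  # {'tax': -10.0, 'payable': 90.0} - one defect, many bad figures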
(B) What are the different strategies of rollout to the end users?
Phased implementation: - In this the product is rolled out to all users in an incremental
fashion. That means each successive rollout has some added functionality. So as new
functionality comes in, new installations happen and the customer tests progressively. The
good part of this kind of rollout is that the customer can start using the functionality and
provide valuable feedback progressively. The only issue here is that with each rollout and
added functionality the integration becomes more complicated.
Parallel implementation: - In this type of rollout the existing application is run side
by side with the new application. If there are any issues with the new application we
move back to the old application again. One of the biggest problems with parallel
implementation is that we need extra hardware, software and resources.
Below figure shows the different launch strategies for a project roll out.
Figure: - Launch Strategies
In most organizations testing only starts after the execution / coding phase of the project.
But if the organization wants to really benefit from testing, then testers should get involved
right from the requirement phase.
If the tester gets involved right from the requirement phase, then requirement
traceability is one of the important reports which can tell what kind of test coverage the
test cases have.
Below is the figure which shows how we can measure the coverage using the requirement
traceability matrix.
We have extracted the important functionality from the requirement document and
aligned it on the left-hand side of the sheet. On the other side, at the top, we have
mapped the test cases to the requirements. With this we can ensure that all requirements
are covered by our test cases. As shown in the below figure, we can have one or more test
cases covering the same requirement. This is also termed requirement coverage.
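A traceability matrix can be kept as simply as a mapping from requirements to test case ids. The sketch below (requirement names and test case ids are invented) flags uncovered requirements and computes a rough requirement coverage percentage.

# A sketch of a requirement traceability matrix as a simple mapping.
traceability = {
    "Add a user":    ["TC-01", "TC-02"],
    "Login user":    ["TC-03"],
    "Print invoice": [],  # no test case maps here yet
}

uncovered = [req for req, cases in traceability.items() if not cases]
coverage = 100 * (len(traceability) - len(uncovered)) / len(traceability)

print(f"Requirement coverage: {coverage:.0f}%")  # 67%
print("Not covered:", uncovered)                 # ['Print invoice']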
The difference between pilot and beta testing is that a pilot is nothing but actually using
the product (only it is limited to some users), while in beta testing we do not input real
data; the product is installed at the end customer's site to validate whether it can be used
in production.
• The first important thing is to collect the features and concerns from the current
documentation and the data available from the requirement phase. For instance, below
is a list of some features and concerns:-
Features
Add a user
Check user preferences
Login User
Add new invoice
Print invoice
Concerns
Maintainability
Security
Performance
The above table shows the features and concerns. Features are functionalities which the
end user will use, while concerns are global attributes of the project. For instance, the
Security concern has to be applied to all the features listed above.
• Once we have listed the features and concerns, we need to rate the probability /
likelihood of failure of each feature. In the below section we have rated them
Low, High and Medium, but you can put numerical values if you want.
• Once we have rated the failure probability, we need to rate the impact. Impact
means: if we make changes to this feature, how many other features does it
affect? You can see in the below table that we have marked the impact section
accordingly.
• Depending on the priority rating table we have defined priorities for the below
listed features. You can see the priority rating table for the features below.
Now, depending on priority, you can start testing those features first.
• Once the priority is set you can then review it with your team members to
validate it.
The below figure shows the summary of the above steps: list your concerns, rate the
probabilities of failure, give impact ratings, calculate risk / priority, and then review,
review and review.
Figure: - Testing Analysis and design
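The rating steps above reduce to simple arithmetic. The sketch below uses an invented 1-to-3 numeric scale in place of the Low / Medium / High ratings and computes priority as probability multiplied by impact.

# Hypothetical features rated on a 1 (low) to 3 (high) scale.
features = {
    # feature: (failure probability, impact)
    "Add new invoice":        (3, 3),
    "Print invoice":          (2, 3),
    "Check user preferences": (1, 1),
}

# Priority = probability x impact, highest risk first.
prioritized = sorted(features.items(),
                     key=lambda item: item[1][0] * item[1][1],
                     reverse=True)

for feature, (prob, impact) in prioritized:
    print(f"{feature}: risk priority {prob * impact}")
# Test "Add new invoice" (9) first, "Print invoice" (6) next, and so on.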
Entry and exit criteria are a must for the success of any project. If you do not know where
to start and where to finish, then your goals are not clear. By defining exit and entry
criteria you define your boundaries. For instance, you can define an entry criterion that the
customer should give the requirement document or acceptance plan. If this entry
criterion is not met then you will not start the project. At the other end, you can also
define exit criteria for your project. For instance, one of the common exit criteria in all
projects is that the customer has successfully executed the entire acceptance test plan.
In any project the acceptance document is normally prepared using the following inputs.
This can vary from company to company and from project to project.
• Requirement document: - This document specifies what exactly is needed in the
project from the customer perspective.
• Input from customer: - This can be discussions, informal talks, emails etc.
• Project plan: - Project plan prepared by the project manager also serves as a good
input to finalize your acceptance test.
(B) What’s the relation between environment reality and test phases?
Environment reality becomes more important as the test phases move ahead. For
instance, during unit testing you need the environment to be least real, but at the
acceptance phase you should have a 100% real environment, or we can say it should be
the real environment. The below graph shows how with every phase the environment
reality should increase, and finally during acceptance it should be at its maximum.
Figure: - Environment reality
As said in the previous sections, the difference between validation and verification is that
in validation we actually execute the application, while in verification we review it
without actually running the application. Verification is basically of two main types:
walkthroughs and inspections. A walkthrough is an informal way of verification. For
instance, you can call your colleague and do an informal walkthrough to just check if the
documentation and coding are proper. An inspection is a formal and official procedure. For
instance, in your organization you can have an official body which approves design
documents for any project. Every project in your organization needs to go through an
inspection which reviews your design documents. In case there are issues in the design
documents, your project will get an NC (non-conformance) list. You can not proceed
ahead without clearing the NCs given by the inspection team.
Figure: - Walkthrough and Inspection
Regression testing is meant for regression defects. Regression defects are defects due to
which functionality which was working normally earlier has stopped working. This is
probably because of changes made in the program or the environment. To uncover such
defects, regression testing is conducted.
The below figure gives a clear explanation of the difference between regression and
confirmation testing. If we fix a defect in an existing application, we use confirmation
testing to test that the defect is removed. It is very much possible that because of this defect
or the changes made to the application, other sections of the application are affected. So to
ensure that no other section is affected, we can use regression testing to confirm this.
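As a hedged illustration using Python's standard unittest module (the invoice functions are invented), confirmation testing re-runs the exact case that exposed the defect, while regression testing re-runs the existing tests around it.

import unittest

# Hypothetical module under test, after a defect fix in apply_discount().
def apply_discount(total, percent):
    return round(total * (1 - percent / 100), 2)

def invoice_total(items, discount_percent=0):
    return apply_discount(sum(items), discount_percent)

class ConfirmationTest(unittest.TestCase):
    def test_fixed_defect(self):
        # Confirmation testing: re-run the exact case that exposed the
        # defect to confirm the fix works.
        self.assertEqual(apply_discount(200, 10), 180.0)

class RegressionTest(unittest.TestCase):
    def test_total_without_discount(self):
        # Regression testing: re-run existing tests on the surrounding
        # functionality to make sure the fix broke nothing else.
        self.assertEqual(invoice_total([100, 50]), 150.0)

if __name__ == "__main__":
    unittest.main()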
(I) What do you mean by coverage and what are the different types of
coverage techniques?
Coverage is a measure used in software testing to describe the degree to which the
source code is tested. There are three basic types of coverage techniques, as shown in
the below figure:-
the below figure:-
• Statement coverage: - This coverage ensures that each line of source code has
been executed and tested.
• Decision coverage: - This coverage ensures that every decision (true/ false) in the
source code has been executed and tested.
• Path coverage: - In this coverage we ensure that every possible route through a
given part of code is executed and tested.
Figure: - Coverage techniques
While testing the actual product, a code coverage tool is run simultaneously.
While the testing is going on, the code coverage tool monitors the executed statements of
the source code. When the final testing is completed we get a complete report of the
pending statements and also the coverage percentage.
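The sketch below shows what each coverage type demands of one small, invented function: a single test can achieve statement coverage, decision coverage additionally needs every condition to go both ways, and path coverage needs every combination of the decisions.

def shipping_cost(weight, express):
    cost = 5.0           # statement
    if weight > 10:      # decision A
        cost += 2.0
    if express:          # decision B
        cost *= 2
    return cost

# Statement coverage: one test that executes every line is enough.
assert shipping_cost(20, True) == 14.0

# Decision coverage: every decision must evaluate both true and false,
# so we also need a test where both conditions are false.
assert shipping_cost(5, False) == 5.0

# Path coverage: every route through the code, i.e. all four
# true / false combinations of decisions A and B.
assert shipping_cost(20, False) == 7.0
assert shipping_cost(5, True) == 10.0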
When changes are made in an ad hoc and uncontrolled manner, more chaotic situations
arise and more defects are injected. So whenever changes are made they should be done in
a controlled fashion and with proper versioning. At any moment of time we should be able
to revert back to the old version. The main intention of configuration management is that
we can track our changes back if we have issues with the current system. Configuration
management is done by using baselines.
Baselines are logical ends in a software development cycle. For instance, let's say you
have software whose releases will be done in phases, i.e. Phase 1, Phase 2 etc. So
you can baseline your software product after every phase. In this way you will now be
able to track the differences between Phase 1 and Phase 2. Changes can be in various
sections, for instance the requirement document (because some requirements changed),
technical (due to changes the architecture needed to be changed), source code (source
code changes), test plan changes and so on.
For example, consider the below figure which shows how an accounting application
underwent changes and was then baselined with each version. When the accounting
application was released it was released as Ver 1.0 and baselined. After some time
some new features were added and version 2.0 was generated. This was again a logical
end, so we again baselined the application. So now in case we want to track back and see
what the changes from VER 2.0 to VER 1.0 are, we can easily understand them, as we
have logical baselines. After some time the accounting application went through
some defect removal, i.e. VER 3.0 was generated and again baselined, and so on.
A baseline is very important from the testing perspective, because testing a software
product which is constantly changing will not take you anywhere. So when you actually
start testing you need to first baseline the application, so that what you test is for that
baseline. If the developer fixes something then create a new baseline and perform testing
on it. In this way any kind of conflict will be avoided.
There are a minimum of four test plan documents needed in any software project. But
depending on the project and the team members' agreement some of the test plan
documents can be cut out.
Central / Project test plan: - The central test plan is one of the most important
communication channels for all project participants. This document can have essentials like
resource utilization, testing strategy, estimation, risk, priorities and lots more.
Acceptance test plan: - The acceptance test plan is mostly based on user requirements and
is used to verify whether the requirements are satisfied according to customer needs.
Acceptance test cases are like a green flag for the application as to whether or not the
application should go into production.
System test plan: - The system test plan is where all the main action of testing happens.
This testing, in addition to functionality testing, also has load, performance and reliability
tests.
Integration testing: - Integration testing ensures that the various components in the system
interact properly and data is passed properly between them.
Unit testing: - Unit testing is more at a developer level. In unit testing we check the
individual module in isolation. For instance, the developer can check his sorting function
in isolation rather than checking it in an integrated fashion, as in the sketch below.
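For instance, a sorting function can be checked in isolation with Python's standard unittest module. This is only a sketch; the function under test is an invented stand-in.

import unittest

def sort_ascending(values):
    return sorted(values)

class SortFunctionTest(unittest.TestCase):
    def test_unsorted_input(self):
        self.assertEqual(sort_ascending([3, 1, 2]), [1, 2, 3])

    def test_empty_input(self):
        self.assertEqual(sort_ascending([]), [])

if __name__ == "__main__":
    unittest.main()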
The below figure shows the interaction between all the project test plans.
The below figure shows pictorially how the test documents span across the SDLC. We will
try to understand step by step how it works.
Central / Project test plan: - This is the main test plan which outlines the complete test
strategy of the software project. This document should be prepared before the start of the
project and is used till the end of the SDLC.
Acceptance test plan: - This test plan is normally prepared as a joint venture with the end
customer. This document commences during the requirement phase and is completed at the
final delivery.
System test plan: - This test plan starts during the design phase and proceeds till the end
of the project.
Integration and unit test plan: - Both of these test plans start during the execution phase
and continue till the final delivery.
Below are three important steps for doing analysis and design for testing:-
Test objective: - These are broader categories of things which need to be tested in the
application. For instance, in the below figure we have four broad categories of test areas:
policies, error checking, features and speed.
Inventory: - Inventory is nothing but a list of things to be tested for an objective. For
instance, the below figure shows that we have identified inventory like "add new policy",
which is tested for the "types of policies" objective. "Change / Add address" and "delete
customer" are tested for the features objective.
Tracking matrix: - Once we have identified our inventories we need to map the
inventory to test cases. Mapping of inventory to test cases is termed calibration.
Figure: - Calibration
Below is a sample inventory tracking matrix. "Features" is the objective and "Add new
Policy", "Change Address" and "Delete a customer" are the inventory for the objective.
Every inventory item is mapped to a test case. Only the "Delete a customer" inventory item
is not mapped to any test case. This way we can know whether we have covered all the
aspects of the application in testing. The inventory tracking matrix gives us a quick global
view of what is pending and hence also helps us measure coverage of the application. The
below figure shows that the "delete a customer" inventory item is not covered by any test
case, thus raising a red signal about what is not covered.
Note: - During an interview try to explain all the above three steps,
because that's how testing is planned and designed in big companies.
Inventory forms the main backbone of software testing.
(B) Which test cases are written first: white box or black box?
Normally black box test cases are written first and white box test cases later. In order to
write black box test cases we need the requirement document and the design or project
plan. All these documents are easily available at the initial start of the project. White box
test cases can not be started in the initial phase of the project because they need more
architectural clarity, which is not available at the start of the project. So normally white box
test cases are written after black box test cases are written. Black box test cases do not
require system understanding as such, while white box testing needs more structural
understanding. And the structural understanding is clearer in the later part of the project,
i.e. while executing or designing. For black box testing you only need to analyze from the
functional perspective, which is easily available from a simple requirement document.
Figure: - White box and black box
When we install the application at the end client's site it is very much possible that other
applications also exist on the same PC. It is very much possible that those applications
share common DLLs, resources etc. with your application. There is a huge chance in such
situations that your changes can affect the working of the cohabiting software. So the best
practice is that after you install your application, or after any changes, you tell the other
application owners to run a test cycle on their applications.
(B) What different impact ratings have you used in your project?
• Minor: - Very low impact; does not affect operations on a large scale.
• Major: - Affects operations on a very large scale.
• Critical: - Brings the system to a halt; a complete show stopper.
Figure : - Test Impact rating
The IEEE Std. 829-1998 defines the test log as a chronological record of relevant details
about the execution of test cases. It's a detailed view of activity and events in
chronological order. The below figure shows what a test log is, followed by a sample test
log.
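A test log can be kept as simply as a time-stamped list of records. The sketch below is in the spirit of IEEE 829, but the field names are invented, not the standard's exact layout.

from datetime import datetime

test_log = []

def log_event(test_case_id, activity, result=""):
    # Each entry is stamped, so the log stays a chronological record.
    test_log.append({
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "test_case": test_case_id,
        "activity": activity,
        "result": result,
    })

log_event("TC-01", "Execution started")
log_event("TC-01", "Entered invoice data")
log_event("TC-01", "Execution finished", result="PASS")

for entry in test_log:  # entries read back in chronological order
    print(entry)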
Every activity has a life cycle and the software development process is not an exception.
Even if you are not aware of SDLC you must still be following it unknowingly.
But if a software professional is aware of SDLC he can execute the project in a much more
controlled fashion. One of the big benefits of this awareness is that hot-blooded
developers will not directly start execution (coding), which can really lead to the project
running in an uncontrolled fashion. Second, it helps customers and software professionals
to avoid confusion by anticipating the problems and issues beforehand. In short, SDLC
defines the various stages in a software life cycle.
But before we try to understand what SDLC is all about, we need to get a broader view
of the start and end of SDLC. Any project that does not have a start and end is
already in trouble. It's like going out for a drive: you should know where to start
and where to end, or else you are moving around endlessly.
Above is a more global view of the start and end of the SDLC. Any project should
have entry criteria and exit criteria. For instance, a proper estimation document can be an
entry criterion. That means if you do not have a proper estimation document in
place the project will not start. It can be more practical: if half the payment is not received
the project will not start. So there can be a list of points which need to be completed
before a project starts. Finally, there should also be an end to the project, which defines
when the project will end. For instance, if all the test scenarios given by
the end customer are completed, that means the project is finished. In the above figure we
have the entry criteria as an estimation document and the exit criteria as a signed
document from the end client saying the software is delivered.
Below is the figure that shows the typical flow in SDLC, which has five main models. As
per their needs, developers can select a model for their project.
• Waterfall - Big Bang and Phased models.
• Iterative - Spiral and Incremental models.
Waterfall
Let's have a look at the Waterfall model, which is basically divided into two subtypes:-
In the waterfall Big Bang model it is assumed that all stages are frozen, which means it's a
perfect world. But in actual projects such processes are impractical.
In the Phased model the project is divided into small chunks and delivered at intervals by
different teams. In short, chunks are developed in parallel by different teams and get
integrated into the final project. But the disadvantage of this model is that improper
planning may lead to the fall of the project during integration, or any mismatch of
coordination between the teams may cause huge failure.
Iterative model
Incremental model
In this model work is divided into chunks like in the phased waterfall model, but the
difference is that in the Incremental model one team can work on one or many chunks,
unlike in the phased waterfall model.
Spiral model
This model uses a series of prototypes which refine our understanding of what we are
actually going to deliver. Plans are changed if required as per the refining of the prototype.
So every time in this model refining of the prototype is done, the whole process
cycle is repeated.
Evolutionary model
In the Incremental and Spiral models the main problem is that for any change in the middle
of the SDLC cycle we need to iterate a whole new cycle. For instance, if during the final
(delivery) stage the customer demands a change, we iterate the whole cycle again, which
means we need to update all the previous stages (requirement, technical documents,
source code and test plan).
In the Evolutionary model, we divide the software into small units which can be delivered
earlier to the customer's end, which means we try to fulfill the customer's needs. In the
later stages we evolve the software with the new customer needs.
Note: - The V model is one of the favorite questions for any tester, so
please do read it carefully.
V-model
This type of model was developed by testers to emphasize the importance of early testing. In this model testers are involved from the requirement stage itself. Below is the V model cycle diagram, which shows how for every stage some testing activity is done to ensure that the project is moving as planned.
For instance,
• In the requirement stage we have the acceptance test documents created by the testers. The acceptance test document outlines the tests which, if passed, mean the customer will accept the software.
• In the specification stage testers create the system test document. In the coming section system testing is explained in a more elaborate fashion.
• In the design stage we have the integration test documents created by the testers. Integration test documents define the testing steps for how the components should work when integrated. For instance, you develop a customer class and a product class. You have tested the customer class and the product class individually. But in a practical scenario the customer class will interact with the product class, so you also need to test whether the customer class interacts with the product class properly.
• In the implementation stage we have the unit test documents created by the programmers or testers.
Let's try to understand each of these testing phases in more detail.
Unit Testing
Starting from the bottom, the first test level is "Unit Testing". It involves checking that each feature specified in the "Component Design" has been implemented in the component.
In theory an independent tester should do this, but in practice the developer usually does it, as they are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds stubs or uses special software to trick the component into believing it is working in a fully functional system.
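To make this concrete, below is a minimal sketch in Python of a unit test that uses a mock object to stand in for a collaborator that may not be built yet. The class and function names are hypothetical, invented just for illustration:

    import unittest
    from unittest.mock import Mock

    class Product:
        def __init__(self, price):
            self.price = price

    def order_total(products, tax_calculator):
        # Component under test: sums prices and asks a collaborator for tax.
        subtotal = sum(p.price for p in products)
        return subtotal + tax_calculator.tax_for(subtotal)

    class OrderTotalTest(unittest.TestCase):
        def test_total_includes_tax(self):
            # The real tax service may not be built yet, so we stub it.
            tax_calculator = Mock()
            tax_calculator.tax_for.return_value = 10.0
            total = order_total([Product(40.0), Product(50.0)], tax_calculator)
            self.assertEqual(total, 100.0)

    if __name__ == "__main__":
        unittest.main()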
Integration Testing
As the components are constructed and tested they are then linked together to check whether they work with each other. It happens quite often that two components which have passed all their individual tests, when connected to each other, produce one new component full of faults. These tests can be done by specialists, or by the developers.
Integration Testing is not focused on what the components are doing but on how they
communicate with each other, as specified in the "System Design". The "System Design"
defines relationships between components.
The tests are organized to check all the interfaces, until all the components have been
built and interfaced to each other producing the whole system.
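Continuing the customer class and product class example from the V model section, below is a rough sketch of an integration test. It checks the interface between the two classes rather than each class alone; both classes are hypothetical stand-ins, not from any real project:

    import unittest

    class Product:
        def __init__(self, name, stock):
            self.name, self.stock = name, stock

        def reserve(self, quantity):
            if quantity > self.stock:
                raise ValueError("not enough stock")
            self.stock -= quantity

    class Customer:
        def __init__(self):
            self.orders = []

        def buy(self, product, quantity):
            # Integration point: Customer calls into Product.
            product.reserve(quantity)
            self.orders.append((product.name, quantity))

    class CustomerProductIntegrationTest(unittest.TestCase):
        def test_buy_updates_both_objects(self):
            product = Product("pen", stock=10)
            customer = Customer()
            customer.buy(product, 3)
            self.assertEqual(product.stock, 7)          # Product side of the interface
            self.assertIn(("pen", 3), customer.orders)  # Customer side

    if __name__ == "__main__":
        unittest.main()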
System Testing
Once the entire system has been built then it has to be tested against the "System
Specification" to check if it delivers the features required. It is still developer focused,
although specialist developers known as systems testers are normally employed to do it.
In essence System Testing is not about checking the individual parts of the design, but
about checking the system as a whole. In fact it is one giant component.
System testing can involve a number of specialist types of tests to see whether all the functional and non-functional requirements have been met. Besides the functional requirements, several types of testing target the non-functional requirements. There are many such types, the need for which is dictated by how the system is supposed to perform.
Acceptance Testing
Acceptance Testing checks the system against the "Requirements". It is similar to
systems testing in that the whole system is checked but the important difference is the
change in focus:
Systems testing checks that the system that was specified has been delivered. Acceptance
Testing checks that the system will deliver what was requested.
The customer should always do acceptance testing, not the developer. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. This testing is about getting an answer to whether the software is delivered as defined by the customer. It's like getting a green flag from the customer that the software is up to expectations and ready to be used.
In the previous section we looked through all the models. But in real projects hardly any one complete model can fulfill the entire project requirement. In real projects, tailored models prove to be the best, because they share features from all the models such as the Waterfall, Iterative and Evolutionary models and can fit into real-life projects. Tailored models are the most productive and have benefited many organizations. If it's a pure testing project, then the V model is the best.
Isolated test team: - There is a special team of testers which does only testing. The testing team is not tied to any project. It is like having a pool of testers in an organization, who are picked up on demand by a project and pushed back to the pool after completion. This approach is costly, but the benefit is that we get a different angle of thinking from a group which is isolated from development. But yes, because it's a completely isolated team, it definitely comes at a cost.
Outsource: - In this approach we contact an external supplier, hire testing resources and get the testing for our project done by them. Again, the coin has two sides. The good part is that resource handling is done by the external supplier, so you are freed from worries about resources leaving the company, people management etc. The bad side of the coin is that, because they are outsourced vendors, they do not have domain knowledge of your business, and at the initial stage you need to train them in the domain, which is again an added cost.
Inside test team: - In this approach we have a separate team which belongs to the project. The project allocates a separate budget for testing, and this testing team works on this project only. The good side is that you have a dedicated team, and because they are involved in the project they have good knowledge about it. The bad part is that you need to budget for them; in short, it increases the project cost.
Developers as testers: - In this approach the developers of the project perform the testing activity. The good part of this approach is that developers have a very good idea of the inner details, so they can perform a good level of testing. The bad part is that, because the developer and the tester are the same person, there is no different angle, so it is very likely that many defects will be missed.
QA / QC team: - In this approach the quality team is involved in testing. The good part is that because the QA team is involved, good quality of testing can be expected. The bad part is that the QA and QC teams of any organization are also involved with a lot of other activities, which can hamper the testing quality of the project.
In projects there can be scenarios where we need to do boundary value testing. For instance, let's say for a bank application you can withdraw a maximum of 25000 and a minimum of 100. So in boundary value testing we only test the exact boundaries rather than hitting values in the middle: we test on and around the minimum and maximum values. This covers all scenarios. The below figure shows the BV testing for the bank application described previously. TC1 and TC2 are sufficient to test all conditions for the bank. TC3 and TC4 are just duplicate / redundant test cases which do not really add any value to the testing. So by applying proper BV fundamentals we can avoid duplicate test cases which do not add value to testing.
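Below is a small sketch of the boundary value idea in code. The validate_withdrawal function is a hypothetical implementation of the 100 to 25000 rule, and the test values sit at and just around each boundary:

    def validate_withdrawal(amount):
        # Hypothetical rule from the example: minimum 100, maximum 25000.
        return 100 <= amount <= 25000

    # Boundary value test cases: values at and just around each boundary.
    boundary_cases = {
        99: False,      # just below the minimum
        100: True,      # at the minimum
        101: True,      # just above the minimum
        24999: True,    # just below the maximum
        25000: True,    # at the maximum
        25001: False,   # just above the maximum
    }

    for amount, expected in boundary_cases.items():
        assert validate_withdrawal(amount) == expected, amount
    print("all boundary value cases passed")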
In equivalence partitioning we identify inputs which are treated by the system in the same way and produce the same results. You can see from the below figure that test cases TC1 and TC2 both give the same result, i.e. Result1, and TC3 and TC4 both give the same result, i.e. Result2. In short, we have two redundant test cases. By applying equivalence partitioning we minimize the redundant test cases.
Figure: - Equivalence partitioning
So you can apply the below test to see whether inputs form an equivalence class or not: test cases 3 and 4 give the same outputs, so they lie in the same partition; in short, we are doing redundant testing. Both TC3 and TC4 fall in one equivalence partition, so we can prepare one test case that tests one value in between the boundaries, thus eliminating redundant testing in projects.
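The same idea in code: one representative value per partition replaces many redundant test cases. The partitions below follow the bank example and are purely illustrative:

    def validate_withdrawal(amount):
        return 100 <= amount <= 25000

    # Three equivalence partitions for the bank example: below the range,
    # inside the range, above the range. One representative value per
    # partition replaces many redundant test cases.
    partitions = [
        (50, False),     # representative of all values below 100
        (12000, True),   # representative of all values between 100 and 25000
        (30000, False),  # representative of all values above 25000
    ]

    for value, expected in partitions:
        assert validate_withdrawal(value) == expected, value
    print("one test per partition is enough")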
(B) Can you explain how state transition diagram can be helpful
during testing?
Before we understand how state transition diagrams can be useful in testing, let's understand what exactly a state and a transition are. The result of a previous input is called a state, and transitions are actions which cause the state to change from one state to another.
The below figure shows a typical state transition diagram. The arrows signify transitions and the oval shapes signify states. The first transition in the diagram is the issue of the cheque, due to which the cheque gets the state that it is ready to be deposited. The second transition is that the cheque is deposited, because of which we can have two states: either the cheque is cleared or it has bounced.
Figure: - Sample state transition diagram
Now that we are clear about states and transitions, how do they help us in testing? By using states and transitions we can identify test cases, using either states or transitions. But if we use only one entity, i.e. either states or transitions, it is very much possible that we will miss a lot of scenarios. In order to get the maximum benefit we should use the combination of states and transitions. The below figure shows that if we use only states or only transitions in isolation it's possible that we will have partial testing, but the combination of states and transitions can give us better test coverage for an application.
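Below is a minimal sketch of deriving test cases from states and transitions, following the cheque example; for simplicity the two outcomes of a deposit are modelled as two separate transitions:

    # States and transitions from the cheque example, as a simple table:
    # (current state, transition) -> next state
    transitions = {
        ("new", "issue"): "ready to deposit",
        ("ready to deposit", "deposit"): "cleared",   # happy path
        ("ready to deposit", "bounce"): "bounced",    # alternate outcome
    }

    def next_state(state, action):
        if (state, action) not in transitions:
            raise ValueError(f"invalid transition {action!r} from {state!r}")
        return transitions[(state, action)]

    # Each (state, transition) pair becomes one test case:
    assert next_state("new", "issue") == "ready to deposit"
    assert next_state("ready to deposit", "deposit") == "cleared"
    assert next_state("ready to deposit", "bounce") == "bounced"

    # Invalid combinations are also worth testing:
    try:
        next_state("cleared", "deposit")
    except ValueError:
        print("invalid transition correctly rejected")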
This kind of testing is really of little use and is normally performed by newcomers. Its best use is to see whether the system will hold up under adverse effects. Who knows, we may be lucky and find some defects with it... but that's luck, and testing does not really work on luck but rather on planning.
A positive test is when you put in a valid input and expect some action to be completed in
accordance with the specification.
Exploratory testing is often equated with ad hoc testing, but in reality it is not completely ad hoc. Ad hoc testing, just as the name implies, is an unplanned, unstructured, maybe even impulsive journey through the system with the intent of finding bugs. Exploratory testing, on the other hand, is simultaneous learning, test design and test execution. In other words, exploratory testing is any testing to the extent that the tester proactively controls the design of the tests as those tests are performed and uses information gained while testing to design better tests. Exploratory testers are not merely keying in random data, but rather testing areas that their experience (or imagination) tells them are important and then going where those tests take them.
As the name specifies, semi-random testing is nothing but controlled random testing with redundant test cases removed. So what we do is take random test cases and apply equivalence partitioning to them, which removes the redundant test cases, thus giving us semi-random test cases.
An orthogonal array is a two-dimensional array in which, if we choose any two columns, all the combinations of values appear in those columns. The below figure shows a simple L9(3^4) orthogonal array. Here 9 indicates that it has 9 rows, 4 indicates that it has 4 columns and 3 indicates that each cell contains a 1, 2 or 3. Choose any two columns, say columns 1 and 2. They contain the combinations (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3). If you look closely, these cover all the pairs of values. Compare columns 3 and 4 and every pair of values will again appear. This is applied in software testing to help us eliminate duplicate test cases.
Now let's try to apply the orthogonal array in the actual testing field. Let's say we have a scenario in which we need to test a mobile handset with different plan types, terms and sizes. So below are the different situations:-
So now you must be thinking that this means we have 81 combinations, but we can test all these conditions with only 9 test cases. Below is the orthogonal array for the same.
Figure: - Orthogonal array in actual testing
The orthogonal array is very useful because most defects are pair-wise defects, and with an orthogonal array we can reduce redundancy to a huge extent.
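A quick way to convince yourself of the pair-wise property is to check an L9(3^4) array in code. The array below is one standard L9 layout (the row order may differ from the book's figure):

    from itertools import combinations, product

    # One standard L9(3^4) orthogonal array: 9 rows, 4 columns, levels 1-3.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]

    # For every pair of columns, the 9 rows must cover all 9 value pairs.
    for c1, c2 in combinations(range(4), 2):
        pairs = {(row[c1], row[c2]) for row in L9}
        assert pairs == set(product([1, 2, 3], repeat=2)), (c1, c2)

    print("9 rows cover all pair-wise combinations of 4 three-level factors")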
As the name suggests, decision tables are tables that list all possible inputs and all possible outputs. A general form of decision table is shown in the below figure. Condition 1 through Condition N indicate various input conditions. Action 1 through Action N are actions that should be taken depending on the various input combinations. Each rule defines a unique combination of conditions that results in the actions associated with that rule.
Below is a sample decision table for a discount which depends on age; discounts are only allowed if you are married or a student. Using the decision table we have also derived our test cases. Because this is a simple example we cannot see the full importance of decision tables, but just imagine when you have a huge number of possible inputs and outputs. For those kinds of scenarios decision tables give you a much better view.
So below is the decision table for the scenarios described above. In the top part we have the conditions, and in the bottom part the actions which result from the conditions. So read from the right, move to the left and then down to the action. For instance: Married → Yes → then discount, and the same for the student condition. Using a decision table we can ensure to a good extent that we do not skip any validation in a project.
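As an illustration, the discount rules above can be written as a small decision table in code, where each rule directly becomes a test case; the table below follows the married-or-student example:

    # Decision table: each rule maps a combination of conditions to an action.
    # Conditions: (married, student) -> action taken.
    decision_table = {
        (True,  True):  "discount",
        (True,  False): "discount",
        (False, True):  "discount",
        (False, False): "no discount",
    }

    def discount_action(married, student):
        return decision_table[(married, student)]

    # Each rule in the table becomes one test case:
    for (married, student), expected in decision_table.items():
        assert discount_action(married, student) == expected

    print(f"{len(decision_table)} rules -> {len(decision_table)} test cases")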
There are four types of severity ratings as shown in the below table:-
Software process
A process is a series of steps to solve a problem. The below figure shows a pictorial view of how an organization has defined a way to solve any risk problem. In the diagram we have shown two branches: one shows what exactly a process is, and the second branch shows a sample risk mitigation process for an organization. For instance, the below risk mitigation process defines the steps any department should follow to mitigate a risk.
• Identify the risks of the project through discussion, proper requirement gathering and forecasting.
• Once you have identified the risks, prioritize which risk has the most impact and should be tackled on a priority basis.
• Analyze how the risk can be solved by proper impact analysis and planning.
• Finally using the above analysis we mitigate the risk.
Figure: - Software Process
• Salary: - Salaries of the employees form the major component of the cost of implementing any process. Normally, while implementing a process in a company, the organization can either recruit full-time people or share a resource part-time for implementing the process.
• Consultant: - If the process is new, it can also involve engaging consultants, which is again an added cost.
• Training Cost: - Employees of the company also have to undergo training in order to implement the new process.
• Tools: - In order to implement the process, the organization will also need to buy tools, which again need to be budgeted for.
Figure: - Cost of implementing process
A model is nothing but the best practices followed in an industry to solve issues and problems. Models are not made in a day; they are finalized and refined through years of experience and continuous improvement.
Figure: - Model
Many companies reinvent the wheel rather than following already time-tested models in the industry.
A maturity level specifies the level of performance expected from an organization.
Figure: - Maturity level
A process area is an area of improvement defined by CMMI. Every maturity level consists of process areas. A process area is a group of practices or activities performed collectively to achieve a specific objective. For instance, in the below figure we have process areas like Project Planning, Configuration Management and Requirements Gathering.
As the name specifies, tailoring is nothing but changing an action to achieve an objective according to conditions. Whenever tailoring is done there should be adequate reasons for it. Remember, when a process is defined in an organization it should be followed properly; even if tailoring is applied, the process is not bypassed or omitted.
Figure: - Tailoring
Let's try to understand this with a simple example. Let's say in an organization there is a process which defines that every contract should have a signed hard copy. But there can be scenarios in the organization when a person is not physically present, so for those scenarios the contract is signed through email. In short, the process for signing a contract is not bypassed; only the process approach is tailored.
CMMI
(B) What is CMMI and what's the advantage of implementing CMMI in
an organization?
CMMI was formed by integrating multiple previous CMM processes. Below are the areas which CMMI addresses as a result of integrating the CMM processes:-
Both of these concepts are important when implementing a process in any organization. Any new process implemented has to go through these two phases.
There are two aspects to CMMI: one is capability and the other is maturity. Capability means the ability to perform a task, while maturity means performing the task in a matured fashion. Each of the CMMI models lies in one of these categories. There are two models in CMMI: the first is "staged", in which maturity levels organize the process areas; the second is "continuous", in which capability levels organize the process areas.
Below figure shows how process areas are grouped in both models.
Let's try to understand both models in more detail. Before we move deeper into the differences between the two models, let's understand the basic structure of CMMI: process areas, goals and practices. A process area, as said previously, is a group of practices or activities performed to achieve a specific objective. Every process area has specific as well as generic goals that should be satisfied to achieve that objective. To achieve those goals we need to follow certain practices: to achieve specific goals we have specific practices, and to achieve generic goals we have generic practices.
In one of our previous questions we talked about implementation and institutionalization. Implementation can be related to specific practices, while institutionalization can be related to generic practices. So this is the common basic structure in both models: Process area → Specific / Generic goals → Specific / Generic practices.
Now let's try to understand the model structure for both types of representation. In the staged representation we revolve around maturity levels, as shown in the figure below, so all processes have to be at one maturity level.
There are five maturity levels in staged representation as shown in figure below.
Maturity Level 2 (Managed):- At the Managed level, basic project management processes are in place, but these practices are followed only at the project level.
Maturity Level 3 (Defined):- To reach this level the organization should first have achieved Level 2. At the previous level the good practices and processes existed only at the project level; at this level they are brought to the organization level. There are standard practices defined at the organization level which every project should follow. Maturity Level 3 moves ahead by defining a strong, meaningful, organization-level approach to developing products. An important distinction between Maturity Levels 2 and 3 is that at Level 3 processes are described in more detail and more rigorously than at Level 2, and they are defined at the organization level.
Maturity Level 4 (Quantitatively Managed):- To start with this level the organization should have achieved Level 2 and Level 3. At this level statistics come into the picture: the organization controls its projects using statistical and other quantitative techniques. Product quality, process performance and service quality are understood in statistical terms and are managed throughout the life of the processes. Maturity Level 4 concentrates on using metrics to make decisions and to truly measure whether progress is happening and the product is becoming better. The main difference between Levels 3 and 4 is that at Level 3 processes are qualitatively predictable, whereas at Level 4 processes are quantitatively predictable. Level 4 addresses causes of process variation and takes corrective action.
Maturity Level 5 (Optimizing):- The organization has achieved the goals of Maturity Levels 2, 3 and 4. At this level processes are continually improved based on an understanding of the common causes of variation within the processes. This is the final level: everyone in the team is a productive member, defects are minimized, and products are delivered on time and within budget.
Below figure shows in detail all the maturity levels in a pictorial fashion.
Figure: - Maturity level in staged model
The continuous model is the same as the staged model, only the arrangement is a bit different. The continuous representation concentrates on the actions or tasks to be completed within a process area. It focuses on maturing the organization's ability to perform, control and improve performance in that specific process area.
Capability Level 0 (Incomplete):- This level means that the generic or specific practices of Capability Level 1 are not performed.
Capability Level 1 (Performed):- A Capability Level 1 process is expected to perform all Capability Level 1 specific and generic practices for that process area. At this level performance may not be stable and probably does not meet objectives such as quality, cost and schedule, but the task can still be done. It is a first step: you are doing the process, but you cannot really prove that it is the most effective.
(A) How many process areas are present in CMMI and in what
classification do they fall in?
All 25 process areas in CMMI are classified into four major sections.
Process management
These process areas contain the cross-project tasks related to defining, planning, executing, implementing, monitoring, controlling, measuring and improving processes.
Project management
Project Management process areas cover the project management activities related to
planning, monitoring, and controlling the project.
Engineering
The Engineering process areas were written using general engineering terminology so
that any technical discipline involved in the product development process (e.g., software
engineering or mechanical engineering) can use them for process improvement.
Support
The Support process areas address processes that are used in the context of performing
other processes. In general, the Support process areas address processes that are targeted
toward the project and may address processes that apply more generally to the
organization. For example, Process and Product Quality Assurance can be used with all
the process areas to provide an objective evaluation of the processes and work products
described in all the process areas.
This is the biggest step for any organization, because the organization moves from an immature position to a more mature one. Level 1 is an ad hoc process in which people have created personal processes to accomplish certain tasks. With this approach there is a lot of redundant work, and people do not share their information. This leads to "heroes" in the project: when such people move out of the organization, the knowledge moves out too, and the organization suffers.
In Maturity Level 2 individuals share their lessons and best practices, which leads to devising preliminary processes at the project level; in some cases these also move to the organization level. At Level 2 we focus on project management issues that affect the day-to-day routine. It has seven process areas, as shown in the figure below.
Now that the good practices are in place at the project level in Level 2, it is time to move them to the organization level so that everyone benefits from them. So the biggest difference between Level 2 and Level 3 is that good practices from the projects are bubbled up to the organization level, and the organization's approach to doing business is documented. To reach Maturity Level 3, Maturity Level 2 must first be achieved, along with the 14 process areas shown in the given figure.
Maturity Level 4 is all about numbers and statistics. All aspects of the project are managed by numbers, all decisions are made by numbers, and product quality and process performance are measured by numbers. So where at Level 3 we would say "this is of good quality", at Level 4 we say "this is of good quality because the defect ratio is less than 1%". There are two process areas in Level 4, as shown below. To reach Level 4 you should have achieved all the PAs of Level 3 plus the two process areas below.
(I) What different sources are needed to verify authenticity for CMMI
implementation?
There are three different sources from which an appraiser can verify whether the organization followed the process or not.
Interview: - A formal meeting with one or more members of the organization in which they are asked questions, from which the appraiser makes judgments. During the interview the member represents some process area or a role which he performs in the context of that process area. For instance, the appraiser may interview a tester or programmer, asking him indirectly what metrics he has submitted to his project manager. From this the appraiser gets a fair idea of the CMMI implementation in that organization.
Documents: - A written work product which serves as evidence that a process is followed. It can be a hard copy, word document, email or any type of written official proof.
Instruments: - Written information such as questionnaires or surveys filled in by the organization, which give the appraiser an initial picture of how the practices are implemented.
Below is the pictorial view of the sources used to verify how compliant the organization is with CMMI.
SCAMPI stands for Standard CMMI Appraisal Method for Process Improvement. SCAMPI is an appraisal process by which an organization gets CMMI certified. There are three classes of CMMI appraisal methods: Class A, Class B and Class C. Class A is the most rigorous, Class B is less rigorous and Class C is the least rigorous.
Figure: - SCAMPI
Class A: - This is the only method that can provide a rating and get you a CMMI certificate. It requires all three sources of data: instruments, interviews and documents.
Class B: - This requires only two sources of data (interviews and either documents or instruments). But please note that you do not get rated with Class B appraisals; Class B is just a warm-up to check whether the organization is ready for Class A. With less verification the appraisal takes less time. In this class, data sufficiency and draft presentation are optional.
First Strategy
Use Class B to initiate a process improvement plan. After that, apply Class C to check readiness for Class B or Class A. The below diagram shows the strategy.
Third Strategy
Class A is used to initiate an organization-level process. The process improvement plan is based on the identified weaknesses. A Class B appraisal should be performed after six months to check readiness for a second Class A appraisal rating. The below diagram shows the strategy.
Using PIIs, i.e. Practice Implementation Indicators, we collect information about the organization. PIIs give us a compliance matrix showing how practices are performed in the organization. A PII basically consists of three types of information: direct work products, indirect work products and affirmations. Direct and indirect work products come from documents, while affirmations come from interviews. The below table shows sample PIID information for the SAM process area and one of its practices.
Once the PIIDs are filled in we can rate whether the organization is compliant or not. So below are the steps followed during SCAMPI:-
• Gather documentation.
• Conduct interviews.
• Discover and document strengths and weaknesses.
• Communicate / present findings.
Note: - This question will be asked to judge whether you have actually
implemented CMMI in a proper fashion in your organization or not. For
answering this question we will use SAM as the process area, but you
can answer with whatever process area you have implemented in your
organization.
For SAM, below are the two goals, SG1 and SG2, with the practices which need to be implemented to satisfy the process area. SAM helps us define our agreements with suppliers when procuring products for the company. Let's see in the next step how we have mapped our existing process to the SAM practices defined in CMMI.
Figure: - SAM process area
In order to satisfy SAM, below is the process adopted by the company. If anybody wants a product, he first has to raise a demand for the item using the demand form defined by the company. Depending on the demand, the supervisor defines the acquisition type: for instance, is it a production acquisition, an office material acquisition or something else. Once the acquisition type is decided, the organization places an advertisement in the newspaper asking suppliers for quotations. Once all quotations are received, the final supplier is decided depending on cost, quality and other factors. The supplier is then called to the office and signs an agreement with the organization for the delivery of the product. Once the agreement is signed, the supplier sends a sample product which is analyzed by the organization in practice. Finally the product is accepted, and the supplier is asked to send the complete delivery of all products. The product is accepted into the organization by issuing the supplier a proper invoice; the invoice document says that the product is officially accepted by the organization. When the product is installed in the organization, either someone from the supplier side comes for a demo or a help brochure is shipped with the product.
Figure: - SAM process area mapped with actual world
OK, the above explanation was from the perspective of how the organization manages its transactions with suppliers. Now let's try to map how the above process fits into the CMMI model. In the above diagram, all the circled descriptions are nothing but SAM practices of CMMI.
Note: - No one is going to ask such a question, but they would like to
know at least the purpose of each KPA. Second, they would like to know
what you did to attain compliance with these process areas. For
instance, if you say that you did Organizational Process Definition,
they would like to know how you did it; you can justify it by saying
that you made standard documents for coding standards which were then
followed at the organization level for reference. Normally everyone
follows processes; it's only that they do not know it. So try to map
the KPAs to the process you follow. The only purpose of listing all the
KPAs is that, in case you are looking for higher positions in big
companies, they really expect you to speak in terms of KPAs rather than
generic terms. This whole section can act as a quick reference for you
before entering the interview room.
Each process area is defined by a set of goals and practices. There are two categories of
goals and practices: generic and specific. Generic goals and practices are a part of every
process area. Specific goals and practices are specific to a given process area. A process area
is satisfied when company processes cover all of the generic and specific goals and practices
for that process area.
Process areas
The CMMI contains 25 key process areas indicating the aspects of product development
that are to be covered by company processes.
Purpose
The purpose of Causal Analysis and Resolution (CAR) is to identify causes of defects
and other problems and take action to prevent them from occurring in the future.
Purpose
The purpose of Decision Analysis and Resolution (DAR) is to analyze possible decisions
using a formal evaluation process that evaluates identified alternatives against established
criteria.
SG 1 Evaluate Alternatives
SP 1.1-1 Establish Guidelines for Decision Analysis
SP 1.2-1 Establish Evaluation Criteria
SP 1.3-1 Identify Alternative Solutions
SP 1.4-1 Select Evaluation Methods
SP 1.5-1 Evaluate Alternatives
SP 1.6-1 Select Solutions
Purpose
The purpose of Integrated Project Management (IPM) is to establish and manage the
project and the involvement of the relevant stakeholders according to an integrated and
defined process that is tailored from the organization's set of standard processes.
Purpose
The purpose of Integrated Teaming (IT) is to form and sustain an integrated team for the
development of work products.
Purpose
The purpose of Organizational Innovation and Deployment (OID) is to select and deploy
incremental and innovative improvements that measurably improve the organization's
processes and technologies. The improvements support the organization's quality and
process-performance objectives as derived from the organization's business objectives.
SG 1 Select Improvements
SP 1.1-1 Collect and Analyze Improvement Proposals
SP 1.2-1 Identify and Analyze Innovations
SP 1.3-1 Pilot Improvements
SP 1.4-1 Select Improvements for Deployment
SG 2 Deploy Improvements
SP 2.1-1 Plan the Deployment areas
SP 2.2-1 Manage the Deployment
SP 2.3-1 Measure Improvement Effects
Purpose
The purpose of Organizational Training (OT) is to develop the skills and knowledge of
people so that they can perform their roles effectively and efficiently.
Purpose
The purpose of Product Integration (PI) is to assemble the product from the product
components, ensure that the product, as integrated, functions properly and deliver the
product.
Purpose
The purpose of Project Planning (PP) is to establish and maintain plans that define project
activities.
SG 1 Establish Estimates
SP 1.1-1 Estimate the Scope of the Project
SP 1.2-1 Establish Estimates of Work Product and Task Attributes
SP 1.3-1 Define Project Life Cycle
SP 1.4-1 Determine Estimates of Effort and Cost
SG 2 Develop a Project Plan
SP 2.1-1 Establish the Budget and Schedule
SP 2.2-1 Identify Project Risks
SP 2.3-1 Plan for Data Management
SP 2.4-1 Plan for Project Resources
SP 2.5-1 Plan for Needed Knowledge and Skills
SP 2.6-1 Plan Stakeholder Involvement
SP 2.7-1 Establish the Project Plan
SG 3 Obtain Commitment to the Plan
SP 3.1-1 Review Plans that Affect the Project
SP 3.2-1 Reconcile Work and Resource Levels
SP 3.3-1 Obtain Plan Commitment
Purpose
The purpose of Process and Product Quality Assurance (PPQA) is to provide staff and
management with objective insight into processes and associated work products.
Purpose
The purpose of Requirements Management (REQM) is to manage the requirements of the project's products and product components and to identify inconsistencies between those requirements and the project's plans and work products.
SG 1 Manage Requirements
SP 1.1-1 Obtain an Understanding of Requirements
SP 1.2-2 Obtain Commitment to Requirements
SP 1.3-1 Manage Requirements Changes
SP 1.4-2 Maintain Bidirectional Traceability of Requirements
SP 1.5-1 Identify Inconsistencies between Project Work and Requirements
Purpose
The purpose of Risk Management (RSKM) is to identify potential problems before they
occur so that risk-handling activities can be planned and invoked as needed across the life
of the product or project to mitigate adverse impacts on achieving objectives.
Purpose
The purpose of Technical Solution (TS) is to design, develop, and implement solutions to
requirements. Solutions, designs, and implementations encompass products, product
components, and product-related life-cycle processes either alone or in appropriate
combination.
Verification (VER)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Verification (VER) is to ensure that a selected work product meets its specified requirements.
Six Sigma
(B) What is six sigma?
Sigma is a statistical measure of variation in a process. We say a process has achieved six sigma if the quality is 3.4 DPMO (Defects Per Million Opportunities). Six sigma is a problem-solving methodology that can be applied to a process to eliminate the root cause of defects and the costs associated with them.
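A simple worked example of the DPMO arithmetic; the defect and opportunity counts below are invented purely for illustration:

    def dpmo(defects, units, opportunities_per_unit):
        # Defects Per Million Opportunities.
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Invented example: 17 defects over 5000 units with 10 opportunities each.
    print(dpmo(defects=17, units=5000, opportunities_per_unit=10))      # 340.0
    # A six sigma process corresponds to roughly 3.4 DPMO:
    print(dpmo(defects=17, units=500_000, opportunities_per_unit=10))   # 3.4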
(I) Can you explain the different methodologies for the execution and
design processes in SIX sigma?
The main focus of SIX sigma is on reducing defects and variations in processes. DMAIC and DMADV are the models used in most SIX sigma initiatives: DMADV is the model for designing processes, while DMAIC is for improving existing processes.
DMADV stands for:
• Define: - Determine the project goals and the requirements of customers (external
and internal).
• Measure: - Assess customer needs and specifications.
• Analyze: - Examine process options to meet customer requirements.
• Design: - Develop the process to meet the customer requirements.
• Verify: - Check the design to ensure that it meets customer requirements.
DMAIC stands for:
• Define the projects, the goals, and the deliverables to customers (internal and
external). Describe and quantify both the defect and the expected improvement.
• Measure the current performance of the process. Validate data to make sure it is
credible and set the baselines.
• Analyze and determine the root cause(s) of the defects. Narrow the causal factors
to the vital few.
• Improve the process to eliminate defects. Optimize the vital few and their
interrelationships.
• Control the performance of the process. Lock down the gains.
Figure: - Methodology in SIX Sigma
(I) What do executive leaders, champions, Master Black Belts, Green
Belts and Black Belts mean?
SIX sigma is not only about techniques, tools and statistics; the main thing it depends upon is people. In SIX sigma there are five key players:-
• Executive leaders
• Champions
• Master black belt
• Black belts
• Green belts
Let's try to understand the roles of all the players step by step.
Executive leaders: - They are the people who actually decide that the organization needs to do SIX sigma. They promote it throughout the organization and ensure the organization's commitment to SIX sigma. Executive leaders are mainly the CEO or members of the board of directors; in short, they are the people who fund the SIX sigma initiative. They should believe that SIX sigma will improve the organization's processes and that it will succeed, and they should ensure that resources get proper training in SIX sigma, understand how it will benefit the organization, and track the metrics.
Master Black Belt: - This role requires the highest level of technical capability in SIX sigma. Organizations that are just starting up with SIX sigma will normally not have such a person, so an outsider is usually recruited. The main role of the Master Black Belt is to train, mentor and guide: he helps the executive leaders in selecting candidates and the right projects, teaches the basics and trains resources. Master Black Belts regularly meet with Black Belts and Green Belts, training and mentoring them.
Black Belt: - A Black Belt leads a team on a selected project which has to be showcased for SIX sigma. Black Belts are mainly responsible for finding variations and seeing how these variations can be minimized. The Master Black Belt selects the project and trains resources, but Black Belts are the people who actually implement it. A Black Belt normally works on projects as a team lead or project manager; they are central to SIX sigma as they actually implement SIX sigma in the organization.
Green Belt: - Green Belts assist Black Belts in their functional areas. They are mainly on projects and work part-time on SIX sigma implementation. They apply SIX sigma methodologies to solve problems and improve processes at the bottom level. They have just enough knowledge of SIX sigma, and by assisting the Black Belts they help lay the base of the SIX sigma implementation in the organization.
Figure: - SIX key players
(I) What are the different kinds of variations used in six sigma?
Variation is the basis of six sigma. It defines how much change is happening in the output of a process; if a process is improved, the improvement should reduce variations. In six sigma we identify variations in the process, control them and reduce or eliminate defects. Now let's understand how we can measure variations.
There are four basic ways of measuring variations: Mean, Median, Mode and Range. Let's understand each of these in more depth for better analysis.
Mean: - With the mean, variations are measured and compared using mathematical averaging techniques. For instance, the below figure shows measures of how many computers were manufactured in each of two weeks, named Week 1 and Week 2. To calculate the variation using the mean, we calculate the means of Week 1 and Week 2. You can see from the calculations below that we get 5.083 for Week 1 and 2.85 for Week 2, so we have a variation of 2.23.
Figure: - Measuring variations by using Mean
Median: - The median value is the midpoint in our range of data. The midpoint can be found by taking the difference between the highest and lowest values, dividing it by two, and adding the lowest value. For instance, in the below figure, Week 1 has 4 as the lowest value and 7 as the highest value. So first we subtract the lowest value from the highest, i.e. 7 - 4, then divide it by two and add the lowest value. So for Week 1 the median is 5.5 and for Week 2 the median is 2.9, giving a variation of 5.5 - 2.9 = 2.6.
Range: - Range is nothing but the spread of values over a particular data range; in short, it is the difference between the highest and lowest values in the data range. For instance, for the recorded computer data of the two weeks, we find the range values by subtracting the lowest value from the highest.
Figure: - Range for calculating variations
Mode: - Mode is nothing but the most frequently occurring value in a data range. For instance, in our computer manufacturing data, 4 is the most frequent value in Week 1 and 3 is the most frequent value in Week 2, so the variation is 1 between these data ranges.
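All four measures are easy to compute with Python's statistics module, as sketched below. The weekly counts are invented, not the ones from the book's figures, and note that statistics.median computes the true middle value rather than the midpoint-of-range calculation described above:

    import statistics

    # Invented daily computer-manufacturing counts for two weeks.
    week1 = [4, 5, 7, 4, 6, 5, 4]
    week2 = [2, 3, 3, 4, 3, 2, 3]

    for name, fn in [("mean", statistics.mean),
                     ("median", statistics.median),
                     ("mode", statistics.mode)]:
        v1, v2 = fn(week1), fn(week2)
        print(f"{name}: week1={v1}, week2={v2}, variation={v1 - v2}")

    # Range: spread between the highest and lowest values.
    print("range: week1 =", max(week1) - min(week1),
          ", week2 =", max(week2) - min(week2))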
Below is the formula for standard deviation: s = sqrt( sum of (X - Xbar)^2 / n ). The symbol "s" stands for standard deviation, X is an observed value, Xbar (X with a bar on top) is the arithmetic mean, and n is the number of observations. The formula may look complicated, but let's break it up into steps and understand it better.
The first step is to calculate the mean. This can be calculated by adding all the observed
values and dividing the same by the number of observed values.
Figure: - Step 1 Standard deviation
The second step is to subtract the mean from each observation, square the differences and then sum them. Because we square them we will not get negative values. The below figure shows this in detail.
In the third step we divide this sum by the number of observations, as shown in the figure.
Figure: - Step 3 Standard deviation
In the final step we take the square root which gives the standard deviation.
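Putting the four steps together in code, with invented observations:

    import math

    observations = [4, 5, 7, 4, 6, 5, 4]  # invented sample data
    n = len(observations)

    # Step 1: the mean.
    mean = sum(observations) / n

    # Step 2: subtract the mean from each observation, square, and sum.
    squared_diffs = sum((x - mean) ** 2 for x in observations)

    # Step 3: divide by the number of observations.
    variance = squared_diffs / n

    # Step 4: the square root gives the standard deviation.
    s = math.sqrt(variance)
    print(round(s, 3))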
Note: - Below are some questions which we have not answered and have
left as an exercise for the reader. We will definitely try to cover
them in the coming second edition.
(B) Can you explain the concept of fish bone/ Ishikawa diagram?
There are situations where we need to analyze what caused a failure or problem in a project. The fish bone or Ishikawa diagram is an important concept which can help you list the root causes of a problem. The fish bone diagram was conceptualized by Ishikawa, so in honor of its inventor the concept was named the Ishikawa diagram. Inputs for a fish bone diagram come from discussion and brainstorming with the people who were involved in the project. The below figure shows the structure of an Ishikawa diagram.
Below is a sample fish bone diagram. The main bone is the problem we need to address, in order to know what caused the failure. For instance, the below fish bone is constructed to find out what caused the project failure. To determine this we have taken four main bones as inputs: Finance, Process, People and Tools. For instance, on the People front: there are many resignations → this was caused by a lack of job satisfaction → this was caused by the project being a maintenance project. In the same way, causes are analyzed on the Tools front: no tools were used in the project → because no resource had enough knowledge of them → this happened because of a lack of planning. On the Process front: the process was ad hoc → this was because of tight deadlines → this was caused by marketing people over-promising and not negotiating properly with the end customer.
Once the diagram is drawn, the end bones of the fish bone signify the root causes of the project failure. From the below diagram, here's the list:-
Metrics
(B) What is meant by measure and metrics?
Measures are quantitatively defined units, for instance hours, km etc. Metrics basically comprise more than one measure; for instance, we can have metrics like km/hr, m/s etc.
Number of defects is one of the measures used to measure test effectiveness. One of the side effects of the number of defects is that all bugs are not equal, so it becomes necessary to weight bugs according to their criticality level. If we use number of defects as the metric, the following are the issues:-
• The number of bugs that originally existed significantly impacts the number of bugs discovered, which in turn gives a wrong measure of the software quality.
• All defects are not equal, so defects should be weighted by criticality level to get the right measure of software quality.
Below are three simple tables which show the number of defects SDLC-phase-wise, module-wise and developer-wise.
Figure: - Number of defects phase wise
This is one of the most effective measures: the number of defects found in production, i.e. by the customer, is recorded. The only issue with this measure is that it can include latent and masked defects, which can give us a wrong value for software quality.
Defect seeding is a technique that was developed to estimate the number of defects resident in a piece of software. It is a bit of an offline technique and should not be used by everyone. The process is as follows: we inject the application with defects and then see whether they are found or not. So, for instance, if we have injected 100 defects, we try to get three values: how many seeded defects were discovered, how many were not discovered, and how many new (unseeded) defects were discovered. By using defect seeding we can predict the number of defects remaining in the system.
Figure: - Defect seeding
Let's understand the concept of defect seeding by doing a detailed calculation, and also try to understand how we can predict the number of defects remaining in a system. Below is the calculation:
• First calculate the seed ratio using the formula: number of seeded bugs found divided by the total number of seeded bugs.
• Then calculate the estimated total number of defects using the formula: number of (unseeded) defects found divided by the seed ratio.
• Finally, the estimated number of defects remaining is the estimated total number of defects minus the number of defects found.
The below figure shows a sample with the step-by-step calculation. You can see that first we calculate the seed ratio, then the estimated total number of defects, and finally the estimated remaining defects.
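The same calculation in code, with invented counts:

    seeded_total = 100      # defects deliberately injected (invented numbers)
    seeded_found = 80       # seeded defects discovered by testing
    real_found = 60         # unseeded (real) defects discovered by testing

    # Step 1: seed ratio = seeded defects found / total seeded defects.
    seed_ratio = seeded_found / seeded_total              # 0.8

    # Step 2: estimated total real defects = real defects found / seed ratio.
    estimated_total = real_found / seed_ratio             # 75.0

    # Step 3: estimated defects still remaining in the system.
    estimated_remaining = estimated_total - real_found    # 15.0
    print(seed_ratio, estimated_total, estimated_remaining)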
Figure: - DRE formula
The success of DRE depends on a lot of factors; some of them are listed below:-
DRE is also useful for measuring the effectiveness of a particular test phase, like acceptance, unit or system testing. The below figure shows defect numbers at the various software cycle levels. The + indicates that defects are input to the phase and - indicates that these many defects were removed from that particular phase. For instance, in the requirement phase 100 defects were present, but 20 defects were removed from the requirement phase due to review. So if 20 defects are removed, 80 defects get carried to the next phase (design), and so on.
Figure: - Defect injected and removed per phase
First let's calculate the simple DRE of the above diagram. DRE is the total bugs found in testing divided by the total bugs found in testing plus the total bugs found by the user, i.e. during acceptance testing. The below diagram gives the DRE for the above values.
The unit testing DRE calculation is similar to the system testing DRE. As you can see from the below figure, it is nothing but the number of defects found during unit testing divided by the number of defects found during unit testing plus the number of defects found during system testing.
One important factor to note while calculating unit test DRE is that we need to exclude those defects which cannot be reproduced due to the limitations of unit testing, for instance the passing of data from one component to another. In unit testing, because we test a component as a single unit, we can never reproduce defects which involve interaction. So defects which cannot be produced due to the nature of the testing itself should be excluded to get accurate results.
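A sketch of both DRE calculations with invented defect counts:

    # Invented defect counts.
    found_in_testing = 90       # defects found by the test team
    found_by_user = 10          # defects that escaped to acceptance/production

    # Overall DRE: bugs found in testing / (bugs in testing + bugs found by user).
    dre = found_in_testing / (found_in_testing + found_by_user)
    print(f"overall DRE = {dre:.0%}")   # 90%

    # Unit testing DRE, excluding defects unit tests cannot catch (e.g.
    # component-interaction defects, which need integration/system testing).
    unit_found = 40
    system_found = 25
    interaction_defects = 5     # excluded: not reproducible in unit tests
    unit_dre = unit_found / (unit_found + system_found - interaction_defects)
    print(f"unit test DRE = {unit_dre:.0%}")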
(I) How do you measure test effectiveness?
Test effectiveness is measure of the bug finding ability of our tests. In short it measures
how good the tests were?. So effectiveness is the ratio of measure of bugs found during
testing to total bugs found. Total bugs are the sum of new defects found by user + bugs
found in test. Below figure explains the calculation in a more pictorial format.
Defect age is also called Phase Age or Phage. One of the most important things to remember in testing is that the later we find a defect, the more it costs to fix. The defect age and defect spoilage metrics work on the same fundamental, i.e. how late you found the defect. So the first thing we need to define is the scale of the defect age according to phases. For instance, the below table defines the scale according to the phases: a requirement defect found in the design phase has a scale of 1, and the same defect, if it propagates to the production phase, goes up to a scale of 4.
Once the scale is decided we can find the defect spoilage. Spoilage weights defects from previous phases by the scale. For instance, in the below figure we have found 8 defects in the design phase, of which 4 defects have propagated from the requirement phase. So we multiply those 4 defects by the scale defined in the previous table and get a value of 4. In the same fashion we calculate for all the phases. Below is the spoilage formula: it is the sum of the defects passed on from previous phases, each multiplied by its discovered-phase weight, divided by the total number of defects. For instance, the first row shows that the total defects are 27 and the sum of the passed-on defects multiplied by their factors is 8 (4 x 1 = 4 and 2 x 2 = 4, giving 4 + 4 = 8). In this way we calculate for all phases and finally the total. The optimal value is 1; a lower value of spoilage indicates a more effective defect discovery process.
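The spoilage arithmetic for the design-phase row, written out in code with the counts from the example above:

    # Defect age scale: how many phases late a defect was discovered, e.g.
    # a requirement defect found in design has age 1; in production, age 4.

    # Design-phase example from the text: 27 defects in total, of which
    # 4 are one phase old (age 1) and 2 are two phases old (age 2).
    passed_on = [(4, 1), (2, 2)]          # (defect count, age scale)
    total_defects = 27

    weighted = sum(count * age for count, age in passed_on)   # 4*1 + 2*2 = 8
    spoilage = weighted / total_defects
    # The text treats 1 as the optimal value, with lower values indicating
    # more effective defect discovery.
    print(round(spoilage, 2))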
(I) Which automation tool have you worked with, and can you explain it
in brief?
Note: - For this book we are using AutomatedQA TestComplete as the test
automation tool, so we will answer from the point of view of that tool.
You can install the tool and practice for yourself to see how it really
works.
In this answer we will be testing a tool called "WindowsFileSearch". This tool does the following:-
• It searches for files by name and by the internal content of the file.
• It also has wildcard and extension search facilities, which means we can search for files by extension, for instance *.doc, *.exe etc.
So let's go step by step through how we can use the AutomatedQA tool to automate our testing process. First start the tool by clicking All Programs → AutomatedQA → TestComplete 5 → TestComplete 5. Once the tool has started you will get a screen as shown below. We first need to create a new project using the New Project menu, as shown in the below figure.
Once the project name and path are given, you will see a screen as shown below. These are the project items which we need to include in our project, depending on the testing type. Because we are currently testing a Windows application, we need to select the project items shown in the figure. Please note that Events must be selected.
Figure: - Select project items
Once you have clicked Finish you will get the TestComplete project explorer, as shown below. The project explorer is divided into three main parts: Events, Script and TestedApps. Script is where all the programming logic is present; in TestedApps we add the applications which we want to test. So let's first add the application to TestedApps. In order to add the application EXE to TestedApps, right-click the TestedApps folder and click New Item.
Figure: - Add new applications to the project
You will then see a screen as shown below. Browse to your application's EXE file and add it to the TestedApps folder.
Once the EXE is added, you can see the application below the TestedApps folder, as shown in the below figure. Currently we have added the WindowsFileSearch application. Now that the application is added, we need to start recording our test. In order to start recording, click the red button shown in the below figure or press SHIFT + F1.
Figure: - EXE seen once added successfully
Once the recording toolbar appears, right-click the added application and run your test. In this scenario you can see the Windows file search application running. Here we recorded a complete test in which we gave the folder name and keyword and then checked whether we got proper results. Your application under test may be different, so your steps can vary.
Figure: - Recording
Once the test is complete you can stop the recording using the blue button on the recording toolbar. Once you stop the recording, the tool will generate a script of all the actions you performed, as shown in the figure below; you can view the programming script there. Once the script is recorded you can run it by right-clicking it and selecting Run. When you run the script, the tool plays back all the test steps which you recorded.
(I) Can you explain how load testing conceptually works for websites?
In order to understand this, we first need to understand how websites actually work. Websites run software called a web server on the server machine. The user sends a request to the web server and receives a response. So, for instance, when you type http://www.questpond.com (well, that's my official website), the web server senses it and sends you the home page as a response. This happens each time you click on a link, do a submit etc. So if we want to do load testing, we just need to multiply these requests and responses "N" times. This is exactly what an automation tool does: it first captures the request and response, then multiplies it "N" times and shoots it at the web server, which results in load simulation.
Figure: - Concept of load testing
So once the tool captures the request and response, we just need to multiply the request and response by the number of virtual users. Virtual users are logical users which simulate actual physical users by sending the same requests and responses. If you want to do load testing with 1000 users on an application, getting 1000 physical users is practically impossible; but by using the load test tool you only need to create 1000 virtual users.
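Conceptually, what the tool does can be sketched in a few lines of Python: replay a captured request from many virtual users in parallel. The URL is a placeholder, and the standard library's urllib stands in for the tool's HTTP engine:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/"   # placeholder for the captured request
    VIRTUAL_USERS = 50

    def virtual_user(user_id):
        # Each virtual user replays the captured request and times the response.
        start = time.time()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.time() - start

    # Fire the same request from many virtual users in parallel.
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        timings = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

    print(f"{len(timings)} requests, avg {sum(timings)/len(timings):.3f}s")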
(A) Can you explain how you performed load testing using a tool?
The first step is to open a new project in TestComplete; after that, select HTTP load testing from the project types.
Figure: - Select HTTP load testing
Once you click "OK" you will see the different project items which you can add to the project. For load testing select only three, i.e. Events, HTTP Load Testing and Script, as shown below.
Figure: - Select the three items in load testing
The project has the following items: stations, tasks, tests and scripts. Stations define how many users the load test will be performed with. A task holds the captured request and response. Tests and scripts hold the script which is generated when we record the automated test.
Figure: - Load testing explorer
You need to specify the number of virtual users, the task and the browser type, like Internet Explorer, Opera etc.
Figure: - Assign number of virtual user and browser
As said previously, the basic thing in load testing is the request and response, which need to be recorded. That can be done by using the recording toolbar and clicking the icon below. Once you click the icon, you need to enter a task name.
In order to record the request and response, the tool changes the proxy settings of the browser. So you will see the below screen; just click Yes and let the next screen (changing the proxy settings) complete.
Figure: - Prompting to change proxy setting
Once the setting is changed you can start your browser and make some requests and responses. Once that is done, click on stop to end the recording.
The tool actually generates a script for the task recorded. You can see the script and the code generated as shown in the figure below. To view the code you can double-click on the Test2 script (here we have named it the test2 script).
Figure: - Test2 created
If you double click the test you can see the code below.
Normally an application has to be tested with multiple sets of data. For instance, a simple login screen, depending on the user type, will give different rights. For example, if the user is an admin he will have full rights, while a normal user will have limited rights and support will only have read-only rights. In this scenario the testing steps are the same but with different user ids and passwords. In data-driven testing, inputs to the system are read from data files like Excel, CSV (comma separated values), ODBC etc. So the values are read from these sources into variables and then the test steps are executed by the automated test.
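As a rough sketch of the idea, assuming a hypothetical users.csv file with columns userid, password and expected_rights, a data-driven test loop could look like this (the login() function here is only a stand-in for whatever step your automation tool actually performs):

    # Minimal data-driven testing sketch: test data comes from a CSV file,
    # the test steps stay the same for every row.
    import csv

    def login(userid, password):
        # Stand-in for the real login step driven by your automation tool.
        rights = {"admin": "full", "user": "limited", "support": "read-only"}
        return rights.get(userid, "none")

    with open("users.csv", newline="") as f:
        for row in csv.DictReader(f):
            actual = login(row["userid"], row["password"])
            status = "PASS" if actual == row["expected_rights"] else "FAIL"
            print("%s: expected %s, got %s -> %s"
                  % (row["userid"], row["expected_rights"], actual, status))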
This question is left to the reader. Please install the tool and try it yourself. In case of issues you can mail shiv_koirala@yahoo.com.
Testing Estimation
(B) What are the different ways of doing black box testing?
TPA is a technique to estimate the test effort for black box testing. The inputs for TPA are counts derived from function points (function points will be discussed in more detail in the next sections).
Note: - It's rare that someone will ask you to give the full details of
function points in one shot. They will rather ask about specific sections
like GSC, ILF etc. The main interest of the interviewer will be how you
used the function point value in TPA analysis. Function point analysis
is mainly done by the development team, so from a testing perspective you
only need to get the function point value and then use TPA to get the
black box testing estimates.
“This document contains material which has been extracted from the
IFPUG Counting Practices Manual. It is reproduced in this document with
the permission of IFPUG.”
Function Point Analysis was first developed by Allan J. Albrecht in the mid 1970s. It was an attempt to overcome difficulties associated with lines of code as a measure of software size, and to assist in developing a mechanism to predict the effort associated with software development. The method was first published in 1979 and again in 1983. In 1984 Albrecht refined the method, and since 1986, when the International Function Point User Group (IFPUG) was set up, several versions of the Function Point Counting Practices Manual have come out.
Following are some terms used in FPA (Function Point Analysis).
Application Boundary
The first step in FPA is defining the boundary. There are two types of major boundaries:
• Does the application have, or will it have, any other interface to maintain its data which is not developed by you? Example: your company is developing an "Accounts Application" and at the end of the accounting year you have to report to the tax department. The tax department has its own website where companies can connect and report their tax transactions. The tax department application also has maintenance and reporting screens developed by the tax software department. These maintenance screens are used internally by the tax department. So the tax online interface has another interface to maintain its data which is not in your scope; thus we can identify the tax website reporting as an external application.
• Does your program have to go through a third-party API or layer? In order for your application to interact with the tax department application, your code will probably have to interact through the tax department's API.
• The best litmus test is to ask yourself whether you have full access over the system. If you have full rights or command to change it, then it is within the internal application boundary, or else it belongs to the external application boundary.
(B) Can you explain the concept of elementary process?
(B) Can you explain the concept of static and dynamic elementary
process?
Elementary Process
As said in the introduction, FPA is about breaking huge systems into smaller pieces and analyzing them. A software application is a combination of a set of elementary processes. A dynamic elementary process moves data from the internal application boundary to the external application boundary or vice-versa; a static elementary process maintains the data of the application within the internal application boundary. Examples of dynamic elementary processes:
• An input data screen where the user inputs data into the application. Data moves from the input screen inside the application.
• Transactions exported in export files in XML or any other standard.
• Display reports which can come from the external application boundary and the internal application boundary.
(I) Can you explain the concepts of FTR, ILF, EIF, EI, EO, EQ and GSC?
Figure : - RET
Please note the whole database is one supplier ILF, as all the tables belong to one logical section. RET quantifies the relationship complexity of ILFs and EIFs.
DET (Data element types)
Following are the points to be noted for DET counting:
• Each DET should be user recognizable. For example, in the figure above we have kept an auto-increment field (Supplierid) as the primary key. From the user's point of view the Supplierid field never exists at all; it is only there from the software designing aspect, so it does not qualify as a DET.
• A DET should be a non-recursive field in the ILF. A DET should not repeat in the same ILF; it should be counted only once.
• Count foreign keys as one DET. "Supplierid" does not qualify as a DET, but its relationship in the "supplieraddress" table is counted as a DET. So "Supplierid_fk" in the supplieraddress table is counted as a DET. The same holds true for "Supplieraddressid_fk".
EI (External Input)
• It's a dynamic elementary process [for the definition see the "Dynamic and Static Elementary Process" section] in which data is received from the external application boundary. Example: user interaction screens, where data comes from the user interface to the internal application.
• An EI may maintain an ILF of the application, but it's not a compulsory rule. Example: a calculator application does not maintain any data, but the calculator screen is still counted as an EI.
• Most of the time user screens will be EIs; again, there is no hard and fast rule. Example: an import batch process running from the command line does not have a screen, but should still be counted as an EI, as it helps pass data from the external application boundary to the internal application boundary.
EQ (External Inquiry)
• It's a dynamic elementary process in which result data is retrieved from one or more ILFs or EIFs.
• In this EP some input request has to enter the application boundary.
• The output results exit the application boundary.
• An EQ does not contain any derived data. Derived data means complex calculated data; it is not mere retrieval but is combined with additional formulae to generate results. Derived data is not part of an ILF or EIF; it is generated on the fly.
• An EQ does not update any ILF or EIF.
• EQ activity should be meaningful from the user's perspective.
• The EP is self-contained and leaves the business in a consistent state.
• The DETs and processing logic are different from those of other EQs.
• Simple reports form a good base as EQs.
Note: - There is no hard and fast rule that only simple reports are EQs. Simple view functionality can also be counted as an EQ.
EO (External Output)
• It's a dynamic elementary process in which derived data crosses from the internal application boundary to the external application boundary.
• An EO can update an ILF or EIF.
• The process should be the smallest unit of activity that is meaningful to the end user in business.
• The EP is self-contained and leaves the business in a consistent state.
• The DETs are different from those of other EOs, which ensures that we do not count EOs twice.
• They have derived data or formula-calculated data.
The major difference between an EO and an EQ is the data that passes across the application boundary. Example: exporting accounts transactions to some external file format like XML or some other format, which the external accounting software can later import. The second important difference is that an EQ carries non-derived data while an EO carries derived data.
There are 14 points considered to come out with the VAF (Value Adjustment Factor) and its associated rating tables.
Data Communications
How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
Rating Description
0 Application is pure batch processing or a standalone PC.
1 Application is batch but has remote data entry or remote printing.
2 Application is batch but has remote data entry and remote printing.
3 Application includes online data collection or a TP (teleprocessing) front end to a batch process or query system.
4 Application is more than a front end, but supports only one type of TP communications protocol.
5 Application is more than a front end, and supports more than one type of TP communications protocol.
Table: - Data communications
Distributed data processing
How are distributed data and processing functions handled?
Rating Description
0 Application does not aid the transfer of data or processing function between components of the system.
1 Application prepares data for end-user processing on another component of the system, such as PC spreadsheets or a PC DBMS.
2 Data is prepared for transfer, then is transferred and processed on another component of the system (not for end-user processing).
3 Distributed processing and data transfer are online and in one direction only.
4 Distributed processing and data transfer are online and in both directions.
5 Processing functions are dynamically performed on the most appropriate component of the system.
Table: - Distributed data processing
Performance
Did the user require response time or throughput?
Rating Description
0 No special performance requirements were stated by the user.
1 Performance and design requirements were stated and reviewed, but no special actions were required.
2 Response time or throughput is critical during peak hours. No special design for CPU utilization was required. The processing deadline is for the next business day.
3 Response time or throughput is critical during all business hours. No special design for CPU utilization was required. Processing deadline requirements with interfacing systems are constraining.
4 In addition, stated user performance requirements are stringent enough to require performance analysis tasks in the design phase.
5 In addition, performance analysis tools were used in the design, development, and/or implementation phases to meet the stated user performance requirements.
Table: - Performance
Heavily used configuration
How heavily used is the current hardware platform where the application will be executed?
Rating Description
0 No explicit or implicit operational restrictions are included.
1 Operational restrictions do exist, but are less restrictive than a typical application. No special effort is needed to meet the restrictions.
2 Some security or timing considerations are included.
3 A specific processor requirement for a specific piece of the application is included.
4 Stated operational restrictions require special constraints on the application in the central processor or a dedicated processor.
5 In addition, there are special constraints on the application in the distributed components of the system.
Table: - Heavily used configuration
Transaction rate
How frequently are transactions executed: daily, weekly, monthly, etc.?
Rating Description
0 No peak transaction period is anticipated.
1 A peak transaction period (e.g., monthly, quarterly, seasonally, annually) is anticipated.
2 A weekly peak transaction period is anticipated.
3 A daily peak transaction period is anticipated.
4 High transaction rate(s) stated by the user in the application requirements or service level agreements are high enough to require performance analysis tasks in the design phase.
5 High transaction rate(s) stated by the user in the application requirements or service level agreements are high enough to require performance analysis tasks and, in addition, the use of performance analysis tools in the design, development, and/or installation phases.
Table: - Transaction rate
On-Line data entry
What percentage of the information is entered online?
Rating Description
0 All transactions are processed in batch mode.
1 1% to 7% of transactions are interactive data entry.
2 8% to 15% of transactions are interactive data entry.
3 16% to 23% of transactions are interactive data entry.
4 24% to 30% of transactions are interactive data entry.
5 More than 30% of transactions are interactive data entry.
Table: - On-line data entry
End-user efficiency
Was the application designed for end-user efficiency? There are seven end-user efficiency factors which govern how this point is rated.
Rating Description
0 None of the above.
1 One to three of the above.
2 Four to five of the above.
3 Six or more of the above, but there are no specific user requirements related to efficiency.
4 Six or more of the above, and stated requirements for end-user efficiency are strong enough to require design tasks for human factors to be included (for example, minimize keystrokes, maximize defaults, use of templates).
5 Six or more of the above, and stated requirements for end-user efficiency are strong enough to require use of special tools and processes to demonstrate that the objectives have been achieved.
Table: - End-user efficiency
On-Line update
How many internal logical files are updated by online transactions?
Rating Description
0 None.
1 Online update of one to three control files is included. The volume of updating is low and recovery is easy.
2 Online update of four or more control files is included. The volume of updating is low and recovery is easy.
3 Online update of major internal logical files is included.
4 In addition, protection against data loss is essential and has been specially designed and programmed into the system.
5 In addition, high volumes bring cost considerations into the recovery process. Highly automated recovery procedures with minimum operator intervention are included.
Table: - On-line update
Complex processing
Does the application have extensive logical or mathematical processing? Five components are considered: sensitive control (for example, special audit processing) and/or application-specific security processing; extensive logical processing; extensive mathematical processing; much exception processing resulting in incomplete transactions that must be processed again; and complex processing to handle multiple input/output possibilities.
Rating Description
0 None of the above.
1 Any one of the above.
2 Any two of the above.
3 Any three of the above.
4 Any four of the above.
5 All five of the above.
Table: - Complex processing
Reusability
Was the application developed to meet one or many users' needs?
Rating Description
0 No reusable code.
1 Reusable code is used within the application.
2 Less than 10% of the application considered
more than one user's needs.
3 Ten percent (10%) or more of the application
considered more than one user's needs.
4 The application was specifically packaged
and/or documented to ease re-use, and the
application is customized by the user at source
code level.
5 The application was specifically packaged
and/or documented to ease re-use, and the
application is customized for use by means of
user parameter maintenance.
Table: - Reusability
Installation ease
How difficult is conversion and installation?
Rating Description
0 No special considerations were stated by the user, and no special setup is required for installation.
1 No special considerations were stated by the user, but special setup is required for installation.
2 Conversion and installation requirements were stated by the user, and conversion and installation guides were provided and tested. The impact of conversion on the project is not considered to be important.
3 Conversion and installation requirements were stated by the user, and conversion and installation guides were provided and tested. The impact of conversion on the project is considered to be important.
4 In addition to 2 above, automated conversion and installation tools were provided and tested.
5 In addition to 3 above, automated conversion and installation tools were provided and tested.
Table: - Installation ease
Operational ease
How effective and/or automated are start-up, back up, and recovery procedures?
Rating Description
0 No special operational considerations other than the normal back-up procedures were stated by the user.
1-4 One, some, or all of the following items apply to the application. Select all that apply. Each item has a point value of one, except as noted otherwise.
• Effective start-up, back-up, and recovery processes were provided, but operator intervention is required.
• Effective start-up, back-up, and recovery processes were provided, and no operator intervention is required (count as two items).
• The application minimizes the need for tape mounts.
• The application minimizes the need for paper handling.
5 The application is designed for unattended operation. Unattended operation means no operator intervention is required to operate the system other than to start up or shut down the application. Automatic error recovery is a feature of the application.
Table: - Operational ease
Multiple sites
Was the application specifically designed, developed, and supported to be installed at multiple sites?
Rating Description
0 User requirements do not require considering the needs of more than one user/installation site.
1 The needs of multiple sites were considered in the design, and the application is designed to operate only under identical hardware and software environments.
2 The needs of multiple sites were considered in the design, and the application is designed to operate only under similar hardware and/or software environments.
3 The needs of multiple sites were considered in the design, and the application is designed to operate under different hardware and/or software environments.
4 A documentation and support plan is provided and tested to support the application at multiple sites, and the application is as described by 1 or 2.
5 A documentation and support plan is provided and tested to support the application at multiple sites, and the application is as described by 3.
Table: - Multiple sites
Facilitate change
Was the application specifically designed, developed, and supported to facilitate change?
Sr no Facilitate factors
0 None of the above.
1 A flexible query and report facility is provided that can handle simple requests; for example, and/or logic applied to only one internal logical file (count as one item).
2 A flexible query and report facility is provided that can handle requests of average complexity; for example, and/or logic applied to more than one internal logical file (count as two items).
3 A flexible query and report facility is provided that can handle complex requests; for example, and/or logic combinations on one or more internal logical files (count as three items).
4 Business control data is kept in tables that are maintained by the user with online interactive processes, but changes take effect only on the next business day.
5 Business control data is kept in tables that are maintained by the user with online interactive processes, and the changes take effect immediately (count as two items).
Table: - Facilitate change factors
Rating Description
0 None of the above.
1 Any one of the above.
2 Any two of the above.
3 Any three of the above.
4 Any four of the above.
5 All five of the above
Table: - Facilitate change
All the above GSCs are rated from 0-5. Then the VAF is calculated from the equation below:
VAF = 0.65 + (sum of all GSC factors / 100)
Note: - GSC has not been accepted widely in the software industry. Many
software companies use unadjusted function points rather than adjusted.
ISO has also removed GSC from its books and kept only unadjusted
function points as the base for measurement. Read more about GSC
acceptance in the software industry later in this chapter.
Rating Tables for All Elements of Function Points
Below are the lookup tables which will be referred to during counting.
EI Rating Table
Data Elements
FTR 1 to 4 5 to 15 Greater than 15
Less than 2 3 3 4
Equal to 2 3 4 6
Greater than 2 4 4 6
This table says that for any EI (External Input), given your DET count (data elements) and your FTR count (file types referenced), this is the FP (function point) contribution. For example, if your DET count exceeds 15 and your FTR count is greater than 2, then the function point count is 6. The tables below for the other elements read the same way. These tables will be in front of us when we are doing a function point count. The best approach is to put these values in Excel with formulae, so that you only have to put the quantity in the appropriate section and you get the final value. A small code sketch of such a lookup follows.
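Here is one minimal sketch of the same lookup in Python; the boundary values are taken from the EI rating table above (the EO and EQ tables below can be encoded the same way).

    # Lookup of the FP contribution from DET and FTR counts, per the rating
    # tables in this section.
    def rate(det, ftr, det_bands, rows):
        # Pick the DET column by counting how many band boundaries det exceeds.
        col = sum(1 for bound in det_bands if det > bound)
        for max_ftr, values in rows:
            if ftr <= max_ftr:
                return values[col]

    # EI table: DET bands 1-4 / 5-15 / >15 and FTR rows <2, =2, >2.
    EI_TABLE = ([4, 15], [(1, [3, 3, 4]), (2, [3, 4, 6]), (float("inf"), [4, 4, 6])])

    print(rate(16, 3, *EI_TABLE))   # DET > 15, FTR > 2 -> 6 (the example above)
    print(rate(9, 2, *EI_TABLE))    # DET between 5 and 15, FTR = 2 -> 4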
EO Rating Table
Data Elements
FTR 1 to 5 6 to 19 Greater than 19
Less than 2 4 4 5
2 or 3 4 5 7
Greater than 3 5 7 7
EQ Rating Table
Data Elements
FTR 1 to 5 6 to 19 Greater than 19
Less than 2 3 3 4
2 or 3 3 4 6
Greater than 3 4 6 6
• Count the ILF, EIF, EI, EQ, RET, DET and FTR (this is basically all the sections discussed above). This whole FP count is called the "unadjusted function point" count.
• Then put rating values 0 to 5 for all 14 GSCs. Add the totals of all 14 GSCs to come out with the total GSC value. The formula for VAF = 0.65 + (sum of all GSC factors / 100).
• Finally, calculate the adjusted function points. Formula: total function points = VAF * unadjusted function points.
• Estimate how many function points you will do per day. This is also called the "performance factor".
• On the basis of the performance factor, you can calculate man/days. A short sketch of these steps in code follows this list.
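A minimal sketch of these counting steps in Python, using the numbers from the sample customer project that follows (UFP = 31, GSC total = 22, 3 FP per day):

    # Counting steps: UFP -> VAF -> adjusted FP -> man days.
    unadjusted_fp = 31        # total of ILF + EIF + EI + EO + EQ contributions
    gsc_total = 22            # sum of all 14 GSC ratings (each 0-5)
    fp_per_day = 3.0          # performance factor

    vaf = 0.65 + gsc_total / 100.0        # value adjustment factor
    adjusted_fp = vaf * unadjusted_fp     # total (adjusted) function points
    man_days = adjusted_fp / fp_per_day

    print("VAF = %.2f" % vaf)                  # 0.87
    print("Adjusted FP = %.2f" % adjusted_fp)  # 26.97, roughly 27 FP
    print("Man days = %.1f" % man_days)        # about 9 working days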
Let's try to implement these details in a sample customer project.
ILF Customer
Description: There are a total of 9 DETs: all the add and update buttons, even the credit check button, the address list box, the active check box and all the text boxes. There is only one RET, the customer addresses.
Number of DETs: 9. Number of RETs: 1.
So according to the ILF ranking table above, the total function points = 7.
EI Update Customer
Description: There are a total of 9 DETs: all the add and update buttons, even the credit check button, the address list box, the active check box and all the text boxes. There are 3 FTRs: one is the address, the second is the credit card information and the third is the customer himself.
Number of DETs: 9. Number of FTRs: 3.
So according to the EI ranking table above, the total function points = 6.
While counting EIs I have seen many people multiplying by 3, i.e. counting the full CRUD functionality (ADD, UPDATE and DELETE) regardless of what the screen actually does. This is not fair, as it just shows the laziness of the cost estimation team. Here the customer screen has add and update, so I can say 2 * 6 = 12 FP for this EI customer. If we blindly multiplied by 3, anyone who later refers to your FP sheet would be completely lost.
The credit card check process can be complex, as the credit card API complexity is still not known. Data, that is the credit card information, crosses from the credit card system to the customer system.
So now let's add the total function points obtained from the above tables:
The unadjusted function point count comes to 31. Please note I have called this the unadjusted function point count, as we have not yet accounted for the other variance factors of the project (programmers leaving the job, the language we will use, the architecture, etc.).
In order to arrive at the adjusted function points, we have to calculate and tabulate the GSCs and come out with the VAF.
GSC Value(0-5)
Data communications 1
Distributed data processing 1
Performance 4
Heavily used configuration 0
Transaction rate 1
On-Line data entry 0
End-user efficiency 4
On-Line update 0
Complex processing 0
Reusability 3
Installation ease 4
Operational ease 4
Multiple sites 0
Facilitate change 0
Total 22
Table :- GSC
This factor affects the whole FP count like anything, so be very particular with it. So now, calculating the adjusted FP:
Adjusted FP = VAF * Total unadjusted FP = (0.65 + 22/100) * 31 = 0.87 * 31 = 26.97, i.e. roughly 27 FP.
Now we know that the complete FP for the customer GUI is 27 FP. Calculating the efficiency factor: if we say that we will complete 3 FP per day, that is 9 working days. So the whole customer GUI is 9 working days of effort (note: do not consider Saturdays and Sundays in this). I know upper management people will say make it 7 FP per day and overload the programmer. That's why the programmer works at night.
Before reading this section please refer to the different SDLC cycles in the previous chapters. The main intention of introducing this section is that quotations are heavily affected by which software life cycle you follow, because deliverables change according to the SDLC model the project manager chooses for the project. For example, for the waterfall model we will have requirement documents, design documents, source code and testing plans. For prototyping models, in addition to the documents above, we will also need to deliver the rough prototype. For the build-and-fix model no documents are delivered; the only deliverable is the source code. So deliverables change according to the SDLC model, and hence so does the quotation. We will divide the estimation across requirement, design, implementation (coding) and testing. How the estimation is divided across all the deliverables is up to the project manager and his plans.
The above sample is a total 100% distribution of effort across the various phases. But note that function points, or any other estimation methodology, only give you the total execution estimation. That is why in the above distribution we have taken coding as the 100% base. As said, it is up to the project manager to change it according to the scenario. OK, now, from the above function point estimation the execution estimate is 7 days; let's try to divide it across all the phases.
The above table shows the division of project man/days across the project phases. Now let's put down the final quotation. Just a small comment about test cases first.
Total number of test cases = (function points) raised to the power of 1.2. This is as suggested by Capers Jones. For our 27 FP example that works out to roughly 27^1.2, i.e. about 52 test cases.
20-25% of the total effort can be allocated to the testing phase. Test cases are non-deterministic: if a test passes it takes "X" amount of time, and if it does not then amending it takes "Y" amount of time.
Final Quotation
One programmer will sit on the project with around a 1000 $ salary / month, so his 10.6 days of salary come to approximately 318 dollars. The above quotation format is in its simplest form. Every company has its own quotation format, so there is no hard and fast rule for the quotation template. But if you are interested, http://www.microsoft.com/mac/resources/templates.aspx?pid=templates has a good collection of decent templates.
CustomerSampleFP.xls is provided on the CD; it has all the estimation details, which you can refer to for a practical approach. If you have downloaded the e-book then you will need to download cddata.zip for all the files.
GSC factors have always been a controversial topic. Most software companies do not use GSC; rather they baseline UAFP or construct their own tables depending on company project history. ISO has also adopted function points as a unit of measurement, but they too use UAFP rather than AFP. Let's do a small experiment to view the relationship between FP, AFP, GSC and VAF. In this experiment we will assume UAFP = 120 and then plot a graph with GSC increments of five. So the formula is VAF = 0.65 + (GSC/100). Here's the table with every five-point incremental value in the formula, and the plot.
FP GSC
78 0
84 5
90 10
96 15
102 20
108 25
114 30
120 35
126 40
132 45
138 50
144 55
150 60
156 65
162 70
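The table above can be reproduced with a few lines of Python, assuming UAFP = 120 as stated:

    # Reproduce the FP vs GSC table: AFP = UAFP * (0.65 + GSC/100),
    # stepping GSC in increments of five.
    UAFP = 120
    for gsc in range(0, 75, 5):
        vaf = 0.65 + gsc / 100.0
        print(gsc, round(UAFP * vaf))   # GSC value and the resulting AFP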
The following are the observations from the table and plot:
• The graph is linear. It also captures that the nature of the complexity adjustment is linear.
• If the GSC value is zero then VAF is 0.65, so the graph starts from UAFP * 0.65. When GSC = 35, AFP = UAFP, i.e. VAF = 1.
• When GSC < 35 then AFP < UAFP; the adjustment decreases the count.
• When GSC > 35 then AFP > UAFP; the adjustment increases the count.
Readers must be wondering: why 0.65? There are fourteen GSC factors, each rated from zero to five, so the maximum GSC total is 70 and the maximum VAF = 0.65 + (70/100) = 1.35. In order that the VAF has no effect, i.e. UAFP = FP, the VAF should be one. The VAF is one when GSC is 35, i.e. half of 70, because (35/100) gives 0.35, and to complete the value "1" the constant "0.65" is required.
But the following is the main problem related to GSC: GSC is applied across the whole FP count even when some GSC factors do not apply to all the function points. Here's an example to demonstrate the GSC problem.
Let's take the 11th GSC factor, "installation ease". The project is of 100 UAFP, and there was no consideration of installation previously by the client, so the 11th factor is zero.
VAF = 0.65 + (18/100) = 0.83. So FP = 100 * 0.83 = 83 function points. But later the client demanded a full-blown installation for the project, with auto-updating when a new version is released. So we change our GSC entry for installation ease to 5, which gives VAF = 0.65 + (23/100) = 0.88 and FP = 100 * 0.88 = 88 function points. The whole application count has risen even though installation ease affects only a part of the application.
Major software projects fail not because of programmers or project managers but due to moody and changing customers. In one of our huge projects we had good, very enthusiastic programmers. The project started off well, but the customer called ten times a day to change something or other. Believe me, programmers get annoyed if the customer changes his plans every fortnight. From this book's point of view we have to estimate these changes, which can be additions or deletions of requirements. The function point group has come out with a methodology called "Enhancement Function Points".
ADD: - These are the new function points added. This value is achieved by counting all the new EPs (elementary processes) given in the change request.
CHGA: - These are the function points which are affected due to the CR. This value is achieved by counting all the DETs, FTRs, ILFs, EIs, EOs and EQs which are affected. Do not count elements which are not affected.
VAFA: - This is the VAF which results from the CR. Example: previously the application was desktop-based and now it is changed to the web, so the GSC factors are affected.
DELFP: - When the CR is for removing some functionality, this value is counted. It's rare that a customer removes functionality (at least in India), but if they ever do, the estimator has to take note of it by counting the deleted elementary processes.
Once we are through with calculating the enhanced function points, it's time to count the total function points of the application after the enhancement.
ADD: - Newly added functionality which leads to new function points after enhancements.
(A) Can you explain on what basis TPA actually works?
There are three main elements which determine the estimate for black box testing: size, test strategy and productivity. Using these three elements we can determine the estimate for black box testing for a given project. So let's understand these elements step by step.
Size: - The most important thing in estimation is definitely the size of the project. The size of a project is mainly defined by the number of function points. But a function point count fails to consider, or pays the least importance to, the following factors:
• Interfacing: - How much does one function affect other parts of the system? If a function with high impact is modified, then the other systems have to be tested accordingly.
• Uniformity: - How reusable is the application? It is important to consider how many similarly structured functions exist in the system, and the extent to which the system allows testing with slight modifications.
Test strategy: - Every project has certain requirements, and the importance of these requirements also affects testing estimates. A requirement's importance is judged from two perspectives: user importance and user usage. Depending on these two characteristics a requirement rating can be generated and a strategy chalked out accordingly, which also means the estimates vary accordingly.
The below diagram shows the complete picture of the different elements which constitute TPA analysis, as discussed in the previous section.
Figure: - TPA parameters
In order to really answer this question, let's do one complete estimation practically for a sample project. Below is a simple accounting application developed for http://www.questpond.com (that's my official website) to track its sales. The first screen is a voucher entry screen. It's a normal, simple voucher entry screen with extra functionality to print the voucher. The second screen is a master screen to add accounting codes.
Figure: - Accounting application
1) The account code entered in the voucher entry screen should be a valid account
code from the defined chart of accounts by the customer.
2) User should be able to add, delete and modify account code from the chart of
account master (This is what exactly the second screen defines).
3) User will not be able to delete chart of account code if he has already entered
transactions for the same in vouchers.
4) Chart of account code master will consist of account code and description of the
account code.
5) Account code cannot be greater than 10 characters.
6) The voucher data entry screen consists of debit account code, credit account code,
date of transaction and amount.
7) Once the user enters a voucher data he should be able to print the same in future
any time.
8) The debit and credit account codes are compulsory.
9) The amount value should not be negative.
10) After pressing submit, the value should be seen in the grid.
11) Amount is compulsory and should be more than zero.
12) The debit and credit accounts should be equal in value.
13) Only numeric and non-negative values are allowed in the amount field.
14) Two types of entry are allowed, i.e. sales and commission.
15) Date, amount and voucher number are compulsory.
16) Voucher numbers should be serial, and the system should auto-increment the voucher number with every voucher added.
17) No back-dated entries older than one month are allowed.
18) Users should be able to access data from separate geographical locations. For instance, if one user is working in India and the other in China, then both users should be able to access each other's data from their respective locations.
Now that we have all the requirements, let's try to estimate how we can use TPA to get the actual man days. The figure below shows our road map of how we will achieve this using TPA. There are in all ten steps.
Figure: - TPA steps.
Step 1- Calculate function points
Note: - You will not be able to follow this if you have not read the
function points explanation previously.
EI calculation
Below are the EI entries for the accounting application. Currently we have two screens: one is the master screen and the other is the voucher transaction screen. In the description we have also noted which DETs we have considered. For the add voucher screen we have 7 DETs (note that the buttons are also counted as DETs) and for the account code master we have 4 DETs.
Figure: - EI for the accounting application
EIF
There are no EIFs in the system because we do not communicate with any external application.
EO
EOs are nothing but complex reports. In our system we have three complex reports: trial balance, profit and loss, and balance sheet. By default we have assumed 20 fields, which makes each of them a complex report (when we do estimation, such assumptions are sometimes fine).
EQ
EQs are nothing but simple output sent from inside the application to the external world. For instance, a simple report is a typical EQ. In our current accounting application we have one simple print, the print voucher. We have assumed 20 DETs for it so that we can move ahead with the calculation.
GSC calculation
As said in the FPA tutorial previously, the GSC factors define the other aspects of a project which the FP counting does not accommodate. For the accounting application we have kept all the GSC factors at 1, except for two: data communications and performance. We have kept data communications at 2 because one of the requirement points is that application data must be accessible from multiple centers, which increases the data communication complexity. The other GSC factor we have considered a bit complex is performance, because one of the requirements of the end customer is that performance should be moderately good. The figure below shows the GSC entries.
Total calculation
Now that we have filled in all the details, we need to calculate the total man days. Below is the image which explains how the calculations are done. The first five rows, i.e. ILF, EIF, EO, EQ and EI, are nothing but the totals of the individual entries. The total unadjusted function points are the total of ILF + EIF + EO + EQ + EI. We get the total adjusted function points, which is nothing but the total unadjusted function points multiplied by the GSC factor (VAF). Depending on the organization's baseline, we define how many FPs can be completed by a programmer in one day; for instance, for this accounting application we have put 1.2 FP per day. From the FP per day we get the total man days. Once we have the total man days we distribute these values across the phases. One very important thing we have just got is the total execution time, so we have assigned the total man days to the execution phase. From the execution phase man days we distribute 20 percent to the requirement phase, 20 percent to technical design and 5 percent to testing.
Note: - We will answer a short and sweet question in the middle of this
explanation.
The testing estimate derived from function points is actually the estimate for white box testing only. So in the below figure the 1.3 man days is actually the estimate for white box testing of the project. It does not take into account black box testing estimation.
Total acceptance test cases = total adjusted function points multiplied by 1.2.
Now that we have completed the function point analysis for this project, let's move to the second step: calculating the black box testing estimate using TPA.
Figure: - Df calculated
But we have still not seen how Df is calculated. Df is calculated using four inputs: user importance, usage intensity, interfacing and complexity. The figure below shows the different inputs in a pictorial manner. All four factors are rated Low, Normal or High and assigned to each function factor derived from the function points. So let's understand these factors step by step.
User importance (Ue): - How important is this function factor to the user compared to other function factors? The figure below shows how they are rated. Voucher data, print voucher and add voucher are rated with high user importance; without these the user cannot work at all. Reports have been rated low because they do not really stop the user from working; they are definitely of high importance, but not as high as add voucher and print voucher. The chart of accounts master is rated low because master data is something which is added one time and can also be added from the back end.
Usage intensity (Uy): - This factor tells how many users use the function and how often. The figure below shows how we have assigned the values to each function factor. Add voucher, print voucher and voucher data are the most used function factors, hence they are rated high. All other function factors are rated low.
Interfacing (I): - This factor defines how much impact this function factor has on other parts of the system. But how do we find the impact? In TPA the concept of the LDS affected is used to determine the interfacing rating. LDS means logical data source. In our project we have two logical data sources: one is the voucher data and the other is the account code data (i.e. the chart of accounts data). Following are the important points to be noted which determine the interfacing:
Below is the table which defines the complexity level according to the number of LDSs and the functions impacting the LDSs.
So now, depending on the two points defined above, let's try to find out the interfacing value for our accounting project. As said previously, we have two functions which modify LDSs in our project: one is Add Voucher, which affects the voucher data, and the other is Add Account Code, which affects the chart of accounts code (i.e. the accounts code master). So first let's see the Add Voucher function; below is the diagram which explains in detail which LDSs and functions are involved. Add voucher primarily affects the voucher data LDS, but other functions like the reports and print also use this LDS. So in total there are five functions and one LDS. Now, looking at the number-of-LDS and number-of-functions table, the impact complexity factor is LOW.
The other function which does modification is Add Account Code. The LDS affected is the chart of accounts code, and the function which modifies it is the Add Account Code function. Other functions indirectly use this LDS: for instance, the reports need to access the account code, the print voucher uses the account code to print the account description, and the Add Voucher function uses the chart of accounts code LDS to validate whether the account code is proper. So again we look at the lookup table, and the impact complexity factor is AVERAGE.
Figure : - Add account code LDS and functions
The other function factors do not modify any data, so we give them a LOW rating. The interfacing complexity factors assigned are shown below.
Figure: - Interfacing
Complexity (C): - This factor defines how complex the algorithm for the particular function factor is. Add voucher is the most complex functionality in the project, and it can have more than eleven conditions, so we have rated its complexity factor the highest. Reports are moderately complex and can be rated as averagely complex. So, as discussed, we have assigned values accordingly, as shown in the figure below.
Figure: - Complexity
Uniformity (U): - This factor defines how reusable the system is. For instance, if a test case written for one function can be applied again, then it affects the testing estimates accordingly. For this project we have taken a uniformity factor of 1. If, for example, the customer had a requirement to also update account codes, we would have had two functions, i.e. add voucher and update voucher, whose test cases are the same with only minimal changes, and the uniformity factor would reflect that reuse.
Figure: - Uniformity
Once we have all the five factors, we apply the formula below to calculate Df for each function factor:
Df = [(Ue + Uy + I + C)/16] * U
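As a small sketch, the Df calculation can be coded as below. The numeric values behind the Low/Normal/High ratings are illustrative assumptions here; the actual values come from the TPA rating figures.

    # Df = ((Ue + Uy + I + C) / 16) * U, per the formula above. The rating
    # values below are assumed for illustration only.
    RATING = {"low": 3, "normal": 4, "high": 6}

    def dependency_factor(ue, uy, i, c, u=1.0):
        total = RATING[ue] + RATING[uy] + RATING[i] + RATING[c]
        return (total / 16.0) * u

    # Add voucher: high importance, high usage, low interfacing, high complexity.
    print(dependency_factor("high", "high", "low", "high"))  # (6+6+3+6)/16 = 1.3125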
Step 3: - Calculate Qd
The third step is to calculate Qd. Qd, i.e. the dynamic quality characteristic, has two parts: the explicit characteristics (Qde) and the implicit ones (Qdi). Qde has five important characteristics: functionality, security, suitability, performance and portability. The diagram below shows how we rate them. Qdi defines the implicit characteristic part of Qd. These are not standard and vary from project to project. For instance, for this accounting application we have identified four characteristics: user friendliness, efficiency, performance and maintainability. For whichever of these characteristics is important we assign a value of 0.02. You can see from the figure below that for user friendliness we have assigned the 0.02 value; the others are left out. In the Qde part we have given functionality normal importance and performance relatively unimportant, but we do need to account for it. Once we have Qde and Qdi, then Qd = Qde + Qdi. For this sample you can see that the total value of Qd is 1.17 (which is obtained from 1.15 + 0.02).
Qd is calculated by multiplying each rating by its corresponding value. The first table below shows the ratings and the one after it shows the actual values. So the 1.15 comes from the formula shown below.
Figure: - Qd ratings
Figure: - Calculation of Qd (Dynamic characteristic)
In this step we calculate TPf (the number of test points assigned to each function). This is done using the three data values calculated till now (FPf, Df and Qd); below is the formula:
TPf = FPf * Df * Qd
Because we are using the Excel sheet these calculations happen automatically. The figure below shows how the TPf calculations are done.
Figure: - Calculation of TPf
In this step we take into account the static quality characteristics of the project. This is done by defining a checklist, and if the test team needs to consider an item on it, we assign a value of 16 to that property. For this project we have only considered "easy to use" as a criterion and hence assigned 16 to it.
Figure: - Qs calculation
Now that we have the TPf values for all function factors, the FP count and Qs (the static test point data), it's time to calculate Tp (the total number of test points):
Tp = sum(TPf) + (FP * Qs / 500)
For the accounting system the total Tp = 71.21 (use a calculator to check it yourself; it just makes the concept easier to understand). Below is the figure which shows how the total Tp is derived.
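In code the same formula is a one-liner; the TPf values, FP count and Qs below are placeholders, since the real numbers come from the earlier steps:

    # Tp = sum(TPf) + FP * Qs / 500, per the formula above.
    def total_test_points(tpf_per_function, fp, qs):
        return sum(tpf_per_function) + fp * qs / 500.0

    tpf_values = [12.4, 9.8, 7.1]                       # hypothetical TPf values
    print(total_test_points(tpf_values, fp=51, qs=16))  # 29.3 + 1.632 = 30.932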
The productivity / skill factor shows the number of test hours needed per test point. It's a measure of the experience, knowledge and expertise of the team and its ability to perform. The productivity factor varies from project to project and also from organization to organization. For instance, if we have a project team with many seniors then productivity increases, but if we have freshers who are just learning then productivity definitely decreases. The higher the productivity factor, the higher the number of test hours required.
For this project we have considered that we have good resources with great ability, so we have entered a value of 1.50.
The number of test hours for each test point is influenced not only by skills but also by the environment in which those resources work. The figure below shows the different environmental factors; you can also see the rating table for every environmental factor.
Figure: - Testware
Primary test hours are the product of the test points, the skill factor and the environmental factor. The formula below shows the concept in more detail:
Every process also involves planning and management activities, and we need to take these into account too. Planning and management is affected by two important factors: team size and management tools. Below are the rating sheets for team size and management tools. Both these values are summed, and that percentage of the primary test hours is then added on top.
Finally we distribute the total across the phases. So the total black box testing effort for this project is 101.73 man hours, which is approximately 13 man days.
Figure: - Distribution over phases
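As a rough sketch of these last two steps, assuming the planning and management allowance is applied as a simple percentage on top of the primary test hours (the skill factor, environmental factor and percentage below are illustrative values, not the ones from the book's sheets):

    # Primary test hours = Tp * skill factor * environmental factor; planning
    # and management then add a percentage on top. All inputs are illustrative.
    def total_test_hours(tp, skill, environment, planning_pct):
        primary = tp * skill * environment
        return primary * (1 + planning_pct / 100.0)

    hours = total_test_hours(tp=71.21, skill=1.50, environment=0.85, planning_pct=12)
    print("%.2f man hours, about %d man days" % (hours, round(hours / 8)))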
Note: - It's not possible to cover all scenarios in this book. Below
are the questions which are left out; we will definitely try to cover
them in the second edition. Till then, if you can send some really
good and precise answers to me at shiv_koirala@yahoo.com we will
definitely highlight your name in this book beside the answer.
What is WinRunner ?
WinRunner is suitable for which type of applications?
What are the different types of recording available in WinRunner?
What's the difference between Context Sensitive and Analog recording?
What are the limitations and advantages of WinRunner?
Can you explain the concept of synchronization in WinRunner?
What is the concept of checkpoints in WinRunner?
Explain the difference between WinRunner and TestDirector?
Explain the project tree in TestDirector?
How does WinRunner identify GUI objects?
Can you explain data parameterization in WinRunner ?
Can you do database testing using WinRunner ?
How is exception handling done using WinRunner ?
What is the concept of data caching in automated testing?
Interview questions
The questions in this section are not from software testing. But we have included them specifically so you have a fair idea about what's asked in other technologies as well. If you want to buy those books you can get them from any BPB book shop mentioned on this book's first page, or else you can mail bpb@bol.net.in or bpb@vsnl.com.
NET Interoperability
How can we use COM Components in .NET (Twist : What is RCW ?)?
Once I have developed the COM wrapper, do I still have to register the COM component in the registry?
How can we use .NET components in COM (Twist :- What is CCW (COM callable
wrapper) ?, What caution needs to be taken in order that .NET components is
compatible
with COM ?) ?
How can we make Windows API calls in .NET?
When we use the Windows API in .NET, is it managed or unmanaged code ?
What is COM ?
What is Reference counting in COM ?
Can you describe the IUnknown interface in short ?
Can you explain what is DCOM ?
How do we create DCOM object in VB6?
How to implement DTC in .NET ?
How many types of Transactions are there in COM + .NET ?
How do you do object pooling in .NET ?
What are types of compatibility in VB6?
Threading
What is multi-tasking ?
What is multi-threading ?
What is a Thread ?
Did VB6 support multi-threading ?
Can we have multiple threads in one App domain ?
Which namespace has threading ?
Can you explain in brief how can we implement threading ?
How can we change priority and what the levels of priority are provided by .NET ?
What does the AddressOf operator do in the background ?
How can you reference current thread of the method ?
What's Thread.Sleep() in threading ?
How can we make a thread sleep for infinite period ?
What is Suspend and Resume in Threading ?
What's the way to stop a long running thread ?
How do I debug threads ?
What's Thread.Join() in threading ?
What are Daemon thread's and how can a thread be created as Daemon?
When working with shared data in threading how do you implement synchronization
?
Can we use events with threading ?
How can we know a state of a thread?
What is a monitor object ?
What are wait handles ? (Twist :- What is a mutex object ?)
What are ManualResetEvent and AutoResetEvent ?
What is ReaderWriter Locks ?
How can you avoid deadlock in threading ?
What’s difference between thread and process?
Caching Concepts
What is application object ?
What’s the difference between Cache object and application object ?
How can we get access to the cache object ?
What are dependencies in cache and types of dependencies ?
Can you show a simple code showing file dependency in cache ?
What is Cache Callback in Cache ?
What is scavenging ?
What are different types of caching using cache object of ASP.NET?
How can you cache different version of same page using ASP.NET cache object ?
How will you implement Page Fragment Caching ?
What are ASP.NET session and compare ASP.NET session with classic ASP session
variables?
What are the various modes of storing ASP.NET session ?
Is Session_End event supported in all session modes ?
What are the precautions you will take in order that StateServer Mode work properly
?
What are the precautions you will take in order that SQLSERVER Mode work
properly ?
Where do you specify session state mode in ASP.NET ?
What are the other ways you can maintain state ?
What are benefits and Limitation of using Hidden fields ?
What is ViewState ?
Do performance vary for viewstate according to User controls ?
What are benefits and Limitation of using Viewstate for state management?
How can you use Hidden frames to cache client data ?
What are benefits and Limitation of using Hidden frames?
What are benefits and Limitation of using Cookies?
What is Query String and What are benefits and Limitation of using Query Strings?
OOPS
What is Object Oriented Programming ?
What’s a Class ?
What’s a Object ?
What’s the relation between Classes and Objects ?
What are different properties provided by Object-oriented systems ?(Twist :- Can you
explain different properties of Object Oriented Systems?)(Twist :- What’s difference
between Association , Aggregation and Inheritance relationships?)
How can we achieve inheritance in VB.NET ?
What are abstract classes ?
What’s a Interface ?
What is difference between abstract classes and interfaces?
What is a delegate ?
What are event’s ?
Do events have return type ?
Can event’s have access modifiers ?
Can we have shared events ?
What is shadowing ?
What’s difference between Shadowing and Overriding ?
What’s difference between delegate and events?
If we inherit a class do the private variables also get inherited ?
What are different accessibility levels defined in .NET ?
Can you prevent a class from overriding ?
What’s the use of “MustInherit” keyword in VB.NET ?
Why can not you specify accessibility modifier in Interface ?
What are similarities between Class and structure ?
What’s a difference between Class and structure’s ?
What does virtual keyword mean ?
What are shared (VB.NET)/Static(C#) variables?
What is Dispose method in .NET ?
What's the use of the "Overrides" and "Overridable" keywords ?
Where are all .NET Collection classes located ?
What is ArrayList ?
What’s a HashTable ?(Twist :- What’s difference between HashTable and ArrayList ?
)
What are queues and stacks ?
What is ENUM ?
What are nested classes ?
What’s Operator Overloading in .NET?
In the below sample code, if we create an object of class2, which constructor will fire first ?
What’s the significance of Finalize method in .NET?
Why is it preferred to not use finalize for clean up?
How can we suppress a finalize method?
What’s the use of DISPOSE method?
How do I force the Dispose method to be called automatically, as clients can forget to
call Dispose method?
In what instances you will declare a constructor to be private?
ASP.NET
What’s the sequence in which ASP.NET events are processed ?
In which event are the controls fully loaded ?
How can we identify that the Page is PostBack ?
How does ASP.NET maintain state in between subsequent request ?
What is event bubbling ?
How do we assign page specific attributes ?
Administrator wants to make a security check that no one has tampered with
ViewState , how can we ensure this ?
What’s the use of @ Register directives ?
What’s the use of SmartNavigation property ?
What is AppSetting Section in “Web.Config” file ?
Where is ViewState information stored ?
What’s the use of @ OutputCache directive in ASP.NET?
How can we create custom controls in ASP.NET ?
How many types of validation controls are provided by ASP.NET ?
Can you explain what is “AutoPostBack” feature in ASP.NET ?
How can you enable automatic paging in DataGrid ?
What’s the use of “GLOBAL.ASAX” file ?
What’s the difference between “Web.config” and “Machine.Config” ?
What’s a SESSION and APPLICATION object ?
What’s difference between Server.Transfer and response.Redirect ?
What’s difference between Authentication and authorization?
What is impersonation in ASP.NET ?
Can you explain in brief how the ASP.NET authentication process works?
What are the various ways of authentication techniques in ASP.NET?
How does authorization work in ASP.NET?
What’s difference between Datagrid , Datalist and repeater ?
From performance point of view how do they rate ?
What’s the method to customize columns in DataGrid?
How can we format data inside DataGrid?
How will you decide the design consideration to take a DataGrid, DataList or Repeater ?
Difference between ASP and ASP.NET?
What are major events in GLOBAL.ASAX file ?
What order they are triggered ?
Do session use cookies ?
How can we force all the validation controls to run ?
How can we check if all the validation controls are valid and proper ?
If you have client side validation is enabled in your Web page , Does that mean server
side code is not run?
Which JavaScript file is referenced for validating the validators at the client side ?
How to disable client side script in validators?
I want to show the entire validation error message in a message box on the client
side?
You find that one of your validation is very complicated and does not fit in any of the
validators , so what will you do ?
What is Tracing in ASP.NET ?
How do we enable tracing ?
What exactly happens when ASPX page is requested from Browser?
How can we kill a user session ?
How do you upload a file in ASP.NET ?
How do I send email messages from ASP.NET ?
What are different IIS isolation levels?
.NET Architecture
What are design patterns ?
What’s difference between Factory and Abstract Factory Pattern’s?
What’s MVC pattern? (Twist: - How can you implement MVC pattern in ASP.NET?
)
How can we implement singleton pattern in .NET?
How do you implement prototype pattern in .NET?(Twist: - How to implement
cloning in .NET ? , What is shallow copy and deep copy ?)
What are the situations you will use a Web Service and Remoting in projects?
Can you give a practical implementation of FAÇADE patterns?
How can we implement observer pattern in .NET?
What is three tier architecture?
Have you ever worked with Microsoft Application Blocks, if yes then which?
What is Service Oriented architecture?
ADO.NET
What is the namespace in which .NET has the data functionality classes ?
Can you give a overview of ADO.NET architecture ?
What are the two fundamental objects in ADO.NET ?
What is difference between dataset and datareader ?
What are major difference between classic ADO and ADO.NET ?
What is the use of connection object ?
What is the use of command objects and what are the methods provided by the
command object ?
What is the use of dataadapter ?
What are basic methods of Dataadapter ?
What is Dataset object?
What are the various objects in Dataset ?
How can we connect to Microsoft Access , Foxpro , Oracle etc ?
How do we connect to SQL SERVER , which namespace do we use ?
How do we use stored procedure in ADO.NET and how do we provide parameters to
the stored procedures?
How can we force the connection object to close after my datareader is closed ?
I want to force the datareader to return only the schema of the datastore rather than data ?
How can we fine tune the command object when we are expecting a single row or a
single value ?
Which is the best place to store connectionstring in .NET projects ?
What are steps involved to fill a dataset ?(Twist :- How can we use dataadapter to fill
a dataset ?)
What are the various methods provided by the dataset object to generate XML?
How can we save all data from dataset ?
How can we check that some changes have been made to dataset since it was loaded
?(Twist :- How can cancel all changes done in dataset ? ,How do we get changed
value dataset ? )
How add/remove row’s in “DataTable” object of “DataSet” ?
What’s basic use of “DataView” ?
What’s difference between “DataSet” and “DataReader” ?
How can we load multiple tables in a DataSet ?
How can we add relations between tables in a DataSet?
What’s the use of CommandBuilder ?
What’s difference between “Optimistic” and “Pessimistic” locking ?
How many ways are there to implement locking in ADO.NET?
How can we perform transactions in .NET?
What’s the difference between DataSet.Clone and DataSet.Copy?
SQL SERVER
What is normalization? What are the different types of normalization?
What is denormalization ?
What is a candidate key ?
What are the different types of joins and what’s the difference between them?
What are indexes and What is the difference between clustered and nonclustered
indexes?
How can you increase SQL performance ?
What is the use of OLAP ?
What's a measure in OLAP ?
What are dimensions in OLAP ?
What are levels in dimensions ?
What are fact tables and dimension tables in OLAP? (Twist: - Can you explain the star schema for OLAP?)
What is DTS?
What is fill factor? (Twist: - When does a page split occur?)
What is RAID and how does it work ?
What's the difference between the DELETE and TRUNCATE TABLE commands?
What are the problems that can occur if you do not implement locking properly in
SQL SERVER ?
What are the different transaction isolation levels in SQL SERVER? (Twist: - What are the different types of locks in SQL SERVER?)
What are different locks in SQL SERVER ?
Can we suggest locking hints to SQL SERVER ?
What is LOCK escalation?
What are the different ways of moving data/databases between servers and databases
in SQL Server?
What are the advantages of SQL Server 2000 over SQL Server 7.0?
What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?
What is the difference between the UNION and UNION ALL SQL syntax?
How can you raise custom errors from a stored procedure?
What are the ACID fundamentals and what are transactions in SQL SERVER?
What is DBCC?
What is the purpose of replication?
What are the different types of replication supported by SQL SERVER?
What is BCP utility in SQL SERVER ?
What are the different types of triggers in SQL SERVER 2000?
If we have multiple AFTER triggers on a table, how can we define the sequence of the triggers?
What is SQL injection?
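For the SQL injection question above, interviewers usually expect the parameterized-query answer: never concatenate user input into SQL text. A minimal JDBC sketch of that style, assuming a hypothetical users table and an already-open Connection (names are illustrative only):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SafeLookup {
        // Hypothetical lookup against a "users" table; the "?" placeholder
        // keeps user-supplied input out of the SQL text, which is the
        // standard defence against SQL injection.
        public static String findEmail(Connection con, String userName) throws SQLException {
            String sql = "SELECT email FROM users WHERE user_name = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, userName); // value is bound, never concatenated
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }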
UML
What is UML?
How many types of diagrams are there in UML? (Twist: - Explain in short all types of diagrams in UML?)
What are advantages of using UML?(Twist: - What is Modeling and why UML ?)
What’s the sequence of UML diagrams in project?(Twist: - How did you implement
UML in your project?)
Just a small Twist: - Do I need all UML diagrams in a project?
Give a brief explanation of all the elements in activity diagrams?
Explain Different elements of a collaboration diagram ?
Explain Component diagrams ?
Explain all parts of a deployment diagram?
Describe various components in sequence diagrams?
What are the elements in State Chart diagrams?
Describe different elements in Static Chart diagrams ?
Explain the different elements of a Use Case? (Twist: - What’s the difference between Activity and Sequence diagrams? (I leave this to the readers))
Project Management
What is project management?
Is spending in IT projects constant throughout the project?
Who is a stakeholder ?
Can you explain the project life cycle? (Twist: - How many phases are there in a software project?)
Are risks constant throughout the project?
Can you explain different software development life cycles ?
What is triple constraint triangle in project management ?
What is a project baseline?
What is effort variance?
How is a project management plan document normally organized?
How do you estimate a project?
What is CAR (Causal Analysis and Resolution)?
How do you handle change request?
What is internal change request?
What is the difference between SITP and UTP in testing?
What software have you used for project management?
What are the metrics followed in project management?(Twist: - What metrics will
you look at in order to see the project is moving successfully?)
You have people in your team who do not meet their deadlines or do not perform; what actions will you take? (Twist: - Two of your resources have conflicts between them; how would you sort it out?)
How do you start a project?
How did you do resource allocations?
How do you do code reviews ?
SQL
Revisiting basic syntax of SQL?
What are “GRANT” and “REVOKE” statements?
What are CASCADE and RESTRICT in DROP TABLE SQL?
What are the DDL, DML, and DCL concepts in the RDBMS world?
What are different types of joins in SQL?
What is “CROSS JOIN”?
How do you select the first record in a given set of rows?
How do you sort in SQL?
How do you select unique rows using SQL?
Can you name some aggregate functions in SQL Server?
What is the default “SORT” order for a SQL query?
What is a self-join?
What's the difference between DELETE and TRUNCATE ?
Select addresses which are between ‘1/1/2004’ and ‘1/4/2004’?
What are Wildcard operators in SQL Server?
What’s the difference between “UNION” and “UNION ALL” ?
What are cursors and what are the situations you will use them?
What are the steps to create a cursor?
What are the different Cursor Types?
What are “Global” and “Local” cursors?
What is “Group by” clause?
What is ROLLUP?
What is CUBE?
What is the difference between “HAVING” and “WHERE” clause?
What is “COMPUTE” clause in SQL?
What is “WITH TIES” clause in SQL?
What does the “SET ROWCOUNT” syntax achieve?
What is a Sub-Query?
What are “Correlated Subqueries”?
What are the “ALL” and “ANY” operators?
What is a “CASE” statement in SQL?
What does the COLLATE keyword in SQL signify?
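The GROUP BY/HAVING/WHERE distinction asked above is easiest to see in a concrete query. A small sketch, assuming a hypothetical orders table with customer_id and amount columns, run here through plain JDBC (the JDBC URL is a placeholder):

    import java.sql.*;

    public class GroupByDemo {
        public static void main(String[] args) throws SQLException {
            // Hypothetical JDBC URL; substitute your own driver and credentials.
            try (Connection con = DriverManager.getConnection("jdbc:your-db-url");
                 Statement st = con.createStatement();
                 // WHERE filters individual rows *before* grouping;
                 // HAVING filters the grouped aggregates *after* grouping.
                 ResultSet rs = st.executeQuery(
                     "SELECT customer_id, SUM(amount) AS total " +
                     "FROM orders " +
                     "WHERE amount > 0 " +             // row-level filter
                     "GROUP BY customer_id " +
                     "HAVING SUM(amount) > 1000")) {   // group-level filter
                while (rs.next()) {
                    System.out.println(rs.getInt("customer_id") + " -> " + rs.getLong("total"));
                }
            }
        }
    }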
.NET Integration
ADO.NET
Notification Services
Service Broker
XML Integration
What is XML?
What is the version information in XML?
What is ROOT element in XML?
If XML does not have a closing tag, will it work?
Is XML case sensitive?
What’s the difference between XML and HTML?
Is XML meant to replace HTML?
Can you explain why your project needed XML?
What is DTD (Document Type definition)?
What is well formed XML?
What is a valid XML?
What is CDATA section in XML?
What is CSS?
What is XSL?
What are elements and attributes in XML?
Can we define a column as XML?
How do we specify the XML data type as typed or untyped?
How can we create the XSD schema?
How do I insert into a table which has an XSD schema attached to it?
What is the maximum size for the XML datatype?
What is Xquery?
What are XML indexes?
What are secondary XML indexes?
What is FOR XML in SQL Server?
Can I use FOR XML to generate SCHEMA of a table and how?
What is the OPENXML statement in SQL Server?
I have a huge XML file which I want to load into the database; how do I do it?
How do you call a stored procedure using HTTP SOAP?
What is XMLA ?
Replication
Reporting Services
Can you explain how can we make a simple report in reporting services?
How do I specify stored procedures in Reporting Services?
What is the architecture of “Reporting Services”?
Database Optimization
Threading
JDBC
EJB
What is EJB?
What are the different kinds of EJBs?
You are designing the architecture for a project; how do you decide whether you should use a session, entity, or message-driven bean?
Can you explain “EJBHome” and “EJBObject” in EJB?
Can a client directly create an object of a session or entity bean?
Can you explain the concept of local interfaces?
What are the limitations of using Local object?
Which application server have you used for EJB ?
Can you explain step by step practically developing and deploying EJB component?
What are passivation and activation in EJB?
Can beans that are involved in a transaction go through the “Passivation” process?
How does the server decide which beans to passivate and activate?
In what format is the conversational data written to the disk?
Can you explain in brief the life cycle of Stateless and Stateful beans?
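As a quick anchor for the “EJBHome” and “EJBObject” questions above, here is a minimal EJB 2.x-style stateless session bean sketch; the interface and bean names are hypothetical, and deployment descriptors are omitted:

    import java.rmi.RemoteException;
    import javax.ejb.*;

    // EJBObject: the remote interface a client actually calls.
    interface Hello extends EJBObject {
        String greet(String name) throws RemoteException;
    }

    // EJBHome: the factory interface the client looks up via JNDI.
    interface HelloHome extends EJBHome {
        Hello create() throws CreateException, RemoteException;
    }

    // The bean class itself; the container, not the client, instantiates it.
    public class HelloBean implements SessionBean {
        public String greet(String name) { return "Hello, " + name; }

        public void ejbCreate() {}
        public void ejbActivate() {}   // restored after passivation
        public void ejbPassivate() {}  // about to be swapped out by the container
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {}
    }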
Struts
What is XML?
What is the version information in XML?
What is ROOT element in XML?
If XML does not have a closing tag, will it work?
Is XML case sensitive?
What is the difference between XML and HTML?
Is XML meant to replace HTML?
Can you explain why your project needed XML?
What is DTD (Document Type definition)?
What is well formed XML?
What is a valid XML?
What is CDATA section in XML?
What is CSS?
What is XSL?
What are elements and attributes in XML?
What are the standard ways of parsing XML document?
In what scenarios will you use a DOM parser versus a SAX parser?
What is XSLT?
Define XPATH?
What is the concept of XPOINTER?
What is a Web Service ?
What is DISCO ?
What is SOAP ?
What is WSDL ?
Can you explain UDDI ?
Can you explain JAXP ?
What is a XML registry?
What is JAXR?
What is JAXM?
Can you explain how JAXM messaging model works?
Can you explain JAX-RPC?
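For the DOM-versus-SAX question above: DOM loads the whole document into an in-memory tree (easy random access, more memory), while SAX streams parse events (low memory, forward-only). A minimal JAXP DOM sketch, assuming a hypothetical local file books.xml with title elements:

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import java.io.File;

    public class DomDemo {
        public static void main(String[] args) throws Exception {
            // DOM: parse the whole file into an in-memory tree, then walk it.
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new File("books.xml")); // hypothetical input
            NodeList titles = doc.getElementsByTagName("title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }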
Internationalization
Can you explain i18n and l10n?
Can you explain internationalization and localization?
What is Locale?
How do we display numbers, currency and Dates according to proper Locale format?
What are resource bundles?
How do we load a resource bundle file?
How can we do inheritance in resource bundles?
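A compact sketch tying together the locale questions above, assuming hypothetical property files Messages.properties and Messages_fr_FR.properties on the classpath with a greeting key:

    import java.text.DateFormat;
    import java.text.NumberFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class LocaleDemo {
        public static void main(String[] args) {
            Locale fr = new Locale("fr", "FR");

            // Resource bundle lookup falls back from Messages_fr_FR to Messages
            // when a key is missing; this is the "inheritance" asked about above.
            ResourceBundle messages = ResourceBundle.getBundle("Messages", fr);
            System.out.println(messages.getString("greeting")); // hypothetical key

            // Numbers, currency, and dates formatted per locale.
            System.out.println(NumberFormat.getCurrencyInstance(fr).format(1234.56));
            System.out.println(DateFormat.getDateInstance(DateFormat.LONG, fr).format(new Date()));
        }
    }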
JNI
What is Native Interface in JAVA?
Can you briefly state the steps required to implement native interfaces in Java?
Can JNI be used for VB6, C# or VB.NET directly?
What are JNI functions and pointers?
How does the garbage collector know JNI objects are no more used?
Twist: - What are the different types of references JNI supports?
Twist: - How do you delete global objects?
How does the native language C or C++ understand data types in JAVA?
Can you explain exception handling in JNI?
What are the limitations of the “JNIEnv” pointer in multi-threading scenarios?
What are the advantages and disadvantages of using “JNI”?
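The JNI questions above boil down to a Java-side native declaration plus a C/C++ implementation loaded at runtime. The Java half, as a minimal sketch (the library name is hypothetical):

    public class NativeAdder {
        static {
            // Loads libnativeadder.so / nativeadder.dll from java.library.path.
            System.loadLibrary("nativeadder");
        }

        // Declared in Java, implemented in C/C++ against the JNIEnv pointer.
        public native int add(int a, int b);

        public static void main(String[] args) {
            System.out.println(new NativeAdder().add(2, 3));
        }
    }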
Architecture
What are design patterns ?
What is the difference between Factory and Abstract Factory Patterns?
What is MVC pattern?
Twist: - How can you implement MVC pattern in Servlets and JSP?
How can we implement singleton pattern in JAVA?
How do you implement prototype pattern in JAVA?
Twist: - How to implement cloning in JAVA? What is shallow copy and deep copy ?
Can you give a practical implementation of FAÇADE patterns?
How can we implement observer pattern in JAVA?
What is three tier architecture?
What is Service Oriented architecture?
What is aspect oriented programming?
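For the singleton question above, the classic Java idiom is a private constructor plus one shared instance; a minimal sketch (class name is illustrative):

    public final class Configuration {
        // Eagerly created single instance; JVM class loading makes this thread-safe.
        private static final Configuration INSTANCE = new Configuration();

        // Private constructor stops clients from doing "new Configuration()".
        private Configuration() {}

        public static Configuration getInstance() {
            return INSTANCE;
        }
    }

Clients then share state via Configuration.getInstance(); lazy variants exist, but the eager form is the simplest correct answer to give in an interview.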
Project Management
UML
What is UML?
How many types of diagrams are there in UML?
Twist: - Explain in short all types of diagrams in UML?
What are advantages of using UML?
Twist: - What is Modeling and why UML ?
What is the sequence of UML diagrams in project?
Twist: - How did you implement UML in your project?
Just a small Twist: - Do I need all UML diagrams in a project?
Give a brief explanation of all the elements in activity diagrams?
Explain Different elements of a collaboration diagram ?
Explain Component diagrams ?
Explain all parts of a deployment diagram?
Describe the various components in sequence diagrams?
What are the elements in State Chart diagrams?
Describe different elements in Static Chart diagrams ?
Explain the different elements of a Use Case ?
Twist: - What is the difference between Activity and sequence diagrams?(I leave this
to the readers)