Project Management
Index
Sr. No  Topic
1  Conventional Software Management; Evolution of Software Economics; Improving Software Economics
2  The Old Way and the New; Life-Cycle Phases; Artifacts of the Process; Model-Based Software Architectures
3  Work Flows of the Process; Checkpoints of the Process; Iterative Process Planning
4  Project Organizations and Responsibilities; Process Automation
5  Project Control and Process Instrumentation; Tailoring the Process
6  Future Software Project Management
Q4. What are the basic parameters into which software cost models can be abstracted? Explain in detail.
Most software cost models can be abstracted into a function of five basic parameters: size, process,
personnel, environment, and required quality.
The size of the end product (in human-generated components), which is typically quantified in terms of
the number of source instructions or the number of function points required to develop the required
functionality.
The process used to produce the end product, in particular the ability of the process to avoid non-value-
adding activities (rework, bureaucratic delays, and communications overhead).
The capabilities of software engineering personnel and particularly their experience with the computer
science issues and the applications domain issues of the project.
The environment, which is made up of the tools and techniques available to support efficient software
development and to automate the process.
The required quality of the product, including its features, performance, reliability, and adaptability.
The relationships among these parameters and the estimated cost can be written as follows:
Effort = (Personnel)(Environment)(Quality)(Size^Process)
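The multiplicative form of this relationship can be sketched in code. The figures below are purely hypothetical; real cost models (COCOMO and its descendants, for example) calibrate their coefficients from historical project data.

```python
def estimate_effort(size, process_exponent, personnel, environment, quality):
    """Effort = (Personnel)(Environment)(Quality)(Size^Process).

    The process parameter acts as an exponent on size: a value above 1.0
    models a diseconomy of scale, a value below 1.0 an economy of scale.
    """
    return personnel * environment * quality * size ** process_exponent

# Hypothetical inputs: 100 units of size, neutral multipliers of 1.0,
# and a process exponent of 1.1 (a diseconomy of scale).
effort = estimate_effort(100, 1.1, 1.0, 1.0, 1.0)
```

Note how lowering the exponent (improving the process) reduces effort proportionally more for large projects than for small ones, which is why process improvement matters most at scale.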
Although these three levels of process overlap somewhat, they have different objectives, audiences,
metrics, concerns, and time scales, as shown in the above table.
Q9. How can team effectiveness be improved? Explain.
It has long been understood that differences in personnel account for the greatest swings in productivity.
Use tools, but be realistic. Software tools make their users more efficient.
This principle trivializes a crucial aspect of modern software engineering: the importance of the
development environment.
Avoid tricks.
Many programmers love to create programs with tricks: constructs that perform a function
correctly, but in an obscure way. Show the world how smart you are by avoiding tricky code.
Obfuscated coding techniques should be avoided unless there are compelling reasons to use
them. Unfortunately, such compelling reasons are common in nontrivial projects.
Encapsulate.
Information-hiding is a simple, proven concept that results in software that is easier to test and
much easier to maintain. Component-based design, object-oriented design, and modern design
and programming notations have advanced this principle into mainstream practice.
To achieve economies of scale and higher returns on investment, we must move toward a software
manufacturing process driven by technological improvements in process automation and component-
based development. To first order, the life cycle can be divided into the following two stages:
The engineering stage, driven by smaller, less predictable teams doing design and synthesis
activities; this stage evolves the plans, the requirements, and the architecture.
The production stage, driven by larger, more predictable teams doing construction, test, and
deployment activities.
Attributing only two stages to a life cycle is a little too coarse, too simplistic, for most applications.
Consequently, the engineering stage is decomposed into two distinct phases, inception and elaboration,
and the production stage into construction and transition. These four phases of the life-cycle process are
loosely mapped to the conceptual framework of the spiral model as shown in the figure below, and are
named to depict the state of the project. In the figure, the size of the spiral corresponds to the inertia of
the project with respect to the breadth and depth of the artifacts that have been developed.
In most conventional life cycles, the phases are named after the primary activity within each phase:
requirements analysis, design, coding, unit test, integration test, and system test. Conventional software
development efforts emphasized a mostly sequential process, in which one activity was required to be
complete before the next was begun.
With an iterative process, each phase includes all the activities, in varying proportions. The primary
objectives, essential activities, and general evaluation criteria for each phase are discussed below.
Primary objectives
Minimizing development costs by optimizing resources and avoiding unnecessary scrap and rework
Achieving adequate quality as rapidly as practical
Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical
Essential activities
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
Primary evaluation criteria
Is this product baseline mature enough to be deployed in the user community?
Is this product baseline stable enough to be deployed in the user community?
Are the stakeholders ready for transition to the user community?
Are actual resource expenditures versus planned expenditures acceptable?
The Management Set
The management set captures the artifacts associated with process planning and execution. These
artifacts use ad hoc notations, including text, graphics, or whatever representation is required to
capture the “contracts” among project personnel (project management, architects, developers, testers,
marketers, administrators), among stakeholders and between project personnel and stakeholders.
Management set artifacts are evaluated, assessed, and measured through a combination of the following:
Relevant stakeholder review
Analysis of changes between the current version of the artifact and previous versions
(management trends and project performance changes in terms of cost, schedule, and quality)
Each artifact set is the predominant development focus of one phase of the life cycle; the other sets
take on check and balance roles. As illustrated in the figure above, each phase has a predominant focus:
Requirements are the focus of the inception phase; design, the elaboration phase; implementation, the
construction phase; and deployment, the transition phase. The management artifacts also evolve, but at
a fairly constant level across the life cycle. Most of today's software development tools map closely to
one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change management, documentation,
spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and test
management tools
5. Deployment: test coverage and test automation tools, network management tools, commercial
components (operating systems, GUIs, DBMSs, networks, middleware), and installation tools
Architecture Description
The architecture description provides an organized view of the software architecture under
development. It is extracted largely from the design model and includes views of the design,
implementation, and deployment sets sufficient to understand how the operational concept of the
requirements set will be achieved.
Software User Manual
The software user manual provides the user with the reference documentation necessary to support the
delivered software. Although content is highly variable across application domains, the user manual
should include installation procedures, usage procedures and guidance, operational constraints, and a
user interface description, at a minimum.
The figure given above illustrates the relative levels of effort expected across the phases in each of the
top-level workflows. It represents one of the key signatures of a modern process framework and
provides a viewpoint from which to discuss several of the key principles.
Architecture-first approach
o Extensive requirements analysis, design, implementation, and assessment activities are performed
before the construction phase, when full-scale implementation is the focus. This early life-cycle
focus on implementing and testing the architecture must precede full-scale development and
testing of all the components and must precede the downstream focus on completeness and quality
of the entire breadth of the product features.
Iterative life-cycle process
o In the figure given above, each phase portrays at least two iterations of each workflow. This default
is intended to be descriptive, not prescriptive. Some projects may require only one iteration in a
phase; others may require several iterations. The point is that the activities and artifacts of any
given workflow may require more than one pass to achieve adequate results.
Round-trip engineering
o Raising the environment activities to a first-class workflow is critical. The environment is the
tangible embodiment of the project’s process, methods, and notations for producing the artifacts.
Demonstration-based approach
o Implementation and assessment activities are initiated early in the life cycle, reflecting the emphasis
on constructing executable subsets of the evolving architecture.
1. Management: iteration planning to determine the content of the release and develop the detailed plan
for the iteration; assignment of work packages, or tasks, to the development team
2. Environment: evolving the software change order database to reflect all new baselines and changes to
existing baselines for all product, test, and environment components
3. Requirements: analyzing the baseline plan, the baseline architecture, and the baseline requirements set
artifacts to fully elaborate the use cases to be demonstrated at the end of this iteration and their
evaluation criteria; updating any requirements set artifacts to reflect changes necessitated by results of
this iteration's engineering activities
4. Design: evolving the baseline architecture and the baseline design set artifacts to elaborate fully the
design model and test model components necessary to demonstrate against the evaluation criteria
allocated to this iteration; updating design set artifacts to reflect changes necessitated by the results of
this iteration’s engineering activities
5. Implementation: developing or acquiring any new components, and enhancing or modifying any existing
components, to demonstrate the evaluation criteria allocated to this iteration; integrating and testing all
new and modified components with existing baselines (previous versions)
6. Assessment: evaluating the results of the iteration, including compliance with the allocated evaluation
criteria and the quality of the current baselines; identifying any rework required and determining
whether it should be performed before deployment of this release or allocated to the next release;
assessing results to improve the basis of the subsequent iteration’s plan
7. Deployment: transitioning the release either to an external organization (such as a user, independent
verification and validation contractor, or regulatory agency) or to internal closure by conducting a post-
mortem so that lessons learned can be captured and reflected in the next iteration
Periodic status assessments are crucial for focusing continuous attention on the evolving health of the
project and its dynamic priorities. They force the software project manager to collect and review the
data periodically, force outside peer review, and encourage dissemination of best practices to and from
other stakeholders.
Q8. State and explain the various drawbacks of conventional work breakdown structures.
Conventional work breakdown structures frequently suffer from three fundamental flaws.
They are prematurely structured around the product design.
They are prematurely decomposed, planned, and budgeted in either too much or too little detail.
They are project-specific, and cross-project comparisons are usually difficult or impossible.
Conventional work breakdown structures are prematurely structured around the product design.
A typical conventional WBS is structured primarily around the subsystems of the product architecture,
and is then further decomposed into the components of each subsystem. Once this structure is ingrained
in the WBS and then allocated to responsible managers with budgets, schedules, and expected
deliverables, a concrete planning foundation has been set that is difficult and expensive to change.
A WBS is the architecture for the financial plan. Just as software architectures need to encapsulate
components that are likely to change, so must planning architectures. To couple the plan tightly to the
product structure may make sense if both are reasonably mature. However, a looser coupling is
desirable if either the plan or the architecture is subject to change.
Conventional work breakdown structures are prematurely decomposed, planned, and budgeted in
either too little or too much detail.
Large software projects tend to be overplanned and small projects tend to be underplanned. The WBS
shown in the figure given above is overly simplistic for most large-scale systems, where six or more
levels of WBS elements are commonplace. In general, a WBS elaborated to at least two or three levels
makes sense. For large-scale systems, additional levels may be necessary in later phases of the life cycle.
The basic problem with planning too much detail at the outset is that the detail does not evolve with the
level of fidelity in the plan.
Conventional work breakdown structures are project-specific, and cross-project comparisons are
usually difficult or impossible.
Most organizations allow individual projects to define their own project-specific structure tailored to the
project manager's style, the customer's demands, or other project-specific preferences. As a result, it is
very difficult to compare plans, budgets, or schedules across projects, and simple questions cannot be
answered, such as: What is the ratio of productive activities (requirements, design, implementation,
assessment, deployment) to overhead activities (management, environment)?
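A workflow-oriented accounting of effort makes such cross-project questions answerable. The sketch below is hypothetical: the workflow names follow the seven top-level workflows used in this text, and the hour figures are invented for illustration.

```python
# Hypothetical effort ledger keyed by top-level workflow rather than by
# product subsystem, so the same query can be run on any project.
project_hours = {
    "management": 400, "environment": 200,            # overhead workflows
    "requirements": 300, "design": 500,               # productive workflows
    "implementation": 900, "assessment": 400, "deployment": 100,
}

OVERHEAD = {"management", "environment"}

def productive_to_overhead_ratio(hours):
    """Ratio of productive workflow effort to overhead workflow effort."""
    overhead = sum(v for k, v in hours.items() if k in OVERHEAD)
    productive = sum(v for k, v in hours.items() if k not in OVERHEAD)
    return productive / overhead

ratio = productive_to_overhead_ratio(project_hours)
```

Because the keys are workflows rather than project-specific subsystems, the same ratio can be computed identically for every project in the organization.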
Q10. Write a short note on forward looking approach Cost & schedule estimating process.
Project plans need to be derived from two perspectives. The first is a forward-looking, top-down
approach. It starts with an understanding of the general requirements and constraints, derives a macro-
level budget and schedule, and then decomposes these elements into lower level budgets and
intermediate milestones. From this perspective, the following planning sequence would occur:
1. The software project manager (and others) develops a characterization of the overall size, process,
environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost estimation
model.
3. The software project manager partitions the estimate for the effort into a top-level WBS. The project
manager also partitions the schedule into major milestone dates and partitions the effort into a staffing
profile across the life-cycle phases. These sorts of estimates tend to ignore many detailed project-
specific parameters.
4. At this point, subproject managers are given the responsibility for decomposing each of the WBS
elements into lower levels using their top-level allocation, staffing profile, and major milestone dates as
constraints.
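Step 3 of this sequence, partitioning the macro-level estimate into phase budgets, can be sketched as follows. The phase percentages are hypothetical defaults chosen for illustration, not values prescribed by the text.

```python
# Hypothetical default allocation of effort across the life-cycle phases.
PHASE_ALLOCATION = {
    "inception": 0.05,
    "elaboration": 0.20,
    "construction": 0.65,
    "transition": 0.10,
}

def partition_effort(total_staff_months):
    """Partition a macro-level effort estimate into per-phase budgets."""
    return {phase: total_staff_months * share
            for phase, share in PHASE_ALLOCATION.items()}

budgets = partition_effort(200)   # e.g., a 200 staff-month macro estimate
```

Subproject managers would then decompose each phase budget further, treating these allocations and the milestone dates as constraints.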
The second perspective is a backward-looking, bottom-up approach. It starts with the end in mind,
analyzes the micro-level budgets and schedules, and then sums all these elements into the higher level
budgets and intermediate milestones. This approach tends to define and populate the WBS from the
lowest levels upward.
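The bottom-up roll-up can be sketched as a sum over the lowest-level WBS elements. The dotted element identifiers and staff-month figures below are hypothetical.

```python
# Hypothetical lowest-level estimates (staff-months) keyed by a dotted
# WBS element identifier; a parent element is the sum of its children.
leaf_estimates = {
    "1.1": 4, "1.2": 6,
    "2.1": 10, "2.2": 8, "2.3": 12,
}

def roll_up(leaves):
    """Sum leaf estimates upward into top-level WBS element totals."""
    totals = {}
    for wbs_id, months in leaves.items():
        parent = wbs_id.split(".")[0]
        totals[parent] = totals.get(parent, 0) + months
    totals["project"] = sum(leaves.values())
    return totals

totals = roll_up(leaf_estimates)
```

In practice the bottom-up totals are then compared against the top-down macro estimate, and the differences between the two perspectives are reconciled.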
From this perspective, the following planning sequence would occur:
This structure can be tailored to specific circumstances. The main features of the default organization
are as follows:
Responsibility for process definition and maintenance is specific to a cohesive line of business, where
process commonality makes sense. For example, the process for developing avionics software is
different from the process used to develop office applications.
Responsibility for process automation is an organizational role and is equal in importance to the process
definition role. Projects achieve process commonality primarily through the environment support of
common tools.
Organizational roles may be fulfilled by a single individual or several different teams, depending on the
scale of the organization. A 20-person software product company may require only a single person to
fulfill all the roles, while a 10,000-person telecommunications company may require hundreds of people
to achieve an effective software organization.
The figure given below shows the focus of software assessment team activities over the project life
cycle. There are two reasons for using an independent team for software assessment. The first has to do
with ensuring an independent quality perspective.
This often debated approach has its pros (such as ensuring that the ownership biases of developers do
not pollute the assessment of quality) and cons (such as relieving the software development team of
ownership in quality, to some extent).
Q6. Why is the process to be automated? Explain the reasons and the three levels of automation.
Need for automation:
As the software industry moves toward maintaining different information sets for the engineering
artifacts, more automation support is needed to ensure an efficient and error-free transition of data
from one artifact to another.
Management
There are many opportunities for automating the project planning and control activities of the
management workflow. Software cost estimation tools and WBS tools are useful for generating the
planning artifacts. For managing against a plan, workflow management tools and a software project
control panel that can maintain an on-line version of the status assessment are advantageous. This
automation support can considerably improve the insight of the metrics collection and reporting
concepts.
Environment
Configuration management and version control are essential in a modern iterative development
process. Much of the metrics approach is dependent on measuring changes in software artifact
baselines.
Q9. “The project environment artifacts evolve through three discrete states”. Explain.
The project environment artifacts evolve through three discrete states: the prototyping environment,
the development environment, and the maintenance environment.
1. The prototyping environment includes an architecture test bed for prototyping project architectures to
evaluate trade-offs during the inception and elaboration phases of the life cycle. This informal
configuration of tools should be capable of supporting the following activities:
a. Performance trade-offs and technical risk analyses
b. Make/buy trade-offs and feasibility studies for commercial products
c. Fault tolerance/dynamic reconfiguration trade-offs
d. Analysis of the risks associated with transitioning to full-scale implementation
e. Development of test scenarios, tools, and instrumentation suitable for analyzing the requirements
2. The development environment should include a full suite of development tools needed to support the
various process workflows and to support round-trip engineering to the maximum extent possible.
3. The maintenance environment should typically coincide with a mature version of the development
environment. In some cases, the maintenance environment may be a subset of the development
environment delivered as one of the project's end products.
Q11. “Project software standards are to be set by organization policy.” Explain the statement.
Organization Policy
The organization policy is usually packaged as a handbook that defines the life cycle and the process
primitives (major milestones, intermediate artifacts, engineering repositories, metrics, roles and
responsibilities).
The seven core metrics are based on common sense and field experience with both successful and
unsuccessful metrics programs. Their attributes include the following:
They are simple, objective, easy to collect, easy to interpret, and hard to misinterpret.
Collection can be automated and nonintrusive.
They provide for consistent assessments throughout the life cycle and are derived from the evolving
product baselines rather than from a subjective assessment.
They are useful to both management and engineering personnel for communicating progress and
quality in a consistent format.
Their fidelity improves across the life cycle.
Q7. Enlist the factors of tailoring a software process framework. Explain the scale factor in detail.
Factors of tailoring software process framework:
o Scale
o Stakeholder cohesion and contention
o Process flexibility or rigor
o Process maturity
o Architectural risk
o Domain experience
Scale
Perhaps the single most important factor in tailoring a software process framework to the specific needs
of a project is the total scale of the software application. There are many ways to measure scale,
including number of source lines of code, number of function points, number of use cases, and number
of dollars. From a process tailoring perspective, the primary measure of scale is the size of the team. As
the headcount increases, the importance of consistent interpersonal communications becomes
paramount. Otherwise, the diseconomies of scale can have a serious impact on achievement of the
project objectives.
Salvi Collège Assistant Professor: Sonu Raj | 8976249271 Page 55
Projects can be classified by size as follows:
Trivial
Small
Moderate-sized
Large
Huge
Trivial-sized projects require almost no management overhead (planning, communication,
coordination, progress assessment, review, administration). There is little need to document the
intermediate artifacts. Workflow is single-threaded. Performance is highly dependent on personnel
skills.
Small projects (5 people) require very little management overhead, but team leadership toward a
common objective is crucial. There is some need to communicate the intermediate artifacts among
team members. Project milestones are easily planned, informally conducted, and easily changed.
Performance depends primarily on personnel skills.
Moderate-sized projects (25 people) require moderate management overhead, including a dedicated
software project manager to synchronize team workflows and balance resources. Overhead workflows
across all team leads are necessary for review, coordination, and assessment. There is a definite need to
communicate the intermediate artifacts among teams.
Large projects (125 people) require substantial management overhead, including a dedicated software
project manager and several subproject managers to synchronize project-level and subproject-level
workflows and to balance resources. There is significant expenditure in overhead workflows across all
team leads for dissemination, review, coordination, and assessment. Intermediate artifacts are explicitly
emphasized to communicate engineering results across many diverse teams. Project milestones are
formally planned and conducted, and changes to milestone plans are expensive.
Huge projects (625 people) require substantial management overhead, including multiple software
project managers and many subproject managers to synchronize project-level and subproject-level
workflows and to balance resources. There is significant expenditure in overhead workflows across all
team leads for dissemination, review, coordination, and assessment. Intermediate artifacts are explicitly
emphasized to communicate engineering results across many diverse teams.
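The team sizes quoted above suggest a rough classification rule. The thresholds in this sketch are inferred from those figures and are illustrative, not standardized boundaries.

```python
def project_scale(team_size):
    """Classify a project by headcount, using the team sizes quoted in
    the text (5, 25, 125, 625) as illustrative category anchors."""
    if team_size < 5:
        return "trivial"
    if team_size < 25:
        return "small"
    if team_size < 125:
        return "moderate"
    if team_size < 625:
        return "large"
    return "huge"
```

The successive five-fold jumps in headcount mirror the text's point that each step up in scale brings a qualitative increase in management overhead, not just a quantitative one.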
Architectural Risk
The degree of technical feasibility demonstrated before commitment to full-scale production is an
important dimension of defining a specific project’s process. There are many sources of architectural
risk. Some of the most important and recurring sources are system performance (resource utilization,
response time, throughput, accuracy), robustness to change (addition of new features, incorporation of
new technology, adaptation to dynamic operational conditions), and system reliability (predictable
behavior, fault tolerance). The degree to which these risks can be eliminated before construction begins
can have dramatic ramifications in the process tailoring.
Domain Experience
The development organization’s domain experience governs its ability to converge on an acceptable
architecture in a minimum number of iterations. An organization that has built five generations of radar
control switches may be able to converge on an adequate baseline architecture for a new radar
application in two or three prototype release iterations. A skilled software organization building its first
radar application may require four or five prototype releases before converging on an adequate
baseline.
One key aspect of the difference between the two projects is the leverage of the various process
components in the success or failure of the project. This reflects the relative importance of staffing
and of the associated risk management.