SE Unit2
PART-1
Software project management is the art and discipline of planning and supervising software
projects. It is a sub-discipline of project management in which software projects are
planned, implemented, monitored, and controlled.
Types of Complexity
Cost Management Complexity: Estimating the total cost of the project is a very difficult
task and another thing is to keep an eye that the project does not overrun the budget.
Quality Management Complexity: The quality of the project must satisfy the
customer’s requirements. It must assure that the requirements of the customer are
fulfilled.
Risk Management Complexity: Risks are the unanticipated things that may occur
during any phase of the project. Various difficulties may occur to identify these risks and
make amendment plans to reduce the effects of these risks.
Communication Management Complexity: All the members must interact with all the
other members and there must be good communication with the customer.
Infrastructure Complexity: Computing infrastructure refers to all of the operations
performed on the devices that execute our code: networking, load balancers, queues,
firewalls, security, monitoring, databases, sharding, etc.
Technical Challenges: Software projects can be complex and difficult due to the
technical challenges involved. This can include complex algorithms, database design, and
system integration, which can be difficult to manage and test effectively.
Schedule Constraints: Software projects are often subject to tight schedules and
deadlines, which can make it difficult to manage the project effectively and ensure that
all tasks are completed on time.
Quality Assurance: Ensuring that software meets the required quality standards is a
critical aspect of software project management. This can be a complex and time-
consuming process, especially when dealing with large, complex systems.
Benefits of Software Project Management
Improved software quality: Software engineering practices can help ensure the
development of high-quality software that meets user requirements and is reliable, secure,
and scalable.
Better risk management: Project management practices such as risk management can
help identify and address potential risks, reducing the likelihood of project failure.
Better maintenance and support: Software engineering practices can help ensure that
software is designed to be maintainable and supportable, making it easier to fix bugs, add
new features, and provide ongoing support to users.
A software project manager plays a pivotal role in ensuring the success of a software
development project. Their responsibilities encompass various stages of the project lifecycle,
including planning, execution, monitoring, and delivery. Here’s a breakdown of their key
responsibilities:
1. Project Planning
Develop a detailed project plan, including timelines, milestones, resource allocation, and
budget.
2. Team Management
3. Stakeholder Communication
Serve as the primary point of contact between the team and stakeholders.
4. Risk Management
Identify potential risks (technical, financial, or operational) and assess their impact.
Monitor risk factors throughout the project lifecycle and adapt plans as necessary.
5. Progress Monitoring
Use project management tools to monitor task completion and resource usage.
Collaborate with technical leads and team members to resolve issues effectively.
6. Quality Assurance
Ensure all deliverables meet the agreed-upon specifications and quality standards.
7. Continuous Improvement
Conduct project reviews and gather lessons learned for future improvement.
Stay updated with new tools, techniques, and trends in software project management.
Estimating the size of a software project is crucial for planning, resource allocation, scheduling,
and budgeting. Metrics for project size estimation fall into various categories and are used
depending on the nature of the project, available data, and organizational practices. Below are
common metrics for project size estimation:
1. Lines of Code (LOC)
Definition: Measures project size as the number of lines of source code to be developed.
Use Case: Best for projects where code size correlates with effort and complexity.
Advantages:
Limitations:
2. Function Points (FP)
Definition: Measures functionality delivered to the user, based on inputs, outputs, data
files, interfaces, and inquiries. (A small worked example appears at the end of this section.)
Advantages:
o Technology-agnostic.
Limitations:
3. Story Points
Definition: Agile metric used to estimate effort required for a user story based on
complexity, size, and uncertainty.
Advantages:
Limitations:
4. Use Case Points (UCP)
Definition: Based on use cases in the system, adjusted for technical and environmental
factors.
Advantages:
Limitations:
5. Object Points
Definition: Estimates size based on the number of screens, reports, and third-party
components.
Use Case: Useful for GUI-intensive or RAD (Rapid Application Development) projects.
Advantages:
Limitations:
6. Story Maps
Advantages:
o Encourages team collaboration.
Limitations:
7. Work Breakdown (Task-Based) Estimation
Definition: Breaks the project into smaller tasks or components and estimates size based
on individual elements.
Advantages:
Limitations:
o Time-consuming.
By combining multiple metrics or tailoring them to specific project needs, managers can achieve
more accurate and reliable size estimations.
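As referenced above under Function Points, the following is a minimal worked sketch of a function point calculation. The counts are invented for illustration, and the weights and adjustment formula are the commonly cited average-complexity weights with FP = UFP × (0.65 + 0.01 × ΣFi); real projects calibrate these values.

```python
# Hypothetical function point calculation (illustrative counts only).

# Counts of each function type for an imaginary small ordering system.
counts = {
    "external_inputs": 6,      # e.g., data-entry screens
    "external_outputs": 4,     # e.g., reports
    "external_inquiries": 3,   # e.g., search/lookup screens
    "internal_files": 2,       # internal logical data files
    "external_interfaces": 1,  # interfaces to other systems
}

# Commonly used average-complexity weights for each function type.
weights = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

# Unadjusted function points (UFP).
ufp = sum(counts[k] * weights[k] for k in counts)

# 14 general system characteristics, each rated 0-5 (illustrative ratings).
gsc_ratings = [3, 2, 4, 3, 3, 2, 1, 3, 2, 2, 3, 4, 2, 1]
vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor

fp = ufp * vaf
print(f"UFP = {ufp}, VAF = {vaf:.2f}, FP = {fp:.1f}")
```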
PROJECT ESTIMATION TECHNIQUES
Project estimation techniques are used to predict the effort, time, cost, and resources required to
complete a project. These techniques can vary depending on the project's complexity, scope, and
available data. Below are some widely used project estimation techniques, grouped into
categories:
1. Expert Judgment
Definition: Relies on the experience and intuition of experts familiar with similar
projects.
Process:
o Gather input from subject matter experts (SMEs), project managers, or team
leads.
Advantages:
Limitations:
2. Analogous Estimation
Definition: Estimates are based on the size, effort, or cost of similar, completed projects.
Process:
Advantages:
3. Parametric Estimation
Definition: Uses mathematical models to derive estimates based on historical data and
project variables.
Process:
Examples:
Advantages:
Limitations:
4. Bottom-Up Estimation
Definition: Breaks the project into smaller tasks, estimates each, and sums them up to
derive the overall estimate.
Process:
Limitations:
5. Top-Down Estimation
Definition: Starts with an overall estimate and breaks it down into smaller components.
Process:
o Use past project data or expert judgment to estimate the total effort.
Advantages:
Limitations:
6. Delphi Technique
Process:
Advantages:
Limitations:
o Time-intensive.
7. Proxy-Based Estimation
Process:
Advantages:
Limitations:
8. Agile Estimation Techniques
Story Points:
o Assign story points based on the complexity and effort of user stories.
Planning Poker:
o Team members independently estimate using a card system, then discuss
discrepancies.
Affinity Estimation:
Advantages:
Limitations:
EMPIRICAL ESTIMATION TECHNIQUES
Empirical estimation techniques rely on historical data, statistical models, and past project
experience to predict the effort, cost, and time required for a project. These techniques use
established relationships between project variables and outcomes, making them data-driven and
systematic. Below are the key empirical estimation techniques commonly used in project
management:
1. Function Point Analysis (FPA)
Description: Estimates project size based on the functionality delivered to the user rather
than the amount of code. Measures inputs, outputs, user interactions, files, and interfaces.
Process:
Advantages:
o Technology-agnostic.
Limitations:
2. Use Case Points (UCP)
Description: Estimates size and effort based on use cases in the system, incorporating
technical and environmental factors.
Process:
o Adjust for technical (e.g., security, performance) and environmental (e.g., team
experience) factors.
Advantages:
Limitations:
3. Delphi Technique
Process:
Advantages:
o Encourages collaboration.
Limitations:
o Time-consuming.
4. Wideband Delphi
Description: A collaborative version of the Delphi technique where experts discuss their
estimates openly in structured sessions.
Process:
Advantages:
Limitations:
o Time-intensive.
5. Parametric (Algorithmic) Models
Description: Uses mathematical algorithms to estimate project effort, time, and cost
based on historical data and project size.
Process:
Advantages:
Limitations:
o Requires detailed historical data.
6. Heuristic Estimation
Description: Relies on rules of thumb or heuristics derived from past project experience.
Examples:
Advantages:
Limitations:
7. Machine Learning-Based Estimation
Description: Uses historical project data and machine learning algorithms to predict
effort, cost, or time.
Process:
Advantages:
Limitations:
1. Planning and requirements: This initial phase involves defining the scope, objectives,
and constraints of the project. It includes developing a project plan that outlines the
schedule, resources, and milestones
2. System design: In this phase, the high-level architecture of the software system is
created. This includes defining the system’s overall structure, including major
components, their interactions, and the data flow between them.
3. Detailed design: This phase involves creating detailed specifications for each component
of the system. It breaks down the system design into detailed descriptions of each
module, including data structures, algorithms, and interfaces.
4. Module code and test: This involves writing the actual source code for each module or
component as defined in the detailed design. It includes coding the functionalities,
implementing algorithms, and developing interfaces.
5. Integration and test: This phase involves combining individual modules into a complete
system and ensuring that they work together as intended.
COCOMO (Constructive Cost Model)
The Constructive Cost Model (COCOMO) is a widely used method for estimating the cost and
effort required for software development projects. It has three forms:
1. Basic COCOMO:
2. Intermediate COCOMO:
3. Detailed COCOMO:
COCOMO Formulas (Basic Model)
Effort: E = a × (KLOC)^b person-months
Where: a and b are constants that depend on the project class (organic, semi-detached, or embedded) and KLOC is the estimated size in thousands of lines of code.
Development time: T = c × (E)^d months
Where: c and d are constants for the project class and E is the effort in person-months.
Average staff size: SS = E / T persons
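The sketch below shows how these Basic COCOMO formulas can be evaluated; the (a, b, c, d) values are the classic published Basic COCOMO coefficients, and the 32 KLOC input is purely illustrative.

```python
# Minimal Basic COCOMO sketch using the classic published coefficients.

# (a, b, c, d) coefficients for the three Basic COCOMO project classes.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, time in months, average staff size)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * (kloc ** b)   # E = a * (KLOC)^b
    time = c * (effort ** d)   # T = c * (E)^d
    staff = effort / time      # SS = E / T
    return effort, time, staff

effort, time, staff = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Time = {time:.1f} months, Staff = {staff:.1f}")
```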
Advantages of COCOMO
Limitations of COCOMO
1. Dependence on Accurate Inputs: Requires precise estimates of LOC, which may not be
available early in the project.
3. Static Constants: Assumes constants derived in the 1980s, which may not align with
current technologies or practices.
4. Focus on LOC: Overemphasizes code size, ignoring other factors like design and testing
efforts.
When to Use COCOMO
Effective for early-stage effort and cost estimation when historical data is available.
For modern development environments, COCOMO II extends the original model with scale factors and cost drivers.
Halstead's Software Science, introduced by Maurice Halstead in the late 1970s, is a set of metrics
designed to measure various aspects of a software program's complexity. It is based on the idea
that the properties of software systems can be quantified using mathematical and statistical
approaches, focusing on the number of operators and operands in a program.
Key Concepts
Halstead's metrics revolve around four fundamental measures derived from the source code:
o n1: the count of unique operators (e.g., +, -, *, if, while) in the code.
o n2: the count of unique operands (e.g., variables, constants, literals) used in the code.
o N1: the total number of operator occurrences in the code.
o N2: the total number of operand occurrences in the code.
Derived Metrics
Program length: The length of a program is the total number of operator and operand occurrences in the program.
Length (N) = N1 + N2
Program vocabulary: The Program vocabulary is the number of unique operators and operands
used in the program.
Vocabulary (n) = n1 + n2
Program Volume:
The program volume can be defined as the minimum number of bits needed to encode the program.
Volume (V) = N × log2(n)
Length estimation:
Estimated length (N̂) = n1 × log2(n1) + n2 × log2(n2)
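A short illustrative computation of the Halstead measures defined above; the token counts are made up, since in practice they come from scanning the program's source code.

```python
import math

# Illustrative token counts (in practice, obtained by parsing the source code).
n1, n2 = 10, 15   # unique operators, unique operands
N1, N2 = 40, 60   # total operator occurrences, total operand occurrences

N = N1 + N2                                      # program length
n = n1 + n2                                      # program vocabulary
V = N * math.log2(n)                             # program volume (bits)
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length

print(f"Length N = {N}, Vocabulary n = {n}")
print(f"Volume V = {V:.1f} bits, Estimated length = {N_hat:.1f}")
```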
Applications
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem that
could cause some loss or threaten the progress of the project, but which has not happened yet.
These potential issues might harm cost, schedule or technical success of the project and the
quality of our software device, or project team morale.
Risk Management is the system of identifying addressing and eliminating these problems before
they can damage the project.
Risk Management
A software project can be affected by a large variety of risks. In order to be able to
systematically identify the significant risks which might affect a software project, it is essential
to classify risks into different classes. The project manager can then check which risks from each
class are relevant to the project.
Risk management is a sequence of steps that help a software team to understand, analyze, and
manage uncertainty. Risk management process consists of
Risk Identification
Risk Analysis
Risk Planning
Risk Monitoring
Risk Management Process
Risk Identification
Risk identification refers to the systematic process of recognizing and evaluating potential threats
or hazards that could negatively impact an organization, its operations, or its workforce. This
involves identifying various types of risks, ranging from IT security threats like viruses and
phishing attacks to unforeseen events such as equipment failures and extreme weather
conditions.
Risk analysis
Risk analysis is the process of evaluating and understanding the potential impact and likelihood
of identified risks on an organization. It helps determine how serious a risk is and how to best
manage or mitigate it. Risk Analysis involves evaluating each risk’s probability and potential
consequences to prioritize and manage them effectively.
Risk Planning
Risk planning involves developing strategies and actions to manage and mitigate identified risks
effectively. It outlines how to respond to potential risks, including prevention, mitigation, and
contingency measures, to protect the organization’s objectives and assets.
Risk Monitoring
Risk monitoring involves continuously tracking and overseeing identified risks to assess their
status, changes, and effectiveness of mitigation strategies. It ensures that risks are regularly
reviewed and managed to maintain alignment with organizational objectives and adapt to new
developments or challenges.
In the world of software development, the success of a project relies heavily on a crucial yet
often overlooked phase: Requirement Gathering. This initial stage acts as the foundation for the
entire development life cycle, steering the course of the software and ultimately determining its
success. Let's explore why requirement gathering is so important, what its key components are,
and how it profoundly influences the overall development process.
Requirements gathering is a crucial phase in the software development life cycle (SDLC) and
project management. It involves collecting, documenting, and managing the requirements that
define the features and functionalities of a system or application. The success of a project often
depends on the accuracy and completeness of the gathered requirements.
1. Identify Stakeholders
The first step is to identify and engage with all relevant stakeholders. Stakeholders can
include end-users, clients, project managers, subject matter experts, and anyone else who
has a vested interest in the software project. Understanding their perspectives is essential
for capturing diverse requirements.
2. Define the Project Scope
Clearly define the scope of the project by outlining its objectives, boundaries, and
limitations. This step helps in establishing a common understanding of what the software
is expected to achieve and what functionalities it should include.
3. Conduct Stakeholder Interviews
Schedule interviews with key stakeholders to gather information about their needs,
preferences, and expectations. Through open-ended questions and discussions, aim to
uncover both explicit and implicit requirements. These interviews provide valuable
insights that contribute to a more holistic understanding of the project.
4. Document the Requirements
Systematically document the gathered requirements. This documentation can take various
forms, such as user stories, use cases, or formal specifications. Clearly articulate
functional requirements (what the system should do) and non-functional requirements
(qualities the system should have, such as performance or security).
5. Verify and Validate the Requirements
Once the requirements are documented, it's crucial to verify and validate them.
Verification ensures that the requirements align with the stakeholders' intentions, while
validation ensures that the documented requirements will meet the project's goals. This
step often involves feedback loops and discussions with stakeholders to refine and clarify
requirements.
6. Prioritize the Requirements
Prioritize the requirements based on their importance to the project goals and constraints.
This step helps in creating a roadmap for development, guiding the team on which
features to prioritize. Prioritization is essential, especially when resources and time are
limited.
Requirement Gathering Techniques:
Effective requirement gathering is essential for the success of a software development project.
Various techniques are employed to collect, analyze, and document requirements.
1. Interviews:
3. Workshops:
5. Prototyping:
6. Use Cases and Scenarios:
Developing use cases and scenarios to describe how the system will be used in
different situations. This technique helps in understanding the interactions
between users and the system, making it easier to identify and document
functional requirements.
7. Document Analysis:
Benefits of Effective Requirement Gathering:
Cost Reduction
Customer Satisfaction
Improved Communication.
Enhanced Quality
Risk Management
Accurate Planning
SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
Purpose of an SRS
Guide design and development: Acts as a blueprint for developers to design and
implement the system.
Facilitate testing: Helps testers create test cases to verify that the software meets the
requirements.
Control scope: Helps manage changes by providing a baseline for project requirements.
A Software Requirements Specification (SRS), as the name suggests, is a complete specification and description of the requirements of the software
that need to be fulfilled for the successful development of the software system. These
requirements can be functional as well as non-functional depending upon the type of
requirement. Interaction between the different customers and contractors takes place because it is
necessary to fully understand the needs of the customers.
Depending upon the information gathered from this interaction, the SRS is developed; it describes the
requirements of the software, including any changes and modifications that need to be made to
increase the quality of the product and to satisfy the customer's demands.
Structure of an SRS Document
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, Acronyms, and Abbreviations
1.4 References
1.5 Overview
2. Overall Description
2.1 Product Perspective
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
3. Functional Requirements
- Detailed requirements
4. Non-Functional Requirements
6. Other Requirements
7. Appendices
Formal System Specification refers to the use of mathematical models and formal languages to
precisely define the behavior, functionality, and structure of a software or system. Unlike
informal or semi-formal methods (e.g., natural language or UML diagrams), formal
specifications provide a rigorous, unambiguous foundation for system development, helping to
reduce errors and ambiguities.
Automation: Allows for automated analysis, testing, and even code generation.
1. Formal Languages: Specifies the system using mathematical notations, symbols, and
logic.
2. Syntax: Defines how specifications are written.
3. Semantics: Defines the meaning of the specifications (e.g., the expected behavior of the
system).
4. Models: Mathematical abstractions used to represent the system’s components and their
interactions.
1. System States:
2. Operations:
3. Assertions/Constraints:
o Example (a small executable sketch of this idea appears after this list):
Precondition: x > 0
Postcondition: y = x + 1
5. Proof Obligations:
o Logical statements that must be proved to ensure the specification is correct and
consistent.
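As a rough illustration of the precondition/postcondition example above, the following Python sketch checks the two conditions at run time with assertions. The function name increment is hypothetical, and formal methods would prove these properties rather than test them.

```python
def increment(x: int) -> int:
    """Toy operation specified by: Precondition x > 0, Postcondition y = x + 1."""
    assert x > 0, "precondition violated: x must be positive"
    y = x + 1
    assert y == x + 1, "postcondition violated"
    return y

print(increment(5))  # prints 6; increment(0) would raise an AssertionError
```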
2. Early Detection of Errors: Logical errors in the design can be identified early, saving
time and cost.
AXIOMATIC SPECIFICATION
Axiomatic Specification is a formal method used to specify the behavior of a software system or
its components using mathematical logic, specifically through axioms and rules. This approach
defines the preconditions and postconditions of operations without explicitly describing how
those operations are implemented. It is widely used for abstract data types (ADTs) and formal
verification of system behavior.
1. Axioms:
2. Operations:
o Preconditions specify the conditions that must hold true before an operation is
invoked.
o Postconditions describe the conditions that must hold true after the operation is
executed.
o Example: pop(push(S, e)) = S (popping the stack immediately after pushing e
restores the original stack). A runnable sketch of such a stack specification follows this list.
3. Domain Constraints:
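To make the idea concrete, here is a minimal, hypothetical Python sketch of a stack whose operations are guarded by precondition/postcondition assertions, followed by a check of the pop(push(S, e)) = S axiom. A real axiomatic specification would state these properties in mathematical logic rather than code; all names here are illustrative.

```python
class Stack:
    """A stack whose operations are annotated with pre/postconditions."""

    def __init__(self, items=None):
        self._items = list(items or [])

    def push(self, e):
        # Postcondition: the new top element is e and the size grows by one.
        old_size = len(self._items)
        self._items.append(e)
        assert self._items[-1] == e and len(self._items) == old_size + 1
        return self

    def pop(self):
        # Precondition: the stack must not be empty.
        assert self._items, "precondition violated: pop on empty stack"
        self._items.pop()
        return self

    def __eq__(self, other):
        return self._items == other._items


# Axiom check: pop(push(S, e)) = S for a sample stack S and element e.
S = Stack([1, 2, 3])
assert Stack([1, 2, 3]).push(99).pop() == S
print("axiom pop(push(S, e)) = S holds for the sample")
```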
Advantages:
1. Clarity:
2. Abstraction:
o Focuses on what the system does, not how it does it, allowing multiple
implementations.
3. Correctness:
4. Modularity:
Limitations:
1. Complexity:
o Large systems with many operations can result in very complex sets of axioms.
2. Learning Curve:
3. Undefined Operations:
4. Tool Support:
Applications
Abstract Data Types: Specification of data structures like stacks, queues, and sets.
Database Systems: Specifying the behavior of operations like insertion, deletion, and
queries.
ALGEBRAIC SPECIFICATION
1. Sorts:
2. Operations:
o Categorized into:
3. Equations/Axioms:
4. Signature:
1. Identify Sorts:
2. Define Operations:
3. Specify Axioms:
o Write equations to describe how operations interact and behave (a small sketch follows below).
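As an illustration (not taken from the notes), the sketch below expresses an algebraic-style specification of a stack in Python: the constructors New and Push build values of the sort Stack, and the remaining operations are defined purely by equations over those constructors. All names are hypothetical.

```python
from dataclasses import dataclass
from typing import Any

# Sort "Stack" is generated by two constructor operations: New and Push.
@dataclass(frozen=True)
class New:
    pass

@dataclass(frozen=True)
class Push:
    stack: Any
    element: Any

# Observer operations, defined only by equations over the constructors.
def is_empty(s):
    if isinstance(s, New):
        return True            # isEmpty(new) = true
    return False               # isEmpty(push(s, e)) = false

def top(s):
    if isinstance(s, Push):
        return s.element       # top(push(s, e)) = e
    raise ValueError("top(new) is undefined")

def pop(s):
    if isinstance(s, Push):
        return s.stack         # pop(push(s, e)) = s
    raise ValueError("pop(new) is undefined")

s = Push(Push(New(), 1), 2)
assert top(s) == 2 and pop(s) == Push(New(), 1) and not is_empty(s)
```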
Advantages:
1. Abstraction:
2. Modularity:
3. Clarity:
4. Validation:
5. Implementation Independence:
Applications:
2. Software Libraries:
3. Formal Verification:
4. System Design:
EXECUTABLE SPECIFICATION
1. Executable Model:
o It allows stakeholders (e.g., clients, developers) to test the system's logic without
waiting for the final implementation.
2. Behavior Verification:
3. Rapid Prototyping:
o Early testing can be done with executable specifications to catch errors in logic or
design before the full implementation.
5. Traceability:
Advantages:
1. Validation:
2. Early Feedback:
o Allows users and stakeholders to interact with a prototype and provide feedback
early in the process.
3. Clarity:
4. Automatic Testing:
Limitations:
1. Complexity:
2. Performance:
o While useful for behavioral validation, executable specifications might lack the
detailed functionality of the final system, leading to incomplete testing.
In a banking system, an executable specification might define a simple account object with
behaviors like deposit, withdraw, and balance check. This model would be written in a high-
level language or specification tool and would allow stakeholders to simulate the behavior of
the account object and interact with it.
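A minimal sketch of such an executable specification, assuming a simple Account class with deposit, withdraw, and balance-check behaviors (all names here are illustrative, not taken from the notes):

```python
class Account:
    """Executable specification of a simple bank account's observable behavior."""

    def __init__(self, opening_balance: float = 0.0):
        self._balance = opening_balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid withdrawal")
        self._balance -= amount

    def balance(self) -> float:
        return self._balance


# Stakeholders can "run" the specification to explore the intended behavior.
acct = Account(100.0)
acct.deposit(50.0)
acct.withdraw(30.0)
assert acct.balance() == 120.0
```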
FOURTH GENERATION LANGUAGES (4GLs)
Characteristics of 4GLs
1. Declarative Nature:
o 4GLs often allow programmers to describe what the system should do, rather
than how it should be done. For example, a query in SQL specifies what data is
needed, not how to retrieve it.
2. High-Level Abstraction:
o 4GLs focus on reducing the amount of code developers need to write. Common
tasks, such as database interactions or report generation, can be accomplished
with relatively simple commands.
o 4GLs are designed to be closer to human language, making them easier to learn
and use for non-programmers or domain experts.
5. Productivity-Oriented:
Examples of 4GLs
1. SQL (Structured Query Language):
o Used for querying and manipulating relational databases. SQL allows users to
describe data manipulation in a high-level declarative way (see the short sketch after this list).
2. Report Generators:
o Tools like Crystal Reports are 4GLs that allow users to design reports and query
databases without writing extensive code.
3. Mathematical/Statistical Tools:
o Languages like MATLAB and R focus on mathematical and statistical analysis
with concise and high-level syntax.
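To illustrate the declarative style, here is a small sketch using Python's built-in sqlite3 module; the table and column names are made up. The SQL statement states what rows are wanted, while the database engine decides how to fetch them.

```python
import sqlite3

# In-memory database with a hypothetical "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Asha", 120.0), (2, "Ravi", 75.5), (3, "Asha", 40.0)],
)

# Declarative 4GL-style query: what we want, not how to compute it.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('Asha', 160.0), ('Ravi', 75.5)]
```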
Advantages of 4GLs
1. Increased Productivity:
o Developers can accomplish more with less code, leading to faster development
times.
2. Easier to Learn:
o 4GLs are often designed to be more intuitive and closer to natural language,
which makes them easier for domain experts (non-programmers) to use.
3. Easier Maintenance:
o Since 4GLs abstract away much of the complexity, it is often easier to maintain
applications.
Challenges of 4GLs
1. Less Control:
o The high level of abstraction means that developers may have less control over
system performance and optimization.
2. Limited Flexibility:
o 4GLs may not support all types of applications, especially those requiring fine-
tuned or highly specific logic.
3. Performance Overhead:
o Due to the abstraction layer, applications written in 4GLs may have performance
overhead compared to applications written in lower-level languages.