QB Answer Unit I & II
Part B
1. Explain how software assurance helps in improving software security. What are its main
principles?
Software assurance is the disciplined set of activities, processes, and evidence that gives stakeholders
confidence that software is built, deployed, and operated to meet its security, safety, and correctness
goals. It combines engineering practices (design, coding, testing), governance (policies, risk
management), and evidence (tests, reviews, certification) across the software lifecycle so that
vulnerabilities are prevented, detected, and managed. Its main principles include building security in
from the earliest lifecycle phases, risk-based planning of assurance activities, defense in depth, least
privilege, secure defaults, and producing verifiable evidence (reviews, tests, audits) that security
objectives have actually been met.
2. What are some common threats to software security? Describe their effects on
software.
Software systems face numerous threats that exploit design flaws, coding mistakes, or weak
security practices. These threats compromise the core security objectives of confidentiality,
integrity, and availability. Understanding common threats and their effects helps in building
more secure software.
Common Threats to Software Security and Their Effects
1. Buffer Overflow
Description: Occurs when a program writes more data to a buffer than it can
hold.
Effect: Attackers can overwrite adjacent memory, crash programs, or inject
malicious code, leading to unauthorized control of the system (a short code sketch
illustrating this follows the list below).
2. Code Injection (e.g., SQL Injection, Script Injection)
Description: Malicious code is inserted into input fields or data streams due to
poor input validation.
Effect: Attackers can manipulate databases, steal data, or execute arbitrary
commands within the application.
3. Cross-Site Scripting (XSS)
Description: Injecting malicious scripts into web applications that run on other
users’ browsers.
Effect: Leads to session hijacking, cookie theft, defacement, or redirection to
malicious websites.
4. Session Hijacking
Description: Attackers steal or predict session tokens to impersonate a legitimate
user.
Effect: Unauthorized access to sensitive accounts and actions (e.g., banking
transactions, email access).
5. Denial of Service (DoS) / Distributed DoS (DDoS)
Description: Overloading a system with excessive requests or resource usage.
Effect: Service unavailability, performance degradation, and financial/business
losses.
6. Malware (Viruses, Worms, Trojans)
Description: Malicious software designed to damage, steal, or disrupt operations.
Effect: Data theft, corruption of files, backdoor access for attackers, or spreading
to other systems.
7. Insider Threats
Description: Malicious or careless employees exploiting internal access.
Effect: Unauthorized data disclosure, system sabotage, and violation of
confidentiality.
8. Phishing and Social Engineering
Description: Trick users into revealing sensitive credentials or performing unsafe
actions.
Effect: Credential theft, unauthorized access, and large-scale breaches.
9. Privilege Escalation
Description: Exploiting flaws to gain higher access rights than intended.
Effect: Attackers can modify critical system settings, access sensitive data, or take
full control of systems.
10. Supply Chain Attacks
Description: Compromise of third-party libraries, components, or tools integrated
into software.
Effect: Backdoors introduced into trusted systems, large-scale exploitation
through dependencies.
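To make threat 1 concrete, the following is a minimal C/C++ sketch of a classic stack buffer overflow;
the function and variable names are illustrative only, not taken from any particular system.

    #include <cstring>

    void greet(const char *name) {
        char buf[16];                 // fixed-size buffer on the stack
        std::strcpy(buf, name);       // no bounds check: a name longer than 15 bytes
                                      // overwrites adjacent stack memory, including
                                      // the saved return address
    }

The defenses discussed later (stack canaries, ASLR, DEP) are designed to detect or blunt exactly this
class of bug.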
3. Discuss the advantages of finding and fixing software security issues early in the
development process.
In modern software development, security cannot be treated as an afterthought. Detecting and
fixing security issues early in the Software Development Life Cycle (SDLC), during the
requirements, design, and coding phases, provides significant technical and business benefits.
This approach is often called “shift-left security” or secure software development. Key
advantages include: defects are far cheaper to fix before release than in production; fewer
vulnerabilities reach users, reducing breach, patching, and reputation costs; rework and
schedule slip are reduced because flawed designs are caught before they are built on; and
compliance and audit evidence is easier to produce when security is documented from the start.
4. What are the key properties of secure software? Explain how each property
contributes to overall security.
Secure software is defined not only by the absence of defects but by a set of desirable
properties that together reduce the chance of compromise, limit the impact if compromise
occurs, and make recovery possible. The key properties typically cited are confidentiality
(sensitive data is disclosed only to authorized parties), integrity (data and code cannot be
altered undetected), availability (the service remains usable under load or attack),
authentication and authorization (users and actions are verified and limited to permitted
operations), accountability/auditability (actions can be traced to responsible actors), and
non-repudiation (actions cannot later be denied). Each property closes a different avenue of
attack, and together they determine how hard the software is to compromise and how well it
contains and reveals a compromise when one occurs.
5. Describe stack-based memory and heap-based memory attacks and list defense
mechanisms against memory-based attacks, such as stack canaries and address space
layout randomization (ASLR).
Memory-based attacks exploit weaknesses in how programs manage memory. Two common
targets are the stack and the heap, which are areas of memory used for program execution.
Attackers use these flaws to inject malicious code, corrupt execution flow, or escalate
privileges.
1. Stack-Based Memory Attacks
Stack memory stores local variables, function parameters, and return addresses.
Attack method:
o In a stack buffer overflow, more data than allocated is written into a buffer.
o This can overwrite adjacent variables, control data, or even the return
address on the stack.
o Attackers may redirect program execution to injected malicious code
(shellcode) or use the overflow to trigger arbitrary behavior.
Effects:
o Unauthorized code execution.
o System crashes (Denial of Service).
o Escalation of privileges.
2. Heap-Based Memory Attacks
Heap memory is used for dynamic memory allocation (objects, large data structures).
Attack method:
o In a heap overflow, excessive data is written into heap-allocated memory
blocks.
o Attackers manipulate heap management structures (metadata, pointers) to
overwrite function pointers or sensitive data.
Effects:
o Altering program flow by corrupting pointers.
o Bypassing access controls.
o Gaining arbitrary read/write access in memory (a short code sketch follows below).
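The following is a minimal sketch of the heap overflow pattern described above, where an
unchecked copy into a heap block spills into neighbouring allocator metadata or objects.
The name handle_request is illustrative only.

    #include <cstring>

    void handle_request(const char *attacker_input) {
        char *buf = new char[16];           // heap allocation
        std::strcpy(buf, attacker_input);   // no bounds check: input longer than 15 bytes
                                            // spills past the block and can corrupt heap
                                            // metadata or an adjacent object (for example,
                                            // one holding a function pointer)
        delete[] buf;
    }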
3. Defense Mechanisms Against Memory-Based Attacks
1. Stack Canaries
o Small random values placed next to return addresses on the stack.
o Before returning from a function, the program checks if the canary value is
intact.
o If overwritten (indicating an overflow), the program aborts, preventing
exploitation.
2. Address Space Layout Randomization (ASLR)
o Randomizes the memory addresses where stack, heap, and libraries are loaded.
o Makes it extremely difficult for attackers to predict the location of injected
code or important structures.
o Defeats return-to-libc and code-reuse attacks.
3. Data Execution Prevention (DEP / NX-bit)
o Marks memory regions (stack/heap) as non-executable.
o Prevents injected code from executing even if placed in memory.
4. Safe Libraries and Bounds Checking
o Use of safe string-handling functions (e.g., strncpy instead of strcpy).
o Compiler-based bounds checking (e.g., stack protector flags in GCC).
5. Control Flow Integrity (CFI)
o Ensures program control flow follows a valid path.
o Prevents hijacking of return addresses or function pointers.
6. Memory-safe Languages
o Using languages like Java, Rust, or C# that manage memory automatically
reduces risks of buffer overflows.
Stack-based and heap-based memory attacks exploit weaknesses in memory management to
gain control over software execution. Defense mechanisms such as stack canaries, ASLR,
DEP, safe libraries, and memory-safe languages are critical to preventing these attacks. A
layered approach combining multiple defenses provides the strongest protection against
modern exploits; the build-flag sketch below shows how several of these defenses are
typically enabled at compile and link time.
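A minimal sketch (Makefile-style fragment) of how these defenses map onto common GCC/Clang
options on Linux; exact flags vary by toolchain and platform, so treat this as an illustrative
assumption rather than a definitive configuration.

    # Stack canaries: detect overwritten return addresses before a function returns
    CXXFLAGS  = -fstack-protector-strong
    # Checked variants of common libc copy functions (requires optimization)
    CXXFLAGS += -O2 -D_FORTIFY_SOURCE=2
    # Position-independent executable, so ASLR also randomizes the program image
    CXXFLAGS += -fPIE
    LDFLAGS   = -pie
    # Non-executable stack (DEP / NX-bit) and read-only relocation data
    LDFLAGS  += -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now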
PART C
1.Imagine a web application is experiencing frequent heap-based memory corruption.
Propose a comprehensive plan to address and prevent such issues, including code review
practices and runtime protections.
Immediate containment
1. Mitigate customer impact
o If crashes/compromises are happening in production, throttle or take the
affected service out of rotation (canary → full) while investigating.
o Enable a canary/staggered rollout to limit blast radius for hotfixes.
2. Preserve evidence
o Enable and collect core dumps (ulimit -c unlimited) and preserve them
securely.
o Save logs, heap profiles, crash IDs, and any request payloads that triggered the
failure.
3. Enable temporary runtime safeguards
o Restrict access with firewall rules, rate limits, and stricter authentication if
exploitation is possible.
o Consider enabling stricter memory limits (cgroups) so one process doesn’t
bring down hosts.
Diagnosis & root-cause analysis
1. Reproduce the crash locally or in an isolated environment
o Recreate with the same request patterns/data. If non-deterministic, try
replaying traffic or fuzzing inputs that hit the failing code path.
2. Run memory sanitizers
o AddressSanitizer (ASan): excellent for use-after-free, out-of-bounds,
stack/heap corruption in C/C++ during testing.
o UndefinedBehaviorSanitizer (UBSan): detects undefined behavior.
o MemorySanitizer (MSan) for uninitialized reads.
o LeakSanitizer to detect leaks.
o Run with ASan-enabled builds (compile with -fsanitize=address,undefined).
o If the app is large, run targeted tests for the module that handles the failing
inputs.
3. Heavy-weight profilers / tools
o Valgrind (memcheck) — slow but thorough for root cause.
o Heap profiling (e.g., jemalloc’s stats/prof, tcmalloc heap profiler, perf
flamegraphs).
o GDB plus core dumps to inspect corrupted heap metadata and the stack trace
at crash.
4. Fuzz the vulnerable surface
o Use libFuzzer or AFL for the code path that handles untrusted inputs (parsers,
deserializers, image processing).
o Instrument fuzzing targets to run under ASan (a minimal harness sketch follows this list).
5. Check third-party libs
o Verify versions of libraries that allocate or manage memory (image libraries,
parsers, serialization libs). Look for known CVEs.
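As a concrete illustration of items 2 and 4 above, the following is a minimal libFuzzer harness
compiled with AddressSanitizer. parse_record is a deliberately buggy stand-in for the
application's real input handler; all names here are hypothetical.

    // Build (clang): clang++ -g -O1 -fsanitize=address,fuzzer fuzz_parse.cc -o fuzz_parse
    #include <stddef.h>
    #include <stdint.h>
    #include <cstring>
    #include <vector>

    // Stand-in parser with a deliberate heap overflow so the sanitizer has
    // something to report; replace with the real parsing entry point.
    static void parse_record(const uint8_t *data, size_t size) {
        std::vector<uint8_t> buf(16);
        if (size > 0)
            std::memcpy(buf.data(), data, size > 17 ? 17 : size);  // writes past buf when size >= 17
    }

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);   // ASan aborts with a detailed report on corruption
        return 0;
    }

Running the resulting binary lets libFuzzer generate inputs automatically; when the overflow is hit,
AddressSanitizer prints allocation and corruption stack traces, which is exactly the evidence needed
for root-cause analysis.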
Short-term fixes (while permanent fixes are planned)
Apply input validation / size limits: reject obviously malformed or oversized inputs.
Sanitize/normalize inputs before forwarding to risky code paths.
Turn off or constrain features that expose the vulnerable path (e.g., file uploads,
certain parsers) when feasible.
Hotfix: if a single function is obviously corrupting memory, apply a targeted
validation or conversion to a safer API (e.g., replace strcpy with a bounded copy plus
explicit length checks, as in the sketch below), but treat the hotfix as temporary until
proper fixes and tests exist.
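A minimal sketch of such a targeted hotfix; NAME_MAX_LEN and copy_name are illustrative names,
not taken from any real codebase.

    #include <stdio.h>
    #include <string.h>

    #define NAME_MAX_LEN 64

    // Before (unsafe): strcpy(dst, src) writes past dst whenever src is too long.
    // After: reject oversized input, then copy with an explicit bound and terminator.
    int copy_name(char dst[NAME_MAX_LEN], const char *src) {
        if (src == NULL || strlen(src) >= NAME_MAX_LEN)
            return -1;                           // input validation / size limit
        snprintf(dst, NAME_MAX_LEN, "%s", src);  // bounded copy, always NUL-terminated
        return 0;
    }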
Code-level remediation (permanent)
1. Fix root causes
o Identify and fix the specific bug(s): off-by-one, incorrect size calculation, use-
after-free, double-free, invalid pointer arithmetic, improper realloc usage.
o Replace unsafe APIs with safe alternatives and explicit bounds checks.
2. Use safer memory idioms
o C++: prefer RAII, std::vector/std::string, unique_ptr/shared_ptr over raw
new/delete. Consider std::span for safe view semantics (a short sketch follows this list).
o C: use bounded APIs and encapsulate allocation/deallocation in well-tested
helpers. Consider using calloc for zero-initialization if helpful.
o Where feasible, migrate high-risk modules to memory-safe languages (Rust,
Java, Go) or sandbox them as separate services.
3. Defensive coding patterns
o Validate sizes before malloc/realloc; check return values.
o After free, set pointer to NULL to avoid double free/use-after-free.
o Avoid returning pointers into owned buffers that can be freed by caller unless
contract is clear.
o Clear sensitive memory before free if needed.
4. Harden allocator usage
o Use hardened/more-debuggable allocators for dev/testing (e.g., jemalloc with
opt flags, hardened malloc).
o Avoid custom allocators unless absolutely necessary and audit them carefully.
5. Fix concurrency-induced corruption
o Ensure correct locking for shared heap structures. Race conditions can lead to
memory corruption.
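The sketch below illustrates the safer idioms from the "Use safer memory idioms" item;
load_record, Session, and make_session are illustrative names, not from the original system.

    #include <cstddef>
    #include <memory>
    #include <vector>

    // Risky idiom: raw new[]/delete[] with a separately tracked length is easy to
    // leak, double-free, or index out of bounds.
    // Safer idiom: the container owns the memory, knows its own size, and frees it
    // automatically when it goes out of scope (RAII).
    std::vector<unsigned char> load_record(std::size_t len) {
        std::vector<unsigned char> buf(len);   // zero-initialized, freed automatically
        // buf.at(i) throws on an out-of-bounds index instead of corrupting memory
        return buf;
    }

    // Heap object ownership expressed with a smart pointer instead of raw new/delete.
    struct Session { int id; };
    std::unique_ptr<Session> make_session(int id) {
        return std::make_unique<Session>(Session{id});   // no manual delete, no double free
    }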
2. Evaluate a recent case study of a major software security breach. Analyze the sources
of insecurity involved and propose a set of measures to prevent similar breaches in the
future.
Short summary of the breach (what happened)
In 2020 threat actors inserted a backdoor (commonly called SUNBURST / Solorigate) into
SolarWinds’ Orion product build, which was distributed as digitally-signed, legitimate
software updates to thousands of customers. After a dormant period the backdoor contacted
attacker infrastructure and enabled follow-on intrusion activities against high-value victims
including U.S. federal agencies and major private companies. The compromise went
undetected for months and affected a large supply chain of downstream customers.
Key sources of insecurity (root causes)
Below are the primary weakness areas that enabled the attack, with brief explanations.
1. Compromise of the software build and update pipeline
o The attacker modified the Orion build artifacts so malicious code was included
in official releases. When the vendor’s build/signing process is compromised,
all customers trusting those updates are exposed. (This was the central vector.)
2. Insufficient supply-chain visibility & vendor validation
o Many customers lacked inventory (SBOMs) and relied on a trusted vendor
binary without fine-grained verification of what was inside; that increased
blast radius when signed software was trojanized.
3. Over-privileged trust relationships and lateral access
o Orion is a network management product with privileged visibility and
connectivity; attackers abused those privileges after initial compromise to
move laterally into sensitive environments. Weak segmentation and excessive
service permissions made exploitation much easier.
4. Slow detection & inadequate monitoring
o The compromise persisted for months before it was detected and fully remediated.
Detection was hampered by stealthy attacker behavior (a dormant period, living-
off-the-land techniques) and gaps in telemetry/monitoring.
5. Organizational failures (governance, communication, vendor security posture)
o Post-incident reviews highlighted weaknesses in vendor security governance,
control implementation around build systems, and timely disclosure and
information sharing. Regulatory scrutiny and audits followed.
Proposed measures to prevent similar breaches
1. Harden the build and release pipeline: isolated, access-controlled build systems,
reproducible and verified builds, integrity monitoring of build artifacts, and protection
of code-signing keys.
2. Improve supply-chain visibility: maintain SBOMs, vet and pin third-party components,
and verify the contents of vendor updates before deployment.
3. Enforce least privilege and segmentation: restrict the permissions and network reach of
management tools, and isolate them from sensitive assets so a compromised component
cannot roam freely.
4. Strengthen detection and response: collect and correlate telemetry (endpoint, network,
identity), hunt for anomalous outbound connections, and rehearse incident-response
playbooks to shorten dwell time.
5. Improve governance and information sharing: regular vendor security assessments,
audits of build-system controls, and timely disclosure to customers and authorities.
3. You are tasked with improving the security of an existing software system. Outline a
plan that includes assessing current threats, implementing secure coding practices, and
applying memory protection techniques.
UNIT – II
1. What is Requirements Engineering?
Requirements Engineering (RE) is the systematic process of eliciting, analyzing,
documenting, and managing the requirements of a software system to ensure it meets
stakeholder needs and security objectives.
2. List the tools involved in Requirements Engineering.
Interviews & Questionnaires
Use Case Diagrams / Scenarios
Prototyping Tools
Checklists
Requirements Management Tools (e.g., DOORS, JIRA, RequisitePro)
3. What is the primary goal of the SQUARE process model in secure software
development?
The primary goal is to identify, prioritize, and document security requirements
systematically so that security is integrated into software from the early development stages.
4. Name two key activities involved in the requirements elicitation phase of secure
software development.
1. Stakeholder Interviews/Workshops – to gather requirements.
2. Threat Modeling / Risk Analysis – to identify potential security issues.
5. Give an example of how untrusted executable content can affect software security.
Example: A malicious JavaScript in a web page (XSS attack) can steal user cookies or
session IDs, leading to account hijacking.
6. What role does stack inspection play in mitigating the risk of buffer overflows?
Stack inspection checks the runtime call stack to ensure that code has proper permissions
before execution, thereby preventing untrusted code from exploiting buffer overflows.
7. Why is it important to understand vulnerability trends when developing secure
software?
Understanding trends helps developers:
Anticipate emerging threats.
Apply timely patches and best practices.
Design software that resists the most common attack patterns.
8. Write the concept of session hijacking.
Session hijacking is an attack where an attacker steals or predicts a valid session ID to
impersonate a legitimate user and gain unauthorized access to a system.
9. Mention one security design principle that helps in reducing vulnerabilities in
software.
Principle of Least Privilege – Every user or process should be given the minimum level of
access necessary to perform its function.
10. What is code injection, and why is it considered a significant security threat?
Code injection is an attack where malicious code is inserted into a program (e.g., SQL
injection, XSS).
It is significant because it can lead to data theft, unauthorized access, or full system
compromise.
PART B
1. Explain the SQUARE process model and its phases in securing software and explain
its strengths and limitations.
The SQUARE (Security Quality Requirements Engineering) process model is a structured
methodology for identifying, categorizing, and prioritizing security requirements in software
development.
It consists of nine phases:
1. Agree on Definitions – Establish common security terminology among stakeholders.
2. Identify Security Goals – Determine the overall security objectives.
3. Develop Artifacts – Collect supporting documents (use cases, system models).
4. Perform Risk Assessment – Identify threats, vulnerabilities, and risks.
5. Elicit Security Requirements – Gather requirements using interviews, checklists,
etc.
6. Categorize Requirements – Group into confidentiality, integrity, availability, etc.
7. Prioritize Requirements – Rank based on risk, impact, and feasibility.
8. Inspect Requirements – Review for clarity, consistency, completeness.
9. Document Security Requirements – Create final security requirements
specification.
Strengths:
Provides structured and repeatable process.
Early identification of security concerns.
Encourages stakeholder involvement.
Improves traceability of requirements.
Limitations:
Time-consuming and resource-intensive.
Requires skilled security experts.
May not fit agile/fast-paced development well.
Focuses more on requirements, less on design/implementation security.
Together, elicitation and prioritization ensure that software meets real-world security
demands in an efficient manner.
3. Explain how stack inspection prevents attacks like buffer overflows and compare it
with other security mechanisms.
Answer:
Stack inspection is a security mechanism used to check the call stack of a program at
runtime to verify whether the execution has sufficient permissions.
In buffer overflow attacks, attackers inject malicious code into the stack. Stack
inspection helps by:
o Verifying the permissions of every function in the call chain before a privileged operation is allowed.
o Preventing untrusted code from performing privileged actions, even when it is invoked
(directly or through corrupted control flow) by trusted code.
Comparison with other mechanisms: stack canaries, DEP/NX, and ASLR operate at the
memory level, detecting overwritten return addresses or making injected code unusable,
whereas stack inspection is a language-runtime control (as used in Java and .NET) that
checks caller permissions regardless of how control arrived. It therefore complements,
rather than replaces, memory-level defenses.
PART – C
1. For an online banking application, apply the SQUARE process model to identify and
address security requirements. Outline how each phase would contribute to securing the
application.
Applying SQUARE to online banking application:
1. Agree on Definitions – Define terms like authentication, fraud, phishing.
2. Identify Security Goals – Confidentiality of user data, integrity of transactions,
availability of services.
3. Develop Artifacts – Use cases (fund transfer, bill payments), threat models.
4. Risk Assessment – Identify risks like SQL injection, session hijacking, insider
threats.
5. Elicit Security Requirements – Multi-factor authentication, encryption of data,
secure APIs.
6. Categorize Requirements – Authentication, authorization, transaction security,
auditing.
7. Prioritize Requirements – Rank MFA and encryption as top priorities, less critical
features later.
8. Inspect Requirements – Review for completeness, e.g., whether password recovery
is secure.
9. Document Requirements – Finalized Security Requirement Specification (SRS)
for developers.
This ensures the online banking app is systematically secured against major cyber threats.
2. Design a comprehensive approach to protect a web application from SQL injection
and XSS attacks, including code review practices and security measures.
Approach:
1. SQL Injection Protection:
o Use prepared statements (parameterized queries) and stored procedures (see the sketch after this list).
o Validate and constrain all input (type, length, allow-lists) and run the application under a least-privilege database account.
2. XSS Protection:
o Encode/escape all user-supplied data on output for the correct context (HTML, attribute, JavaScript).
o Apply a Content Security Policy (CSP), sanitize rich-text input, and mark session cookies HttpOnly.
3. Code Review and Testing Practices:
o Peer-review every code path that builds queries or renders user input; run static analysis (SAST) and dynamic scanning (DAST) in the CI pipeline.
o Deploy a web application firewall (WAF) as an additional runtime defense layer.
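A minimal sketch of item 1, assuming a SQLite C API backend purely for illustration (the
application's real data-access layer is not specified, and find_user is a hypothetical helper).
The query text and the user-supplied value travel separately, so input such as ' OR '1'='1 is
treated as data, never as SQL.

    #include <sqlite3.h>

    int find_user(sqlite3 *db, const char *username) {
        sqlite3_stmt *stmt = NULL;
        const char *sql = "SELECT id FROM users WHERE name = ?;";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);  // bound as data
        int id = -1;
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
        return id;
    }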
Conclusion: Combining secure coding, proactive reviews, and runtime defenses provides
strong protection against SQLi and XSS.