CB3591 – Engineering Secure Software Systems

Question Bank and Answer


UNIT-I
Part A
1. What is software assurance?
Software assurance is the level of confidence that software is free from
vulnerabilities, functions as intended, and is protected against unauthorized access or
modification.
2. Name two common threats to software security.
 Buffer overflow
 Code injection (e.g., SQL injection, script injection)

3. Write the benefits of detecting software security issues early.


 Reduces overall cost of fixing vulnerabilities.
 Prevents exploitation and improves system reliability.

4. What is meant by 'confidentiality' in secure software?


Confidentiality ensures that sensitive information is accessible only to authorized users and
protected from unauthorized disclosure.
5. Define 'integrity' in the context of secure software.
Integrity means maintaining the accuracy, consistency, and trustworthiness of data by
preventing unauthorized modifications.
6. What is a stack buffer overflow?
A stack buffer overflow occurs when more data is written to a stack-allocated buffer than it
can hold, leading to overwriting of adjacent memory.
7. How does a heap overflow differ from a stack overflow?
 Heap overflow: Occurs in dynamically allocated memory (heap).
 Stack overflow: Occurs in function call stack memory.
8. Name one technique used to prevent stack buffer overflows.
Use of stack canaries (guard values) to detect buffer overwrite.
9. What role does data execution prevention (DEP) play in defending against memory-
based attacks?
DEP prevents execution of code from non-executable memory regions (like stack/heap),
blocking injected malicious code.
10. Why is input validation important in preventing memory-based attacks?
Proper input validation ensures only valid, expected data is processed, preventing malicious
input from triggering overflows or injections.

Part B
1. Explain how software assurance helps in improving software security. What are its main
principles?

Software assurance is the disciplined set of activities, processes and evidence that gives stakeholders
confidence that software is built, deployed and operated to meet its security, safety and correctness
goals. It combines engineering practices (design, coding, testing), governance (policies, risk
management), and evidence (tests, reviews, certification) across the software lifecycle so that
vulnerabilities are prevented, detected and managed.

How software assurance improves software security


1. Finds and fixes vulnerabilities early (shift-left).
Applying assurance activities during requirements, design and coding (threat modeling,
secure code reviews, SAST) reduces the number and severity of vulnerabilities that reach
production — lowering exploit risk and cost of remediation.
2. Focuses effort on the highest risks (risk-based approach).
Risk assessment and prioritization ensure scarce resources target the most damaging threats
(e.g., authentication flaws, data leakage), improving overall security ROI.
3. Raises the quality of design and architecture.
Architectural reviews and secure design patterns (least-privilege, separation of concerns,
defense-in-depth) remove systemic weaknesses that single bug fixes cannot correct.
4. Improves implementation hygiene.
Coding standards, automated static analysis, dependency checks and peer reviews reduce
common programming errors (buffer overflows, injection points) that are the root cause of
many attacks.
5. Provides rigorous verification and validation.
A mix of dynamic testing (DAST), fuzzing, security unit tests, and penetration testing
demonstrates that implemented behavior matches security requirements and uncovers live-
path flaws.
6. Secures the supply chain and build process.
Practices such as software bill of materials (SBOM), signed builds, dependency vulnerability
scanning and vendor vetting reduce risks from third-party libraries and compromised
toolchains.
7. Creates operational visibility and resilience.
Logging, monitoring, alerting and patch management detect and contain exploitation attempts
quickly; assurance includes playbooks and incident response so recovery is resilient.
8. Produces traceable evidence for assurance and compliance.
Artifacts — threat models, test reports, code review records, SBOMs, security cases — let
auditors, customers and developers verify that security claims are supported by concrete
evidence.
Each of these improvements reduces the probability or impact of successful attacks and increases
stakeholder confidence in the software.

Main principles of software assurance


1. Risk-based prioritization — focus verification and mitigation on the assets and threats that
matter most.
2. Shift-left / Early verification — perform security activities as early as possible (requirements
→ design → code) to prevent costly late fixes.
3. Defense-in-depth — use multiple independent controls (network, host, app, identity) so a
single failure does not lead to compromise.
4. Least privilege & minimal attack surface — grant the minimum rights necessary and reduce
exposed interfaces to limit what attackers can exploit.
5. Secure-by-design and secure-by-default — make secure choices in architecture and provide
safe default configurations that do not require extra hardening.
6. Evidence-based assurance & traceability — collect and retain artifacts (tests, reviews,
SBOM, signed builds) that prove security claims and enable audits.
7. Continuous verification & monitoring — security is ongoing: use automated pipelines,
regression tests, runtime monitoring and periodic reassessments.
8. Supply-chain security — ensure components, libraries and tooling are trusted, versioned, and
verifiable (SBOMs, vendor attestations, signed binaries).
9. Human factors & governance — include training, roles (e.g., security champion), policies and
accountability to ensure practices are followed and improved.

2. What are some common threats to software security? Describe their effects on
software.
Software systems face numerous threats that exploit design flaws, coding mistakes, or weak
security practices. These threats compromise the core security objectives of confidentiality,
integrity, and availability. Understanding common threats and their effects helps in building
more secure software.
Common Threats to Software Security and Their Effects
1. Buffer Overflow
 Description: Occurs when a program writes more data to a buffer than it can
hold.
 Effect: Attackers can overwrite adjacent memory, crash programs, or inject
malicious code leading to unauthorized control of the system.
2. Code Injection (e.g., SQL Injection, Script Injection)
 Description: Malicious code is inserted into input fields or data streams due to
poor input validation.
 Effect: Attackers can manipulate databases, steal data, or execute arbitrary
commands within the application.
3. Cross-Site Scripting (XSS)
 Description: Injecting malicious scripts into web applications that run on other
users’ browsers.
 Effect: Leads to session hijacking, cookie theft, defacement, or redirection to
malicious websites.
4. Session Hijacking
 Description: Attackers steal or predict session tokens to impersonate a legitimate
user.
 Effect: Unauthorized access to sensitive accounts and actions (e.g., banking
transactions, email access).
5. Denial of Service (DoS) / Distributed DoS (DDoS)
 Description: Overloading a system with excessive requests or resource usage.
 Effect: Service unavailability, performance degradation, and financial/business
losses.
6. Malware (Viruses, Worms, Trojans)
 Description: Malicious software designed to damage, steal, or disrupt operations.
 Effect: Data theft, corruption of files, backdoor access for attackers, or spreading
to other systems.
7. Insider Threats
 Description: Malicious or careless employees exploiting internal access.
 Effect: Unauthorized data disclosure, system sabotage, and violation of
confidentiality.
8. Phishing and Social Engineering
 Description: Trick users into revealing sensitive credentials or performing unsafe
actions.
 Effect: Credential theft, unauthorized access, and large-scale breaches.
9. Privilege Escalation
 Description: Exploiting flaws to gain higher access rights than intended.
 Effect: Attackers can modify critical system settings, access sensitive data, or take
full control of systems.
10. Supply Chain Attacks
 Description: Compromise of third-party libraries, components, or tools integrated
into software.
 Effect: Backdoors introduced into trusted systems, large-scale exploitation
through dependencies.

General Effects of Software Security Threats


 Loss of confidentiality: Sensitive data exposed (personal info, financial details).
 Loss of integrity: Unauthorized modification or corruption of data.
 Loss of availability: Services disrupted, causing downtime and loss of trust.
 Financial and reputational damage: Customer trust declines, leading to regulatory
fines and brand harm.

3. Discuss the advantages of finding and fixing software security issues early in the
development process.
In modern software development, security cannot be treated as an afterthought. Detecting and
fixing security issues early in the Software Development Life Cycle (SDLC) — during
requirements, design, and coding phases — provides significant technical and business
benefits. This approach is often called “shift-left security” or “Secure Software
Development”.

Advantages of Finding and Fixing Security Issues Early


1. Lower Cost of Remediation
 Fixing a bug in the requirements or design stage is far cheaper than fixing it after
deployment.
 Industry studies show that costs increase exponentially when vulnerabilities are found late
(e.g., design flaw vs. production patch).
2. Reduced Risk of Exploitation
 Early detection prevents vulnerabilities from reaching end users.
 This reduces the chances of attackers exploiting weaknesses such as injection flaws,
buffer overflows, or misconfigurations.
3. Improved Software Quality
 Secure coding practices and reviews enhance not only security but also stability and
reliability.
 Preventing errors early leads to cleaner, more maintainable code.
4. Better Design Decisions
 Threat modeling and risk assessment during design ensure that security principles (least
privilege, defense-in-depth, fail-safe defaults) are embedded in architecture.
 Avoids costly redesign later.
5. Saves Time in Testing and Deployment
 If vulnerabilities are fixed early, the testing phase becomes smoother, with fewer last-
minute blockers.
 Faster time-to-market is achieved without sacrificing security.
6. Supports Compliance and Legal Requirements
 Regulations like GDPR, HIPAA, and PCI DSS require secure handling of sensitive data.
 Early security integration ensures compliance before audits, avoiding fines or penalties.
7. Enhances Customer Trust
 Delivering secure software increases user confidence.
 Prevents damage to reputation caused by data breaches or downtime.
8. Reduces Technical Debt
 Ignoring security flaws early creates “security debt” that grows over time.
 Early fixes reduce long-term maintenance costs and complexity.
9. Facilitates Continuous Security Culture
 Encourages developers, testers, and designers to consider security as part of their role, not
an afterthought.
 Builds a proactive security culture within the organization.
Finding and fixing security issues early in the development process provides both technical
benefits (lower cost, improved quality, reduced risks) and business advantages (compliance,
trust, faster delivery). By integrating security from the beginning — through secure coding
practices, threat modeling, and automated analysis — organizations can achieve stronger,
safer, and more cost-effective software.

4. What are the key properties of secure software? Explain how each property
contributes to overall security.

Secure software is defined not only by the absence of defects but by a set of desirable
properties that together reduce the chance of compromise, limit impact if compromise occurs,
and make recovery possible. Below are the key properties, what each means, how it
contributes to security, typical controls that provide it, and how you can verify it.

1. Confidentiality: Ensuring information is accessible only to authorized parties.


o How it contributes: Prevents unauthorized disclosure of sensitive data
(personal data, secrets, keys), stopping attackers from gaining intelligence that
could be used for further attacks or privacy violations.
o Typical controls: Strong encryption at rest and in transit (TLS, disk
encryption), access control lists, role-based access control (RBAC),
tokenization, data classification.
o Verification / tests: Penetration testing, access control reviews, crypto
configuration scans, data-leakage testing.
2. Integrity: Ensuring data and system state are accurate and unaltered except by
authorized actions.
o How it contributes: Prevents unauthorized modification (tampering) of code,
configuration, and data that could subvert functionality or cause incorrect
decisions.
o Typical controls: Cryptographic hashes and digital signatures, message
authentication codes (MACs), input validation, checksums, secure update
signing, immutability for critical records.
o Verification / tests: File integrity monitoring, code-signature verification,
fuzzing and fault-injection testing to detect unexpected state changes.
3. Availability: Ensuring system services and data are accessible when needed by
authorized users.
o How it contributes: Protects business continuity and user experience;
prevents attackers from causing outages or resource exhaustion (DoS/DDoS).
o Typical controls: Redundancy, failover, autoscaling, DDoS protection, rate-
limiting, backups, capacity planning, health checks.
o Verification / tests: Load and stress testing, chaos engineering, failover drills,
RTO/RPO verification.
4. Authenticity: Ensuring identities (users, services) and artifacts (messages, binaries)
are genuine.
o How it contributes: Stops impersonation attacks and ensures that actions or
data originate from trusted sources.
o Typical controls: Strong authentication (passwordless, MFA), PKI, mutual
TLS, signed binaries and tokens, provenance and origin checks.
o Verification / tests: Authentication flows testing, certificate/PKI audits,
verifying signatures.
5. Authorization / Access Control: Enforcing which authenticated entities may perform
which actions on which resources.
o How it contributes: Limits the blast radius when accounts are compromised
and ensures least-privilege operation.
o Typical controls: RBAC/ABAC, privilege separation, capability-based
access, privilege escalation controls, just-in-time (JIT) access.
o Verification / tests: Access-control policy reviews, permission audits,
horizontal/vertical privilege escalation tests.
6. Accountability / Auditability: The ability to trace actions to responsible actors and
review historical events.
o How it contributes: Deters misuse, enables forensics and incident response,
and supports legal/compliance requirements.
o Typical controls: Tamper-evident logging, centralized log collection (SIEM),
immutable audit trails, timestamping, user and admin activity logs.
o Verification / tests: Log integrity checks, audit log completeness reviews,
retention policy checks, simulated incident investigations.
7. Non-repudiation: Assurance that an entity cannot deny having performed an action.
o How it contributes: Provides legal and operational certainty for transactions
and security events (e.g., contract signing, financial operations).
o Typical controls: Digital signatures, secure time-stamping, signed transaction
records, cryptographic evidence.
o Verification / tests: Signature verification, end-to-end transaction traceability
exercises.
8. Resilience / Robustness / Fault Tolerance: The ability to continue functioning in the
presence of faults, attacks, or unexpected input.
o How it contributes: Reduces impact of successful attacks or failures and
helps maintain availability and integrity under stress.
o Typical controls: Graceful degradation, input validation, error handling,
circuit breakers, redundancy, sandboxing, memory-safety techniques.
o Verification / tests: Fuzzing, fault-injection tests, chaos testing, negative
testing.
9. Privacy: Protecting personal and sensitive data according to legal, ethical and user
expectations (related to but distinct from confidentiality).
o How it contributes: Limits data collection and retention, reducing exposure
and legal/regulatory risk; builds user trust.
o Typical controls: Data minimization, consent mechanisms,
anonymization/pseudonymization, retention and deletion policies, privacy
impact assessments.
o Verification / tests: Privacy audits, data-flow analysis, compliance checks
(GDPR/HIPAA mappings).
10. Maintainability & Patchability: The ease with which software can be updated,
fixed, and securely maintained over time.
o How it contributes: Enables timely vulnerability remediation and reduces the
window of exposure for newly discovered flaws.
o Typical controls: Modular code, automated testing, CI/CD pipelines with
security gates, signed and verifiable update mechanisms.
o Verification / tests: Patch deployment drills, update integrity checks,
time-to-patch metrics.

5.Describe stack-based memory and heap-based memory attacks and list defense
mechanisms against memory-based attacks, such as stack canaries and address space
layout randomization (ASLR).
Memory-based attacks exploit weaknesses in how programs manage memory. Two common
targets are the stack and the heap, which are areas of memory used for program execution.
Attackers use these flaws to inject malicious code, corrupt execution flow, or escalate
privileges.
1. Stack-Based Memory Attacks
 Stack memory stores local variables, function parameters, and return addresses.
 Attack method:
o In a stack buffer overflow, more data than allocated is written into a buffer.
o This can overwrite adjacent variables, control data, or even the return
address on the stack.
o Attackers may redirect program execution to injected malicious code
(shellcode) or use the overflow to trigger arbitrary behavior.
 Effects:
o Unauthorized code execution.
o System crashes (Denial of Service).
o Escalation of privileges.
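The overflow mechanism above can be sketched in C. This is a hypothetical illustration (the function and buffer names are invented): an unchecked strcpy into a fixed-size stack buffer is exactly the overflow described, while a bounded variant rejects input that would not fit.

```c
#include <string.h>

/* Hypothetical sketch of the overflow described above. An unchecked
 * strcpy(dst, src) into a fixed-size stack buffer writes past the end
 * when src is too long, clobbering adjacent locals and the saved
 * return address. This bounded variant rejects oversized input. */
int copy_bounded(char *dst, size_t dst_size, const char *src) {
    if (strlen(src) >= dst_size)       /* no room for the NUL terminator */
        return -1;                     /* reject instead of overflowing */
    memcpy(dst, src, strlen(src) + 1); /* copy string plus terminator */
    return 0;
}
```

A caller with `char buf[8];` gets 0 for inputs up to 7 characters and -1 otherwise; the strcpy version would silently corrupt the stack instead.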
2. Heap-Based Memory Attacks
 Heap memory is used for dynamic memory allocation (objects, large data structures).
 Attack method:
o In a heap overflow, excessive data is written into heap-allocated memory
blocks.
o Attackers manipulate heap management structures (metadata, pointers) to
overwrite function pointers or sensitive data.
 Effects:
o Altering program flow by corrupting pointers.
o Bypassing access controls.
o Gaining arbitrary read/write access in memory.
3. Defense Mechanisms Against Memory-Based Attacks
1. Stack Canaries
o Small random values placed next to return addresses on the stack.
o Before returning from a function, the program checks if the canary value is
intact.
o If overwritten (indicating an overflow), the program aborts, preventing
exploitation.
2. Address Space Layout Randomization (ASLR)
o Randomizes the memory addresses where stack, heap, and libraries are loaded.
o Makes it extremely difficult for attackers to predict the location of injected
code or important structures.
o Defeats return-to-libc and code-reuse attacks.
3. Data Execution Prevention (DEP / NX-bit)
o Marks memory regions (stack/heap) as non-executable.
o Prevents injected code from executing even if placed in memory.
4. Safe Libraries and Bounds Checking
o Use of safe string-handling functions (e.g., strncpy instead of strcpy).
o Compiler-based bounds checking (e.g., stack protector flags in GCC).
5. Control Flow Integrity (CFI)
o Ensures program control flow follows a valid path.
o Prevents hijacking of return addresses or function pointers.
6. Memory-safe Languages
o Using languages like Java, Rust, or C# that manage memory automatically
reduces risks of buffer overflows.
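The canary mechanism from defense 1 can be mimicked by hand. A sketch under stated assumptions: real canaries are inserted by the compiler (e.g. GCC's -fstack-protector-strong) next to the saved return address, while the struct and function names here are invented purely to show the check-before-return idea.

```c
#include <stdint.h>
#include <string.h>

/* Hand-rolled mimic of a stack canary: a guard value sits just past the
 * buffer; an overflow that runs off the end of buf overwrites it, and
 * the check performed before returning detects the corruption. */
struct frame {
    char buf[16];
    uint32_t canary;               /* guard value placed after the buffer */
};

int write_checked(struct frame *f, uint32_t expected,
                  const char *data, size_t n) {
    memcpy(f->buf, data, n);       /* n > 16 spills into the canary */
    if (f->canary != expected)     /* canary check before "returning" */
        return -1;                 /* corruption detected: abort the path */
    return 0;
}
```

A write of 20 bytes into the 16-byte buffer changes the canary, so the check fails; a short write leaves it intact.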
Stack-based and heap-based memory attacks exploit weaknesses in memory management to
gain control over software execution. Defense mechanisms such as stack canaries, ASLR,
DEP, safe libraries, and memory-safe languages are critical to preventing these attacks. A
layered approach combining multiple defenses provides the strongest protection against
modern exploits.

PART C
1.Imagine a web application is experiencing frequent heap-based memory corruption.
Propose a comprehensive plan to address and prevent such issues, including code review
practices and runtime protections.
1. Mitigate customer impact
o If crashes/compromises are happening in production, throttle or take the
affected service out of rotation (canary → full) while investigating.
o Enable a canary/staggered rollout to limit blast radius for hotfixes.
2. Preserve evidence
o Enable and collect core dumps (ulimit -c unlimited) and preserve them
securely.
o Save logs, heap profiles, crash IDs, and any request payloads that triggered the
failure.
3. Enable temporary runtime safeguards
o Restrict access with firewall rules, rate limits, and stricter authentication if
exploitation is possible.
o Consider enabling stricter memory limits (cgroups) so one process doesn’t
bring down hosts.
Diagnosis & root-cause analysis
1. Reproduce the crash locally or in an isolated environment
o Recreate with the same request patterns/data. If non-deterministic, try
replaying traffic or fuzzing inputs that hit the failing code path.
2. Run memory sanitizers
o AddressSanitizer (ASan): excellent for use-after-free, out-of-bounds,
stack/heap corruption in C/C++ during testing.
o UndefinedBehaviorSanitizer (UBSan): detects undefined behavior.
o MemorySanitizer (MSan) for uninitialized reads.
o LeakSanitizer to detect leaks.
o Run with ASan-enabled builds (compile with -fsanitize=address,undefined).
o If the app is large, run targeted tests for the module that handles the failing
inputs.
3. Heavy-weight profilers / tools
o Valgrind (memcheck) — slow but thorough for root cause.
o Heap profiling (e.g., jemalloc’s stats/prof, tcmalloc heap profiler, perf
flamegraphs).
o GDB plus core dumps to inspect corrupted heap metadata and the stack trace
at crash.
4. Fuzz the vulnerable surface
o Use libFuzzer or AFL for the code path that handles untrusted inputs (parsers,
deserializers, image processing).
o Instrument fuzzing targets to run under ASan.
5. Check third-party libs
o Verify versions of libraries that allocate or manage memory (image libraries,
parsers, serialization libs). Look for known CVEs.
Short-term fixes (while permanent fixes are planned)
 Apply input validation / size limits: reject obviously malformed or oversized inputs.
 Sanitize/normalize inputs before forwarding to risky code paths.
 Turn off or constrain features that expose the vulnerable path (e.g., file uploads,
certain parsers) when feasible.
 Hotfix: if a single function is obviously corrupting memory, apply a targeted
validation or conversion to a safer API (e.g., replace strcpy with strncpy+explicit
length checks) — but treat hotfix as temporary until proper fixes + tests exist.
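The input-validation hotfix above can look like this in C. A minimal sketch (parse_port and the 1–65535 range are invented for illustration): length, character-class, and range checks all run before the value reaches any risky code path.

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical validation sketch: accept only a short decimal string in
 * a known range, so malformed or oversized input is rejected before it
 * reaches memory-sensitive code. */
int parse_port(const char *s, int *out) {
    size_t len = strlen(s);
    if (len == 0 || len > 5)                    /* size limit first */
        return -1;
    for (size_t i = 0; i < len; i++)
        if (!isdigit((unsigned char)s[i]))      /* digits only */
            return -1;
    long v = strtol(s, NULL, 10);
    if (v < 1 || v > 65535)                     /* range check */
        return -1;
    *out = (int)v;
    return 0;
}
```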
Code-level remediation (permanent)
1. Fix root causes
o Identify and fix the specific bug(s): off-by-one, incorrect size calculation, use-
after-free, double-free, invalid pointer arithmetic, improper realloc usage.
o Replace unsafe APIs with safe alternatives and explicit bounds checks.
2. Use safer memory idioms
o C++: prefer RAII, std::vector/std::string, unique_ptr/shared_ptr over raw
new/delete. Consider std::span for safe view semantics.
o C: use bounded APIs and encapsulate allocation/deallocation in well-tested
helpers. Consider using calloc for zero-initialization if helpful.
o Where feasible, migrate high-risk modules to memory-safe languages (Rust,
Java, Go) or sandbox them as separate services.
3. Defensive coding patterns
o Validate sizes before malloc/realloc; check return values.
o After free, set pointer to NULL to avoid double free/use-after-free.
o Avoid returning pointers into owned buffers that can be freed by caller unless
contract is clear.
o Clear sensitive memory before free if needed.
4. Harden allocator usage
o Use hardened/more-debuggable allocators for dev/testing (e.g., jemalloc with
opt flags, hardened malloc).
o Avoid custom allocators unless absolutely necessary and audit them carefully.
5. Fix concurrency-induced corruption
o Ensure correct locking for shared heap structures. Race conditions can lead to
memory corruption.
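The defensive patterns above can be combined into a small allocation helper. A sketch assuming a hypothetical 4 KB record limit (MAX_RECORD, alloc_record, and free_record are invented names): the size is validated before malloc, the result is checked and zeroed, and the pointer is nulled on free so it cannot be double-freed or used after free through that variable.

```c
#include <stdlib.h>
#include <string.h>

#define MAX_RECORD 4096   /* hypothetical upper bound for one record */

/* Validate the size, check the malloc result, and zero-initialize. */
char *alloc_record(size_t n) {
    if (n == 0 || n > MAX_RECORD)
        return NULL;               /* reject absurd sizes up front */
    char *p = malloc(n);
    if (p != NULL)
        memset(p, 0, n);           /* no uninitialized reads */
    return p;
}

/* Free through a pointer-to-pointer and null it, so the caller's
 * variable cannot be double-freed or dereferenced after free. */
void free_record(char **pp) {
    free(*pp);
    *pp = NULL;
}
```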

2. Evaluate a recent case study of a major software security breach. Analyze the sources
of insecurity involved and propose a set of measures to prevent similar breaches in the
future.
Short summary of the breach (what happened)
In 2020 threat actors inserted a backdoor (commonly called SUNBURST / Solorigate) into
SolarWinds’ Orion product build, which was distributed as digitally-signed, legitimate
software updates to thousands of customers. After a dormant period the backdoor contacted
attacker infrastructure and enabled follow-on intrusion activities against high-value victims
including U.S. federal agencies and major private companies. The compromise went
undetected for months and affected a large supply chain of downstream customers.
Key sources of insecurity (root causes)
Below are the primary weakness areas that enabled the attack, with brief explanations.
1. Compromise of the software build and update pipeline
o The attacker modified the Orion build artifacts so malicious code was included
in official releases. When the vendor’s build/signing process is compromised,
all customers trusting those updates are exposed. (This was the central vector.)
2. Insufficient supply-chain visibility & vendor validation
o Many customers lacked inventory (SBOMs) and relied on a trusted vendor
binary without fine-grained verification of what was inside; that increased
blast radius when signed software was trojanized.
3. Over-privileged trust relationships and lateral access
o Orion is a network management product with privileged visibility and
connectivity; attackers abused those privileges after initial compromise to
move laterally into sensitive environments. Weak segmentation and excessive
service permissions made exploitation much easier.
4. Slow detection & inadequate monitoring
o The compromise persisted for months before discovery and full remediation.
Detection was hampered by stealthy attacker behavior (dormant period, living-
off-the-land techniques) and gaps in telemetry/monitoring.
5. Organizational failures (governance, communication, vendor security posture)
o Post-incident reviews highlighted weaknesses in vendor security governance,
control implementation around build systems, and disclosure/timely
information sharing. Regulatory scrutiny and audits followed.

Measures to Prevent Supply Chain Breaches


A. Secure the Build Pipeline
 Harden CI/CD: restrict access, use MFA, isolate build servers.
 Use ephemeral, auditable builds with multi-party approval.
 Protect code-signing keys in HSMs/KMS.
 Adopt reproducible builds with logs and manifests.
 Publish signed SBOMs (Software Bill of Materials).
 Verify artifacts through independent checks and automated scanning.
B. Harden Deployments & Limit Impact
 Apply least privilege and network segmentation.
 Follow zero-trust principles (short-lived credentials, explicit authorization).
 Use runtime protections: EDR/XDR, containers, sandboxing.
C. Detection, Monitoring & Response
 Collect and analyze centralized logs with SIEM.
 Conduct threat hunting and runtime sampling.
 Validate vendor updates in test environments before rollout.
 Maintain incident response playbooks (eviction, key rotation, remediation).
D. Governance & Vendor Risk Management
 Set strong security requirements in vendor contracts.
 Conduct audits and penetration tests.
 Continuously monitor third-party risks and require prompt vendor reporting.
 Ensure clear disclosure and communication policies for regulators/customers.
E. People & Processes
 Train developers/ops on secure development and supply chain risks.
 Require multi-discipline review (security, ops, legal) for critical releases.

3. You are tasked with improving the security of an existing software system. Outline a
plan that includes assessing current threats, implementing secure coding practices, and
applying memory protection techniques.

1. Quick intake & prioritization


 Inventory assets: list services, binaries, libraries, 3rd-party components, data stores, and
privileged interfaces.
 Identify crown jewels: which data/components would cause the most harm if
breached.
 Collect recent telemetry & incidents: crashes, CPU/memory anomalies, logs,
vulnerability reports.
 Apply immediate mitigations: enable firewall rules, rate limits, stage vendor patches
in canary, restrict access to admin consoles.
Goal: understand scope, reduce immediate exposure.

2. Assess current threats — threat model + gap analysis


 Threat modeling: map assets → actors → attack vectors → impact. Use STRIDE or
PASTA for each major component.
 Vulnerability scan & SCA: run SAST, SCA (software composition analysis) to find
known CVEs and risky libraries.
 Dynamic testing: run DAST against web endpoints; run fuzzers on
parsers/serializers.
 Memory-issue triage: run AddressSanitizer / Valgrind (dev/staging) to detect
heap/stack corruption and use-after-free.
 Gap analysis report: prioritize findings by risk (likelihood × impact) and business
priority.
Deliverable: prioritized risk register and remediation backlog.

3. Implement secure coding practices (Weeks 2–ongoing)


 Coding standards: adopt/extend secure-coding checklist (no strcpy/gets, validate
inputs, check return values, avoid undefined behavior).
 PR & code-review policy: require security checklist items in PR templates; require at
least one security-aware reviewer for critical modules.
 Static analysis in CI: integrate SAST tools and block merges for high-severity
findings.
 Dependency hygiene: pin versions, require SCA passing in pipeline, maintain SBOM
for releases.
 Training & champions: run secure-coding workshops and appoint security
champions in teams.
 Threat-model-informed tests: add unit/integration tests that cover threat-model
high-risk paths.
Concrete checks for reviewers: validate input size/format, prevent integer overflows,
validate ownership/lifetimes, avoid returning pointers to freed buffers, check error paths.
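The "prevent integer overflows" reviewer check can be made concrete. A minimal sketch (alloc_array is an invented helper): the product count * elem_size is guarded before it feeds malloc, because a wrapped product would allocate a buffer far smaller than the caller expects.

```c
#include <stdint.h>
#include <stdlib.h>

/* Guard the size computation before allocating: if count * elem_size
 * would exceed SIZE_MAX, the multiplication wraps and malloc() returns
 * a too-small buffer that subsequent writes then overflow. */
void *alloc_array(size_t count, size_t elem_size) {
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;               /* multiplication would wrap */
    return malloc(count * elem_size);
}
```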

4. Apply memory protection techniques


 Compiler hardening: enable -fstack-protector-strong, -D_FORTIFY_SOURCE=2,
-fPIE/-pie, and appropriate optimization flags such as -O2.
 ASLR & DEP/NX: ensure OS and build produce PIE/position-independent binaries
and use NX so data pages are non-executable.
 Stack canaries: enable and verify stack protector is present in builds.
 Control-Flow Integrity (CFI): enable CFI options available for your
compiler/toolchain.
 Hardened allocator / allocator tuning: use modern allocators (jemalloc/tcmalloc)
with security features or enable malloc-quarantine.
 Sanitizers in CI: run ASan/UBSan/MSan for debug/nightly builds; use GWP-ASan
or sampled ASan for production detection if possible.
 Memory-safe refactor: progressively replace high-risk modules in C/C++ with
memory-safe languages (Rust/Go/managed runtimes) where feasible.
 Sandboxing: run risky components in containers/sandboxed processes with limited
capabilities.

5. Testing & verification (continuous)


 Fuzzing: integrate fuzz targets (libFuzzer/AFL) for parsers and continuously run
them (CI/nightly).
 Regression corpus: store test inputs that triggered past crashes and include them in
CI fuzz corpus.
 Penetration testing: periodic pentests focusing on high-risk modules.
 Canary staging & gradual rollout: stage risky changes and monitor canaries before
full rollout.
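The regression-corpus practice above can be sketched as a tiny harness that replays past crashing inputs against the target function on every CI run. The `parse_record` parser and the corpus entries are hypothetical stand-ins, not a real fuzz target:

```python
def parse_record(data: bytes) -> dict:
    """Toy parser under test: 2-byte big-endian length prefix, then payload."""
    if len(data) < 2:
        raise ValueError("truncated header")
    n = int.from_bytes(data[:2], "big")
    if n > len(data) - 2:
        # The bounds check added after a past crash: reject, don't over-read.
        raise ValueError("declared length exceeds buffer")
    return {"len": n, "payload": data[2:2 + n]}

# Regression corpus: inputs that previously triggered crashes or hangs.
CORPUS = [b"", b"\xff\xff", b"\x00\x05abc"]

def replay_corpus() -> bool:
    """Every corpus entry must now either parse or fail with a clean ValueError."""
    for data in CORPUS:
        try:
            parse_record(data)
        except ValueError:
            pass  # controlled rejection is exactly the fix we want to preserve
    return True

print(replay_corpus())
```

In a libFuzzer/AFL setup the same corpus directory doubles as the fuzzer's seed set, so fixed crashes keep guiding exploration.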

6. CI/CD and automation


 Security gates: require passing SAST, SCA, sanitizer tests for merges to main.
 Automated builds & reproducibility: keep build logs, use deterministic builds
where possible.
 Artifact signing and key management: protect signing keys in HSM/KMS and
restrict access.
 Fail-fast policies: fail builds that introduce new critical memory/security issues.

7. Runtime detection & response


 Centralized logging & telemetry: collect logs, process creation, network egress, and
crash dumps to a SIEM.
 Crash reporting & core dump collection: enable secure collection of core files and
sanitizer reports.
 EDR / behavior analytics: detect process anomalies (unexpected outbound
connections, suspicious children).
 Incident playbooks: documented steps to isolate, rotate keys, revoke artifacts, and
patch/rollback. Run tabletop exercises.

8. Governance, metrics & roadmap


 Define KPIs: time-to-fix high-severity bugs, sanitizer failures per release, % code
covered by SAST, mean time to detect (MTTD).
 Prioritize backlog: focus on fixes that protect crown-jewel assets and reduce
exploitability (e.g., memory-corruption fixes, authentication issues).
 Continuous improvement: regular retrospectives, update training based on incidents
and findings.

UNIT – II
1. What is Requirements Engineering?
Requirements Engineering (RE) is the systematic process of eliciting, analyzing,
documenting, and managing the requirements of a software system to ensure it meets
stakeholder needs and security objectives.
2. List the tools involved in Requirements Engineering.
 Interviews & Questionnaires
 Use Case Diagrams / Scenarios
 Prototyping Tools
 Checklists
 Requirements Management Tools (e.g., DOORS, JIRA, RequisitePro)
3. What is the primary goal of the SQUARE process model in secure software
development?
The primary goal is to identify, prioritize, and document security requirements
systematically so that security is integrated into software from the early development stages.
4. Name two key activities involved in the requirements elicitation phase of secure
software development.
1. Stakeholder Interviews/Workshops – to gather requirements.
2. Threat Modeling / Risk Analysis – to identify potential security issues.
5. Give an example of how untrusted executable content can affect software security.
Example: A malicious JavaScript in a web page (XSS attack) can steal user cookies or
session IDs, leading to account hijacking.
6. What role does stack inspection play in mitigating the risk of buffer overflows?
Stack inspection checks the runtime call stack to verify that every caller in the chain has
the required permissions before a sensitive operation executes, thereby limiting what
untrusted or injected code can do even if a buffer overflow succeeds.
7. Why is it important to understand vulnerability trends when developing secure
software?
Understanding trends helps developers:
 Anticipate emerging threats.
 Apply timely patches and best practices.
 Design software that resists the most common attack patterns.
8. Write the concept of session hijacking.
Session hijacking is an attack where an attacker steals or predicts a valid session ID to
impersonate a legitimate user and gain unauthorized access to a system.
9. Mention one security design principle that helps in reducing vulnerabilities in
software.
Principle of Least Privilege – Every user or process should be given the minimum level of
access necessary to perform its function.
10. What is code injection, and why is it considered a significant security threat?
Code injection is an attack where malicious code is inserted into a program (e.g., SQL
injection, XSS).
It is significant because it can lead to data theft, unauthorized access, or full system
compromise.

PART B
1. Explain the SQUARE process model and its phases in securing software and explain
its strengths and limitations.
The SQUARE (Security Quality Requirements Engineering) process model is a structured
methodology for identifying, categorizing, and prioritizing security requirements in software
development.
It consists of nine phases:
1. Agree on Definitions – Establish common security terminology among stakeholders.
2. Identify Security Goals – Determine the overall security objectives.
3. Develop Artifacts – Collect supporting documents (use cases, system models).
4. Perform Risk Assessment – Identify threats, vulnerabilities, and risks.
5. Elicit Security Requirements – Gather requirements using interviews, checklists,
etc.
6. Categorize Requirements – Group into confidentiality, integrity, availability, etc.
7. Prioritize Requirements – Rank based on risk, impact, and feasibility.
8. Inspect Requirements – Review for clarity, consistency, completeness.
9. Document Security Requirements – Create final security requirements
specification.
Strengths:
 Provides structured and repeatable process.
 Early identification of security concerns.
 Encourages stakeholder involvement.
 Improves traceability of requirements.
Limitations:
 Time-consuming and resource-intensive.
 Requires skilled security experts.
 May not fit agile/fast-paced development well.
 Focuses more on requirements, less on design/implementation security.

2. Discuss the importance of requirements elicitation and prioritization in addressing
security concerns.
 Requirements elicitation is crucial because security concerns are often not visible
until explicitly asked and documented. Without elicitation, developers may ignore
critical security needs.
 Techniques used: stakeholder interviews, checklists, misuse cases, threat modeling.
 Importance:
o Captures hidden stakeholder expectations.
o Identifies security constraints like authentication, encryption, and compliance needs.
o Prevents costly security redesign later.
 Prioritization is equally important because:
o Not all requirements can be implemented due to cost/time constraints.
o Risk-based prioritization ensures critical vulnerabilities are addressed first.
o Helps balance security with usability and performance.
Together, elicitation and prioritization ensure that software meets real-world security
demands efficiently.

3. Explain how stack inspection prevents attacks like buffer overflows and compare it
with other security mechanisms.
Answer:
Stack inspection is a security mechanism used to check the call stack of a program at
runtime to verify whether the execution has sufficient permissions.
 In buffer overflow attacks, attackers inject malicious code into the stack. Stack
inspection helps by:
o Verifying the permissions of every function in the call chain.
o Preventing untrusted code from performing privileged actions, even if it gains execution.
o Ensuring permissions are inherited properly across calls.
Comparison with other mechanisms:
 Bounds checking → prevents overflow at the memory level by checking array
boundaries.
 DEP (Data Execution Prevention) → marks stack memory as non-executable.
 ASLR (Address Space Layout Randomization) → randomizes memory layout to make
exploitation unreliable.
 Stack canaries → detect overwritten return addresses before function exit.
Conclusion: Stack inspection focuses on runtime permission checks, whereas the other
mechanisms provide memory-level protections; used together, they strengthen security.
4. Discuss the role and types of policy specification languages in enforcing software
security policies.
Policy specification languages are formal notations used to define and enforce security
policies in software systems.
Role:
 Define what actions are allowed/denied.
 Enable automated enforcement of policies.
 Ensure compliance with standards (HIPAA, GDPR).
 Reduce ambiguity compared to natural language.
Types:
1. Access Control Policy Languages – e.g., XACML, specify user access rights.
2. Authorization Languages – e.g., Ponder, define role-based or obligation policies.
3. Information Flow Policy Languages – control how sensitive data moves (e.g., JIF).
4. Audit/Monitoring Languages – specify what events must be logged.
Thus, policy specification languages bridge the gap between security goals and enforceable
software rules.
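The access-control flavor of such languages can be illustrated with a toy, XACML-like rule evaluator. This is a sketch only: real XACML is XML-based with rich attribute matching and combining algorithms, and these rule tuples are assumptions:

```python
# Each rule: (subject_role, action, resource, effect)
POLICY = [
    ("doctor", "read",  "patient_record", "Permit"),
    ("doctor", "write", "patient_record", "Permit"),
    ("nurse",  "read",  "patient_record", "Permit"),
]

def evaluate(role: str, action: str, resource: str) -> str:
    """First-applicable combining: return the effect of the first matching rule,
    or 'Deny' if no rule applies (default-deny)."""
    for r_role, r_action, r_resource, effect in POLICY:
        if (r_role, r_action, r_resource) == (role, action, resource):
            return effect
    return "Deny"

print(evaluate("nurse", "write", "patient_record"))  # Deny: no rule grants this
```

The default-deny fallback is the key design choice: anything the policy does not explicitly permit is refused, which removes the ambiguity natural-language policies suffer from.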
5. Discuss different code injection attacks, their impact, and best practices for
mitigation.
Code injection attacks occur when untrusted data is interpreted as code.
Types & Impacts:
1. SQL Injection – attacker injects malicious SQL → leads to data theft/modification.
2. Cross-Site Scripting (XSS) – attacker injects JavaScript → session hijacking,
phishing.
3. Command Injection – attacker executes OS commands → full system compromise.
4. LDAP/NoSQL Injection – bypass authentication or query manipulation.
Mitigation Best Practices:
 Input validation and sanitization.
 Use of prepared statements/parameterized queries.
 Apply principle of least privilege for DB accounts.
 Use Content Security Policy (CSP) for XSS prevention.
 Regular code reviews and automated security testing.
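The prepared-statement practice above can be shown with Python's sqlite3 placeholders (illustrative schema; other drivers use different placeholder styles, but the principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def find_user(name: str):
    # The ? placeholder keeps user input as data: it is never parsed as SQL text.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] — the classic injection string is inert
```

Contrast with string concatenation (`"... WHERE name = '" + name + "'"`), where the same input would rewrite the query and return every row.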
6. Examine session hijacking methods, their impact on software security, and strategies
for prevention.
Session hijacking occurs when attackers steal or guess a valid session token to impersonate a
user.
Methods:
 Cookie theft (via XSS).
 Session fixation (forcing victim to use known session ID).
 Man-in-the-Middle (MITM) (sniffing traffic).
 Predictable Session IDs (weak tokens).
Impact:
 Unauthorized access to user accounts.
 Financial fraud in e-banking.
 Identity theft.
 Loss of customer trust.
Prevention Strategies:
 Use strong, random session tokens.
 Enforce HTTPS/TLS for all communications.
 Regenerate session IDs after login.
 Implement secure cookie attributes (HttpOnly, Secure, SameSite).
 Enable timeout and re-authentication.
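The token-related strategies above can be sketched with Python's `secrets` module. The raw `Set-Cookie` string is shown only for illustration; a real framework sets these attributes through its session API:

```python
import secrets

def new_session_id(nbytes: int = 32) -> str:
    """Cryptographically strong, unpredictable session token."""
    return secrets.token_urlsafe(nbytes)

def session_cookie(token: str) -> str:
    # Secure: HTTPS only; HttpOnly: no JavaScript access; SameSite: CSRF mitigation.
    return f"session={token}; Secure; HttpOnly; SameSite=Strict"

def rotate_on_login(old_token: str) -> str:
    """Regenerate the session ID after authentication to defeat session fixation."""
    return new_session_id()

tok = new_session_id()
print(session_cookie(tok))
```

Using `secrets` rather than `random` is the point: `random` is predictable and enables the "predictable session IDs" attack listed above.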

PART – C
1. For an online banking application, apply the SQUARE process model to identify and
address security requirements. Outline how each phase would contribute to securing the
application.
Applying SQUARE to online banking application:
1. Agree on Definitions – Define terms like authentication, fraud, phishing.
2. Identify Security Goals – Confidentiality of user data, integrity of transactions,
availability of services.
3. Develop Artifacts – Use cases (fund transfer, bill payments), threat models.
4. Risk Assessment – Identify risks like SQL injection, session hijacking, insider
threats.
5. Elicit Security Requirements – Multi-factor authentication, encryption of data,
secure APIs.
6. Categorize Requirements – Authentication, authorization, transaction security,
auditing.
7. Prioritize Requirements – Rank MFA and encryption as top priorities, less critical
features later.
8. Inspect Requirements – Review for completeness, e.g., whether password recovery
is secure.
9. Document Requirements – Finalized Security Requirement Specification (SRS)
for developers.
This ensures the online banking app is systematically secured against major cyber threats.
2. Design a comprehensive approach to protect a web application from SQL injection
and XSS attacks, including code review practices and security measures.
Approach:
1. SQL Injection Protection:
o Use prepared statements and stored procedures.
o Avoid dynamic SQL queries.
o Enforce input validation (whitelisting).
o Apply least-privilege principle on DB accounts.
o Regularly test with tools like SQLMap.
2. XSS Protection:
o Encode output (HTML entity encoding).
o Validate and sanitize user inputs.
o Use CSP (Content Security Policy).
o Implement HttpOnly and Secure cookies.
o Use frameworks with built-in XSS protection.
3. Code Review Practices:
o Peer reviews focusing on input handling.
o Security checklists for developers.
o Static analysis tools (e.g., SonarQube, Fortify).
o Threat modeling review sessions.
4. Other Security Measures:
o Apply Web Application Firewalls (WAF).
o Enable logging and intrusion detection.
o Conduct regular penetration testing.
Conclusion: Combining secure coding, proactive reviews, and runtime defenses provides
strong protection against SQLi and XSS.
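The output-encoding step can be illustrated with the standard library's `html.escape` (a minimal sketch; template engines such as Jinja2 apply this escaping automatically):

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-controlled text before embedding it in an HTML page."""
    return f"<p>{html.escape(user_input, quote=True)}</p>"

payload = '<script>alert("xss")</script>'
print(render_comment(payload))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The browser now renders the payload as visible text instead of executing it; pairing this with a CSP gives defense in depth if any encoding path is missed.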
