Study Notes - SA

1. Architecture in Various Contexts.

· Technical Context: Focuses on the technical role of software architecture, including quality attributes and industry practices.
· Project Life Cycle Context: Relates software architecture to various stages of the
software development lifecycle (SDLC).
· Business Context: Addresses the impact of software architecture on an organisation’s
business goals.
· Professional Context: Involves the role of the architect within a project or organisation,
requiring a mix of technical, leadership, and communication skills.

2. Four Key Software Development Processes.


· Waterfall
· Iterative
· Agile
· Model-driven Development

3. Architecture Activities.
a. Making a business case
b. Identifying architecturally significant requirements
c. Designing/selecting architecture
d. Documenting and communicating architecture
e. Analysing and evaluating architecture
f. Implementing/testing the system
g. Ensuring implementation adherence

4. What is Software Architecture?

· The software architecture of a system is the set of structures needed to reason about the system, comprising software elements, the relations among them, and the properties of both.

5. Architectural Structures

· Module Structures: Partition systems into implementation units (modules) that assign
specific responsibilities.
· Component-and-Connector (C&C) Structures: Focus on runtime entities (components)
and their interactions (connectors).
· Allocation Structures: Map software elements to environments like organizational,
developmental, and execution contexts.

6. Useful Structures
· Decomposition Structure: Shows hierarchical breakdown of modules into submodules.
· Concurrency Structure: Focuses on parallelism and resource contention in runtime.
· Deployment Structure: Maps software to hardware units; crucial for performance, security, and availability.

7. Architectural Patterns

· Layered Pattern: Modules are organized in layers, providing unidirectional uses relations.
· Client-Server Pattern: Components are clients and servers, with connectors as protocols and
messages.
· Shared-Data (Repository) Pattern: Components access and modify a shared repository, often
using SQL protocols.
· Multi-Tier Pattern: Distributes system components across different hardware and software
tiers.

8. Characteristics of Good Architecture

· Aligns with system goals and quality attribute requirements.


· Should maintain conceptual integrity, designed by a small, coherent group of
architects.
· Well-documented, with views addressing stakeholder concerns.
· Evaluated for its ability to meet quality attributes like performance, scalability, and
security.

9. Structural “Rules of Thumb”

1. Single Architect or Small Team for Conceptual Integrity


· Ensures unified vision and consistent decision-making.
· Avoids conflicting ideas that may compromise system integrity.

2. Strong Architect-Development Team Connection


· Keeps architectural design practical and feasible.
· Promotes continuous feedback and alignment with technical realities.

3. Balance Between Simplicity and Flexibility


· Avoids unnecessary complexity while addressing core problems.
· Ensures flexibility to adapt to future changes.
Module 2 – L1: Quality Attributes

1. Architecture and Requirements

· Functional Requirements: What the system must do (behaviour and runtime responses).
· Quality Attribute Requirements: Annotate functional requirements (e.g., speed, resilience,
learnability).
· Constraints: Design decisions with no degrees of freedom (already decided).

2. Functionality vs. Architecture

· Functionality: The ability of a system to perform its intended tasks.


· Relationship to Architecture: Functionality doesn't define architecture; multiple architectures
can meet the same functionality.

3. Quality Attributes Considerations


· Example QA annotations:
· Performance: Speed of execution.
· Availability: Frequency and repairability of failures.
· Usability: Learnability of functionality.
· Problems in QA Discussions:
1. Non-Testable Definitions: Terms like "modifiability" are vague.
2. Overlapping Concerns: Failures might affect multiple attributes (e.g., availability, security).
3. Varying Terminologies across communities.
· Solution: Use quality attribute scenarios to define QA more precisely.

4. Quality Attribute Scenarios Structure


1. Stimulus Source: The entity (human/system) generating the stimulus.
2. Stimulus: Condition that requires a system response.
3. Environment: System’s operational state when stimulus occurs.
4. Artifact: System components affected by stimulus.
5. Response: System activity in response to the stimulus.
6. Response Measure: Measurable metric (e.g., time, throughput).
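
These six parts can be captured as plain data so scenarios are collected and reviewed uniformly. A minimal sketch (illustrative only; the record name, fields, and the sample values are assumptions, not from the source):

```java
// Hypothetical model of the six-part quality attribute scenario as a record,
// with one example instance. Field names mirror the structure above.
public record QualityAttributeScenario(
        String stimulusSource,   // entity generating the stimulus
        String stimulus,         // condition requiring a system response
        String environment,      // operational state when the stimulus occurs
        String artifact,         // system components affected
        String response,         // system activity in response
        String responseMeasure   // measurable metric
) {
    public static void main(String[] args) {
        var s = new QualityAttributeScenario(
                "End user", "Initiates 1,000 transactions per minute",
                "Normal operation", "Order-processing service",
                "Transactions are processed", "Average latency under 2 seconds");
        System.out.println(s);
    }
}
```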

5. Tactics for Achieving Quality Attributes

· Tactics: Primitive design techniques used to achieve specific quality attributes.


· Purpose:
1. Simplify complex design patterns.
2. Build design fragments from "first principles."
3. Systematically choose among tactics for a quality attribute goal, balancing trade-offs.

6. Guiding Quality Design Decisions


Seven major categories:

1. Allocation of Responsibilities: Assigning tasks to modules/components.


2. Coordination Model: Choosing communication and coordination mechanisms (e.g.,
synchronous/asynchronous, stateless/stateful).
3. Data Model: Deciding data abstractions, storage, and manipulation.
4. Management of Resources: Identifying and managing resources (e.g., CPU, memory).
5. Mapping among Architectural Elements: Mapping modules to runtime elements, processors,
and data stores.
6. Binding Time Decisions: Choosing when decisions become fixed (e.g., build-time, runtime).
7. Choice of Technology: Selecting appropriate technologies for implementation (considering
compatibility, support, and side effects).
Module 2 – L2: Availability

1. What is Availability?

· Definition:
· Availability is the property of software being present and ready to perform its tasks when
needed.
· Encompasses reliability by adding the concept of recovery (repair).
· Core Objective:
· Minimize service outage time by mitigating faults.
· Ensure the system can endure faults, preventing them from causing failures or limiting their
impact.

2. Availability General Scenario


Components of a Quality Attribute Scenario:

· Source: Internal or external entities (people, hardware, software, environment).


· Stimulus: Type of fault (omission, crash, incorrect timing/response).
· Artifact: System elements affected (processors, communication channels, storage).
· Environment: Operational state during stimulus (normal, startup, shutdown, degraded).
· Response: Actions taken (prevent fault, detect, recover).
· Response Measure: Metrics to evaluate response (availability percentage,
detection/repair time).

· Sample Scenario:
· Example: A heartbeat monitor detects a nonresponsive server during normal operations,
informs the operator, and maintains operation without downtime.

3. Tactics for Availability


Detect Faults

· Ping/Echo: Check reachability and delay.


· Monitor: Health checks of system components.
· Heartbeat: Periodic status messages (see the sketch after this list).
· Timestamp: Validate event sequences.
· Sanity Checking: Validate operations/outputs.
· Condition Monitoring: Ensure nominal operation parameters.
· Voting: Verify consistency across replicated components.
· Exception Detection: Identify anomalies in execution flow.
· Self-test: Components test their own operations.
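
The Heartbeat tactic above can be made concrete in a few lines. A minimal sketch (illustrative, not from the source; the timeout handling and fault callback are assumptions):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the Heartbeat tactic: the monitored component stamps each periodic
// status message; a watcher declares a fault when no heartbeat arrives in time.
public class HeartbeatMonitor {
    private final AtomicLong lastBeat = new AtomicLong(System.currentTimeMillis());
    private final long timeoutMillis;

    public HeartbeatMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    /** Called by the monitored component on each periodic status message. */
    public void heartbeat() { lastBeat.set(System.currentTimeMillis()); }

    /** Checks at a fixed rate whether the heartbeat window was missed. */
    public void startWatching(ScheduledExecutorService scheduler, Runnable onFault) {
        scheduler.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() - lastBeat.get() > timeoutMillis) {
                onFault.run();   // e.g., inform the operator, trigger fail-over
            }
        }, timeoutMillis, timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

This matches the sample scenario above: the monitor detects a nonresponsive server and the callback informs the operator.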
Recover from Faults

· Active Redundancy (Hot Spare): Parallel processing with redundant spares.


· Passive Redundancy (Warm Spare): Active processing with standby spares updated
periodically.
· Spare (Cold Spare): Redundant spares are activated only upon fail-over.
· Exception Handling: Report or handle exceptions, possibly masking faults.
· Rollback: Revert to a known good state.
· Software Upgrade: In-service code updates.
· Retry: Attempt transient operations again (sketched after this list).
· Ignore Faulty Behavior: Disregard spurious messages.
· Degradation: Maintain critical functions while dropping non-critical ones.
· Reconfiguration: Reassign responsibilities to functioning resources.
· Shadow: Test failed/upgraded components in parallel before full reintegration.
· State Resynchronization: Synchronize state between active and standby components.
· Escalating Restart: Vary granularity of restarts to minimize service impact.
· Non-stop Forwarding: Split functionality to maintain data flow despite failures.
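
The Retry tactic marked above is one of the simplest recovery tactics to sketch. A hedged example (illustrative; the attempt count and fixed back-off are assumptions):

```java
import java.util.concurrent.Callable;

// Sketch of the Retry tactic: re-attempt an operation that may fail
// transiently, with a bounded attempt count and a fixed back-off.
// Assumes maxAttempts >= 1.
public class Retry {
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();          // success: return immediately
            } catch (Exception e) {
                last = e;                  // assumed transient; try again
                Thread.sleep(backoffMillis);
            }
        }
        throw last;                        // attempts exhausted: escalate the fault
    }
}
```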

Prevent Faults

· Removal From Service: Temporarily disable components to prevent faults.


· Transactions: Ensure atomicity, consistency, isolation, and durability (ACID) in state updates (see the sketch after this list).
· Predictive Model: Monitor health to preemptively correct potential faults.
· Exception Prevention: Use mechanisms to prevent exceptions (smart pointers,
wrappers).
· Increase Competence Set: Design components to handle more fault cases.
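
The Transactions tactic flagged above maps directly onto database transaction APIs: related state updates either all commit or all roll back, so a fault mid-update cannot leave the state inconsistent. A minimal JDBC sketch (illustrative; the table and column names are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the Transactions tactic: two updates form one atomic unit of work.
public class TransferService {
    public void transfer(Connection conn, long fromId, long toId, long amount) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);                 // start an atomic unit of work
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setLong(1, amount);  debit.setLong(2, fromId);  debit.executeUpdate();
            credit.setLong(1, amount); credit.setLong(2, toId);   credit.executeUpdate();
            conn.commit();                         // both updates become durable together
        } catch (SQLException e) {
            conn.rollback();                       // a fault leaves no partial update
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```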

4. Design Checklist for Availability

· Allocation of Responsibilities:
· Identify and assign responsibilities that need high availability.
· Ensure logging, fault notification, disabling fault sources, temporary unavailability, fault
fixing/masking, and degraded operation capabilities.

· Coordination Model:
· Ensure coordination mechanisms can detect and handle faults.
· Consider properties like guaranteed delivery, operation under degraded communication,
and support for artifact replacement.
· Assess coordination behavior during different operational states (startup, shutdown,
overloaded).

· Data Model:
· Identify data abstractions critical for availability.
· Ensure operations on these abstractions can be disabled, made temporarily unavailable,
or fixed/masked during faults.
· Example: Cache write requests if a server is unavailable.
· Mapping Among Architectural Elements:
· Identify artifacts that may produce faults.
· Ensure flexible mapping/re-mapping for fault recovery, such as:
· Reassigning processes to different processors.
· Activating replacement processors or storage.
· Reinstalling systems based on delivery units.
· Managing redundant component mappings.

· Resource Management:
· Identify critical resources needed during faults.
· Ensure sufficient remaining resources to handle fault responses (logging, notifications,
disabling sources).
· Example: Large input queues to buffer messages during server failures.

· Binding Time Decisions:


· Determine when architectural elements are bound (build-time vs. runtime).
· Ensure availability strategies cover faults introduced by all possible bindings.
· Assess the availability characteristics of the binding mechanism itself.

· Choice of Technology:
· Select technologies that aid in fault detection, recovery, and component reintegration.
· Evaluate availability characteristics of chosen technologies (fault recovery capabilities,
potential faults introduced).

5. Summary

· Availability ensures the system remains operational and accessible when faults occur.
· Tactics are divided into:
· Detect Faults: Identify issues promptly.
· Recover from Faults: Restore functionality or minimize impact.
· Prevent Faults: Reduce the likelihood or impact of faults.
· Design Considerations:
· Allocate responsibilities effectively.
· Choose appropriate coordination models.
· Design robust data models.
· Ensure flexible mappings and resource management.
· Make informed binding time and technology choices.
Module 2 – L3: Performance

1. Understanding Performance

· Definition: Performance refers to a software system's ability to meet timing requirements in response to events (interrupts, user requests, etc.).
· Key Concepts:
· Latency: Time taken to respond to an event.
· Throughput: Number of events processed in a given time.

2. General Performance Scenario

· Source: Internal vs. external to the system.


· Stimulus: Types of events (periodic, sporadic).
· Artifact: System components involved.
· Environment: Operational modes (normal, peak load).
· Response Measures: Latency, throughput, miss rate.

3. Performance Tactics

· Control Resource Demand:


· Manage sampling rate, limit event response, prioritize events (sketched after this list), reduce overhead.
· Manage Resources:
· Increase resources (CPU, memory), enhance concurrency, maintain data copies.
· Design Checklist for Performance:
· Responsibilities: Identify critical system tasks and potential bottlenecks.
· Coordination Model: Ensure effective communication and concurrency management.
· Data Model: Analyze data handling and performance impacts.
· Resource Management: Monitor critical resources and performance metrics.
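
The "prioritize events" tactic referenced above can be sketched with a priority queue: under load, the most urgent events are served first and low-priority work absorbs the latency. Illustrative only; the Event shape and priority convention are assumptions:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of event prioritization: a thread-safe queue ordered by priority.
public class PrioritizedEventQueue {
    record Event(int priority, String payload) {}   // lower number = more urgent

    private final PriorityBlockingQueue<Event> queue =
            new PriorityBlockingQueue<>(64, Comparator.comparingInt(Event::priority));

    public void submit(Event e) { queue.put(e); }

    public void serveForever() throws InterruptedException {
        while (true) {
            Event next = queue.take();              // always the most urgent pending event
            System.out.println("Handling: " + next.payload());
        }
    }
}
```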

4. Design Checklists

· Responsibilities: Identify heavy-load tasks and manage threads, queues, and resources.
· Coordination Model: Choose suitable communication mechanisms (thread-safe,
asynchronous).
· Data Model: Consider multiple data copies and processing efficiencies.
· Architectural Elements: Optimize component locations and processing assignments.
Module 2 – L4: Usability
1. What is Usability?

· Usability refers to how easy it is for users to accomplish their tasks and the support the system
provides.
· Key areas of usability include:
· Learning system features
· Efficient usage
· Minimizing errors
· Adapting to user needs
· Increasing user confidence and satisfaction

2. Usability General Scenario

· Components:
· Source: End user
· Stimulus: User actions (learning, minimizing errors, configuring the system)
· Environment: Runtime or configuration
· Artifacts: System components involved
· Response: System should provide needed features or anticipate needs
· Response Measures: Task time, errors, user satisfaction, etc.

3. Usability Tactics

· Support User Initiative:


· Cancel: Listen for cancel requests and terminate commands.
· Pause/Resume: Temporarily free resources for other tasks.
· Undo: Restore earlier system states (sketched after this list).
· Aggregate: Group lower-level objects for user operations.
· Support System Initiative:
· Maintain Task Model: Understand user context to provide assistance.
· Maintain User Model: Represent user knowledge and expected behavior.
· Maintain System Model: System keeps track of its own behavior.
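
The Undo tactic marked above is classically realized with the Command pattern: each user operation knows how to reverse itself, and executed commands are stacked. A minimal sketch (illustrative; all names are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the Undo tactic: commands carry their own inverse; a history
// stack lets the most recent operation be reversed on request.
public class UndoManager {
    public interface Command {
        void execute();
        void undo();                     // restores the state from before execute()
    }

    private final Deque<Command> history = new ArrayDeque<>();

    public void perform(Command c) {
        c.execute();
        history.push(c);                 // remember it for a later undo
    }

    public void undoLast() {
        if (!history.isEmpty()) {
            history.pop().undo();        // restore the earlier system state
        }
    }
}
```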

4. Design Checklist for Usability

· Responsibilities: Ensure users learn, adapt, and recover from errors effectively.
· Coordination Model: Assess how timely and consistent system responses aid usability.
· Data Model: Ensure data abstractions support user operations like undo/cancel.
· Mapping Among Architectural Elements: Identify how visible architecture affects usability.
· Resource Management: Ensure resource limits don’t hinder task completion.
· Binding Time: Allow user control over configuration decisions.
· Choice of Technology: Choose technologies that support usability features.

5. Summary of Usability

· Usability architecture allows user initiative (e.g., canceling commands) and requires models of
the user and system to predict responses.

Additional Quality Attributes

· Variability: Support for producing system variants.


· Portability: Ease of adapting software across platforms.
· Scalability: Ability to add resources (horizontal/vertical).
· Monitorability: Ability to monitor system execution.
· Safety: Preventing harmful states and recovering from errors.
Module 2 – L5: Security
1. Understanding Security

· Definition: Ability to protect data from unauthorized access while ensuring authorized access.
· Attacks: Unauthorized attempts to access, modify, or deny services.

2. CIA Triad

· Confidentiality: Protection from unauthorized access (e.g., income tax returns).


· Integrity: Prevention of unauthorized data manipulation (e.g., unchanged grades).
· Availability: Ensuring system access for legitimate users (e.g., protection from DoS attacks).

3. Supporting Characteristics

· Authentication: Verifying identities.


· Non-repudiation: Ensuring senders/recipients cannot deny actions.
· Authorization: Granting privileges to users.

4. Security General Scenario

· Source: Attackers can be internal or external.


· Stimulus: Unauthorized access attempts or service disruptions.
· Response: Protect data, identify actors, and log activities.

5. Security Tactics

· Detect: Monitor traffic patterns, verify message integrity (sketched after this list), and check for anomalies.
· Resist: Identify, authenticate, and authorize actors; limit access and encrypt data.
· React: Revoke access, lock resources, and notify personnel.
· Recover: Keep audit trails and restore systems post-attack.
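
The message-integrity check flagged under Detect can be sketched with an HMAC over each message, compared in constant time against the tag the sender attached. Illustrative only; key distribution and message encoding are out of scope here:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of integrity verification: an HMAC-SHA256 tag over the message,
// computed with a shared secret and verified without timing leaks.
public class MessageIntegrity {
    public static byte[] tag(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static boolean verify(byte[] key, String message, byte[] receivedTag) throws Exception {
        // Constant-time comparison avoids leaking where the tags diverge.
        return MessageDigest.isEqual(tag(key, message), receivedTag);
    }
}
```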

6. Design Checklist for Security

· Allocation of Responsibilities: Assign roles for identifying, authenticating, and authorizing actors.
· Coordination Model: Ensure secure communication with other systems.
· Data Model: Protect sensitive data, enforce access rights, and maintain logs.
· Resource Management: Monitor access and ensure adequate resources.
· Binding Time: Manage late-bound components with security measures.
· Choice of Technology: Use appropriate technologies for authentication and encryption.
Module 2 – L6: Modifiability

Definition: Modifiability refers to the ease and cost of making changes to a system.

· Key Questions:
1. What can change?
2. What is the likelihood of change?
3. When and who will make the changes?

General Scenario

· Sources of Change: End users, developers, system administrators.


· Stimuli for Change: Requests to add, delete, modify functionality, or change quality attributes.
· Artifacts Affected: Code, data, interfaces, components, etc.
· Responses to Change: Modify, test, deploy; measured by cost, complexity, effort, etc.

Tactics for Modifiability


1. Reduce Size of a Module:
· Split large modules into smaller ones to lower future modification costs.
2. Increase Cohesion:
· Group related responsibilities within the same module; separate unrelated ones.
3. Reduce Coupling:
· Use encapsulation to create clear interfaces.
· Employ intermediaries to break dependencies (see the sketch after this list).
· Restrict dependencies and abstract common services.
4. Defer Binding:
· Postpone decision-making in the lifecycle to allow for flexibility.
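
The intermediary idea under Reduce Coupling can be sketched as an interface placed between caller and implementation, so the concrete dependency can change without touching callers. Illustrative only; all names are hypothetical:

```java
// Sketch of "use an intermediary": callers depend only on a notification
// interface, so the concrete channel (email, SMS, ...) can be swapped freely.
public class NotificationExample {
    interface Notifier {                       // the encapsulating interface
        void notify(String user, String message);
    }

    static class EmailNotifier implements Notifier {
        public void notify(String user, String message) {
            System.out.println("Email to " + user + ": " + message);
        }
    }

    static class OrderService {                // depends on the abstraction only
        private final Notifier notifier;
        OrderService(Notifier notifier) { this.notifier = notifier; }
        void shipOrder(String user) { notifier.notify(user, "Your order shipped."); }
    }

    public static void main(String[] args) {
        new OrderService(new EmailNotifier()).shipOrder("alice");  // swap implementations freely
    }
}
```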

Design Checklist for Modifiability

· Allocation of Responsibilities:
· Identify likely changes and impact on responsibilities.
· Group related changes within the same module.
· Coordination Model:
· Analyze which functionalities can change at runtime and how.
· Use models like publish/subscribe to reduce coupling.
· Data Model:
· Assess potential changes to data abstractions and their operations.
· Ensure proper privileges for modifications.
· Mapping Among Architectural Elements:
· Evaluate if the mapping of functionality to computational elements can change.
· Resource Management:
· Analyze impacts of changes on resource usage and management.
· Binding Time:
· Determine optimal points for changes and choose appropriate defer-binding mechanisms.
· Choice of Technology:
· Select technologies that facilitate modifications and are easy to update.
Module 2 – L7: Interoperability

Interoperability Overview

· Definition: Interoperability is the degree to which systems can exchange meaningful information.
· Quality Attribute: Not binary; it has varying levels.

General Scenario

· Actors: Systems that wish to interoperate.


· Stimulus: Request for information exchange.
· Response: Accept, reject, or log requests.
· Metrics: Percentage of correct exchanges and rejections.

Concrete Scenario Example


· A vehicle information system shares location data with a traffic monitoring system, achieving a
99.9% success rate in data inclusion.

Goals of Interoperability Tactics

· Discovery: Systems must locate each other.


· Meaningful Exchange: Ensure information is exchanged semantically correctly.

Tactics for Interoperability


1. Locate: Discover services via directories.
2. Manage Interfaces:
· Orchestrate: Coordinate service invocation.
· Tailor Interfaces: Modify interface capabilities (e.g., translation).
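
Tailoring an interface often amounts to an adapter that translates between the two systems' data formats, echoing the vehicle/traffic example above. A minimal sketch (illustrative; both formats and all field names are assumptions):

```java
// Sketch of the "tailor interfaces" tactic: an adapter converts the producer's
// format into the one the consumer expects, enabling meaningful exchange.
public class LocationAdapter {
    record VehicleFix(double lat, double lon, long epochMillis) {}      // producer's format
    record TrafficReport(String coordinates, String isoTimestamp) {}    // consumer's format

    static TrafficReport adapt(VehicleFix fix) {
        return new TrafficReport(
                fix.lat() + "," + fix.lon(),
                java.time.Instant.ofEpochMilli(fix.epochMillis()).toString());
    }

    public static void main(String[] args) {
        System.out.println(adapt(new VehicleFix(12.97, 77.59, System.currentTimeMillis())));
    }
}
```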

Design Checklist for Interoperability


1. Allocation of Responsibilities:
· Identify which system responsibilities need to interoperate.
· Ensure the ability to accept, reject, and log requests.
2. Coordination Model:
· Assess performance needs (traffic volume, timeliness, etc.).
3. Data Model:
· Define syntax and semantics of data abstractions for exchange.
4. Mapping Among Architectural Elements:
· Ensure components are hosted on suitable processors for external communication.
5. Resource Management:
· Prevent resource exhaustion from interoperability requests.
· Maintain acceptable resource load.
6. Binding Time:
· Define how systems become aware of each other and manage bindings.
7. Choice of Technology:
· Evaluate the visibility of technology at interfaces and its impact on interoperability.
Module 2 – L8: Testability

Testability Overview

· Definition: Testability is the ease with which software can demonstrate faults through
execution-based testing. It focuses on the likelihood of detecting faults during testing.
· Key Considerations:
· Control inputs and manipulate internal states.
· Observe outputs and internal states.

Testability General Scenario

· Actors: Unit testers, integration testers, system testers, end users.


· Stimulus: Execution of tests after completing coding increments or system delivery.
· Response: Capture results, log activities, monitor system states.
· Measures: Effort to find faults, time for tests, state coverage percentage.

Goals of Testability Tactics


1. Facilitate easier testing post-development.
2. Reduce testing costs.

Categories of Tactics:
· Control & Observe: Enhance controllability and observability.
· Limit Complexity: Simplify design to ease testing.

Key Tactics for Testability


Control and Observe System State:

· Specialized Interfaces: Capture component variable values.


· Record/Playback: Use past data as input for testing.
· Localize State Storage: Store test states in one place.
· Abstract Data Sources: Simplify data interface management.
· Sandbox: Isolate systems for safe testing.
· Executable Assertions: Code assertions to check states.
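
Executable assertions, the last tactic above, are invariant checks compiled into the code and enabled during testing, so a fault surfaces the moment the state first becomes invalid. A minimal sketch (illustrative; the inventory invariant is hypothetical):

```java
// Sketch of executable assertions: state invariants checked at key points.
// Run with `java -ea` to enable assertion checking during testing.
public class InventoryCount {
    private int count;

    public void add(int n) {
        count += n;
        assert count >= 0 : "invariant violated after add: " + count;
    }

    public void remove(int n) {
        count -= n;
        assert count >= 0 : "invariant violated: negative inventory " + count;
    }
}
```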

Limit Complexity:

· Avoid cyclic dependencies.


· Isolate dependencies to reduce interconnections.
· Remove sources of non-determinism.
Design Checklist for Testability
1. Allocation of Responsibilities: Identify critical responsibilities for testing.
2. Coordination Model: Support for test suite execution and result capture.
3. Data Model: Ensure major data abstractions can be captured and manipulated.
4. Mapping Among Architectural Elements: Test mappings of processes to components.
5. Resource Management: Ensure adequate resources for testing; maintain a representative
environment.
6. Binding Time: Test components that bind later than compile time; capture late binding failures.
7. Choice of Technology: Assess technologies for supporting testability, such as regression
testing and fault injection.
Module 3

Key Topics

Architecture Requirements and Design

· Understanding ASRs (Architecturally Significant Requirements)


· Methods to gather ASRs:
· From documents
· Stakeholder interviews
· Business goals
· Capturing ASRs using a Utility Tree

Designing an Architecture

· Design Strategy: Frameworks and methodologies for architectural design.


· Attribute-Driven Design Method (ADD): Steps and application in architecture.

Documenting Software Architecture

· Purpose of documentation: Audience and usability.


· Different types of views and notations.
· Documenting behavior and quality attributes.
· Approaches for Agile environments: Fast-paced, iterative documentation.

Agility in Architecture

· Balancing architecture and Agile methods:


· Assessing upfront architecture vs. iterative development.
· Importance of adaptability and responsiveness.
· Guidelines for the Agile Architect:
· Commitment from stakeholders.
· Incremental growth and iterative development.

Agile Principles

· Customer satisfaction through continuous delivery.


· Embracing change, even late in development.
· Regular reflection and adjustment by the team.

Evaluation of Architecture in Agile Projects

· Techniques like ATAM (Architecture Tradeoff Analysis Method) for evaluating architecture.
· Importance of addressing stakeholders' concerns.

Examples and Case Studies

· WebArrow system: A dual-mode approach to architecture.


· Use of "spikes" for experimenting with architectural trade-offs.

Guidelines for Implementation

· Upfront architecture design pays off in large, complex systems.


· For smaller projects, focus on key patterns without extensive documentation.

Summary Points

· The Agile Manifesto prioritizes collaboration, customer focus, and rapid delivery.
· Successful large-scale projects require a blend of Agile practices and sound architectural
principles.
· Understanding when and how much architecture to implement is crucial for project success.
MODULE 4: VIEWS

1. Architecture Views

· Definition: Representations of architectural elements, created for and understood by stakeholders.

2. Documenting Architecture

· Purpose: Document relevant views and general documentation applicable across views.
· Challenge: Architecture documents often don’t meet the concerns of all stakeholders (end-
users, engineers, developers, project managers).

3. 4+1 Model by Philippe Kruchten

· Components:
· Logical View: Object model; focuses on functional requirements. Notation: Booch.
· Process View: Captures concurrency and synchronization aspects; focuses on non-functional requirements.
· Development View: Static organization of the software in the development environment; focuses on software module organization.
· Physical View: Maps software onto hardware; considers reliability and performance.
· Scenarios (the "+1"): Selected use cases that tie the four views together and validate the architecture.

4. Interconnections Between Views


· Views are interconnected, transitioning from Logical to Process, then to Development and
Physical views.

5. Iterative Process
· Not all architectures require all views.
· Use a scenario-driven approach to develop the system.
· Maintain architectural integrity through software design guidelines.

6. Annotations

· Industry Use: Successfully implemented in fields like Air Traffic Control and Telecom.
· Challenge: Lack of integration tools leads to inconsistency during maintenance.
Layered Architecture Overview

· Definition: A design pattern that separates an application into distinct layers, each with specific
responsibilities.

Typical Layers
1. Presentation Layer:
· Handles user interactions.
· Techniques:
· Caching: Client-side vs. Server-side.
· AJAX: For asynchronous communication.
· Responsive Design: Adapts layout for different devices.
2. Business Layer:
· Contains business logic and rules.
· Techniques:
· Facade Pattern: Simplifies the interface for complex subsystems (see the sketch after this list).
· Session Management: Maintains user session state.
· Workflow Engines: Manages business processes.
3. Data Access Layer:
· Interacts with the database.
· Techniques:
· Connection Pooling: For efficient database access.
· Object-Relational Mapping (ORM): Simplifies data handling (e.g., Hibernate).
· Transactions: Ensures data integrity.
4. Services Layer:
· Exposes functionality to external applications.
· Handles communication issues and requests.
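
A minimal sketch of the Facade Pattern referenced in the Business Layer above (illustrative; all subsystem names are hypothetical): one entry point hides the coordination of several business-layer subsystems behind a single simple call, in the spirit of the hotel booking examples used later in these notes.

```java
// Sketch of a business-layer facade: the presentation layer calls book();
// the availability, pricing, and persistence subsystems stay hidden.
public class BookingFacade {
    static class AvailabilityChecker { boolean isAvailable(String room) { return true; } }
    static class PricingEngine      { double quote(String room)        { return 120.0; } }
    static class ReservationStore   { void save(String room, double p) { /* persist */ } }

    private final AvailabilityChecker availability = new AvailabilityChecker();
    private final PricingEngine pricing = new PricingEngine();
    private final ReservationStore store = new ReservationStore();

    /** The one method the presentation layer calls. */
    public boolean book(String room) {
        if (!availability.isAvailable(room)) return false;
        store.save(room, pricing.quote(room));
        return true;
    }
}
```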

Benefits of Layered Architecture

· Separation of Concerns: Each layer handles specific tasks.


· Loose Coupling: Layers can be modified independently.
· Reusability: Common functionality can be reused across layers.

Techniques for Each Layer

· Presentation Layer:
· Use AJAX for dynamic content loading.
· Implement client-side caching for user preferences.
· Business Layer:
· Use application façade for internal module hiding.
· Implement session management (cookies, server-side sessions).
· Data Layer:
· Use stored procedures for performance.
· Implement parameterized SQL queries to prevent SQL injection.
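
The parameterized-query technique above looks like this in JDBC (illustrative; the table and column names are assumptions): user input is bound as a parameter rather than concatenated into the SQL string, so it cannot alter the statement's structure.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of a parameterized query in the data layer: the "?" placeholder is
// filled by the driver, which escapes the value safely.
public class UserLookup {
    public boolean exists(Connection conn, String username) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = ?";   // placeholder, not concatenation
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, username);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```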

Additional Concepts

· Aspect-Oriented Programming: For cross-cutting concerns like logging, security, and auditing.
· SSL & Security: Ensure secure data transmission between client and server.
· Session Management: Maintain user state across multiple requests.

Exam Preparation Tips


· Understand key definitions and concepts.
· Familiarize yourself with real-world examples (e.g., hotel booking systems, logistics systems).
· Practice identifying components of different layers in various systems.

Exercises
1. Identify components of a Departure Control System (DCS) in each layer.
2. Propose performance improvements for a slow-loading citizen registration screen.
3. Define the service layer components for a hotel reservation system.
4. Design a façade for a logistics system to abstract module interactions.

1. Evaluation Factors

· Forms of Evaluation:
· By the Designer: Ongoing assessments during key design decisions.
· Peer Review: Collaborative evaluations at any design phase.
· Outsider Analysis: Independent evaluations by external experts.

2. Evaluation by the Designer

· Key decisions should be evaluated based on:


· Importance of the Decision: Critical choices require thorough analysis.
· Number of Alternatives: Narrow down options quickly.
· Good Enough vs. Perfect: Make timely decisions without over-analysis.

3. Peer Review Process

· Steps:
1. Reviewers define quality attribute scenarios.
2. Architect presents the architecture for understanding.
3. Scenarios are evaluated against the architecture.
4. Capture potential problems and decide on their acceptance.

4. Outsider Analysis

· Conducted by experts who:


· Provide unbiased feedback.
· Evaluate complete architectures.
· Ensure that evaluations consider the broader business context.

5. The Architecture Tradeoff Analysis Method (ATAM)

· Overview: Comprehensive method for evaluating software architectures, focusing on quality


attributes and architectural decisions.
· Participants:
· Evaluation Team: Competent outsiders.
· Project Decision Makers: Stakeholders with authority.
· Outputs:
· Quality attribute scenarios.
· Risks and non-risks associated with architectural decisions.
· Sensitivity and tradeoff points.

6. Lightweight Architectural Evaluation

· Purpose: Quick evaluation for less risky projects.


· Process:
· Shorter duration (half-day to one day).
· Internal team participation.
· Steps include scenario generation and analysis of architectural approaches.
Module 5: Ensuring Conformance to Architecture

1. Code Drift

· Definition: Code may diverge from the intended architecture.


· Examples of Drift:
· Violating layer discipline (e.g., accessing non-adjacent layers).
· Direct database access bypassing the data access layer.
· Inefficient notification methods (e.g., notifying modules individually instead of using a publish-
subscribe model).
· Inconsistent logging mechanisms.

2. Techniques to Maintain Consistency

· Embed Design Concepts:


· Use an architecturally evident coding style.
· Clearly indicate in code which layer or component the code belongs to (e.g., Publisher, Subscriber); see the sketch after this list.
· Use Frameworks:
· Examples:
· Spring: Supports MVC architecture.
· Hibernate: Simplifies database operations.
· AUTOSAR: Standard for automotive software.
· Spring MVC Framework:
· Components: Model (data), View (UI), Controller (request handling).
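
A sketch of the architecturally evident coding style referenced above (illustrative; all names are hypothetical): class names and comments state the publish-subscribe role each element plays, so code and architecture stay visibly aligned and the drift described in section 1 is easier to spot in review.

```java
// Sketch of an architecturally evident coding style: names declare the
// architectural role directly, and notification goes through the publisher
// rather than modules notifying each other individually.
interface OrderEventSubscriber {                         // role: Subscriber
    void onOrderEvent(String event);
}

class OrderEventPublisher {                              // role: Publisher
    private final java.util.List<OrderEventSubscriber> subscribers = new java.util.ArrayList<>();
    void subscribe(OrderEventSubscriber s) { subscribers.add(s); }
    void publish(String event) {                         // one call notifies all subscribers
        subscribers.forEach(s -> s.onOrderEvent(event));
    }
}
```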

3. Code Templates

· Provide structure to ensure best practices (e.g., for fault tolerance).


· Example template structure:
· Get event
· Case (Event type)
· Process handling for primary and backup processes.
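
The template structure above can be fleshed out as a skeleton class that fixes the fault-tolerance structure (event dispatch plus primary/backup handling) while leaving the project-specific sections abstract. A hedged sketch (illustrative; the event types and queue are assumptions):

```java
// Sketch of a code template: the loop and dispatch are fixed for every team;
// only the marked handler methods are filled in per component.
public abstract class FaultTolerantWorker {
    enum EventType { PRIMARY_WORK, BACKUP_SYNC, SHUTDOWN }
    record Event(EventType type, String payload) {}

    public void run(java.util.concurrent.BlockingQueue<Event> queue) throws InterruptedException {
        while (true) {
            Event event = queue.take();          // "Get event"
            switch (event.type()) {              // "Case (Event type)"
                case PRIMARY_WORK -> handleAsPrimary(event);   // project-specific section
                case BACKUP_SYNC  -> handleAsBackup(event);    // project-specific section
                case SHUTDOWN     -> { return; }
            }
        }
    }

    protected abstract void handleAsPrimary(Event event);
    protected abstract void handleAsBackup(Event event);
}
```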

4. Update Architecture Documentation

· Ensure code changes are reflected in architectural documents.


· Mark outdated sections to maintain document credibility.
· Synchronize architecture documentation with code at release time.

5. Additional Techniques

· Educate new team members on the architecture.


· Conduct code reviews to ensure adherence.
· Organize code into folders based on architectural aspects (e.g., layers, services).

Module 5: Architecture & Testing

Importance of Architecture in Testing

· Prioritizing Test Cases:


· Architecture helps prioritize test cases based on Architecturally Significant Requirements (ASRs).
· Integration Test Plan Creation:
· Architecture defines module interactions and dependencies, guiding the creation of an effective
integration test plan.

2. Work Products for Test Case Prioritization

· Utility Tree:
· Use a utility tree to identify high-priority scenarios that have significant business value and
architectural impact. These scenarios translate into high-priority test cases.

3. Creation of Integration Test Plan

· Module Interactions:
· Architecture outlines which modules interact, aiding in identifying integration test cases.
· Dependency Identification:
· Understanding module dependencies helps in determining the modules required for integration
testing.

4. Designing for Testability

· Architecture must support key testability requirements:


· Switch Data Sources: Ability to toggle between test and production data.
· Rollback Changes: Mechanism to revert changes made by test cases to restore system state.
· Component Replacement: Capability to use simulators for external systems (e.g., payment
gateways, sensors).
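
The component-replacement requirement above usually reduces to depending on an interface, so a simulator can stand in for the real external system during tests. A minimal sketch (illustrative; the payment gateway and its scripted behavior are hypothetical):

```java
// Sketch of component replacement for testability: the service takes the
// gateway by interface, so tests inject a simulator instead of the real one.
public class PaymentExample {
    interface PaymentGateway {
        boolean charge(String account, long amountCents);
    }

    static class SimulatedGateway implements PaymentGateway {   // used under test
        public boolean charge(String account, long amountCents) {
            return !account.startsWith("FAIL");   // scripted behavior for test cases
        }
    }

    static class CheckoutService {
        private final PaymentGateway gateway;     // real or simulated, injected
        CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean checkout(String account, long cents) { return gateway.charge(account, cents); }
    }

    public static void main(String[] args) {
        System.out.println(new CheckoutService(new SimulatedGateway()).checkout("ACCT-1", 500));
    }
}
```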

5. Experience Sharing

· Consider how your architecture facilitated testing activities in your projects.


Module 5: Architecture Reconstruction

1. Purpose of Architecture Reconstruction

· Understanding Existing Systems:


· For systems lacking documentation.
· Technology Migration:
· Transitioning systems from older technologies (e.g., mainframe to web).
· Identifying Reusable Components:
· Such as logging or security components.

2. Phases of Architecture Reconstruction

1. Identify Components & Relationships:
· Extract information from:
· Source code
· Execution traces
· Build scripts
· Information includes classes, file usage, caller-callee relationships, and global data access.
2. Aggregate Components:
· Group extracted components into abstract components.
3. Analyze Architecture:
· Use tools to visualize and analyze the reconstructed architecture.

3. Tools for Architecture Reconstruction

· Examples:
· ARMIN (Architecture Reconstruction and Mining)
· Dali
· Lattix
· SonarQube
· Structure101

4. View Fusion and Analysis

· View Fusion:
· Combine static (source code) and dynamic (execution trace) views for a comprehensive
architecture overview.
· Architecture Analysis:
· Validate the correctness of architectural elements against defined constraints (e.g., layer
interactions).
5. Case Study: "Vanish" System

· Tool Used: ARMIN


· Outcome: Revealed non-strict layering in architecture after aggregation.

6. Experience Sharing

· Reflect on personal involvement in architecture reconstruction:


· Techniques and tools used.
OLD QUESTION PAPER

Q1: Inventory Management System


a. Explain the benefits of layered architecture with respect to the above case study. [1 Mark]

1. Separation of Concerns:
· Each layer (Presentation, Business Logic, Data Access) has distinct responsibilities, simplifying maintenance and development.
2. Reusability:
· Components can be reused across different applications, reducing redundancy and development time.
3. Ease of Testing:
· Isolated layers allow for easier unit testing, ensuring reliability and quick identification of issues.
4. Scalability:
· New features can be added to a specific layer without impacting others, facilitating growth as business needs change.

b. How would you go about implementing layered architecture in this case? Explain with a
diagram. [3 Marks]

· Diagram: Create a visual representation of the layered architecture with the following layers:
· Presentation Layer: User interface components for inventory management and order
processing.
· Business Logic Layer: Implements FIFO inventory rules and manages orders.
· Data Access Layer: Handles CRUD operations for inventory data.
· Database: Where raw materials and finished products are stored.
· Explanation:
· Presentation Layer: Interfaces for warehouse management, sales order processing, and
inventory checks.
· Business Logic Layer: Manages inventory logic (e.g., FIFO handling, sales order fulfillment).
· Data Access Layer: Provides data access methods, abstracts database interactions, and
ensures data integrity.

c. Is architecture evaluation in conflict with Agile processes? Explain how an Agile process would
work in the above case. Your answer should cover at least 3 Agile principles. [3 Marks]

· No Conflict: Architecture can evolve during the Agile process, adapting to new insights and
requirements.
· Agile Principles:
1. Customer Collaboration: Frequent discussions with stakeholders ensure the system meets
inventory management needs and adapts to feedback.
2. Responding to Change: The architecture can be adjusted based on changing business
priorities, such as new product lines or market demands.
3. Working Software: Regular iterations allow teams to deliver functional features incrementally,
integrating new architecture as needed.

d. Explain with rough sketches how you would use Kruchten’s 4+1 architectural view model
concerning the above case study. [3 Marks]

· Diagrams: Present sketches for each view:


· Logical View: Shows major components (inventory management, sales orders).
· Development View: Highlights software components and their interactions (e.g., APIs).
· Process View: Illustrates workflows for order processing and inventory updates.
· Physical View: Displays deployment architecture (servers, databases).
· Scenarios: Use cases such as receiving inventory, processing a sales order, and handling
returns.
· Explanation:
· Each view addresses specific stakeholder concerns (e.g., developers, users, operations) and
helps communicate the system’s architecture effectively.

Q2: GoLearn System


a. How can the testability tactics be applied for the utility services, application services, and configuration services? Explain your answer with a diagram, giving only one tactic for each of these services in the context of the case study. [5 Marks]

· Utility Services:
· Tactic: Mocking
· Explanation: Simulate dependencies for utility services during testing, allowing for isolated
tests.
· Application Services:
· Tactic: Logging
· Explanation: Implement logging to capture interactions, which aids in debugging and
monitoring performance.
· Configuration Services:
· Tactic: Parameterization
· Explanation: Allow runtime adjustments to configurations without code changes, enhancing
adaptability for testing.
· Diagram: Illustrate the service layers with respective tactics applied.

b. “Utech is replaced by GoLearn” – Justify the role of usability tactics in this context. [2 Marks]

· Justification:
· Usability tactics improve user engagement by providing intuitive interfaces and personalized
learning experiences. This contrasts with Utech’s closed system, which lacked user-friendly
features. Effective usability can lead to higher adoption rates among students and educators.
c. Please give examples of 3 design decisions for performance that you would take for the
services described in the above case. The answer must be in the context of the case study given.
[3 Marks]

1. Caching:
· Use caching for frequently accessed educational content to reduce response time and server load.
2. Load Balancing:
· Implement load balancing to distribute user requests evenly across multiple servers, enhancing availability and performance.
3. Service Granularity:
· Design services to be fine-grained, enabling efficient processing and quicker response times by loading only necessary components for each request.

Q3: Hospital Patient Management System


a. What are the hospital's goals for the new patient management system? [1 Mark]

· Goals:
· Enhance security to protect patient data.
· Improve usability for staff and patients, ensuring smooth workflows and easy access to
information.

b. What are the roles and responsibilities of stakeholders involved in the system (doctors, nurses,
patients, administrators)? [2 Marks]

· Doctors:
· Manage patient care, access and update medical records, and ensure data security.
· Nurses:
· Provide daily patient care, monitor health statuses, and assist in record keeping.
· Patients:
· Access personal health information, manage appointments, and provide feedback on care.
· Administrators:
· Oversee the system’s functionality, manage user access, and ensure compliance with
healthcare regulations.

c. Describe using diagrams 4 brainstorm scenarios for potential security breaches and usability
issues. [4 Marks]

· Diagrams:
1. Unauthorized Access: An external hacker breaches the system to access sensitive patient
data.
2. Data Breach: Insufficient encryption leads to leaked patient information.
3. System Downtime: Users are unable to access the system due to server failure.
4. User Error: Staff accidentally delete critical patient records.
· Explanation: Discuss the implications of each scenario for patient safety and system integrity.

d. What methodology have you studied in the course to assist in prioritizing scenarios for a small
project where the time available is short? [1 Mark]

· Methodology:
· Use the MoSCoW method (Must have, Should have, Could have, Won't have) to prioritize
scenarios based on their importance and urgency.

e. Prioritize scenarios based on their criticality, considering the potential impact on patient safety
and user experience. [1 Mark]

· Criticality Assessment:
1. Data Breach (Must have)
2. Unauthorized Access (Must have)
3. System Downtime (Should have)
4. User Error (Could have)

f. Is there any international standard for deciding what scenarios are to be given priority? [1 Mark]

· Yes, Standards Exist:


· Reference standards like ISO/IEC 27001, which provides guidelines for establishing, implementing, and maintaining information security management systems.
