Addison Wesley - Managing Information Security Risks
Information security requires far more than the latest tool or technology. Organizations must
understand exactly what they are trying to protect--and why--before selecting specific solutions.
Security issues are complex and often are rooted in organizational and business concerns. A
careful evaluation of security needs and risks in this broader context must precede any security
implementation to insure that all the relevant, underlying problems are first uncovered.
The OCTAVE approach for self-directed security evaluations was developed at the influential CERT(R) Coordination Center.
OCTAVE(SM) enables any organization to develop security priorities based on the organization's
particular business concerns. The approach provides a coherent framework for aligning security
actions with overall objectives.
Managing Information Security Risks, written by the developers of OCTAVE, is the complete and authoritative guide to its principles and implementations.
• Table of Contents
Copyright
Preface
Acknowledgments
Chapter 2. Principles and Attributes of Information Security Risk Evaluations
Part II. The OCTAVE Method
Section 3.2. Mapping Attributes and Outputs to the OCTAVE Method
Chapter 4. Preparing for OCTAVE
Section 5.4. Identify Security Requirements for Most Important Assets
Section 5.5. Capture Knowledge of Current Security Practices and Organizational Vulnerabilities
Chapter 6. Creating Threat Profiles (Process 4)
Section 6.2. Before the Workshop: Consolidate Information from Processes 1 to 3
Section 6.3. Select Critical Assets
Chapter 7. Identifying Key Components (Process 5)
Section 8.2. Before the Workshop: Run Vulnerability Evaluation Tools on Selected Infrastructure Components
Chapter 9. Conducting the Risk Analysis (Process 7)
Section 10.2. Before the Workshop: Consolidate Information from Processes 1 to 3
Chapter 11. Developing a Protection Strategy—Workshop B (Process 8B)
Section 11.2. Before the Workshop: Prepare to Meet with Senior Management
Section 11.4. Review and Refine Protection Strategy, Mitigation Plans, and Action List
Section 11.5. Create Next Steps
Part III. Variations on the OCTAVE Approach
Glossary
Bibliography
Appendix A. Case Scenario for the OCTAVE Method
Section A.1. MedSite OCTAVE Final Report: Introduction
Section A.3. Risks and Mitigation Plans for Critical Assets
Section A.4. Technology Vulnerability Evaluation Results and Recommended Actions
Appendix B. Worksheets
References
About the Authors
Copyright
Many of the designations used by manufacturers and sellers to distinguish their products are
claimed as trademarks. Where those designations appear in this book, and Addison-Wesley, Inc.
was aware of a trademark claim, the designations have been printed in initial capital letters or in
all capitals.
CMM, Capability Maturity Model, Capability Maturity Modeling, Carnegie Mellon, CERT, and CERT
Coordination Center are registered in the U.S. Patent and Trademark Office.
ATAM; Architecture Tradeoff Analysis Method; CMMI; CMM Integration; CURE; IDEAL; Interim
Profile; OCTAVE; Operationally Critical Threat, Asset, and Vulnerability Evaluation; Personal
Software Process; PSP; SCAMPI; SCAMPI Lead Assessor; SCE; Team Software Process; and TSP
are service marks of Carnegie Mellon University.
Special permission to use materials from the OCTAVE Method Implementation Guide, copyright
© 2002 by Carnegie Mellon University, has been granted by the Software Engineering Institute.
The authors and publisher have taken care in the preparation of this book but make no expressed or
implied warranty of any kind and assume no responsibility for errors or omissions. No liability is
assumed for incidental or consequential damages in connection with or arising out of the use of
the information or programs contained herein.
The publisher offers discounts on this book when ordered in quantity for special sales. For more
information, please contact:
(800) 382-3419
corpsales@[Link]
Alberts, Christopher J.
Managing information security risks : the OCTAVE approach / Christopher J. Alberts, Audrey J.
Dorofee.
p. cm.
ISBN 0-321-11886-3
658.4'78—dc21 2002024939
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without the prior consent of the publisher. Printed in the United States of America.
Published simultaneously in Canada.
For information on obtaining permission for use of material from this work, please submit a
written request to:
Pearson Education, Inc.
Boston, MA 02116
1 2 3 4 5 6 7 8 9 10—HT—0605040302
Dedication
—Christopher Alberts
For Ronald Higuera, for putting me on the path to risk management so long ago
—Audrey Dorofee
List of Figures
Figure 5-3 Most Important Senior Management Assets and Rationale for Selection
Figure 5-7 Security Requirements for PIDS from the Senior Managers' Perspective
Figure 5-9 Contextual Security Practice Information from the Senior Managers'
Perspective
Figure 6-1 Asset-Based Threat Tree for Human Actors Using Network Access
Figure 6-2 Asset-Based Threat Tree for Human Actors Using Physical Access
Figure 7-5 Access Paths and Key Classes of Components for PIDS
Figure 9-4 Part of PIDS Risk Profile: Human Actors Using Network Access Tree
Figure 9-6 Part of the PIDS Risk Profile (Including Probability): Human Actors Using
Network Access Tree
Figure 10-1 Survey Results from Senior Managers for Security Awareness and
Training
Figure 10-6 Part of PIDS Risk Profile (Human Actors Using Network Access) with
Mitigation Plan
Figure 10-7 Part of ECDS Risk Profile (Other Problems) with Mitigation Plan
Figure 10-8 Part of PIDS Risk Profile (Other Problems) with Mitigation Plan
Figure 10-12 Expected Values (EV) for Part of PIDS Risk Profile: Human Actors
Using Network Access Tree
Figure 13-2 Critical Asset Risk Profile for OCTAVE Focused on Small Organizations
Figure 14-4 Operations and Tasks of the Information Security Risk Management
Framework
Figure 14-5 Evaluation-Based Information Security Risk Management
Figure A-1 Risk Profile for Paper Medical Records: Human Actors Using Physical
Access
Figure A-2 Risk Profile for Paper Medical Records: Other Problems
Figure A-3 Risk Profile for Personal Computers: Human Actors Using Network
Access
Figure A-4 Risk Profile for Personal Computers: Human Actors Using Physical Access
Figure A-7 Risk Profile for PIDS: Human Actors Using Network Access
Figure A-8 Risk Profile for PIDS: Human Actors Using Physical Access
Figure A-12 Risk Profile for ECDS: Human Actors Using Network Access
Figure A-13 Risk Profile for ECDS: Human Actors Using Physical Access
Figure A-17 Access Paths and Key Classes of Components for PIDS
List of Tables
Table A-6 Types of Impact and Impact Values for Paper Medical Records
Table A-8 Types of Impact and Impact Values for Personal Computers
Table A-13 Types of Impact and Impact Values for ABC Systems
Preface
Many people seem to be looking for a silver bullet when it comes to information security. They
often hope that buying the latest tool or piece of technology will solve their problems. Few
organizations stop to evaluate what they are actually trying to protect (and why) from an
organizational perspective before selecting solutions. In our work in the field of information
security, we have found that security issues tend to be complex and are rarely solved simply by
applying a piece of technology. Most security issues are firmly rooted in one or more
organizational and business issues. Before implementing security solutions, you should consider
characterizing the true nature of the underlying problems by evaluating your security needs and
risks in the context of your business.
Considering the varieties and limitations of current security evaluation methods, it is easy to
become confused when trying to select an appropriate method for evaluating your information
security risks. Most of the current methods are "bottom-up": they start with the computing
infrastructure and focus on the technological vulnerabilities without considering the risks to the
organization's mission and business objectives. A better alternative is to look at the organization itself, identify what needs to be protected, determine why it is at risk, and develop solutions that combine technology- and practice-based approaches.
OCTAVE enables decision makers to develop relative priorities based on what is important to the organization.
OCTAVE was developed at the
CERT Coordination Center (CERT/CC). Established in 1988, it is the oldest computer
security response group in existence. The center both advises Internet sites that
have had their security compromised and offers tools and techniques that enable
typical users and administrators to protect systems effectively from damage caused
by intruders. The CERT/CC's home is the Software Engineering Institute (SEI), a
federally funded research and development center operated by Carnegie Mellon
University, with a broad charter to improve the practice of software engineering.
History of OCTAVE
In addition to our experience with vulnerability evaluations, we had also developed and applied a
variety of software development risk evaluation and management techniques [Williams 00 and
Dorofee 96]. These techniques focused on the critical risks that could affect project objectives.
A second important observation from our vulnerability evaluation days concerned a given site's
level of involvement and subsequent ownership of the results. Because the vulnerability
evaluations were highly dependent on the expertise of the assessors, site personnel involved in
the process participated very little. When we were able to go back to a site, we saw the same
vulnerabilities from one visit to the next. There had been little or no organizational learning.
People in those organizations did not feel "ownership" of the various evaluations' results and had
therefore not implemented the findings. We decided that sites needed to be more involved in
security evaluations in order to learn about their security processes and participate in developing
improvement recommendations. We started to develop a self-directed evaluation approach that included business personnel as well as staff from the information technology department.
In June 1999 we published a report describing the OCTAVE framework [Alberts 99], a
specification for an information security risk evaluation. This was refined into the OCTAVE
Method [Alberts 01a], which was developed for large-scale organizations. In addition, we are
developing a second method targeted at small organizations. During these efforts, we
determined that the OCTAVE framework did not sufficiently capture the general approach to, or
requirements for, the self-directed information security risk evaluations that we wanted. We
refined the framework into the OCTAVE criteria [Alberts 01b], namely, a set of principles,
attributes, and outputs that define the OCTAVE approach.
It shows how the OCTAVE Method can be tailored to different types of organizations.
It describes how this approach provides a foundation for managing information security
risks.
To address these key issues, we have divided the contents of the book into three parts.
Part I, the Introduction, summarizes the OCTAVE approach and presents the principles,
attributes, and outputs of self-directed information security risk evaluations.
Part II, The OCTAVE Method, illustrates one way in which the OCTAVE approach can be
implemented in an organization. This part begins with an "executive summary" of the
OCTAVE Method and then presents the method in detail.
Part III, Variations on the OCTAVE Approach, describes ideas for tailoring the OCTAVE
Method for different types of organizations. This part also presents basic concepts
related to managing information security risks after the evaluation.
This book is written for a varied audience. Some familiarity with security issues is helpful, but
not essential; we define all concepts and terms as they appear. The book should satisfy people
who are new to security as well as experts in security and risk management.
Information security risk evaluations are appropriate for anyone who uses networked computers
to conduct business and thus may have critical information assets at risk. This book is for people
who need to perform information security risk evaluations and who are interested in using a self-
directed method that addresses both organizational and information technology issues.
Managers, staff members, and information technology personnel concerned about and
responsible for protecting critical information assets should all find this book useful.
In addition, consultants who provide information security services to other organizations may be
interested in seeing how the OCTAVE approach or the OCTAVE Method might be incorporated
into their existing products and services. Consumers of information security risk evaluation
products and services can use the principles, attributes, and outputs of the OCTAVE approach to
understand what constitutes a comprehensive approach for evaluating information security risks.
Consumers can also use the principles, attributes, and outputs as a benchmark for selecting
products and services that are provided by vendors and consultants.
The OCTAVE Method requires an interdisciplinary analysis team to perform the evaluation and
act as a focal point for security improvement efforts. The primary audience for this book, then, is
anyone who might be on the analysis team or work with them. The book includes "how to"
information for conducting an evaluation as well as concepts related to managing risks after the
evaluation. For an analysis team, the entire book is applicable.
Those who want to understand the OCTAVE approach should read Part I. Those who just want an
overview of the OCTAVE Method and a general idea of how it might be used should read
Chapters 1 and 3. People who already perform information security risk evaluations and are
looking for additional ideas for improvement should first read Chapters 1 and 3 and then decide
which areas to explore further. Those ready to start learning how to conduct self-directed
information security evaluations in their organizations should read Part II. Finally, people who
are interested in customizing the OCTAVE Method or learning about what to do after an
evaluation should read Part III.
Acknowledgments
Writing a book requires an intense effort. We would like to acknowledge the support of everyone
who helped us in writing this book, without which we would never have been able to complete it.
The following people spent countless hours reviewing the material in this book and providing
invaluable feedback: Julia Allen, Rich Caralli, Jeff Collmann, Carol A. Sledge, Andrew Moore,
William Wilson, and Carol Woody.
We would especially like to thank Rich Pethia, program manager of the Networked Systems
Survivability Program, and William Wilson, technical manager of the Survivable Enterprise
Management Team, for their encouragement and support of our work. Such an ambitious project
requires unwavering support from management, and we are grateful for their help.
Many people made contributions to the technical content of this book. Specifically, Julia Allen
helped us develop the catalog of practices, Rich Caralli contributed lessons learned from his
experiences with OCTAVE, Jeff Collmann offered insightful and detailed comments on our early
prototypes, and Bradford Willke made important contributions to the technological pieces of
OCTAVE.
We would also like to acknowledge those who have provided us with production assistance.
Linda Pesante helped design the book and served as our technical editor, and David Biber
created many of the graphics used throughout the book.
The technical content of this book evolved from many previous efforts within the Software
Engineering Institute. We leveraged technical material from several projects, including
Continuous Risk Management, Software Risk Evaluation, Information Security Evaluation, and
the early work on the OCTAVE framework. Many people contributed to these projects, and we
would like to thank all of them for providing such a rich foundation upon which to build.
We would also like to acknowledge all the organizations that provided funding and pilot
opportunities as we developed the OCTAVE Method.
Finally, Chris would like to thank his wife, Carol Feola, for her support and encouragement. To
put up with the frustrations, deadlines, and last-minute reviews required incredible generosity
and patience.
Part I: Introduction
Part I provides an executive overview of self-directed information security risk
evaluations and how they fit into the overall management of information security
risks. Specifically, it introduces the Operationally Critical Threat, Asset, and
Vulnerability Evaluation[SM] (OCTAVE[SM]) approach to assessments and the
OCTAVE Method. Chapter 1 gives background on information security risk
evaluations and the OCTAVE approach to assessing information security risks.
Chapter 2 discusses the principles, attributes, and outputs that define a
comprehensive, self-directed evaluation.
[1] From CERT Coordination Center: The number of vulnerabilities reported in 2001 is 2,437 (up from 1,090 in 2000), and the number of security incidents is 52,658 (up from 21,756 in 2000). See [Link] for additional information.
No matter which way the current statistics swing, you need to consider both internal and
external threats. Your organization is only as secure as its weakest link, and that link, more
often than not, is one of you. How many people can state with certainty that they have not
deliberately or inadvertently revealed their passwords in the past year? How many have a file on
their personal data assistant (PDA) that lists passwords or contains confidential information?
How many have "yellow stickies" under the keyboard? How many employees load games on
their workstations or open up unknown email attachments? How many companies spend the
time and money to keep up with the latest patches and technological security tools? Without
good organizational practices in place and enforced, in addition to technological safeguards, the
organization and its assets are at risk.
Three weeks after the network administrator was fired, a plant worker started the day by
logging on to the central file server. Instead of booting up, a message came on the screen
saying an area of the operating system was being fixed. Then the server crashed, and in an
instant, all of the plant's 1,000 tooling and manufacturing programs were gone. The server
wouldn't come back up. The plant manager ordered that the manufacturing machines be kept
running with the previous set of programs. It didn't matter if the orders already had been filled.
He had to keep the machines running.
Then the plant manager went to get his salvation—the backup tape, kept in a filing cabinet in
the human resources department. But the tapes were gone. He then turned to the workstations
connected to the file server. The programs, at least a good chunk of them, should have been
stored locally on the individual workstations. But the programs weren't there.
The fired network administrator, the only employee responsible for maintaining, securing, and
backing up the file server, hadn't yet been replaced. In the days that followed the crash, the
company called in three different people to attempt data recovery. Five days after the crash, the
plant manager started shifting workers around the department and shutting down machines that
were running out of raw materials or creating excess inventory. He took steps to hire a fleet of
programmers to start rebuilding some of the 1,000 lost programs.
The company's chief financial officer testified that the software bomb destroyed all the programs
and code generators that allowed the company to manufacture 25,000 different products and
customize those basic products into as many as 500,000 different designs. The company lost its
twin advantages of being able to modify products easily and produce them inexpensively. It lost
more than $10 million, forfeited its position in the industry, and eventually had to lay off 80
employees.
Information security is more than setting up a firewall, applying patches to fix newly discovered
vulnerabilities in your system software, or locking the cabinet with your backup tapes.
Information security is determining what needs to be protected and why, what it needs to be
protected from, and how to protect it for as long as it exists.
The burning question, of course, is how to assure your organization an adequate level of security
over time. There are many answers to this challenging question, just as there are many
approaches to managing an organization's security. Unfortunately, there is no silver bullet, no
single solution that will solve all of your problems. There are four common approaches:
Vulnerability assessment
Use standards for specific IT security activities (such as hardening specific types of
platforms)
Use (sometimes proprietary) software tools to analyze the infrastructure and all of its
components
Security risk evaluations expand upon the vulnerability assessment to look at the security-
related risks within a company, including internal and external sources of risk as well as
electronic-based and people-based risks. These multifaceted evaluations attempt to align the
risk evaluation with business drivers or goals and usually focus on the following four aspects of
security:
1. They examine the corporate practices relating to security to identify strengths and
weaknesses that could create or mitigate security risks. This procedure may include a
comparative analysis that ranks this information against industry standards and best
practices.
c. Exfiltration of information
d. Denial of service
Managed security services providers rely on human expertise to manage a company's systems
and networks. They use their own or another vendor's security software and devices to protect
your infrastructure. Usually, a managed security service will proactively monitor and protect an
organization's computing infrastructures from attacks and misuse. The solutions tend to be
customized for each client's unique business requirements and to use proprietary technology.
They can either actively respond to intrusions or notify you after they occur. Some employ
automated, computer-based learning and analysis, promising decreased response time and
increased accuracy.
Vulnerability assessments, information system audits, and information security risk evaluations
help you characterize your security issues, but not manage them. Managed service providers
manage your security for you. Although each of these approaches can be useful to an
organization trying to protect itself, all of them have some limitations, based on their context of
use. A small company may have no choice but to use a managed service provider. A company
with limited IT resources may not be able to do much more than manage vulnerabilities, and,
depending on what it has to protect, may not need to do much more. The next section looks at a
more comprehensive approach that builds upon the previous approaches, allowing an
organization to assume responsibility for characterizing and managing its security issues.
Risk is the possibility of suffering harm or loss. It refers to a situation in which a person could do
something undesirable or a natural occurrence could cause an undesirable outcome, resulting in
a negative impact or consequence. The first step in managing risk is to understand what your
risks are in relation to your organization's missions and its key assets. This understanding is
reached by carrying out a comprehensive risk evaluation to identify your organization's risks.
Once these risks are identified, the organization's personnel must decide what to do to address
them. Risk management is the ongoing process of identifying risks and implementing plans to
address them.
A risk management approach involves the entire organization, including personnel from both the
information technology department and the business lines of the organization [GAO 98].
Solution strategies derived by using this approach are practice-based, that is, they are driven by
best or accepted industry practices. By implementing these practice-based solutions across the
information technology department and the business lines, an organization can start
institutionalizing good security practices and making them part of the way the organization
routinely conducts business. This approach enables an organization to improve its security
posture over time. The next section takes a closer look at an information security risk evaluation
and management.
Think about how much you rely upon access to information and systems to do your job. Today,
information systems are essential to most organizations, because virtually all information is
captured, stored, and accessed in digital form. Organizations rely on digital data that are accessible,
dependable, and protected from misuse. Systems are interconnected in ways that could not
have been imagined ten years ago. Networked systems have enabled unprecedented access to
information. Unfortunately, they have also exposed our information to a variety of new threats.
Organizations today have implemented a wide variety of complex computing infrastructures.
They need flexible approaches that enable them to understand their information-specific security
risks and then to create strategies to address those risks. An organization that wishes to
improve its security posture must be prepared to take the following steps:
4. Initiate an ongoing, continual effort to maintain and improve its security posture.
An information security risk evaluation is a process that can help you meet these objectives. It
generates an organizationwide view of information security risks. It provides a baseline that can
be used to focus mitigation and improvement activities. Periodically, an organization needs to
"reset" its baseline by conducting another evaluation. The time between evaluations can be
predetermined (e.g., yearly) or triggered by major events (e.g., corporate reorganization,
redesign of an organization's computing infrastructure). However, an information security risk
evaluation is only one part of an organization's continuous information security risk management
activities.
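The cadence described above — a fixed interval or a major-event trigger — can be pictured with a small sketch. This is purely illustrative (the function name, event names, and yearly threshold are invented for the example, not part of OCTAVE):

```python
from datetime import date, timedelta

REEVALUATION_INTERVAL = timedelta(days=365)   # e.g., a yearly baseline reset
MAJOR_EVENTS = {"corporate reorganization", "infrastructure redesign"}

def evaluation_due(last_eval: date, today: date, recent_events: set) -> bool:
    """A new baseline is needed on a fixed schedule or after a major event."""
    if today - last_eval >= REEVALUATION_INTERVAL:
        return True
    return bool(recent_events & MAJOR_EVENTS)

# Interval elapsed -> due; major event -> due; otherwise -> not due.
print(evaluation_due(date(2001, 1, 1), date(2002, 6, 1), set()))                      # True
print(evaluation_due(date(2002, 1, 1), date(2002, 3, 1), {"infrastructure redesign"}))  # True
print(evaluation_due(date(2002, 1, 1), date(2002, 3, 1), set()))                      # False
```

Either condition alone is enough to warrant a new evaluation; in practice the interval and the event list would be set by the organization.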
Evaluation Activities
The evaluation only provides a direction for an organization's information security activities; it
does not necessarily lead to meaningful improvement. No evaluation, no matter how detailed or
how expert, will improve an organization's security posture unless the organization follows
through by implementing the results. After the evaluation, the organization should take the
following steps:
1. Plan how to implement the protection strategy and risk mitigation plans from the
evaluation by developing detailed action plans. This activity can include a detailed cost-
benefit analysis among strategies and actions.
3. Monitor the plans for progress and effectiveness. This activity includes monitoring risks
for any changes.
Risk evaluation is only the first step of risk management. Figure 1-1 illustrates an information
security risk management framework and the "slice" that an evaluation provides. The framework
highlights the operations that organizations can use to identify and address their information
security risks. Chapter 14 examines the framework in some detail and presents the basic
concepts behind information security risk management. One important point to note is that most
information security risk management approaches rely upon the evaluation to focus subsequent
mitigation and improvement activities.
There are many types of information security risk evaluations available to potential users. The
quality and scope of products and services vary across an extremely wide range. Many of the
evaluations do not lend themselves to an organizationwide security improvement approach. The
next section outlines a flexible approach to evaluating information security risks in an
organization.
OCTAVE Approach
The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) enables an
organization to sort through the complex web of organizational and technological issues to
understand and address its information security risks. OCTAVE defines an approach to
information security risk evaluations that is comprehensive, systematic, context driven, and self-
directed.
At the core of OCTAVE is the concept of self-direction, which means that people from an
organization manage and direct the information security risk evaluation for that organization.
Information security is the responsibility of everyone in the organization, not just the IT
department. The organization's people need to direct the activities and make the decisions about
its information security improvement efforts. OCTAVE achieves this by establishing a small,
interdisciplinary team drawn from an organization's own personnel, called the analysis team, to
lead the organization's evaluation process.
The analysis team includes people from both the business units and the information technology
department, because information security includes both business- and technology-related issues.
People from the business units of an organization understand what information is important to
complete their tasks as well as how they access and use the information. The information
technology staff understand issues related to how the computing infrastructure is configured as
well as what is important to keep it running. Both of these perspectives are important in
understanding the global, organizational view of information security risk.
An information security risk breaks down into four major components: asset, threat,
vulnerability, and impact. An information security risk evaluation must account for all of these
components. OCTAVE is an asset-driven evaluation approach, framing the organization's risks in
the context of its assets. Using the organization's assets to focus the evaluation's activities is an
efficient means of reducing the number of threats and risks that you must consider during the
evaluation [Fites 89]. In addition, assets are used to form a bridge between the organization's
business objectives and the security-related information gathered during an evaluation.
OCTAVE requires an analysis team to (1) identify the information-related assets (e.g.,
information, systems) that are important to the organization and (2) focus risk analysis activities
on those assets judged to be most critical to the organization.
The analysis team has to consider the relationships among critical assets, the threats to those
assets, and vulnerabilities (both organizational and technological) that can expose assets to
threats. Only the analysis team can evaluate risks in an operational context. In other words,
OCTAVE focuses on how operational systems are used to conduct an organization's business and
how those systems are at risk due to security threats.
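The four risk components and the asset-driven focus described above can be sketched as a small data model. This is an illustrative reading of the text, not OCTAVE's own notation; all class names, field names, and sample assets are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    actor: str    # e.g., "insider" or "outsider"
    access: str   # e.g., "network" or "physical"
    outcome: str  # e.g., "disclosure", "denial of service"

@dataclass
class Risk:
    asset: str               # what is at risk
    threat: Threat           # how it could be harmed
    vulnerabilities: list    # organizational and technological weaknesses
    impact: str              # consequence to the organization's mission

# Asset-driven focus: risks are enumerated only for assets judged critical,
# which keeps the number of threats and risks under consideration manageable.
all_assets = ["paper medical records", "PIDS", "personal computers", "email"]
critical_assets = {"PIDS", "paper medical records"}

risks = [
    Risk(asset=a,
         threat=Threat("insider", "network", "disclosure"),
         vulnerabilities=["weak passwords"],
         impact="high")
    for a in all_assets if a in critical_assets
]
print(len(risks))  # risks are built only for the two critical assets
```

The point of the sketch is the filter: the evaluation's analysis effort attaches threats, vulnerabilities, and impacts to critical assets rather than to everything the organization owns.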
Three Phases
The organizational, technological, and analysis aspects of an information security risk evaluation
lend themselves to a three-phase approach. OCTAVE is built around these three phases to enable
organizational personnel to assemble a comprehensive picture of the organization's information
security needs.
Phase 1: Build Asset-Based Threat Profiles. During this part of the evaluation, the
analysis team determines which information-related assets are most important to the
organization (the critical assets), identifies the threats to those assets, and captures
the organization's current security practices and organizational vulnerabilities.
Phase 2: Identify Infrastructure Vulnerabilities. During this part of the evaluation,
the analysis team identifies the information technology systems and components that
are key to each critical asset and examines them for technology vulnerabilities.
Phase 3: Develop Security Strategy and Plans. During this part of the
evaluation, the analysis team identifies risks to the organization's critical assets and
decides what to do about them. The team creates a protection strategy for the
organization and mitigation plans to address the risks to the critical assets, based
upon an analysis of the information gathered.
OCTAVE Variations
The specific ways in which business practices (e.g., planning, budgeting) are implemented in
different organizations vary according to the characteristics of the organizations. Consider the
differences between management practices at a small start-up company and those required in a
large established organization. Both organizations require a set of similar management practices
for planning and budgeting, but the practices are implemented differently. Similarly, the OCTAVE
approach treats the information security risk evaluation as a management practice. We have found that the
ways in which organizations implement information security risk evaluations differ based on a
variety of organizational factors. OCTAVE implemented in a large multinational corporation is
different from OCTAVE in a small start-up. However, some common principles, attributes, and
outputs hold across organizational types.
Common Elements
The common elements of the OCTAVE approach are embodied in a set of criteria that define the
principles, attributes, and outputs of the OCTAVE approach. Many methods can be consistent
with these criteria, but there is only one set of OCTAVE criteria. The Software Engineering
Institute (SEI) has developed one method consistent with the criteria, the OCTAVE Method,
which was designed with large organizations (more than 300 employees) in mind. The institute
is presently developing a method for small organizations (fewer than 100 employees). In
addition, others might define methods for specific contexts that are consistent with the OCTAVE
criteria. Figure 1-2 illustrates these points.
You can think of the OCTAVE Method as a baseline or starting point from which you can adapt to
a particular operational environment or industry segment.
The activities it requires can be tailored for a variety of organizational sizes. There are, however,
limits to tailoring the OCTAVE Method. For example, the organizational dynamics of very small
organizations are quite different from those of large organizations. An information security risk
evaluation specifically designed for the needs of small organizations may have a distinctly
different look and feel from the OCTAVE Method. Part III looks at tailoring options and how to
adapt the OCTAVE approach to meet the needs of both small and complex organizations while
still remaining true to its principles, attributes, and outputs. Part III also lays the groundwork for
continuing the management and improvement of information security.
2.1 Introduction
The OCTAVE approach is defined in a set of criteria that includes principles, attributes, and
outputs [Alberts 01b]. Principles are the fundamental concepts driving the nature of the
evaluation. They define the philosophy that shapes the evaluation process. For example, self-
direction is one of the principles of OCTAVE. The concept of self-direction means that people
inside the organization are in the best position to lead the evaluation and make decisions.
The requirements of the evaluation are embodied in the attributes and outputs. Attributes are
the distinctive qualities, or characteristics, of the evaluation. They are the requirements that
define the basic elements of the OCTAVE approach and define what is necessary to make the
evaluation a success from both the process and organizational perspectives. Attributes are
derived from the OCTAVE principles. For example, one of the attributes of OCTAVE is that an
interdisciplinary team (the analysis team) staffed by personnel from the organization leads the
evaluation. The principle behind the creation of an analysis team is self-direction. Finally,
outputs define the outcomes that an analysis team must achieve during the evaluation.
Table 2-1 lists the structure of the principles, attributes, and outputs that we will examine in this
chapter. We begin our exploration of the OCTAVE approach in the next section by looking at
principles.
Principles:
Self-direction
Adaptable measures
Defined process
Foundation for a continuous process
Forward-looking view
Focus on the critical few
Integrated management
Open communication
Global perspective
Teamwork

Attributes:
Analysis team
Augmenting analysis team skills
Catalog of practices
Generic threat profile
Catalog of vulnerabilities
Defined evaluation activities
Documented evaluation results
Evaluation scope
Next steps
Focus on risk
Focused activities
Organizational and technological issues
Business and information technology participation
Senior management participation
Collaborative approach

Outputs:
Phase 1: Critical assets; Security requirements for critical assets; Threats to critical assets; Current security practices; Current organizational vulnerabilities
Phase 2: Key components; Current technology vulnerabilities
Phase 3: Risks to critical assets; Risk measures; Protection strategy; Risk mitigation plans
This section focuses on information security risk management principles. This is where we look
at some of the philosophical underpinnings of an information security risk management
approach. The principles shape the nature of risk management activities and provide the basis
for the evaluation process. We group principles into the following three areas:
1. Information Security Risk Evaluation Principles: key aspects that form the
foundation of an effective information security risk evaluation
2. Risk Management Principles: key aspects of effective risk management, common to
general risk management practice
3. Organizational and Cultural Principles:[1] aspects of the organization and its culture
essential to the successful management of information security risks
[1]
These principles are similar in scope and intent to those documented in
the Continuous Risk Management Guidebook [Dorofee 96].
The ten information security risk management principles, shown graphically in Figure 2-1, are
discussed in turn in the next section.
We begin our examination of principles by focusing on the concepts that drive information
security risk evaluations. This category includes the following four principles:
1. Self-direction
2. Adaptable measures
3. Defined process
4. Foundation for a continuous process
These principles provide the foundation for a successful evaluation. They focus on the roles of
organizational personnel, key aspects of the process, and the link to ongoing security
improvement activities. The first principle that we look at is self-direction.
Self-Direction
Self-direction describes a situation in which certain people in an organization manage and direct
information security risk evaluations for that organization. These people are responsible for
directing the risk management activities and for making decisions about the organization's
security efforts. This approach allows the evaluation to consider the organization's unique
circumstances and context. Self-direction requires
Taking responsibility for information security by leading the information security risk
evaluation and managing the evaluation process
Making the final decisions about the organization's security efforts, including which
improvements and actions to implement
Adaptable Measures
A flexible evaluation process can adapt to changing technology and advancements. It is not
constrained by a rigid model of current sources of threats or by what practices are currently
accepted as "best." Because the information security and information technology domains
change very rapidly, an adaptable set of measures against which an organization and its unique
context can be evaluated is essential. Adaptable measures require
Current catalogs of information that define accepted security practices, known sources of
threat, and known technological weaknesses (vulnerabilities)
Defined Process
A defined process describes the need for information security evaluation programs to rely upon
defined and standardized evaluation procedures. Using a defined evaluation process can help to
institutionalize the process, ensuring some level of consistency in the application of the
evaluation. A defined process requires
Specifying all tools, worksheets, and catalogs of information required by the evaluation
Foundation for a Continuous Process
An organization must implement practice-based security strategies and plans to improve its
security posture over time. By implementing these practice-based solutions, an organization can
start institutionalizing good security practices, making them part of the way the organization
routinely conducts business. Security improvement is a continuous process, and the results of an
information security risk evaluation provide the foundation for that continuous improvement.
Now that we have presented the information security risk evaluation principles, we broaden our
focus to risk management. The principles in this category are common to general risk
management practices; they are not unique to information security. We first identified these
principles when we were developing risk management techniques for software development
projects [Dorofee 96]. This category includes the following three principles:
1. Forward-looking view
2. Focus on the critical few
3. Integrated management
Forward-Looking View
A forward-looking view requires an organization's personnel to look beyond the current problems
by focusing on risks to the organization's most critical assets. The focus is on managing
uncertainty by exploring the interrelationships among assets, threats, and vulnerabilities and
examining the resulting impact on the organization's mission and business objectives. A forward-
looking view requires thinking about tomorrow, focusing on managing the uncertainty presented
by a range of risks. It also requires managing organizational resources and activities by
incorporating the uncertainty presented by information security risks.
Focus on the Critical Few
This principle requires the organization to focus on the most critical information security issues.
Every organization faces constraints on the number of staff members and funding that can be
used for information security activities. Thus, the organization must ensure that it is applying its
resources efficiently, both during an information security risk evaluation and afterwards. A focus
on the critical few requires (1) using targeted data collection to collect information about
security risks and (2) identifying the organization's most critical assets and selecting security
practices to protect those assets.
Integrated Management
This principle requires that security policies and strategies be consistent with organizational
policies and strategies. The organization's management proactively considers trade-offs among
business and security issues when creating policy, striking a balance between business and
security goals. Integrated management means (1) incorporating information security issues into
the organization's business processes and (2) considering business strategies and goals when
creating and revising information security strategies and policies.
The final type of principle that we will examine is the broadest of all: organizational and cultural
principles. Like the risk management principles, these are not unique to the information security
domain. Organizational and cultural principles help to create an organizational culture conducive
to effective risk management. From our experience, if these principles are not part of the way an
organization conducts business, many issues will go unnoticed. People will not communicate key
risks, nor will they work together to address them. Because information security is a complex
discipline that spans the entire organization, implementing these principles is essential to creating
an environment that supports an open exchange of ideas. Those organizations that are
unsuccessful in implementing a risk management approach often fail because they violate these
principles. This category includes the following three principles:
1. Open communication
2. Global perspective
3. Teamwork
Open Communication
One of the most important principles, open communication, is also the most difficult to
implement. Yet information security risk management cannot succeed without open
communication of security-related issues. Information security risks cannot be addressed if they
aren't communicated to and understood by the organization's decision makers. A fundamental
concept behind most successful risk management programs is a culture that supports open
communication of risk information through a collaborative evaluation approach. Often,
evaluation methods provide staff members with ways of expressing issues so that the
information is not attributed to them, allowing for a free expression of ideas. Open
communication involves three aspects:
1. Developing evaluation activities that are built upon collaborative approaches (e.g.,
workshops)
Global Perspective
This principle requires members of the organization to create a common view of what is most
important to the organization. Individual perspectives pertaining to information security risk are
solicited and then consolidated to form a global picture of the information security risks with
which the organization must deal. Such a global perspective means (1) identifying the multiple
perspectives of information security risk that exist in the organization and (2) viewing
information security risk within the larger context of the organization's mission and business
objectives.
Teamwork
No individual can understand all of the information security issues facing an organization. As
noted, information security risk management requires an interdisciplinary approach that includes
both business and information technology perspectives, and it depends on people from these
areas working together as a team.
The principles defined in this section are broad concepts that form the foundation for information
security risk evaluation activities. The next section explores how these concepts can be
implemented in an information security risk evaluation approach by focusing on information
security risk evaluation attributes.
We now turn our attention directly toward information security evaluation, moving from the
more abstract nature of risk management principles to information security risk evaluation
attributes. The remainder of this chapter focuses on the attributes and outputs of the OCTAVE
approach.
First, we examine the tangible characteristics of information security risk evaluations and define
what is necessary to make the evaluation a success from both the process and organizational
perspectives. We begin by exploring the primary relationships between the principles and
attributes, illustrated in Table 2-2.
Principle: Attribute(s)
Self-direction: Analysis team; Augmenting analysis team skills
Adaptable measures: Catalog of practices; Generic threat profile; Catalog of vulnerabilities
Defined process: Defined evaluation activities; Documented evaluation results
Foundation for a continuous process: Next steps
Forward-looking view: Focus on risk
Focus on the critical few: Evaluation scope; Focused activities
Integrated management: Organizational and technological issues; Senior management participation
Open communication: Business and information technology participation; Collaborative approach
Global perspective: Organizational and technological issues; Business and information technology participation
Teamwork: Analysis team; Collaborative approach
Note that some of the attributes map to more than one principle, as is to be expected in such a
complex activity as an information security risk evaluation. By looking at the attribute names,
you will notice that they are focused on tangible characteristics of the evaluation process and are
process oriented rather than activity oriented. We will start looking at activities in the next
section when we present the outputs. Let's turn our attention now to the information security
risk evaluation attributes, starting with the analysis team.
Analysis Team
An analysis team staffed by personnel from the organization must lead the evaluation activities.
The analysis team must be interdisciplinary in nature, including people from both the business
units and the information technology department. The analysis team must manage and direct
the information security risk evaluation for its organization, and it must be responsible for
making decisions based on the information gathered during the process.
This attribute is important because it ensures that ultimate responsibility for conducting the
evaluation is assigned to a team of individuals from the organization. Using an analysis team to
lead it helps to ensure the following results:
People who understand the business processes and who understand information
technology work together to improve the organization's security posture.
The evaluation is run by personnel who understand how to apply all worksheets and
tools used during the evaluation.
People in the organization feel "ownership" of the evaluation results, making them more
likely to implement the recommended strategies and plans.
Augmenting Analysis Team Skills
The evaluation process must allow the analysis team to augment its skills and abilities by
including additional people who have specific skills required by the process or who possess
needed expertise. These additional people can be from other parts of the organization, or they
can be from an external organization.
The analysis team is responsible for analyzing information and making decisions during the
evaluation. However, the core members of the analysis team may not have all of the knowledge
and skills needed during the evaluation. At each point in the process, the analysis team
members must decide if they need to augment their knowledge and skills for a specific task.
They can do so by including others in the organization or by using external experts. This
attribute is important because it ensures that the analysis team has the required skills and
knowledge to complete the evaluation. This attribute also allows an organization to conduct an
information security risk evaluation even when it does not have all of the required knowledge
and skills within the organization. It provides an avenue for working with external experts when
appropriate.
Catalog of Practices
The evaluation process must assess an organization's security practices by considering a range
of strategic and operational security practice areas. These are formally defined in a catalog of
practices. The catalog of practices used by an organization should be consistent with all laws,
regulations, and standards of due care with which the organization must comply. A more
detailed description of the catalog of practices appears in Chapter 5 and in Appendix C.
Generic Threat Profile
The evaluation process must assess threats to the organization's critical assets by considering a
broad range of potential threat sources that are formally defined in a generic threat profile. The
profile contains potential threat sources ranging from insiders deliberately modifying critical
information to power outages, broken water pipes, and other dangers beyond the organization's
control.
Using a generic threat profile is important because it allows an organization to identify threats to
its critical assets based on known potential sources of danger. The profile also uses a structured
way of representing potential threats and yields a comprehensive summary of threats to critical
assets, thus providing a complete and simple way to record and communicate threat
information. A detailed look at the generic threat profile is presented in Chapter 6.
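The structured representation that a generic threat profile provides can be sketched as follows. In OCTAVE a threat is characterized by properties such as asset, access, actor, motive, and outcome; the specific entries below are invented examples, not the actual catalog.

```python
# A minimal sketch of a generic threat profile as structured data.
# Each tuple is one branch of the profile: (access, actor, motive, outcome).
# Branches with no meaningful access or motive (e.g., power outages)
# carry None for those properties.
generic_threat_profile = [
    ("network", "insider", "deliberate", "modification"),
    ("network", "outsider", "deliberate", "disclosure"),
    ("physical", "insider", "accidental", "destruction"),
    (None, "system problem", None, "interruption"),  # e.g. hardware defect
    (None, "power outage", None, "interruption"),    # beyond the organization's control
]

def instantiate(profile, asset):
    """Apply the generic profile to one critical asset, yielding a
    structured summary of threats to that asset."""
    return [(asset,) + branch for branch in profile]

threats = instantiate(generic_threat_profile, "customer database")
print(len(threats))  # one threat per branch of the generic profile
```

Because every critical asset is checked against the same set of branches, the resulting threat summaries are comprehensive and directly comparable across assets.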
Catalog of Vulnerabilities
The evaluation process must assess the current technological weaknesses (technology
vulnerabilities) in the key components of the computing infrastructure by considering a range of
technology vulnerabilities based on platform and application. Vulnerability evaluation tools
(software, checklists, scripts) examine infrastructure components for technology vulnerabilities
contained in the catalog. Two examples of catalogs of vulnerabilities are the CERT®[1]
Knowledgebase and the Common Vulnerabilities and Exposures (CVE) list.
[1]
CERT is registered in the U.S. Patent and Trademark Office.
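The way a vulnerability evaluation tool uses such a catalog can be sketched as a simple matching pass over the infrastructure. The catalog entries and component inventory below are invented for illustration; real tools consult catalogs such as the CVE list and use far richer matching rules.

```python
# Hypothetical vulnerability catalog: id -> (affected product, affected version)
catalog = {
    "VULN-0001": ("webserverd", "2.4"),
    "VULN-0002": ("dbengine", "9.1"),
}

# Hypothetical component inventory: (component name, product, version)
components = [
    ("public web server", "webserverd", "2.4"),
    ("customer database", "dbengine", "9.6"),
]

def scan(components, catalog):
    """Report every component whose installed product and version
    match an entry in the vulnerability catalog."""
    findings = []
    for name, product, version in components:
        for vuln_id, (p, v) in catalog.items():
            if (product, version) == (p, v):
                findings.append((name, vuln_id))
    return findings

print(scan(components, catalog))  # [('public web server', 'VULN-0001')]
```

Keeping the catalog separate from the scanning logic is what lets the measures adapt: as new vulnerabilities become known, only the catalog changes.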
Defined Evaluation Activities
The procedures for performing each evaluation activity and the artifacts (worksheets, catalogs,
etc.) used during each activity must be defined and documented. These include
Specifications for catalogs of information that define accepted security practices, known
sources of threat, and known technological weaknesses
Implementing defined evaluation activities helps to institutionalize the evaluation process in the
organization, ensuring some level of consistency in the application of the process [GAO 99]. It
also provides a basis upon which the activities can be tailored to fit the needs of a particular
business line or group.
Documented Evaluation Results
The organization must document the results of the evaluation, either in paper or electronic form.
Organizations typically document and archive risks to the organization's critical assets as well as
security strategies and plans to improve the organization's security posture.
Evaluation Scope
The extent of each evaluation must be defined. The evaluation process must include guidelines
to help the organization decide which operational areas (business units) to include in the
evaluation. Determining the scope of an evaluation is important for ensuring that its results are
useful to the organization. If the scope of an evaluation becomes too broad, it is often difficult to
analyze all of the information that is gathered. If it is too small, it will not yield an accurate
picture. Setting a manageable scope for the evaluation reduces the size of the evaluation,
making it easier to schedule and perform the activities. In addition, the areas of an organization
can be prioritized for the evaluation. Essentially, the highest-risk areas can be examined first or
more frequently.
Next Steps
The evaluation must include an activity whereby organizational personnel identify the next steps
required to implement security strategies and plans. This activity often requires active
sponsorship and participation from the organization's senior managers. Next steps typically
include the following information:
What actions the organization will take to follow up on the results of the evaluation
The task of identifying the next steps that people in the organization must take to implement the
protection strategy and the mitigation plans is essential for security improvement. The people in
the organization need to build upon the results of the evaluation. Getting senior management
sponsorship is the first critical step toward making this happen.
Focus on Risk
Focused Activities
The evaluation process must include guidelines for focusing evaluation activities, for example:
Analysis activities that use asset information to focus threat and risk identification
activities
Analysis activities that use asset and threat information to set the scope of the
technology vulnerability evaluation
Planning activities that establish risk priorities using risk measures (impact, probability)
Focusing each activity on the most critical information security issues is important to ensure that
the organization applies its resources efficiently. If you gather too much information, it may be
difficult to analyze. Focusing on the most important information reduces the size of the
evaluation, making it easier to perform the activities while still collecting the most meaningful
data and producing the most significant results.
Organizational and Technological Issues
The evaluation process must examine both organizational and technological issues. Information
security risk evaluations typically examine both practice-related and vulnerability-related
information.
Because security has both organizational and technological components, an evaluation must deal
with both organizational and technological issues. When creating the organization's protection
strategy and risk mitigation plans, the analysis team considers both types of issues in relation to
the mission and business objectives of the organization. By doing so, the team is able to address
security by creating a global picture of the information security risks the organization must
confront.
Business and Information Technology Participation
The evaluation process must include participants from both the business units and the
information technology department, allowing for the establishment of an interdisciplinary
analysis team (see the analysis team attribute). Participants from key areas (business units) of
the organization also need to contribute their perspectives on security-related issues during
activities designed to elicit knowledge. Note that participants must include representatives from
multiple organizational levels (senior management, middle management, and staff).
Incorporating multiple perspectives is essential to ensure that a broad range of risk factors is
considered. Staff members who work in the business lines of an organization understand the
relative importance of business operations and the systems and information that support them.
In general, they are in the best position to understand the business impact of disruption or
abuse to business systems and operations and the impact of potential mitigation actions.
Information technology personnel and information security experts, by contrast, best understand
the design of existing systems and the impact of technology-related vulnerabilities, and they are
in the best position to evaluate how mitigation actions will affect system performance.
Senior Management Participation
Senior managers in the organization must have defined roles during the evaluation process.
Typically, an organization's senior managers demonstrate active sponsorship of the evaluation,
participate in workshops to contribute their understanding of security-related issues and their
effect on business processes, review and approve security strategies and plans, and define the
steps required to implement security strategies and plans.
Senior management participation is the single most important success factor for information
security risk evaluations, as it demonstrates strong sponsorship of the evaluation. This level of
sponsorship helps to ensure that staff members are available and willing to participate in the
evaluation, take the evaluation seriously, and are prepared to implement the findings after the
evaluation.
The senior managers' active participation in an information security risk evaluation is also
important to the success of the initiative. Senior managers can help to define the scope of the
assessment and to identify participants. If senior managers support the evaluation, people in the
organization tend to participate actively. If senior managers do not support the evaluation, staff
support for the evaluation will dissipate quickly.
Collaborative Approach
Each activity of the evaluation process must include interaction and collaboration among the
people who are participating in that activity. Collaboration can be achieved through the use of
workshops or other interactive methods.
As you can see, all of the attributes just described focus on the evaluation process and how that
process is implemented in an organization. Next, we build on this view by exploring the results
of information security risk evaluations.
Outputs are the results, or outcomes, that an analysis team must achieve during the evaluation;
they are the tangible products of the evaluation. An organizationwide information security risk
evaluation produces three basic types of outputs: (1) organizational data, (2) technological data,
and (3) risk analysis and mitigation data.
In designing OCTAVE, we decided to organize the evaluation activities according to these
data classifications, producing a three-phase information security risk evaluation approach. The
three phases illustrate the interdisciplinary nature of information security by emphasizing its
organizational and technological aspects. The OCTAVE phases and the required outputs are
illustrated in Figure 2-2.
Sections 2.4.1–2.4.3 describe each phase of OCTAVE and highlight the outputs of each phase.
2.4.1 Phase 1: Build Asset-Based Threat Profiles
Organizational View
To assemble the different viewpoints into one organizational view, the analysis team consolidates information from the
knowledge elicitation workshops, selects the assets that are most important to the organization
(critical assets), describes security requirements for the critical assets, and identifies threats to
the critical assets.
The knowledge elicitation workshops are an important way of identifying what is really
happening in the organization with respect to information security. Consolidating and analyzing
the data are important tasks because they provide different perspectives on the organizational
view of information security. These perspectives are used to focus subsequent evaluation
activities and create the basis for the organization's protection strategy and risk mitigation plans
created during phase 3.
Outputs
Table 2-3 highlights each required output of phase 1, provides a brief description of that output,
and indicates where you can find more information about it in this book.
Output: Description
Critical assets: Critical assets are the information-related assets that are believed to be most important in meeting the missions of the organization. Section 5.2 presents asset identification, and Section 6.3 addresses critical asset selection.
Security requirements for critical assets: Security requirements for a critical asset indicate the important qualities of that asset with respect to its confidentiality, integrity, and availability. Section 5.4 defines security requirements, and Section 6.4 shows how to define these requirements for critical assets.
Threats to critical assets: A threat to a critical asset explicitly indicates how someone or some event can violate that asset's security requirements. Section 5.3 defines threats, and Section 6.5 discusses how to identify threats to critical assets.
Current security practices: Security practices are those actions presently used by the organization to initiate, implement, and maintain its internal security. Section 5.5 looks at security practices.
Current organizational vulnerabilities: Organizational vulnerabilities are indications of missing or inadequate security practices. Section 5.5 examines organizational vulnerabilities.
2.4.2 Phase 2: Identify Infrastructure Vulnerabilities
During this phase, the analysis team does the following:
Scopes the examination of the computing infrastructure using the critical assets and
threats to those assets
Identifies key information technology systems and components that are related to each
critical asset
Analyzes the resulting data to identify weaknesses (technology vulnerabilities) that can
lead to unauthorized action against critical assets
Technological View
Phase 2 captures the technological view of information security, highlighting the technology
vulnerabilities that are present in and apply to network services, architecture, operating
systems, and applications. Phase 2 is important because the assets, security requirements, and
threats of phase 1 are examined in relation to the computing infrastructure. In addition, the
outputs of phase 2 document the present state of the computing infrastructure with respect to
technological weaknesses that could be exploited by threat actors.
Outputs
Table 2-4 highlights each required output of phase 2, provides a brief description of that output,
and indicates where you can find more information about it in this book.
Output: Description
Key components: Key components are devices that are important in processing, storing, or transmitting critical assets. Sections 7.2 and 7.3 address key components.
Current technology vulnerabilities: Technology vulnerabilities are weaknesses in systems that can directly lead to unauthorized action. Sections 8.2 and 8.3 define technology vulnerabilities.
2.4.3 Phase 3: Develop Security Strategy and Plans
Phase 3 includes risk analysis and risk mitigation activities. During risk analysis, the analysis
team identifies and analyzes the risks to the organization's critical assets. Specifically, the team
does three things:
1. It gathers data used to characterize and measure the risks to critical assets.
2. It defines the risk evaluation criteria for measuring the impact of threats to the
organization.
3. It evaluates the impact of the risks to critical assets against those criteria.
During risk mitigation, the analysis team creates a protection strategy and mitigation plans
based on an analysis of the information gathered. Specifically, the team does two things:
1. It develops a protection strategy for the organization and risk mitigation plans for the
critical assets.
2. It identifies next steps that will be taken to implement the protection strategy and the
mitigation plans.
Risk Analysis
Phase 3 is important, because it is during this phase that the analysis team makes sense of its
information security issues and develops a strategy and plans for improvement. The risk analysis
activities of phase 3 are important for two reasons:
They put information security threats into the context of what the organization is trying
to achieve, resulting in explicit statements of risk to the organization's critical assets.
They establish the criteria for measuring risks and a basis for setting priorities when
developing risk mitigation plans.
The risk mitigation activities of phase 3 are important for several reasons:
They create a risk mitigation plan for each critical asset designed to protect that asset.
They require the organization's senior managers to review the protection strategy and
risk mitigation plans from the organizational perspective, developing senior management
sponsorship of the evaluation results.
They define what the organization will do to implement the results of the evaluation,
enabling ongoing security improvement.
Outputs
Table 2-5 highlights each required output of phase 3, provides a brief description of that output,
and indicates where you can find more information about it in this book.
Output Description
Risks to critical assets: A risk to a critical asset explicitly indicates how a threat to a critical asset can result in a negative impact or consequence to the organization. Section 9.2 discusses risk identification.
Risk measures: Risk measures are qualitative assessments of the ultimate effect on an organization's mission and business objectives (impact value) and the likelihood of occurrence (probability). Sections 9.3, 9.4, and 9.5 address how to establish risk measures.
Protection strategy: An organization's protection strategy defines its direction with respect to information security improvement efforts. Section 10.4 presents protection strategies.
Risk mitigation plans: Risk mitigation plans are an organization's plans for reducing the risks to its critical assets. Section 10.5 covers risk mitigation plans.
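The phase 3 outputs above lend themselves to a simple record structure: each risk ties a critical asset to a threat and a consequence, and carries the two qualitative risk measures (impact value and probability). The following Python sketch is only an illustration of this idea, not part of OCTAVE itself; the three-point scale and the tie-breaking rule are assumptions an analysis team would set for itself when defining its own evaluation criteria.

```python
from dataclasses import dataclass

# Illustrative three-point scale for both impact value and probability;
# OCTAVE leaves the actual evaluation criteria to the analysis team.
SCALE = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Risk:
    critical_asset: str
    threat: str            # how the critical asset is threatened
    consequence: str       # resulting negative impact on the organization
    impact: str            # qualitative impact value
    probability: str       # qualitative likelihood of occurrence

def prioritize(risks):
    """Order risks for mitigation planning: highest impact first,
    breaking ties on probability."""
    return sorted(
        risks,
        key=lambda r: (SCALE[r.impact], SCALE[r.probability]),
        reverse=True,
    )

# Hypothetical risks for a medical facility like the MedSite scenario.
risks = [
    Risk("patient records", "insider modification", "loss of trust", "high", "low"),
    Risk("billing system", "outage", "delayed revenue", "medium", "high"),
    Risk("patient records", "external disclosure", "regulatory penalty", "high", "medium"),
]
print(prioritize(risks)[0].threat)  # external disclosure
```

A structure like this makes the link between risk measures and mitigation priorities explicit: once impact and probability are recorded per risk, ranking the critical few becomes a mechanical step.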
As indicated in Chapter 1, many methods are consistent with the OCTAVE approach. Part II
focuses on one implementation of these criteria, the OCTAVE Method.
The OCTAVE Method uses a three-phase approach to examining organizational and technology
issues, thus assembling a comprehensive picture of the organization's information security
needs. The method comprises a progressive series of workshops, each of which requires
interaction among its participants. The OCTAVE Method is broken into eight processes: four in
phase 1, two in phase 2, and two in phase 3. In addition, several preparation activities need to
be completed before the actual evaluation. The three phases and preparation for the OCTAVE
Method are depicted in Figure 3-1.
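The phase-and-process structure just described can be captured as a simple outline. The Python sketch below is purely illustrative; the names for processes 1, 2, 6, and 8 are drawn from elsewhere in the book and should be treated as labels, not definitions.

```python
# Outline of the OCTAVE Method: eight processes grouped into three
# phases, preceded by preparation activities.
PHASES = {
    "Preparation": [],
    "Phase 1: Organizational View": [
        "Process 1: Identify Senior Management Knowledge",
        "Process 2: Identify Operational Area Management Knowledge",
        "Process 3: Identify Staff Knowledge",
        "Process 4: Create Threat Profiles",
    ],
    "Phase 2: Technological View": [
        "Process 5: Identify Key Components",
        "Process 6: Evaluate Selected Components",
    ],
    "Phase 3: Strategy and Plan Development": [
        "Process 7: Conduct Risk Analysis",
        "Process 8: Develop Protection Strategy",
    ],
}

# Four processes in phase 1, two in phase 2, two in phase 3.
print(sum(len(p) for p in PHASES.values()))  # 8
```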
The OCTAVE Method involves two types of workshops: (1) facilitated discussions with various
members of the organization and (2) workshops in which the analysis team conducts a series of
activities on its own. All workshops have a leader and a scribe. The leader is responsible for
guiding all workshop activities and ensuring that all of these (including preparatory and follow-up
activities) are completed. The leader is also responsible for ensuring that all participants
understand their roles and that any new or supplementary analysis team members are ready to
participate actively in the workshop. All workshop leaders should also make sure that they select
a decision-making approach (e.g., majority vote, consensus) to be used during the workshops.
Scribes are responsible for recording information generated during the workshops, either
electronically or on paper. Note that you might not have the same leader or scribe for all
workshops. For example, a leader with more facilitation or interviewing skills may be suitable for
the phase 1 workshops, whereas a leader with strong planning and analysis skills might be
preferable for the phase 3 workshops.
The next four sections provide an overview of preparation activities and the processes of the
OCTAVE Method.
3.1.1 Preparation
The initial focus of the OCTAVE Method is preparing for the evaluation. We have found the
following to be key success factors:
Getting senior management sponsorship. This is the top success factor for
information security risk evaluations. If senior managers do not support the
process, staff support for the evaluation will dissipate quickly.
Selecting the analysis team. The analysis team is responsible for managing the
process and analyzing information. The members of the team need to have
sufficient skills and training to lead the evaluation and to know when to augment
their knowledge and skills by including additional people for one or more activities.
Setting the appropriate scope of the OCTAVE Method. The evaluation should
include important operational areas, but the scope cannot get too big. If it is too
broad, it will be difficult for the analysis team to analyze all of the information. If
the scope of the evaluation is too small, the results may not be as meaningful as
they should be.
The goal of preparation is to make sure that the evaluation is scoped properly, that the
organization's senior managers support it, and that everyone participating in the process
understands his or her role. The following preparation activities provide the right foundation for
a successful evaluation:
Select participants.
Coordinate logistics.
Once the preparation for the OCTAVE Method has been completed, the organization is ready to
start the evaluation. Chapter 4 presents a detailed discussion of preparation activities, and the
next section looks at phase 1 of the method.
In phase 1 you begin to build the organizational view of OCTAVE by focusing on the people in
the organization. Figure 3-2 illustrates the four processes in phase 1.
Process 3: Identify Staff Knowledge. The participants in this process are the
organization's staff members. Information technology staff members normally
participate in a separate workshop from the one attended by general staff
members.
Four activities are undertaken to elicit knowledge from workshop participants during processes 1
to 3 (the basic activities are the same for each of the processes):
The participants in this process are the analysis team members. During process 4, the team
identifies the assets that are most critical to the organization and describes how those assets are
threatened. Process 4 comprises the following activities:
See Chapter 6 for an in-depth discussion of process 4. The next section looks at the phase 2
processes.
Phase 2 is also called the "technological view" of the OCTAVE Method, because this is where you
turn your attention to your organization's computing infrastructure. The second phase of the
evaluation includes two processes, depicted in Figure 3-3.
The participants in this process are the analysis team and selected members of the information
technology (IT) staff. The ultimate objective of process 5 is to select infrastructure components
to be examined for technological weaknesses during process 6. Process 5 consists of two
activities:
The participants in this process are the analysis team and selected members of the IT staff. The
goal of process 6 is to identify technological weaknesses in the infrastructure components that
were identified during process 5. The technological weaknesses provide an indication of how
vulnerable the organization's computing infrastructure is. Process 6 comprises two activities:
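To get an indication of how vulnerable the infrastructure is, the analysis team typically needs to condense raw tool output into a per-component summary. The sketch below is a hypothetical illustration of that consolidation step; the component names echo the MedSite scenario, and the high/medium/low severities are an assumed categorization, not output from any particular tool.

```python
from collections import Counter

# Hypothetical findings for the components selected in process 5:
# (component, severity) pairs as they might be extracted from
# vulnerability evaluation tool reports.
findings = [
    ("PIDS server", "high"),
    ("PIDS server", "medium"),
    ("desktop PC", "low"),
    ("PIDS server", "high"),
]

def summarize(findings):
    """Count vulnerabilities per component and severity, giving the
    analysis team a quick view of each component's exposure."""
    summary = {}
    for component, severity in findings:
        summary.setdefault(component, Counter())[severity] += 1
    return summary

print(summarize(findings)["PIDS server"]["high"])  # 2
```

A summary of this kind feeds directly into phase 3, where the technology vulnerabilities are interpreted as risks to the critical assets they support.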
Chapter 8 provides more details about process 6. The next section completes our overview of
the OCTAVE Method by looking at phase 3.
Phase 3 is designed to make sense of the information that you have gathered thus far in the
evaluation. It is during this phase that you develop security strategies and plans designed to
address your organization's unique risks and issues. The two processes of phase 3 are shown in
Figure 3-4.
The participants in process 7 are the analysis team members, and the goal of the process is to
identify and analyze risks to the organization's critical assets. Process 7 includes the following
three activities:
Process 8 includes two workshops. The participants in the first workshop for process 8 are the
analysis team members and selected members of the organization (if the analysis team decides
to supplement its skills and experience for protection strategy development). The goal of process
8 is to develop a protection strategy for the organization, mitigation plans for the risks to the
critical assets, and an action list of near-term actions. The following are the activities of the first
workshop of process 8:
In the second workshop of process 8, the analysis team presents the proposed protection
strategy, mitigation plans, and action list to senior managers in the organization. The senior
managers review and revise the strategy and plans as necessary and then decide how the
organization will build on the results of the evaluation. The following are the activities of the
second workshop of process 8:
3. Review and refine protection strategy, mitigation plans, and action list.
After the organization has developed a protection strategy and risk mitigation plans, it is ready
to implement them. At this point, the organization has completed the OCTAVE Method. We
examine the first workshop of process 8 in Chapter 10 and the second workshop in Chapter 11.
From the above description, the OCTAVE Method appears to be linear in nature. The method has
three phases and eight processes, all numbered sequentially. It would be easy for you to assume
that this is a lockstep process, that is, that when you complete one process, you are finished
with it and can move to the next. However, since information security addresses such complex
organizational and technological issues, it does not lend itself to a linear process.
As you will find, the OCTAVE Method is nonlinear and iterative in nature. For example, you might
identify issues in later processes that lead you to review (and possibly change) decisions that
you made during earlier processes. There are actually many potential feedback loops in the
method. As we present the detailed overview of the OCTAVE Method in Chapters 4 to 11, we do
highlight some of the more common instances in which you should review your decisions and
test your assumptions in light of new information that you have gathered. However, because of
the overall complexity of security issues, there are too many potential feedback loops in the
process to identify them all. Be aware of the need to revisit decisions and assumptions and do so
when necessary. One guideline that we use often in this part of the book is "use your best
judgment." In this case, you need to do just that—be aware of the nonlinear, iterative nature of
the OCTAVE Method and go where the data lead you.
This concludes our brief introduction to the OCTAVE Method. The next section builds on this
introduction by examining how the method is consistent with the attributes and outputs
presented in Chapter 2.
The OCTAVE Method is consistent with the principles, attributes, and outputs of the OCTAVE
approach described in Chapter 2. This section illustrates how the attributes and outputs map to
the OCTAVE Method. Since Chapter 2 provided a mapping between principles and attributes, we
do not explicitly map the principles to the OCTAVE Method here.
Recall from Chapter 2 that attributes are the distinctive qualities, or characteristics, of the
evaluation. They define the basic elements of an information security risk evaluation from both
the process and organizational perspectives. Table 3-1 summarizes how each attribute is
reflected in the OCTAVE Method.
The next section focuses on how the outputs map to the OCTAVE Method.
As we explore the details of the OCTAVE Method in Chapters 4 to 11, we illustrate major
concepts using examples from a running scenario. The organization in the scenario is a fictitious,
medium-sized, medical facility called MedSite. MedSite is a hospital with several clinics and labs,
some of which are at remote locations. The hospital includes the following functional areas:
The MedSite administrator is the chief administrator for the hospital and has a small staff
responsible for overseeing MedSite operations. In addition, each major functional area of the
organization (administrative, medical, labs, and remote clinics) reports directly to the chief
administrator. MedSite's senior management team includes the MedSite administrator and the
individuals who lead the functional areas of the organization. Each functional area of MedSite
contains one or more operational areas. The head of each operational area is considered to be a
middle manager in the organization. Figure 3-5 shows the organizational chart for MedSite.
MedSite's main computer system is the Patient Information Data System (PIDS). PIDS includes
the main PIDS server, the network, desktop PCs, and a variety of medical applications. The
system also links and integrates a set of smaller, older databases related to patient care, lab
results, and billing.
Patient data can be entered into PIDS or one of the other databases at any time from any
workstation. Physicians, administrative clerks, lab technicians, and nurses have authorization to
enter data into PIDS as well as other systems. Personal computers, or workstations, are located
in all offices, treatment rooms (including emergency rooms), nursing stations, and labs. In
addition, physicians can also access PIDS remotely using their home personal computers. In
fact, there is talk around the hospital that medical personnel will soon be able to access PIDS
using personal digital assistants (PDAs).
An independent contractor, ABC Systems, provides support for most of the systems at MedSite
as well as for the network. MedSite's information technology personnel and another contractor
each maintain some of the legacy systems still being used by MedSite's staff. The information
technology staff members from MedSite provide on-site help desk support and basic system
maintenance. ABC Systems provided MedSite's information technology personnel with limited
systems and network training about a year ago.
MedSite's senior managers decided they wanted a comprehensive evaluation of information
security within their facility. Several new regulations are expected to be mandated by the
government in the upcoming year, requiring MedSite to document the results of an information
security risk evaluation. The regulations will also require MedSite to implement a practice-based
standard of due care, meaning they would have to institutionalize recognized good security
practices. After some discussion and consultation with other medical facility managers, they
decided to use the OCTAVE Method. Funding for internally staffed activities was easier to find
than more money for contractors, and senior managers hoped that their staff would learn better
security practices while doing this evaluation.
During each activity in Part II, we will chart MedSite's progress as it conducts the OCTAVE
Method. Chapter 4 starts exploring the OCTAVE Method in detail and examines how to prepare
for the evaluation.
Since the OCTAVE Method looks at a cross-section of an organization, it involves many people
and requires a lot of coordination. The preparation activities are important, because they set the
stage for the evaluation. During preparation you must overcome any organizational inertia and
build momentum for conducting the evaluation.
Chapter 3 identified the following success factors for information security risk evaluations:
Getting senior management sponsorship for the evaluation
It is during preparation that you directly address these key success factors and set the direction
for your organization's evaluation.
While there are many ways in which organizations can prepare to conduct the OCTAVE Method,
this section focuses on a likely scenario for many organizations, making the following two
assumptions:
1. There is a champion, someone within the organization who has an interest in conducting
the OCTAVE Method.
2. The analysis team does not exist prior to gaining senior management approval.
The champion should help the senior managers understand the benefits of performing the
OCTAVE Method and thereby gain their sponsorship for conducting the evaluation. After the
organization's senior managers decide that the organization should conduct the OCTAVE Method,
they work with the champion to select members of the analysis team. The analysis team then
becomes the focal point for completing all evaluation activities.
Table 4-1 illustrates the preparation activities, while the rest of this chapter describes the basic
activities that must be completed prior to conducting the evaluation in the context of the above
scenario.
Activity Description
Obtain senior management sponsorship of OCTAVE: The champion works with the organization's senior managers to gain their sponsorship of the evaluation. The champion is responsible for making the managers aware of the evaluation process, the expected outcomes, and the time and personnel commitments that must be made.
Select analysis team members: The champion assembles the analysis team after obtaining senior management sponsorship of the evaluation. Alternatively, senior managers might designate someone in the organization to work with the champion or to lead the selection of the analysis team. Once analysis team members have been selected, they need to become familiar with the OCTAVE Method.
Commitment to allocate the necessary resources
Agreement to review the results and make decisions about next steps
The last item is particularly important, because an evaluation is worthless if little or nothing is
done with its results and recommendations. An evaluation that goes nowhere is, in fact, worse
than no evaluation at all, because participants and managers will be less inclined to do another
one in the future.
Getting Sponsorship
Now that we've established what we mean by sponsorship, we need to think about how to get it.
Although sponsorship from your organization's senior managers is a vital requirement for
successful conduct of the OCTAVE Method, there is no simple formula for obtaining it. In some
cases the senior managers in an organization have taken the initiative in getting the OCTAVE
Method implemented in their organizations, thus guaranteeing sponsorship, but this is not
typical.
Often, one person in the organization learns about the OCTAVE Method and decides to conduct
an evaluation in his or her organization. We refer to that person as the champion. In order to
develop senior management sponsorship of the OCTAVE Method, the champion needs to make
appropriate senior managers aware of the evaluation process, the expected outcomes, and the
expected time and personnel commitments. So an obvious question is, Who are the appropriate
senior managers? In general, they are any individuals high enough up in the company to commit
the organization and its resources to this effort. These senior managers are often chief executive
officers, directors, or members of the organization's governing board.
We have seen cases in which despite strong sponsorship from the chief information officer in an
organization, the organization nevertheless has trouble successfully conducting the OCTAVE
Method. In such cases, broad support from the organization's business units was lacking
because their personnel perceived the evaluation as only an information technology issue. For
the OCTAVE Method to be effectively deployed in an organization, it also needs the support of a
senior manager outside the information technology area.
The OCTAVE Method requires broad sponsorship because it requires the participation of people
from both the business units and the information technology department. Staff members who
work in the business lines of an organization understand the relative importance of business
operations and the systems and information that support these operations. In general, they are
in the best position to understand the business impact of disruption or abuse to business
systems and operations and of potential mitigation actions. Information technology staff
members, including information security experts, understand the design of existing systems and
the impact of technology-related vulnerabilities. They are also in the best position to evaluate
the trade-offs of mitigation actions when evaluating their effect on system performance. Senior
managers need to be made aware that information security is not solely an information
technology issue. In addition, the managers who sponsor an initiative such as the OCTAVE
Method need to have the authority to commit the time of staff members from the organization's
business units as well as from the information technology department.
Regulations are becoming more common in many industry segments these days. The Health
Insurance Portability and Accountability Act (HIPAA) [HIPAA 98] establishes a standard of due
care for information security for health care organizations, while Gramm-Leach-Bliley [Gramm
00] legislation does the same for financial organizations. Most information security standards of
due care require an organization to conduct an information security risk evaluation and to
manage its risks. If your organization must perform an information security risk evaluation
because of regulations, bring this requirement to the attention of your organization's managers.
We have seen the senior managers of organizations sponsor information security risk
evaluations after learning about regulations and the requirements for compliance.
Anecdotal Information
Although there are no substantial "return on investment" data available at this time with respect
to security improvement activities, you can present anecdotal information to inform senior
managers about the benefits of using information security risk evaluations. You can emphasize
how some organizations use these evaluations as the central component of a security
improvement initiative. Those organizations often view a security improvement initiative as a
competitive advantage.
The champion in one organization decided to conduct a limited evaluation to build sponsorship
for a more extensive implementation of the OCTAVE Method. He was able to recruit an analysis
team and get middle-management sponsorship from one operational area. The team conducted
the OCTAVE Method on the operational area and then presented the results to senior managers.
This approach enabled senior managers to see what the results of the evaluation looked like and
was a good way to get them interested in expanding the effort.
In the end, of course, there is no single way to assure sponsorship for conducting an evaluation
like the OCTAVE Method, but the ideas presented here should start you thinking about how to
build sponsorship of the OCTAVE Method in your organization. The next section examines how to
select analysis team members.
The analysis team is the focal point for conducting the OCTAVE Method. This team is responsible
for the ultimate success of the evaluation. Because the analysis team plays such a pivotal role in
the evaluation, it is important to select a core team that has sufficient skills, experience, and
expertise to lead the evaluation.
Who Is on the Analysis Team?
The champion often assembles the analysis team after obtaining senior management
sponsorship of the evaluation. Senior managers might also designate someone in the
organization to work with the champion or to lead the selection of the analysis team.
The core analysis team consists of three to five people from the organization's business units
and information technology department; typically, the majority are from the business units of
the organization. Some organizations also select people from the operational areas participating
in the evaluation to be on the analysis team. In such cases this activity, Select Analysis Team
Members, should be performed only after the next activity, Select Operational Areas to
Participate in OCTAVE, is completed.
The analysis team can add supplemental team members (e.g., an operational area manager or a
vulnerability evaluation tool expert) to particular workshops as needed. These additional people
augment the skills of the core team by providing expertise needed during designated workshops.
One member of the core analysis team normally handles logistics for the evaluation. However,
an additional person can be assigned to the analysis team specifically to address logistics.
(Section 4.6 discusses coordinating logistics for the OCTAVE Method.)
In summary, the analysis team includes three to five people in the core group, represents
both business/mission and IT perspectives, and is knowledgeable about business and IT
processes.
The analysis team helps to set the scope of the evaluation, leads the selection of evaluation
participants, facilitates the initial set of knowledge elicitation workshops, and gathers and
analyzes information. The roles and responsibilities of the analysis team include
Working with managers to set the scope of the evaluation, select participants, and
schedule OCTAVE activities
Gathering, analyzing, and maintaining evaluation data during the OCTAVE Method
Although the OCTAVE Method is a complex process, analysis team members do not require
extensive or unique skills. The OCTAVE Method is not a typical vulnerability evaluation that
focuses solely on technological issues. Because it addresses both business and technological
issues, the OCTAVE Method is similar to other business processes or management evaluations.
Thus, it is helpful if someone on the analysis team is familiar with or has done assessments or
evaluations. At least one member of the analysis team must have some familiarity with
information technology and information security issues. Information technology representatives
who participate in the evaluation should bring broad perspectives and have pragmatic
viewpoints. They don't have to understand all aspects of security, but they need to be aware of
their technical limits and identify others to include in the evaluation when necessary.
The specific skills needed for each OCTAVE process are detailed in the beginning of each of the
remaining chapters in Part II (Chapters 5 to 11). By looking at the skills that we suggest for your
team, you can determine whether it is necessary to supplement the skills of the core analysis
team by including an additional person for a selected workshop. In general, the core members of
the analysis team should have the following qualifications:
Facilitation skills
Ability to present to and work with senior managers, operational area managers, and
staff
In addition, at different times the core team will need the following skills and knowledge or
should be able to acquire them by adding supplemental team members:
Once analysis team members have been selected, they need to become familiar with the
OCTAVE Method. Team members can either participate in formal training or become familiar with
the process by working on their own, for example, through reading and understanding the
material in this book or the OCTAVE Method Implementation Guide [Alberts 01a].
If you, the analysis team, decide to get started without training, there are some things you can
do to facilitate the learning process. First, all your team members should spend three to five
days reading about the OCTAVE Method and discussing it among yourselves. You would then
perform a very limited pilot by selecting one asset that you consider critical to the organization.
Analyze that asset using the appropriate pieces of the method to perform the following activities
for that asset:
You might also complete the surveys from process 3 and determine what kind of
organizationwide protection strategy you would recommend based on the results. Running
vulnerability tools is not likely to be something you can do without a formally sanctioned effort.
If the organization does routinely run these tools, perhaps someone from the information
technology department can help you incorporate the vulnerability information into the pilot.
Working through a limited pilot of the OCTAVE Method can go a long way toward understanding
each evaluation process and how to work with information generated throughout the evaluation.
As you complete your pilot, you should talk about what was easy and what was difficult. You
should also review the guidance for the processes and begin to plan for an expanded evaluation.
Use your results from the pilot to help persuade senior managers to sponsor the OCTAVE
Method. Finally, if you choose to proceed without formal training, make sure your managers
understand that you are learning as you go and that the evaluation may take longer than
planned.
Once the analysis team has been selected and understands the evaluation process, it can set the
scope of the evaluation. This activity is addressed in the next section.
One of the key OCTAVE principles is focus on the critical few. This principle implies that you can
focus the evaluation on selected areas of the organization rather than performing an exhaustive
search of the entire organization. Setting a manageable scope for the evaluation reduces its size,
making it easier to schedule and perform the activities. It also allows you to prioritize the areas
of an organization for the evaluation, ensuring that the highest-risk or most important areas can
be examined first or more frequently.
The analysis team works with the organization's senior managers to select which operational
areas to examine during the OCTAVE Method. You can use the following guidelines when
choosing operational areas for the evaluation:
At least four operational areas are generally recommended, one of which must be the
information technology or information management department.
Select operational areas that reflect the primary operational or business functions as well
as the important support functions of the organization.
Consider areas that are in remote locations or are different in terms of the type of work
or support that they need.
Consider the time commitment that personnel will be required to contribute. Determine
whether there will be significant conflicts with ongoing operations.
Remember that these are only guidelines. The senior managers and analysis team members
need to use their best judgment as to which areas to select to participate in the evaluation.
In addition to taking into account the guidelines suggested above, answer the following
questions as you select operational areas:
What areas of your organization are critical to achieving the mission of your
organization?
What additional areas are critical? Have you considered your entire organization,
including support functions?
Which areas would you (senior managers) like to participate in the risk assessment?
At this point your organization has selected the analysis team and operational areas that will
participate in the evaluation. In the next activity you select participants from each operational
area to participate in processes 1 to 3 of the OCTAVE Method.
Since the OCTAVE Method is an organizationwide evaluation, it requires people from throughout
the organization to participate in it. The core analysis team members lead all activities during
the evaluation; other members of the organization are required to participate in selected
activities. The analysis team selects people from multiple organizational levels to participate in
processes 1 to 3, and the team can augment its skills, experience, and expertise for specific
activities in processes 4 to 8 by including additional participants if necessary.
Evaluation Participants
Table 4-2 provides a summary of the participants for each process required by the OCTAVE
Method, as well as estimates for their time. In most cases the people participating in processes
1, 2, and 3 can provide supplementary support for the analysis team in other processes if
necessary. For example, one of the information technology staff members from process 3 could
also be a supplemental analysis team member for processes 5, 6, and 8. Note that all of the
times in Table 4-2 are estimates. The actual time required to complete each workshop depends
upon factors such as the abilities and experience of analysis team members, the extent to which
the analysis team members are familiar with the evaluation process, and the scope of the
evaluation.
Are familiar with the types of information-related assets used in your organization
Are familiar with the ways in which these information-related assets are used
Have the authority to select and authorize time for staff members
Operational area managers in your organization will contribute a half day of their time to attend
the process 2 workshop. In addition, you might want to supplement your team's skills during
processes 7 and 8A (the first workshop of process 8) by including an operational area manager.
General staff members and information technology staff members participate in the process 3
knowledge elicitation workshops. Operational area managers select three to four key staff
members from their areas to participate in process 3. There should be at least three workshops
involving staff: two for general staff and one for IT staff. Depending upon the number of
operational areas selected, you may need more than two workshops for the general staff.
You should limit the size of the staff workshops to five people. If you include more than five
people in a workshop, it will be difficult for all of them to participate actively, and some
participants may be reluctant to contribute. Larger groups are also difficult for a new analysis
team to manage. For the process 3 knowledge elicitation workshops, you should select three to
four staff members from each selected operational area who
Are familiar with the types of information-related assets used in their area
Are familiar with the ways in which the information-related assets are used
Additional members of the general staff may be needed to supplement the knowledge or skills of
the analysis team during processes 4, 7, and 8. During process 4, someone with analysis skills
might be included to help with threat identification. In processes 7 and 8A, additional help may
be needed to analyze risks, define evaluation criteria, or develop mitigation plans. You will
probably find it easier to identify specific people for targeted pieces of the evaluation once you
start preparing for those parts of the process. Refer to
Table 4-2 for ideas about whom to include.
Briefing All Participants
After you have identified the participants, you will need to help them understand the purpose of
the evaluation and define their roles for them before any of the workshops begin. We suggest
that you hold a briefing for the selected participants. Make sure you mention that any
information identified during the knowledge elicitation workshops will not be attributed to
specific individuals; this is a good place to emphasize the need for open communication of
sensitive issues. It is also a good idea for one or more senior managers to be present for the
briefing. These managers can then use this opportunity to reinforce their sponsorship of the
evaluation.
This concludes our overview of selecting participants and brings us to the last preparation
activity remaining before you can begin the evaluation: coordinating logistics for the evaluation.
This activity is deceptively difficult. The steps for coordinating logistics are
straightforward and easy to understand, but they tend to present some of the bigger obstacles
that you will face during the evaluation. Much of this activity involves arranging dates for
workshops and coordinating the schedules of participants. Anyone who has ever tried to set up a
meeting for five or six busy individuals knows how difficult this activity can be.
We provide a general schedule of activities in Figures 4-1 and 4-2. Figure 4-1 highlights the
preparation activities, while Figure 4-2 outlines the evaluation processes. The major assumption
underlying this schedule is that the analysis team understands the evaluation process and has
sufficient skills and expertise to conduct the evaluation. Teams who are attempting to conduct
the evaluation for the first time might find this schedule aggressive. In addition, you will likely
find that there is a time lag between preparation activities and the evaluation; you will probably
not start the evaluation the moment after you have set the schedule. The time lag is not shown
in Figures 4-1 and 4-2.
Also, note that the sample schedule makes several implicit assumptions, such as the following:
The analysis team spends only three days learning about the evaluation process, either
through some form of training or through a self-directed effort.
Only two general staff workshops are required in process 3.
A week of elapsed time will be needed to run the vulnerability tools on the computing
infrastructure (not a total of a week's effort, but rather a week with the tools run at
different times and shifts to avoid interrupting key operations).
The OCTAVE Method is conducted using a series of short workshops; the schedule for conducting
the workshops is quite flexible. The shortest possible time for completing an entire evaluation is
slightly less than two weeks, assuming a full-time, dedicated analysis team. Practical
constraints, such as problems scheduling participants for workshops, usually extend the calendar
time required to conduct the OCTAVE Method. You need to consider any organizational
constraints when scheduling evaluation activities. Also, remember that some workshops require
data consolidation activities before they are conducted; allow time to complete all preparation
activities.
One member of the analysis team should be the focal point for coordinating logistics for
conducting the OCTAVE Method in your organization. Be sure to consider the following when you
address evaluation logistics:
Optimal schedule for all workshops (be sure to inform all participants when and where
workshops will be held)
Once you have set the schedule, you are ready to start the evaluation. The last section of this
chapter presents what MedSite, the organization from our sample scenario, did to prepare for
the evaluation.
The senior managers at MedSite held a meeting to select analysis team members. The team that
they selected is shown in Table 4-3. With the exception of the logistics coordinator, all analysis
team members were assigned to this effort on a half-time basis. An information technology
member was identified as an alternate, because the work schedules of all information technology
staff members were subject to emergency interruptions.
Note that in process 3, general staff members and information technology staff members
participate in separate workshops to allow information technology staff to focus on more
technical issues. Thus, there are four types of knowledge elicitation workshops. Depending on
the size of your organization and how you scope the evaluation, you could end up with multiple
workshops for any organizational level. For more information about how to select participants for
processes 1 to 3, see Chapter 4.
Activities
A key activity of processes 1 to 3 is the fourth one, in which participants evaluate the
organization's security practices against a catalog of good practices. The results of this activity
provide a snapshot of organizational practice and a basis for improvement.
Identify assets and relative priorities— The participants identify the assets used by the
organization. They then select the assets most important to the organization and discuss their
rationale for selecting those assets.
Identify areas of concern— The participants identify scenarios that threaten their most
important assets based on typical sources and outcomes of threats. They also discuss the
potential impact of their scenarios on the organization.
Identify security requirements for most important assets— The participants identify the
security requirements for their most important assets. In addition, they examine trade-offs
among the requirements and select the most important requirement.
Capture knowledge of current security practices and organizational vulnerabilities—
Participants complete surveys in which they indicate which practices are currently followed by
the organization's personnel and which are not. After completing the survey, they discuss
specific issues from the survey in more detail.
Catalog of Practices
Security practices are actions that help initiate, implement, and maintain security within an
enterprise [BSI 95]. A specific practice is normally focused on a specific audience. The audiences
for practices include managers, users (general staff), and information technology staff. An
example of a good security practice is that all staff members should be aware of and understand
the organization's security-related policies.
We call a documented collection of known and accepted good security practices a catalog of
practices. Chapter 2 introduced the idea of using a catalog of practices during an evaluation. The
catalog of practices is used to evaluate the current security practices used by the organization.
During the final activity of each knowledge elicitation workshop, participants fill out surveys and
then discuss any issues arising from the survey that they feel are important. The surveys are
specific to an organizational level. Each survey is developed by selecting practices from the
catalog that should be used by staff members from that organizational level. For example, senior
managers are more likely to know if corporate strategy and plans include or address security
issues, whereas information technology personnel are more likely to be familiar with particular
aspects of managing technological vulnerabilities and configuring firewalls.
The catalog of practices is divided into two types of practices: strategic and operational.
Strategic practices focus on organizational issues at the policy level and provide good general
management practices. Strategic practices address business-related issues as well as issues that
require organizationwide plans and participation. Operational practices, on the other hand, focus
on technology-related issues dealing with how people use, interact with, and protect technology.
Since strategic practices are based on good management practice, they should be fairly stable
over time. Operational practices are more subject to changes as technology advances and new
or updated practices arise to deal with those changes.
The catalog of practices is a general catalog of security-related practices; it is not specific to any
domain, organization, or set of regulations. It can be modified to suit a particular domain's
standard of due care or set of regulations (e.g., the medical community and Health Insurance
Portability and Accountability Act (HIPAA) [HIPAA 98] security regulations, the financial
community and Gramm-Leach-Bliley regulations [Gramm 00]). It can also be extended to add
organization-specific standards, or it can be modified to reflect the terminology of a specific
domain. Figure 5-1 depicts the structure of a basic catalog of practices that was developed at
the time this book was written; the details of the specific practices can be found in Appendix C.
Section 5.5 shows how to use the catalog to evaluate your organization's current security
practices and organizational vulnerabilities, while the next section looks at the knowledge
elicitation workshop activities, starting with asset identification.
Asset identification is the first activity in each knowledge elicitation workshop. During this
activity, participants focus on the information-related assets they use in their jobs. From our
experience of watching people learn how to perform the evaluation, we have singled out asset
identification as a critical success factor for analysis teams. If you collect good information about
assets in this activity, you lay the foundation for a successful and meaningful evaluation.
We call OCTAVE an asset-driven evaluation because assets are used to focus all subsequent
activities. Assets guide the selection of devices and components to evaluate in phase 2, and the
risk mitigation plans that you develop in phase 3 focus on protecting your organization's most
critical assets. So it's important to get this activity right and gather as much meaningful
information about assets as you can. If you allow participants to identify assets that are too
broad or assets not relevant to information security, you will have trouble with later activities
and will have to revisit this important activity.
What Is an Asset?
Before we explore how to conduct step 1, we need to define what we mean by the term "asset."
An asset is something of value to the enterprise [Hutt 95]. In general, information technology
assets combine logical and physical assets and can be grouped into the following categories:[1]
[1] This list was created using information in the following references: [Fites 89],
Systems— information systems that process and store information (systems being a
combination of information, software, and hardware assets and any host, client, or
server being considered a system)
Information— documented (paper or electronic) information or intellectual assets used to
meet the mission of the organization
Software— software applications and services (e.g., operating systems, database
applications, networking software) that process, store, or transmit information
Hardware— information technology physical devices (e.g., workstations, servers)
People— the people in an organization who possess unique skills, knowledge, and
experience that are difficult to replace
Systems— Systems assets constitute the broadest of the asset categories, representing a
grouping of information, software, and hardware assets. Most people think of a system as a
whole; they don't break it down into its components. Because of this, systems assets are often
identified during an information security risk evaluation.
Information— Information assets are intangible in nature and are closely linked to systems
assets. Systems store, process, and transmit the critical information that drives organizations.
Thus, when an organization creates strategies and plans to protect its systems assets, it is also
protecting its critical information (as well as its software and hardware assets).
Are there any other assets that you are required to protect (e.g., by law or regulation)?
Have you considered your entire organization? What other assets do you use?
Remember that the point of this activity is for the participants to identify assets that they use to
help the organization meet its mission and business objectives. Some facilitators might be
tempted to start by explicitly identifying the mission of the organization and using that as a
common reference point for the participants. However, this could also lead to confusion among
the participants.
For example, think about a knowledge elicitation workshop with the information technology staff
at MedSite. From their perspective, the mission of MedSite is to deliver quality health care to
patients. If you start by identifying the organization's mission, you might bias the IT staff
members' views of assets. They would likely identify assets such as patient-identifiable
information and medical records. However, this is not the information with which they work on a
daily basis. They maintain the infrastructure that enables doctors and nurses to work with
patient-identifiable information and medical records. Thus, you would want them to identify
specific assets that are related to their work on the infrastructure.
The lead facilitator must play an active role in helping participants identify assets. For example,
when a participant identifies a system as an asset, what is the asset that is really being
identified? Is the information on the system the asset? Is an application or service on the system
the asset?
Assets that are identified should be unique, specific, meaningful, and related to information
technology in some way. A common pitfall is that participants will identify assets that have no
relation to information or information technology: for example, a business process, a piece of
physical equipment, or a facility that has no link to the organization's computing infrastructure
(e.g., the building that houses the organization).
A second pitfall is identifying assets that are too general in nature. For example, participants
often start off by saying, "Our systems and our people are our two most important assets." To
which systems and which people are they referring? How do those assets relate to information
security? The facilitator must always keep the group focused on information-related assets.
Let's examine what the senior managers at MedSite identified as their important assets. At
MedSite, the senior managers had a lively discussion about assets. Figure 5-2 shows the assets
that were recorded by the scribe. The asterisk (*) by an asset indicates that the managers
identified it as an important asset. (See step 2 of this activity for more details on important
assets.)
Patient information data system (PIDS)— PIDS is a database containing most of the important
patient information at MedSite. Role-based access (e.g., appointment scheduler, pharmacist,
lab technicians, providers) is required to access PIDS. ABC Systems, an IT contracting
organization, maintains PIDS for MedSite.
Paper— Complete patient records are recorded on paper. If a
From the assets that you have identified, which are the most important?
Document the important assets and the rationale for their selection.
The senior managers at MedSite selected their important assets. An asterisk (*) by an asset in
Figure 5-2 denotes that it is important. Note that the senior managers selected only four
important assets. Figure 5-3 shows the managers' rationale for selecting the important assets.
Figure 5-3. Most Important Senior Management Assets and Rationale for Selection
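For analysis teams that keep workshop results in electronic form, the documentation called for in this step can be captured as a simple record. The following Python sketch is not part of the OCTAVE Method; the PIDS description paraphrases the scenario above, and the rationale text is illustrative rather than the actual content of Figure 5-3.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One asset identified during a knowledge elicitation workshop."""
    name: str
    category: str            # "systems", "information", "software", "hardware", "people"
    description: str
    important: bool = False  # corresponds to the asterisk (*) in Figures 5-2 and 5-3
    rationale: str = ""      # recorded only for important assets

assets = [
    Asset(
        name="PIDS",
        category="systems",
        description="Database containing most of the important patient "
                    "information at MedSite; role-based access; maintained "
                    "by ABC Systems.",
        important=True,
        rationale="Illustrative only: vital to treating patients.",
    ),
    Asset(name="Paper records", category="information",
          description="Complete patient records recorded on paper."),
]

# List only the assets the participants flagged as important.
important = [a.name for a in assets if a.important]
print(important)  # ['PIDS']
```

A record like this makes it easy for the scribe to check, later in the evaluation, that every important asset has a documented rationale.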
This step concludes the first activity of processes 1 to 3. In the next activity you will identify
scenarios that describe how participants believe their important assets are being threatened.
As people work with information-related assets when performing their jobs, they develop an
understanding of the operational procedures related to accessing and using information. They
learn about the way operations really work in their organization. They know where written
procedures must be followed to the letter, and they know where they have to "make things
work" by deviating from formally written protocols. The knowledge about what is really
happening in the organization is vital when creating threat scenarios.
In this activity participants express concerns about how their most important assets are
threatened. They create the scenarios using prompts based on known sources and outcomes of
threat, resulting in highly contextual threat information from the people who use and depend
upon the organization's assets. This information forms the basis for constructing threat profiles
during process 4.
The threat sources and outcomes in Figure 5-4 are based on known sources of threat from the
generic threat profile. For a more in-depth discussion of the generic threat profile, see Chapter
6. Table 5-4 provides a definition for each category of threat source, while Table 5-5 provides a
definition for each outcome.
To conduct step 1, ask the participants the following question: What scenarios threaten your
important assets? To help them think about threat scenarios, have the participants focus on how
the sources and outcomes contained in Figure 5-4 relate to their important assets.
Note that the participants might consider one asset at a time, or they might consider and discuss
all important assets simultaneously when they identify areas of concern. Identifying areas of
concern is a brainstorming activity, in which participants will likely focus on multiple assets and
sources simultaneously.
Table 5-4. Categories of Threat Source
Deliberate actions by people— This group includes people inside and outside your
organization who might take deliberate action against your assets.
Accidental actions by people— This group includes people inside and outside your
organization who might accidentally harm your assets.
System problems— These are problems with your information technology systems. Examples
include hardware defects, software defects, unavailability of related systems, viruses,
malicious code, and other system-related problems.
Other problems— These problems are beyond your control. Threats in this category include
natural disasters (e.g., floods and earthquakes) that can affect your organization's information
technology systems, unavailability of systems maintained by other organizations, and
interdependency issues. Interdependency issues refer to problems with infrastructure
services, such as power outages, broken water pipes, and telecommunication outages.
Table 5-5. Threat Outcomes
Disclosure— The viewing of confidential or proprietary information by someone who should
not see the information
Modification— An unauthorized changing of an asset
Loss/destruction— The limiting of an asset's availability, either temporarily or because it is
unrecoverable
Interruption— The limiting of an asset's availability, mainly in terms of services
At MedSite, the senior managers identified areas of concern for their important assets. Figure
5-5 shows a few of the areas of concern for PIDS.
Note that the areas of concern in Figure 5-5 are written as complete sentences. One of the
biggest mistakes, made by many inexperienced analysis teams, is to record partial phrases that
do not completely capture the meaning of the concern. When the teams review areas of concern
later in the process, they cannot always remember the exact concern if only a few words were
recorded. Appendix A summarizes the areas of concern identified during processes 1 to 3.
The second step of this activity centers on collecting information about the potential impact on
the organization. This information will be useful when you start to construct risks in process 7. It
will help link the outcomes of threats to business goals and objectives. (See Chapter 9 for more
information about process 7.)
Note that there can be more than one impact for each area of concern. Figure 5-6 illustrates the
potential impact on the organization for two of the senior managers' areas of concern for PIDS.
This concludes the second activity of processes 1 to 3. In the next activity you will identify
security requirements for the participants' most important assets.
In OCTAVE you are ultimately trying to create a protection strategy and risk mitigation plans
geared toward protecting your organization's critical assets. To protect critical assets, you must
first establish what is important about each of those assets. Then you can determine to what
extent you will protect each asset.
At this point in the workshop, participants discuss what qualities are important about the assets
that they have identified. In many cases you will find that an asset is identified as important to
more than one organizational level. However, the quality of the asset that is most valued might
differ among participants from different levels. It is important to understand such differences in
perspective, so that you can consider them later in the evaluation when you are deciding how
best to use organizational resources to protect critical assets.
Security Requirements
Security requirements outline the qualities of an asset that are important to protect. There are
three typical security requirements that organizations need to consider:
Confidentiality— the need to keep proprietary, sensitive, or personal information private and
inaccessible to anyone who is not authorized to see it
Integrity— the authenticity, accuracy, and completeness of an asset
Availability— when or how often an asset must be present or ready for use
The categories of security requirements are contextual for any organization and must be defined
in order to conduct a meaningful evaluation. They can be tailored to meet your organization's
needs. For example, some organizations might want to add authenticity and nonrepudiation to
their list of security requirements. First, you need to decide what categories of security
requirements to incorporate into the evaluation, and then you need to use those requirements
consistently throughout all activities. In this book we consider only confidentiality, integrity, and
availability.
The concept of security requirements has also been difficult for people to understand when they
are learning about OCTAVE. Most people have an intuitive sense about the extent to which
information needs to be private, accurate, and available. However, it is hard for many
participants to create concise and precise statements that reflect their knowledge, and in many
cases, they will need additional assistance from you during this activity.
In step 1, participants identify the security requirements for each important asset that has been
identified. We have found that this activity can be difficult for participants; for example, some
become confused and think that the security requirements are the protection strategies. Instead
of stating that an asset needs to be kept confidential, they will state the actions needed to keep
it confidential. Ask the participants the following questions for each of their important assets:
Is the asset proprietary or sensitive? Does it contain personal information? Should it be
inaccessible to anyone who is not authorized to see it? If so, what is the specific
confidentiality requirement?
Are authenticity, accuracy, and completeness important for this asset? Do you need to
be sure that only authorized people have modified the asset? Do you need to be certain
that nothing was inadvertently deleted or changed? If the answer to any of these
questions is yes, what is the specific integrity requirement?
Is accessibility of the asset important? Who should be able to get to this asset? When or
how often? What is the specific availability requirement?
Are there any other security-related requirements that are important to this asset? What
are they?
The security requirements for PIDS from the senior managers' perspective are shown in Figure
5-7. Note that a security requirement is a general statement. Each category of requirement
(confidentiality, integrity, and availability) can have one or more statements that express the
specific requirements for an asset. For example, the information on PIDS is medical information
that is vital to treating patients. Because it is so important, one would assume that not just
anyone should be allowed to change a patient's medical information. If we examine the
requirement for integrity, we see that this is the case. The managers indicated that "only
authorized users should be able to modify information." This requirement restricts legitimate
access for the purpose of modifying medical information.
Figure 5-7. Security Requirements for PIDS from the Senior Managers' Perspective
In step 2 you examine the relative trade-offs among the requirements. Ask the participants the
following types of questions for each of their important assets:
What is the relative ranking of the security requirements for each information asset?
Participants often have difficulty making this decision. People will often say that all of the
requirements are equally important. However, in reality they rarely are. Is availability more
important than confidentiality of the asset? If you could preserve only one of the requirements
for an asset, which one would it be? These are important questions to consider, because they
will help form the basis for making trade-offs in your risk mitigation plans later in the evaluation.
The senior managers at MedSite had a long discussion about the relative merits of the
requirements. In the end they selected availability as the most important requirement as
indicated by the asterisk (*) in Figure 5-7. They reasoned that information needed to be available
to doctors and nurses when a patient needed care. They really focused on emergency situations.
If the attending physicians could not get to information when it was needed during an
emergency, lives could be lost.
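Teams that record their results electronically can capture each asset's requirements, and the single most important one, in a structure like the following Python sketch. This is an aid for the scribe, not part of the OCTAVE Method; the requirement statements other than the managers' quoted integrity requirement are illustrative assumptions, not the actual content of Figure 5-7.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityRequirements:
    """Security requirements for one asset. Each category may hold several
    specific requirement statements; exactly one category is flagged as
    most important (the asterisk in Figure 5-7)."""
    asset: str
    confidentiality: list = field(default_factory=list)
    integrity: list = field(default_factory=list)
    availability: list = field(default_factory=list)
    most_important: str = ""  # "confidentiality", "integrity", or "availability"

pids = SecurityRequirements(
    asset="PIDS",
    confidentiality=["Only authorized personnel may view patient information."],
    integrity=["Only authorized users should be able to modify information."],
    availability=["Information must be available when patient care requires it."],
    most_important="availability",  # the senior managers' ranking at MedSite
)
print(pids.most_important)  # availability
```

Flagging a single most important category forces the ranking discussion described above to reach a conclusion, which pays off when trade-offs are made in the mitigation plans.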
In general, when you are trying to describe a security requirement for an asset, you need to
understand what aspect of the asset is important. This is especially true for the more complex
assets (systems). During this activity participants might have trouble describing the
requirements. You can help them by suggesting a security requirement and letting them modify
it. You will probably need to take a more active facilitation role for the first asset. Once the
participants get the idea, they will need less help.
For information assets, the security requirements will focus on the confidentiality, integrity, and
availability of the information. For example, the following statements describe security
requirements for an information asset:
Only authorized personnel can view the information (confidentiality).
The information can be modified only by authorized personnel (integrity).
The information must be available whenever it is requested (availability).
Remember that systems assets generally represent groupings of information, software, and
hardware assets. The specific aspect or quality of the system that is important will drive the
security requirements. If the information stored, processed, and transmitted by the system is
the most important aspect, the following example describes the security requirements:
The information on system XYZ can be modified only by authorized personnel (integrity).
The information on system XYZ must be available whenever requested. Downtime for
system XYZ can be only 15 minutes every 24 hours (availability).
If the service provided by the system is the most important aspect, then the following example
describes the security requirements:
The service provided by system XYZ must be complete and consistent (integrity).
The service provided by system XYZ must be available whenever requested. Downtime
for system XYZ can be only 15 minutes every 24 hours (availability).
Notice that no confidentiality requirement was listed. Typically, confidentiality does not apply to
services. However, confidentiality may apply, depending on the specific nature of the service.
For software assets, you should focus on the software application or service when you identify
security requirements. Do not focus on the information that is processed, transmitted, or stored
by the application. If you find that this is how you are thinking about the software asset, then
you are really thinking about a systems or information asset. If the software is commercially or
freely available, confidentiality probably does not apply. If the software is proprietary to your
organization, there might be a confidentiality requirement. The following example relates to
proprietary software assets; for commercially or freely available applications, ignore the
confidentiality requirement.
Only authorized personnel can view or access the proprietary software application
(confidentiality).
For hardware assets, you should focus on the physical hardware when you identify security
requirements. Do not focus on the information that is processed, transmitted, or stored by the
hardware. If you find that this is how you are thinking about the hardware asset, then you are
really thinking about a systems or information asset. Confidentiality generally does not apply to
physical hardware. Modification of a hardware asset focuses on adding or removing hardware
(e.g., removing a disk drive or adding a modem). Availability focuses on whether the asset is
physically available or accessible. The following is a guideline for hardware assets:
The hardware must be accessible to authorized personnel during normal working hours
(availability).
For people assets, you should focus only on the availability requirement. Remember, people
assets are a special case. When people are identified, it is because of some special skill that they
have or because of a service that they provide. Thus, availability of the service or asset is the
primary requirement. The following is a guideline for people assets:
The IT staff must provide ongoing and consistent system and network management
services (availability).
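The guidance above for each asset category can be summarized in a small lookup table. The following Python sketch is a facilitation aid under the assumptions stated in the text; as noted, confidentiality can occasionally apply to a service, so treat the entries as typical defaults rather than rules.

```python
# Which requirement categories typically apply to each kind of asset,
# summarizing the per-category guidance in this section.
APPLICABLE = {
    "information":            {"confidentiality", "integrity", "availability"},
    "systems (information)":  {"confidentiality", "integrity", "availability"},
    "systems (service)":      {"integrity", "availability"},
    "software (proprietary)": {"confidentiality", "integrity", "availability"},
    "software (commercial)":  {"integrity", "availability"},
    "hardware":               {"integrity", "availability"},  # modification = add/remove parts
    "people":                 {"availability"},               # the service they provide
}

print(sorted(APPLICABLE["people"]))  # ['availability']
```

A facilitator can consult this table when participants propose a requirement that does not fit the asset, for example a confidentiality requirement for physical hardware.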
Remember, when people are identified as assets, determine whether there are related assets.
For example, identify a key system that they use or a type of information that they know. When
you examine the security requirements for people assets, you may find that the systems the
people use are also important. However, be careful not to extend this activity too
far. If the people are part of another organization, you can stop after you identify them as an
asset. Your main concern is the service that they provide to you. Their systems are beyond the
scope of your information security risk assessment. If the people are part of your organization,
you can explore related assets.
If you want your organization to improve with respect to how it handles information security,
you need first to establish where you currently are, that is, what you are currently doing well
and where you need to improve. You do this by examining the security practices within your
organization.
In this activity you evaluate your organization's current security practices against a catalog of
known good security practices. You elicit detailed information about your organization's current
security policies, procedures, and practices, thus providing a starting point for improvement. In
OCTAVE we suggest using multiple means to collect information about the organization's current
security practices. The OCTAVE Method uses surveys to collect the information and open
discussion to reveal gaps, inconsistencies, and areas requiring clarification.
In this step you distribute a short survey on security practices to the participants and give them
time to complete the survey. The surveys should be based on known security practices as
documented in the catalog of practices. (See Section 5.1 for more information on the catalog of
practices.) Figure 5-8 shows part of a survey for senior managers. You will find complete
examples of surveys in Appendix B of this book.
Don't know— the respondent does not know whether the practice is used by the
organization.
At MedSite, the senior managers completed the surveys. Each participant answered the question
from his or her own perspective. Figure 5-8 shows part of the survey that was completed by
MedSite's chief administrator. Note that surveys are just one means of collecting information
about current practice. Another way to collect very contextual information about current practice
is to facilitate a discussion around the practices in the survey. You do this in step 2.
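The survey data lends itself to simple tallying before the discussion. The following is a minimal sketch of how you might flag practices worth discussing; the practice names, response values, and flagging rule here are illustrative assumptions, not taken from the actual OCTAVE surveys:

```python
from collections import Counter

# Each respondent answers "Yes", "No", or "Don't know" for each practice.
# The practice names below are illustrative placeholders, not the catalog's wording.
responses = {
    "Security awareness and training": ["Yes", "Yes", "Don't know"],
    "Security policies and regulations": ["No", "No", "Yes"],
}

def tally(responses):
    """Count answers per practice to spot candidate gaps for discussion."""
    return {practice: Counter(answers) for practice, answers in responses.items()}

results = tally(responses)

# One plausible heuristic: practices where "Yes" does not outnumber the
# "No" and "Don't know" answers make good discussion prompts in step 2.
flagged = [p for p, c in results.items() if c["Yes"] <= c["No"] + c["Don't know"]]
```

A tally like this only points the facilitated discussion at candidate gaps; the discussion itself, not the survey counts, produces the contextual information the analysis team needs.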
Step 2: Discuss Current Security Practices and Organizational Vulnerabilities
A facilitated discussion about current security practices in the organization will uncover detailed
information that cannot be elicited by using surveys. In this step you use the surveys as a point
of departure for a discussion about organizational security practices.
During this step the participants identify security practices that they currently use as well as
organizational vulnerabilities that are present in the organization. Organizational vulnerabilities
are weaknesses in organizational policy or practice that can result in unauthorized actions. These
vulnerabilities include missing or inadequate security practices. Two examples of organizational
vulnerabilities are staff members sharing their passwords with others and a lack of written
security policies. In essence, you can think of organizational vulnerabilities as the reverse of
good security practices.
Which issues from the survey would you like to discuss in more detail?
Are there specific policies, procedures, and practices unique to specific assets? What are
they?
Do you think that your organization's protection strategy is effective? How do you know?
The first question addresses areas of the survey that the participants would like to discuss.
Usually, they will focus on issues that are important to them and the organization. The second
question addresses any important issues not covered by the survey. The third question focuses
on specific actions that staff members take to protect certain assets. Sometimes an organization
requires special policies, procedures, or practices for important information technology assets.
The last question is broader and is intended to create a discussion of the general state of
information security in the organization.
The resulting discussion should provide more details about issues covered in the survey and
should elicit unique security practices and organizational vulnerabilities that were not covered in
the survey. This discussion should also uncover issues that are important to or unique to the
organization.
When discussing the first question, you should use the practice areas (e.g., security awareness
and training, security strategy) as well as questions from the survey as prompts for focusing the
participants' attention. For example, you might ask, "What is your impression of the
organization's policies and procedures? Are they working?" Concentrate as much as possible on
the direct experience of the participants with respect to the practices (e.g., ask probing
questions, such as, "What security training have you had?"). The discussion should address what
the organization is doing well (its current security practices) as well as poorly (its organizational
vulnerabilities).
Remember that when the scribe records contextual information, the key is to capture all
information in the words of the participants (and in complete sentences). Later in the evaluation,
the analysis team reviews this information when creating your organization's protection strategy
and risk mitigation plans. If you do not record the information as completely as possible, you will
lose important contextual information.
You also need to document whether a statement represents a security practice or whether it is
an organizational vulnerability. Many times during this step, people focus only on what isn't
working. Make sure that you also prompt them to think about what is working. Figure 5-9 shows
the results of the discussion that senior managers at MedSite had about their organization's
current security practices and organizational vulnerabilities. From the example, you can see that
the senior managers believe that there are two security practices that are used within MedSite
(marked with a "+") and three that are not (marked with a "–"), the latter being their
organizational vulnerabilities.
Figure 5-9. Contextual Security Practice Information from the Senior Managers'
Perspective
The analysis team collected information like this from each workshop in processes 1 to 3. The
team compiled the survey and discussion data prior to the first workshop of process 8. Team
members used the data as background information when they developed MedSite's protection
strategy and risk mitigation plans.
This concludes the knowledge elicitation workshop activities. After you have conducted all of
these workshops, you will have gathered security-related information from throughout the
organization. Next, in process 4, you consolidate the information and start analyzing it.
Process 4 completes phase 1 of OCTAVE by consolidating and refining the individual perspectives
elicited during the first three processes. You gain insight into how each asset is threatened by
examining individual areas of concern in the context of a known range of threats.
During process 4 you perform two vital functions. First, you consolidate the information that you
documented during the first three processes, formatting the information for data analysis.
Consolidating the information enables you to look for inconsistencies and gaps among individual
perspectives. The analysis activities constitute the second vital function. You examine the
individual perspectives and create a global picture of which assets are important to the
organization and how those assets are being threatened.
Process 4 is important because this is where you set the scope for the rest of the evaluation. You
use critical assets to focus the infrastructure evaluation in phase 2, and you use threat profiles
as the basis for the risk analysis conducted in phase 3.
Process 4 Workshop
Process 4 is implemented using the core analysis team members and any supplemental
personnel that they decide to include. An experienced team can complete this workshop in about
three to four hours. Remember to review all activities for process 4 and decide whether your
team collectively has the required knowledge and skills to complete all tasks successfully. We
suggest that your team have the following mix of skills for this process:
Understanding of your organization's business environment
Process 4 requires data consolidation prior to the workshop. This consolidation could
also be done incrementally at the end of each of the knowledge elicitation workshops.
Table 6-1 summarizes the data consolidation activities. Table 6-2 summarizes the activities that
the analysis team must perform during the workshop.
Table 6-1. Process 4 Data Consolidation Activities

Group assets by organizational level: The assets that were identified during processes 1 to 3 are grouped by organizational level to easily identify common assets and viewpoints.

Group security requirements by organizational level and asset: Security requirements that were identified during processes 1 to 3 are grouped by asset and organizational level to easily identify commonalities and conflicts.

Group areas of concern and impacts by organizational level and asset: Areas of concern that were identified during processes 1 to 3 are grouped by asset and organizational level to easily identify common concerns and gaps in perception at different levels.
Table 6-2. Process 4 Activities

Select critical assets: The analysis team determines which assets will have a large adverse impact on the organization if their security requirements are violated. Those with the greatest impact on the organization are the critical assets. Normally, the analysis team selects five critical assets.

Refine security requirements for critical assets: The analysis team creates or refines the security requirements for the organization's critical assets. In addition, the team selects the most important security requirement for each critical asset.

Identify threats to critical assets: The analysis team identifies the threats to each critical asset by mapping the areas of concern for each critical asset to a generic threat profile, creating the unique threat profile for that asset.
Figure 6-2. Asset-Based Threat Tree for Human Actors Using Physical Access
Figure 6-3. Asset-Based Threat Tree for System Problems
Figure 6-4. Asset-Based Threat Tree for Other Problems
Chapter 12 addresses tailoring issues for the generic threat profile.
Before you can analyze the information that you collected during processes 1 to 3, you need to
organize it. Consolidating, or grouping, data provides information in a format you can easily read
and understand. This section presents three activities in which the focus is grouping information
from processes 1 to 3. These activities do not require decision making and can be carried out by
one team member or performed incrementally at the end of processes 2 and 3. They can also be
automated.
When you consolidate data from processes 1 to 3, you need to represent the data as originally
recorded. You shouldn't paraphrase, edit, or interject opinions into the data as you consolidate
them. This preserves the integrity of the data for all later analysis tasks.
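Because the grouping activities require no decision making, they are straightforward to automate. The sketch below assumes each recorded item carries the organizational level and asset it came from; the field names and records are ours for illustration, not OCTAVE worksheet fields:

```python
from collections import defaultdict

# Illustrative records; in practice these come from the process 1-3 worksheets.
items = [
    {"level": "senior managers", "asset": "PIDS", "note": "availability is key"},
    {"level": "staff", "asset": "PIDS", "note": "data entry errors occur"},
    {"level": "staff", "asset": "ECDS", "note": "system is often slow"},
]

def group_by(records, *fields):
    """Group records by the given fields, preserving the original wording."""
    grouped = defaultdict(list)
    for rec in records:
        key = tuple(rec[f] for f in fields)
        grouped[key].append(rec["note"])  # keep the participants' words verbatim
    return dict(grouped)

by_level_and_asset = group_by(items, "level", "asset")
```

Note that the grouping only rearranges the records; nothing is paraphrased or edited, which is exactly the integrity property the consolidation activities require.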
In this activity you group the assets from processes 1 to 3 according to the organizational level
that identified them. For each organizational level you document the following:
Let's look at consolidated asset information in the context of our example. Figure 6-5 shows part
of the consolidated list of important assets identified by the operational area managers at
MedSite.
The security requirements identified during processes 1 to 3 are grouped according to the
organizational level that identified them and according to asset. Since more than one workshop
group might have selected an asset as important, you can have more than one set of security
requirements per asset. When you record security requirements information, make sure that you
also indicate the security requirement(s) each workshop group considered most important.
Let's look at how the analysis team at MedSite consolidated security requirements information.
Figure 6-6 shows the security requirements for PIDS. Notice that senior managers and staff
considered availability to be the most important security requirement, while the operational area
managers viewed all requirements as equally important. The words that were recorded during
the knowledge elicitation workshops are included on the worksheet. Remember that you want to
document all information in the words of the participants in order to help you resolve conflicts in
viewpoints. There are no PIDS security requirements recorded for the information technology
staff, because PIDS was not selected as an important asset during their workshop in process 3.
In the final consolidation activity you group the areas of concern identified during processes 1 to
3 according to the organizational level that identified them and according to asset. Remember
also to record any information about the resulting impact on the organization, if it was identified.
The consolidated information helps highlight any conflicts or similarities.
Figure 6-7 shows areas of concern for PIDS identified by the operational area managers at
MedSite. Notice that the third area of concern does not have an associated impact. During the
workshop, this impact was not discussed, nor did the analysis team actively pursue the
information. As you consolidate information, you will often find that the information is
incomplete in places. Part of your job during the process 4 workshop is to fill in these blanks as
best as you can.
Figure 6-7. Areas of Concern Group
This completes the consolidation activities. Next, we move to the process 4 workshop, starting
with the selection of your organization's most critical assets.
This activity requires you to make decisions that shape the remainder of the evaluation—
selecting your organization's critical assets. Depending upon the size of the organization, the
number of information assets identified during processes 1 to 3 could easily exceed a hundred.
To make the analysis manageable, you need to narrow the focus of the evaluation by selecting
the few assets that are most critical to achieving the mission and meeting the business
objectives of your organization. These are the only assets that you will analyze during later
activities.
Step 1: Identify Critical Assets
Select your organization's five most critical assets. When you select critical assets, you are not
bound to choose only five. Five assets are normally enough to enable you to develop a good set
of mitigation plans during phase 3. However, you must use your judgment—you can select fewer
than five or more than five if you desire. As you select critical assets, consider which assets will
result in a large adverse impact on the organization in one of the following scenarios:
Loss or destruction
Interrupted access
Remember that each of you brings a unique perspective to the discussion. Make sure you review
the consolidated list of important assets from the processes 1 to 3 workshops. It is important
that you review what was judged to be important from the participants' perspectives. Remember
that you must consider the organizational view when you make your selections. When you reach
a decision and select the critical assets, make sure that you record your selections.
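One way to support the selection discussion is to rate each candidate asset's adverse impact and shortlist the highest-rated few. The ratings and asset names below are invented for illustration; in OCTAVE the team's collective judgment, not a formula, makes the final call:

```python
# Hypothetical impact ratings (3 = large adverse impact on the organization
# if the asset's security requirements are violated).
impact = {
    "PIDS": 3, "ECDS": 3, "paper medical records": 3,
    "personal computers": 2, "IT staff": 2, "office printers": 1,
}

def shortlist(ratings, n=5):
    """Return the n assets with the largest adverse impact, highest first."""
    ranked = sorted(ratings, key=lambda a: ratings[a], reverse=True)
    return ranked[:n]

candidates = shortlist(impact)
```

A shortlist like this is only a starting point for the team's discussion; as the MedSite example shows, the team may add an asset (such as personal computers) that no earlier workshop rated highly.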
Let's review which critical assets the analysis team at MedSite selected. Before reaching a
decision, each analysis team member reviewed the assets identified as important by each
organizational level, the rationale for selecting them, the security requirements for each
important asset, and the areas of concern for each important asset. They then engaged in a
lively discussion about the relative merits of selecting each asset. In the end, they selected the
five assets shown in Figure 6-8.
For example, in our MedSite case study, personal computers were not identified as an
important asset by any of the groups during processes 1 to 3. However, when the analysis team
was selecting MedSite's critical assets, it realized how important personal computers were for
accessing all of the organization's systems. Thus, the analysis team decided to make personal
computers one of the critical assets.
While selecting critical assets in step 1, you will discuss many issues related to these assets. In
this step you document your rationale for selecting each critical asset, so that if you are asked
after the evaluation why you designated an asset as critical, you will be able to
provide an answer. In addition, by understanding why an asset is critical, you will be better able
to define security requirements and threats in later process 4 activities. For each critical asset,
consider and record your answer to this question: Why is the asset critical to meeting the
mission of your organization?
At MedSite, the analysis team recorded information related to the organization's critical assets.
Figure 6-9 shows information related to PIDS. The rationale for including PIDS as a critical asset
is simple: MedSite depends upon it to deliver patient care. PIDS stores, processes, and transmits
many types of patient information for various departments at MedSite. The other piece of
information that the team recorded is a description of PIDS. Step 3 deals with how to create a
description for each critical asset.
Discuss the operational aspects of each asset. Consider the following questions for each critical
asset.
How is it used?
These questions focus on how assets are used and why they are important. If you can't answer
all of these questions, you may need to ask people in your organization who can answer them.
The information that you identify by answering these questions will be useful later in this process
when you identify threats to the critical assets and in process 8 when you build mitigation plans.
Make sure that you record this information.
At MedSite, the analysis team discussed the questions relative to PIDS. Two of the team
members wrote a brief description based on their experiences using PIDS. Figure 6-9 shows the
results.
Now that you have identified the critical assets for your organization, you next need to document
what about each asset is important by describing or refining its security requirements. In the
next activity we examine this topic.
This activity can be difficult for many analysis teams, as it requires defining security
requirements for each critical asset, focusing on the organizational perspective. As you review
security requirements from earlier workshops, you will start to see conflicts and gaps among the
data.
For example, senior managers may have selected confidentiality as the most important security
requirement, while staff members valued availability most. Your task is to view the information
from the perspective of the organization and resolve the differences in the data. You must
consider trade-offs in selecting one security requirement over another. Which aspect of security
would you sacrifice to protect another? Is easy availability of data more important than
preserving confidentiality? These are the types of issues that you must resolve during this
activity.
Consider the following questions when refining or describing security requirements for each
critical asset:
Are authenticity, accuracy, and completeness important for this asset? If yes, what is the
specific integrity requirement?
As you think about the questions, review the security requirements and areas of concern that
were recorded for that asset during processes 1 to 3. Remember that if the critical asset was not
identified as important during the earlier processes, you will have neither areas of concern nor
security requirements information for it. In that case you will have to create security
requirements without the benefit of this additional information. Discuss the questions among
yourselves. When you reach a decision about the security requirements for a critical asset, make
sure that you record this information.
The analysis team at MedSite reviewed the security requirements and areas of concern for PIDS
that were identified by earlier workshop participants (see Figures 6-6 and 6-7). The team then
used its collective judgment and experience to create a refined list of security requirements. You
can see the results in the right column of Figure 6-10.
Once you have refined (or in some cases, created) the security requirements for each critical
asset, you need to determine which requirement is most important.
Consider any conflicts among the security requirements. As you do this, discuss the trade-offs
among the requirements. Is confidentiality more important than availability? How important is
integrity relative to the other requirements? This trade-off can be difficult. You need to avoid
taking the easy way out and declaring that all requirements are equally important. When you get
to mitigation in phase 3, you may find that you need to make a choice between mitigation
strategies or actions based on the relative priorities of security requirements. Will you need to
sacrifice some confidentiality for availability? When you reach a decision about the most
important security requirement for a critical asset, make sure that you record this information.
The analysis team from MedSite discussed the trade-offs among the requirements. They decided
that availability was the most important requirement and then documented this decision by
placing an X in the middle column of the table in Figure 6-10. You could also put all of the
security requirements in priority order.
If you look at the security requirements for PIDS created by the operational area managers
during process 2, you will see that they selected all requirements as being equally important
(see Figure 6-6). Although the facilitator captured the wishes of the managers during that
workshop, the analysis team members understood that they needed to evaluate the trade-offs
and make a decision during this step. They selected availability as the top requirement for PIDS
because the primary mission of MedSite is to treat its patients.
Now you understand what assets are most critical to your organization, and you have examined
what aspects of those assets are important. It is time to examine what threatens your critical
assets.
At this point in the evaluation you begin to examine the range of threats that can affect your
critical assets. You perform a gap analysis of the areas of concern you elicited earlier in the
evaluation, creating a complete threat profile for each critical asset.
Recall that a generic threat profile is a structured way of presenting a range of potential threats
to a critical asset. In this activity you essentially tailor the generic threat profile for each critical
asset by deciding which threats in the range of possibilities actually apply to a critical asset. This
information helps to form the basis for examining the computing infrastructure for vulnerabilities
as well as for identifying and analyzing risks to critical assets.
For each critical asset, review the consolidated areas of concern that affect that asset. Consider
the following question: How do the areas of concern map to the threat profile?
To map an area of concern to the threat profile, you must first determine which category of
threat (e.g., human actors using network access) is represented by the area of concern. You
then determine which threat properties (asset, access, actor, motive, outcome) are represented
by the area of concern. Finally, you map the threat properties to the corresponding asset-based
threat tree.
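A threat in the profile is simply a combination of the five properties, so the mapping step amounts to filling in those properties for each area of concern. A sketch, in which the concern text and property values are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    """One branch of an asset-based threat tree, as a set of properties."""
    asset: str
    access: str   # "network" or "physical" (human-actor categories only)
    actor: str    # e.g., "inside" or "outside"
    motive: str   # "accidental" or "deliberate"
    outcome: str  # "disclosure", "modification", "loss/destruction", "interruption"

# First PIDS area of concern: staff entering incorrect data over the network.
concern = "Staff members sometimes enter incorrect data into PIDS."
mapped = Threat(asset="PIDS information", access="network",
                actor="inside", motive="accidental", outcome="modification")
```

Recording each mapped concern in this structured form makes the later gap analysis mechanical: any property combination with no mapped concern is a branch still to be considered.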
Let's examine how the analysis team at MedSite performed the mapping. Figure 6-11 shows
three areas of concern for PIDS. Each is from the human actors using network access category.
The team determined which threat properties were represented by each area of concern. In the
first area of concern, the asset is the information on the PIDS system. The concern focuses on
staff members using network access to enter data into PIDS. Thus, network access and people
inside the organization are part of the concern, and the motive in this case is accidental. Finally,
the outcome is modification—the data are entered incorrectly.
Notice how much interpretation is required when mapping areas of concern, which can be
ambiguous. That is why it is important to be as precise as possible when capturing areas of
concern during the knowledge elicitation workshops. For example, in the first item in the table,
the threat actor is stated as "too many people." The analysis team interpreted this to mean
insiders (staff members). Figure 6-12 shows the threat properties for the areas of concern from
Figure 6-11.
During this step you must remember that the areas of concern were elicited during the
knowledge elicitation workshops. It is unlikely that all threats for an asset will be elicited during
those workshops. Your job during this step is to determine what other threats could affect your
organization's critical assets.
For which remaining potential threats is there a more than negligible possibility of a
threat to the asset? Mark these branches in the threat profile.
When discussing the questions, remember to consider all remaining branches for each threat
tree. When you reach a decision, mark each additional more than negligible threat on the
appropriate asset-based threat tree.
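For the human-actors trees, the branches are the combinations of actor, motive, and outcome, so enumerating them makes it easy to see which branches the areas of concern left unmarked. A sketch under that assumption (the marked branches below are illustrative):

```python
from itertools import product

# Property values for a human-actors threat tree.
ACTORS = ["inside", "outside"]
MOTIVES = ["accidental", "deliberate"]
OUTCOMES = ["disclosure", "modification", "loss/destruction", "interruption"]

def unmarked_branches(marked):
    """Return the branches of a human-actors tree not yet covered by concerns."""
    all_branches = set(product(ACTORS, MOTIVES, OUTCOMES))
    return sorted(all_branches - set(marked))

# Branches already marked from the areas of concern (illustrative).
marked = [("inside", "accidental", "modification"),
          ("inside", "deliberate", "disclosure")]
remaining = unmarked_branches(marked)
```

The team still decides, branch by branch, which of the remaining combinations pose a more than negligible possibility of a threat; the enumeration only guarantees that no branch is overlooked.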
Always remember to record relevant contextual information on the threat profile. This
information elaborates on the information represented by the trees. If a branch of the human
actors using network access tree indicates an outside threat actor, you might want to add
contextual notes to supplement the areas of concern. For example, if the threat refers
specifically to threats from corporate spies, make sure that you add a note indicating this.
In some cases you might find that an area of concern contains a threat actor not in the generic
threat profile. This is especially true in the systems problems and other problems threat
categories. These categories might contain threats that are unique to a system or to your
environment, or new threats that haven't been added to the generic threat profile. Since these
unique threats might not easily map to the threat actors in the generic threat profile, you must
add them to threat profiles for the affected critical assets. Depending on the nature of the threat
actor identified from an area of concern, you might decide to add it to the generic threat profile.
The analysis team at MedSite performed a gap analysis on the PIDS threat profile. During the
analysis the team members decided that if insiders could deliberately disclose and modify PIDS
information, they could also destroy or deny access to the information. The team then identified
other threats to the PIDS information. In fact, the analysis team felt that all threats except
accidental actions by outsiders were applicable to the information on PIDS. They felt that PIDS
was too difficult to access by accident. Only a determined outsider would be able to get in.
Figure 6-14 shows the asset-based threat tree for human actors using network access after the
gap analysis. The team used the same process for the other categories of threats, yielding a
threat profile for the critical asset. (Appendix A presents the entire threat profile for PIDS.)
The team removed the following threat actors from the PIDS threat profile: third-party
problems or unavailability of third-party systems and telecommunications problems or
unavailability.
The team added the following threat actors to the PIDS threat profile: lack of control
over hardware and software and lack of trained maintenance personnel.
After you have created a threat profile for each critical asset, look at the outcomes across the
threat profile. Compare the outcomes with the security requirements to check for consistency
and completeness.
When comparing threat trees and security requirements, you must understand the relationships
among the outcomes and the security requirements, as shown in Table 6-4.
You might have missed threats that result in disclosure of the critical asset.
The security requirement might be driven by law or regulation rather than by an existing
threat.
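The relationships in Table 6-4 are simple: disclosure maps to confidentiality, modification to integrity, and loss/destruction and interruption to availability. A consistency check along those lines can be sketched as follows (the function and its report fields are our own construction, not an OCTAVE worksheet):

```python
# Relationships between threat outcomes and security requirements.
OUTCOME_TO_REQUIREMENT = {
    "disclosure": "confidentiality",
    "modification": "integrity",
    "loss/destruction": "availability",
    "interruption": "availability",
}

def check_consistency(outcomes, requirements):
    """Flag requirements with no threat outcome behind them, and vice versa.

    A flagged requirement is not necessarily wrong; it may be driven by law
    or regulation rather than by an identified threat.
    """
    implied = {OUTCOME_TO_REQUIREMENT[o] for o in outcomes}
    return {
        "requirements_without_threats": sorted(set(requirements) - implied),
        "threat_driven_but_unstated": sorted(implied - set(requirements)),
    }

report = check_consistency(
    outcomes=["modification", "interruption"],
    requirements=["confidentiality", "integrity", "availability"],
)
```

Here confidentiality is flagged because no threat outcome of disclosure was identified; the team would then either look for missed disclosure threats or confirm that the requirement is regulatory.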
You should note that the category of asset dictates which threat categories you should consider
for a critical asset. Complete the threat trees for these categories, using the following
information as a guide:
For information assets, you need to determine whether the asset is represented
electronically (on a systems asset), physically, or both. For electronic information, the
following threat categories apply: human actors using network access, human actors
using physical access, systems problems, and other problems.
For information that is represented physically (for example, on paper only), the following
threat categories apply: human actors using physical access and other problems.
Hardware assets focus only on the physical information technology hardware. The
following threat categories apply to hardware assets: human actors using physical
access and other problems.
People assets focus on either a special skill that the people have or a service that they
provide. The only threat category that applies to people assets is other problems.
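The guidance above amounts to a lookup from asset category to applicable threat categories, which can be captured directly (only the categories stated above are included; the key names are ours):

```python
# Which threat categories to complete, by asset category.
APPLICABLE_CATEGORIES = {
    "information (electronic)": [
        "human actors using network access",
        "human actors using physical access",
        "systems problems",
        "other problems",
    ],
    "information (physical)": [
        "human actors using physical access",
        "other problems",
    ],
    "hardware": [
        "human actors using physical access",
        "other problems",
    ],
    "people": ["other problems"],
}

def categories_for(asset_category):
    """Return the threat categories whose trees must be completed."""
    return APPLICABLE_CATEGORIES[asset_category]
```

A lookup like this keeps the team from completing trees that cannot apply, such as a network-access tree for a purely physical asset.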
You should also consider checking for consistency across critical assets. For example, the
analysis team at MedSite identified three systems assets as being critical (PIDS, ECDS, and
personal computers). When mapping areas of concern to the PIDS threat profile, team members
identified two unique threat actors for PIDS (lack of control over hardware and software and lack
of trained maintenance personnel). As a consistency check, the team examined the threat
profiles for ECDS and personal computers to see if either of the unique threat actors for PIDS
affects those systems as well.
This completes our presentation of process 4. Chapter 7 looks at process 5, in which you identify
the key components of your organization's computing infrastructure. These components are
used to store, transmit, and process your organization's critical information, and they are
evaluated for technological weaknesses during phase 2 of the OCTAVE Method.
Upon completion of process 4 you identified your organization's critical assets and examined the
threats to those assets. In process 5 you use this information to determine how to evaluate your
organization's computing infrastructure for technology vulnerabilities.
You need to focus the vulnerability evaluation to complete it efficiently and effectively.
To understand your risk, you need to collect vulnerability information only on key
components relative to the critical assets. Process 5 enables you to identify those key
components.
Process 5 Workshop
Process 5 is implemented using the core analysis team members as well as any supplemental
personnel that this team decides to include. Since this workshop also marks the beginning of the
technology-intensive activities of the evaluation, you might include additional information
technology personnel in the workshop.
The workshop should take an experienced analysis team two to four hours to conduct.
Remember to review all activities for process 5 and decide whether your team collectively has
the required knowledge and skills to complete all tasks successfully. We suggest that your team
members have the following skills:
Before looking at process 5 in detail, let's examine technology vulnerabilities and the supporting
information you'll need to conduct this process.
Technology Vulnerabilities
Ultimately, your goal during phase 2 of OCTAVE is to identify technological weaknesses in the
computing infrastructure. Technology vulnerabilities are weaknesses in systems, devices, and
components that can directly lead to unauthorized action [NSTISSC 98]. Technology
vulnerabilities are present in and apply to network services, architecture, operating systems, and
applications. They are often grouped into three categories [Howard 98]: design, implementation, and configuration vulnerabilities.
Consider a case in which designers specify a weak authentication mechanism for a system that
will store classified information. This can be considered a design vulnerability. If system
developers implement the requirement as specified, the resulting authentication mechanism will
allow attackers to break into the system easily.
Activity: Identify key classes of components
Description: The analysis team establishes the system(s) of interest for each critical asset. The team then identifies the classes of components that are related to the system(s) of interest.

Activity: Identify infrastructure components to examine
Description: The analysis team selects specific components to evaluate, choosing one or more infrastructure components from each key class. In addition, the team selects an approach and specific tools for evaluating vulnerabilities.
Now let's look at a common implementation vulnerability, the buffer overflow. A buffer overflow
exploit takes advantage of programs that improperly parse data and inadvertently attempt to
store too much data in a storage (memory) area that is too small, causing an overflow. One
possible result of a buffer overflow is that an attacker can execute whatever command is
desired, thus potentially affecting the confidentiality, integrity, or availability of the data on that
system. Buffer overflows can also force a system to abort because it is trying to perform illegal
instructions, compromising the availability of that system.
A system contains accounts with default passwords, allowing an attacker to gain access
to the system by using a commonly known password.
File permissions on a system are set up to allow "world write" permission for new files,
meaning that anyone with access to the system can read or change information on that
system.
Services that are known to be vulnerable are running on a system, allowing attackers to
exploit the vulnerability and gain access to the system.
Each of the above examples of configuration vulnerabilities stems from systems not being
securely configured by administrators. You should also note that once design and
implementation vulnerabilities become known (often because someone has exploited them and
gained access to a system), vendors typically work to address the vulnerabilities and make
software patches containing the fix available to their customers. It is then the responsibility of
system and network administrators to apply the patches to the appropriate systems. Thus, you
can see how design and implementation vulnerabilities can eventually be transformed into
related configuration vulnerabilities.
During process 5 you select components from your computing infrastructure to examine for
technology vulnerabilities. As you do this, you will most likely refer to information about your
computing infrastructure. For this, you will need a network topology, or map, that represents the
layout of the computing infrastructure, including all access points to the organization's networks.
You can also use a listing of the systems in the organization. It is important that one member of
your team be able to read and interpret the network topology. The following list highlights three
potential sources of information about your computing infrastructure that you can use:
Network topology diagrams— electronic or paper documents used to display the logical
or physical mapping of a network. These documents identify the connectivity of systems
and networking components. They usually contain less detail than that provided by
network mapping tools.
Network mapping tools— software used to search a network, identifying the physical
connectivity of systems and networking components. The software also displays detailed
information about the interconnectivity of networks and devices (routers, switches,
bridges, hosts).
Any of the above sources of information can be used. You need to decide how much and what
type of information you need to select the key classes of components. Typically, a network
topology diagram is sufficient for the activities in process 5.
Next we look at the first activity of process 5, in which you identify key classes, or types, of
components in your computing infrastructure.
In this activity you look at critical assets and threats from phase 1 in relation to your computing
infrastructure. You examine network access paths (how information or services can be accessed
via your organization's network) in the context of threat scenarios to identify the important
classes of components for your critical assets.
You focus on the threat tree for human actors using network access, because that tree defines
the range of scenarios that threaten the critical asset due to deliberate exploitation of technology
vulnerabilities by people. Thus, this activity is limited to identifying information technology
components that could be used as part of network attacks against critical assets. Figure 7-2
illustrates the relationship between a threat tree and infrastructure components.
Note that you could also use a similar approach to examine threat scenarios for human actors
using physical access. By examining the physical threat scenarios, you could identify important
components from your physical infrastructure that could be used during attacks.
In this step you identify the system that is most closely linked to the critical asset. This is the
system of interest. We define a system as a logical grouping of components designed to perform
a defined function(s) or meet a defined objective(s).
The system of interest is a system that gives a threat actor access to a critical asset. It is also
the system that gives legitimate users access to a critical asset. Consider the following
guidelines as you identify the system of interest for different types of assets:
For information assets, the system of interest is the one most closely linked to the
information. It can be where the critical information asset is stored and processed. It can
also be where the critical information asset moves outside the network (backup systems,
off-site storage, other storage devices).
For software assets, the system of interest is the system that is most closely linked to
the software application or service. It can be the system from which the critical software
asset is served or where it is stored.
To conduct step 1, select a critical asset. Remember, you will examine only the threat tree for
human actors using network access during this activity. Review the scenarios represented by
that threat tree. If the tree has no threats marked, you will not need to complete this activity for
the critical asset, and you should move on to the next critical asset.
If threats for human actors using network access do exist for the critical asset, consider the
following questions:
Which system(s) is most closely linked to the critical asset? In which system(s) is the
critical asset stored and processed?
Where outside of the system of interest do critical information assets move? Backup
system? Off-site storage? Other?
Based on the critical asset, which system(s) would be the target of a threat actor acting
deliberately?
Refer to your network topology diagrams as needed. Identify systems of interest for all
applicable critical assets and record this information.
You may have multiple systems of interest for a critical asset.
For example, you might identify multiple systems of interest for information and software assets
because those types of assets are often closely linked to multiple systems. Distributed assets,
such as the network, might also comprise multiple systems of interest. For distributed critical
assets, you have a couple of options when identifying the system(s) of interest. If you realize
that the critical asset is defined too broadly, you could then define it more narrowly or break it
down into smaller critical assets. Alternatively, you can accept how the critical asset is defined
and identify multiple systems of interest for it.
Let's consider the sample scenario. For process 5, the analysis team members augmented their
skills by including two additional people from MedSite's information technology department as
well as one member from the information technology staff at ABC Systems. They all reviewed
their organization's network topology diagram and selected systems of interest for each of their
critical assets, shown in Figure 7-3.
Note that the analysis team did not identify systems of interest for paper medical records and
ABC Systems. Since the paper medical records are not electronic, there are no threats from
network attacks on the paper medical records. ABC Systems refers to a group of people, and
people assets are not subject to network attacks. The systems used by the staff at ABC Systems
are subject to network attacks. However, those systems are outside the scope of MedSite's risk
evaluation.
This situation emphasizes an interdependency issue for MedSite. If threats to the systems used
by ABC Systems existed and then materialized, the service provided to MedSite by ABC Systems
could be affected. The analysis team checked their threat trees for their systems assets (PIDS,
ECDS, and personal computers) to make sure that a threat to ABC Systems was identified as an
interdependency threat on those assets' threat trees.
Note that the results from process 5 caused the analysis team to go back and review information
that they had completed during process 4. This illustrates the iterative nature of risk
evaluations. Remember, the results of certain analysis activities will cause you to revisit
decisions or review information from previous activities.
In this step you identify the classes (or types) of components that are part of or are related to
each system of interest. When legitimate users access a critical asset, they also access devices
and components from these classes, as indeed threat actors do when they deliberately target a
critical asset. Thus, in this step you are examining both how staff members legitimately access a
system of interest via the network and how human threat actors use unauthorized access to
reach the system of interest. Table 7-2 highlights the key classes of components that you will
consider. This is a basic set of key component classes, and the classes that you consider in this
activity are contextual. You may need to refine this list in order to conduct a meaningful
evaluation.
To conduct step 2, consider the following questions for each critical asset for which you identified
a system of interest:
Which types of components are part of the system of interest? Consider servers,
networking components, security components, desktop workstations, home machines,
laptops, storage devices, wireless components, and others.
How could threat actors access the system of interest? Via the Internet? Via the internal
network? Shared external networks? Wireless devices? Others?
Which types of components could a threat actor use to access the system of interest?
Which could serve as intermediate access points? Consider physical and network access
to servers, networking components, security components, desktop workstations, home
machines, laptops, storage devices, wireless components, and others.
What other systems could a threat actor use to access the system of interest?
Based on your answers to the above questions, which classes of components could be
part of the threat scenarios?
By answering these questions, you are reviewing access paths for each system of interest.
Remember to refer to your network topology as needed. When you identify which classes of
components could be part of the threat scenarios, record this information and the rationale for
selecting each key component class.
In our example, the analysis team selected key classes of components for each system of
interest. In performing this step, the members of the analysis team from the administrative and
clinical parts of the organization described how they used the systems to access information. The
members of the team with information technology skills (remember that the team included three
additional people with information technology skills for this workshop) reviewed the information
about how systems are accessed in relation to the organization's network topology to identify
the key classes of components. Figure 7-4 shows the key classes of components for PIDS and
their rationale for selection; Figure 7-5 shows the network topology map used to identify the
component classes. A check mark by a class in Figure 7-4 indicates that the team selected it as
a key component class for PIDS. The team also recorded its reasons for selecting each class for
PIDS.
Figure 7-5. Access Paths and Key Classes of Components for PIDS
As the analysis team was reviewing the access paths for PIDS using the network topology (see
Figure 7-5), the team members made some interesting observations. They noticed that several
access paths relied upon components that were controlled by other organizations or by
individuals, for example:
ABC Systems had access to MedSite's internal network via a connection that bypassed
the firewall.
Staff with home machines could gain remote access to PIDS via the Internet and
MedSite's Internet Service Provider.
Equipment used by ABC Systems, the Internet service provider, and home users could not be
examined for technology vulnerabilities during the risk evaluation, because those components
are not owned by MedSite. However, if any of those components have technology vulnerabilities,
information belonging to MedSite could be at risk. The analysis team checked to see if this
presented any threats that had not been recorded on the human actors using network access
threat trees for applicable critical assets. They also recorded these observations as contextual
notes on the appropriate threat trees. As they talked among themselves, the team members
agreed that these were broad issues that had policy implications for the organization. The team
members agreed to revisit the issues during process 8 when they develop risk mitigation plans
and a protection strategy.
This concludes the first activity of process 5. In the next activity you select specific components
from each key class to evaluate for technology vulnerabilities.
Recall that focus on the critical few is a guiding principle of this evaluation process. In this
activity you follow that principle when you select specific components from each key class to
examine for technology vulnerabilities.
One point that needs to be emphasized here is the difference between performing a vulnerability
evaluation in the context of a risk evaluation and doing so in the context of an ongoing
vulnerability management practice. During this activity your goal is to select enough components
from each key class to enable you to gain an understanding of how vulnerable your computing
infrastructure currently is.
By contrast, when you form your risk mitigation plans in process 8, you may decide that
vulnerability management is a practice that your organization should undertake to mitigate your
risks. (The catalog of practices in Appendix C presents more information about vulnerability
management.) As part of that ongoing vulnerability management practice, you periodically
examine all components of your infrastructure. Your goal in vulnerability management is
continually to identify and then eliminate technology vulnerabilities in your computing
infrastructure. In this activity you target your collection of vulnerability information.
Look at the key classes of components you identified for your critical assets. Review your
organization's network topology diagram in relation to each key class of component for that
critical asset. You must determine how many infrastructure components to evaluate from each
class. You need to evaluate enough components from each class to get a sufficient
understanding of the vulnerability status of a "typical" component from the class. As you select
specific components to evaluate, consider the following questions:
When you select a specific component, you also need the Internet Protocol (IP) address and the
host/domain name system (DNS) name (fully qualified domain name) for the device. Remember
to select one or more components in each key class. Once you have chosen components from a
class, you need some consistent way of identifying them. We suggest using components' IP
addresses and host/DNS names. In larger organizations, IP addresses can change on a daily
basis for many components, although this is not likely for servers, routers, and firewalls.
Recording the fully qualified domain name of the component helps to identify it more reliably
than the IP address alone because of services like DHCP (dynamic host configuration protocol),
which may assign a different IP address each time a machine boots. Record the IP addresses or fully
qualified domain names as well as the rationale for selecting those devices.
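One way to capture both identifiers is the system resolver API. The sketch below resolves a host name to its IPv4 address with getaddrinfo(); "localhost" is used only so the example is self-contained, and in practice you would record the fully qualified domain name alongside the address it resolves to.

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Resolve a host name to a dotted-quad IPv4 address string.
 * Returns 0 on success, -1 if the name does not resolve. */
int resolve_ipv4(const char *host, char *out, size_t outlen) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;            /* IPv4 only for this sketch */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return -1;
    struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sa->sin_addr, out, outlen);
    freeaddrinfo(res);
    return 0;
}

int main(void) {
    char ip[INET_ADDRSTRLEN];
    if (resolve_ipv4("localhost", ip, sizeof ip) == 0)
        printf("localhost -> %s\n", ip);
    return 0;
}
```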
You'll need to select infrastructure components from each key class for all critical assets. Keep in
mind that some components will be important to more than one critical asset. As you select
components to evaluate, look for any overlaps and redundancy across critical assets.
In selecting specific components to include in a vulnerability evaluation, you need to balance the
comprehensiveness of the evaluation with the effort required to evaluate the components. For
example, if you have selected desktop workstations as a key class of components, you need to
select one or more workstations to include in your vulnerability evaluation. If you have a
thousand desktop workstations in your organization, you need to decide how many to include in
the technology evaluation. Since your goal is just to get a feel for the vulnerability status of a
typical workstation, you probably need to evaluate only a handful of workstations.
In general, you want to make sure that you have enough information to understand the
vulnerability status of the key class, but you don't want to select so many components that you
have trouble sorting through all of the data. You need to determine when you have enough
information to move forward in the process. No universal guideline can tell you how many
devices you should select from a given class. You have to use your best judgment.
Figure 7-6 shows which components the analysis team at MedSite selected for evaluation, as
well as information about how the analysis team intends to conduct the vulnerability evaluation.
You identify this information during the next step of this activity, when you develop a plan for
conducting vulnerability evaluation.
In this step you answer the following question: What approach will we use to evaluate each
selected component?
When you select an approach for evaluating an infrastructure component, you determine how
the evaluation will be performed. You decide whether your information technology staff will
perform the evaluation or whether you intend to outsource the evaluation to external experts. If
you already run vulnerability tools, check to see if you have recent results that can be used.
If you decide that your organization will perform the evaluation, you need to identify the
software tool(s) you will use. You also need to identify who will perform the evaluation and
interpret the results. Always make sure that you have permission to run any tools on your site's
infrastructure. There may be legal implications or personal liability issues. Also make sure that
you set a specific schedule for running the tools and that you let all stakeholders know what you
intend to do and when you intend to do it. You should determine if there are any potential side
effects from running the tools and notify stakeholders accordingly.
If you decide to outsource, think about how you will communicate your needs and requirements
to the external experts and how you intend to verify whether they have sufficiently addressed
those needs and requirements. Also make sure that the external experts set a specific schedule
for running any tools on your networks and that they let all stakeholders know what they intend
to do and when they intend to do it.
Some organizations use contractors or managed service providers to maintain their systems and
networks. If contractors or personnel from managed service providers are going to participate in
the vulnerability evaluation, you need to decide whether to include them on the analysis team,
or treat them as external experts. It depends on the nature of the working relationship that your
organization has with them.
Either you or your external experts must also decide which tool(s) you will use. Tools include
software, checklists, and scripts. You need to decide whether to automate the process of
evaluating technology vulnerabilities (by using software) or whether to use checklists or scripts.
Checklists, for example, might be needed for components that are not currently supported by
tools (e.g., mainframes). Once you have made this decision, you need to select the specific
tools, checklists, or scripts. Figure 7-7 illustrates your choices in creating an approach for
evaluating an infrastructure component for technology vulnerabilities. The next section examines
the topic of vulnerability evaluation tools in more depth.
Let's review these concepts in the context of our example. The analysis team at MedSite decided
to contract with ABC Systems for the vulnerability evaluation. ABC Systems maintains the
systems and networks at MedSite and is contractually obligated to scan MedSite's computing
infrastructure periodically for technology vulnerabilities. Analysis team members scheduled a
meeting with their contacts at ABC Systems to convey MedSite's requirements for performing a
vulnerability evaluation of MedSite's infrastructure. They also included staff from MedSite's
contracting office to ensure that any contractual issues with ABC Systems could be addressed.
Figure 7-6 highlighted the information recorded by the analysis team for this activity.
Before you run any vulnerability evaluation tools on your organization's networks, you will need
to obtain appropriate management approval. In addition, you may decide that you need to
research available vulnerability evaluation tools in order to choose the right tools. You may also
need to acquire the selected tools and/or undergo training in their use. You should recognize
that this process may significantly delay the overall evaluation schedule.
Cost estimates may also be required, particularly if tools need to be purchased or upgraded, or if
someone needs to be trained to run them. You must also consider the cost of personnel time to
coordinate and run the tools, as well as time lost by staff members who might not be able to
perform their duties efficiently when testing occurs.
Let's examine the topic of software tools used to evaluate vulnerabilities. These software tools
assess devices or components, identifying known weaknesses (exploits) and misconfigurations.
They also provide information about the potential for success if a threat actor were to attempt
an intrusion. These types of tools are often used by threat actors attacking an organization's
systems and networks. An actor will commonly scan systems remotely from the Internet to find
vulnerabilities. The scans can provide the actor with the means to access (read, modify, or
destroy) and interrupt (deny availability of) your systems and networks.
Types of Tools
The following list highlights types of vulnerability evaluation tools that you should consider:
Operating system scanners— these target specific operating systems such as Windows
NT/2000, Sun Solaris, Red Hat Linux, or Apple Mac OS.
Checklists— these provide the same functionality as automated tools but are applied
manually. They require a consistent review of the items being checked and must be
routinely updated. For some components, however, a checklist may be all you can find or
use.
Scripts— these provide the same functionality as automated tools, but they usually have
a singular function. The more items you test, the more scripts you'll need. As with
checklists, scripts require a consistent review of the items being checked and must be
routinely updated.
The reports generated by software tools provide a wide range of content and format. First you
need to determine what information you require, and then you need to match your requirements
to the report(s) provided by the tool(s). You should also consider how much information each
tool provides and whether it provides any means to filter or interpret the information. The
reports that are generated by software tools can be quite long (300+ pages), especially when a
large number of systems are scanned.
Limitations
Vulnerability tools do have limitations. They will not indicate when some system administration
procedures are being improperly or incorrectly performed. For example, a tool will not be able to
determine whether users are being given access to more information or services than they need.
The information technology staff needs to follow good practices for defining required security
levels, setting up and managing accounts, and configuring infrastructure components. In
addition, vulnerability tools check only for known vulnerabilities; the tools will not identify
unknown vulnerabilities or new vulnerabilities. Thus, you need to ensure that you keep your
vulnerability tools current with the latest vulnerability information provided by vendors and other
sources (i.e., a catalog of vulnerabilities) and that you run them properly.
Finally, vulnerability tools may not indicate whether staff members are following good practices
(e.g., if staff members have shared passwords or ignored physical security procedures) or
whether implemented security rules are in line with your business objectives. This is why the
surveys and protection strategy discussion from processes 1 to 3 are so important; they
evaluate aspects of security that tools can't examine.
Some automated tools have the potential to cause interruptions in service or other problems
when they are run, depending upon your particular systems and the way in which they are
configured. The analysis team and any supplemental members need to discuss these possibilities
and determine who would be affected should anything happen.
This concludes process 5. You have now selected components from your organization's
computing infrastructure to evaluate for technology vulnerabilities. In process 6 you conduct the
vulnerability evaluation and interpret the results.
Process 6 completes phase 2 of OCTAVE. You execute the vulnerability evaluation approach that
you outlined in process 5, completing the data gathering for the evaluation and setting you up
for subsequent analysis and planning activities.
Knowledge of how to use and interpret the results of vulnerability evaluation tools
Catalog of Vulnerabilities
During process 6, you run vulnerability evaluation tools on your organization's systems and
networks to identify the technological weaknesses in selected infrastructure components. The
tools examine each component for known weaknesses (exploits) and misconfigurations, also
known as technology vulnerabilities. Technology vulnerabilities change constantly. To effectively
evaluate your systems and networks for technology vulnerabilities, you need to make sure that
your tools are examining components for the latest set of known weaknesses.
To ensure that you are evaluating components for all currently known technology vulnerabilities,
you must select tools that are designed to examine specific components and are aligned with an
established catalog or collection of vulnerabilities. A catalog of vulnerabilities contains a listing of
known technological weaknesses, based on platform and application. At the time of writing this
book, the one broadly recognized catalog of vulnerabilities was MITRE's Common Vulnerabilities
and Exposures (CVE)[1], collaboratively developed by representatives across the community and
maintained by the MITRE Corporation.
[1]
[Link]
Activity: Review technology vulnerabilities and summarize results
Description: The information technology staff members or external experts who ran the vulnerability tool(s) present a vulnerability summary for each critical asset and interpret it for the analysis team. Each vulnerability summary is reviewed by the team and refined if appropriate.
"CVE is a list of standardized names for vulnerabilities and other information security exposures
—CVE aims to standardize the names of all publicly known vulnerabilities and security
exposures."[2]
[2]
These words were taken from MITRE's CVE Web page dated 12/10/2001.
CVE is not considered to be a database; rather, it is a list or dictionary that provides common
names for publicly known vulnerabilities [Merkow 00]. A common naming convention enables
effective communication about vulnerabilities, their potential impact, and approaches for
addressing them. Thus, CVE enables open and shared information among vulnerability databases
and tools without any distribution restrictions.
Individual vulnerability tool providers generally use their own vulnerability databases, which are
often consistent with CVE. The CVE Web site provides considerable information on the contents
of the CVE list, how it was developed, and how it continues to be updated. CVE information can
be downloaded free or searched online.
The CERT(R) Coordination Center's (CERT/CC) Vulnerability Notes Database[3] also provides a
source of vulnerability information based on an analysis of the reports CERT/CC receives. The
Vulnerability Notes Database is a Web-based, searchable collection of the CERT Vulnerability
Notes. The database can be searched by several fields (including the CVE name) and supports
customized queries. It is also fully CVE compatible.
[3]
[Link]
This chapter addresses software-based vulnerability evaluation tools rather than checklists and
scripts. Most organizations rely on commercial or freeware tools to perform vulnerability
evaluations, rather than more time-consuming checklists and scripts. You need to make sure
that any software tool you use is consistent with a catalog of vulnerabilities, such as CVE. Check
with your vendor or examine the tools for yourself.
8.2 Before the Workshop: Run Vulnerability Evaluation Tools on Selected Infrastructure
Components
The focus of this activity rests squarely on the computing infrastructure. Your goal is to make
sure that each component you selected during process 5 is evaluated against known
technological weaknesses to see which are present in that component.
Staff members from your information technology department, or possibly external experts, take
lead roles in process 6. Remember, it takes specialized information technology and security
knowledge to run the tools and interpret the results. You need to make sure that you have the
right people engaged in this part of the evaluation.
In this step you conduct the vulnerability evaluation by running the vulnerability evaluation tools
you selected during process 5. Before you use the tools, you must verify that
The correct tool(s) is being used
All necessary approvals have been obtained and all affected personnel have been
notified
You should always make sure that you have proper permission and management approval prior
to running vulnerability evaluation tools on your networks. Your organization's information
technology department should have procedures for obtaining approval to use the tools. You
should notify personnel who may use or rely on the systems and networks being evaluated in
case something unexpected happens and they lose access to the asset(s).
Run the tools on the selected components. Remember, designated people with information
technology skills lead this activity. Members of the analysis team can be present to observe the
evaluation or participate in it directly, if appropriate.
In our sample scenario, three members from ABC Systems led the vulnerability evaluation for
MedSite. The analysis team member with information technology skills participated in the
evaluation, as did two other information technology staff members from MedSite who wanted to
learn more about vulnerability tools. They used a suite of commercial and freeware tools that
were approved for use according to ABC Systems' policies and procedures. The staff members
from ABC Systems knew how to run the tools, which they used on a regular basis. The staff
members helped the members from MedSite to become familiar with the tools and how they are
used.
Prior to running the tools on MedSite's networks, the analysis team and the staff from ABC
Systems made sure to obtain approval from MedSite's management. Everyone agreed that the
tools should be run after standard working hours at MedSite to minimize any problems that
might occur. At that point, they ran the tools. Figure 8-1 shows the components at MedSite on
which the vulnerability evaluation tools were run. Note that despite their efforts to gain
approval, they were blocked from looking at home machines due to corporate policy.
After you have completed step 1, you have to review the reports generated by the tools.
Software vulnerability evaluation tools typically produce the following types of information for
each component:
Vulnerability name
During step 2, you review the detailed vulnerability reports, interpret the results, and create a
preliminary summary of the technology vulnerabilities for each key component. A vulnerability
summary should state how many vulnerabilities should be fixed immediately (high-severity
vulnerabilities), how many should be fixed soon (medium-severity vulnerabilities), and how
many should be fixed later (low-severity vulnerabilities).
Note that the severity levels defined above are a basic set used to indicate how soon action
should be taken to address vulnerabilities. The levels are contextual for any organization, and
you should tailor them to meet your organization's needs. Some tools identify severity levels for
vulnerabilities but interpret high, medium, and low severity differently.
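As a rough sketch of how a preliminary summary might be tallied from tool output, consider the following. The function, the severity labels, and the sample findings are all hypothetical; real tools report severity in their own formats, which you would map to your organization's levels.

```python
from collections import Counter

# Hypothetical mapping from severity level to the action implied by the text:
# high -> fix immediately, medium -> fix soon, low -> fix later.
SEVERITY_ACTIONS = {
    "high": "fix immediately",
    "medium": "fix soon",
    "low": "fix later",
}

def summarize_vulnerabilities(findings):
    """Count vulnerabilities by severity for one key component.

    `findings` is a list of (vulnerability_name, severity) pairs, as might
    be parsed from a vulnerability evaluation tool's detailed report.
    """
    counts = Counter(severity for _, severity in findings)
    return {sev: counts.get(sev, 0) for sev in SEVERITY_ACTIONS}

# Illustrative data only, not output from any real tool.
report = [
    ("weak remote login configuration", "high"),
    ("outdated web server version", "medium"),
    ("verbose service banner", "low"),
    ("default administrative password", "high"),
]
print(summarize_vulnerabilities(report))
# {'high': 2, 'medium': 1, 'low': 1}
```

The point of the summary is exactly this kind of reduction: hundreds of detailed findings collapse into three counts that non-technical analysis team members can act on.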
The need for a preliminary summary is based on the assumption that the vulnerability evaluation
is not conducted by all of the analysis team members. Software vulnerability evaluation tools
produce very detailed reports that are not easily understood by personnel who do not have
information technology and security backgrounds. Remember that the analysis team includes
business staff members who probably do not configure and manage systems on a day-to-day
basis.
If additional information technology staff members or external experts conduct the evaluation, a
preliminary vulnerability summary is necessary to communicate vulnerability information to the
core analysis team members. The summary should be presented to the analysis team during the
process 6 workshop. However, if all core analysis team members are able to participate actively
in the vulnerability evaluation, you can wait until the workshop to analyze and interpret the
results.
Let's go back to our example. The staff members from ABC Systems and MedSite first
established severity levels for technology vulnerabilities, shown in Figure 8-2.
Next, they analyzed the reports generated by the tool and interpreted the results, creating a
preliminary summary for the key components. That summary is shown in Figure 8-3.
The previous activity required specialized information technology and security knowledge to
complete. Before you can move to the risk analysis activities of phase 3, you need to make sure
that all analysis team members have an appreciation of the results of the infrastructure
examination. Thus, part of this activity requires communicating technological issues effectively
to people who may not have technology backgrounds.
A second part of this activity requires you to think about the technology information in the
context of your organization. You refine the picture of current security practices and
organizational vulnerabilities. You also revisit the threat profile for each critical asset to see if the
vulnerability evaluation has exposed any new threats.
In this step the entire analysis team reviews the preliminary summary of vulnerabilities. The
information technology staff or external experts who conducted the evaluation lead the review.
During this step you must make sure that you understand the following information for each
critical asset:
You can make changes to the summary during the discussion, if appropriate. For example, you
might decide to change the definitions of severity levels. Once the summary is reviewed and
refined for each component, make sure that you document it. You should also keep the detailed
reports generated by the tools. You might need to reference them after the evaluation when you
fix specific vulnerabilities.
In our sample scenario, one member of ABC Systems presented the results of the vulnerability
evaluation to the analysis team. The presenter highlighted the types of vulnerabilities that were
found on the key components and illustrated how those weaknesses could enable attackers to
access PIDS, ECDS, and desktop computers (the critical assets that can be accessed using the
network). This activity helped all members of the analysis team to appreciate the relationship
between technology vulnerabilities and their business processes. No changes were made to the
vulnerability summary shown in Figure 8-3.
As you review and refine the summary of vulnerabilities, you may identify specific actions or
recommendations for addressing the technology vulnerabilities. If you need to address any
technology vulnerabilities immediately, make sure that you assign an action item and designate
responsibility for it.
Remember to look at the technology vulnerabilities across components and critical assets for
patterns that can help you better understand the security issues. Patterns of technology
vulnerabilities can indicate problems with the current security practices in your organization.
(See the catalog of practices in Appendix C for a list of Information Technology Security
practices.) For example, staff members may indicate that they perform a practice, but the
pattern of technology vulnerabilities might show evidence to the contrary. You also need to be
careful when establishing vulnerability patterns. Make sure that you don't jump to conclusions
based solely on one (or a few) technology vulnerabilities. Review patterns of technology
vulnerabilities that affect a critical asset as well as patterns of technology vulnerabilities across
critical assets.
Record all actions and recommendations. This information will be useful during process 8, when
you create a protection strategy, risk mitigation plans, and an action list.
At MedSite the workshop group, which included three staff members from ABC Systems and the
analysis team, performed this step in conjunction with step 1. As the presenter from ABC
Systems discussed the vulnerability summary and illustrated how attackers could exploit
technology vulnerabilities to access critical information and systems, the group identified a
number of actions that they needed to take, including a review of the policy that prevents
assessment of home PCs. These actions are shown in Figure 8-4.
Remember from our discussion in Chapter 7 that technology vulnerabilities define the access
paths that human threat actors can use to access a critical asset. Thus, when you identify a
technology vulnerability on a key infrastructure component, you have identified a weakness that
can directly lead to unauthorized action by a human threat actor. You now need to review the
threats you identified in process 4 in light of your understanding of how vulnerable your
infrastructure is. Your view of threats may have changed.
After you have reviewed and discussed the vulnerability summary, perform a gap analysis of the
threat profile for each critical asset you created during process 4. During the gap analysis, you
reexamine the unmarked branches of the threat tree for human actors using network access.
Consider the following question when you review the unmarked branches of a threat tree: Do
the technology vulnerabilities associated with the critical asset's key infrastructure components
indicate that there is a more than negligible possibility of additional threats to the asset? Make
sure that you mark any new threats on the appropriate branches of the threat tree and
document any important contextual comments or notes (e.g., refer to the vulnerability
summary).
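The gap analysis described above can be pictured as a walk over the unmarked branches of a threat tree. The data structure and branch names below are a hypothetical simplification of the actor/access/outcome branches, not the book's notation.

```python
# Each branch is a (actor, access, outcome) triple; "marked" means the
# threat was already identified during process 4.
tree = {
    ("insider", "network", "modification"): {"marked": True},
    ("outsider", "network", "disclosure"): {"marked": True},
    ("outsider", "network", "interruption"): {"marked": False},
}

# Branches that the vulnerability summary now suggests have a
# more-than-negligible possibility of occurring (hypothetical input).
newly_plausible = {("outsider", "network", "interruption")}

# Gap analysis: mark any previously unmarked branch that the technology
# vulnerabilities have made plausible, and note the supporting evidence.
for branch, info in tree.items():
    if not info["marked"] and branch in newly_plausible:
        info["marked"] = True
        info["note"] = "see vulnerability summary"
```

A branch that remains unmarked after this pass is still judged a negligible threat; the refined tree carries forward into the risk analysis of process 7.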
The workshop group at MedSite reviewed the human actors using network access threat trees
for PIDS, ECDS, and personal computers and determined that there were no threats in addition
to those already marked. (See Appendix A for the complete threat profiles for each critical
asset in the example.)
This completes the process 6 workshop. By this point in the process you have gathered a lot of
asset, threat, and vulnerability information. It is time for you to start making sense of the data
by identifying and analyzing your organization's risks. Process 7 kicks off OCTAVE's risk
identification and analysis activities.
Process 7 begins phase 3 of the OCTAVE Method, Develop Security Strategy and Plans. This
process creates the link between critical assets and what is important to your organization,
putting your organization in a better position to manage the uncertainty that it faces.
Before you work through the steps in this activity, you need to review information about your
critical assets. This is important, because you are building on information from process 4, which
you probably completed a while ago. Specifically, we suggest that you look at the following for
each critical asset:
Security requirements
Threat profiles
Areas of concern
These data indicate what is important about each critical asset (security requirements) and how
they are threatened (threat profile and areas of concern). You need to make sure that this
information is fresh in your mind as you move on to step 2.
Your objective in this step is to record a narrative description of the potential impact on your
organization of threats to your critical assets. Note the difference in the use of the terms
"outcome" and "impact." An outcome is the immediate result of a threat; it centers on what
happens to an asset. There are four possible threat outcomes: disclosure, modification,
loss/destruction, and interruption. The impact, on the other hand, is broader, describing the
effect of a threat on an organization's mission and business objectives. Consider the following
example.
Someone inside the organization uses network access to deliberately modify the
medical records database. This could result in patient death, improper treatment
delivered to patients, lawsuits, and additional staff time to correct the records.
In this example the threat outcome is modification. Notice that modification is tied to an asset,
namely, the medical records database. Now consider how modification of the medical records
database can affect the organization. The potential impact on the organization includes the
following: patient death, improper treatment delivered to patients, lawsuits, and additional staff
time to correct the records. Again, an outcome is the immediate result of the threat actor and
centers on assets, whereas the impact considers the resulting effect on the operations and
people in the organization.
We ask you to consider impact in the following areas during this activity:
Reputation/customer confidence
Safety/health issues
Fines/legal penalties
Financial
Productivity
These impact areas are contextual and should be tailored to meet the needs of your
organization. Before you conduct an evaluation, you should determine which areas of impact to
consider. One way to determine unique areas for your organization is to consider its business
objectives and make sure that impact areas are linked to your key business objectives. For
example, a military organization may add combat readiness as an area of impact.
To conduct step 2, select one of your critical assets. Review the threat profile for that critical
asset. Make sure that you note which of the threat outcomes (disclosure, modification,
loss/destruction, interruption) are part of the scenarios in the profile. Next, answer the following
questions for each outcome that appears in at least one of the scenarios:
Continue with this activity until you have described the impact in relation to all critical assets.
Make sure that you document your results.
Let's look at our example to see how MedSite's analysis team completed this activity,
specifically, how they created impact descriptions for PIDS. The team members reviewed the
information that they had recorded for PIDS. They reviewed the threat profile, the security
requirements, and areas of concern. (See Appendix A for a summary of this information for
PIDS.)
The team members noted that at least one threat resulted in disclosure of PIDS information.
Likewise, at least one threat resulted in modification, loss/destruction, and interruption of access
to PIDS information. Thus, all threat outcomes were possible. As a result, the team would have
to consider impacts in relation to all four outcomes. They discussed the key questions for each
outcome and documented the resulting types of impact on MedSite. These are shown in Figure
9-1.
We have just shown you how to begin expanding threats into risks by considering the impact on
the organization. Next, we present an approach for setting qualitative risk levels for your
organization.
During this activity you define your organization's tolerance for risk by creating evaluation
criteria. These criteria are measures against which you evaluate the types of impact you
described during the previous activity. An organization must explicitly prioritize known risks,
because it cannot mitigate all of them. Funding, staff, and schedule constraints limit how many
and to what extent risks can be addressed. This activity provides decision makers with additional
information that they can use when establishing mitigation priorities.
You need to review relevant background information to help you define evaluation criteria. Such
information includes the following:
Strategic and/or operational plans that outline the major business objectives of your
organization
Legal requirements, regulations, and standards of due care with which your organization
must comply
You can also use the narrative impact information that you documented during the previous
activity. Your goal is to develop an understanding of any existing organizational risk limits based
on strategic and operational plans, liability, and insurance-related issues. These data are
important in shaping evaluation criteria.
Evaluation criteria are highly contextual. For example, while $1 million may represent a high
impact for one organization, it might signify only a medium or low impact for another. Also,
some organizations will have risks that could result in a loss of life, but others will not. The
contextual nature of evaluation criteria is the reason every organization must define its own
criteria and why you need to review relevant background information.
In this step you define your organization's evaluation criteria. Discuss the following questions for
each area of impact (see previous activity for a discussion of areas of impact):
You are trying to define specific measures that constitute high, medium, and low risks for your
organization in each case. For example, a low impact on productivity might be three lost days,
whereas a high impact might be three weeks. As always, make sure that you record this
information.
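A minimal sketch of what such criteria might look like, using the productivity example above. The impact areas, thresholds, and units are entirely illustrative; every organization must define its own measures.

```python
# Hypothetical evaluation criteria. Productivity is measured in lost staff
# days; finances in dollars of loss. Each list pairs an upper threshold
# with the impact level it implies.
CRITERIA = {
    "productivity": [(3, "low"), (15, "medium"), (float("inf"), "high")],
    "finances": [(10_000, "low"), (100_000, "medium"), (float("inf"), "high")],
}

def impact_level(area, measure):
    """Return the qualitative level (high/medium/low) for a numeric measure."""
    for threshold, level in CRITERIA[area]:
        if measure <= threshold:
            return level

print(impact_level("productivity", 2))   # three lost days or fewer -> low
print(impact_level("productivity", 21))  # three lost weeks -> high
```

Encoding the criteria this explicitly, even on paper rather than in code, is what makes the later impact evaluation repeatable across assets and analysts.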
Now let's look at evaluation criteria in the context of an example. The analysis team at MedSite
included a member from the risk management department to help them construct evaluation
criteria. Prior to the process 7 workshop, the staff member from the risk management
department worked with one of the analysis team members to collect background information.
They gathered the organization's operational plan and information about legal requirements and
regulations.
Prior to the workshop, all members of the team reviewed the information. They selected the
following areas of impact for which to create evaluation criteria:
Reputation/customer confidence
Life/health of customers
Productivity
Fines/legal penalties
Finances
Facilities
The team discussed what constitutes a high, medium, and low impact on the organization for
each of the relevant areas and recorded the information. Figure 9-2 highlights the evaluation
criteria for reputation/customer confidence. You will find a complete set of criteria for the
example in Appendix A of this book.
You might have noticed that we are focusing only on impact at this point. A second commonly
used risk measure is probability. For information security risks, probability is a more complex
and imprecise variable than is normally found in other risk management domains, because risk
factors are constantly changing. Probability is highly subjective in the absence of objective data
and must be used carefully during risk analysis.
Because objective data for certain types of information security threats (i.e., human actors
exploiting known vulnerabilities) are lacking, it is difficult to use a forecasting approach based on
probability. Without objective data, it is impossible to develop a reliable forecast of the future
[HBR 99]. What you can do, however, is carefully analyze threats to limit the range of potential
options, so that you become able to manage your risk. In information security, you can define a
range of threats that could affect a critical asset, but you cannot reliably predict which
scenario(s) will occur. However, by broadly defining the range of threats that your organization
faces, you can be fairly certain that those that develop do so within the defined bounds.
The analysis approach that we are describing here is derived from a technique called scenario
planning. A range of threat scenarios, or a threat profile, is constructed for each critical asset.
The scenarios in each threat profile represent those in the probable range of outcomes, not
necessarily the entire range. Because data with respect to threat probability are limited for the
scenarios, they are assumed to be equally likely [Van der Heijden 97]. Thus, priorities are based
on the qualitative impact values assigned to the scenarios.
Probability values can be factored into prioritization, but you must take care when doing so.
Remember, probability is a forecasting technique based on the premise that you can forecast
threat probability with reliable precision. Thus, in many cases you may be forcing decisions
based on probability forecasts that are nothing more than guesswork. Nonetheless, incorporating
probability into a risk analysis continues to be a popular topic. Section 9.5 considers an approach
for incorporating subjective probability in OCTAVE.
There is one set of evaluation criteria for all assets; the criteria are not unique to an
asset.
Evaluation criteria are created for predefined areas of impact, which are related to the
organization's key business objectives.
Because evaluation criteria are asset-independent and address broad organizational issues, you
could create them earlier in the evaluation process. Some organizations decide to add this
activity to process 1, the senior management workshop. By doing so, these organizations are
able to gather input from senior managers with a broad perspective on organizational issues.
Another idea is to create evaluation criteria when preparing to conduct the OCTAVE Method, as
part of your tailoring activities.
If you have previously conducted the OCTAVE Method in your organization, you could use the
set of criteria that you already created. If you decide to use evaluation criteria from a previous
evaluation, remember to review them and adjust them as appropriate before using them in the
current evaluation.
No matter when you create evaluation criteria, it can be a long process. You will probably find
that it is also an iterative process. An organization will often revisit its evaluation criteria and
adjust them after trying to use them. However, once you are satisfied with your criteria, you
have a useful tool for interpreting risk. In the next activity we show how you use this tool.
This activity builds upon the first two. You use the evaluation criteria that you created previously
to evaluate the impact descriptions that you developed earlier during the first activity of process
7. By doing this, you are able to estimate the impact on the organization for each threat to a
critical asset. The ultimate result is that you can now establish priorities to guide your risk
mitigation activities during process 8.
Before you evaluate your risks, you need to review the information gathered so far from earlier
processes. Specifically, we suggest that you look at the evaluation criteria and the following for
each critical asset:
Threat profiles
Impact descriptions
These data provide you with scenarios that threaten your critical assets (threat profiles), the
resulting impact (impact descriptions), and risk measures for your organization (evaluation
criteria). Together, they provide you with a picture of the information security risks that your
organization is facing.
For each critical asset, first review the impact descriptions for each threat outcome (disclosure,
modification, destruction/loss, interruption). Some outcomes will have more than one impact
description associated with them. Next evaluate each impact description by assigning it an
impact measure (high, medium, or low). Using the qualitative evaluation criteria that you
created during the previous activity as a guide, continue evaluating impacts until you have
evaluated all of the impacts for each critical asset. Make sure you record your results.
Finally, when you add impact values to the threat profile, you create a risk profile. Essentially,
you have created a set of risk scenarios for a critical asset.
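The pairing of threats with impact values can be sketched as follows. The branch structure and the assigned values are hypothetical stand-ins for the threat profile and evaluation results described above.

```python
# Threat profile branches for one critical asset (simplified, hypothetical).
threat_profile = [
    {"actor": "insider", "access": "network", "outcome": "modification"},
    {"actor": "outsider", "access": "network", "outcome": "disclosure"},
]

# Impact values assigned per outcome during the impact evaluation,
# one value per impact description (illustrative data).
impact_values = {
    "modification": ["medium", "medium", "medium", "high"],
    "disclosure": ["medium"],
}

# A risk profile is simply the threat profile with impact values appended.
risk_profile = [
    {**threat, "impacts": impact_values.get(threat["outcome"], [])}
    for threat in threat_profile
]

for risk in risk_profile:
    print(risk["actor"], risk["outcome"], "->", risk["impacts"])
```

Each entry is now a risk scenario: a threat plus the organizational impacts it would cause, which is the input needed for setting mitigation priorities in process 8.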
Let's see how the team at MedSite evaluated impacts. The analysis team and the representative
from MedSite's risk management department started with PIDS. They reviewed its threat profiles
and impact descriptions, as well as the evaluation criteria, and evaluated each impact that they
recorded for PIDS.
Let's specifically look at how the team evaluated the impact of modification of PIDS information.
In reviewing the PIDS threat profile, they found the following threats with an outcome of
modification in the profile:
People inside MedSite can use network access to modify PIDS information accidentally.
People inside MedSite can use network access to modify PIDS information deliberately.
Outsiders (i.e., attackers) can use network access to modify PIDS information
deliberately.
People inside MedSite can use physical access to modify PIDS information deliberately.
People outside MedSite can use physical access to modify PIDS information deliberately.
A virus can modify PIDS information.
Note that the above threats are textual versions of PIDS threat profile branches. Next, the team
reviewed the various types of impact. Consider the following impact from Figure 9-1:
Medical treatment facility could lose credibility, causing patients to seek care from
another source.
This impact is related to the area of reputation/customer confidence, for which the evaluation
criteria are shown in Figure 9-2. After the team discussed this impact and examined it in relation
to these criteria, they felt that MedSite's reputation would be damaged, but that it could be
recovered with some effort and expense. Thus, the team assigned the value of "medium" to this
impact. Figure 9-3 shows the impact values for the levels of impact resulting from modification
of PIDS information.
Notice that there are four levels of impact associated with modification of PIDS information. Each
impact was evaluated, and its value recorded in the right column. Three were assigned a value
of medium, while the fourth was judged to be high. The team evaluated all levels of impact for
PIDS and the other critical assets. You will find the complete set of evaluation results in
Appendix A.
The final step is to create what we call a risk profile. To do this, you simply append the impact
values to the trees in the threat profile and record the range on the risk profile—in this case,
high to medium. Figure 9-4 shows the threat tree for human actors using network access for
PIDS with all impact values added. Note that a solid line in Figure 9-4 indicates the existence of
a risk, while a dashed line indicates no risk to the asset.
Figure 9-4. Part of PIDS Risk Profile: Human Actors Using Network Access Tree
If you have difficulty using the evaluation criteria as you evaluate the impact descriptions, then
one of the following might be occurring:
The impact description might be too vague to enable you to match it to the evaluation
criteria. If this is the case, you need to refine the impact descriptions by making them
more specific.
The evaluation criteria might not be specific enough to enable you to assign measures to
impact descriptions. In this case you need to refine the evaluation criteria by making
them more specific.
In the second case you might also want to check any impact values that were assigned using the
first set of criteria to make sure that they are consistent with the refined criteria.
This completes the basic risk analysis activities for OCTAVE. The next section presents a special
topic: incorporating probability into the risk analysis.
So far this chapter has focused on an analysis technique based on scenario planning. We
incorporated this technique in OCTAVE, because the lack of objective data for certain types of
information security threats makes it difficult to incorporate a forecasting approach based on
probability. However, we have found that there is considerable interest in using probability
during a more traditional risk analysis. This section presents some basic concepts of probability
and shows how you can include probability in the activities of process 7.
We define probability as the likelihood that an event will occur. We first consider the classical
concept of probability. This concept is the oldest historically and was originally developed in
connection with games of chance [Bernstein 96]. For example, consider a die, which is simply a
cube with six faces. Because of its symmetry, each face is as likely to come up as any other.
Thus, you could easily determine the probability of one face coming up with a roll of the die as 1
in 6. The key for this concept of probability is that all possibilities must be equally likely to occur.
Frequency
Next, we consider the frequency interpretation of probability. This interpretation indicates that
the probability of an event occurring (or a given outcome occurring) is the proportion of the time
that similar events will occur over a long period of time. Note that when using the frequency
interpretation of probability, you cannot guarantee what will happen on any particular occasion.
Thus, you never truly "know" the probability that an event will occur, because you cannot
collect enough information to know precisely what will happen in the long run.
Although you cannot know the exact value of a probability, you can estimate it by observing how
often similar events have occurred in the past. Estimates of probability made after observing
similar events are useful because of the law of large numbers [Freund 93]. This law states that
as the number of times a situation is repeated becomes larger, the proportion of successes tends
toward the actual probability of success. For example, consider multiple flips of a coin. If you flip
a coin repeatedly and chart the accumulated proportion of time that you get heads, you will find
that over time the proportion comes closer and closer to 1 in 2 (the probability of getting heads
with each flip).
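The coin-flip example is easy to simulate. The sketch below is a hypothetical illustration of the law of large numbers, not part of the OCTAVE Method itself.

```python
import random

def proportion_of_heads(flips):
    """Simulate fair coin flips and return the proportion that came up heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

random.seed(1)  # fixed seed so the run is reproducible

# As the number of flips grows, the proportion drifts toward 1/2.
for n in (10, 1_000, 100_000):
    print(n, round(proportion_of_heads(n), 3))
```

With only 10 flips the proportion can wander far from 0.5, but by 100,000 flips it sits very close to the true probability, which is exactly the behavior the law of large numbers describes.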
A common example that uses the frequency interpretation of probability is weather forecasting.
If the forecast calls for a 60 percent chance of rain, it means that under the same weather
conditions, it will rain in 60 percent of cases. Next, let's consider a variation of this case—how do
you estimate the probability of something that occurs just once? Consider how doctors estimate
the probability of how long it will take a patient to recover from an illness. A doctor can check
medical records and discover that in the past, 50 percent of the patients recovered within two
months under a specific treatment plan. By using this information from similar cases, the doctor
can predict that there is a 50 percent probability that the patient will recover within two months.
You can probably see how complicated this can get. It is not always easy or straightforward to
determine which cases are similar to the one that you are considering. In the case of the patient,
the doctor might consider not just the treatment plan being prescribed but also the patient's
age, gender, height, and weight, among other factors. This approach can be difficult and
requires individual judgment, indicating how easy it is for two individuals to arrive at different
probabilities for the same event.
Subjective Probability
The final type of probability that we will discuss is subjective probability. This approach is often
used in situations where there is very little direct evidence. You might have only some indirect,
or collateral, information, educated guesses, intuition, or other subjective factors to consider
[Freund 93]. A person determines a probability based on what he or she believes to be the
likelihood of occurrence. The key word here is "believes." Different people assess probabilities
differently, based on their personal evaluation of a situation. One disadvantage of this approach
is that it is often hard for people to estimate probability, and the same person can end up
estimating different probabilities for the same event using different estimating techniques.
In information security, you are interested in estimating the likelihood that a threat will actually
materialize. For some types of security threats, you have information upon which you can draw.
For example, you can use the frequency data to estimate the probability of natural disasters
(e.g., floods, earthquakes) in your region. You might also be able to use the frequency of
occurrence to estimate the probability of some systems problems, such as system crashes and
susceptibility to viruses. However, for some other types of threats there are no frequency data.
How would you estimate the probability of an attacker viewing confidential customer data from
your organization's customer database? How much company data do you have to estimate the
probability of this attack? Most likely, your organization has not collected sufficient data about
such attacks to enable an estimation of probability based on frequency of occurrence. If it has
occurred, it has probably happened only once or twice. In addition, you cannot be sure how
many times this attack has occurred but gone undetected. What about industry data? Is this the
kind of information that companies readily disclose? Many attacks of this type go unreported,
making it difficult to obtain sufficient data to derive probability based on frequency models.
Finally, even if you had some industry data about these types of attacks, how do you establish
which events are similar? For example, does information about past attacks in the banking
sector apply to organizations in the manufacturing sector? All of these factors make a frequency-
based estimation of probability difficult and time-consuming, if not impossible. That leaves us
with subjective probability for threats resulting from human attackers.
Subjectively estimating probability for attacks by human threat actors is tricky. You need to
consider the following factors:
Motive— how motivated is the attacker? Is the attacker motivated by political concerns?
Is the attacker a disgruntled employee? Is an asset an especially attractive target for
attackers?
Means— which attacks can affect your critical assets? How sophisticated are the attacks?
Do likely attackers have the skills to execute the attacks?
Opportunity— does the attacker have access to your critical assets? Could known
vulnerabilities or weak controls give the attacker an opening?
When estimating the above factors, people typically rely upon their experience to make educated
guesses about the likelihood of attacks occurring. You would need experience with networked
systems security as well as an understanding of the industry sector in which an organization
operates. Note that some people do not have sufficient experience to estimate probability using
subjective techniques. In fact, probabilities estimated by inexperienced people can actually skew
the results of a risk analysis.
In general, you must be careful when incorporating probability into your risk analysis. The next
section explains how you can incorporate probability into the activities of process 7 using a
combination of frequency data and subjective estimation.
We propose incorporating a combination of frequency data and subjective probability into the
OCTAVE Method's risk analysis activities. There are three activities to add if you choose to do this:
In addition to identifying the impacts of threats, you identify probability. You gather information
related to the factors that contribute to determining probability. Consider the following questions
for each threat profile:
What are the motive, means, and opportunity of each human threat actor who might use
network access to violate the security requirements of the critical asset?
What are the motive, means, and opportunity of each human threat actor who might use
physical access to violate the security requirements of the critical asset?
What historical data for your company or domain are available for all threats in the
threat profile? How often have threats of each type occurred in the past?
What unusual current conditions or circumstances might affect the probability of the
threats in the threat profile?
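For teams that record this information electronically, the questions above suggest a simple record per threat. The following sketch is purely illustrative; the field names and example values are our own assumptions, not part of the OCTAVE Method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatProbabilityInfo:
    """Probability-related information gathered for one threat."""
    threat: str                       # threat from the threat profile
    motive: str                       # subjective: how motivated is the actor?
    means: str                        # subjective: does the actor have the skills?
    opportunity: str                  # subjective: does the actor have access?
    incidents_per_year: Optional[float] = None  # objective frequency data, if any
    unusual_conditions: str = ""      # current circumstances affecting likelihood

# Example record mixing subjective judgments with (absent) objective data.
info = ThreatProbabilityInfo(
    threat="outsider modifies data using network access",
    motive="high: the asset is an attractive target",
    means="medium: the attack requires moderate skill",
    opportunity="medium: the system is reachable from outside",
    incidents_per_year=None,          # no historical data collected
    unusual_conditions="recent layoffs may increase disgruntled-insider activity",
)
```

Keeping both the subjective judgments and any objective frequency data in one record makes it easier to review them together in the later activities.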
By answering the above questions, you gather both subjective and objective data about threats
to your critical assets. You can then use them to estimate threat probability. Notice that the first
three questions and the last question are subjective in nature, while the fourth question relates
to any objective threat data you may have. You need to make sure that you record all subjective
information and objective data for each type of threat to your critical assets.
In addition to developing evaluation criteria for impact, you also create evaluation criteria for
probability. These criteria are measures against which you will evaluate each threat to establish
a qualitative probability value for that threat. Evaluation criteria for probability indicate how
often threats occur over a common period of time. When you create evaluation criteria for
probability, you define measures for high, medium, and low likelihood of occurrence for your
organization.
Review the probability information that you gathered during the previous activity and
answer the following questions:
Remember, your goal is to define probability measures using any objective data that you have in
addition to your subjective experience and expertise. You also need to make sure that your
criteria are meaningful to your organization. As always, record your results.
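As an illustration, frequency-based criteria of this kind amount to a small mapping from estimated events per year to a qualitative level. The thresholds below are invented for the example; your organization must choose ranges that are meaningful to it:

```python
def probability_level(events_per_year):
    """Map an estimated frequency of occurrence to a qualitative level.

    Thresholds are illustrative assumptions, not OCTAVE-prescribed values.
    """
    if events_per_year >= 12:    # assumed: monthly or more often -> high
        return "high"
    if events_per_year >= 1:     # assumed: at least once a year -> medium
        return "medium"
    return "low"                 # assumed: less than once a year -> low
```

For example, a threat estimated at two occurrences per year would be rated "medium" under these assumed thresholds.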
Let's examine what evaluation criteria might look like for our sample organization. At MedSite
the analysis team supplemented its skills by including the following:
A member from MedSite's risk management department. This individual has background
knowledge about many of the threat actors in the threat profile.
The expanded team reviewed background information. The people with information technology
and risk management expertise provided valuable insight into creating frequency ranges for
each probability level. Figure 9-5 shows the resulting probability evaluation criteria.
Notice that the criteria in Figure 9-5 use frequency of occurrence to define probability levels.
Team members used the data that they had for certain sources of threat in conjunction with
their subjective experience for those sources for which they had little or no objective data. Thus,
despite the use of frequency in the criteria, this represents a highly subjective look at
probability, and it should be noted as such.
Finally, in addition to evaluating the impact of each threat, you evaluate its probability. Review
all relevant background information before you complete this activity. Make sure that you review
threat profiles for each critical asset and the evaluation criteria for probability.
Select a critical asset. Assign each threat a qualitative probability value (high, medium, or low)
based on (1) the probability information that you have gathered, (2) the probability evaluation
criteria that you created, and (3) your team's collective experience and expertise.
If you find that your probabilities don't make intuitive sense—for example, if all of your threats
are evaluated as "high probability"—you might want to go back and adjust your probability
criteria. Once you are satisfied with your evaluation results, the final step is to add probabilities
to the risk profile.
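If you keep your estimates in electronic form, the assignment and the sanity check described above can be sketched as follows. The threat names, frequencies, and thresholds are hypothetical:

```python
def assign_probabilities(estimates, level_fn):
    """estimates: {threat_name: estimated events per year}.
    level_fn maps a frequency to a qualitative value per your criteria."""
    return {threat: level_fn(freq) for threat, freq in estimates.items()}

def criteria_look_skewed(assigned):
    """True if every threat landed on the same value -- a hint, as noted
    above, that the probability criteria may need adjusting."""
    return len(set(assigned.values())) == 1

# Hypothetical estimates for three threats to one critical asset.
levels = assign_probabilities(
    {"disclosure via network access": 2.0,
     "system crash": 15.0,
     "flood": 0.1},
    lambda f: "high" if f >= 12 else ("medium" if f >= 1 else "low"),
)
```

Here the three threats spread across all three values, so the criteria pass the intuitive-sense check.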
At MedSite the expanded team (the analysis team plus supplemental personnel) assigned
probability values to each threat in all threat profiles. Figure 9-6 shows part of the PIDS risk
profile with probability added to the tree.
Figure 9-6. Part of the PIDS Risk Profile (Including Probability): Human Actors Using
Network Access Tree
This concludes process 7. Chapter 10, which examines risk mitigation, revisits the topic of
probability and looks at building risk mitigation plans for each critical asset and forming a
protection strategy for organizational improvement.
The first workshop of process 8 marks the transition from identifying and characterizing risks to
addressing them. In this workshop you develop both strategic and tactical solutions designed to
manage the uncertainty your organization faces due to its information security risks. At the end
of this workshop you will have produced a proposed protection strategy for organizational
security improvement and risk mitigation plans to reduce the risks to your organization's critical
assets.
Upon completing process 7, you identified the risks to your organization's critical assets and
evaluated the potential impact on your organization of those risks. In the first workshop of
process 8 (also referred to as process 8A), you analyze all the risk-related information that you
gathered throughout the evaluation and decide how to improve your organization's security
posture.
Process 8A Workshop
Process 8A is implemented using the core analysis team members and any supplemental
personnel that they decide to include. It takes an experienced team about a day to complete the
activities in this workshop. Review all activities for this process and decide whether your team
collectively has the required knowledge and skills to complete all tasks successfully. We suggest
that your team have the following mix of skills in this process:
Process 8A requires data consolidation prior to the workshop. You need the security practice
information gathered during processes 1 to 3 (results of practice surveys and follow-on
discussions). If you haven't already compiled this information, you will need to do so prior to the
workshop. Table 10-1 summarizes the data consolidation activities, while Table 10-2 summarizes
the activities that the analysis team must perform during the workshop.
Activity Description
Compile survey results: The survey results from processes 1 to 3 are compiled according to
organizational level.
Consolidate protection strategy information: The contextual information (security practices and
organizational vulnerabilities) from processes 1 to 3 is consolidated according to organizational
level.
Table 10-2. Process 8A Activities
Activity Description
Review risk information: The analysis team members individually review the following
information that they have generated during the process:
Threats to critical assets
Areas of concern for the critical assets
A strong majority of respondents from an organizational level believe that the practice is
used by the organization. This is an indication of a current security practice that is most
probably used by your organization.
A strong majority of respondents from an organizational level believe that the practice is
not used by the organization. This indicates that the practice is most probably not used
by the organization. This is a strong indication of an organizational vulnerability.
The opinions of the respondents give no strong indication that a practice is used or not
used by the organization. Thus, the practice may be used by some individuals but is not
an organizationwide security practice. This is also an indication of an organizational
vulnerability.
You are probably wondering how to define "a strong majority of respondents." You need to
select a threshold that indicates a strong preference for a response. When we work with
organizations, we usually recommend using 75 percent as a threshold. One word of caution is
warranted here. Typically, you have only a few respondents from each organizational level.
Thus, you will not have enough responses to be able to draw definitive conclusions, but you can
use the numbers as indicators of preference. Compile the results for all organizational levels
(senior management, operational area management, staff, and information technology staff).
At MedSite the analysis team decided to use the following guidelines when interpreting the
survey results:
Used— At least 75 percent of the respondents answered "yes." The practice is most likely
used by the organization.
Not used— At least 75 percent of the respondents answered "no." The practice is most
likely not used by the organization.
Unclear— Neither of the first two criteria was met. If the percentages of "yes" and "no"
responses do not meet the 75 percent threshold, it is unclear whether the practice is
present or not. It may be that some people use the practice while others don't.
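The 75 percent "strong majority" rule can be expressed as a short function. The interface is an assumption for illustration; OCTAVE prescribes the interpretation, not any particular implementation:

```python
def interpret_practice(yes_count, no_count, threshold=0.75):
    """Classify one practice for one organizational level as
    'used', 'not used', or 'unclear' based on survey responses."""
    total = yes_count + no_count
    if total == 0:
        return "unclear"
    if yes_count / total >= threshold:
        return "used"          # likely a current security practice
    if no_count / total >= threshold:
        return "not used"      # likely an organizational vulnerability
    return "unclear"           # mixed opinions: also a vulnerability indicator
```

With only a handful of respondents per level, these labels are indicators of preference, not statistical conclusions.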
The analysis team used these guidelines when interpreting the senior managers' survey results
for security awareness and training (see Figure 10-1). The team decided that the first statement
indicated that the senior managers believe that staff members understand their security roles
and responsibilities. All of the managers indicated that this practice is currently used by MedSite.
On the other hand, it was unclear whether the second and third statements indicated the
presence of practices in the organization. The results show no strong indication whether the
managers believe that the practices are or are not currently being used at MedSite. The analysis
team interpreted the results for all of the organizational levels. Figure 10-2 shows the results for
security awareness and training for all organizational levels.
In this activity you compile contextual information about security practices that you recorded
during the knowledge elicitation workshops of processes 1 to 3. Recall that you conducted a
facilitated discussion about current security practices in the organization after participants
completed the surveys, using the surveys as a point of departure for a discussion about
organizational security practices. The facilitated discussions produced information about current
security practices and organizational vulnerabilities according to the perspectives of the
participants. You recorded this information for each workshop group.
In this activity you group each security practice and organizational vulnerability identified during
the knowledge elicitation workshops according to the practice area to which it is most related. As
in the previous activity, we suggest that you compile the information by organizational level.
Since you are transcribing information, be sure to record the information as it was originally
documented.
Let's examine how the analysis team at MedSite consolidated this information. The team
grouped each security practice and organizational vulnerability according to the security practice
areas as defined in the catalog of practices. They then added this information to the survey
results. Figure 10-3 shows the results for security awareness and training, and Appendix A
presents the results for all of the practice areas.
Up to this point in the OCTAVE Method, you have been setting the stage for problem-solving
activities. If you are in a large organization, you probably scheduled the evaluation activities
over many weeks. Thus, before you start to create solutions for your organization's security
issues, you need to review the data that you have gathered.
In this activity you review the major pieces of data that you have collected and generated
throughout the previous processes of the OCTAVE Method. You can either complete your review
individually before the workshop, or you can review the information as a group as part of the
first activity of the process 8A workshop.
Reviewing Information
You must review both organizational and asset-specific information during this
activity. First, review the compiled survey results and contextual information that you
consolidated prior to the workshop. As you review these data, make sure that you keep both the
global and asset perspectives in mind. Information about security practices used by your
organization and organizational vulnerabilities present in your organization is vital to the
development of your organization's protection strategy (using the global perspective) as well as
each risk mitigation plan (using the asset perspective).
Next, review the following risk information for each critical asset:
Potential impact on the organization for each threat and associated impact values
When you review asset-specific information, remember to look for common themes across
assets as well as themes unique to an asset. Looking for themes across critical assets can help
you to identify mitigation actions that are appropriate for more than one critical asset. In
addition, consider looking at the security practice and organizational vulnerability information in
relation to the asset-specific data. Think about how current security practices and organizational
vulnerabilities might relate to potential mitigation actions.
Let's briefly look at how the organization in our example approached this activity. The analysis
team at MedSite included a staff member from the Strategic Planning department in process 8A.
The team wanted to supplement its skills by adding someone with an organizationwide
perspective as well as someone with good planning skills. They found both in the representative
from the Strategic Planning department.
The core team members decided to review the risk information as part of the process 8A
workshop. One of the primary reasons for doing this was to help the additional team member
become familiar with the data that had been collected. The team reviewed the consolidated
security practice information as well as all asset-specific data. After about an hour and a half,
the team was ready to move on to the next activity, creating a protection strategy for the
organization.
Information security affects the entire organization. It is ultimately a business problem whose
solution involves more than the deployment of information technology. Solution strategies need
to balance the organization's long- and short-term needs by incorporating both strategic and
tactical (or operational) views of risk. An organization can take strategic actions focused on
organizational improvement (by implementing a protection strategy) as well as operational
actions focused on protecting its critical assets (by implementing risk mitigation plans). In this
activity you develop a protection strategy for organizational improvement, addressing the
strategic view of risk.
Protection Strategy
A protection strategy defines the initiatives that an organization uses to enable, initiate,
implement, and maintain its internal security. It tends to incorporate long-term organizationwide
activities.
A protection strategy leads to a series of steps that an organization can take to raise or maintain
its existing level of security. Its objective is to provide a direction for future information security
efforts rather than to find an immediate solution to every security vulnerability and concern
[Dempsey 97]. Since a protection strategy provides organizational direction with respect to
information security activities, we suggest structuring it around the catalog of practices. A
protection strategy contains approaches in each of the following practice areas:
Security strategy
Security management
Physical security
Staff security
During this activity, you define strategic initiatives in each of the above areas, defining the
direction for information security efforts in your organization. However, practical considerations
will prevent you from immediately implementing all of the initiatives after the evaluation. Your
organization will likely have limited funds and staff members available to implement the
protection strategy. After the evaluation, you must prioritize the activities in the protection
strategy and then focus on implementing the highest-priority activities.
In this activity you use the practice information that you collected during processes 1 to 3.
Specifically, you should consider the survey results across all organizational levels and
contextual security practice information (protection strategy practices and organizational
vulnerabilities across all organizational levels).
You will likely find discrepancies in the survey results across the different organizational levels.
You may also find that the survey results from an organizational level contradict the contextual
information from the same level. Your task is to make sense of the information. You should have
been present for all of the workshops to allow you to hear a variety of perspectives on what is
happening in the organization. Now you have to sort through everything that you have recorded
and heard during the previous workshops.
The survey results may give indications about the organization's current security practices. You
will be able to identify some security practices that a strong majority of respondents from an
organizational level believe are currently used by the organization. You will also identify some
security practices that a strong majority of respondents from an organizational level believe are
not used by the organization. For the majority of the practices, there will be no strong indication
in either direction.
Be careful when you use the survey results. Remember, this is not designed to be a scientific
activity; your sample groups from each organizational level were not statistically selected.
Do not try to extrapolate too much from the results. Look for instances in which the
vast majority of respondents from an organizational level responded in the same way. You
should also look for inconsistencies across organizational levels. For example, perhaps the senior
managers responded that the organization had a complete set of security policies, while the staff
members indicated that the organization does not have security policies. Obviously, there is a
discrepancy here, and it is up to you to interpret the information.
We believe that you will find the contextual information about current security practices and
organizational vulnerabilities more useful to you than the survey results as you develop your
organization's protection strategy. You will most likely find that participants have identified many
instances of what is currently working well in your organization and where there is room for
improvement.
You develop the strategy in two parts. First, you identify approaches in each strategic security
practice area that could improve or maintain your organization's security posture. Then you
explore what is required to enable good practice in the operational practice areas. In this step
we focus on the strategic practice areas.
The current practices in this area that your organization should continue to use
The current practices in this area that your organization needs to improve
To conduct step 1, you need to answer the questions about each strategic practice area
presented in Table 10-3.
Remember to review the survey and contextual security practice information as you answer the
questions in Table 10-3. Also, remember to review the actions and recommendations that you
recorded during process 6. You might find that these recommendations help you to identify
security-related strategies for your organization. Record the approaches that you identify for
each strategic practice area.
As you develop your organization's protection strategy, you should also think about any near-
term actions that could help you develop or implement the protection strategy. Make sure that
you record these action items, which you should use as input to the final activity of process 8A,
in which you formally document action items.
Let's look at how the analysis team at MedSite created the protection strategy. Remember that
the overall team developing the protection strategy includes the core analysis team members
and a staff member from MedSite's Strategic Planning department. The team considered the key
questions for each strategic practice area. As team members discussed the questions in each
area, they often referred to the surveys and contextual security practice information. Based on
the information collected, the team felt that it needed to create a strategy to improve security
awareness and training at MedSite. Figure 10-4 shows the strategies that the team selected. The
team also reviewed the actions and recommendations that it recorded during process 6, but
none of those actions and recommendations was related to security awareness and training. The
analysis team identified initiatives related to all strategic practice areas. You can find the
complete protection strategy for MedSite in Appendix A.
What can you do to ensure that all staff members understand their
security roles and responsibilities?
What can you do to improve the way in which security strategies, goals,
and objectives are documented and communicated to the organization?
management isn't really being done well.... Vulnerability management must be
investigated and the weaknesses in procedure corrected. A plan will be needed to
increase the knowledge and skills of IT and to improve the formality of ABC
Systems' procedures.
The team incorporated vulnerability management into the protection strategy along with other
strategies that could improve MedSite's information technology security practices. Figure 10-5
illustrates the activities that the team recorded for the information technology security practice
area. You can find the complete protection strategy for MedSite in Appendix A. After developing
the protection strategy for MedSite, the analysis team was ready to develop risk mitigation
plans.
This activity marks a transition from the strategic view of risk to a more tactical, or operational,
view. Rather than identifying long-term initiatives that result in organizational security
improvement, you develop risk mitigation plans that directly reduce risks to your organization's
critical assets. The focus shifts from the organization to critical assets.
Risk mitigation plans are intended to reduce the risks to critical assets. These plans tend to
incorporate actions, or countermeasures, designed to overcome the threats to the assets. In
some cases these mitigation actions can be directed toward reducing the impact on the
organization, but most often you reduce the risk to a critical asset by addressing the underlying
threat.
Mitigation plans are linked to business continuity, or enterprise survivability, because they are
based on recognizing or detecting threats as they develop, resisting or preventing threats from
developing, and recovering from threats after they develop.
There is no hierarchical relationship between the protection strategy and the mitigation plans.
The mitigation plans are generally consistent with the protection strategy, since both are based
on security practices (and there might be some overlap between them). However, mitigation
plans are not plans to implement the protection strategy. A protection strategy is based on
addressing organizational improvement and is strategic in nature, whereas mitigation plans are
focused on protecting critical assets and are tactical.
Since a risk mitigation plan includes actions designed to counter the threats to a critical asset,
we suggest structuring the mitigation plan for each critical asset according to threat categories
that apply to that critical asset. Recall that there are four basic threat categories:
Human actors using network access
Human actors using physical access
System problems
Other problems
In this step you determine the mitigation approach for each risk. When you identify a mitigation
approach, you decide which risks to accept and which to mitigate. When you accept a risk, you
take no action to reduce it and accept the consequences should the risk materialize.
When you mitigate a risk, you identify actions designed to counter the threat and thereby reduce
the risk.
Remember to review the narrative impact descriptions and impact values before you decide
whether to accept or mitigate a risk. Will your organization generally accept risks that have low
impact values? Will your organization generally mitigate risks that have high impact values?
What approach will you take for risks with medium impact values? Your answers to these
questions will help you to select a mitigation approach for each risk. Make sure that you use
your answers to support your decisions, not as absolute rules. Always remember to use your
best judgment based on your review of all background information.
To conduct step 1, decide whether to accept or mitigate the risks to each critical asset. Make
sure that you record your decisions in that asset's risk profile, along with your rationale for any
risk that you choose to accept. At the end of this step, you will have selected the risks for which
you intend to identify mitigation actions in step 2.
At MedSite the analysis team members (including the representative from the Strategic Planning
department) reviewed the risk profiles and areas of concern for each critical asset. They also
reviewed all the narrative impact descriptions they recorded during process 7. (See Appendix A
for PIDS areas of concern and impact descriptions.) The team members then set general rules
for selecting mitigation approaches. They would generally mitigate risks with high impact values
while accepting those with low impact values. The team would make decisions for risks with
medium impact values on a case-by-case basis.
Note that the team set general guidelines, not absolute rules, for high and low impact risks, and
it made the decisions for medium-impact risks entirely contextual. Team members discussed
each risk before selecting a mitigation approach.
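MedSite's guidelines can be summarized in a few lines of code. Because they are guidelines rather than absolute rules, the sketch below only proposes a default approach, leaving the final decision to the team:

```python
def proposed_approach(impact_value):
    """Suggest a default mitigation approach from a qualitative impact
    value, following the MedSite-style guidelines described above."""
    defaults = {"high": "mitigate", "low": "accept"}
    # Medium-impact risks (and any unrecognized value) get no default;
    # the team decides them case by case.
    return defaults.get(impact_value, "discuss case by case")
```

In MedSite's case, the team overrode the absence of a default for medium-impact PIDS risks because of government privacy regulations, which is exactly the kind of contextual judgment a rule of thumb cannot capture.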
Figure 10-6 shows part of the risk profile and associated mitigation plan for PIDS. (Figure 10-6
illustrates mitigation approaches and mitigation plans; we are exploring only mitigation
approaches in this step.) The figure highlights the risks in the human actors using network
access threat category. All of the risks were judged to have impact values of medium or high.
The team quickly decided to mitigate all of the high-impact risks. After some discussion, team
members decided to mitigate the medium-impact risks for PIDS, which were related to
disclosure of medical information. Since medical organizations must comply with government
privacy regulations, the team decided that the organization needed to take measures to prevent
the disclosure of personal medical information. The scribe recorded each mitigation approach
next to the impact value in Figure 10-6.
Figure 10-6. Part of PIDS Risk Profile (Human Actors Using Network Access) with
Mitigation Plan
Part of the risk profile for ECDS is shown in Figure 10-7. Remember that ECDS contains mainly
billing-related information for emergency cases. The figure shows ECDS's risks in the other
problems threat category. One risk in this category had a medium-impact value, while all other
risks had low-impact values. The team decided to accept all low risks and mitigate the medium
risk.
Figure 10-7. Part of ECDS Risk Profile (Other Problems) with Mitigation Plan
Step 2: Select Mitigation Actions
In this step you select mitigation actions, or countermeasures, designed to overcome the threats
to the critical assets. First, make sure that you review the survey results and contextual security
practice information. By doing so, you will better understand what your organization is currently
doing well and where it needs to improve, providing a basis for selecting mitigation actions. Also,
remember to review the actions and recommendations you recorded during process 6. These can
be incorporated into your mitigation plans.
You create risk mitigation plans for each critical asset. Recall that you structure each mitigation
plan around the threat categories that apply to that critical asset. If there are no risks in a given
threat category, you will not need to develop a plan for that category. For each critical asset,
answer the following questions as you identify mitigation actions for a threat category:
What actions could you take to recognize or detect this type of threat as it is developing?
What actions could you take to resist or prevent this type of threat from developing?
What actions could you take to recover from this type of threat if it develops?
What other actions could you take to address this type of threat?
How will you test or verify that this mitigation plan works and is effective?
As you consider the questions for a given threat category, think about the administrative,
physical, and technical practices that you could implement to reduce the risks to the critical
asset. Complete and document mitigation plans for all critical assets.
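One possible data shape for a per-asset mitigation plan structured by threat category, following the five questions above, is sketched below. The category and action strings are illustrative assumptions:

```python
def new_mitigation_plan(categories):
    """Create an empty plan with one action list per question for each
    threat category that applies to the critical asset."""
    questions = ("recognize", "resist", "recover", "other", "verify")
    return {cat: {q: [] for q in questions} for cat in categories}

# Example: a plan covering one applicable threat category.
plan = new_mitigation_plan(["human actors using network access"])
plan["human actors using network access"]["resist"].append(
    "establish vulnerability management policies and procedures"
)
```

Categories with no risks simply never appear in the plan, mirroring the guidance above that no plan is needed for an empty category.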
During this activity, you identify a range of mitigation actions. After the evaluation, you prioritize
the mitigation actions by examining the costs and benefits of each action and by considering any
organizational budget and staff constraints. You then focus on implementing the highest-priority
mitigation actions.
When you develop risk mitigation plans, think about any near-term actions that could help you
implement the plans. Make sure that you record these action items. You will use these as input
for the final activity of process 8A, in which you formally record action items.
The analysis team at MedSite reviewed the survey results and contextual security practice
information, as well as the actions and recommendations that it recorded during process 6.
Team members considered these data when they created risk mitigation plans for MedSite's
critical assets. Figure 10-6 shows part of the risk mitigation plan for PIDS. One of the
recommendations from process 6 was to improve the way in which technology vulnerabilities
were being managed. As you can see in Figure 10-6, the analysis team included a mitigation
action to establish vulnerability management policies and procedures. Note that the team also
included measures of success in the mitigation plans.
Figure 10-7 illustrates the risk mitigation plan for the other problems threat category for ECDS.
Note that the mitigation action is related to the only risk in that category that is being mitigated.
Next, look across mitigation plans for common themes and gaps. You want to ensure that the
risk mitigation plans are consistent with each other. You must resolve any inconsistencies that
you find. You should also note which mitigation actions might reduce risks to more than one critical asset; these should be high on your list for implementing after the evaluation.
MedSite's analysis team reviewed the mitigation plans for the organization's critical assets. One
theme that was consistent across many of the plans was the need for enhanced training—both
general security awareness training for users and enhanced training for MedSite's information
technology staff—in how to configure and maintain systems and networks securely.
The team also noticed another interesting point when they reviewed mitigation plans across
critical assets. Figure 10-8 shows the risks and mitigation plan for other problems threats for
PIDS, whereas Figure 10-7 shows the risks and mitigation plan for other problems threats for
ECDS. Notice that many of the risks result from the same threat sources. Most of the risks in the
other problems category for ECDS were accepted, whereas all of the risks in this category for
PIDS were mitigated. The analysis team noted that the following mitigation actions for PIDS also
helped mitigate risks in the other problems category for ECDS that were accepted:
Enhance training for IT staff in securely configuring and maintaining systems and
networks.
Figure 10-8. Part of PIDS Risk Profile (Other Problems) with Mitigation Plan
Remember that focus on the critical few is one of the principles of OCTAVE. The above example
with ECDS and PIDS shows why it is effective. Think of the assets in your organization as
forming a chain. When you identify critical assets, you identify the weakest links in the chain. If
the weakest links are stressed too much, the chain could break apart. Likewise, if something
happens to your organization's critical assets, your organization could suffer catastrophic
consequences. Thus, the critical assets define the level of protection that you need in your
organization. You will find that when you improve your organization's security practices based on
the risks to critical assets, you improve the way in which you protect all similar assets.
Consider the ECDS and PIDS example above. When MedSite updates its contingency plans to
include addressing power supply problems, it will address risks for all assets that are threatened
as a result of problems with MedSite's power supply. Likewise, if information technology staff
members receive enhanced training in how to configure and maintain systems and networks
securely, they can apply that knowledge to all systems and networks. All risks to systems and
networks resulting from mistakes and errors made by people who do not have adequate training
will be reduced. The improved practice employed by the information technology staff members
will thus be applied to both critical and noncritical systems.
This is the final step in creating risk mitigation plans. In step 3, you looked across mitigation
plans for common themes and gaps to ensure that the risk mitigation plans were consistent with
each other. In this step you determine whether any themes that emerged in step 3 need to be
incorporated into the protection strategy. Make sure that you update your organization's
protection strategy accordingly.
MedSite's analysis team did not find any new themes. However, they did note that security
awareness and training was a common theme among the risk mitigation plans and the
organization's protection strategy. In the protection strategy for security awareness and training
(see Figure 10-4), the team documented the need for security awareness training for system
users at MedSite, as well as enhanced training for the information technology staff, in how to
configure systems and networks securely. Mitigation actions for PIDS (see Figures 10-6 and 10-8) emphasized the importance of improved security awareness and training as it relates to PIDS.
This will likely be a high priority for MedSite after the evaluation.
Thus far, you have created a protection strategy and risk mitigation plans. In the final activity of
process 8A, you document near-term action items that your organization needs to address.
In the previous two activities you developed a protection strategy for organizational
improvement and risk mitigation plans to reduce the risks to your critical assets. In this activity
you look for near-term actions that people in your organization can immediately start to
implement. By employing a few simple actions, your organization can start to improve in a few
areas. By taking these initial steps toward improvement, your organization can start to build the
momentum needed to implement its protection strategy and risk mitigation plans.
Action List
An action list defines any action items that people in your organization can take in the near term
without the need for specialized training, policy changes, etc. Because items on the action list
have little cost associated with them, you can start implementing them immediately after the
evaluation. Implementing action items is an easy way to start improving your organization's
security posture. Here are two examples of action items that can be placed on the action list:
Assign an IT staff member to fix the high-severity vulnerabilities that were identified
during phase 2 of OCTAVE.
Assign the analysis team and the organization's management an action to define the
details of implementing the protection strategy.
Any management support that is required to facilitate completion of each action item
As you created the protection strategy and risk mitigation plans, you should have recorded any
near-term actions that could help you implement the strategy and plans. Review your list of
actions and decide if any are appropriate for the action list.
Think about any additional near-term actions that could help you implement your protection
strategy and risk mitigation plans. What near-term actions need to be taken? Remember to
document all action items.
Now that you have identified specific action items for the action list, you need to assign
responsibility for completing them as well as a completion date. Answer the following question
for each action item on your list and record the results:
At MedSite the analysis team members reviewed the action items that they recorded when they
developed the protection strategy and the risk mitigation plans, as well as the actions and
recommendations from process 6. MedSite's action list is shown in Figure 10-9.
Develop a card that tracks administrators and their capabilities. Also establish
points of contact for incidents.
The team felt that this action should be included on the action list, and it is the second item in
Figure 10-9. The third action item in Figure 10-9 is one of the recommendations from process 6.
The last action item was documented during the development of the risk mitigation plan for
paper medical records. The team believed that an informal physical security test within the next
90 days was important, because MedSite has encountered some problems with the physical
security of medical records. For each action item that team members documented, they also
assigned responsibility, established a completion date, and identified any management actions
that would facilitate completing that action.
The order in which we present these activities is not mandatory. Different teams will address the
activities in different orders, depending on their preferences. This particular sequence requires
you to think strategically first. However, if strategic thinking is not one of your team's strengths,
you might want to start by identifying near-term action items and then develop risk mitigation
plans. You could then look across the action list and mitigation plans to see what strategic
themes emerge, providing input for your protection strategy.
On the other hand, you might want to first examine the tactical view of risk by developing risk
mitigation plans. Once you have identified tactical actions, you can identify strategic themes and
near-term action items.
Make sure you think about how you want to approach the activities and address them in the
order that makes most sense for you. However, remember that creating your strategy and plans
is not a lockstep process. No matter what order you choose for the activities, you will likely need
to iterate among the activities.
This completes the activities of the first workshop of process 8. You now have one more hurdle to clear before the evaluation is over. In the second workshop of process 8, you review the
evaluation results with your organization's senior managers and allow the managers to refine the
protection strategy, risk mitigation plans, and action list. Before we move on to Chapter 11, we
need to complete our discussion of incorporating probability into the evaluation. The next section
looks at how to incorporate probability into risk mitigation decisions.
Chapter 9 presented the concept of probability and showed how it could be incorporated into
process 7 of OCTAVE. The chapter then focused on the problems of estimating probability in the
absence of extensive data on threats. This section revisits the concept of probability, but this
time focusing on using it when making risk mitigation decisions. Specifically, it addresses issues
relating to expected value.
The expected value (or expected loss) for a risk is the potential loss that could occur (or impact value) multiplied by its projected frequency of occurrence (or probability). The
expected value is often measured in annualized loss expectancy (ALE), that is, the monetary loss
that can be expected in a year [Hutt 95].
Many common risk analysis approaches use expected value (also referred to as risk exposure) to
set priorities. Higher expected values in a given year correspond to a higher-priority risk. In
addition, conventional wisdom dictates that funds dedicated to mitigation activities in a year
should not exceed the expected ALE.
Using expected value is a straightforward way of setting priorities. However,
there is one major problem with this approach. Extreme and catastrophic events have low
probabilities and a very high impact on the organization. An analysis based solely on expected
value equates catastrophic events with those that have a high probability but very low
consequences [Haimes 98]. Thus, decision makers relying only on expected values when making
decisions would put the same effort into mitigating a high-probability, low-impact event as a
low-probability, high-impact (i.e., catastrophic) event.
Let's take a look at how you would assign expected values to risks. First, let's consider how to
determine the expected value in a quantitative analysis. Since expected value is the product of a
particular risk's impact value and probability, you simply multiply those two numbers to calculate
the expected value for that risk.
Now, how would you determine the expected value in a qualitative analysis? Remember, we do
not use numbers in a qualitative analysis; rather, we assign "high," "medium," and "low" values
to the impact and probability of each risk. To look at the combination of impact and probability,
use a table like the one in Figure 10-10 [Dorofee 96].
For example, for a risk that has an impact value of "medium" and a probability of "high," the
expected value would be "high." The expected value for the risk lies in the table cell where the
individual probability and impact values for that risk intersect. See Figure 10-11 for a graphic
representation of this example.
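The table lookup can be sketched as a small dictionary. Note that only three cells below — (medium impact, high probability), (low, high), and (high, low) — are taken from the examples in the text; the remaining cells are plausible assumptions, not necessarily the exact matrix in Figure 10-10:

```python
# Qualitative expected value as a table lookup (cf. Figure 10-10).
# Only the (medium, high), (low, high), and (high, low) cells come from
# the text's examples; the other cells are assumptions for illustration.

EXPECTED_VALUE = {
    # (impact,  probability): expected value
    ("high",   "high"):   "high",
    ("high",   "medium"): "high",
    ("high",   "low"):    "medium",
    ("medium", "high"):   "high",
    ("medium", "medium"): "medium",
    ("medium", "low"):    "low",
    ("low",    "high"):   "medium",
    ("low",    "medium"): "low",
    ("low",    "low"):    "low",
}

def expected_value(impact, probability):
    # The expected value lies where the impact row and probability column intersect.
    return EXPECTED_VALUE[(impact, probability)]

print(expected_value("medium", "high"))  # high
```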
Let's take a look at expected value in the context of our running example (see Figure 9-6).
Chapter 9 showed how the analysis team could have estimated probability using the risks to
PIDS resulting from human actors using network access. Figure 10-12 presents the expected
values for those risks determined using the matrix in Figure 10-10.
Figure 10-12. Expected Values (EV) for Part of PIDS Risk Profile: Human Actors Using
Network Access Tree
Notice as you look across the risks in the tree that there is a tendency toward "medium." Also
note that any potential catastrophic event that has a "low" probability and "high" impact would
be assigned a "medium" expected value. A high-probability, low-impact risk would also be
assigned a "medium" expected value. But would you mitigate these two risks in the same way?
The first risk might put you out of business, whereas the second might merely be a nuisance.
Using expected values alone obscures the significant differences between these two cases.
Expected value in a qualitative risk analysis approach does separate the extremes and can be
used to help guide decisions. However, you must not depend upon it completely, for the reasons
mentioned above.
As we finish our discussion about expected value, we want to warn you about a common mistake
that we see in many risk analysis methods. These methods express "high," "medium," and "low"
as numerical values. For example, a method might assign "high" a value of 3, "medium" a value of 2, and "low" a value of 1.
To determine expected value, the numbers are multiplied (as in a quantitative analysis). Figure
10-13 shows the resulting matrix.
We caution you against this approach. A qualitative ranking does nothing more than indicate relative priority. If you assign numbers to those values and then perform mathematical operations on the numbers, you are implying a quantitative relationship that you have not
established. For example, it might be tempting to say that a high-impact, medium-probability
risk has twice the expected value of a high-impact, low-probability risk, because their respective
expected values using the numerical values in Figure 10-13 are 6 and 3. However, because we
have looked only at relative ranking of impact and probability, we can merely conclude that we
consider the first risk greater than the second. We cannot begin to say how much greater.
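A small sketch makes the point concrete: multiplying ordinal scores preserves the ranking, but the resulting magnitudes (and ratios between them) carry no quantitative meaning. The 3/2/1 scale below is the hypothetical one from the example above:

```python
# Sketch of the pitfall: assigning 3/2/1 to high/medium/low and multiplying.
# The products give a valid relative ordering, but ratios between them
# (e.g., "6 is twice 3") are meaningless, because the scale is ordinal.

SCORE = {"high": 3, "medium": 2, "low": 1}

risk_a = ("high", "medium")   # impact, probability -> 3 * 2 = 6
risk_b = ("high", "low")      #                        3 * 1 = 3

ev_a = SCORE[risk_a[0]] * SCORE[risk_a[1]]
ev_b = SCORE[risk_b[0]] * SCORE[risk_b[1]]

print(ev_a > ev_b)   # True: risk A ranks higher than risk B -- that much is valid
print(ev_a / ev_b)   # 2.0 -- but this ratio has no real-world meaning
```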
So beware of assigning too much meaning to relative rankings. We have seen some risk
analyses that assign numerical values to relative rankings and then put those numbers in a
"proprietary algorithm." The results can be meaningless and dangerous if people base their
decisions solely on the resulting numbers.
Uncertainty
Finally, we offer one last caution about using expected values. Consider a quantitative risk
analysis approach where impact values and probabilities are quantitatively estimated. In this
case, expected values can be calculated using multiplication. Many approaches that incorporate
quantitative estimates of impact and probability leave out one major concept, namely, the
uncertainty associated with each numerical value. When you quantitatively estimate impact and
probability, each estimate will have an uncertainty associated with it. The uncertainty depends
on the data that you have gathered and the statistical approach that you use to estimate each
value. The resulting expected value has an uncertainty that is a combination of the individual
uncertainties of impact and probability. Many risk analysis approaches produce a number as the
expected value but give no indication of the confidence level (or uncertainty range) associated
with it. As a result, less sophisticated decision makers will have a false sense of security in the
quantitative expected value produced by the tool. Therefore, you must also beware of assigning
too much meaning to quantitative results of a risk analysis. Know how the values were
estimated and calculate the resulting uncertainty associated with each number.
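One way to make that uncertainty visible — a sketch with hypothetical uniform distributions, not a method from the book — is to propagate the individual uncertainties with a simple Monte Carlo simulation and report a range rather than a single expected value:

```python
# Sketch: propagate uncertainty in impact and probability estimates via
# Monte Carlo, reporting an interval rather than a single expected value.
# The distributions and parameters are hypothetical.

import random

def simulate_expected_loss(trials=100_000, seed=42):
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        impact = rng.uniform(40_000, 60_000)   # uncertain single-loss estimate
        annual_rate = rng.uniform(0.05, 0.15)  # uncertain frequency estimate
        losses.append(impact * annual_rate)
    losses.sort()
    # Return a 90% interval of annualized loss instead of one number.
    low = losses[int(trials * 0.05)]
    high = losses[int(trials * 0.95)]
    return low, high

low, high = simulate_expected_loss()
print(f"ALE is roughly ${low:,.0f} to ${high:,.0f}")
```

Reporting the interval instead of a point estimate keeps decision makers from placing false confidence in a single calculated number.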
Our overall message is simple: be careful how you incorporate probability into your decision-
making process. People who have less experience with risk evaluations will most likely have
greater confidence in their estimates of impact than in their estimates of probability. Thus, you
might want to use impact as the primary driver when you decide whether to mitigate or accept a
risk. You could use probability to help determine which mitigation plans to implement first. For
example, you might use scarce resources to address a medium-impact, high-probability risk in
the near term. Later on, you might be able to free up enough resources to address a medium-
impact, medium-probability risk. In this case you are using probability to refine your priorities by
determining when to implement mitigation plans. You are not using probability to drive the
decision of whether to accept or mitigate the risk.
Unfortunately, we cannot offer a silver bullet or a step-by-step process that applies in all
circumstances. No matter which risk analysis method you decide to use, you need to understand
the limitations of any information that you gather. Risk analysis methods support your decision
making and help you to make reasonable decisions about information security; they do not
replace your need to think. Just remember that you always have to use your best judgment
when making decisions in any risk analysis approach.
Chapter 11 presents the final workshop of the OCTAVE Method, in which you take the results of
the evaluation and present them to your organization's senior managers.
The second workshop of process 8 marks the end of the OCTAVE Method. Although the formal
evaluation process comes to an end, the organization needs to consider what happens after the
evaluation. This workshop sets up the transition from conducting the evaluation to implementing
the results, to ensure that your organization is in a position to benefit from the whole process.
One of the most difficult tasks in any improvement activity is maintaining the momentum
generated during an evaluation. As you conduct an evaluation, you spend concentrated time
gathering information, analyzing it, and creating solutions. Because of the intensity of these
activities and the well-defined goals of the process, you develop a momentum that culminates in
creating solution strategies and plans. It's easy to think that the hard work is over when you
finish the final activity, but actually it is just beginning. In this workshop your organization's
senior managers must think about what happens after the evaluation, setting forth the direction
for security improvement and establishing their sponsorship for ongoing security improvement.
Process 8B Workshop
Process 8B is a facilitated workshop led by the analysis team and attended by the organization's
senior managers. In this workshop you incorporate the senior management perspective into the
protection strategy, risk mitigation plans, and the action list. The workshop can be conducted in
about two to three hours under the direction of an experienced facilitator. One member of the
analysis team assumes the role of scribe and records any changes to the protection strategy, the
risk mitigation plans, and the action list. Review all activities for process 8B and decide whether
your team collectively has the skills to conduct all the activities successfully. We suggest that
your team have the following skills for this workshop:
Facilitation skills
Before you meet with senior managers, you need to compile all information in a concise,
meaningful format. Table 11-1 summarizes the preparation activity, while Table 11-2 highlights
the activities that are performed during the workshop. The next section kicks off the
presentation of process 8B by highlighting some ideas about what to include in a presentation to
your organization's senior managers.
Activity: Description

Present risk information: The following risk-related information that was generated during the OCTAVE process is presented to senior managers:
Asset information

Review and refine protection strategy, mitigation plans, and action list: The protection strategy, risk mitigation plans, and action list are presented to senior managers. The managers then refine each as necessary.

Create next steps: The senior managers decide how to implement the protection strategy, risk mitigation plans, and action list by determining (1) what steps will be taken after the evaluation, (2) who will be responsible for the next steps, and (3) when these steps will be completed.
You need to prepare thoroughly for your meeting with senior managers. This task is more
difficult than it appears. Since most senior managers have a limited amount of time to spend on
efforts such as this, you need to be able to set the context for the managers and get input from
them in a span of an hour or two. You must help them understand which assets are critical to
the organization, why they are critical, and how they are at risk. You also need to help managers
understand what the organization is currently doing well to protect its critical assets and where
its protection measures are missing or inadequate. Finally, you need to present solutions that
you developed to improve how the organization is protecting its critical assets. In this activity
you prepare for your meeting with your organization's senior managers by deciding how you will
present the issues identified during the evaluation and the solutions that you developed to
address those issues.
Your presentation will likely be broken into the following two themes: (1) background risk
information and (2) proposed solutions. Table 11-3 shows key elements that you should consider
including in the presentation.
Remember to consider the requirements of your audience (the organization's senior managers)
before you create your presentation, as well as the time constraints involved. Tailor your
presentation to the needs of your managers and make sure that it is consistent with any
requirements or conventions in your organization. You might consider providing senior managers
with a summary of the evaluation results in advance. Each organization and each set of senior
managers are different, so there are no universal rules, but Table 11-3 provides some guidelines
and ideas for you to consider. When preparing to meet with senior managers, you need to rely
upon your experience in the organization and use your best judgment. Appendix A presents a
sample final report from our case example.
Now that you have created a presentation for your organization's senior managers, you are
ready to meet with them. The next section looks at the process 8B workshop.
Make sure that you summarize the above data in your presentation. You want to make sure that
the managers understand the information, but you don't want to spend too much time on the
details. After you have provided the background data, ask the managers if they have any
questions. Let them know that they will next review the protection strategy and risk mitigation
plans.
11.4 Review and Refine Protection Strategy, Mitigation Plans, and Action List
In the previous activity you set the context for the senior managers. In this activity you have the
following two objectives:
To present the protection strategy, risk mitigation plans, and action list that you
developed in process 8A
Remember that your organization's senior managers have a broad, organizationwide perspective
that you might not have. Senior managers understand the parameters within which the
organization must operate. They have an appreciation for how many organizational resources
can be applied to information security improvement efforts, as well as the constraints that must
be factored into the protection strategy and risk mitigation plans.
Step 1: Present the Protection Strategy, Risk Mitigation Plans, and Action List
One member of your analysis team should present solutions while the other team members
support the lead presenter as appropriate. First you need to establish ground rules for reviewing
and refining strategies and plans. We suggest that you ask the managers to wait until they have
seen all strategies and plans before they suggest changes. By waiting, they will be able to get a
feel for the "big picture." If you think that the managers will nevertheless want to dive into the
details of the strategy and plans immediately, you might try to define the big picture right away
with a summary of the information. Alternatively, you can temporarily record any potential
changes on flip charts and address them later in the workshop. Use your best judgment, be
flexible, and manage the activities as best you can.
First, define what constitutes a protection strategy; next, present the one that you developed in
process 8A; and then ask the managers if they have any questions about it. Try to postpone
discussing any changes to this strategy until after you have also presented the risk mitigation
plans and the action list. If the managers insist on making changes before seeing the other
items, at most this might require some iteration when they see the mitigation plans or action
list.
Now define the term risk mitigation plan, pointing out that there is no hierarchical relationship
between the protection strategy and the mitigation plans. (The protection strategy defines long-
term organizational initiatives, whereas risk mitigation plans define actions to reduce the risks to
the organization's critical assets.) Present each risk mitigation plan to the managers, and ask
them if they have any questions. Again, try to postpone discussing any changes to the plans
until after you have presented the action list.
Finally, discuss what an action list involves and present the action list you have created. Ask the
managers if they have any questions about the list. After this you are ready to ask the managers
for their thoughts on refining the protection strategy, risk mitigation plans, and action list.
Step 2: Refine the Protection Strategy, Risk Mitigation Plans, and Action List
Ask the senior managers if they want to propose any refinements or modifications to the
protection strategy, risk mitigation plans, and action list. Guide the discussion to cover all
proposed changes, and make sure that the managers think about any implications or ripple
effects that these might cause. Remember to record any changes to the protection strategy, risk
mitigation plans, and action list.
Let's look at how this activity was implemented in the context of our sample scenario. At
MedSite the following people were present for the meeting with MedSite's senior managers:
One staff member from ABC Systems who led the process 6 vulnerability evaluation
The team used the following approach to present to MedSite's senior managers:
One of the analysis team members presented the risk information to the managers.
The staff member from MedSite's strategic planning department presented the strategy,
risk mitigation plans, and action list.
The manager of MedSite's information technology department, the information technology staff
member, and the representative from ABC Systems were present to participate in any
technological discussions that might arise. MedSite's senior managers, upon reviewing the
recommendations of the analysis team, made no major changes to the protection strategy,
mitigation plans, and action list. Their primary concern was to determine a practical way to
implement the recommendations with a limited budget and other resources.
After completing this activity, you have one last evaluation activity to conduct. Your
organization's senior managers now need to decide what the organization will do to implement
the results of the evaluation.
This activity marks the end of the evaluation process. In many ways it is one of the most critical
steps; as you now ask your organization's senior managers to think about what happens after
the evaluation, they determine the ultimate direction for security improvement efforts in the
organization.
What will you do to ensure that your organization improves its information security?
What can you do to support this security improvement initiative? What can other
managers in your organization do?
Notice that the questions really focus on what the senior managers plan to do to enable and
encourage implementation of the evaluation results as well as ongoing security improvement
activities. Facilitate a discussion around each question, and make sure that you record all next
steps.
At MedSite the senior managers determined a set of next steps that were intended to build on
the results of the evaluation. Figure 11-1 shows the next steps for MedSite. The managers
decided to get the strategic planning department involved in implementing the protection
strategy and risk mitigation plans. They also decided to continue their discussion of how to
manage implementation of the protection strategy and mitigation plans at the next management
team meeting. This constitutes a first step in making security management a permanent part of
their organizational processes.
The second workshop of process 8 is the final evaluation activity. After the workshop is
completed, you formally document the results of the evaluation. The format for documenting all
OCTAVE results should fit your organization's normal documentation guidelines and should be
tailored to meet your organization's needs.
In addition, make sure that you ask the senior managers whether they would like a results
briefing for the evaluation participants or other staff members in the organization. Encourage the
managers to make the results of the evaluation known, in line with the key OCTAVE principle of
open communication.
At MedSite the analysis team completed its documentation of the evaluation results. One week
later, the team presented the results to all of the participants. ABC Systems sent two
representatives to support the presentation. After the meeting, the representatives from ABC
Systems met with MedSite's information technology manager and staff. They discussed how to
prioritize vulnerabilities and set up a more coordinated, routine vulnerability evaluation and
correction process.
You should note that your work is not finished when you complete OCTAVE. After the evaluation,
you must identify high-priority activities in the protection strategy as well as high-priority
mitigation actions. Doing so will focus your post-evaluation activities. Remember, organizational
budget and staff constraints will prevent you from immediately addressing everything in the
protection strategy and risk mitigation plans. Finally, to improve your organization's security
posture, you need to manage your information security risks by implementing the results of the
evaluation. We examine concepts related to managing risks in Chapter 14, which presents a
framework for information security risk management.
This concludes our presentation of the OCTAVE Method and brings to a close Part II of this book.
Part I defined the essential principles, attributes, and outputs of the OCTAVE approach. Part II
presented the OCTAVE Method, an evaluation methodology consistent with the OCTAVE
approach.
It comprises eight processes: four in phase 1, two in phase 2, and two in phase 3.
It includes facilitated discussions with various members of the organization and self-
directed workshops in which members of the analysis team conduct a series of activities
on their own.
We designed the OCTAVE Method for large organizations. However, you can use it as a baseline
or starting point from which to tailor the method for a variety of organizational sizes, operational
environments, or industry segments. Part III examines tailoring options and considers how to
adjust the OCTAVE Method to meet the needs of both small and complex organizations while
remaining faithful to OCTAVE's principles, attributes, and outputs. It also lays the groundwork
for managing your information security risks after OCTAVE.
Chapter 12 describes a number of ways in which you can tailor the processes,
activities, and artifacts of the OCTAVE Method. Chapter 13 highlights examples of
how OCTAVE is being applied in a range of operational environments. Finally,
Chapter 14 presents a framework for managing information security risks.
Chapter 12
So what do we mean by tailoring? Almost any option that doesn't violate the basic set of
requirements of the OCTAVE approach qualifies, and that list is very long. This chapter describes
a variety of tailoring options, and Chapter 13 presents several practical implementations based
on these options.
How would you approach evaluating information security risks in a small organization? You might
begin with the following goal in mind: streamlining the OCTAVE Method for efficient data
collection and analysis activities. Consider the requirements of a small medical office consisting
of five physicians, seven nurses, and four administrative staff members, where an external
vendor maintains the systems and network for the office. The personnel in the office do not
possess significant information technology expertise. Thus, they would work with their external
vendor for the technological parts of the evaluation (phase 2). In addition, because of the small
number of staff in the organization, only one knowledge elicitation workshop is really needed.
Processes 1 to 3 would therefore be condensed into one self-directed (as opposed to facilitated)
workshop, with only the analysis team participating. Chapter 13 explores the evaluation
requirements of small organizations in more detail.
Next, let's look at the financial community, which must comply with a standard of due care and
federal regulations [Gramm 01]. The community could replace the catalog of practices used
during the OCTAVE Method with one specifically tailored to their standard of due care and
regulations. The community could also revise the generic threat profile, adding threats to cover
fraud, electronic banking transactions, money laundering, and international finance and
accounting issues. The actual processes of the OCTAVE Method could remain largely untouched,
with only the artifacts being modified.
Consider how consultants might use this approach. A consulting firm could tailor the OCTAVE
Method by having one of its consultants facilitate the evaluation. This facilitated version still requires an
analysis team staffed by the client's personnel to play an integral role in making all decisions
during the evaluation. The consultant's role is to facilitate the process and support activities after
the evaluation to help institutionalize improved security practices. The activities during each
process are modified to accommodate the use of an external facilitator.
Finally, each individual process of the OCTAVE Method can be tailored to meet the needs of any
organization. If a company has extensive, secure, Web-based collaborative tools, the surveys
used during processes 1 to 3 can be distributed, completed, and collected via the company's
intranet.
As you can see, tailoring can cover a wide variety of issues. We can't address every permutation
in this chapter. What we can do is focus on tailoring the processes and artifacts of the OCTAVE
Method for an organization. The next section takes a closer look at what an organization might
do to tailor the OCTAVE Method to suit its needs.
The first step as you start preparing to conduct the OCTAVE Method is to decide where you need
to modify the method for your organization. The ideas that we present in this section do not
address all of the ways in which the method can be tailored. We have included ideas to help you
think about your organization's unique needs and decide which aspects of the method you need
to adjust to meet those needs. There are two major aspects of the OCTAVE Method that can be
tailored: the evaluation activities and the artifacts used during the method. We start with how to
tailor the evaluation.
The following list highlights some major areas in which you can modify the OCTAVE Method for
your organization:
Order of processes
Policy reviews
Schedule
Outsourcing
Risk probability
Automated tools
This section addresses each of these aspects, starting with the order in which you conduct
processes 1 to 3.
Order of Processes
Strongly hierarchical organizations might prefer reversing the order of the knowledge elicitation
workshops, interviewing senior managers last. If you hold the senior management workshop
after the other knowledge elicitation workshops, you can provide the senior managers with the
results of the other workshops and then ask them to address any gaps that they see. You need
to be careful to ensure that senior managers contribute their perspectives and do not simply
rubber-stamp the results presented to them. Remember that the point of processes 1 to 3 is to
build a global perspective of organizational security knowledge. Thus you need everyone's input,
especially that of senior managers. However, we acknowledge that some senior managers prefer
to review results and then provide their input. They are often able to do this more quickly than if
they participated in a full workshop. Your team should understand the needs of your senior
managers and adjust the evaluation process to address any management constraints, while still
obtaining the required input.
Policy Reviews
Policy reviews can be a useful addition to the beginning of an evaluation. Your analysis team
gathers and reviews the policies, procedures, regulations, laws, and standards of due care that
apply to your organization. For some companies this task could take considerable time; others might be
able to complete it quickly, depending on the nature of the policies that currently exist. You may
find that you can use the results of this review to tailor the evaluation. For example, you can
tailor the catalog of practices to meet a new or emerging standard of due care. Or you might be
able to use this information when you develop your protection strategy and risk mitigation plans.
Finally, finding out exactly how many security-related policies and regulations actually exist in
your organization and domain could be an eye-opening experience.
Schedule
The time required to conduct the OCTAVE Method varies. Organizations that follow the process
faithfully have taken anywhere from six weeks to six months to conduct the evaluation. One
major reason for this variability is how much concentrated time the analysis team has available.
Remember, many analysis team members have regular duties to perform in addition to their
analysis team tasks. A part-time approach to staffing the analysis team increases the length of
time it takes to complete an evaluation. While most organizations cannot afford a dedicated
analysis team, they must also be careful not to allow the schedule to be stretched so far that the
results of their evaluations are stale before they are completed. If you find yourself using an
extended schedule, consider providing a mechanism for identifying and completing some critical,
near-term action items as they arise, such as fixing high-severity vulnerabilities found during
process 6.
The number of knowledge elicitation workshops is flexible. Certainly, a larger organization may
need more of these workshops than a small company with only two departments. In addition,
some processes can be combined to save time (e.g., processes 7 and 8A). It is the results of the
workshops that are important, not the specific number of workshops. Always remember that the
OCTAVE Method is not a lockstep process. You have great latitude to change the processes that
make sense for your organization, but make sure that whatever you do puts you in a position to
make the best decisions about information security for your organization. For example, some of
the knowledge elicitation activities, such as surveys, can be completed prior to the workshop.
You also might want to consider conducting workshops over brown-bag lunches to deal with time
constraints. Experiment a bit to determine what works best for your organization.
Phase 2 of OCTAVE requires you to examine your computing infrastructure for technology
vulnerabilities. You can expand phase 2 by examining your organization's physical infrastructure
for weaknesses, for example, by doing the following:
Examining access paths into areas containing critical paper documents or infrastructure
equipment
Evaluating your organization's physical security will identify additional vulnerabilities and build
upon some of the areas of concern elicited during the early discussions with organization
personnel.
Outsourcing
Information technology outsourcing is increasingly common. Many organizations cannot conduct
a vulnerability evaluation of their computing infrastructures, because external service providers
maintain their systems and networks. Such organizations typically rely on their service providers
to address their security needs.
Organizations that rely upon such outsourcing as a business strategy need to determine how to
work with their service providers during information security risk evaluations. An organization
can identify its critical assets, the threats to those assets, and what its staff members are doing
to protect the critical assets (the phase 1 activities of OCTAVE). However, that organization will
have to work with the service provider to determine whether the provider is using due care in
maintaining systems and networks. Often, this process demands a contracting mechanism,
whereby the service provider is required to meet a level of due care. Verification of such
contracting mechanisms is often difficult and costly. We will revisit this topic in Chapter 13.
Risk Probability
Chapters 9 and 10 explored how you can incorporate probability into the risk analysis. You
should note that some standards of due care do require the estimation of probability. If you are
required to use probability, do so with care. Some risk analysis techniques that incorporate
probability can obscure the risk of extreme events that have a very low probability but produce
disastrous results. See Chapters 9 and 10 for more information on probability.
The OCTAVE Method requires the analysis team to record the range of impact values as part of
the risk profile. As you will recall from Chapter 9, you estimate impact values for the following
types of impact areas: reputation, health and safety issues, productivity, and legal and financial
issues. Rather than recording the range of impact values for these areas, you might find it
more useful to record the value of each area of impact separately. You can then review the value
of each area of impact when you set mitigation priorities. For example, if your organization's
reputation is more important than any other type of impact, a medium impact on your
reputation might have a higher priority than a high impact on your productivity. Thus, by
recording impact values for each area separately, you will be able to differentiate among
different types of impacts and make more effective use of mitigation-related resources. Figure
12-2 illustrates a risk profile that includes multiple impact values based on area of impact. (A
risk profile showing a range of impacts is shown in Figure 9-4.)
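The effect of recording impact values per area can be sketched in a few lines of code. This is an illustrative example only, not part of the OCTAVE worksheets: the numeric scales and the area weights below are hypothetical values an organization would choose for itself.

```python
# Illustrative sketch: prioritizing risks by per-area impact values rather
# than a single pooled impact range. The impact areas follow the text above;
# the numeric scale and weights are hypothetical.

IMPACT_SCORE = {"low": 1, "medium": 2, "high": 3}

# Hypothetical weights for an organization that values its reputation
# above all other impact areas.
AREA_WEIGHT = {
    "reputation": 3.0,
    "health_safety": 2.0,
    "productivity": 1.0,
    "legal_financial": 1.5,
}

def mitigation_priority(impacts):
    """Score a risk from its per-area impact ratings (higher = more urgent)."""
    return sum(AREA_WEIGHT[area] * IMPACT_SCORE[value]
               for area, value in impacts.items())

# A medium impact on reputation outranks a high impact on productivity
# under these weights -- a distinction a single impact range would hide.
risk_a = {"reputation": "medium"}   # weighted score: 3.0 * 2 = 6.0
risk_b = {"productivity": "high"}   # weighted score: 1.0 * 3 = 3.0
assert mitigation_priority(risk_a) > mitigation_priority(risk_b)
```

The same comparison under a pooled impact range would rank risk_b first, which is exactly the distortion that separate per-area values avoid.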
Automated Tools
Any evaluation will proceed more efficiently if tools are used, even if you only use a simple
spreadsheet application. Custom-developed databases and analysis tools can improve the
efficiency of your evaluation, but they aren't critical unless you are dealing with an extremely
large set of information. Tools can also provide a more effective foundation for managing
information security risks by allowing easy maintenance of data and tracking status changes of
risks and mitigation plans.
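A minimal sketch of the kind of record such a spreadsheet or custom database might hold follows. The field names and status values are assumptions, not part of the OCTAVE Method; the point is simply that keeping structured records makes status changes of risks and mitigation plans easy to track.

```python
# Hypothetical risk-tracking record, standing in for a spreadsheet row or
# database entry; field names and status values are assumptions.
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    asset: str
    threat: str
    impact: str                 # e.g., "high", "medium", "low"
    mitigation_plan: str
    status: str = "open"        # e.g., open -> in_progress -> mitigated
    history: list = field(default_factory=list)

    def update_status(self, new_status):
        """Log each status change so progress on mitigation plans is auditable."""
        self.history.append((self.status, new_status))
        self.status = new_status

risk = RiskRecord("customer database", "insider disclosure",
                  "high", "implement role-based access control")
risk.update_status("in_progress")
assert risk.status == "in_progress"
assert risk.history == [("open", "in_progress")]
```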
Think about how the OCTAVE Method might be implemented throughout a large, geographically
dispersed company. One approach involves using an internal independent analysis team. The
team travels from site to site, or department to department. It facilitates information security
risk evaluations in a department, while local personnel play an integral role in making all
decisions during the evaluation. The internal team's main role is to facilitate the process and
help sites and/or departments implement security improvement activities. This is a variation on
the consulting model mentioned earlier in this chapter.
We hope that some of the ideas presented here help you think about how you might modify the
process for your organization. The next section offers ideas about how to tailor specific artifacts
used during the OCTAVE Method for your organization.
The artifacts, particularly those found in the appendices of this book, can always be tailored to
suit an organization or a particular domain.
Catalog of Practices
The catalog of practices (see Appendix C) is a general catalog of accepted security practices. If
you must comply with a specific standard of due care (e.g., HIPAA), you can modify the catalog
to ensure that it addresses the range of practices in the standard. You can add specific practices
unique to your domain or remove practices that are not relevant. You can also modify the
catalog to make it consistent with the terminology used in your domain. The goal is to have a
catalog of generally accepted, good security practices against which you can evaluate your
current security practices. The catalog must be meaningful to your organization.
Before you start OCTAVE, you can tailor the generic threat profile to meet your evaluation needs
by doing the following:
For some organizations the standard categories are sufficient. Other organizations might require
additional categories of threat. Threat categories are contextual and are based on the
environment in which an organization must operate. The standard categories are a good starting
place. As you implement the OCTAVE Method, you may start identifying unique threats that
require the creation of new threat categories.
The following example addresses how to tailor the threat actors for the "human actors using
network access" category. The basic threat tree for this category focuses on two types of threat
actors: those inside the organization and those outside it. Depending on the evaluation needs of
an organization, this classification of actors could be too broad. For example, an organization
that deals with national security issues would probably want a more detailed classification of
threat actors. The following list is an expanded classification of threat actors:[1]
[1] This list was created using [Howard 98], [Hutt 99], and [Parker 98].
Attackers— people who attack computer systems for challenge, status, or thrill
Terrorists— people who attack computer systems to cause fear or destruction for
political gain
Criminals— people who attack computer systems for personal financial gain
The asset-based threat profile could be modified to include the above classifications and more
detailed motives. In addition, other forms of tailoring can be applied to add detail to the access
paths. Separate trees could be created for different means of network access or for different
means of physical access. The trees do become more complicated with the additional detail and
could make the subsequent analysis more complex. For many organizations, however, the
standard generic set of trees will be sufficient. As a general guideline, make sure that your
organization's threat profile addresses the range of threats known to affect your operational
environment.
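A tailored threat tree of this kind can be represented as a simple nested structure. The sketch below is illustrative only: the branch labels and outcome lists are assumptions, showing how the "outside" actor could be split into the finer classes discussed above.

```python
# Illustrative sketch of a tailored threat tree for the "human actors using
# network access" category; branch names and outcome leaves are hypothetical.
threat_tree = {
    "human_actors_network_access": {
        "inside": {
            "accidental": ["disclosure", "modification", "loss", "interruption"],
            "deliberate": ["disclosure", "modification", "loss", "interruption"],
        },
        "outside": {
            # Expanded classification replacing the single "outside" actor.
            "attackers": ["disclosure", "modification", "interruption"],
            "terrorists": ["loss", "interruption"],
            "criminals": ["disclosure", "modification"],
        },
    },
}

def outcomes_for(tree, *path):
    """Walk one branch of the threat tree down to its outcome leaves."""
    node = tree
    for step in path:
        node = node[step]
    return node

assert "disclosure" in outcomes_for(
    threat_tree, "human_actors_network_access", "outside", "criminals")
```

As the text notes, each added branch makes the subsequent analysis more complex, so detail should be added only where the operational environment demands it.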
Worksheets
Any worksheet from Appendix B can be modified to suit the particular needs or standards of an
organization or domain. Certainly the final report contained in Appendix A will look very different
based on who writes it and the documentation requirements of the organization. Worksheets can
be combined, split apart, and rearranged to be more efficient or to adapt them to a particular
database or other automated tool. Figure 12-2 illustrates one modification of the risk profile.
Figure 12-3 further modifies the risk profile to include vulnerability information, combining
elements from two of the worksheets from processes 6 and 7.
In the end, every organization needs to tailor and adapt OCTAVE to suit its particular needs. The
key is to maintain consistency with the principles, attributes, and outputs presented in Chapter
2. You need to choose an implementation that works in your environment and helps you to make
sensible information protection decisions for your organization. There are, of course, many
unwise choices that you can make when you tailor the OCTAVE approach. You could decide that
your organization doesn't need to work collaboratively with your service provider and assume
that the provider is keeping your organization's network and Websites secure. In this case you
will be omitting phase 2 from your evaluation. You could also choose to focus only on the
computing infrastructure and skip the phase 1 activities. If you modify the evaluation in these
ways, you are only getting part of the big picture, and your protection strategy and risk
mitigation plans are not likely to keep critical assets secure.
Ultimately, it does not matter if you follow the OCTAVE Method religiously or adapt it. What does
matter is that you gather the information you need to make informed decisions and improve
your organization's security posture.
13.1 Introduction
Before conducting an OCTAVE, you must decide how to set the scope of the evaluation. You
must also tailor the evaluation to meet the needs of the organization and to complement your
unique operational environment and business processes. So where do you start? The following
questions will help you think about how to implement OCTAVE in your organization:
How complex is your organization? What size is it? Is it national or international? How
many business lines are in the organization? How many products does your organization
produce? Is your organization geographically dispersed, or is it centralized? How diverse
is the organizational culture?
Who is within your organization's sphere of influence? Who will be affected by your
organization's security practices and policies? Which other organizations' security
practices and policies affect you? (Consider customers, partners, contractors,
subcontractors, visitors, Web site visitors, etc.)
Who can legitimately access your systems and assets? What assumptions are you
making about the trustworthiness of those people and their organizations?
How complex are your organization's systems and networks? How diverse are your
organization's computing systems? How interconnected is your organization to external
parties?
What are the existing and pending laws and regulations with which your organization
must comply? What are the domain-specific standards to which your organization must
adhere? What political considerations might affect how your organization implements
security?
How much of this evaluation will your organization conduct? How much will you depend
upon third-party experts or service providers? Should you require external partners or
contractors to conduct their own evaluations?
What is the best way to implement the analysis team(s) in your organization? Will your
organization require one team or many? How many teams will be needed per site? How
many teams will be needed per division? If you require more than one analysis team in
your organization, will there be personnel common to all teams? Will all teams require
local personnel? Will your organization allow external representatives on analysis teams?
These questions focus on integrating OCTAVE with the way your organization conducts its
business. We designed the OCTAVE Method because a "one-size-fits-all" approach doesn't work
for evaluating information security risks. A key requirement was to make the evaluation
approach flexible, enabling it to be tailored to each organization's unique environment.
The remainder of this chapter focuses on the flexible nature of OCTAVE by presenting four
scenarios based on how organizations are currently implementing the approach. As you will see,
each organization adjusted OCTAVE to fit its operational environment. The following
organizations are profiled in the scenarios:
A small organization
The chapter concludes with a few additional ideas that can be incorporated into OCTAVE.
13.2.1 Company S
Company S is a small manufacturing facility with 22 people in three departments: shop floor,
management, and administrative. The company has one location and many longtime employees.
It has used two interconnected computer systems to run its manufacturing equipment and
administrative functions for seven years. A Web-based marketing and order-processing system
has recently been added, enabling Company S to expand its customer base. The Web system is
also connected to the administrative system to enable easy transfer of customer information.
The organization outsources configuration and maintenance of its systems and networks to two
external vendors. One vendor maintains the computer systems for manufacturing and
administration. These systems are used to access many important assets, including
manufacturing control software, customer information, product information, insurance records,
and personnel records. The second vendor maintains the Web site. Because Company S has
implemented Web-based order processing, the Web server stores some customer information.
Company S also relies on both vendors to address its information security needs. Role-based
access has not been implemented at Company S; the company has always cross-trained its staff
members and permits them to access whatever they need.
Recently, a disgruntled employee took down a competitor's systems for a week. Because of this
incident, the managers at Company S decided
that they need to pay closer attention to information security in their organization. In particular,
they are worried about protecting the following items:
Customer information
Insurance records
Personnel records
Senior managers decided to build an in-house capability to conduct information security risk
evaluations. The managers also knew that they had two issues to overcome:
1. Staff members at Company S can perform some information security risk evaluation
tasks, but they don't have a lot of experience with information security issues.
2. The organization needs to work with the vendors to ensure that they are using due care
to protect the company's critical information and systems.
Company S is tightly run, with little margin in its schedule or resource loading. The organization
needs to schedule the information security risk evaluation carefully. It also needs to negotiate
with its vendors about providing Company S with information verifying that they are managing
vulnerabilities in their computing infrastructures. Overall, Company S needs an evaluation
process that is efficient, requiring a modest time investment (e.g., taking from two to five days);
is easy to use; focuses on the entire organization, rather than one area; and helps it to define an
approach for interacting with its vendors.
Most approaches for evaluating information security risks that we have seen focus on the needs
of large organizations. A pragmatic approach designed for small organizations does
not exist today, and most small organizations cannot afford the cost of outsourcing this function
to external parties. Our intent is to provide those organizations with an efficient, inexpensive
approach to begin identifying and managing their information security risks, enabling them to
improve their security posture. The resulting evaluation will provide small organizations with an
approach that is consistent with the OCTAVE principles, attributes, and outputs and is tailored to
their unique environments. This section presents our current work in this area.
When we were developing the OCTAVE Method, we met with people from many types of
organizations to understand their requirements as potential users of the method. People who
indicated that they worked in small organizations typically liked the approach, but they needed
an implementation consistent with their organization's business processes. These organizations
generally contained 100 or fewer employees. (We'll use this as our working definition of a
"small" organization.)
The requirements for implementing OCTAVE in small organizations are driven by the following
organizational characteristics:
1. Flat organizational structure. Staff members in small organizations typically perform a
variety of tasks and have broad insight into the organization's business processes.
2. Outsourced information technology. Small organizations often rely on external service
providers to maintain their computing infrastructures.
3. Scarce resources. Very small organizations are typically quite lean and have limited staff
time available for security improvement initiatives.
Although these characteristics are typical of small organizations, we have seen organizations
with fewer than 100 employees that are very hierarchical, manage their computing
infrastructures, and implement process improvement efforts. Likewise, we have seen instances
of organizations with more than 100 employees that have a flat organizational structure,
outsource management of their computing infrastructures, and do not have staff available for
process improvement activities. There is no absolute definition of a small organization. The
approach described in this section addresses the "typical" small organization that is
nonhierarchical, outsources management of its computing infrastructures, and has very limited
staff time to conduct OCTAVE.
Let's examine each characteristic in more detail, starting with organizational structure. When we
discuss approaches for implementing an evaluation process based on organizational structure,
we often use the following analogy. Think of information security as trying to solve a puzzle. In
large, hierarchical organizations, people can become very specialized in their job duties. Their
understanding of the big picture related to the organization's business processes often becomes
very narrow. Thus, each person in such an organization holds one piece of the information
security puzzle.
By contrast, in small, nonhierarchical organizations people often acquire a range of skills and
perform a variety of tasks. Each person in such an organization has greater insight into business
processes and holds many pieces of the information security puzzle. In hierarchical organizations
the evaluation process requires a series of knowledge elicitation workshops to build the big
picture of security in the organization, with each person contributing his or her piece of the
puzzle. In nonhierarchical organizations, only one workshop may be needed to build the global
view of security, because analysis team members bring most of the puzzle with them.
Now, let's focus on outsourcing. Consider an organization whose management decides to build a
core competency in information technology management. Managers will likely hire people with
information technology backgrounds and provide educational opportunities for them to keep
their skills up to date with current technology trends. People within the organization have the
knowledge and skills to lead the technological aspects of an information security risk evaluation.
Finally, we look at how limited resources affect the evaluation process. Note that this
characteristic applies to many organizations, but small organizations are especially constrained
by limited staff time. Consider a large organization that has implemented many process
improvement initiatives using a quality assurance department for oversight and guidance. The
OCTAVE Method presented in Part II is probably a good fit for that organization. Personnel in the
quality assurance department can become core analysis team members and lead the evaluation
in the organization.
Now consider a small organization with only 40 employees. It does not have a quality assurance
department and may not be experienced in implementing process improvement initiatives. This
organization needs an evaluation process that doesn't take too much time and still provides
sufficient information for the organization to characterize its risks.
A version of OCTAVE tailored for the typical small organization will have the following features:
It will not require a series of knowledge elicitation workshops, because the analysis team
has sufficient insight into the organization's operational environment.
It will be designed for efficient data collection, enabling the analysis team to characterize
risks in a timely manner.
Figure 13-1 shows what OCTAVE for small organizations might look like. Each process is a self-
directed activity; there are no facilitated knowledge elicitation workshops. Also notice that
process 3 is explicitly designed to incorporate outsourcing. The processes shown in Figure 13-1
are described below.
The basic premise for this approach is that information security requires knowledge of both
business and information technology processes. We believe that staff members in most
organizations have sufficient understanding of their business processes and how they use
information technology on a day-to-day basis. Thus, most organizations can characterize their
information security risks.
The evaluation process for small organizations must be highly efficient and focused. Information
security knowledge and experience must be engineered directly into the evaluation's worksheets
and artifacts, enabling an analysis team from a small organization to characterize its
information security risks based on (1) team members' understanding of business processes and
(2) the way in which information technology is used in the organization.
Figure 13-2 shows an example of a worksheet used to record the risk profile for a critical asset,
documenting relevant risk and mitigation information for that asset. Note that it combines
aspects from several worksheets presented in Appendix B. Highly structured, streamlined
worksheets such as this are essential for making the evaluation process efficient while still
producing useful results. An evaluation tailored in this way may not provide the same level of
detail as the OCTAVE Method. That method was designed to be an open-ended examination of
information security issues, which is useful for exploring complex organizational issues often
found in large, hierarchical organizations. Early testing of OCTAVE in small organizations
indicates that a streamlined approach can help them characterize their information security risks
without having a strong security background. More testing is required to determine if this
approach for small organizations will scale to larger, more hierarchical organizations.
Figure 13-2. Critical Asset Risk Profile for OCTAVE Focused on Small Organizations
As mentioned in Part I of this book, we designed the OCTAVE Method for large organizations.
However, "large" is an imprecise and relative term. This section describes an organization that
would fit almost anyone's definition of large. We turn our attention to implementing OCTAVE in a
global organization that is distributed across multiple locations.
Company X
Figure 13-3 shows the organizational structure for Company X. Some sites in Company X are
large facilities that use the latest technology; others are small, remote offices with small staffs.
Company X is hierarchical in nature; it is organized according to geographic regions and has one
director per region. The company has tens of thousands of employees and an extremely large,
relatively stable customer base for its products and services. The corporate culture values
diversity of skills in the company's workforce, and management encourages employees to
rotate periodically across sites. This globally diversified organization has facilities on every
major continent and uses local employees to staff those facilities. To conduct its business
efficiently, the company uses large numbers of contractors and subcontractors to complement its
in-house expertise.
Company X uses a few common systems across all of its sites. In addition, many local,
independent systems are used and maintained by individual sites. Management has recently
initiated a plan to standardize most of the major information systems across the company. A few
regions are also being subjected to stringent new standards of due care in information security.
Management has decided to use the new standards of due care as an opportunity to standardize
and improve its information security practices across the organization.
At the center of its information security program is a common, systematic information security
risk evaluation (OCTAVE) that will be implemented across the entire organization. Senior
management wants everyone using the same process as a means of ensuring consistent quality.
The organization is also creating a common database to collect site-specific information security
data. The information in the database will be analyzed to identify common issues and solutions
across the organization.
The organization's personnel will conduct OCTAVE. The director in each region is responsible for
ensuring that all sites in the region conduct OCTAVE. Each medium and large-scale site (as
defined in the company's policy) is required to create an analysis team to lead the evaluation. A
team is also being formed in each region to coordinate the evaluations within the region and to
provide specialized expertise when needed. At small sites, the analysis team will include local
staff as well as members from the regional team.
Their Approach
The approach that Company X implemented for conducting OCTAVE involved the following steps:
Using an external, third-party trainer to rapidly train multiple analysis teams in the
evaluation methodology
Performing data analysis of the results reported by all sites to identify common
issues and solutions
Requiring all sites to perform a policy review before starting their evaluations
OCTAVE can help an organization establish the means to effectively communicate its information
protection requirements with its service providers or system maintenance contractors. It can
also provide a common framework for communicating with customers, partners, and contractors
about information security issues. This section looks at a small company that consolidates access
to the Web sites and services of many other organizations. This organization needs to coordinate
its information security efforts with those of several organizations.
Company SP
Company SP provides an integrating service that consolidates access to Web sites and services
provided by other organizations. Figure 13-4 shows how the company must work with several
partners, service providers, and customers.
One primary system provides the Web portal service and is linked to all customer Web sites. The
system physically exists at the company's facilities but is managed remotely from the prime
contractor's site. The company has a second system that it uses to manage its internal business
processes.
The Web portal is a dynamic environment, because the customer Web sites to which it
links change frequently. This dynamic environment, coupled with the number of
organizations involved in maintaining the Web portal, creates a complex situation for Company SP.
Management at the company is worried that the complexity could lead to information security
problems.
Their Approach
The approach that Company SP implemented for OCTAVE involved the following steps:
Allowing the consultant to select and run vulnerability evaluation tools in cooperation
with the prime contractor
Continuing the relationship with the consultant after the evaluation, making the
consultant responsible for revising security policies and procedures based on the results
of the evaluation
The analysis team conducted knowledge elicitation workshops with personnel at Company SP,
the prime contractor, and one subcontractor. From these workshops, an organizational
vulnerability related to contracting was identified. Company SP had not explicitly communicated
its security requirements to the prime contractor, and there was no mechanism in place to
monitor what the contractor was doing with respect to information security. With the number of
subcontractors and service providers involved, Company SP had no idea what was being done to
secure its Web portal.
The analysis team suggested that Company SP use the information gathered during OCTAVE to
generate security requirements. It further recommended that Company SP and its contractors
establish a formal mechanism for communicating security requirements and verifying that they
are being met.
OCTAVE highlighted a complex interorganizational problem for Company SP. The complex web of
relationships among all parties created unique security issues related to the Web portal service.
Company SP staff need to review all of these relationships in light of the organization's security
requirements and then determine how they can work with multiple organizations to meet their
business goals and their security requirements.
We now examine how a professional society comprising organizations of various sizes intends to
implement OCTAVE. The central office of the society wants to use different implementations of
OCTAVE to manage information security risks collaboratively among its members.
The Professional Society
Figure 13-5 depicts a professional society that is a loosely interconnected organization. The
central organization is large, and it provides services to many small member companies. The
professional society's central office has about 400 employees, including 40 information
technology professionals. There are several thousand organizations affiliated with the society.
The key objective of the central office is to provide benefits and services to its membership. It
also acts as a central repository and distribution site for useful products and services. The
central office provides member organizations with connectivity to several of its systems.
Personnel can access the central office's systems from home computers, laptops, and wireless
devices. Staff members at the central office are concerned about security issues related to
unmonitored access to the office's systems and networks.
Impending data security regulations will affect all of the society's members as well as the central
office. Senior managers at the central office have decided to use the OCTAVE Method to evaluate
information security risks. For its member organizations, the central office is recommending a
version of OCTAVE tailored to small organizations.
Using a consistent evaluation approach enables effective communication of security issues and
requirements among all participating organizations. A common approach also facilitates sharing
critical information among the organizations (e.g., recommended security practices, potential
threats to consider). The society is planning to create a database to collect evaluation results
from participating organizations. Managers at the society have requested that member
organizations contribute sanitized, aggregate evaluation results that can be analyzed for trends.
Senior managers at the society hope to identify common issues that member organizations can
address collaboratively through the society's working groups.
Management wants to conduct the OCTAVE Method initially at the central office before it rolls out
a tailored version to its membership. Staff members from the central office will provide OCTAVE
training and consulting services related to the evaluation process for the society's members.
Their Approach
The approach that the professional society wants to implement for OCTAVE involves the
following steps:
Tailoring the catalog of practices for consistency with impending data security
requirements
Finally, we present a few additional issues that organizations are addressing when they
implement OCTAVE.
Section 13.3 illustrated issues related to implementing OCTAVE in a large, dispersed company.
Recall that each medium and large site was required to create an analysis team to lead the
evaluation. An alternative approach is to create and maintain independent, "floating" analysis
teams, which could travel from site to site to lead the evaluations. The analysis team for a site
would include the independent team members and a couple of local staff members. Local
analysis team members could be given just-in-time training, and the independent team
members could lead the evaluation process. This type of approach is often used in process
improvement activities (e.g., software engineering process groups for software process
improvement). Organizations with a large, centralized quality assurance or risk management
department are good candidates for using this type of approach.
Consolidating Results from Multiple Evaluations
Many organizations are pursuing the idea of creating a database to collect evaluation results
from multiple sites. While the results of each individual OCTAVE can help the organization that
conducts it, larger organizations also see benefits in analyzing evaluation results across the
organization for common issues and for trends. For example, each major division of an
organization might identify similar issues that can be addressed only through changes to
corporate-level policy or through the creation of corporate resources.
Large, diverse organizations often have shared computing systems. For example, an
organization might have a single financial system that is used by all business units. Managing
the security of a common system will likely require cooperation across business units. Individual
evaluations conducted by the business units will provide information about issues related to the
system, but mitigation plans need to be coordinated across the business units to avoid conflicts.
The resulting benefit is the identification of dependencies and interrelationships among all users,
maintainers, and information technology staff members. Once all parties understand the issues
related to common systems, the organization can work to ensure that security requirements for
common assets are addressed.
Organizations need to consider security issues related to how customers and collaborators
access their systems and networks. For example, collaborators might inadvertently compromise
security when they access an organization's computing infrastructure. Do they understand the
organization's security policies? Does the organization provide open access to its infrastructure
that bypasses its firewall? A balance is needed between meeting customer needs and securing
the computing infrastructures. In some cases an organization might include customers or
collaborators as part of its knowledge elicitation activities.
Shared Facilities
Organizations must also consider how to manage physical security in shared facilities. Is an
organization located in a building with other companies? Does the building's owner provide a
central security service? After it conducts OCTAVE, an organization is in a better position to
identify security requirements related to the facility. Someone from the organization can then
meet with the building's facility management group to see which requirements that group is
already meeting. For example, the building's facility management group might already be
addressing some business continuity issues, such as uninterrupted power supply. An
organization located in that building could leverage existing resources rather than duplicate
them.
This chapter has identified a few practical scenarios to help you decide how to implement
OCTAVE's flexible evaluation approach in your organization. OCTAVE is applicable to a variety of
organizations, and the key to making it work in your organization is to consider how to tailor it
for your unique environment. At this point we're ready to examine some ideas about managing
information security risks on a continual basis, presented in Chapter 14.
14.1 Introduction
To understand how its security risks change over time, an organization typically "resets" its
baseline periodically by conducting another evaluation. The time between evaluations can be
predetermined (e.g., yearly) or triggered by major events (e.g., corporate reorganization,
redesign of an organization's computing infrastructure).
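The timing rule described above can be sketched as a tiny check; everything here (the one-year interval, the function name, the dates) is an illustrative assumption, not something prescribed by OCTAVE:

```python
from datetime import date, timedelta

# Hypothetical policy: re-evaluate yearly, or sooner after a triggering event.
EVALUATION_INTERVAL = timedelta(days=365)

def evaluation_due(last_evaluation: date, today: date, major_event: bool) -> bool:
    """Reset the risk baseline on a schedule or after a significant change."""
    return major_event or (today - last_evaluation) >= EVALUATION_INTERVAL

# Five months after the last evaluation, no triggering event: not yet due.
print(evaluation_due(date(2023, 1, 1), date(2023, 6, 1), major_event=False))  # False
# A corporate reorganization would trigger an early evaluation.
print(evaluation_due(date(2023, 1, 1), date(2023, 6, 1), major_event=True))   # True
```

The point of the sketch is simply that both triggers, elapsed time and major events, feed the same decision.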
We also indicated in Chapter 1 that an organization improves its security posture only after it
implements its protection strategy and risk mitigation plans. Figure 14-1 illustrates the
framework for managing information security risks as well as the "slice" provided by the
evaluation. We derived the framework from previous work, in which we developed an approach
to managing risks on software and system development projects [Dorofee 96].
Figure 14-1. Information Security Risk Evaluation and Management
Key Principles
Think back to the principles presented in Chapter 2 (see Figure 14-2). To be effective,
information security risk management must be consistent with these principles. Our discussion
will focus on two of the principles: open communication and integrated management. Recall that
information security risk management cannot succeed without a reasonable degree of open
communication of security-related issues.[1] A culture that supports open communication of risk
information is the basis for effective information security risk management. A process for
managing your information security risks must ensure that the right people get the right
information in a timely manner.
While open communication among key decision makers and trusted personnel is
important, you must use discretion when sharing this information with people with
whom you have not established a level of trust.
3. Comply with laws and regulations within which the organization operates.
After OCTAVE
The key results of OCTAVE include a protection strategy for organizational improvement and
mitigation plans to reduce the risks to the organization's critical assets. To manage information
security risks effectively, you must develop detailed action plans and manage the
implementation of those plans. The post-OCTAVE activities are nothing more than a plan-do-
check-act cycle, ensuring that selected aspects of your organization's protection strategy and
mitigation plans are implemented. To build on the results of OCTAVE, you must address the
following operations from Figure 14-3:
Plan for implementation by developing detailed action plans for key aspects of
your organization's protection strategy and risk mitigation plans.
The next section presents a framework for information security risk management—a "roadmap"
for managing your risks. Following that, Section 14.3 examines an approach for implementing
the framework.
Information security risk management is the ongoing process of identifying and addressing
information security risks. This section explores the details of a structured approach for
managing risks. Figure 14-4 illustrates the operations required by the information security risk
management framework as well as the major tasks completed during each operation. This type
of framework is common to risk management approaches in many domains, including
information security [GAO 98].
Figure 14-4. Operations and Tasks of the Information Security Risk Management
Framework
Assigning Responsibility
To manage your information security risks effectively, you must clearly define roles and
responsibilities for all of the operations and tasks in the framework. Effective risk management
requires everyone in the organization to know his or her role in managing risks. During OCTAVE,
an analysis team was responsible for identifying and analyzing risks and for completing high-
level planning tasks. This team may not have a permanent existence in your organization, and
new people might be assigned responsibility for managing the risks after the evaluation. As you
consider the framework in this section and how it might apply to your organization, remember
that you will eventually need to determine the appropriate set of roles and responsibilities and
distribute them effectively. The remainder of this section examines each operation in Figure 14-
4, starting with "Identify."
14.2.1 Identify
Identification is the process of transforming uncertainties and issues related to how well an
organization's assets are being protected into distinct (tangible) risks. The objective of this
activity is to anticipate risks before they become problems and to incorporate this information
into the organization's information security risk management process. Table 14-1 illustrates the
types of tasks that are conducted during risk identification and the key results produced by each
task.
Key results of identification include a narrative description of the potential impact of the
risks on the organization and the key infrastructure components related to critical assets.
After you finish planning, you have defined the direction for improving your organization's
security posture. In the next operation you execute the action plans as designed.
14.2.4 Implement
Assign responsibility for implementing action plans during the planning process. People who are
assigned responsibility for implementing action plans must follow through by ensuring that those
plans are completed according to their defined schedules and success criteria.
Enable staff members to reprioritize existing work tasks to incorporate their action plan
activities
Provide staff members with sufficient funds, equipment, and other required resources to
complete the action plans
As you implement action plans, you also need to monitor them to ensure that they are being
implemented according to schedule and are meeting their defined success criteria.
14.2.5 Monitor
The monitoring process tracks action plans to determine their current status and reviews
organizational data for indications of new risks or changes to existing risks. The objectives of
monitoring risks are to collect accurate, timely, and relevant information about the progress of
action plans being implemented and any major changes to the organization's operational
environment that could indicate the existence of new risks or significant changes to existing
risks.
Typically, the people who are responsible for implementing action plans also monitor those
plans. In addition, everyone in the organization needs to be empowered to look for and report
information that might indicate the presence of new risks or significant changes to existing risks.
For example, if there are major changes to the organization's operational environment (e.g.,
corporate reorganization, major redesign of the organization's computing infrastructure),
management might decide to conduct another information security risk evaluation.
Risk monitoring should provide an organization with an efficient and effective way to track the
progress of action plans, indications of new risks, and significant changes to existing risks. The
monitoring process should both leverage current project management practices within the
organization and enable effective and timely communication of status information and risk
indicators.
As you monitor risks, you need to interpret the data that you collect. Controlling risks allows you
to decide how to proceed with action plans, whether the organization needs to identify new risks,
and how to address significant changes to existing risks.
14.2.6 Control
Controlling risks is a process whereby designated personnel adjust the course of action plans
and determine whether changing organizational conditions indicate the presence of new risks.
The objective of controlling risks is to make informed, timely, and effective decisions about
corrective measures for action plans and about whether to identify new risks to the organization.
Table 14-6 highlights the tasks required to control risks.
You can make two types of control decisions. The first type deals with adjusting the course of
action plans. Part of the responsibility for making control decisions lies with the person who is
monitoring an action plan. If action plans were being implemented according to their schedules
and were meeting defined success criteria, the person monitoring the plans would simply
continue tracking them. The decision in this case is to continue as planned. On the other hand, if
the person monitoring the risk noticed a deviation or anomaly that was causing a delay in a
plan's schedule or indicated that success criteria were not being met, that person would make
sure that the issue was raised at the appropriate management level. It might be necessary to
revise that action plan or execute predefined contingency actions.
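The first type of control decision reduces to a simple branch. The following sketch is only an illustration of that logic; the function name and return strings are invented here, not part of the OCTAVE framework:

```python
def control_decision(on_schedule: bool, meets_criteria: bool) -> str:
    """Adjust the course of an action plan based on monitoring data."""
    if on_schedule and meets_criteria:
        # No deviation: the person monitoring the plan keeps tracking it.
        return "continue as planned"
    # A schedule delay or unmet success criteria must be raised to management,
    # which may revise the plan or execute predefined contingency actions.
    return "raise issue at appropriate management level"

print(control_decision(True, True))   # continue as planned
print(control_decision(True, False))  # raise issue at appropriate management level
```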
The second type of control decision focuses on interpreting risk indicators. You are looking for
major changes to the organization's operational environment, indicating the possible existence
of new risks or significant changes to existing risks. As mentioned during our discussion about
monitoring risks, anyone in the organization could look for and report information that might
indicate the presence of new risks or changes to existing risks. Whoever believes that changes to
the operational environment could significantly change the nature of the organization's
information security risks should make sure that those issues are raised at the appropriate
management level. If appropriate, new risks could be identified (e.g., by conducting another
evaluation) or action plans could be revised based on changes to the underlying risks.
One possible outcome of a control decision is the start of a new risk identification activity.
Continuous control of risks should be tightly integrated into the organization's management
practices. The control process should
Ensure that responsibility for making control decisions is formally assigned and accepted
Provide personnel with guidance for weighing alternatives and making trade-offs
This concludes our presentation of the information security risk management framework. The
next section looks at a common implementation of the framework.
The information security risk management framework provides guidance about the operations
that organizations can implement to identify and address their information security risks. This
section presents a common implementation of the framework. At the heart of this
implementation is the information security risk evaluation.
Figure 14-5 illustrates a time line between two successive evaluations. Notice that after the
organization completes Evaluation A, it has set its baseline with respect to its information
security risks (i.e., the organization has taken its "snapshot" of its current risks). The
organization must then address, or manage, the highest-priority risks that were identified during
the evaluation, using these to galvanize mitigation and improvement activities. During the time
between evaluations, people in the organization implement action plans designed to improve the
organization's security posture. Assuming that those people effectively manage implementation
of the action plans, the organization's security posture will indeed change.
Section 14.2.5 introduced the idea that an organization needs to monitor risk indicators for
significant changes to its operational environment, indicating the existence of new risks or
changes to existing risks. A "significant" change to an organization's operational environment
would be one that alters the nature of the organization's information security risks and
potentially affects its protection strategy and mitigation priorities. Because the organization's
protection strategy and mitigation priorities may both be affected, the organization might decide
to conduct another evaluation before its scheduled interval.
For example, a significant change to the organization's operational environment could be the
acquisition of a former rival company. Such an acquisition might trigger an evaluation before the
scheduled interval, setting a new baseline just before the acquisition. This step would establish
an updated view of the organization's security posture going into the acquisition and could
identify risks that must be addressed before the merger.
An organization's staff typically expends a lot of time and effort conducting an information
security risk evaluation. Thus, an organization's management needs to be selective about which
changes in its operational environment are significant enough to warrant another evaluation. The
vast majority of changes do not meet this threshold.
So how do organizations handle small changes between evaluations? Typically, they rely upon
established security practices and procedures to address small changes. (Appendix C provides a
catalog of security practices.) If there are no established procedures in place for a given
situation, staff members are likely to handle it in an ad hoc fashion.
For example, consider how an organization handles a newly discovered vulnerability. New
vulnerabilities are identified quite frequently and generally neither change the nature of the risks
that the organization is managing nor affect the organization's mitigation priorities. The
information technology staff could use established vulnerability management procedures to
address the new vulnerability.
Likewise, if an organization acquires or develops a new business system, the risks to that
system are likely to be similar to those affecting other systems and are unlikely to change
the organization's mitigation priorities. The staff would apply existing security policies and
procedures to designing, configuring, maintaining, and using the new system.
14.4 Summary
Much of this book has focused on the OCTAVE approach and the need for organizations to assess
their information security risks. Recall from Chapter 2 that one of the information security risk
evaluation principles is "foundation for a continuous process" (see Figure 14-2). This principle
states that the results of an information security risk evaluation provide the foundation for
improvement. To realize any improvement in its security posture, an organization must
implement the results of information security risk evaluations.
This chapter presented a framework for managing information security risks. The framework
provides basic requirements for an information security risk management approach. In defining
this approach, we have merged the asset-driven, risk-based concepts from OCTAVE with general
risk management concepts commonly used in other domains to create a comprehensive
approach for managing information security risks in an organization. A risk-based approach
enables organizations to develop solution strategies tailored to their unique environments.
Glossary
Accept
a decision made during risk analysis to take no action to address a risk and to accept the
consequences should the risk occur.
Access path
Action list
a list of actions that people in an organization can take in the near term without the need
for specialized training, policy changes, etc. It is essentially a list of near-term action
items.
Actor
a property of a threat that defines who or what may violate the security requirements
(confidentiality, integrity, availability) of an asset.
Analysis team
Annualized loss expectancy
the typical monetary loss that can be expected in a year resulting from a risk. Annualized
loss expectancy is the product of the potential loss that could occur (impact value)
multiplied by the projected frequency of occurrence of the risk in a given year
(probability).
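The annualized loss expectancy calculation above can be expressed as a one-line helper; the function name and the dollar figures in this sketch are illustrative assumptions, not values from the book:

```python
def annualized_loss_expectancy(impact_value, annual_frequency):
    """ALE = potential loss per occurrence x expected occurrences per year."""
    return impact_value * annual_frequency

# Hypothetical risk: a $50,000 impact expected to occur twice a year.
print(annualized_loss_expectancy(50_000, 2.0))  # 100000.0
```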
Area of concern
Asset
something of value to the enterprise. Information technology assets are the combination
of logical and physical assets and are grouped into the specific classes (information,
systems, software, hardware, people).
Attributes
Availability
the extent to which, or frequency with which, an asset must be present or ready for use.
Catalog of practices
a collection of good strategic and operational security practices that an organization can
use to manage its security.
Catalog of vulnerabilities
Champion
Checklist
a vulnerability evaluation tool that serves the same function as automated scanning tools
but is applied manually rather than automatically. Checklists require a consistent review of
the items being checked and must be routinely updated.
Classical probability
the likelihood that an event will occur when all possibilities are known to be equally likely
to occur. This concept of probability is the oldest historically and was originally developed
in connection with games of chance.
a listing of the computer inventory owned by an organization. This listing typically depicts
a prioritized ordering of systems or networking components based on their importance to
the organization (e.g., mission-critical systems, high/medium/low-priority systems,
administrative systems, support systems).
Confidentiality
Configuration vulnerability
Critical assets
an organization's most important assets. The organization will suffer a large adverse
impact if something happens to critical assets.
Desktop workstation
Design vulnerability
Destruction
Disclosure
the viewing of confidential or proprietary information by someone who should not see the
information.
Evaluation criteria
a set of qualitative measures against which a risk is evaluated. Evaluation criteria define
high, medium, and low impacts for an organization.
Expected value
the product of the potential loss that could occur (impact value) multiplied by the
projected frequency of occurrence of a risk (probability). Expected value is also known as
expected loss or risk exposure.
Extreme event
an event that has a low probability of occurrence but a potentially catastrophic impact on
the organization.
the likelihood that an event (or a given outcome) will occur, based on the proportion of
the time that similar events have occurred over a long period of time.
Generic threat profile
a catalog containing a range of all potential threats under consideration. The generic
threat profile is a starting point for creating a unique threat profile for each critical asset.
Hardware asset
Home computer
home personal computers that staff members use to access information remotely via an
organization's networks.
Hybrid scanner
a vulnerability evaluation tool that targets a range of services, applications, and operating
system functions. Hybrid scanners may address Web servers (CGI, JAVA), database
applications, registry information (e.g., Windows NT/2000), and weak password storage
and authentication services. These are also known as specialty and targeted scanners.
Impact
Implementation vulnerability
Information asset
documented (paper or electronic) data or intellectual property used to meet the mission of
an organization.
Integrity
Interruption
Laptop
Law of large numbers
the rule that as the number of times a situation is repeated becomes larger, the
proportion of successes tends toward the actual probability of success.
Loss
the limiting of an asset's availability; the asset still exists but is temporarily unavailable.
Mitigate
Mitigation approach
the way in which an organization intends to address a risk. An organization can either
mitigate or accept a risk.
Modification
Motive
a property of a threat that defines whether the intentions of a human actor are deliberate
or accidental. Motive is also sometimes referred to as the objective of a threat actor.
Networking component
devices important to an organization's networks. Routers, switches, and modems are all
examples of this class of component.
Network infrastructure scanner
software used to search a network by identifying the physical connectivity of systems and
networking components. The software also displays detailed information about the
interconnectivity of networks and devices (routers, switches, bridges, hosts).
Operating system scanner
a vulnerability evaluation tool that targets specific operating systems such as Windows
NT/2000, Sun Solaris, Red Hat Linux, or Apple Mac OS.
Operational practice
a security practice that focuses on technology-related issues, including issues related to
how people use, interact with, and protect technology.
Organizational vulnerability
Outputs
the outcomes that an analysis team must achieve during an information security risk
evaluation.
People asset
the people in an organization who possess unique skills, knowledge, and experience that
are difficult to replace.
Principles
the fundamental concepts driving the nature of an information security risk evaluation.
Probability
Protection strategy
the policy an organization develops to enable, initiate, implement, and maintain its
internal security. It tends to incorporate long-term, organizationwide initiatives.
Protection strategy practice
an action that helps initiate, implement, and maintain security within an organization. A
protection strategy practice is also called a security practice.
Risk
the possibility of suffering harm or loss; the potential for realizing unwanted negative
consequences of an event. Risk refers to a situation in which either a person could do
something undesirable or a natural occurrence could cause an undesirable outcome,
resulting in a negative impact or consequence.
Risk evaluation
Risk management
the ongoing process of identifying risks and implementing plans to address them.
Risk measure
a qualitative value used to estimate some aspect of risk. There are two risk measures:
impact value and probability.
Risk mitigation plan
a plan intended to reduce the risks to a critical asset. Risk mitigation plans tend to
incorporate actions, or countermeasures, designed to counter the threats to the assets.
Risk profile
a definition of the range of risks that can affect an asset. Risk profiles contain categories
grouped according to threat source (human actors using network access, human actors
using physical access, system problems, other problems).
Script
a vulnerability evaluation tool that works as well as an automated tool except that it
usually has a singular function. If a large number of items are being evaluated, a
corresponding number of scripts will be required. Scripts require a consistent review of
the items being checked and must be routinely updated.
Security component
Security practice
an action that helps initiate, implement, and maintain security within an organization. A
security practice is also called a protection strategy practice.
Security requirements
Self-direction
a policy whereby people manage and direct information security risk evaluations for their
own organization. These people are responsible for directing risk evaluation activities and
for making decisions about the organization's security efforts.
Server
a host within the information technology infrastructure that provides information technology
services to an organization.
Software assets
Storage device
Strategic practice
a security practice that focuses on organizational issues at the policy level, including
business-related issues as well as issues that require organizationwide plans and
participation.
Subjective probability
the likelihood that an event (or a given outcome) will occur, based on indirect or collateral
information, educated guesses, intuition, or other subjective factors.
System
System of interest
Systems assets
information systems that process and store information. Systems are a combination of
information, software, and hardware assets. Any host, client, or server can be considered
a system.
Technology vulnerability
Threat
Threat profile
a definition of the range of threats that can affect an asset. Threat profiles contain
categories grouped according to threat source (human actors using network access,
human actors using physical access, system problems, other problems).
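As a rough illustration, a threat profile organized by these four threat source categories could be represented as a simple data structure. This is a hypothetical sketch; the field names and the example asset are our own illustrative choices, not part of the OCTAVE definitions:

```python
from dataclasses import dataclass, field

# The four threat source categories named in the glossary entry.
THREAT_SOURCES = (
    "human actors using network access",
    "human actors using physical access",
    "system problems",
    "other problems",
)

@dataclass
class Threat:
    actor: str     # who or what causes the threat
    motive: str    # "deliberate" or "accidental" (for human actors)
    outcome: str   # disclosure, modification, loss/destruction, interruption

@dataclass
class ThreatProfile:
    asset: str
    threats: dict[str, list[Threat]] = field(
        default_factory=lambda: {src: [] for src in THREAT_SOURCES}
    )

    def add(self, source: str, threat: Threat) -> None:
        """File a threat under one of the four standard categories."""
        if source not in self.threats:
            raise ValueError(f"unknown threat source: {source}")
        self.threats[source].append(threat)

profile = ThreatProfile(asset="paper medical records")
profile.add("human actors using physical access",
            Threat(actor="insider", motive="deliberate", outcome="disclosure"))
print(len(profile.threats["human actors using physical access"]))  # 1
```

Grouping threats by source this way mirrors the structure of the asset-based risk profiles shown later in the appendix, where categories that do not apply (for example, network access for paper records) simply remain empty.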
Vulnerability
Vulnerability evaluation approach
the method of evaluating each infrastructure component; this includes deciding who will
perform the evaluation and selecting the appropriate tool(s).
Vulnerability summary
a summary of the technology vulnerabilities for each component that is evaluated. A
vulnerability summary lists the types of technology vulnerabilities found, when they need
to be addressed, their potential effect on the critical assets, and how they can be dealt
with.
Wireless component
devices, such as cell phones and wireless access points, that staff members may use to
access information (for example, email).
Bibliography
This bibliography contains references cited in the text as well as general sources of security and
risk management information. References are grouped into the following categories:
Risk management
Security practices
System survivability
Web security
Risk Management
Alberts, Christopher; Behrens, Sandra; Pethia, Richard; and Wilson, William. Operationally
Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE(SM)) Framework, Version 1.0
(CMU/SEI-99-TR-017, ADA 367718). Pittsburgh, PA: Software Engineering Institute, Carnegie
Mellon University, 1999. Available online:
<[Link]
Alberts, Christopher J. et al. "Health Information Risk Assessment and Management: Toolkit
Section 4.5." CPRI Toolkit: Managing Information Security in Health Care, Version 2. Available
online: <[Link] (2000).
Alberts, Christopher J. and Dorofee, Audrey J. OCTAVE(SM) Method Implementation Guide, v2.0.
Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2001. Can be
ordered online: <[Link]
Alberts, Christopher and Dorofee, Audrey. Operationally Critical Threat, Asset, and Vulnerability
Evaluation (OCTAVE(SM)) Criteria (CMU/SEI-01-TR-016). Pittsburgh, PA: Software Engineering
Institute, Carnegie Mellon University, 2001. Available online:
<[Link]
Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John Wiley &
Sons, Inc., 1996.
Charette, Robert N. Software Engineering Risk Analysis and Management. New York: Intertext
Publications/Multiscience Press, Inc., 1989.
Dorofee, A.; Walker, J.; Alberts, C.; Higuera, R.; Murphy, R.; and Williams, R. Continuous Risk
Management Guidebook. Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon
University, 1996.
Freund, John E. Introduction to Probability. Mineola, NY: Dover Publications, Inc., 1993.
United States General Accounting Office. Executive Guide: Information Security Management
(GAO/AIMD-98-68). Washington, DC: GAO, May 1998.
United States General Accounting Office. Information Security Risk Assessment, Practices of
Leading Organizations (GAO/AIMD-00-33). Washington, DC: GAO, November 1999.
Haimes, Yacov Y. Risk Modeling, Assessment, and Management. New York: John Wiley & Sons,
Inc., 1996.
Harvard Business Review. Harvard Business Review on Managing Uncertainty. Boston: Harvard
Business School Press, 1999.
Institute of Electrical and Electronics Engineers. IEEE Standard for Software Lifecycle Processes
—Risk Management (IEEE Std 1540-2001). New York: IEEE, Inc., 2001.
Lange, Scott K.; Davis, Julie K.; Jaye, Daniel; Erwin, Dan; Mullarney, James X.; Clarke, Leo L.;
and Loesch, Martin C. e-Risk: Liabilities in a Wired World. Cincinnati, OH: National Underwriter
Co., 2000.
Peltier, Thomas R. Information Security Risk Analysis. Boca Raton, FL: Auerbach Publications,
2001.
Rowe, William D. An Anatomy of Risk. Malabar, FL: Robert E. Krieger, 1988.
Van der Heijden, Kees. Scenarios: The Art of Strategic Conversation. Chichester, England: John
Wiley & Sons, Inc., 1997.
Abrams, Marshall D.; Podell, Harold J.; and Jajodia, Sushil. Information Security: An Integrated
Collection of Essays. Los Alamitos, CA: IEEE Computer Society Press, 1995.
Allen, Julia et al. "Improving the Security of Networked Systems." Crosstalk: The Journal of
Defense Software Engineering 13, 10 (October 2000). Available online:
<[Link]
Ahuja, Vijay. Network and Internet Security. Boston, MA: AP Professional, 1996.
Atkinson, Randall J. "Toward a More Secure Internet." IEEE Computer 30, 1 (January 1997): 57–
61.
Barrett, Daniel J. Bandits on the Information Superhighway. Sebastopol, CA: O'Reilly and
Associates, 1996.
Bosselaers, Antoon and Preneel, Bart. "Integrity Primitives for Secure Information Systems:
Final Report of RACE Integrity Primitives Evaluation RIPE-RACE 1040." Lecture Notes in
Computer Science: 1007. Berlin: Springer, 1995.
Caelli, William; Longley, Dennis; and Shain, Michael. Information Security Handbook. New York:
Stockton Press, 1991.
Cohen, Frederick B. Protection and Security on the Information Superhighway. New York: Wiley,
1995.
Computer Security Institute. "2000 CSI/FBI Computer Crime and Security Survey." Computer
Security Issues and Trends, vol. VI, no. 1 (spring 2000).
Davis, Peter T., ed. Securing Client/Server Computer Networks. New York: McGraw-Hill, 1996.
Dempsey, Rob and Bruce, Glen. Security in Distributed Computing. Upper Saddle River, NJ:
Prentice-Hall, Inc., 1997.
Ermann, D. M.; Williams, M. B.; and Shauf, M. S. Computers, Ethics, and Society. Second
edition. New York: Oxford University Press, 1997.
Fites, P. E.; Kratz, M. P.; and Brebner, A. F. Control and Security of Computer Information
Systems. Rockville, MD, Computer Science Press, Inc., 1989.
Ford, Warwick and Baum, Michael. Secure Electronic Commerce. New York: Prentice-Hall, 1997.
Gollmann, Dieter. Computer Security. Chichester, England: John Wiley & Sons, 1999.
Howard, John and Longstaff, Tom. A Common Language for Computer Security Incidents.
(SAND98-8997). Albuquerque, NM: Sandia National Laboratories, 1998.
Hutt, Arthur E.; Bosworth, Seymour; and Hoyt, Douglas B. Computer Security Handbook. Third
edition. New York: John Wiley & Sons, Inc. 1995.
Kaufman, C.; Perlman, R.; and Speciner, M. Network Security: Private Communication in a
Public World. Englewood Cliffs, NJ: PTR Prentice-Hall, 1995.
Kessler, Gary C. "Web of Worries." Information Security (April 2000). Available online:
<[Link]
King, Nathan. "Sweeping Changes for Modem Security." Information Security (June 2000).
Available online: <[Link] (2000).
Kyas, O. Internet Security, Risk Analysis, Strategies and Firewalls. Boston: Int'l Thompson,
1997.
Laswell, Barbara; Simmel, Derek; and Behrens, Sandra. Information Assurance Curriculum and
Certification: State of the Practice (CMU/SEI-99-TR-021, ADA 367575). Pittsburgh, PA: Software
Engineering Institute, Carnegie Mellon University, 1999. Available online:
<[Link]
Longstaff, Thomas et al. "Security of the Internet," 231–255. The Froelich/Kent Encyclopedia of
Telecommunications, vol. 15. New York: Marcel Dekker, Inc., 1997. Also available online:
<[Link]
McGraw, Gary and Felten, Edward W. Java Security. New York: John Wiley and Sons, Inc., 1996.
Merkow, M. S. and Breithaupt, J. The Complete Guide to Internet Security, New York: AMACOM,
American Management Association, 2000: pp. 95–109.
NIST. NIST Federal Information Processing Standards (FIPS) on Computer Security. Available
online: <[Link] (2001).
NCSC. NCSC Glossary of Computer Security Terms. Ft. George G. Meade, MD: National
Computer Security Center; Washington, DC: For sale by the Supt. of Docs., U.S. Government
Printing Office, 1989.
National Research Council. Computers at Risk: Safe Computing in the Information Age.
Washington DC: National Academy Press, 1991.
Parker, Donn B. Fighting Computer Crime. New York: John Wiley & Sons, 1998.
Pethia, Richard. Internet Security Issues: Testimony Before the U.S. Senate Judiciary
Committee. Carnegie Mellon University, Software Engineering Institute, May 25, 2000. Available
online: <[Link]
Pfleeger, Charles P. Security in Computing. Second edition. Upper Saddle River, NJ: Prentice-
Hall, 1997.
Power, Richard. "1999 CSI/FBI Computer Crime and Security Survey." Computer Security
Journal, volume XV, 2. San Francisco, CA: Computer Security Institute, 1999.
Ruiu, Dragos. Cautionary Tales: Stealth Coordinated Attack HOWTO. Available online:
<[Link] (1999).
Russell, Deborah and Gangemi, Sr., G. T. Computer Security Basics. Sebastopol, CA: O'Reilly &
Associates, Inc., 1991.
[Link] Publishing. Maximum Security: A Hacker's Guide to Protecting Your Internet Site and
Network. Indianapolis, IN: [Link] Publishing, 1997.
SANS Institute. How to Eliminate the Ten Most Critical Internet Security Threats: The Experts'
Consensus, Version 1.32. Available online: <[Link] (2001).
Schneider, Fred B., ed. Trust in Cyberspace. Washington, DC: National Academy Press, 1999.
Sellens, John. "System and Network Monitoring." login: 25, 3 (June 2000).
Stevens, W. Richard. TCP/IP Illustrated, Volume 1: The Protocols. Reading, MA: Addison-Wesley,
1994.
Stoll, Cliff. The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage. New
York: Doubleday, 1989.
Wadlow, Thomas A. The Process of Network Security. Reading, MA: Addison-Wesley, 2000.
Allen, Julia et al. Security for Information Technology Service Contracts (CMU/SEI-SIM-003, ADA
336329). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1998.
Available online: <[Link]
Best, Reba A. and Piquet, D. Cheryl. Computer Law and Software Protection: A Bibliography of
Crime, Liability, Abuse, and Security. Jefferson, NC: McFarland, 1993.
Cappel, James J.; Vanecek, Michael T.; and Vedder, Richard G. "CEO and CIO Perspectives on
Competitive Intelligence." Communications of the ACM (August 1999).
Dijker, Barbara L., ed. Short Topics in System Administration. Vol. 2, A Guide to Developing
Computing Policy Documents. Berkeley, CA: The USENIX Association for SAGE, the System
Administrators Guild, 1996.
Guttman, B. and Bagwill, R. Internet Security Policy: A Technical Guide. Gaithersburg, MD: NIST
Special Publication 800-XX, 1997. Available online: <[Link]
Kimmins, John; Dinkel, Charles; and Walters, Dale. Telecommunications Security Guidelines for
Telecommunications Management Network (NIST Special Publication: 800-13). Gaithersburg,
MD: Dept. of Commerce, Technology Administration, National Institute of Standards and
Technology, 1995.
Kuncicky, D. and Wynn, B. A. Short Topics in System Administration, Vol. 4, Educating and
Training System Administrators: A Survey. Berkeley, CA: The USENIX Association for the
System Administrators Guild (SAGE), 1998.
Oppenheimer, David L.; Wagner, David A.; and Crabb, Michele D. Short Topics in System
Administration, Vol. 3, System Security: A Management Perspective. Berkeley, CA: The USENIX
Association for the System Administrators Guild (SAGE), 1997.
Regan, Priscilla M. Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill,
NC: University of North Carolina Press, 1995.
Schweitzer, James A. Protecting Business Information: A Manager's Guide. Boston: Butterworth-
Heinemann, 1996.
Sterling, Bruce. The Hacker Crackdown: Law and Disorder on the Electronic Frontier. New York:
Bantam Books, 1992.
Wood, Charles Cresson. Information Security Policies Made Easy Version 7. Baseline Software,
Inc., 2000.
Security Practices
Allen, Julia H. The CERT(R) Guide to System and Network Security Practices. Reading, MA:
Addison-Wesley, 2001.
British Standards Institution. Information Security Management, Part 1: Code of Practice for
Information Security Management of Systems (BS7799: Part 1 : 1995). London, England: British
Standards Institution, February 1995.
"Security Standards and Electronic Signature Standards; Proposed Rule," Federal Register, vol.
63, no. 155 (August 1998): 43242–43280.
Swanson, Marianne and Guttman, Barbara, Generally Accepted Principles and Practices for
Securing Information Technology Systems, (NIST SP 800-14). National Institute of Standards
and Technology, Department of Commerce, Washington, DC: 1996.
System Survivability
Ellison, Robert; Linger, Richard; and Mead, Nancy. Case Study in Survivable Network System
Analysis (CMU/SEI-98-TR-014, ADA 355070). Pittsburgh, PA: Software Engineering Institute,
Carnegie Mellon University, 1998. Available online:
<[Link]
Firth, Robert et al. An Approach for Selecting and Specifying Tools for Information Survivability
(CMU/SEI-97-TR-009, ADA 350658). Pittsburgh, PA: Software Engineering Institute, Carnegie
Mellon University, 1997. Available online:
<[Link]
Mead, Nancy R.; Ellison, Robert J.; Linger, Richard C.; Longstaff, Thomas; and McHugh, John.
Survivable Network Analysis Method (CMU/SEI-2000-TR-013, ADA 383771). Pittsburgh, PA:
Software Engineering Institute, Carnegie Mellon University, 2000. Available online:
<[Link]
Mead, N. R.; Lipson, H. F.; and Sledge, C. A. "Toward Survivable COTS-Based Systems," Cutter
IT Journal 14, 2 (February 2001): 4–11.
Salter, Chris; Saydjari, O. Sami; Schneier, Bruce; and Wallner, Jim. "Toward a Secure System
Engineering Methodology." New Security Paradigms Workshop, 1998. Available online:
<[Link] (1998).
Allen, Julia and Kossakowski, Klaus-Peter. Securing Network Servers (CMU/SEI-SIM-010, ADA
379469). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2000.
Available online: <[Link]
Ford, Gary et al. Securing Network Servers (CMU/SEI-SIM-007, ADA 361387). Pittsburgh, PA:
Software Engineering Institute, Carnegie Mellon University, 1999. Available online:
<[Link]
Internet Engineering Task Force, Network Working Group. Guidelines for the Secure Operation of
the Internet (RFC 1281). Available online: <[Link] (1991).
Internet Engineering Task Force, Site Security Policy Handbook Working Group. Site Security
Handbook (RFC 2196, FYI 8). Available online: <[Link]
(1997).
National Institute of Standards and Technology. Internet Security Policy: A Technical Guide.
Washington, DC: National Institute of Standards and Technology. Available online:
<[Link] (1998).
Kabay, Michel E. The NCSA Guide to Enterprise Security: Protecting Information Assets. New
York: McGraw-Hill, 1996.
Northcutt, Stephen. Network Intrusion Detection: An Analyst's Handbook. Indianapolis, IN: New
Riders Publishing, Macmillan, 1999.
Web Security
How to Remove Meta-characters from User-Supplied Data in CGI Scripts. Available online:
<[Link] (1999).
Frequently Asked Questions About Malicious Web Scripts Redirected by Web Sites. Available
online: <[Link] (2000).
Garfinkel, S. and Spafford, G. Web Security and Commerce. Sebastopol, CA: O'Reilly and
Associates, Inc., 1997.
Kossakowski, Klaus-Peter and Allen, Julia. Securing Public Web Servers (CMU/SEI-SIM-011).
Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2000. Available
online: <[Link]
Larson, Eric and Stephens, Brian. Web Servers, Security and Maintenance. Upper Saddle River,
NJ: Prentice-Hall, 2000.
McCarthy, Vance. "Web Security: How Much Is Enough?" Datamation (January 1997).
Rubin, A. D.; Geer, D.; and Ranum, M. Web Security Sourcebook. New York: John Wiley and
Sons, 1997.
Rubin, Aviel and Geer, Daniel. "A Survey of Web Security." IEEE Computer (September 1998).
Soriano, Ray and Bahadur, Gary. "Securing Your Web Server." Sys Admin (May 1999).
Spainhour, Stephen and Quercia, Valerie. Webmaster in a Nutshell. Sebastopol, CA: O'Reilly and
Associates, 1996.
Stein, Lincoln. Web Security: A Step-by-Step Reference Guide. Reading, MA: Addison-Wesley,
1998.
Stein, Lincoln. The World Wide Web Security FAQ. Available online:
<[Link] (1999).
Allen, Julia et al. State of the Practice of Intrusion Detection Technologies. (CMU/SEI-99-TR-028,
ADA 357846). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1999.
Available online:
<[Link]
Allen, Julia and Stoner, Ed. Detecting Signs of Intrusion (CMU/SEI-SIM-009). Pittsburgh, PA:
Software Engineering Institute, Carnegie Mellon University, 2000. Available online:
<[Link]
Bace, Rebecca Gurley. Intrusion Detection. Indianapolis, IN: Macmillan Technical Publishing,
2000.
CERT Coordination Center. How the FBI Investigates Computer Crime. Available online:
<[Link] (2000).
Dunigan, Tom and Hinkel, Greg. "Intrusion Detection and Intrusion Prevention on a Large
Network: A Case Study." Proceedings of the 1st Workshop on Intrusion Detection and Network
Monitoring. Santa Clara, CA. April 9–12, 1999. Available online:
<[Link]
dunigan_html/[Link]>.
Escamilla, Terry. Intrusion Detection: Network Security Beyond the Firewall. New York: Wiley
Computer Publishing, 1998.
Howard, John. An Analysis of Security Incidents on the Internet: 1989–1995. Pittsburgh, PA:
Carnegie Mellon University, 1997. Available online:
<[Link]
Maiwald, Eric. "Automating Response to Intrusions," Proceedings of the Fourth Annual UNIX and
NT Network Security Conference. Orlando, FL, October 24–31, 1998. Bethesda, MD: The SANS
Institute, 1998.
Marchany, Randy. "Incident Response: Scenarios and Tactics." Proceedings of the Fourth Annual
UNIX and NT Network Security Conference. Orlando, FL, October 24–31, 1998. Bethesda, MD:
The SANS Institute, 1998.
Newsham, Tim and Ptacek, Tom. Insertion, Evasion, and Denial of Service: Eluding Network
Intrusion Detection. Available online: <[Link] under Security Info (1998).
Northcutt, Stephen. "Computer Security Incident Handling: Step-by-Step." Proceedings of the
Fourth Annual UNIX and NT Network Security Conference. Orlando, FL, October 24–31, 1998.
Bethesda, MD: The SANS Institute, 1998.
Northcutt, Stephen. Network Intrusion Detection: An Analyst's Handbook. Indianapolis, IN: New
Riders Publishing, 1999.
Ranum, Marcus. "Some Tips on Network Forensics." Computer Security Institute 198
(September 1999): 1–8.
Reavis, Jim. "Do You Have an Intrusion Detection Response Plan?" Network World Fusion
(September 13, 1999). Available online:
<[Link]
SANS Institute. Computer Security Incident Handling Step by Step Guide, Version 1.5. Bethesda, MD:
The SANS Institute. May 1998.
Schultz, Eugene. "Effective Incident Response." Proceedings of the Fourth Annual UNIX and NT
Network Security Conference. Orlando, FL, October 24–31, 1998: Bethesda, MD: The SANS
Institute, 1998.
Toigo, Jon William. Disaster Recovery Planning for Computers and Communication Resources.
New York: John Wiley, 1996.
West-Brown, Moira J.; Stikvoort, Don; and Kossakowski, Klaus-Peter. Handbook for Computer
Security Incident Response Teams (CSIRTs) (CMU/SEI-98-HB-001). Pittsburgh, PA: Software
Engineering Institute, Carnegie Mellon University, 1998. Available online:
<[Link]
Finally, you should remember that the results you achieve from any information security risk
evaluation are meaningful only if you use them. The final strategy and plans resulting from the
data acquired and analyzed during processes 1 through 7 will have meaning only if they are
implemented and tracked to completion.
This report is the final result of applying the OCTAVE Method within MedSite. It was written by
the analysis team and provides our recommendations for an organizationwide protection
strategy, mitigation plans for risks to our critical assets, and short-term action items. Section 4
of this report also contains a considerable amount of additional information gathered during the
course of OCTAVE. As a reminder, the analysis team comprised the following members:
L. Pierce
J. Cutter
K. Brown
The protection strategy outlined in Table A-1 focuses on improving the security posture of the
entire MedSite organization. We developed the protection strategy after analyzing the results of
the surveys completed by senior and operational area managers as well as general and
information technology staff members during processes 1 to 3 of the OCTAVE Method. We also
considered the risks identified during OCTAVE when developing the strategy. The protection
strategy is organized according to the structure of the OCTAVE catalog of practices. The results
of the security practice surveys are contained in Section 4 of this report.
Provide annual training in physical security for all staff (including staff in outlying clinics).
Security strategy: Incorporate results from this analysis team into the MedSite strategic plan, upon approval of the executive committee.
Security management: Allocate greater funds for system security. Annual budgeting should weigh expenditures to forecast future needs adequately.
Security policies and regulations: Disseminate revised policies and procedures at all levels and actively
The risk profile for paper medical records is shown in Figures A-1 and A-2. There are two trees in
the risk profile for paper medical records, each with a specific mitigation plan. Because network
access and system problems do not affect the paper records, trees for these threat categories
are not included in the risk profile.
Figure A-1. Risk Profile for Paper Medical Records: Human Actors Using Physical
Access
Figure A-2. Risk Profile for Paper Medical Records: Other Problems
Table A-6. Types of Impact and Impact Values for Paper Medical Records
Personal computers are used to access PIDS and other systems. Our definition of personal
computers includes all office, treatment room, and lab computers, as well as the laptops used by
some physicians. The security requirements for personal computers are defined in Table A-7.
Availability is considered to be the most important security requirement.
Evaluation participants did not consider personal computers to be an important asset. Thus, no
areas of concern were recorded for personal computers. (Note: After reviewing all information,
the analysis team concluded that personal computers were a critical asset to MedSite.)
We defined specific types of impact on the organization resulting from disclosure, modification,
loss or destruction, and unavailability of information on personal computers. We then evaluated
these against a set of evaluation criteria (defined in Section 4) that define what constitutes a
high, medium, and low impact for MedSite. The types of impact related to personal computers
are shown in Table A-8.
Figure A-3. Risk Profile for Personal Computers: Human Actors Using Network
Access
Human actors using physical access (Figure A-4)
Figure A-4. Risk Profile for Personal Computers: Human Actors Using Physical
Access
System problems (Figure A-5)
Note: Staff members use massive mailing lists or misuse the Internet and bog down the systems. The
network has crashed a few times from this behavior.
A.3.3 PIDS
PIDS is essential to the operation of MedSite. MedSite's operations are dependent on the
information provided by this system. The security requirements for PIDS are defined in
Table A-9. Availability is considered the most important security requirement.
We defined specific types of impact on the organization resulting from disclosure, modification,
loss or destruction, and unavailability of information on PIDS. We then evaluated these against a
set of evaluation criteria (defined in Section 4) that define what constitutes a high, medium, and
low impact for MedSite. The types of impact related to PIDS are shown in Table A-11.
Figure A-7. Risk Profile for PIDS: Human Actors Using Network Access
ABC Systems is responsible for the maintenance of PIDS and some of the other systems we have
at MedSite. We rely on them to keep PIDS up and running. Because we depend on PIDS, we also
depend on the services provided by ABC Systems. The security requirements at ABC Systems
are defined in Table A-12. Availability is the most important security requirement. Note that
confidentiality does not apply.
Evaluation participants did not identify specific areas of concern for ABC Systems. The analysis
team constructed the threat profile during the process 4 workshop.
We defined the specific types of impact on the organization resulting from modification and
unavailability of the service provided by ABC Systems. We then evaluated these against a set of
evaluation criteria (defined in Section 4) that define what constitutes a high, medium, and low
impact for MedSite. The types of impact related to ABC Systems are shown in Table A-13.
The risk profile for ABC Systems is shown in Figure A-11. There is only one tree in the risk
profile:
A.3.5 ECDS
The Emergency Care Data System (ECDS) is essential to the efficient operation of emergency
rooms. It is also representative of systems we have that are linked to PIDS but are maintained
by the local staff. ECDS is used to maintain and update patient records and billing for emergency
cases, but it is not used during actual emergencies. The security requirements for ECDS are
defined in Table A-14. Integrity and confidentiality are considered the most important security
requirements.
We defined specific impacts on the organization resulting from disclosure, modification, loss or
destruction, and unavailability of information on ECDS. We then evaluated these impacts against
a set of evaluation criteria (defined in Section 4) that define what constitutes a high, medium,
and low impact for MedSite. The types of impact related to ECDS are shown in Table A-16.
Systems are susceptible to malicious code and virus activity (in part due to the location/configuration of the firewall). Outcomes: modification, loss/destruction, interruption.
Other problems: ABC Systems does not recognize the importance of the hospital/health care organization. Priorities of the hospital are not understood. Outcome: interruption.
The risk profile for ECDS is shown in Figures A-12 through A-15. There are four trees in the risk
profile, each with a specific mitigation plan. We also checked for consistency between the risk
profiles for PIDS and ECDS and identified several threats previously not identified for ECDS.
Figure A-12. Risk Profile for ECDS: Human Actors Using Network Access
Figure A-13. Risk Profile for ECDS: Human Actors Using Physical Access
Figure A-15. Risk Profile for ECDS: Other Problems
Table A-16. Types of Impact and Impact Values for ECDS
Once we identified the critical assets and the threats to those assets, we identified key
infrastructure components to evaluate for technology vulnerabilities as part of phase 2 of the
OCTAVE Method. This section summarizes our results and specific recommendations based on
the results of phase 2. The summary provides a snapshot of how MedSite is managing its
technology vulnerabilities.
Figure A-16 shows a high-level map of our computing infrastructure. As a part of the OCTAVE
Method, we identified systems of interest for each critical asset and looked at access paths to
identify key classes of components. From this, we selected specific instances of the key classes
to evaluate for technology vulnerabilities.
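The access-path analysis described above amounts to tracing routes through the infrastructure from an access point to the system of interest; every component class appearing on some route becomes a candidate for evaluation. A minimal sketch might enumerate such paths with a breadth-first search. The topology below is hypothetical and far simpler than Figure A-16:

```python
from collections import deque

# Hypothetical MedSite-style topology: an edge means traffic can flow that way.
NETWORK = {
    "internet": ["firewall"],
    "firewall": ["internal network"],
    "internal network": ["PIDS server", "ECDS server", "office PCs"],
    "office PCs": ["internal network"],
    "PIDS server": [],
    "ECDS server": [],
}

def access_paths(graph, start, target):
    """Enumerate simple paths from an access point to the system of interest.
    Components appearing on any path are candidates for the vulnerability
    evaluation."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:   # avoid revisiting, i.e., no cycles
                queue.append(path + [nxt])
    return paths

for p in access_paths(NETWORK, "internet", "PIDS server"):
    print(" -> ".join(p))
# internet -> firewall -> internal network -> PIDS server
```

In this toy topology the firewall and the internal network lie on the only path from the Internet to the PIDS server, so both would be selected as key component classes, mirroring the selection logic used in phase 2.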
Figure A-17. Access Paths and Key Classes of Components for PIDS
Although paper records do not have a network access path, printed email is sometimes included
in paper records, as are printouts from systems such as PIDS. We wanted to evaluate the PDAs
and the local email server, but at this point we could not determine how the PDAs used by the
physicians were linking into the system. A representative from ABC Systems was not available to
help, so this open item should be addressed as soon as possible.
Note that we did not conduct a physical vulnerability evaluation of MedSite; that was considered
outside our scope of responsibility. However, we do recommend that a physical security audit or
evaluation be conducted to verify that access to physical records is sufficiently controlled. Information that
we gathered during the OCTAVE Method leads us to believe that paper medical records (stored
in the Records Retention room) are physically vulnerable.
Table A-17 illustrates the system(s) of interest and key classes of components for each of the
critical assets. There was some commonality among the key classes of components to be
evaluated.
Table A-21 shows the recommendations that resulted from the technology vulnerability evaluation.
Key components: office PCs, home PCs, firewall, PIDS server, ECDS server, routers. [1]
Vulnerability evaluation approach: ABC Systems personnel will be responsible for running all of the tools. MedSite's IT personnel will be present and will also get some on-the-job training.
Tool(s): [2] vulnerability scanner (Vulnerabilities-R-Found, version 6.73); network/Internet level tool (Improve-UR-Network, version 4.8).
Rationale: These are common tools used at ABC Systems. Our IT personnel do not have the knowledge to run them but want to learn.
[1] Real IP addresses are not supplied in this table.
[2] These are fictitious tools.
We defined the impact evaluation criteria and then evaluated each impact against those criteria.
We recommend that these evaluation criteria, shown in Table A-22, become a standard for
MedSite. We include criteria for the following areas:
Reputation/customer confidence
Life/health of customers
Productivity
Fines/legal penalties
Finances
Other (facilities)
Inability to track performance of facilities or providers accurately
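Evaluating an impact against the criteria amounts to comparing it with the organization's definitions of high, medium, and low for the relevant area. The sketch below illustrates this for the finances area; the percentage thresholds and function name are invented for illustration and are not MedSite's actual criteria from Table A-22:

```python
# Hypothetical impact evaluation criteria for the "Finances" area; real
# criteria would come from Table A-22 and would exist for every area listed above.
FINANCE_CRITERIA = {
    "high": "yearly operating costs increase by more than 10 percent",
    "medium": "yearly operating costs increase by 5 to 10 percent",
    "low": "yearly operating costs increase by less than 5 percent",
}

def evaluate_financial_impact(cost_increase_pct: float) -> str:
    """Assign a high/medium/low impact value per the (hypothetical) criteria above."""
    if cost_increase_pct > 10:
        return "high"
    if cost_increase_pct >= 5:
        return "medium"
    return "low"

print(evaluate_financial_impact(7))  # medium
```

Standardizing such criteria, as recommended above, ensures that impact values mean the same thing across evaluations.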
The complete list of assets identified by personnel during processes 1 to 3 is shown in Table A-23.
This list of assets highlights differences in opinion about what is important to MedSite. We
recommend that any additional work with respect to documenting MedSite's information-related
assets should start with this list.
Table A-23. Assets Grouped by Organizational Level
Medical Logistics System (MLS)
External relations
Yes: 75 percent or more of respondents answered that the practice is most likely used by the organization.
No: 75 percent or more of respondents answered that the practice is most likely not used by the organization.
Unclear: Based on the respondents' answers, it is not clear whether the practice is used by the organization.
I do not understand my role or responsibility for security.
2. Administration (including periodic reviews and updates)
3. Communication
2. Understanding the security policies and procedures of external organizations
3. Ending access to information by terminated external personnel
We lock up our offices at the end of the day.
Location/distribution of terminals
The need to share terminals
Shared codes to cipher locks
Multiple access points to rooms
IT staff: Hardware security is very good.
Table A-32. Monitoring and Auditing Physical Security
Unique user identification is required for all information system users, including third-party users.
We force users to change passwords regularly.
ABC Systems has reported very few intrusions.
Table A-34. System Administration Tools
Selecting vulnerability evaluation tools, checklists, and scripts
Keeping up to date with known vulnerability types and attack methods
Reviewing sources of information on vulnerability announcements, security alerts, and notices
Identifying infrastructure components to be evaluated
Scheduling vulnerability evaluations
Interpreting and responding to the results
Maintaining secure storage and disposition of vulnerability data
Vulnerability management procedures are followed and are periodically reviewed and updated. (Unclear)
Technology vulnerability assessments are performed on a periodic basis, and vulnerabilities are addressed when they are identified. (Unclear)
Comments
Organizational Level / Protection Strategy Practices / Organizational Vulnerabilities
Senior management
Operational area management
Staff
IT staff: ABC Systems does all of the vulnerability management and assessment activities. They
do a good job. / We haven't been trained in what to do with those vulnerability reports. We
usually file them in a drawer.
Table A-38. Encryption
Security strategies,
policies, and procedures
History of security
compromises
Appendix B. Worksheets
This appendix contains a set of worksheets that are used during the OCTAVE Method. We have
classified the worksheets into the following types:
1. Knowledge elicitation worksheets. These are used during processes 1 through 3. There is
one set of worksheets to be used for all three processes.
2. Asset profile. This profile is a set of worksheets that includes all of the information
gathered or created for a critical asset. You will complete an asset profile for each critical
asset.
3. Strategies and actions. These worksheets are used when developing the
organizationwide protection strategy and the action list in process 8A.
All of the worksheets include a basic set of instructions derived from the OCTAVE Method
Implementation Guide, v2.0 [Alberts 01].
Processes 1 to 3 elicit knowledge from senior managers, operational area managers, general
staff members, and information technology staff members. Participants in processes 1 to 3
provide their perspectives on assets that are important to the success of the organization, the
way in which important assets are threatened, and security requirements for important assets.
The worksheets used when you elicit the above information are identical for all participants; we
provide only one set. During the last activity of processes 1 to 3, you elicit information about
security practices currently used by the organization and the organizational vulnerabilities that
are present in the organization. There is a different survey for each organizational level, and all
of the surveys are included in this appendix. The final worksheet in processes 1 to 3 is for a
follow-up discussion after participants complete their surveys and is the same for all
participants. The following worksheets are provided in this section of Appendix B:
Asset Worksheet
Practice Surveys
- IT staff survey
You normally use these worksheets (except for the surveys) to prompt the participants and
stimulate a discussion among them. However, you could ask them to complete the worksheets in
advance and be prepared to discuss their answers. The workshop's scribe records the official
results of each workshop. The scribe can record data on flip charts, on copies of these worksheets,
or in some other, more abbreviated, electronic form.
Instructions
Processes 1 to 3
Activity: Identify Assets and Relative Priorities (Section 5.2)
Purpose: To identify assets that are important to participants (senior managers, operational area
managers, general staff, or information technology staff)
Instructions:
1. Participants brainstorm a list of assets and then select those assets considered to be most
important. Use the following questions to guide your discussions:
o Are there any other assets that you are required to protect (e.g., by law or regulation)?
o From the assets that you have identified, which are the most important? What is your
rationale for selecting these assets as important?
3. Record all assets identified during the workshop and note which ones were identified as most
important by the participants.
Asset Worksheet
1. What are your important assets?
Instructions
Processes 1 to 3
Activity: Identify Security Requirements for Most Important Assets (Section 5.4)
Purpose: To identify security requirements for each important asset previously identified by the
participants
Instructions:
1. Participants brainstorm a list of security requirements for their important assets and then
select which requirement is considered most important for each asset. Use the following
questions to guide your discussions:
3. Communication
Position: ________________________________________________________
IT Staff Survey
Practice: Is this practice used by your organization?
Security Awareness and Training
Staff members understand their security roles and responsibilities. This is documented and
verified. (Yes / No / Don't know)
There is adequate in-house expertise for all supported services, mechanisms, and technologies
(e.g., logging, monitoring, or encryption), including their secure operation. This is documented
and verified. (Yes / No / Don't know)
Security awareness, training, and periodic reminders are provided for all personnel. Staff
understanding is documented and conformance is periodically verified. (Yes / No / Don't know)
Security Strategy
The organization's business strategies routinely incorporate security considerations. (Yes / No /
Don't know)
Security strategies and policies take into consideration the organization's business strategies
and goals. (Yes / No / Don't know)
Security strategies, goals, and objectives are documented and are routinely reviewed, updated,
and communicated to the organization. (Yes / No / Don't know)
Security Management
Management allocates sufficient funds and resources to information security activities. (Yes /
No / Don't know)
Security roles and responsibilities are defined for all staff in the organization. (Yes / No / Don't
know)
The organization's hiring and termination practices for staff take information security issues
into account. (Yes / No / Don't know)
The organization manages information security risks by assessing risks to information security
and taking steps to mitigate information security risks. (Yes / No / Don't know)
Management receives and acts upon routine reports summarizing security-related information
(e.g., audits, logs, risk and vulnerability assessments). (Yes / No / Don't know)
Security Policies and Regulations
The organization has a comprehensive set of documented, current policies that are periodically
reviewed and updated. (Yes / No / Don't know)
There is a documented process for management of security policies: (Yes / No / Don't know)
1. Creation
2. Administration (including periodic reviews and updates)
3. Communication
Instructions
Processes 1 to 3
Activity: Capture Knowledge of Current Security Practices and Organizational Vulnerabilities
(Section 5.5)
Purpose: To build on the survey information by identifying specific security practices used by the
organization and organizational vulnerabilities present in the organization
Instructions:
1. Participants brainstorm a list of security practices and organizational vulnerabilities. Use the
following questions to guide your discussions:
o Which issues from the survey would you like to discuss in
more detail?
3. Are there specific security policies, procedures, and practices unique to certain assets?
What are they?
Use the worksheets in this section to document the analysis results during processes 4 through
8A for each critical asset. Collectively, these worksheets are called an asset profile; you should
develop one asset profile for each critical asset.
The worksheets in this section generally appear in the order in which they are used. Any
exceptions are specifically noted in the instructions for a section. The following asset profile
worksheets are presented in this section:
Process 4
- Threat Profile
Process 5
- System(s) of Interest
Process 6
Process 7
Process 8
Instructions
Process 4
Activity: Select Critical Assets (Section 6.3)
Purpose: To document information pertaining to the selection of a critical asset
Instructions: Record the following information for the critical asset on the Critical Asset
Information Worksheet:
A brief description of the critical asset, including who controls it, who is responsible for it, who
uses it, and how it is used
Instructions
Process 4
Activity: Refine Security Requirements for Critical Assets (Section 6.4)
Purpose: To refine security requirements for the critical asset
Instructions:
1. Review any security requirements and areas of concern for the critical asset that were
identified during processes 1 to 3.
2. Document the security requirements for the critical asset in the third column of the Security
Requirements Worksheet. Use the following questions as prompts:
o Confidentiality
o Integrity
o Availability
o Other
NOTE: You do not complete the entire worksheet during one activity. First complete the threat
profile for the critical asset (all fields in the Threat Profile Worksheet with the exception of the
impact field). Record impact values on the Threat Profile Worksheet after evaluating impacts in
Section B.2.11.
Instructions
Process 4
Activity: Identify Threats to Critical Assets (Section 6.5)
Purpose: To identify the range of threats that affect the critical assets, creating a threat profile
for the critical asset
Instructions:
1. Review the security requirements (Section B.2.2) and critical asset information (Section
B.2.1). Also review any areas of concern for the critical asset identified during processes 1 to 3.
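When a team keeps its threat profiles electronically, each branch of the threat tree can be stored as a simple record. The sketch below uses the standard OCTAVE threat properties (asset, access, actor, motive, outcome) plus the impact value that is recorded later; the class layout and example values are our own illustration, not part of the worksheet:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatProfileEntry:
    """One branch of a critical asset's threat tree."""
    asset: str                    # the critical asset being threatened
    access: Optional[str]         # "network" or "physical"; None for non-human actors
    actor: str                    # e.g., "inside", "outside"
    motive: Optional[str]         # "accidental" or "deliberate"; None for non-human actors
    outcome: str                  # "disclosure", "modification", "loss/destruction", "interruption"
    impact: Optional[str] = None  # "high", "medium", or "low"; filled in later (Section B.2.11)

# Example branch: an insider deliberately discloses PIDS data via the network.
entry = ThreatProfileEntry(
    asset="PIDS", access="network", actor="inside",
    motive="deliberate", outcome="disclosure",
)
```

Leaving `impact` empty until process 7 mirrors the note above: the threat profile is completed first, and impact values are recorded afterward.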
Use the worksheets in this section when you define and document the organizationwide
protection strategy and near-term action items during process 8A. The following worksheets are
contained in this section:
Instructions
Process 8
Activity: Before the Workshop: Consolidate Information from Processes 1 to 3 (Section 10.2)
Purpose: To compile current security practice and organizational vulnerability information from
processes 1 to 3
Instructions:
1. Note that there are two tables for each practice area in the Current Security Practices
Worksheet. The first table summarizes the results of the surveys that were completed during
processes 1 to 3. The second table consolidates contextual information (protection strategy
practices and organizational vulnerabilities) that was identified during the protection strategy
discussion from processes 1 to 3.
Issues: What issues related to collaborative security management cannot be addressed by your
organization?
Issues: What issues related to contingency planning and disaster recovery cannot be addressed
by your organization?
What external experts could help you with physical security? How will you
communicate your requirements? How will you verify that your
requirements were met?
Issues: What issues related to physical security cannot be addressed by your organization?
Are your policies and procedures sufficient for your staff security needs?
How could they be improved?
Who has responsibility for staff security? Should anyone else be involved?
Issues: What issues related to staff security cannot be addressed by your organization?
Instructions
Process 8
Activity: Create Action List (Section 10.6)
Purpose: To define action items that people in your organization can take in the near term
without the need for specialized training, policy changes, etc.
Instructions:
1. As you created the protection strategy and risk mitigation plans, you should have recorded
any near-term actions that could help you implement the strategy and plans. Review your list of
actions and decide if any are appropriate for the action list. Record the action items on the
Action List Worksheet.
2. Think about any additional near-term actions that could help you implement your protection
strategy and risk mitigation plans. Answer the following question: What near-term actions need
to be taken?
3. Now that you have identified specific action items for the action list, you need to assign
responsibility for completing them as well as a completion date. Answer the following question
for each action item on your list and record the results on the Action List Worksheet:
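Teams that track the action list electronically can model each item as a record carrying its owner and completion date, matching the responsibility assignment described above. A minimal sketch (field names and the example date are our own, not from the worksheet):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    """One near-term action recorded on the Action List Worksheet."""
    description: str       # what needs to be done
    responsible: str       # who is responsible for completing the action
    completion_date: date  # when the action must be completed

# Hypothetical item based on the PDA issue noted in the MedSite results.
item = ActionItem(
    description="Determine how physicians' PDAs link into the system "
                "with a representative from ABC Systems",
    responsible="IT staff",
    completion_date=date(2002, 6, 30),  # illustrative date only
)
```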
The catalog of practices is deliberately divided into two types of practices: strategic and
operational. Strategic practices focus on organizational issues at the policy level and provide
good general management practices. Strategic practices include issues that are business-related
as well as those that require organizationwide planning and participation. Operational practices
focus on technology-related concerns. They include issues related to how people use, interact
with, and protect technology. Since strategic practices are based on good management practice,
they should be fairly stable over time. Operational practices are more subject to changes as
technology advances and new practices arise to deal with those changes.
The catalog of practices is a general catalog; it is not specific to any domain, organization, or set
of regulations. It can be modified to suit a particular domain's standard of due care or set of
regulations (e.g., the medical community and HIPAA security regulations). It can also be
extended to add organization-specific standards, or it can be modified to reflect the terminology
of a specific domain.
Figure C-1 depicts the structure of the catalog of practices; the details can be found on the
following pages. This catalog was developed using several sources, which are referenced on the
last page of this appendix. In addition to these security-related references, we also used our
experience developing, delivering, and analyzing the results of the Information Security
Evaluation (ISE), a vulnerability assessment technique developed by the Software Engineering
Institute and delivered to a variety of organizations over the past six years.
Christopher Alberts is a senior member of the technical staff in the Networked Systems
Survivability Program at the Software Engineering Institute (SEI). He is responsible for
developing information security risk management methods, tools, and techniques. Alberts is
currently the team leader for OCTAVE(SM), a risk assessment technique designed for self-directed
use by organizations. Prior to his work in networked systems security, Alberts focused on
developing techniques to advance the practice of risk management for software development
projects.
Before joining the SEI, Alberts was a scientist at Carnegie Mellon Research Institute, where he
developed mobile robots for hazardous environments. He also worked at AT&T Bell Laboratories,
where he designed information systems to support AT&T's advanced manufacturing processes.
He has B.S. and M.E. degrees in engineering from Carnegie Mellon University.
Alberts's publications have focused on information security, risk management, and robotic and
automated system development. He is a coauthor of the OCTAVE Method Implementation Guide
and the Continuous Risk Management Guidebook. Among other publications he has coauthored
are Alberts, Christopher J.; Behrens, Sandra G.; Pethia, Richard D.; and Wilson, William R.
Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE(SM)) Framework,
Version 1.0 (CMU/SEI-99-TR-017, 1999); Gallagher, B., Alberts, C., and Barbour, R. Software
Acquisition Risk Management KPA—A Guidebook. Pittsburgh, PA, Software Engineering Institute,
Carnegie Mellon University, 1997; Siegel, M. W., W. M. Kaufman, and C. J. Alberts, "Mobile
Robots for Difficult Measurements in Difficult Environments: Applications to Aging Aircraft,"
Proceedings of the Pittsburgh Meeting, C. Thorpe, ed., International Conference on Intelligent
Autonomous Systems: IAS-3, Pittsburgh, PA, February 1993; Alberts, C. J., W. M. Kaufman, and
M. W. Siegel, "Automated Inspection of Aircraft," Proceedings of Aerospace '92: Maintaining and
Supporting an Aircraft Fleet, Society of Manufacturing Engineers, Dallas, TX, June 1992; and
Kaufman, W. M., M. W. Siegel, and C. J. Alberts, "Robot for Automation of Aircraft Skin
Inspection," Proceedings of the International Workshop on Inspection and Evaluation of Aging
Aircraft, Behnam Bahr, ed., Federal Aviation Administration, Albuquerque, NM, May 1992.
Audrey J. Dorofee
Audrey Dorofee is a senior member of the technical staff in the Networked Systems Survivability
Program at the Software Engineering Institute (SEI). She is responsible for developing,
transitioning, and training security risk management methods, tools, and techniques. She is
currently working on OCTAVE and other areas in the security domain. Prior to this work, Dorofee
was developing and delivering software and systems development risk management practices
and was project lead for risk management in the Risk Program at the SEI.
Prior to joining the SEI, Dorofee was a member of the technical staff with the MITRE Corporation
in Houston, Texas, supporting various types of work for the National Aeronautics and Space
Administration (NASA), including Space Station software environments, user interfaces, and
expert systems. Before MITRE, she was a NASA electronics engineer at the Kennedy Space
Center, working with the Space Shuttle Launch Processing System.
Dorofee's most recent publications have focused on risk management. She is coauthor of the
OCTAVE Method Implementation Guide and the Continuous Risk Management Guidebook. Among
her other publications are "Putting Risk Management into Practice," R. Williams, J. Walker, A.
Dorofee. IEEE Software, May/June 1997, Piscataway, New Jersey; "Team Risk Management: A
New Model for Customer-Supplier Relationships," R. Higuera et al. (CMU/SEI-94-SR-5).
Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1994; "Overview and
Analysis of National Space Transportation System Fault Management," A. Dorofee. (MTR-
92W00026). McLean, VA: The MITRE Corporation, 1992; "Space Station Freedom Advanced
Automation: Evolution with Environments: A Plan for the Software Support Environment," A.
Dorofee. (MTR- 89W00271-03). McLean, VA: The MITRE Corporation, 1989; and "SUIM: An
Alternative for Developing and Effecting User Interfaces," L. Ambrose et al. (MTR-90W00020).
McLean, VA: The MITRE Corporation, 1989.
The key principles of the OCTAVE approach are self-direction, adaptable measures, a defined process, and a foundation for continuous improvement. These principles emphasize the involvement of organizational personnel in leading evaluations, which enhances ownership and implementation of security initiatives. Self-direction ensures that internal staff, who are familiar with the organization's nuances, lead the evaluation process, increasing engagement and improving alignment with organizational goals. Adaptability allows the framework to be tailored to different organizational sizes and contexts, from small medical offices to large multinational corporations, in line with their unique security needs and operational characteristics. The defined process and the focus on continuous improvement call for clear, consistent evaluation activities and ongoing management of information security risks, ensuring that security practices are sustainable and evolve with the organization. When implemented effectively, these principles allow OCTAVE to be tailored to varied organizational contexts by modifying its processes and outputs without compromising its foundational criteria.

Prioritizing security requirements for each critical asset in the OCTAVE Method is necessary to ensure that the organization's limited resources are allocated effectively to protect the assets most vital to fulfilling its mission. This involves assessing the importance of confidentiality, integrity, and availability for each asset, since these factors can differ greatly in priority depending on the asset's role in the organization. Managing conflicts involves examining trade-offs between security requirements, such as deciding whether confidentiality is more crucial than availability or integrity for a particular asset. It requires a collective judgment about which security requirement takes priority, often reached through facilitated discussions and a clear understanding of the implications of these trade-offs for the organization's security strategy. This process forms the basis for risk mitigation strategies and ensures that any security measures implemented align closely with the organization's primary security concerns.

The OCTAVE approach ensures that information security risk evaluations are context-driven by involving the organization's personnel directly in the evaluation process. An interdisciplinary team composed of members from both the business units and the IT department leads the evaluation, allowing the organization to tailor evaluations based on how it uses its infrastructure to meet business objectives. This involvement ensures that evaluations are aligned with the organization's unique business context and security needs, promotes ownership of the findings, and enables both strategic and tactical risk management. OCTAVE's flexibility allows it to be adapted to various organizational sizes and operational environments, encouraging contextual solutions tailored to specific organizational demands, whether large or small. The process aims to bridge business objectives and security considerations, ensuring that risk evaluations account for both organizational and technological issues.

The outputs of an OCTAVE evaluation contribute to managing information security risks by providing a structured view of the risks specific to an organization and its assets. These outputs consist of organizational and technological data, along with risk analysis and mitigation data. The evaluation results in a comprehensive view of the risks and forms the basis for developing protection strategies and risk mitigation plans. This enables organizations to align their security measures with their business objectives, focusing on strategic improvements and tactical risk reductions. The evaluation process also encourages organizational involvement and ownership, ensuring that identified vulnerabilities are addressed and that the organization adapts to new threats and changes over time. By periodically resetting the baseline through evaluations triggered by significant changes or conducted at regular intervals, organizations maintain an up-to-date risk management posture.

The process of selecting critical assets in the OCTAVE Method occurs during process 4 of phase 1. Members from across the organization first provide input on important assets during processes 1 to 3. The analysis team then consolidates this information to select the assets that are most critical to the organization. This step is crucial because it establishes the scope for subsequent evaluation phases, focusing on the "critical few" assets that matter most. The identification of these critical assets informs the threat profiles used throughout risk analysis and mitigation planning, ensuring that the organization's security efforts are directed toward protecting its most valuable resources.

The OCTAVE Method's limitations for small organizations arise from differences in organizational dynamics and resources compared to large organizations. Small organizations often lack a hierarchical structure, requiring fewer workshops, and have limited IT expertise and staff time for conducting comprehensive evaluations. They typically outsource IT management, necessitating collaboration with external vendors on the technological aspects. To address these limitations, the method can be adapted by simplifying processes and condensing workshops, focusing on efficient data collection and leveraging external facilitators where necessary, without compromising the core principles of OCTAVE.

The core components of the OCTAVE approach are principles, attributes, and outputs, which together define the structure of information security risk evaluations. The principles emphasize self-direction, meaning that the organization's personnel lead the evaluation, enhancing internal engagement and accountability. The attributes specify, among other things, the interdisciplinary analysis team that guides the process. The outputs are the results achieved, such as risk profiles and mitigation plans, which support organizational learning and improvement. The approach consists of three phases: building asset-based threat profiles, identifying infrastructure vulnerabilities, and developing security strategies and plans. This comprehensive method integrates organizational and technological evaluations, allowing organizations to better manage information security risks and create actionable mitigation strategies. The method is adaptable, facilitating use in different organizational contexts while maintaining the core OCTAVE criteria.

The analysis team in the OCTAVE Method serves as the central group responsible for conducting the evaluation. Its primary tasks include setting the scope of the evaluation, selecting participants, facilitating workshops, and gathering and analyzing information. The team coordinates with senior and operational managers and with IT staff to conduct vulnerability evaluations and manage logistics for the evaluation. Comprising three to five members drawn from both business and IT departments, the team requires skills such as facilitation, communication, and analysis, as well as knowledge of the organization's business and IT environments. The team also selects additional participants to supplement its expertise as needed for specific processes within the OCTAVE Method.

The concept of self-direction in the OCTAVE Method means that the organization's personnel lead and conduct the risk evaluation, in contrast with traditional approaches that often rely heavily on external experts. The method emphasizes the involvement of business personnel alongside IT staff in assessing risks and developing improvement recommendations, fostering a sense of ownership among site personnel. Traditional methods typically involve the organization's own personnel far less, which can lead to recurrent vulnerabilities because organizational learning and responsibility never take hold. The self-directed nature ensures that the evaluation is tailored to the organization's specific needs, with internal staff making the key decisions throughout the process.

Participation of site personnel in vulnerability evaluations enhances organizational learning and security improvement by fostering a sense of ownership, encouraging implementation of findings, and providing a broader understanding of security issues in relation to business objectives. Including personnel from various departments, such as business units and IT, integrates diverse perspectives and supports a risk-based approach that addresses both organizational and technological vulnerabilities. When an organization's own personnel are actively involved in the process, they become more committed to implementing the evaluation's recommendations, leading to sustained security improvements and fewer repetitions of the same vulnerabilities in subsequent evaluations. This involvement helps establish relevant asset-based threat profiles and supports an understanding of risks in relation to the mission and objectives of the organization, which is critical for developing effective mitigation strategies.