How to Secure Anything

Security engineering is the discipline of building secure systems.

Its lessons are not just applicable to computer security. In fact, in this repo, I aim to document a process for securing anything, whether it's a medieval castle, an art museum, or a computer network.

Please contribute! Create a pull request or just create an issue for content you'd like to add: I'll add it for you!

Table of contents

What is security engineering?

Security engineering isn't about adding a bunch of controls to something.

It's about coming up with security properties you'd like a system to have, choosing mechanisms that enforce these properties, and assuring yourself that your security properties hold.

High level process

Here's the process I like for securing things:

  • We follow as many known best practices as we can. If humans already know how to secure something well, why try to derive the answer ourselves?
  • We learn about the adversaries we want to defend against
  • We write down our security policies, or high level security goals
  • We develop a security model, or a spec we follow to satisfy our policies
  • We reduce attack surface, follow security design principles, brainstorm ideas for and implement additional security controls, and more -- to improve our security
  • We test our design by assessing our controls, assessing protocols, looking for side channels, and more
  • We write assurance cases to prove we satisfy our security policy

Follow known best practices

Before anything else, I'd Google for the best practices for securing whatever you're trying to secure and implement all of them.

If you're in a corporate environment, set up SSO and 2FA. If you're securing a physical facility, see if there's a well-regarded physical security standard you can comply with.

I'd also study how people have defended what you're defending in the past, and talk to the people who are the very best at defending it today, to learn what they do that most people don't.

Doing this will make you significantly more secure than the majority of people, who don't do this.

Understand your adversaries

There's no such thing as a system being secure, only being secure against a particular adversary.

This is why it's important to understand who your adversaries are, as well as the motivation behind and capabilities of each adversary.

Consider non-human threats, too. If you're asked to secure a painting in a museum, a fire may technically not be a security issue -- but it's something to guard against, regardless.

Also, study the history of attacks. If I were designing a prison, I'd learn about all the past prison breakouts that I could.

Security policies

Policies are the high level properties we want our system to have. Policies are what we want to happen.

Let's say we're designing a prison.

I'd start with a strong policy:

No prisoner may escape the prison.

Of course, time, money, and manpower are all limited. The goal isn't to eliminate risk entirely, but bring it down to an acceptable level.

As I go through the next couple steps and learn what controls I need and how costly they'll be, I might refine my security policy to something like this:

No more than 10 out of 10,000 (0.1%) prisoners may escape our prison in any given time period.

Looking at benchmarks may help us come up with this number.

Any system has functional requirements in addition to its security requirements. These two sets of requirements may conflict, so you may need to relax your security requirements.

Going back to the example above, our policy is that only a tiny percentage of prisoners may leave the prison without permission. But what if there's a fire?

If you've achieved this low escape rate by building a fully autonomous fortress with no fire detection or human override, the results may be suboptimal.

Security models

We can then turn our policy into a more detailed model. A model is a set of rules, a specification, we can follow to achieve our policy. Our policy is our "what", the model is our "how".

Each individual in the prison facility must have an ID that identifies him/her as a "prisoner" or "not a prisoner"

A prisoner may leave only with the written consent of the warden.

A non-prisoner may leave at any time.
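To make the model concrete, here's a minimal sketch (Python, with hypothetical names) of how these rules could be encoded and checked at the gate:

```python
# Minimal sketch (hypothetical names): encoding the prison model's rules.
from dataclasses import dataclass

@dataclass
class Person:
    id: str
    is_prisoner: bool  # from the ID rule above

def may_leave(person: Person, warden_consents: set[str]) -> bool:
    """A non-prisoner may leave at any time; a prisoner only with the
    warden's written consent."""
    if not person.is_prisoner:
        return True
    return person.id in warden_consents

print(may_leave(Person("visitor-17", is_prisoner=False), set()))        # True
print(may_leave(Person("inmate-42", is_prisoner=True), set()))          # False
print(may_leave(Person("inmate-42", is_prisoner=True), {"inmate-42"}))  # True
```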

Luckily, in information security, our policies often revolve around confidentiality, integrity, and availability and so there are popular existing security models for each of these policies.

For confidentiality, for example, you can choose between:

Improve defenses

Here are some useful techniques I've found for improving the security of a system.

Also, see if any of the mechanisms in the popular mechanisms section would help.

Minimize attack surface

See tptacek's HN comment on this:

For instance: you can set up fail2ban, sure. But what's it doing for you? If you have password SSH authentication enabled anywhere, you're already playing to lose, and logging and reactive blocking isn't really going to help you. Don't scan your logs for this problem; scan your configurations and make sure the brute-force attack simply can't work.

The same goes for most of the stuff shrink-wrap tools look for in web logs. OSSEC isn't bad, but the things you're going to light up on with OSSEC out of the box all mean something went so wrong that you got owned up.

Same with URL regexes. You can set up log detection for people hitting your admin interfaces. But then you have to ask: why is your admin interface available on routable IPs to begin with?
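In that spirit, here's a minimal sketch (Python; the flagged directives are real sshd options, the script itself is hypothetical) that scans a configuration instead of the logs, flagging an sshd_config that still allows password authentication or direct root login:

```python
# Minimal sketch (hypothetical script): flag risky sshd settings by scanning
# the configuration rather than reactively scanning logs.
import re
import sys

RISKY = {
    "passwordauthentication": "yes",   # allows online password brute force
    "permitrootlogin": "yes",          # root reachable directly over SSH
}

def check_sshd_config(path="/etc/ssh/sshd_config"):
    findings = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()   # drop comments
            match = re.match(r"(\S+)\s+(\S+)", line)
            if match and RISKY.get(match.group(1).lower()) == match.group(2).lower():
                findings.append(line)
    return findings

if __name__ == "__main__":
    for finding in check_sshd_config(*sys.argv[1:]):
        print(f"risky setting: {finding}")
```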

Minimize, simplify, verify your trusted computing base (TCB)

When evaluating a design, it's useful to see how much of the system must be trusted in order for a security goal to be achieved. The smaller this trusted computing base is, the better.

Also, once you identify the TCB for an existing system, you know that you only need to secure your TCB. You don't need to worry about securing components outside your TCB.

You want to make your TCB as small, simple, unbypassable, tamper-resistant, and verifiable as you can, as I write about here.

Separate and minimize privilege; sandbox if possible

When designing a system, a great way to mitigate the impact of a successful attack is to break the system down into components based upon their privilege level.

Then, ask what's the least amount of privilege each component needs -- and then enforce the allowed privileges with a sandbox (if applicable).

Say one of our SREs SSHes into a production EC2 instance as root to check the instance's memory and CPU usage. Instead, we can assign the SRE a non-root account. Even better, we can whitelist the commands this account can run. Better still, we can remove SSH access entirely and set up Prometheus for monitoring.
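To illustrate the command whitelisting idea, here's a minimal sketch (Python, with a made-up allowlist) of a wrapper that only runs approved read-only diagnostic commands:

```python
# Minimal sketch (hypothetical wrapper): enforce a command allowlist for a
# restricted operations account, illustrating least privilege.
import shlex
import subprocess
import sys

ALLOWED_COMMANDS = {"free", "uptime", "df"}  # read-only diagnostics only

def run_restricted(command_line: str) -> int:
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        print(f"denied: {argv[0] if argv else '(empty)'} is not allowlisted", file=sys.stderr)
        return 1
    # Run without a shell so the allowlist can't be bypassed with shell
    # metacharacters like ';' or '&&'.
    return subprocess.run(argv).returncode

if __name__ == "__main__":
    sys.exit(run_restricted(" ".join(sys.argv[1:])))
```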

Prevent/detect/respond framework

The way I see it, every defense falls into one of these categories:

  • Prevent: consists of deter, stop
  • Detect
  • Respond: consists of delay, contain, investigate, remediate

Take any attack. Then, for each of the seven categories, brainstorm defenses that fall into that category.
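For example, here's a minimal sketch of such a brainstorm (made-up defenses, with burglary as the attack):

```python
# Minimal sketch: brainstorm defenses for one attack (burglary) per category.
defenses = {
    "deter":       ["visible cameras", "'beware of dog' sign"],
    "stop":        ["deadbolt locks", "window bars"],
    "detect":      ["motion-sensing alarm"],
    "delay":       ["safe bolted to the floor"],
    "contain":     ["valuables split across rooms"],
    "investigate": ["recorded CCTV footage"],
    "remediate":   ["insurance policy"],
}
for category, ideas in defenses.items():
    print(f"{category}: {', '.join(ideas)}")
```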

Kill chains

By mapping out an adversary's kill chain, we can then identify controls to counteract each step in the kill chain. Check out MITRE ATT&CK.

Security design principles

I would go down this list and see if there are any principles you can apply to your system.

  • Secure the weakest link
  • Defense in depth
  • Fail securely
  • Secure by default
  • Secure by design
  • Least privilege - discussed earlier in the repo
  • Separation of privilege - discussed earlier in the repo
  • Economy of mechanism - controls should be as simple as possible
  • Least common mechanism - limit unnecessary sharing
  • Open design - your design should be secure without obscurity. obscurity is discussed later in the repo
  • Complete mediation - applies to reference monitors, which many controls are. The idea is to perform a check on every request. If you cache results, then a request that should be rejected after things changed might be allowed. See this link
  • Work factor - find ways to make the attacker need to do several times more work to break something than it takes you, the defender. Here's a paper on dynamic network reconfiguration being used to increase recon work for attackers
  • Security is economics - discussed later in the repo
  • Human factors matter - if a control relies on a human to do something, make sure your control is usable or the person just won't do it
  • Know your threat model & update it - keep your threat model up to date with threats, and your defenses too

Sources

Find vulnerabilities

The techniques below help you find vulnerabilities in a proposed design for you to fix.

Developing an attacker mindset

Theories of security derive from theories of insecurity. - Unknown

If you're a great attacker you can be "logically" a great defender. However, a great defender cannot be a great attacker, nor would I say they could be a "great" defender. - Caleb Sima, VP of Security at Databricks

More important than the attacks in subsequent sections is being able to think creatively, like an attacker. I do believe this skill is essential if you want to assess the security of your designs effectively.

This section describes some techniques for developing this skill that I've gathered.

Think in graphs

Read this post by John Lambert first. It's about how attackers think in graphs, while defenders think in lists, so attackers win.

I've copied the list of links below from John's post above.

Attack trees

After building an attack tree, you can query it easily: "list all the attack paths costing less than $100k". (Remember: we don't seek absolute security, but rather security against a certain set of adversaries.)

Also, remember the weakest link principle. You can query your attack tree for the lowest cost attack path and ensure that the cost isn't too low.
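Here's a minimal sketch (Python, with made-up nodes and costs) of an AND/OR attack tree and the "all attack paths costing less than $100k" query:

```python
# Minimal sketch (made-up tree): leaves carry attacker costs; OR nodes offer
# alternatives, AND nodes require every child, so their costs add up.
from itertools import product

tree = (
    "or",
    [
        ("leaf", "bribe a guard", 50_000),
        ("and", [
            ("leaf", "forge an ID badge", 5_000),
            ("leaf", "defeat the door lock", 2_000),
        ]),
        ("leaf", "tunnel under the wall", 250_000),
    ],
)

def paths(node):
    """Yield (cost, [leaf names]) for every way to satisfy the node."""
    kind = node[0]
    if kind == "leaf":
        yield node[2], [node[1]]
    elif kind == "or":
        for child in node[1]:
            yield from paths(child)
    else:  # "and": every child must be satisfied
        for combo in product(*(list(paths(c)) for c in node[1])):
            yield sum(c for c, _ in combo), [s for _, steps in combo for s in steps]

budget = 100_000
for cost, steps in sorted(paths(tree)):
    if cost <= budget:
        print(f"${cost:,}: {' + '.join(steps)}")
```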

On, un-bypassable, tamperproof, functionally correct, fail closed

If a security control does not have the qualities above, then an attacker can violate a system's security properties by subverting its controls.

  • Can the attacker turn off the control?
  • Can the attacker get you to turn off the control?
  • Can the attacker get around your control?
  • Does the control depend on something that the attacker can disable?
  • Are there any cases where the control doesn't work?
  • Does the control fail open or closed? If it fails open, can the attacker make the control fail?

Example: a burglar

Take a burglar confronting a home security system which calls the police if someone crosses the lawn at night

  • Can the burglar turn off the control? Probably not
  • Can the burglar get you to turn off the control? Yes, he could set off the alarm every day until you turn it off
  • Can the burglar get around your control? Yes, he could land on the roof
  • Does the control depend on something that the burglar can disable? Yes, the burglar can cut the electric wire or the fiber cable used to call the police
  • Are there any cases where the control doesn't work? Yes, the burglar can buy the control himself and learn that the alarm doesn't go off if he tiptoes.

Assumptions analysis

I like using a statement/conclusion format to draw out my assumptions about my controls.

Statement: I have a home security system which calls the police if someone crosses the lawn.

Conclusion: I won't get robbed.

Assumptions:

  • For every single attacker that tries to cross my lawn, my home security system calls the police. (If the answer to any of the questions above is yes, this assumption is false.)
  • The police will arrive before any attacker is able to steal anything and stop the theft.
    • What if the attacker impersonates the homeowner and tells the police that my home security system is faulty; don't come if it calls you?
    • What if the attacker makes hundreds of 911 calls while he is robbing the house?
  • What if the police are blocked by a "car accident"? What if the attacker has arranged for a getaway helicopter?

Saydjari writes an entire chapter on this.

Failure analysis

We want our security controls to fail closed, not open. There are two ways to analyze the ways something might fail: fault tree analysis (FTA), which is top down, and failure modes and effects analysis (FMEA), which is bottom up.

Fault tree analysis

FMEA

Protocol analysis

Protocols aren't a security mechanism in themselves. But all communication between two components of a system happens over a protocol, so it's worth learning how to analyze protocols for vulnerabilities.
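As a tiny illustration, here's a minimal sketch (Python, a toy protocol of my own invention) of a protocol flaw: the client authenticates with a static token, so an eavesdropper can simply replay it:

```python
# Minimal sketch (toy protocol): the client "authenticates" by sending the
# same token every session; one recorded exchange can be replayed forever.
import hashlib

PASSWORD = b"hunter2"

def client_message():
    return hashlib.sha256(PASSWORD).hexdigest()   # same bytes every session

def server_accepts(message):
    return message == hashlib.sha256(PASSWORD).hexdigest()

recorded = client_message()      # attacker sniffs this once
print(server_accepts(recorded))  # True: the replay succeeds; a fresh per-session
                                 # challenge (nonce) would prevent this
```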

Side channel analysis

Even if something isn't vulnerable to attacks (on confidentiality, integrity, or availability), it may leak information which makes these attacks easier.

For example, take a login program that checks if the username is valid, returns a generic "login failed" error if it's not, then checks if the password is valid, and returns the same generic error if it's not.

At a first glance, determining if a particular username is valid may seem impossible. After all, the error message is the same regardless of whether the username is invalid or the username is valid and the password is invalid.

However, an attacker could examine the time it takes to get the error to determine if the username is valid or not.
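Here's a minimal sketch (Python, with a made-up user table) of that timing side channel; the error message is identical, but the response time is not:

```python
# Minimal sketch (hypothetical users table): a login check that leaks whether
# a username exists through response timing.
import hashlib
import time

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def slow_hash(password: str) -> str:
    time.sleep(0.2)  # stand-in for an expensive password hash like bcrypt
    return hashlib.sha256(password.encode()).hexdigest()

def login(username: str, password: str) -> str:
    if username not in USERS:
        return "login failed"      # returns almost instantly
    if USERS[username] != slow_hash(password):
        return "login failed"      # returns ~200 ms later
    return "ok"

for name in ("alice", "mallory"):
    start = time.perf_counter()
    login(name, "wrong password")
    print(f"{name}: {time.perf_counter() - start:.3f}s")  # timing reveals valid usernames
```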

Assurance

The goal of security engineering is to build a system that satisfies certain security properties -- not just to add a lot of controls. Assurance is how we prove that our system satisfies the properties we want it to.

Popular mechanisms

In order to secure something, you need to know what tools are available to you. Here are some that can be used in many different contexts.

A lot of tools are context-specific, however. Before I start trying to secure a building, for example, I'd spend the time to learn about all the tools I can use: walls, sensors, natural barriers, guards, CCTV cameras, etc

Cryptography

To learn about later: secure enclaves

Economics

The idea here is to make it economically, not technically, infeasible for the attacker to attack us. He can still attack us, but his expected effort will exceed his expected gain.

Say a scammer manages to scam one of every hundred people he calls out of $5. If we can add a $0.10 fee to every call, then he'd pay $10 in fees per 100 calls to earn just $5.
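A minimal sketch of the expected-value arithmetic (numbers taken from the example above):

```python
# Minimal sketch: the scammer's expected profit per 100 calls, before costs
# and after a hypothetical $0.10 per-call fee.
calls = 100
hit_rate = 1 / calls    # one in a hundred people falls for the scam
payout = 5.00           # dollars gained per successful scam
fee = 0.10              # per-call fee we impose

revenue = calls * hit_rate * payout   # $5
cost = calls * fee                    # $10
print(f"expected profit per 100 calls: ${revenue - cost:.2f}")  # negative: scam no longer pays
```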

Another example would be not storing credit card data ourselves, and instead outsourcing this to a payment processor, so the reward of attacking us is less.

If the attacker isn't motivated by money, this doesn't work.

Laws and regulations (deterrence by the government)

Deterrence has three parts: certainty, severity, and swiftness. In other words, to deter attackers most effectively, you need to catch most or all of them, do so quickly, and punish them sufficiently once caught.

This someone could be the government, via laws and regulations against whatever you're trying to defend against. The government may not catch everyone, but these laws and regulations will deter most people. Copyright protection, anti-shoplifting, and anti-trespassing laws all are examples of this.

Retaliation (deterrence by you or third parties)

The government is not the only third party who can deter attacks on you. Organizations, like NATO, can as well.

Alternatively, you can try to retaliate against attacks yourself. Take, for example, media companies that sue people that pirate their movies.

Tamper resistance

Tamper detection

If we can't prevent tampering, we can try to make it obvious when something has been tampered with.

This is one reason why bags of chips or gallons of milk, for example, are sealed.
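In the computing setting, the analogous seal is a MAC or signature over the data. Here's a minimal sketch (Python, with a hypothetical key); any change to the sealed data invalidates the MAC:

```python
# Minimal sketch (hypothetical key): a digital "seal". Tampering with the data
# is evident because the recomputed MAC no longer matches (assuming the key
# stays secret).
import hashlib
import hmac

key = b"hypothetical-secret-key"
data = b"inventory: 100 units"
seal = hmac.new(key, data, hashlib.sha256).hexdigest()

tampered = b"inventory: 900 units"
recomputed = hmac.new(key, tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(seal, recomputed))  # False: tampering detected
```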

Access control

Authentication

The three ways to authenticate someone are based on something you are (biometrics), something you know (a password), and something you have (a YubiKey, a phone).

Biometrics

Authorization

Without authorization, anyone who authenticates to our system would have full access to everything. We'd like to make it more difficult than that for attackers, and likely don't trust all insiders that much, either.

Multilevel

Think about the intel classification hierarchy: some documents are top secret, others are secret, others are confidential, and so on. This is a multi-level scheme.

Multilateral

Even if an analyst has a secret clearance, you may not want him to be able to access any documents from other departments. This is a multi-lateral scheme.
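Here's a minimal sketch (Python, with made-up labels) combining both ideas: a multilevel check on clearance level plus a multilateral check on compartment membership:

```python
# Minimal sketch (made-up labels): multilevel checks compare clearance levels;
# multilateral checks also require membership in the document's compartment.
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def may_read(subject_level, subject_compartments, doc_level, doc_compartment):
    high_enough = LEVELS[subject_level] >= LEVELS[doc_level]     # multilevel
    right_compartment = doc_compartment in subject_compartments  # multilateral
    return high_enough and right_compartment

# An analyst with a secret clearance in the "narcotics" compartment:
print(may_read("secret", {"narcotics"}, "confidential", "narcotics"))   # True
print(may_read("secret", {"narcotics"}, "secret", "counterterrorism"))  # False
```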

Two-man rule

The idea is simple: to authorize certain actions, more than one person must consent. This helps protect against malicious insiders.
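A minimal sketch (Python, hypothetical approvals store) of enforcing the rule:

```python
# Minimal sketch (hypothetical approvals store): an action is authorized only
# once at least two distinct people have consented.
def authorized(action, approvals, required=2):
    return len(set(approvals.get(action, ()))) >= required

print(authorized("open vault", {"open vault": {"alice"}}))          # False
print(authorized("open vault", {"open vault": {"alice", "bob"}}))   # True
```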

Inference control

While a single anonymized database may not be enough to identify individuals, a combination of anonymized databases may make this possible. Inference control aims to prevent this.

I haven't seen this concept outside of computer security, yet.
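Here's a minimal sketch (Python, with made-up records) of the classic linkage problem: the hospital data is anonymized and the voter roll contains no medical data, but joining them on shared quasi-identifiers links a name to a diagnosis:

```python
# Minimal sketch (made-up records): joining two "harmless" datasets on shared
# quasi-identifiers (zip code, birth year) re-identifies a patient.
hospital = [{"zip": "02139", "birth_year": 1985, "diagnosis": "flu"}]
voters   = [{"zip": "02139", "birth_year": 1985, "name": "Alice Smith"}]

for h in hospital:
    for v in voters:
        if (h["zip"], h["birth_year"]) == (v["zip"], v["birth_year"]):
            print(f"{v['name']} likely has: {h['diagnosis']}")
```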

Sandboxing

Privilege separation is dividing a system into different components, based on what permission level each component should have.

Least privilege is then making the permission level for each component as small as possible.

The way you enforce this minimal permission level is via a sandbox.

I haven't seen this concept outside of computer security, yet.
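As a crude illustration in the computing setting, here's a minimal sketch (Python, Unix-only) that runs a child process under hard resource limits; real sandboxes (seccomp, containers, VMs) restrict far more than this:

```python
# Minimal sketch (Unix-only): run a helper in a child process with hard
# resource limits, a crude form of sandboxing.
import resource
import subprocess

def limit_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))              # 2 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)   # 512 MiB of memory

subprocess.run(
    ["python3", "-c", "print('hello from the sandboxed child')"],
    preexec_fn=limit_resources,  # applied in the child before exec
    timeout=5,
)
```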

Obscurity

Obscurity, on its own, does not count as security. However, it can be added on top of real security measures, to make attacks on you require more time and a higher skill level.

Learn about how real world systems are secured

The chapters in Anderson's book fall into two categories, in my view: mechanisms for securing systems and examples of how some real world systems are secured.

We've already learned about the first category; this section is about the second category.

Physical facilities

Defending

Attacking

Nuclear command and control

Monitoring and metering

Banking and bookkeeping

Defending

Attacking

Distributed systems

Copyright and DRM

Web browsers

BeyondCorp & zero trust

Apple

Cloud providers

Operating systems

Prisons

Museums

Defending

Attacking

Counterintelligence

Casinos

Defending

Attacking

Military Architecture

Also known as: fortifications

Defending

Attacking

Both

Books

I've tried to reference specific chapters of books in my sections above. I haven't done this for all the books I've read, however, so I thought I'd add a section listing all the books I've found on this topic here.

Recommended (by me)

"Recommended" is just my subjective opinion. YMMV!

  • Computer Security: Art and Science (by Bishop) - I'd read this first; it teaches security engineering in the right order: policies and models, then mechanisms, then assurance.
  • Security Engineering (by Ross Anderson)
  • Engineering Trustworthy Systems (by Sami Saydjari)
  • "Security in Computing" (by Pfleeger) - I liked the chapter on trusted operating systems in particular.
  • Building Secure and Reliable Systems

Not recommended

Again, "not recommended" is just my subjective opinion. YMMV!

  • Time Based Security - my notes. Wasn't information dense.
  • "Engineering Information Security" (by Jacobs) - Mostly contains general security content, not content on security engineering. Only the systems engineering chapter felt new.
  • "The Craft of System Security" (by Smith and Marchesini) - Also mostly general security content
  • "Cyber Security Engineering" (by Woody and Mead) - Wasn't very information dense or well organized

Haven't read yet

System engineering

In the future

  • Write up case studies on how I'd use my process to secure different things
  • Create practical, step by step checklists for doing each of the parts of my process
  • Have interviews with people who design security for museums, banks, prisons, casinos, etc
