PS Product Security Knowledge Base

🧭 STRIDE, DREAD, and PASTA: Practical Comparison, Mapping, and SDL Context

Intro: Teams often mix up threat enumeration, risk prioritization, and threat-modeling process. STRIDE, DREAD, and PASTA live in different parts of that stack, so they are best used together rather than treated as interchangeable.

What this page includes

  • what STRIDE, DREAD, and PASTA are actually for
  • where they overlap and where they do not
  • how to map them together in a modern product-security workflow
  • references to the classic Microsoft SDL books that shaped how many teams still think about threat modeling

The short version

  • STRIDE is a threat categorization mnemonic. It helps you ask, "What kind of thing can go wrong here?"
  • DREAD is a risk-ranking method. It helps you ask, "Which of these threats should we treat first?"
  • PASTA is a process model. It helps you ask, "How do we structure the whole threat-modeling exercise from business goals to attack analysis?"

They solve different problems.

Why teams still talk about STRIDE and DREAD

Many modern product-security teams still inherit their mental model from the Microsoft SDL tradition, including the books:

  • Michael Howard, Steve Lipner - The Security Development Lifecycle
    • ISBN-13: 9780735622142
    • Amazon search link: https://www.amazon.com/s?k=9780735622142
  • Michael Howard, David LeBlanc - Writing Secure Code (Second Edition)
    • ISBN-13: 9780735617223
    • Amazon search link: https://www.amazon.com/s?k=9780735617223

Those books are still worth citing because they established a durable engineering mindset:

  • secure design before secure implementation;
  • threat modeling as a design-time activity, not a late audit;
  • explicit attention to trust boundaries, attacker paths, and control decisions.

Their examples are older, but the method discipline remains highly relevant.

1) STRIDE

What it is

STRIDE is a structured way to enumerate threat categories:

| Letter | Category | Core question |
| --- | --- | --- |
| S | Spoofing | Can an attacker pretend to be a user, service, workload, or system component? |
| T | Tampering | Can data, configuration, code, or state be changed without authorization? |
| R | Repudiation | Can an actor deny an action because evidence is weak or missing? |
| I | Information Disclosure | Can data be exposed to someone who should not see it? |
| D | Denial of Service | Can capacity or availability be degraded or exhausted? |
| E | Elevation of Privilege | Can a lower-trust actor gain higher privilege or wider reach? |

Best use

STRIDE works especially well when you already have a simple architecture view or DFD and want a repeatable design-review prompt set.
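
The category prompts above can be turned into a lightweight review worksheet. A minimal Python sketch follows; the element names and question wording are illustrative for this page, not taken from any specific tool:

```python
# Minimal STRIDE checklist generator: for each element in a simple
# architecture view, emit the six STRIDE prompts to discuss in review.
STRIDE = {
    "Spoofing": "Can an attacker pretend to be this component or its callers?",
    "Tampering": "Can its data, config, code, or state change without authorization?",
    "Repudiation": "Can an actor deny an action because evidence is weak or missing?",
    "Information Disclosure": "Can its data reach someone who should not see it?",
    "Denial of Service": "Can its capacity or availability be degraded or exhausted?",
    "Elevation of Privilege": "Can a lower-trust actor gain higher privilege through it?",
}

def stride_prompts(elements):
    """Yield (element, category, question) rows for a review worksheet."""
    for element in elements:
        for category, question in STRIDE.items():
            yield element, category, question

# Hypothetical DFD elements; two elements x six categories = twelve prompts.
rows = list(stride_prompts(["API gateway", "export worker"]))
```

The payoff is repeatability: every element gets the same six questions, so a review cannot silently skip a category.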

Strengths

  • very easy to teach;
  • works well with DFDs and trust boundaries;
  • good for feature reviews, API changes, and service design reviews;
  • prevents teams from forgetting common categories like repudiation or privilege escalation.

Weaknesses

  • it does not tell you how to prioritize the threats;
  • it can become a paperwork exercise if the team mechanically fills rows without discussing business impact;
  • it is security-centric, not business-centric, so workflow abuse and product abuse may still need explicit extra treatment.

2) DREAD

What it is

DREAD is a scoring model that evaluates each threat using five dimensions:

| Letter | Dimension | Practical meaning |
| --- | --- | --- |
| D | Damage | How bad is the impact if this threat is exploited? |
| R | Reproducibility | How reliably can the attack be repeated? |
| E | Exploitability | How difficult is the exploit in practice? |
| A | Affected Users | How many users, tenants, systems, or workloads are impacted? |
| D | Discoverability | How easy is it for an attacker to find this weakness? |

Historically, teams would score each dimension and average the values.
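
That averaging step is simple enough to sketch in Python. The 0-10 scale below is one common convention, and the example ratings are invented for illustration:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Classic DREAD: average five ratings (0-10 here) into one score."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(0 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 0 and 10")
    return sum(ratings) / len(ratings)

# Illustrative ratings, not a real assessment:
score = dread_score(damage=9, reproducibility=8, exploitability=5,
                    affected_users=9, discoverability=4)
# score == 7.0
```

The arithmetic is trivial on purpose; the hard part, as the weaknesses below show, is getting two reviewers to agree on the inputs.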

Best use

DREAD is best understood as a historical prioritization aid, not as a universal mandatory scoring scheme.

Strengths

  • pushes teams to move from "threat list" to "what matters first";
  • useful when a program needs a structured, consistent ranking conversation;
  • easy to explain to non-specialists.

Weaknesses

  • creates a false sense of mathematical precision;
  • teams often score the same issue very differently;
  • "Discoverability" and "Exploitability" are often guessed rather than evidenced;
  • many modern teams now prefer likelihood × impact, exploitability/context ranking, or internal risk matrices over classic DREAD averaging.

Modern usage pattern

A practical modern pattern is:

  • use STRIDE to enumerate;
  • use contextual severity / exploitability / blast-radius / business impact to prioritize;
  • keep DREAD as a teaching tool or optional workshop aid, rather than the only scoring method.
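
As a sketch of the middle step, here is one way a likelihood × impact ranking might look. The threat names and the 1-5 scores are invented for illustration:

```python
# Hypothetical threat list with 1-5 likelihood and impact scores.
threats = [
    {"name": "cross-tenant data export", "likelihood": 3, "impact": 5},
    {"name": "export worker exhaustion", "likelihood": 4, "impact": 3},
    {"name": "missing audit evidence",   "likelihood": 2, "impact": 4},
]

def risk(threat):
    """Simple likelihood x impact product used for relative ordering."""
    return threat["likelihood"] * threat["impact"]

ranked = sorted(threats, key=risk, reverse=True)
# Order: cross-tenant data export (15), export worker exhaustion (12),
# missing audit evidence (8).
```

The product is only a relative ordering aid, not an absolute risk number; the same false-precision caveat that applies to DREAD applies here.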

3) PASTA

What it is

PASTA stands for Process for Attack Simulation and Threat Analysis. It is not just a mnemonic. It is a full process model that starts from business objectives and works down toward technical attack analysis.

A simplified view of PASTA is:

  1. define business and security objectives;
  2. define technical scope;
  3. decompose the application or system;
  4. analyze threats;
  5. analyze weaknesses and attack paths;
  6. model attacks;
  7. decide mitigations and risk treatment.
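
To make the sequencing concrete, the seven stages above can be tracked as a simple ordered checklist. This sketch is this page's own illustration (the PastaReview class and stage strings are not part of any PASTA tooling):

```python
from dataclasses import dataclass, field

# The seven simplified PASTA stages from the list above, in order.
PASTA_STAGES = (
    "define business and security objectives",
    "define technical scope",
    "decompose the application or system",
    "analyze threats",
    "analyze weaknesses and attack paths",
    "model attacks",
    "decide mitigations and risk treatment",
)

@dataclass
class PastaReview:
    """Tracks which PASTA stages a review has completed."""
    system: str
    completed: set = field(default_factory=set)

    def complete(self, stage: str) -> None:
        if stage not in PASTA_STAGES:
            raise ValueError(f"unknown PASTA stage: {stage}")
        self.completed.add(stage)

    def next_stage(self):
        """Return the first unfinished stage, or None when done."""
        for stage in PASTA_STAGES:
            if stage not in self.completed:
                return stage
        return None

review = PastaReview(system="payments platform")
review.complete("define business and security objectives")
# review.next_stage() == "define technical scope"
```

The point of the ordering is that attack modeling (stage 6) is only meaningful after scope and decomposition exist; skipping ahead is the most common facilitation failure.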

Best use

PASTA is a better fit when:

  • the system is high value or high complexity;
  • you need stronger business-context linkage;
  • you want threat modeling to feed a more explicit risk-analysis conversation;
  • you are reviewing a platform, payment flow, identity plane, or multi-tenant architecture rather than just a single endpoint change.

Strengths

  • stronger tie to business impact and attacker goals;
  • better fit for large or cross-domain systems;
  • naturally supports attack-path thinking;
  • useful when architecture, product, and security must all align on why a design choice matters.

Weaknesses

  • heavier than STRIDE for fast iteration;
  • can be overkill for a small feature delta;
  • requires more facilitation discipline and better system decomposition.

Comparison table

| Model | Main purpose | Best fit | Typical output | Common failure mode |
| --- | --- | --- | --- | --- |
| STRIDE | enumerate threat categories | feature review, service review, API review | categorized threat list | teams list threats but do not prioritize or assign controls |
| DREAD | rank / prioritize threats | workshops, legacy Microsoft-style scoring | relative priority score | false precision and inconsistent scoring |
| PASTA | run an end-to-end threat-analysis process | high-value systems, platform reviews, complex architectures | attack-path-informed risk and mitigation set | too heavy for small or fast-moving deltas |

How they map together

They do not map 1:1

This is the most important point.

  • STRIDE categories are about threat type.
  • DREAD dimensions are about priority / risk characteristics.
  • PASTA stages are about workflow and analysis sequence.

So the "mapping" is not a category-to-category conversion.

The practical mapping looks like this

| Step in real review | Useful model |
| --- | --- |
| Draw the system and trust boundaries | STRIDE, PASTA |
| Enumerate what can go wrong | STRIDE |
| Connect threats to attacker goals and business scenarios | PASTA |
| Prioritize which threats to treat first | DREAD or modern risk-ranking method |
| Convert model output into design changes, backlog items, and release gates | PASTA-style process discipline |

A simple way to combine them

```mermaid
flowchart LR
    A["Architecture / DFD / trust boundaries"] --> B["STRIDE pass"]
    B --> C["Threat list"]
    C --> D["Risk ranking: DREAD or likelihood x impact"]
    D --> E["Mitigations / backlog / release gates"]
    A --> F["PASTA-style business and attacker context"]
    F --> D
```

Worked example

Scenario

A SaaS product adds a GraphQL admin mutation that can trigger tenant-wide data export.

STRIDE pass

  • Spoofing: can a service token or support account impersonate an admin?
  • Tampering: can the export parameters be altered after approval?
  • Repudiation: do you have durable audit evidence of who initiated the export?
  • Information Disclosure: can one tenant export another tenant's data?
  • Denial of Service: can repeated export jobs starve shared workers?
  • Elevation of Privilege: can a support role reach an admin-only mutation?

DREAD-style prioritization

The cross-tenant export path may score high because:

  • damage is high;
  • affected users are large;
  • exploitability may be moderate;
  • reproducibility may be high once discovered.
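
Plugging illustrative 0-10 ratings into the classic averaging step makes that reasoning concrete. Every number below is an assumption chosen for this worked example, not a measured value:

```python
# Illustrative DREAD ratings (0-10) for the cross-tenant export threat;
# each number is an assumption for the worked example.
ratings = {
    "damage": 9,            # tenant-wide data exposure
    "reproducibility": 8,   # reliable once the path is discovered
    "exploitability": 5,    # moderate: requires admin-plane access
    "affected_users": 9,    # every tenant on the shared platform
    "discoverability": 4,   # the mutation is not obviously exposed
}

score = sum(ratings.values()) / len(ratings)
# score == 7.0, which would put this threat near the top of the queue
```

A different reviewer could defensibly score exploitability or discoverability one or two points away, which is exactly the consistency problem noted in the DREAD weaknesses above.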

PASTA angle

PASTA adds questions like:

  • what is the business value of this export path?
  • which attacker persona benefits from it?
  • what trust assumptions exist between UI, API, background workers, and storage?
  • what attack simulation or misuse case best reflects the likely path?

Which one should you use in this KB?

For this knowledge base and for most real engineering teams:

  • use STRIDE as the baseline threat-enumeration language;
  • use a simple contextual risk-ranking method instead of blindly averaging DREAD numbers;
  • use PASTA-style thinking for larger or higher-risk systems when business context and attack-path analysis matter more.

Suggested rule of thumb

| Review size | Suggested approach |
| --- | --- |
| small feature or endpoint change | lightweight DFD + STRIDE |
| new service or new trust boundary | STRIDE + contextual risk ranking |
| payment / identity / admin plane / multi-tenant platform | STRIDE + PASTA-style staged analysis |
| executive or program-level discussion | PASTA framing + business impact language |

Where the Microsoft SDL books still help

The two Howard / Lipner / LeBlanc books are still useful in three places:

  1. discipline โ€” threat modeling should change design decisions early;
  2. vocabulary โ€” trust boundaries, attacker thinking, secure defaults, secure deployment;
  3. engineering ownership โ€” security is part of how software is built, not a separate inspection lane.

They are weaker for cloud-native specifics like identity federation, Kubernetes, GraphQL abuse, GitOps, or modern CI/CD provenance. For those, keep the books as foundational references, not as the only implementation guide.

References and further reading

  • OWASP Threat Modeling Process
  • OWASP Threat Modeling Project
  • Microsoft Threat Modeling Tool / STRIDE guidance
  • Microsoft Learn guidance that still documents DREAD scoring examples for threat assessment
  • The Security Development Lifecycle - Michael Howard, Steve Lipner
  • Writing Secure Code - Michael Howard, David LeBlanc

Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.