BSIMM Deep Dive: Domains, Practices, and Manager Use
Use this page when: you want to understand how BSIMM is structured, what its domains and practices actually cover, and how a Product Security leader can use it to benchmark and evolve an initiative without turning it into a score-chasing exercise.
What BSIMM is really for
BSIMM is best understood as a peer-informed software security benchmark.
Its core question is not:
"What should a perfect program look like?"
Its core question is closer to:
"What are mature organizations actually doing, and how does our initiative compare?"
That framing is powerful for leaders because it reduces unproductive debate. Instead of arguing from preference, you can anchor decisions in an external body of observed practice.
BSIMM structure at a glance
BSIMM organizes its Software Security Framework into four domains and twelve practices.
| Domain | What it is about | Practices |
|---|---|---|
| Governance | How the software security initiative is organized, managed, measured, and socialized | Strategy & Metrics, Compliance & Policy, Training |
| Intelligence | The reusable knowledge a firm creates to guide secure engineering | Attack Models, Security Features & Design, Standards & Requirements |
| SSDL Touchpoints | The reviews and tests applied to software artifacts and development workflows | Architecture Analysis, Code Review, Security Testing |
| Deployment | The operational and production-facing practices around deployed software | Penetration Testing, Software Environment, Configuration Management & Vulnerability Management |
Domain 1: Governance
1. Strategy & Metrics (SM)
This practice covers how the initiative is planned, measured, funded, and steered.
Typical questions:
- Do we know which products and teams are in scope?
- Are there metrics beyond raw vulnerability counts?
- Is there a defined software security group or equivalent operating model?
- Can we show improvement over time?
Why managers care: This practice determines whether the program is a collection of actions or an actual managed system.
Signals of maturity:
- service and product inventory with risk tiering;
- scorecards or evidence-based reviews;
- budget linked to risk and delivery realities;
- clear review cadence and accountability.
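The first signal, a risk-tiered inventory, can be sketched in a few lines. This is a hypothetical illustration: the product names, risk signals, and tiering rules below are assumptions for the example, not anything BSIMM prescribes.

```python
# Hypothetical sketch of a risk-tiered product inventory, the kind of
# artifact the Strategy & Metrics practice expects. Tiering rules are
# illustrative assumptions, not BSIMM requirements.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    handles_pii: bool
    internet_facing: bool
    revenue_critical: bool

def risk_tier(p: Product) -> str:
    """Assign a coarse tier from simple boolean risk signals."""
    score = sum([p.handles_pii, p.internet_facing, p.revenue_critical])
    return {3: "tier-1", 2: "tier-2"}.get(score, "tier-3")

inventory = [
    Product("payments-api", True, True, True),
    Product("internal-wiki", False, False, False),
]
tiers = {p.name: risk_tier(p) for p in inventory}
```

Even a toy model like this forces the useful conversation: which signals define tier membership, and who owns keeping the inventory current.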
2. Compliance & Policy (CP)
This practice covers how policy, obligations, and control expectations are defined and enforced.
Typical questions:
- What is mandatory versus recommended?
- Where do exceptions live?
- Can policy be enforced in CI/CD, cloud, and platform layers?
Why managers care: Without this practice, "standards" remain slideware. This is where governance becomes operational.
Signals of maturity:
- policy-as-code or guardrail-as-code adoption;
- exception lifecycle with expiry and ownership;
- vendor and supplier expectations defined in contracts or onboarding controls.
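An exception lifecycle with expiry and ownership is simple to make concrete. The record shape and rule below are a minimal sketch under assumed field names, not a reference implementation.

```python
# Hypothetical sketch: a policy exception record with an owner and an
# expiry date, so exceptions cannot silently become permanent.
# Field names and the control ID are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyException:
    control_id: str
    owner: str
    expires: date

def is_active(exc: PolicyException, today: date) -> bool:
    """An exception is honored only while it has an owner and has not expired."""
    return bool(exc.owner) and today <= exc.expires

exc = PolicyException("CP-012", "team-payments", date(2025, 6, 30))
```

The design choice that matters is the default: an exception with no owner or a past expiry date stops being honored automatically, rather than requiring someone to notice.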
3. Training (T)
This practice covers role-based security education and enablement.
Typical questions:
- Are developers, reviewers, platform engineers, and responders trained differently?
- Are security champions used effectively?
- Does training change behavior or just check a box?
Why managers care: A Product Security team does not scale by central review alone. It scales through better distributed judgment.
Signals of maturity:
- role-based learning tracks;
- targeted training after real incidents or recurring flaws;
- security champion operating model tied to delivery teams.
Domain 2: Intelligence
4. Attack Models (AM)
This practice turns attacker behavior into reusable guidance for engineering and review.
Typical questions:
- Do we model likely abuse paths, not only generic threats?
- Are attacker patterns turned into test cases, guardrails, and review questions?
Why managers care: This is where Product Security stops speaking only in controls and starts speaking in real attacker paths and business abuse.
Signals of maturity:
- threat models grounded in architecture and adversary behavior;
- recurring attack patterns linked to product classes;
- detection and response informed by attacker workflows.
5. Security Features & Design (SFD)
This practice covers secure design patterns and reusable security capabilities.
Typical questions:
- Do teams have standard patterns for authn, authz, secrets, isolation, and logging?
- Are reference architectures easier to adopt than unsafe custom designs?
Why managers care: This practice reduces review load by replacing one-off design decisions with reusable, safer defaults.
Signals of maturity:
- platform security patterns;
- reusable authn/authz building blocks;
- standard designs for secrets, tenancy, auditability, and admin access.
6. Standards & Requirements (SR)
This practice covers requirements, coding rules, design rules, and engineering standards.
Typical questions:
- Are secure defaults explicit?
- Can teams tell what โgood enoughโ looks like before release?
- Are requirements linked to risk tiers and product types?
Why managers care: This practice is where engineering expectations become portable and reviewable.
Signals of maturity:
- role-appropriate standards;
- secure release criteria;
- risk-tiered requirements for high-value systems and sensitive workflows.
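Risk-tiered requirements can be expressed as data rather than prose, which makes them portable and reviewable. The tier names and control lists below are illustrative assumptions, not a BSIMM-mandated set.

```python
# Hypothetical sketch: release requirements keyed by risk tier, so a team
# can see what "good enough" means before release. The controls listed
# are illustrative assumptions.
BASELINE = ["dependency scanning", "secrets scanning"]

TIERED = {
    "tier-1": ["threat model", "design review", "pen test"],
    "tier-2": ["threat model"],
    "tier-3": [],
}

def release_requirements(tier: str) -> list[str]:
    """Every product gets the baseline; higher-risk tiers add controls."""
    return BASELINE + TIERED.get(tier, [])
```

Keeping the mapping in one structure also makes gaps visible: an unknown tier falls back to the baseline instead of to nothing.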
Domain 3: SSDL Touchpoints
7. Architecture Analysis (AA)
This practice focuses on security review of designs, data flows, trust boundaries, and major changes.
Manager value: It lets leaders move risk detection earlier, before defects become implementation debt.
Useful evidence:
- design review records;
- threat models attached to major initiatives;
- architectural decision records with security trade-offs.
8. Code Review (CR)
This practice covers manual and automated review of source code and adjacent artifacts.
Manager value: It is often where "security in the IDE and pipeline" becomes measurable.
Useful evidence:
- mandatory review patterns;
- SAST integration quality;
- review guidance for common stacks;
- reduced false-positive load over time.
9. Security Testing (ST)
This practice covers the testing family: static, dynamic, and interactive analysis, software composition analysis, and other application-focused assurance.
Manager value: It shows whether the program can find meaningful defects repeatedly and at the right stages of the lifecycle.
Useful evidence:
- CI-integrated testing;
- contract linting for APIs;
- quality gates tuned by exploitability and confidence;
- testing tied to product risk, not just repo count.
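A quality gate tuned by exploitability and confidence can be sketched briefly. The thresholds and finding fields below are illustrative assumptions, showing the idea of gating on signal quality rather than raw counts.

```python
# Hypothetical sketch: a CI quality gate that fails the build only on
# findings that are both flagged exploitable and high-confidence,
# instead of on raw finding counts. Threshold is an assumption.
def gate(findings: list[dict], min_confidence: float = 0.8) -> bool:
    """Return True (pass) when no exploitable, high-confidence finding remains."""
    blocking = [
        f for f in findings
        if f["exploitable"] and f["confidence"] >= min_confidence
    ]
    return not blocking

findings = [
    {"id": "F1", "exploitable": True, "confidence": 0.95},
    {"id": "F2", "exploitable": False, "confidence": 0.99},
]
```

Gates shaped this way tend to survive contact with delivery teams, because they block on defects worth blocking on and let the rest flow into backlog triage.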
Domain 4: Deployment
10. Penetration Testing (PT)
This practice covers targeted offensive validation.
Manager value: Pen testing helps validate assumptions, especially around abuse paths, authz failures, and edge-case exposure that automated tools miss.
Use carefully: Treat it as validation, not the backbone of the whole program.
11. Software Environment (SE)
This practice addresses runtime environment, platform, and deployment context.
Manager value: This is where cloud, containers, orchestration, and production controls enter the Product Security scope.
Useful evidence:
- hardened runtime baselines;
- production environment standards;
- secure deployment patterns;
- runtime drift controls.
12. Configuration Management & Vulnerability Management (CMVM)
This practice covers configuration integrity, patching, dependency and vulnerability handling, and operational hygiene.
Manager value: This is one of the strongest links between Product Security and resilience.
Useful evidence:
- SLA-backed vulnerability handling;
- ownership for runtime findings;
- patch and configuration governance;
- operational feedback into platform defaults.
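SLA-backed vulnerability handling usually means remediation deadlines scaled by severity and product risk tier. The day counts below are illustrative assumptions, not BSIMM values.

```python
# Hypothetical sketch: remediation SLA deadlines scaled by severity and
# product risk tier. Day counts are illustrative assumptions.
from datetime import date, timedelta

SLA_DAYS = {
    ("critical", "tier-1"): 7,
    ("critical", "tier-2"): 14,
    ("high", "tier-1"): 30,
}

def due_date(found: date, severity: str, tier: str) -> date:
    """Fall back to a generous default when no explicit SLA is defined."""
    return found + timedelta(days=SLA_DAYS.get((severity, tier), 90))
```

The fallback is the governance decision in disguise: everything gets a deadline, and the table only records where the organization has chosen to be stricter.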
How a Product Security manager should use BSIMM
1. Use it to frame the program, not just the team
A Product Security function usually touches engineering, platform, cloud, response, and governance. BSIMM helps show that this breadth is normal in mature software security initiatives.
2. Use it to spot structural blind spots
Examples:
- too much emphasis on scanning, too little on training or standards;
- good testing, weak deployment and environment controls;
- strong governance, weak intelligence and reusable design patterns.
3. Use it in quarterly reviews
A director does not need to explain all 12 practices every quarter. They can instead show:
- where the initiative is thin;
- which domains are gaining capability;
- which investment closes the most meaningful gap.
4. Use it to justify organization design
BSIMM is especially useful when explaining why Product Security needs:
- security champions;
- design-review capacity;
- policy-as-code support;
- platform collaboration;
- runtime-to-SDLC feedback loops.
What BSIMM is not
- it is not a compliance certification;
- it is not a guaranteed maturity ladder for every company;
- it is not a replacement for your own risk model;
- it is not sufficient by itself for target-state planning.
Practical advice for directors
If you want the model to help instead of confuse, do this:
- map your current activities to the 12 practices;
- identify which practices are materially weak for your business model;
- link those gaps to product risk, delivery friction, or response gaps;
- convert the result into a roadmap, owner list, and investment ask.
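The first two roadmap steps above can be sketched as a simple mapping exercise. The activity names and their practice assignments are illustrative assumptions; the practice abbreviations follow the twelve practices described on this page.

```python
# Hypothetical sketch: map current activities to the twelve BSIMM
# practices and surface the uncovered ones as investment candidates.
# The activity-to-practice mapping is an illustrative assumption.
PRACTICES = ["SM", "CP", "T", "AM", "SFD", "SR",
             "AA", "CR", "ST", "PT", "SE", "CMVM"]

activities = {
    "SAST in CI": "CR",
    "annual pen test": "PT",
    "secure coding standard": "SR",
}

def thin_practices(activity_map: dict[str, str]) -> list[str]:
    """Practices with no mapped activity are candidates for investment."""
    covered = set(activity_map.values())
    return [p for p in PRACTICES if p not in covered]
```

In practice you would weight coverage rather than treat it as binary, but even the binary version makes the "thin practice" conversation concrete enough for a roadmap.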
Cross-links
- BSIMM and OWASP SAMM: Overview, Value, and Comparison
- Using BSIMM and SAMM Together: Assessments, Roadmaps, and Quarterly Reviews
- Security Program Economics and Investment Decisions
- Stakeholder Communication and Executive Narratives
Official references
- BSIMM Foundations report - https://www.synopsys.com/content/dam/bsimm/reports/bsimm13-foundations.pdf
- BSIMM Questions and Answers - https://www.synopsys.com/content/dam/synopsys/bsimm/datasheets/BSIMM-questions_and-answers.pdf
Author attribution: Ivan Piskunov, 2026 - Educational and defensive-engineering use.