Product Security Knowledge Base

Security Requirements, Trust Boundaries, Data Flows, and Architectural Trade-offs

Short version: these four concepts are the backbone of secure design work. If a review cannot explain them clearly, the team is usually still debating implementation details before it has agreed on the security problem.

1) Security requirements

Security requirements are the explicit statements of what the system must protect and how it must behave. They are not “nice-to-have controls” or vague aspirations like “be secure.” They are implementable, testable expectations.

Typical examples:

  • administrators must use phishing-resistant MFA;
  • customer data classified as confidential must be encrypted in transit and at rest;
  • privileged actions must generate auditable events;
  • service-to-service calls crossing trust boundaries must authenticate mutually;
  • internet-facing rate-limited endpoints must fail closed when policy enforcement is unavailable.
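
The fail-closed expectation in the last example can be sketched as a wrapper around a policy check. This is a minimal illustration, not a real policy-engine API; the function and exception names are assumptions:

```python
# Sketch of a fail-closed policy gate (names are illustrative, not a real API).
class PolicyUnavailable(Exception):
    """Raised when the policy-enforcement service cannot be reached."""

def is_allowed(check_policy, request):
    """Return True only when policy explicitly allows the request.

    Any failure to reach or evaluate policy denies the request (fail closed),
    rather than letting traffic through on an optimistic default.
    """
    try:
        return check_policy(request) is True
    except PolicyUnavailable:
        return False

# A policy backend that is down results in denial, not a silent allow.
def broken_backend(request):
    raise PolicyUnavailable("policy service unreachable")

print(is_allowed(broken_backend, {"path": "/admin"}))   # False
print(is_allowed(lambda r: True, {"path": "/admin"}))   # True
```

Note the `is True` comparison: anything other than an explicit allow, including a malformed policy response, is treated as a deny.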

Practical sources of security requirements:

  • product and business abuse cases;
  • architecture review findings;
  • compliance and contractual commitments;
  • incident learnings;
  • platform baselines and reference architectures;
  • public catalogs such as ASVS.

The useful test is simple: can the requirement be traced to an owner, a component, and a verification method? If not, it is usually still a principle rather than a requirement.
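
That traceability test can be made concrete as a record with mandatory fields. The schema below is a hypothetical sketch, not a standard format:

```python
from dataclasses import dataclass

# Hypothetical requirement record illustrating the traceability test:
# a real requirement names an owner, a component, and a verification method.
@dataclass
class SecurityRequirement:
    req_id: str
    statement: str
    owner: str          # accountable team or person
    component: str      # where the control is implemented
    verification: str   # how compliance is checked

    def is_traceable(self) -> bool:
        """A requirement without owner, component, or verification is
        still a principle, not a requirement."""
        return all([self.owner, self.component, self.verification])

req = SecurityRequirement(
    req_id="SR-014",
    statement="Privileged actions must generate auditable events.",
    owner="platform-security",
    component="audit-pipeline",
    verification="integration test + quarterly log review",
)
print(req.is_traceable())  # True
```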

2) Trust boundaries

A trust boundary is the point where data, identity, or control crosses from one trust level or trust zone to another. In practice, these are the places where a design starts to need real security controls rather than optimistic assumptions.

Common trust boundaries:

  • browser ↔ edge / CDN / WAF
  • public internet ↔ application edge
  • frontend ↔ backend API
  • one microservice ↔ another microservice
  • workload ↔ secrets manager
  • workload ↔ cloud control plane
  • production admin workstation ↔ production service
  • application ↔ third-party SaaS / webhook destination

What usually matters at a trust boundary:

  • authentication of the caller;
  • authorization of the requested action;
  • integrity and confidentiality of the data flow;
  • logging, traceability, and replay protection where relevant;
  • rate limiting / abuse controls; and
  • input validation plus protocol assumptions.

If a team draws a data flow but does not mark trust boundaries, it will often miss where stronger controls are supposed to activate.
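
The control list above can be sketched as an explicit gate applied to every boundary crossing. The check names and request fields here are illustrative assumptions, not a specific framework's API:

```python
# Illustrative boundary gate: each control from the list above becomes an
# explicit, ordered check rather than an optimistic assumption.
def cross_boundary(request, controls):
    """Run each boundary control in order; deny on the first failure,
    reporting which control rejected the crossing."""
    for name, check in controls:
        if not check(request):
            return ("deny", name)
    return ("allow", None)

controls = [
    ("authn", lambda r: r.get("caller_verified", False)),
    ("authz", lambda r: r.get("action") in r.get("allowed_actions", [])),
    ("rate_limit", lambda r: r.get("requests_last_minute", 0) < 100),
    ("input_valid", lambda r: isinstance(r.get("payload"), dict)),
]

req = {"caller_verified": True, "action": "read",
       "allowed_actions": ["read"], "requests_last_minute": 3, "payload": {}}
print(cross_boundary(req, controls))  # ('allow', None)
```

Returning the name of the failing control also gives the logging and traceability that the list asks for at each boundary.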

3) Data flows

A data flow explains what moves where, in what direction, using what protocol, under what identity, and for what purpose. Good data flows are not only diagrams; they are also review artifacts.

A good data-flow description usually answers:

  • who or what initiates the action;
  • what data moves;
  • whether the data is sensitive or regulated;
  • what component validates it;
  • what component persists it;
  • what trust boundaries are crossed;
  • what controls are expected at each crossing; and
  • what logs, traces, or evidence remain after the transaction.
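
The questions above can be captured as a review artifact rather than only a diagram. The field names below mirror those questions and are illustrative, not a standard schema:

```python
from dataclasses import dataclass

# Hypothetical review artifact for one data flow; field names are assumptions
# that mirror the checklist above, not a formal data-flow schema.
@dataclass
class DataFlow:
    initiator: str
    data: str
    sensitivity: str
    validated_by: str
    persisted_by: str
    boundaries_crossed: list
    controls: dict      # boundary -> expected controls at that crossing
    evidence: list      # logs / traces left after the transaction

    def unreviewed_boundaries(self):
        """Boundaries crossed without any expected control recorded."""
        return [b for b in self.boundaries_crossed if not self.controls.get(b)]

flow = DataFlow(
    initiator="browser session",
    data="payment details",
    sensitivity="confidential",
    validated_by="api-gateway",
    persisted_by="billing-db",
    boundaries_crossed=["internet->edge", "frontend->backend"],
    controls={"internet->edge": ["TLS", "WAF"]},
    evidence=["access log", "audit event"],
)
print(flow.unreviewed_boundaries())  # ['frontend->backend']
```

A helper like `unreviewed_boundaries` makes the common gap visible: a crossing that was drawn but never assigned controls.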

Typical mistakes:

  • only drawing request arrows but not showing identity propagation;
  • forgetting asynchronous paths such as queues, webhooks, retries, and DLQs;
  • ignoring operational data such as audit logs, backups, secrets, and telemetry;
  • treating “internal traffic” as inherently trusted.

4) Architectural trade-offs

Architectural trade-offs are the security-relevant decisions where improving one property can weaken, slow, complicate, or raise the cost of another. This is normal. Secure design is rarely about perfect maximization; it is about intentional choices with explicit residual risk.

Common trade-offs:

  • stronger authentication vs. user friction and support cost;
  • tighter segmentation vs. operational complexity and deployment speed;
  • more logging vs. privacy exposure and storage cost;
  • short-lived credentials vs. automation and reliability burden;
  • synchronous policy checks vs. latency and availability impact;
  • strict default-deny controls vs. slower onboarding and integration work.

Useful review habit: do not ask “what is the most secure option?” Ask instead:

  1. what risk is being reduced;
  2. what cost is introduced;
  3. what failure mode is created;
  4. who owns the resulting residual risk; and
  5. how the team will know if the choice stops working.
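
The five questions above map naturally onto a decision record. The fields and example values below are assumptions for illustration, not a formal template:

```python
from dataclasses import dataclass

# Illustrative trade-off decision record answering the five review questions;
# fields and values are hypothetical, not a mandated format.
@dataclass
class TradeoffDecision:
    option: str
    risk_reduced: str
    cost_introduced: str
    failure_mode: str
    residual_risk_owner: str
    review_signal: str   # how the team will know the choice stops working

    def is_complete(self) -> bool:
        """Every question must have an answer before the decision is recorded."""
        return all(vars(self).values())

decision = TradeoffDecision(
    option="short-lived service certificates",
    risk_reduced="long-term credential theft",
    cost_introduced="renewal automation and on-call burden",
    failure_mode="certificate expiry causes outage if renewal fails",
    residual_risk_owner="platform-infra team",
    review_signal="renewal-failure alerts trending upward",
)
print(decision.is_complete())  # True
```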

Condensed table

| Concept | What it is | Why it matters | Quick example | What to capture |
| --- | --- | --- | --- | --- |
| Security requirements | Explicit, testable security expectations | Turns abstract security goals into buildable and reviewable controls | “All privileged actions require MFA and audit logging.” | ID, statement, rationale, owner, verification method |
| Trust boundaries | Points where trust level changes | Reveals where stronger controls must activate | Public API call enters the service mesh | Zones, identities, controls, failure assumptions |
| Data flows | The movement of data, identity, and control through the system | Exposes attack paths, hidden storage, and missing validation | Browser request → API → queue → worker → database | Source, destination, protocol, sensitivity, validation, storage |
| Architectural trade-offs | Decisions where security affects usability, cost, speed, or reliability | Forces explicit residual-risk decisions instead of hidden compromises | Short-lived certs improve security but increase operational burden | Option, benefit, cost, owner, fallback, residual risk |

Lightweight checklist for design reviews

  • Are the key security requirements explicit and mapped to components?
  • Are trust boundaries marked, named, and defended?
  • Do the data flows include async paths, admin paths, and third parties?
  • Are the high-impact trade-offs written down as decision records?
  • Is the residual risk assigned to a real owner rather than “the team”?

Best-practice references

Use these instead of copying giant control catalogs into the KB: