Product Security Knowledge Base

Take-Home Assignments and Evaluation Guide

Purpose: This page gives realistic take-home interview assignments for AppSec, DevSecOps, Architect, Manager, and senior IC roles. It also explains what strong reviewers look for, what weak submissions usually miss, and how to calibrate feedback.

Design principles for good take-homes

A good take-home should:

  • test judgment, not unpaid bulk labor;
  • be completable in 2-4 focused hours;
  • include a clear prompt, constraints, and expected deliverables;
  • reward explanation quality, not only tool output;
  • allow multiple valid solutions.

A bad take-home usually:

  • hides the evaluation criteria;
  • asks for excessive implementation from scratch;
  • turns into free consulting;
  • punishes candidates for missing undocumented company context.

Assignment 1 - AppSec review of a small web service

Candidate prompt

You are given a small API service with:

  • one authentication flow;
  • one admin-only endpoint;
  • one file-upload route;
  • one background task trigger.

Deliver:

  1. a short findings memo with severity and rationale;
  2. the top five fixes in order;
  3. one code-level patch for the most important issue;
  4. one paragraph on how you would prevent recurrence.

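For deliverable 3, a strong code-level patch is usually small and targeted. The following is a minimal sketch of upload-path validation in plain Python; the allowlist, size cap, and function name are hypothetical illustrations, not taken from any assignment codebase:

```python
import os
import re

# Hypothetical guardrails: extension allowlist and a size cap
# (the cap would be enforced by the caller before writing bytes).
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
MAX_UPLOAD_BYTES = 5 * 1024 * 1024

def safe_upload_path(upload_dir: str, client_filename: str) -> str:
    """Reject path traversal and unexpected file types before writing."""
    # Strip any directory components the client supplied.
    name = os.path.basename(client_filename.replace("\\", "/"))
    root, ext = os.path.splitext(name)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"extension not allowed: {ext!r}")
    # Keep only conservative characters in the filename stem.
    root = re.sub(r"[^A-Za-z0-9_.-]", "_", root) or "upload"
    path = os.path.join(upload_dir, root + ext.lower())
    # Belt and braces: the resolved path must stay inside upload_dir.
    if os.path.commonpath(
        [os.path.abspath(path), os.path.abspath(upload_dir)]
    ) != os.path.abspath(upload_dir):
        raise ValueError("path escapes upload directory")
    return path
```

A patch like this scores well because it addresses the root cause (trusting client-supplied filenames) rather than blocking a single payload.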
What strong submissions do

  • separate authn, authz, input validation, logging, and secrets issues;
  • do not inflate every bug to critical;
  • explain exploitability and business impact;
  • propose guardrails such as SAST rules, tests, review checklists, or framework-level controls.

Red flags

  • tools dumped without interpretation;
  • generic OWASP language with no app-specific judgment;
  • missing prioritization;
  • no distinction between immediate fix and structural fix.

Assignment 2 - DevSecOps pipeline and runner review

Candidate prompt

You are given:

  • a GitHub Actions workflow;
  • a self-hosted runner description;
  • an ECR/GCR/ACR publishing step;
  • a Kubernetes deployment job.

Deliver:

  1. a trust-boundary diagram;
  2. a list of top risks;
  3. a proposed hardened pipeline design;
  4. one concrete control each for secrets, approvals, provenance, and runner isolation.

What strong submissions do

  • identify where untrusted code meets trusted credentials;
  • question self-hosted runner persistence and network reachability;
  • call out token permission minimization;
  • mention signed artifacts, build attestations, and promotion of tested images over rebuilds;
  • propose staged rollout and break-glass handling.
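A hardened pipeline answer often reduces to a few concrete workflow settings. The sketch below is illustrative GitHub Actions YAML, not the assignment's actual workflow; the job names, scripts, and the pinned-SHA placeholder are hypothetical:

```yaml
name: build-and-publish

# Least-privilege default token; jobs opt in to what they need.
permissions:
  contents: read

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest            # ephemeral hosted runner, not self-hosted
    permissions:
      contents: read
      id-token: write                 # OIDC for short-lived cloud credentials
    steps:
      - uses: actions/checkout@<pinned-sha>   # pin third-party actions to a commit SHA
      - run: ./build.sh

  deploy:
    needs: build
    environment: production           # separate deploy identity plus required approvals
    runs-on: ubuntu-latest
    steps:
      - run: ./promote.sh             # promote the built artifact; do not rebuild
```

The key judgments a reviewer looks for are visible here: default-deny token permissions, build identity separated from deploy identity, and promotion of an existing artifact instead of a second build.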

Red flags

  • only talking about secret scanning;
  • ignoring runner compromise persistence;
  • missing separation between build identity and deploy identity;
  • no release evidence or provenance story.

Assignment 3 - Architect design review

Candidate prompt

Design a security review memo for a SaaS product adding:

  • public API
  • async worker fleet
  • internal admin console
  • model-serving feature using third-party provider
  • multi-region deployment

Deliver:

  1. trust boundaries and data flows;
  2. top architectural risks;
  3. security requirements;
  4. recommended controls with trade-offs;
  5. open questions you would escalate before sign-off.

What strong submissions do

  • separate internet edge, internal service plane, admin plane, data plane, and third-party provider boundary;
  • treat external model integration as both data-governance and abuse-risk surface;
  • identify cross-region key management, tenancy, and logging considerations;
  • explicitly note where business decisions are needed.

Red flags

  • control laundry list without architecture;
  • zero mention of data classification or secrets flow;
  • no trade-offs;
  • no sign-off blockers or assumptions.

Assignment 4 - Manager triage and planning simulation

Candidate prompt

You lead a small Product Security team. In the same week you receive:

  • one high-severity external report;
  • one critical scanner finding in a non-critical internal app;
  • one major release missing threat-model review;
  • one platform migration requiring security support;
  • one request from leadership for a board-ready metrics pack.

Deliver:

  1. your triage order and why;
  2. what you delegate and to whom;
  3. what you escalate;
  4. one-week and one-month plan;
  5. stakeholder messages for engineering, product, and executives.

What strong submissions do

  • prioritize by exposure, exploitability, business impact, and deadlines;
  • keep the program operating while handling the urgent issue;
  • show clear ownership and escalation paths;
  • avoid absorbing all the work personally instead of managing the system.
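The prioritization logic above can be made explicit with a toy scoring sketch. The weights and factor ratings below are hypothetical illustrations, not a prescribed formula:

```python
# Toy triage scorer: higher score = handle sooner.
# Factors mirror the guidance: exposure, exploitability, business impact, deadline.
WEIGHTS = {"exposure": 3, "exploitability": 3, "impact": 2, "deadline": 2}

def triage_score(item: dict) -> int:
    """Weighted sum of 0-3 factor ratings."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

items = [
    {"name": "external high-severity report", "exposure": 3, "exploitability": 3, "impact": 3, "deadline": 2},
    {"name": "critical scanner finding, internal app", "exposure": 1, "exploitability": 2, "impact": 1, "deadline": 1},
    {"name": "release missing threat model", "exposure": 2, "exploitability": 1, "impact": 2, "deadline": 3},
    {"name": "platform migration support", "exposure": 1, "exploitability": 1, "impact": 2, "deadline": 2},
    {"name": "board metrics pack", "exposure": 0, "exploitability": 0, "impact": 2, "deadline": 2},
]

ranked = sorted(items, key=triage_score, reverse=True)
```

With these ratings, the nominally "critical" scanner finding ranks below the unreviewed release, which is exactly the severity-label trap the red flags warn about.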

Red flags

  • reacting only to severity labels;
  • no delegation;
  • no stakeholder communication discipline;
  • missing operational cadence after the fire is out.

Assignment 5 - Director / VP operating model memo

Candidate prompt

Write a 2-page memo proposing a Product Security operating model for a 500-engineer software company with:

  • Kubernetes-based delivery
  • cloud-native stack
  • mixed B2B and enterprise customers
  • recent audit pressure
  • growing vulnerability backlog

Deliver:

  1. target operating model;
  2. top three investments;
  3. metrics to track;
  4. first two quarters of execution;
  5. risks if the proposal is underfunded.

What strong submissions do

  • tie team design and control investments to business outcomes;
  • explain what gets centralized versus embedded;
  • separate foundational controls from long-tail ambitions;
  • acknowledge political and delivery friction.

Red flags

  • too many programs launched at once;
  • metrics with no decision value;
  • no budget discipline;
  • no explanation of what the company should deliberately not do yet.

Evaluation rubric

  • Risk judgment. Excellent: focuses on trust boundaries, blast radius, and realistic abuse paths. Weak: chases surface-level issues or scanner labels only.
  • Prioritization. Excellent: orders work by impact, exploitability, and operational context. Weak: treats all findings as equally urgent.
  • Technical depth. Excellent: explains root cause and proposes feasible controls. Weak: uses buzzwords without showing mechanism.
  • Communication. Excellent: clear, concise, executive-friendly where needed. Weak: overlong, vague, or tool-centric.
  • Prevention mindset. Excellent: adds guardrails, tests, policy, and feedback loops. Weak: stops at one-off fixes.
  • Practicality. Excellent: respects delivery reality and rollout friction. Weak: recommends idealized but unusable controls.

Reviewer note

A take-home should not be graded as if it were a production RFC. The better question is:

"Does this submission show the reasoning patterns we want under ambiguity and time pressure?"