Live Code-Review Drills and Answer Guides
Purpose: Use this page to rehearse live interview drills where the interviewer shares code, YAML, HCL, IAM JSON, pipeline config, or a short architecture fragment and asks: "What is wrong, why is it risky, and how would you fix it?"
Best pairing: read this with Interview Answer Patterns, Tactics, and Hiring-Loop Meta, the AppSec Engineer Interview Pack (2026), and the DevSecOps Engineer Interview Pack (2026).
How strong candidates handle live review
A strong answer usually follows the same sequence:
- State the finding clearly.
- Name the class of weakness or control failure.
- Explain the exploit path or operational impact.
- Prioritize by blast radius and trust boundary, not by aesthetics.
- Propose the smallest safe fix first, then the structural fix.
- Mention how to prevent recurrence.
Good interviewer-facing phrasing:
- "The primary issue is..."
- "This crosses a trust boundary because..."
- "The direct risk is..., but the secondary risk is..."
- "The tactical fix is..., and the more durable control is..."
- "I would also add a guardrail in CI or policy to stop this pattern from reappearing."
Drill 1 - Python command execution path
from flask import request
import subprocess

@app.route('/ping')
def ping():
    host = request.args.get('host', '')
    return subprocess.check_output(f"ping -c 1 {host}", shell=True)
What the interviewer wants to see
- command injection recognition
- shell expansion risk
- safe alternative patterns
- input validation versus command construction
Worked answer
Finding
This is OS command injection because untrusted request data is concatenated into a shell command and executed with shell=True.
Why it matters
An attacker can inject shell metacharacters and run arbitrary commands under the service account. The true severity depends on what that process can reach: network, filesystem, cloud metadata, secrets, service account tokens, and lateral movement paths.
Safer fix
- avoid the shell entirely;
- pass arguments as an array;
- constrain the input to an allowlisted hostname/IP format;
- rate-limit and log suspicious requests.
import ipaddress
import subprocess

from flask import request

@app.route('/ping')
def ping():
    host = request.args.get('host', '')
    ipaddress.ip_address(host)  # raises ValueError for anything that is not a literal IP
    return subprocess.check_output(["ping", "-c", "1", host], text=True)
Better long-term control
If this is a diagnostic feature, move it behind admin authorization, add audit logging, and ask whether the feature should exist at all in production.
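If hostnames must be supported in addition to literal IPs, the allowlist idea can be factored into a standalone validator. This is a sketch; the hostname pattern and length limits below are illustrative assumptions that should be tightened for a real environment:

```python
import ipaddress
import re

# Conservative hostname shape: alphanumeric labels joined by dots, hyphens only
# in the middle of a label. Illustrative, not a full RFC implementation.
_HOSTNAME_RE = re.compile(
    r"^(?=.{1,253}$)"
    r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
    r"(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$"
)

def is_safe_ping_target(host: str) -> bool:
    """Accept only a literal IP address or a plain hostname; reject shell metacharacters."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        pass
    return bool(_HOSTNAME_RE.match(host))
```

The handler would reject the request with a 400 when this returns False, before any command is built.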
Drill 2 - Java SQL path with user-controlled sort field
String sort = request.getParameter("sort");
String sql = "SELECT id, amount, created_at FROM invoices ORDER BY " + sort;
PreparedStatement ps = connection.prepareStatement(sql);
ResultSet rs = ps.executeQuery();
Worked answer
Finding
Even though PreparedStatement is used, this is still dangerous because the identifier / clause itself is attacker-controlled. Prepared statements protect values, not arbitrary SQL syntax injected into keywords, identifiers, or clauses.
Risk
- SQL injection through the ORDER BY clause
- query manipulation or error-based discovery
- possibility of broader abuse if the driver / database supports stacked behavior or unsafe modes
Safer pattern
Map allowed sort values to a fixed server-side list.
Map<String, String> allowed = Map.of(
    "created_at", "created_at",
    "amount", "amount",
    "id", "id"
);
String sort = allowed.getOrDefault(request.getParameter("sort"), "created_at");
String sql = "SELECT id, amount, created_at FROM invoices ORDER BY " + sort;
Strong interview comment
"I would call this an allowlist failure, not just an SQL bug. The architectural lesson is that user input should select from server-owned choices, not become syntax."
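The split between values and identifiers can be demonstrated end to end in a few lines of Python with sqlite3. This is a sketch for illustration only; the table, column names, and fallback choice are assumptions:

```python
import sqlite3

# Server-owned choices: user input selects from this set, it never becomes syntax.
ALLOWED_SORT = {"created_at", "amount", "id"}

def fetch_invoices(conn, min_amount, sort):
    if sort not in ALLOWED_SORT:
        sort = "created_at"  # safe default instead of echoing attacker input
    # Values travel through placeholders; the identifier comes from the allowlist.
    sql = f"SELECT id, amount, created_at FROM invoices WHERE amount >= ? ORDER BY {sort}"
    return conn.execute(sql, (min_amount,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL, created_at TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, 50.0, "2026-01-01"), (2, 10.0, "2026-01-02")])

rows = fetch_invoices(conn, 0, "amount")           # sorted by amount
rows2 = fetch_invoices(conn, 0, "1; DROP TABLE x") # falls back to created_at
```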
Drill 3 - GitHub Actions workflow with risky pull-request path
name: ci
on:
  pull_request_target:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/integration.sh
        env:
          PROD_TOKEN: ${{ secrets.PROD_TOKEN }}
Worked answer
Finding
pull_request_target runs in the context of the base repository and can expose secrets to attacker-controlled pull-request logic if not handled with extreme care.
Why risky
This mixes untrusted code and trusted secrets in the same execution path.
Safer options
- run untrusted PR jobs without secrets;
- reserve secrets for trusted branches or a later promotion step;
- isolate integration tests behind a reviewed internal workflow;
- minimize token permissions.
Stronger answer
A mature candidate also says: "This is not only a CI bug. It is a trust-boundary design failure between code provenance and secret reachability."
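The split can be sketched as two separate workflow files: untrusted PR code runs under the plain pull_request event with no secrets, and the secret-bearing job runs only on trusted, merged code. The file names and the unit-test script below are illustrative assumptions:

```yaml
# .github/workflows/pr.yml - runs attacker-visible code with no secrets
name: pr-checks
on:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/unit-tests.sh   # no PROD_TOKEN reachable here

# .github/workflows/integration.yml - secrets only on trusted refs
name: integration
on:
  push:
    branches: [main]
jobs:
  integration:
    runs-on: ubuntu-latest
    permissions:
      contents: read   # minimize the default GITHUB_TOKEN as well
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/integration.sh
        env:
          PROD_TOKEN: ${{ secrets.PROD_TOKEN }}
```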
Drill 4 - Kubernetes pod with broad privilege surface
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell
spec:
  hostNetwork: true
  containers:
    - name: shell
      image: ubuntu:latest
      securityContext:
        privileged: true
        allowPrivilegeEscalation: true
      command: ["sleep", "3600"]
Worked answer
Finding
This pod defeats multiple layers of isolation:
- privileged: true
- allowPrivilegeEscalation: true
- hostNetwork: true
- a mutable latest tag
Risk
If compromised, the pod gains an easy path to host-level observation or manipulation and dramatically increases blast radius.
Fix
Use a minimal image, pin by digest, remove privilege, apply seccomp/AppArmor/SELinux where available, deny host namespace access, and enforce a restricted baseline through admission policy.
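A restricted version of the same pod might look like this sketch. The pinned digest is a placeholder, and the available seccomp profile depends on the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell
spec:
  containers:
    - name: shell
      image: ubuntu@sha256:<pinned-digest>   # pin by digest, not a mutable tag
      command: ["sleep", "3600"]
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
```

hostNetwork is simply omitted, so the pod keeps its own network namespace; an admission policy enforcing the restricted Pod Security Standard would reject the original manifest outright.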
Interview nuance
A very strong candidate says they would also ask why this workload exists, because emergency debug shells often represent a process failure, not just a YAML failure.
Drill 5 - Terraform IAM policy with wildcard permissions
resource "aws_iam_policy" "deploy" {
  name   = "deploy-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "*"
      Resource = "*"
    }]
  })
}
Worked answer
Finding
This is an obvious least-privilege failure, but a strong answer should not stop at "too broad".
What else to say
- identify who assumes this policy;
- check whether it is reachable from CI, runners, or automation;
- determine if it can modify IAM, KMS, secrets, network controls, or logging;
- classify it as a privilege-escalation enabler.
Fix
Replace broad actions and resources with task-scoped permissions, split read from write, and enforce policy review with IaC policy checks.
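A task-scoped replacement might look like the following sketch. The statement names, actions, bucket, and ARNs are invented placeholders; the real set should be derived from what the deploy job actually does:

```hcl
resource "aws_iam_policy" "deploy" {
  name = "deploy-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "ReadWriteArtifacts"
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::example-deploy-artifacts/*"
      },
      {
        Sid      = "UpdateService"
        Effect   = "Allow"
        Action   = ["ecs:UpdateService"]
        Resource = "arn:aws:ecs:us-east-1:123456789012:service/example-cluster/*"
      }
    ]
  })
}
```

Note what is absent: no iam:*, kms:*, or logging actions, so the identity cannot grant itself more power or erase its tracks.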
Strong phrasing
"The dangerous part is not that the policy is broad. It is that this broadness likely collapses multiple trust zones into one identity."
Drill 6 - GraphQL resolver missing object-level authorization
const resolvers = {
  Query: {
    invoice: async (_: any, { id }: { id: string }, ctx: Ctx) => {
      return ctx.db.invoice.findUnique({ where: { id } });
    }
  }
};
Worked answer
Finding
Authentication may exist elsewhere, but object-level authorization is absent. Any authenticated user may request any invoice by ID.
Risk
This is classic IDOR/BOLA behavior. The bug is in business authorization, not GraphQL itself.
Fix
Bind data access to tenant or ownership context.
return ctx.db.invoice.findFirst({
  where: { id, tenantId: ctx.user.tenantId }
});
Prevention
Move auth decisions out of ad hoc resolvers where possible and standardize authorization checks in a shared layer.
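One way to centralize the check is a small helper that every resolver routes through. This is a sketch: the Ctx shape and the choice to make "forbidden" indistinguishable from "missing" are assumptions:

```typescript
type Ctx = { user: { tenantId: string } };

// Shared object-level authorization: one implementation, reused by every resolver.
function authorizeTenant<T extends { tenantId: string }>(ctx: Ctx, record: T | null): T {
  if (record === null || record.tenantId !== ctx.user.tenantId) {
    // Same error for missing and forbidden records avoids leaking existence.
    throw new Error("not found");
  }
  return record;
}

// Hypothetical usage inside the resolver from the drill:
// invoice: async (_, { id }, ctx) =>
//   authorizeTenant(ctx, await ctx.db.invoice.findUnique({ where: { id } })),
```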
Drill 7 - NGINX reverse proxy leaks internal headers
location / {
    proxy_pass http://backend;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Worked answer
Review points
A good candidate does not invent a vulnerability blindly. They inspect context.
What to ask
- are trusted proxy headers normalized consistently?
- does the app trust client-supplied forwarding headers?
- are internal routing or auth headers stripped?
- is mTLS or network allowlisting expected upstream?
Likely concern
Header trust confusion can break audit trails, rate limits, geo controls, or internal-auth patterns.
Strong answer style
"I need to verify how the backend interprets forwarding headers before calling this a bug, but I already see a possible identity and logging integrity issue."
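If the backend does trust edge-set headers, a hardening sketch could look like this. The internal header name is an invented example; the real list comes from the backend's actual trust assumptions:

```nginx
location / {
    proxy_pass http://backend;

    # Overwrite, never append, identity-bearing headers at the edge.
    proxy_set_header X-Forwarded-For   $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;

    # An empty value drops the header, so clients cannot smuggle internal
    # routing or auth headers through the proxy.
    proxy_set_header X-Internal-Auth   "";
}
```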
Drill 8 - Secrets in Docker build context
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run build
Worked answer
Finding
COPY . . can accidentally sweep .env files, credentials, cloud configuration, SSH material, build metadata, or other local secrets from the build context into the image.
Why it matters
Even if the final runtime image does not expose these files directly, they may remain in layers, build cache, or CI artifact traces.
Fix
- define .dockerignore aggressively;
- separate build and runtime stages;
- use secret mounts for build-time secrets instead of file copies;
- scan images and layers for leaked secrets.
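The staged approach can be sketched as follows, assuming BuildKit and an npm auth token as the example secret. The secret id, paths, and output directory are illustrative:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
# The secret is mounted only for this command and is never written into a layer.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
COPY . .
RUN npm run build

# Runtime stage carries only the built output, not the build context.
FROM node:22-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

A strict .dockerignore (excluding .env, .git, key material, and local tooling) still matters, because COPY . . in the build stage otherwise pulls those files into the build cache.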
Drill 9 - Logging sensitive auth material
log.Printf("login failed for user=%s password=%s token=%s", user, password, token)
Worked answer
Finding
This violates basic log hygiene and may create a secondary breach path through observability systems.
Risk expansion
A strong candidate mentions:
- log aggregation retention
- operator access
- downstream SIEM copies
- support-ticket screenshots
- legal/privacy exposure
Fix
Never log raw secrets or credentials. Use structured logging with redaction and a documented sensitive-field policy.
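A minimal redaction layer can be sketched in Python. The sensitive-field set here is an illustrative assumption standing in for a documented policy:

```python
import json
import logging

# Illustrative policy; a real one would be documented and centrally maintained.
SENSITIVE_FIELDS = {"password", "token", "secret", "authorization"}

def redact(fields):
    """Return a copy of a structured log payload with sensitive values masked."""
    return {k: "[REDACTED]" if k.lower() in SENSITIVE_FIELDS else v
            for k, v in fields.items()}

def log_event(logger, message, **fields):
    # All structured fields pass through redaction before serialization.
    logger.info("%s %s", message, json.dumps(redact(fields), sort_keys=True))

logging.basicConfig(level=logging.INFO)
log_event(logging.getLogger("auth"), "login failed", user="alice", password="hunter2")
```

The emitted line keeps user="alice" for debugging while the password value is replaced before it ever reaches the aggregation pipeline.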
Drill 10 - Release gate bypass through manual environment switch
deploy_prod:
  stage: deploy
  when: manual
  script:
    - kubectl config use-context prod
    - helm upgrade api ./chart
Worked answer
Finding
This is not automatically wrong, but it raises several review questions:
- who can trigger it?
- what evidence or approvals are required first?
- is the deployer identity separate from the builder identity?
- is there signed artifact verification or are we deploying whatever the chart points to?
- are production credentials directly reachable from a generic CI job?
Mature answer
"I would not evaluate this only as a YAML fragment. I would inspect the release-governance model around it: approvals, provenance, artifact immutability, deploy identity separation, and rollback evidence."
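Some of those governance controls can be expressed directly in the job. This is a sketch of the direction, not a complete answer; protected environments and approver lists still have to be configured on the GitLab side:

```yaml
deploy_prod:
  stage: deploy
  when: manual
  environment: production      # pair with a protected environment and approval rules
  resource_group: production   # serialize concurrent production deploys
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'   # only trusted refs reach this job
  script:
    - kubectl config use-context prod
    - helm upgrade api ./chart
```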
What interviewers usually reward
They reward candidates who:
- prioritize the highest-risk trust-boundary break first;
- distinguish symptom from root cause;
- propose a tactical fix and a systemic control;
- avoid absolutist language when more context is needed;
- think about prevention, detection, and rollback, not only patching.