Secure Coding Review Labs – Facilitator Guide
Use this page when you want to turn secure coding into a short, repeatable engineering practice rather than a one-time awareness talk.
The simplest operating model
Run labs in short cycles:
- give the team a small vulnerable snippet or PR-sized change;
- ask them to identify the defect class and likely impact;
- ask for the minimum safe fix direction;
- ask how they would prevent recurrence in linting, tests, framework usage, or review criteria;
- close with one stack-specific checklist item the team agrees to enforce.
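As a concrete illustration of the cycle above, here is the kind of PR-sized snippet a facilitator might hand out: one clear defect class (SQL injection via string formatting) and one clear safer direction. This is a hypothetical lab sketch; the table, function names, and schema are invented for the exercise.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Defect: untrusted input is concatenated into the query text,
    # so input can change the shape of the statement.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safer(conn, username):
    # Minimum safe fix direction: a parameterized query; the driver
    # handles quoting, so input can never alter the statement.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "' OR '1'='1"
    print(len(find_user_vulnerable(conn, payload)))  # leaks every row: 2
    print(len(find_user_safer(conn, payload)))       # matches nothing: 0
```

In a lightning-review format, the team names the defect class, sketches the impact ("any caller can dump the table"), and agrees on the parameterized-query pattern as the review checklist item.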
Session lengths that work
| Format | Duration | Good for |
|---|---|---|
| lightning review | 15–20 min | stand-ups, guild calls, champion nudges |
| focused lab | 30–45 min | onboarding, team learning, post-incident follow-up |
| deep review workshop | 60–90 min | architecture or security-champion sessions |
What makes a good lab
A good lab should:
- feel like a realistic code review, not an exploit contest;
- use code the target team can actually read;
- focus on root cause and safer design direction;
- end with a change to behavior, checklist, linting, or test coverage.
Recommended flow
1) Start with the review problem
Frame the exercise like this:
- what would you flag in this PR?
- what would you ask the author to change before merge?
- what business or attacker value does this defect create?
2) Keep the exploit discussion bounded
Use exploit language only enough to explain risk.
Do not let the session drift into competitive offensive walkthroughs unless that is explicitly the training goal.
3) Make the team name the control
Always finish with one of these:
- code fix pattern;
- framework helper;
- lint or rule;
- unit or integration test;
- review checklist line item;
- release criterion.
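The "unit or integration test" control can be as small as a regression test that pins the safer pattern in place. A minimal sketch, assuming an open-redirect finding; the allowlist, function name, and hosts below are hypothetical:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com"}  # hypothetical allowlist for the lab

def is_safe_redirect(url: str) -> bool:
    # Safer pattern under review: only relative paths or allowlisted
    # hosts may be used as a post-login redirect target.
    parsed = urlparse(url)
    if parsed.scheme and parsed.scheme not in ("http", "https"):
        return False
    if parsed.netloc:
        return parsed.netloc in ALLOWED_HOSTS
    return url.startswith("/") and not url.startswith("//")

# Prevention mechanism: a regression test that fails if anyone
# reintroduces the open-redirect defect class.
def test_rejects_external_redirects():
    assert is_safe_redirect("/dashboard")
    assert is_safe_redirect("https://example.com/home")
    assert not is_safe_redirect("https://evil.example.net/")
    assert not is_safe_redirect("//evil.example.net/")
    assert not is_safe_redirect("javascript:alert(1)")
```

The point of closing on a control like this is that the fix survives the session: the test encodes the agreed pattern, not just the one patched line.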
Good facilitator prompts
- Which input is trusted here that should not be trusted?
- What is the security boundary in this snippet?
- Is this a data-scope problem, an execution problem, or a rendering problem?
- What would the minimum viable safer fix look like?
- What would stop this class of mistake from reappearing next sprint?
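The "data-scope problem" prompt is easiest to anchor with a snippet where the record identifier is attacker-controlled and nothing ties it to the requesting user. The handler and data below are invented for discussion, not taken from any real codebase:

```python
# Hypothetical in-memory store for the lab exercise.
INVOICES = {
    101: {"owner": "alice", "total": 40},
    102: {"owner": "bob", "total": 99},
}

def get_invoice_vulnerable(current_user: str, invoice_id: int) -> dict:
    # Data-scope defect (insecure direct object reference): any
    # authenticated user can read any invoice by guessing an id.
    return INVOICES[invoice_id]

def get_invoice_safer(current_user: str, invoice_id: int) -> dict:
    # Safer direction: enforce ownership at the data-access layer,
    # and return the same error for "missing" and "not yours".
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != current_user:
        raise PermissionError("invoice not found for this user")
    return invoice
```

Here the security boundary is the ownership check, not the authentication step, which is exactly the distinction the prompts are meant to surface.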
Scoring model (optional)
Use a lightweight four-part score:
- found the defect class;
- explained attacker or business impact;
- proposed a viable safer pattern;
- proposed a prevention mechanism.
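If you want to apply the score consistently across sessions, the four-part model reduces to a sum of booleans. A minimal sketch; the field names are my own, not part of any standard rubric:

```python
from dataclasses import dataclass

@dataclass
class LabScore:
    # One point per dimension of the four-part model above.
    found_defect_class: bool = False
    explained_impact: bool = False
    proposed_safer_pattern: bool = False
    proposed_prevention: bool = False

    def total(self) -> int:
        return sum([self.found_defect_class, self.explained_impact,
                    self.proposed_safer_pattern, self.proposed_prevention])

# Example: a team that spots and fixes the bug but skips prevention
# scores 3 of 4, which points the facilitator at the closing step.
score = LabScore(found_defect_class=True, explained_impact=True,
                 proposed_safer_pattern=True)
```

Tracking the prevention dimension separately is deliberate: it is the one most teams drop, and it is the one that changes behavior after the lab.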
This keeps the session constructive and prevents "spot the bug" from becoming too shallow.
Mapping labs to team maturity
| Team maturity | Best lab style |
|---|---|
| newcomer | one clear defect, one clear safer pattern |
| working team | mixed defects in a realistic endpoint or helper |
| senior team | design trade-off lab, ambiguous trust boundary, or layered failure |
| champions | convert the finding into tests, rules, and release criteria |
How to use the language pages
The example pages in this KB already give you:
- vulnerable snippet;
- safer direction;
- business impact;
- review cue.
The facilitator's job is to add:
- discussion prompts;
- a merge/no-merge decision;
- one prevention mechanism.
What not to do
- do not run 20 examples in one sitting;
- do not make developers guess hidden trick answers;
- do not turn every lab into policy trivia;
- do not end without an agreed operational takeaway.
Good closing questions
- Should this become a checklist item?
- Should this become a unit/integration test pattern?
- Should we add or tune a SonarQube, Semgrep, or framework rule for this?
- Which service or language family is most exposed to this mistake?
Use with
- Secure Coding Review Lab Scenarios by Language
- Language-Specific Secure Coding Review Checklists
- Code Vulnerability Examples and Fixes by Language
Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.