Product Security Knowledge Base

๐ŸŒ Web Application Security Testing and Gate Patterns

Intro: Older DevSecOps books were right about one important thing: web application testing belongs close to CI, but not every test belongs on every pull request. This page turns that idea into a practical, modern test model for product teams.

Why this page exists

Classic guidance around ZAP, web scanners, and "security in CI" is still useful, but teams usually need a cleaner split between:

  • fast passive checks on every change;
  • targeted active tests on protected preview environments;
  • deep manual testing on material releases, architecture changes, or high-risk workflows.

That split keeps security close to engineering without turning the pipeline into a slow and noisy obstacle.

A practical three-speed model

Speed  | Where it runs | Good for | Keep out of this stage
Fast   | PR / merge request pipeline | headers, obvious passive issues, contract drift, auth miswiring smoke tests | long active scans, large authenticated crawl jobs
Medium | preview, ephemeral test, nightly | authenticated walkthroughs, limited active scanning, common abuse checks | exhaustive crawling across the entire estate
Deep   | release candidate, dedicated security window, manual review | business logic, chained authz issues, stored XSS, multi-step abuse | "always on every commit" assumptions

Legacy versus current working model

Older pattern | Why it helped | What to do now
scheduled weekly scanner run | found obvious issues eventually | keep periodic scans, but add PR-time passive and contract checks
one giant DAST job | simple mental model | split passive, API, authenticated, and deep active testing
security-owned scanner only | gave central visibility | still centralize standards, but let product teams run first-line checks
single "pass/fail web scan" gate | easy to explain | score by test type, environment, and finding class

Stage 1 โ€” PR or merge request

Run checks that are cheap and repeatable:

  • response-header smoke tests;
  • CSP / cookie / framing checks;
  • OpenAPI or GraphQL contract linting;
  • route inventory diffing;
  • ZAP baseline or equivalent passive-only scan.
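
Route inventory diffing from the list above can be sketched in a few lines of shell. The file names and sample routes are assumptions for illustration; in CI, the two files would be the approved inventory and a list generated from the PR branch.

```shell
# Sketch of route inventory diffing (file names and routes are assumptions).
# routes-main.txt: approved inventory; routes-pr.txt: generated from this PR.
printf '/health\n/login\n' > routes-main.txt
printf '/health\n/login\n/admin/debug\n' > routes-pr.txt

# Routes present in the PR but absent from the approved inventory.
new_routes=$(comm -13 <(sort routes-main.txt) <(sort routes-pr.txt))

if [ -n "$new_routes" ]; then
  echo "New routes need security review:"
  echo "$new_routes"
fi
```

Flagging only new routes, rather than rescanning everything, keeps this stage fast while still surfacing attack-surface growth.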

Stage 2 โ€” preview environment

Run authenticated or richer checks against a realistic but disposable environment:

  • authenticated ZAP baseline or API scan;
  • targeted active probes against changed routes;
  • object-level access checks for changed endpoints;
  • file upload and download safety checks;
  • login, reset, invite, and session workflow checks.
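
The upload-safety bullet above can be expressed as a status-code assertion. The helper name, endpoint, and wiring below are assumptions, not part of any standard tooling.

```shell
# Sketch: an upload route should reject oversized or wrong-type files
# with a 4xx status. assert_rejected is a hypothetical helper.
assert_rejected() {
  case "$1" in
    4??) echo "rejected as expected: $1" ;;
    *)   echo "upload accepted unexpectedly: $1" >&2; return 1 ;;
  esac
}

# Wiring in CI might look like (assumed endpoint):
#   status=$(curl -s -o /dev/null -w '%{http_code}' \
#     -F "file=@too-big.bin" https://preview.example.internal/upload)
#   assert_rejected "$status"
assert_rejected 413
```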

Stage 3 โ€” release or pre-release

Reserve for expensive or expert work:

  • deep active scanning with tuned scope;
  • manual business-logic abuse testing;
  • chained authorization and tenant-boundary review;
  • browser storage and frontend trust-boundary review;
  • exploitability review before blocking release.

Practical snippet โ€” ZAP baseline in CI

docker run --rm -v "$(pwd):/zap/wrk/:rw" -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py \
  -t https://preview.example.internal \
  -u https://raw.githubusercontent.com/org/repo/main/zap-baseline.conf \
  -J zap-baseline.json \
  -w zap-baseline.md

Use this where you want passive findings quickly. Treat it as a broad smoke test, not as proof that the app is secure.

Practical snippet โ€” generate a baseline config and then tune it

docker run --rm -v "$(pwd):/zap/wrk/:rw" -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://preview.example.internal -g zap-baseline.conf

Then promote a few rules from WARN to FAIL, leave some as WARN, and explicitly suppress known false positives that have a real owner.

Example zap-baseline.conf fragment:

10016	IGNORE	(Web Browser XSS Protection Not Enabled)
10020	FAIL	(X-Frame-Options Header Not Set)
10021	FAIL	(X-Content-Type-Options Header Missing)
10202	FAIL	(Absence of Anti-CSRF Tokens)

Practical snippet โ€” API-focused scan from an OpenAPI spec

docker run -t ghcr.io/zaproxy/zaproxy:stable \
  zap-api-scan.py \
  -t openapi.yaml \
  -f openapi \
  -r zap-api-report.html \
  -J zap-api-report.json

Prefer API scans when the web UI is thin but the API is business-critical.

Practical snippet โ€” GitHub Actions example

name: zap-baseline
on:
  pull_request:

jobs:
  zap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start preview stack
        run: docker compose up -d --build
      - name: Run ZAP baseline
        run: |
          docker run --rm --network host \
            -v "$PWD:/zap/wrk/:rw" \
            -t ghcr.io/zaproxy/zaproxy:stable \
            zap-baseline.py \
            -t http://127.0.0.1:8080 \
            -c /zap/wrk/zap-baseline.conf \
            -J /zap/wrk/zap-baseline.json
      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: zap-baseline
          path: |
            zap-baseline.json

Practical snippet โ€” GitLab CI example

zap_baseline:
  stage: test
  image: docker:27
  services:
    - docker:27-dind
  script:
    - docker compose up -d --build
    - |
      docker run --rm --network host \
        -v "$CI_PROJECT_DIR:/zap/wrk/:rw" \
        -t ghcr.io/zaproxy/zaproxy:stable \
        zap-baseline.py \
        -t http://127.0.0.1:8080 \
        -c /zap/wrk/zap-baseline.conf \
        -J /zap/wrk/zap-baseline.json
  artifacts:
    when: always
    paths:
      - zap-baseline.json

Practical snippet โ€” simple header checks with curl

curl -sI https://app.example.com | sed -n '1,20p'

Look for:

  • content-security-policy
  • strict-transport-security
  • x-content-type-options: nosniff
  • x-frame-options or frame-ancestors in CSP
  • set-cookie with Secure, HttpOnly, and appropriate SameSite
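
The checklist above can be automated with a small grep-based smoke test. The sample capture below stands in for real output of `curl -sI https://app.example.com`, so the sketch is self-contained.

```shell
# Sample header capture (in CI this would come from curl -sI against the app).
cat > headers.txt <<'EOF'
content-security-policy: default-src 'self'; frame-ancestors 'none'
strict-transport-security: max-age=63072000
x-content-type-options: nosniff
set-cookie: session=abc; Secure; HttpOnly; SameSite=Lax
EOF

# Fail loudly when a required header is missing from the capture.
check_header() {
  grep -qi "^$1" headers.txt || { echo "MISSING: $1" >&2; return 1; }
}

check_header "content-security-policy" &&
check_header "strict-transport-security" &&
check_header "x-content-type-options: nosniff" &&
echo "required headers present"
```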

Practical snippet โ€” quick CSP regression check

curl -sI https://app.example.com | grep -i "content-security-policy"

Better yet, assert it in a test.
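
"Assert it in a test" can be as small as a substring check on the captured policy value. The literal CSP below is an assumption standing in for a real capture.

```shell
# Assumed capture of the Content-Security-Policy header value.
CSP_VALUE="default-src 'self'; frame-ancestors 'none'"

# True when the captured policy contains a given directive or token.
csp_has() {
  case "$CSP_VALUE" in
    *"$1"*) return 0 ;;
    *)      return 1 ;;
  esac
}

csp_has "frame-ancestors" && echo "CSP pins framing"
if csp_has "unsafe-inline"; then echo "warning: unsafe-inline present"; fi
```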

Practical snippet โ€” lightweight authz smoke check

TOKEN_A=$(cat token-user-a.txt)
TOKEN_B=$(cat token-user-b.txt)

curl -s -H "Authorization: Bearer $TOKEN_A" \
  https://api.example.com/invoices/123 | jq .

curl -i -H "Authorization: Bearer $TOKEN_B" \
  https://api.example.com/invoices/123

A second user should not read another tenantโ€™s or another userโ€™s object unless the workflow explicitly allows it.
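
That expectation can be turned into a status-code assertion. assert_denied is a hypothetical helper; the commented wiring reuses the URL and token from the snippet above.

```shell
# Sketch: a cross-user or cross-tenant read should come back 403 or 404.
assert_denied() {
  case "$1" in
    403|404) echo "denied as expected: $1" ;;
    *)       echo "unexpected status: $1" >&2; return 1 ;;
  esac
}

# Wiring in CI (URL and token from the snippet above):
#   status=$(curl -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer $TOKEN_B" https://api.example.com/invoices/123)
#   assert_denied "$status"
assert_denied 404
```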

What to block on

Block release or merge when you see:

  • missing authn or broken authz on changed endpoints;
  • new high-confidence stored XSS or reflected XSS in changed flows;
  • missing CSRF protection where the app still uses cookie-backed sessions;
  • file upload routes without type, size, and storage controls;
  • severe debug exposure or accidental admin routes.

Do not auto-block on every browser header warning in every environment. Treat local dev, preview, and production expectations differently.
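
A minimal sketch of that per-environment gating, assuming the finding class and environment name are available as CI variables; the class names here are illustrative.

```shell
# Hypothetical gate: authz and stored-XSS findings block everywhere;
# header warnings block only in production-like environments.
gate() {
  env="$1" finding="$2"
  case "$finding" in
    broken-authz|stored-xss) echo "block" ;;
    header-warning)
      if [ "$env" = "production" ]; then echo "block"; else echo "warn"; fi ;;
    *) echo "warn" ;;
  esac
}

gate preview header-warning      # warn
gate production header-warning   # block
gate preview broken-authz        # block
```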

Legacy notes that still matter

Older books often show:

  • a single scanner wired into CircleCI or Jenkins;
  • a simple monolithic web app as the test target;
  • explicit HTML issues like XSS, CSRF, and clickjacking;
  • dependency freshness treated as part of web security.

These are still useful ideas. What changed is the packaging and the operating model:

  • preview environments are more common;
  • APIs often matter more than server-rendered HTML;
  • modern teams expect JSON reports, artifacts, and issue links;
  • manual testers focus more on workflow abuse than on "scan everything the same way".

Review checklist

  • Are fast passive checks running on every PR?
  • Is there a separate authenticated preview scan path?
  • Are changed routes or changed specs used to limit scan scope?
  • Are known false positives tracked with an owner and expiry?
  • Are deep tests reserved for workflows that scanners miss?
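
Tracking false positives with an owner and expiry can be enforced mechanically. The file format below is an assumption: tab-separated rule id, owner, and expiry date.

```shell
# Hypothetical suppressions file: rule_id<TAB>owner<TAB>expiry (YYYY-MM-DD).
printf '10016\talice\t2024-01-01\n10055\tbob\t2099-01-01\n' > suppressions.tsv

# ISO dates compare correctly as strings, so a lexicographic "<" works here.
today=$(date +%F)
expired=$(awk -F'\t' -v today="$today" '$3 < today {print $1 " (" $2 ")"}' suppressions.tsv)

if [ -n "$expired" ]; then
  echo "expired suppressions, re-triage or delete:"
  echo "$expired"
fi
```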

---
Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.