Author: Olivier Vitrac, PhD, HDR — olivier.vitrac@adservio.fr
Affiliation: Adservio Innovation Lab · Applied AI & Engineering Sciences
License: CC BY-NC-SA 4.0
This companion proposes hands-on tasks per lecture. Each exercise lists goals, inputs, expected deliverables, and a rubric.
Goal: Produce a reproducible Claude Code setup usable from both VS Code and the shell.
Steps:
Document your OS, VS Code version, Claude Code extension version, and MCP configuration.
Create a .claudeignore tuned to your repo.
Validate shell access: list repo files and run a harmless command (e.g., python --version) through Claude Code.
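A minimal sketch of steps 2–3, assuming a Python repo and gitignore-style patterns in .claudeignore (tune the entries to your project):

```bash
# Starter .claudeignore (gitignore-style patterns assumed) — keep secrets and bulk data out.
cat > .claudeignore <<'EOF'
.env
*.pem
secrets/
node_modules/
dist/
__pycache__/
data/raw/
EOF

# Harmless shell check, captured for reports/L0_shell_check.txt.
mkdir -p reports
{ echo '$ python --version'; python --version; echo '$ ls'; ls; } \
  > reports/L0_shell_check.txt 2>&1
```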
Deliverables:
reports/L0_env_report.md (versions, screenshots, .claudeignore snippet).
reports/L0_shell_check.txt (command + output).
Rubric (5 pts): 2 pts completeness · 1 pt clarity · 1 pt safety (no secrets) · 1 pt reproducibility.
Goal: Build a slide deck using your template.
Steps: Run tools/scripts/build_reveal.sh lectures/lecture0_install --title "L0 Install".
Deliverables: lectures/lecture0_install/dist/index.html, reports/L0_build_log.md.
Rubric (5 pts): 2 build · 2 organization · 1 screenshot.
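One possible way to capture the build output for reports/L0_build_log.md (paths and invocation taken from the exercise; adjust to your shell):

```bash
# Run the build and keep a log of its output for the report.
mkdir -p reports
bash tools/scripts/build_reveal.sh lectures/lecture0_install --title "L0 Install" \
  2>&1 | tee reports/L0_build_log.md
ls -lh lectures/lecture0_install/dist/index.html   # confirm the deliverable exists
```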
Goal: Register one read-only MCP tool (e.g., file lister).
Deliverables: tools/mcp/config.json diff + reports/L0_mcp_readonly.md.
Rubric (10 pts): 4 working config · 3 safety · 3 docs.
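A sketch of what the config change might look like, assuming tools/mcp/config.json follows the common `mcpServers` layout; `list_files_server.py` is a hypothetical placeholder for a read-only file-listing server, not a real package:

```bash
# Hypothetical read-only server registration; substitute a real read-only
# file-listing MCP server and keep write-capable tools out of this config.
mkdir -p tools/mcp
cat > tools/mcp/config.json <<'EOF'
{
  "mcpServers": {
    "repo-files-readonly": {
      "command": "python",
      "args": ["tools/mcp/list_files_server.py", "--root", ".", "--read-only"]
    }
  }
}
EOF
# Capture the diff for the deliverable (use `git add -N tools/mcp/config.json` first if the file is new).
git diff -- tools/mcp/config.json
```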
Claude.md (20 min, ★☆☆)
Goal: Turn your working protocol into a checklist.
Deliverables: Claude.md with sections 0–9 + “Plan v1.x”.
Rubric (5 pts): structure, brevity, reviewability.
Goal: Use Claude Code to refactor a module and update tests.
Steps:
Ask Claude to propose a plan → approve → execute.
Enforce minimal diffs and clear commit messages.
Deliverables: PR or patch (patches/E1_refactor.patch), reports/E1_refactor.md (before/after, tests).
Rubric (10 pts): 4 correctness · 3 diffs · 3 tests.
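One way to produce the deliverables, assuming the refactor sits on a branch named `e1-refactor` off `main` (adjust to your branching model):

```bash
# Export the approved refactor as a patch and record test evidence for the report.
mkdir -p patches reports
git diff --stat main...e1-refactor                    # sanity-check that the diff stays minimal
git diff main...e1-refactor > patches/E1_refactor.patch
pytest -q 2>&1 | tee -a reports/E1_refactor.md        # append test output as evidence
```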
Goal: Chain ruff → pytest → summarization.
Deliverables: reports/E1_chain.md (commands, outputs), CI snippet (optional).
Rubric (15 pts): 6 chaining · 5 automation · 4 reporting.
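A sketch of the chain as a single script; the final step assumes the non-interactive `claude -p` print mode is available in your Claude Code CLI version:

```bash
#!/usr/bin/env bash
# E1 chain sketch: lint → tests → summarization into reports/E1_chain.md.
set -uo pipefail
mkdir -p reports

ruff check . | tee /tmp/ruff.out;   ruff_status=${PIPESTATUS[0]}
pytest -q    | tee /tmp/pytest.out; pytest_status=${PIPESTATUS[0]}

{
  echo "## E1 chain run ($(date -u +%F))"
  echo "- ruff exit code: $ruff_status"
  echo "- pytest exit code: $pytest_status"
} > reports/E1_chain.md

# Summarization step (assumes `claude -p` accepts piped input in your setup).
cat /tmp/ruff.out /tmp/pytest.out \
  | claude -p "Summarize these lint and test results in five bullet points" \
  >> reports/E1_chain.md
```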
Goal: Run Semgrep, Bandit, and Ruff locally and compare their findings with a Claude Code audit.
Deliverables: reports/E2_hybrid_audit.md with table mapping findings ↔ CWE/OWASP.
Rubric (15 pts): 5 tooling · 5 mapping · 5 clarity.
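Possible local scanner invocations whose JSON output feeds the mapping table; flags reflect recent Semgrep, Bandit, and Ruff releases, so check your installed versions:

```bash
# Local scanner runs; JSON outputs feed the findings ↔ CWE/OWASP mapping table.
mkdir -p reports/raw
semgrep scan --config auto --json --output reports/raw/semgrep.json
bandit -r src -f json -o reports/raw/bandit.json        # assumes sources live under src/
ruff check . --output-format json > reports/raw/ruff.json
jq 'length' reports/raw/ruff.json                       # quick finding count before mapping
```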
Goal: Classify debt (code/design/test/docs/security) and propose remediation.
Deliverables: reports/E2_debt.md with ranked backlog (impact × effort).
Rubric (10 pts): 4 diagnosis · 4 remediation · 2 prioritization.
Goal: Add a pre-merge gate using the Claude Code CLI plus linter exit codes.
Deliverables: CI config snippet + reports/E2_ci_gate.md.
Rubric (15 pts): 6 correctness · 5 resiliency · 4 developer experience.
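A sketch of the gate as a shell script a CI job can call before merge; the decision rests on linter/test exit codes, while the `claude -p` step is advisory and assumes the non-interactive print mode of your Claude Code CLI (the review output path is a hypothetical name):

```bash
#!/usr/bin/env bash
# Pre-merge gate sketch: linter/test exit codes decide pass/fail; the Claude review is advisory.
set -euo pipefail
mkdir -p reports

ruff check .            # non-zero exit blocks the merge
pytest -q               # non-zero exit blocks the merge

# Advisory review of the pending diff (adjust the base branch to your repo).
git diff origin/main...HEAD > /tmp/pending.diff
claude -p "List risky changes and potential blockers in this diff" \
  < /tmp/pending.diff > reports/E2_ci_gate_review.md \
  || echo "Advisory review unavailable; gate decided by exit codes above."
```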
B1 — Audit Delta over Time: compare audit.json across commits and graph the top-5 finding categories (see the command-line sketch after B2).
B2 — Policy Pack: codify rules (JSON/YAML), auto-generate release notes.
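A command-line sketch for B1, assuming audit.json is a JSON array of findings with a `category` field (adapt the jq filter to your actual schema):

```bash
# B1 sketch: top-5 finding categories at two revisions of audit.json.
mkdir -p reports
for rev in HEAD~5 HEAD; do
  git show "$rev:audit.json" \
    | jq -r '.[].category' | sort | uniq -c | sort -rn | head -5 \
    > "reports/B1_categories_${rev//[^A-Za-z0-9]/_}.txt"
done
# Compare the two top-5 lists (or feed them to your plotting tool of choice).
diff reports/B1_categories_HEAD_5.txt reports/B1_categories_HEAD.txt || true
```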
Create self_assessment.md after each exercise with:
What I attempted (2–5 lines)
What worked / didn’t
One improvement
Evidence (links/logs/PR)
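A small scaffold for that file, using hypothetical section headings that mirror the four fields above:

```bash
# Scaffold the per-exercise self-assessment (rename per exercise, e.g. self_assessment_L0.md).
cat > self_assessment.md <<'EOF'
# Self-assessment — <exercise id>

## What I attempted
(2–5 lines)

## What worked / didn't

## One improvement

## Evidence
(links / logs / PR)
EOF
```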