
Extended Synopsis — AI Agents and Code Auditing

Author: Olivier Vitrac, PhD, HDR — olivier.vitrac@adservio.fr
Institutional context: Adservio Innovation Lab · Applied Artificial Intelligence & Engineering Sciences
Academic level: Graduate / Professional Development
Estimated total duration: ~6–8 hours (excluding practical exercises)


🧭 Abstract

This lecture series introduces the emergence of AI-assisted software engineering, from conversational code agents to autonomous auditing systems.
Students and professionals will explore how Claude Code (Pro/Max) and the Model Context Protocol (MCP) enable reproducible, auditable workflows in real-world development environments.

Starting with installation and toolchain integration, the series progresses toward the architecture of autonomous agents, large-context reasoning, and the design of technical audits that combine static, dynamic, and semantic analysis.
A final module focuses on evaluating technical debt, code quality, and compliance with industry standards.


🎯 What You Will Learn

By the end of the sequence, you will be able to:

  1. Set up and operate Claude Code (Pro/Max) in VS Code and shell environments.

  2. Differentiate between assistants, copilots, and fully autonomous agents.

  3. Leverage MCP to integrate external tools (linters, test runners, CI/CD pipelines).

  4. Design and execute AI-based code audits, including security and performance analyses.

  5. Interpret and mitigate technical debt using both quantitative metrics and AI-driven recommendations.

  6. Evaluate ethical and governance implications of machine-assisted programming.


🧩 Prerequisites

| Type | Recommended knowledge |
| --- | --- |
| Programming | Intermediate proficiency in Python or similar languages |
| Software tools | Familiarity with VS Code, Git, and command-line interfaces |
| AI fundamentals | Basic understanding of LLMs, prompt engineering, and model inference |
| Systems | Linux or Unix-like environment (used for demonstrations) |

🕒 Lecture Path & Estimated Durations

| Module | Title | Duration | Format | Link |
| --- | --- | --- | --- | --- |
| Lecture 0 | Installation, setup, and shell/VS Code integration | 1 h 15 min | Demo + Slides | lecture0_install/slides.html |
| Lecture 1 | From Assistants to Agents – Practical AI Agents for Developers and Auditors | 2 h 00 min | Slides + Lab | lecture1_agents/slides.html |
| Lecture 2 | Principles and Strategies of Code Auditing – From Technical Debt to Claude Code | 2 h 30 min | Slides + Notebooks | lecture2_audit/slides.html |
| Supporting Docs | Audit prompt cheat-sheet, MCP examples, and demo notebooks | — | Reference | lecture2_audit/audit_prompts.html, lecture2_audit/claude_audit_demo.html |

🧭 Suggested Learning Paths

🧑‍🎓 Academic / Graduate Students

  1. Start with Lecture 0 to ensure a reproducible environment.

  2. Follow Lecture 1 to understand architectural and conceptual frameworks.

  3. Apply the knowledge in Lecture 2, focusing on traceability and compliance.

  4. Extend through individual projects or audits on open-source repositories.

🧑‍💻 Professional Developers

  1. Briefly review Lecture 0 (installation nuances).

  2. Focus on Lecture 1’s agent workflows and MCP usage.

  3. Deep-dive into Lecture 2 for technical-debt quantification and automation.

  4. Integrate learned workflows into CI/CD pipelines.

🧠 AI Researchers or Data Scientists

  1. Skim installation (Lecture 0).

  2. Concentrate on Lecture 1 sections describing model reasoning and context control.

  3. Use Lecture 2 as a case study for LLM-based evaluation and explainability.


🧮 How to Evaluate Your Knowledge

| Evaluation type | Description |
| --- | --- |
| Practical Audit Task | Run `claude code audit . --rules security,style --out audit.json`, interpret the findings, and propose remediations. |
| Conceptual Quiz | Explain the differences between static, dynamic, and semantic auditing. |
| Design Exercise | Write a YAML MCP tool spec connecting Claude Code to pytest and interpret the output. |
| Reflection | Identify ethical implications of AI-assisted reviews in corporate or academic contexts. |
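For the practical audit task, a useful first step before proposing remediations is to summarize the findings per rule family. As above, the JSON layout (a `findings` list whose `rule` values look like `family/check-name`) is a working assumption for the exercise, not Claude Code's documented report format.

```python
import json
from collections import Counter

def summarize(report: dict) -> Counter:
    """Count findings per rule family (the part before '/').
    The report layout is assumed for illustration."""
    return Counter(f.get("rule", "unknown").split("/")[0]
                   for f in report.get("findings", []))

# In practice: report = json.load(open("audit.json"))
report = {"findings": [
    {"rule": "security/sql-injection", "file": "db.py"},
    {"rule": "security/hardcoded-secret", "file": "config.py"},
    {"rule": "style/line-length", "file": "main.py"},
]}
print(summarize(report))  # → Counter({'security': 2, 'style': 1})
```

Such a per-family tally is a natural bridge to the technical-debt metrics discussed in Lecture 2: it turns a raw findings list into a quantitative profile that can be tracked across audit runs.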

💡 You are encouraged to maintain a learning journal summarizing the prompts, outputs, and model behaviors encountered during each session.


📚 Additional Resources


🧑‍🏫 Author

Olivier Vitrac, PhD, HDR
Founder & Lead Architect — Generative Simulation Initiative
AI Specialist & Innovation Lead — Adservio Group
📧 olivier.vitrac@adservio.fr


🔗 Navigation Map


🧩 Continuing Your Learning Journey

To deepen your understanding and validate your progress, three complementary resources are provided within this lecture series:

  1. 📚 Supplementary Readings
    An extended set of curated references covering agents, MCP protocols, Claude Code manuals, software auditing, AI governance, and foundational research papers.
    These readings serve both as preparation for the lectures and as a follow-up path for independent study or group seminars.

  2. 🧠 Further Exercises
    A collection of guided, hands-on activities for each lecture — from environment setup and MCP configuration to autonomous audit pipelines.
    Each exercise includes learning objectives, deliverables, and a suggested rubric to support self- or instructor-based evaluation.

  3. 🧩 Test Your Knowledge
    An interactive quiz interface designed to assess comprehension after each lecture.
    Open it with a URL fragment specifying the target quiz. The quiz app randomizes answer order to reduce bias and supports direct linking to a specific question using `#quiz=…&q=N`.

Together, these resources transform the lecture suite into a full learning environment — combining conceptual knowledge, practical experimentation, and self-assessment consistent with university-level pedagogy.


🎫 Licenses

| Component | License | Location |
| --- | --- | --- |
| Code (scripts, templates, build utilities) | MIT or Apache-2.0 | LICENSES/LICENSE_CODE.txt |
| Lectures, docs, slides, synopses | CC BY-NC-SA 4.0 | LICENSES/LICENSE_DOCS.txt |
| Summary / clarification | Mixed notice | LICENSES/NOTICE.txt |

🧩 Versioning