paul · spencer
case · lve · Agent-Led .NET Modernisation · loo van eck · 2026
senior engineer · ai engineer

LVE

Migration of the Loo van Eck (LVE) training platform from a .NET Framework 4.6 WebForms application to a hexagonal .NET 9 stack with two Blazor frontends: interactive Blazor Server for hundreds of content editors, static SSR for thousands of trainees. The legacy system kept serving traffic throughout the rebuild; cutover was a single big-bang migration.

delivered: 6 wks
prior estimate: 12+ wks
commits / day: 10+
parallel agents at peak: 10+

i. Overview

The defining feature of the project was not the target stack. It was the development model. Codebase and agent behaviour evolved together: every correction became a persistent memory, every architectural rule became a build-failing analyser, every recurring error became a guard test or a CLAUDE.md line.

The work averaged 10+ commits a day, with at least three Claude Code sessions running in parallel, and often 10+ across git worktrees at peak.

plate i — lesson view, embedded video and chapter navigation

ii. Learning to Delegate

I have always been pedantic about my code and design decisions. On a project this size, holding on to that level of control would have made the timeline impossible.

The maths of agentic coding was unforgiving. To get real leverage I had to let multiple Claude Code sessions run in parallel against a production codebase without reading every line. What got me past that resistance was other people: Amsterdam AI meetups, hackathon weekends, evenings in community sessions with engineers a few weeks ahead of me on the same curve. Each had hit the same wall and worked out a piece of the answer.

The guardrails came first, before any new code. I spent the opening week extracting the legacy app’s behaviour into BDD scenarios, each paired with a Playwright screenshot, and walked users through the document until they signed off on what the system actually did. From there I built an end-to-end test suite against the legacy code itself, and that suite set the pattern for the tests written first for the new app. The coding agents weren’t just allowed to run free. The new code grew inside a harness that already knew what it should do.

iii. Agent Orchestration

The context layer that let multiple Claude Code agents work coherently against the same codebase:

  • CLAUDE.md as the canonical rules document, with six markdown sub-agent definitions (worktree-builder, code-reviewer, simplify, Plan, Explore, general-purpose) composed into a feature pipeline.
  • Custom slash-command suite (/review, /simplify, /loop, /handoff, /mint-url, plus the full speckit.* family) for repeatable, named operations rather than ad-hoc prompting.
  • The context-mode plugin (ctx) for scoped context loading, which kept agent attention on the right files without bloating the prompt.
  • Persistent memory captured every workflow correction, project decision and reference note so the next session inherited the rule.
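The rules document accumulated one line at a time as corrections became permanent. An excerpt of the kind of rules it held might look like the following — these lines are reconstructions from the practices described in this case study, not the project’s actual file:

```markdown
## Rules (illustrative excerpt)

- After every merge to master: run `dotnet build` and the full test suite
  before starting new work.
- Never invent validation rules in Domain `Create` methods; the legacy app
  is authoritative.
- All commits follow Conventional Commits (`feat:`, `fix:`, `refactor:`, ...).
- New feature work happens in its own git worktree; never commit directly
  on master.
```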
plate ii — agentic feedback loop, persistent memory through to compiler guardrails

iv. Spec-Driven Parallel Execution

Every feature began as a written specification, not a prompt, generated using Spec Kit:

  • Specifications authored through spec → plan → tasks → implement before any production code was written for the slice.
  • Worktree-per-feature: each spec lived in its own git worktree with a dedicated builder agent, allowing multiple specs to be implemented in parallel without merge collisions.
  • BDD scenarios in Reqnroll as the executable contract between spec and code. .feature files drove Playwright through a glue layer; a feature was not “done” until its scenarios passed.
  • Conventional Commits and a mandatory build+test gate after every master merge — the rule went into CLAUDE.md.
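The executable contract between spec and code took the shape of Reqnroll scenarios. The feature and step wording below is invented for illustration; the real .feature files drove Playwright through a C# glue layer:

```gherkin
Feature: Lesson playback
  # Hypothetical scenario in the style of the Reqnroll suite;
  # each step bound to a C# definition that drove Playwright.

  Scenario: Trainee opens a lesson with an embedded video
    Given a trainee is signed in
    When they open the lesson "Intro to Safety"
    Then the chapter navigation is visible
    And the embedded video player is shown
```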
plate iii — feature pipeline, idea to integration

v. Oracle Testing Against the Legacy

Specs said what the new system should do. The legacy app was the source of truth for what it actually did:

  • Built purpose-specific applications that probed the legacy WebForms system as a black-box behavioural oracle, then wrapped those tools as Claude Code skills so any agent could call them inside its loop.
  • Visual Oracle Pattern: Playwright captured screenshots of the new Blazor output and pixel-compared them against legacy screenshots. Behavioural parity was measured, not assumed.
  • Testcontainers + Podman stood up real SQL Server instances for integration tests; the same Dapper / EF Core code paths used in production were exercised in CI.
  • Hard rule: domain entity Create methods never invented validation rules absent from the legacy app. The legacy was authoritative, even when its rules were ugly.
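The pixel-compare step of the Visual Oracle Pattern can be sketched in plain C#. This is a minimal sketch, assuming both screenshots are already decoded to equal-sized RGBA byte arrays (the real pipeline captured them with Playwright); `VisualOracle`, `PixelDiffRatio`, the tolerance, and the 1% threshold are all illustrative names and numbers, not the project’s actual code:

```csharp
using System;

// Sketch of the visual-oracle comparison. Assumes screenshots are decoded
// to same-sized RGBA byte arrays; names and thresholds are illustrative.
public static class VisualOracle
{
    // Fraction of channel values that differ by more than `tolerance`.
    public static double PixelDiffRatio(byte[] legacy, byte[] modern, int tolerance = 8)
    {
        if (legacy.Length != modern.Length)
            throw new ArgumentException("Screenshots must have identical dimensions.");

        int differing = 0;
        for (int i = 0; i < legacy.Length; i++)
        {
            if (Math.Abs(legacy[i] - modern[i]) > tolerance)
                differing++;
        }
        return (double)differing / legacy.Length;
    }

    // Parity gate: the new Blazor page must match the legacy screenshot
    // to within an (illustrative) 1% of channel values.
    public static bool HasParity(byte[] legacy, byte[] modern) =>
        PixelDiffRatio(legacy, modern) <= 0.01;
}
```

The point of measuring rather than eyeballing is that an agent can run the gate inside its own loop and react to a number.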

vi. Compiler-Enforced Guardrails

Agents were fast but drifted. The fix was not more prompting — it was build-time enforcement.

  • TreatWarningsAsErrors globally: a single yellow squiggly failed the build. The ratchet only went one way.
  • A suite of custom Roslyn analysers turned house style into compile errors. No primitives in Domain APIs (Value Objects required). No var (explicit types only). No Repository / Database / Dapper / EF in port names. Bans on ambient statics and implementation-revealing fully-qualified names.
  • NetArchTest enforced hexagonal layer dependencies: Razor could not reach into Application or Domain, Domain had no infrastructure imports, the Anti-Corruption Layer was the only path between external schemas and domain types.
  • Combined with off-the-shelf analysers and a pre-commit hook running dotnet build, an agent could not quietly skip a step. If it forgot, the build failed and the worktree refused to commit.
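The layer rules can be sketched as NetArchTest fixtures of roughly this shape; the `Lve.*` namespace names are hypothetical stand-ins for the real projects:

```csharp
using NetArchTest.Rules;
using Xunit;

// Illustrative NetArchTest rules; namespace names are hypothetical.
public sealed class HexagonalDependencyTests
{
    [Fact]
    public void Domain_has_no_infrastructure_imports()
    {
        TestResult result = Types.InCurrentDomain()
            .That().ResideInNamespace("Lve.Domain")
            .ShouldNot().HaveDependencyOnAny(
                "Lve.Infrastructure", "Dapper", "Microsoft.EntityFrameworkCore")
            .GetResult();

        Assert.True(result.IsSuccessful);
    }

    [Fact]
    public void Razor_components_cannot_reach_Application_or_Domain()
    {
        TestResult result = Types.InCurrentDomain()
            .That().ResideInNamespace("Lve.Web.Components")
            .ShouldNot().HaveDependencyOnAny("Lve.Application", "Lve.Domain")
            .GetResult();

        Assert.True(result.IsSuccessful);
    }
}
```

Because these run as ordinary xUnit tests, a layering violation fails the same gate as any other broken test.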

vii. Hexagonal Architecture & CQRS

The architecture was what made the rebuild tractable:

  • Hexagonal arrangement: ports & adapters with strict layer policing, MediatR-based CQRS (one handler per command/query), Result<T> for explicit failure modes, and a Web-layer ViewService Facade between every Razor component and the Application layer.
  • Anti-Corruption Layer as ExternalRow → Translator → Domain. The legacy database schema never leaked past Infrastructure, so the new domain model was free to be clean.
  • Dapper for legacy stored-procedure adapters, EF Core for the internal database, SQL Server FileTable for the editor image library, Polly for retry and circuit-breaker on the external DB connection.
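The ExternalRow → Translator → Domain shape can be sketched as follows. All type names here are hypothetical stand-ins for the real schema, the translator’s non-blank rule stands in for validation the legacy app already enforced (per the hard rule above), and the real row types sat behind Dapper inside Infrastructure:

```csharp
using System;

// Anti-Corruption Layer sketch; all names are hypothetical. The row type
// mirrors the legacy schema and never leaves Infrastructure; the translator
// is the only code that knows both shapes.
public sealed record LegacyLessonRow(int LessonId, string RawTitle); // external schema shape

// Domain value object: no raw primitives cross a Domain API boundary.
public sealed record LessonTitle
{
    public string Value { get; }

    public LessonTitle(string value)
    {
        // Illustrative check mirroring a rule the legacy app already enforced.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Lesson title is required.");
        Value = value;
    }
}

public sealed record Lesson(LessonTitle Title);

public static class LessonTranslator
{
    // ExternalRow -> Domain: normalisation lives here, not in the domain model.
    public static Lesson ToDomain(LegacyLessonRow row) =>
        new Lesson(new LessonTitle(row.RawTitle.Trim()));
}
```

Keeping the translation in one place is what let the domain model stay clean while the legacy schema stayed ugly.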

viii. Key Contributions

  • Designed the hexagonal architecture and the agent collaboration model that produced it.
  • Built the worktree-per-feature parallel execution workflow that let one engineer ship at multi-engineer pace.
  • Authored the Spec Kit specifications driving the migration end to end.
  • Wrote the custom Roslyn analyser suite that turned architectural intent into build-time enforcement.
  • Established the Visual Oracle Pattern and the legacy-probing skills that gave the migration measurable behavioural parity.
  • Captured the persistent memory rules so the next session started smarter than the last.
  • Delivered the rebuild in 6 weeks against a prior estimate of more than twice that.

ix. Stack

C# · .NET 9 · Blazor Server + SSR · Razor Components
MediatR · CQRS · FluentValidation · Result<T>
Dapper · EF Core · SQL Server FileTable · Polly
Reqnroll · Playwright · xUnit · NetArchTest · Testcontainers + Podman
custom Roslyn analysers · TreatWarningsAsErrors
Claude Code · sub-agents · git worktrees · slash commands · Spec Kit · persistent memory
Conventional Commits · PowerShell · Azure DevOps Server
hexagonal architecture · Anti-Corruption Layer · big-bang migration


spencerpj@gmail.com