DDD's missing definition of done.
Three-pass convergence. Gap reports as diagnostic signals. Zero unresolved gaps.
Standing on shoulders
SDD builds on two decades of DDD contributions. Every concept has a lineage in the DDD literature.
What is Signal-Driven Development?
Signal-Driven Development is a solo-practitioner methodology that extends Domain-Driven Design with three things it has never had: a structured convergence process, diagnostic feedback loops, and a measurable definition of done.
The core mechanism is the gap report — a structured diagnostic that evaluates a domain specification against four categories of completeness and consistency. Gaps are signals, not verdicts. The architect reads, investigates, and resolves each one. The definition of done is simple: zero unresolved gaps.
The problem
DDD has no feedback loop. Traditional knowledge crunching assumes collaborative workshops with domain experts in the room — but solo practitioners don't have a second pair of eyes. Architectural decisions get deferred without rationale. Boundaries get drawn by convention rather than analysis. There is no way to know when a domain model is complete. You just stop when it "feels right."
SDD replaces intuition with measurement. The gap report acts as your second pair of eyes — a structured diagnostic that surfaces what confirmation bias, time pressure, and familiarity would otherwise hide.
How it works
A domain specification iterates through passes. Each pass produces three artifacts: the specification itself, a gap report, and a resolution log. The gap count must decrease across passes. If it doesn't, the specification is diverging — and that non-convergence is the most important signal the process can produce.
Pass 1 — Extraction
Write the domain specification in DDD building blocks: bounded contexts, aggregates, commands, events, invariants, policies, and a glossary of ubiquitous language. Don't aim for perfection — the point is extraction. Run the gap report. Typical result: 15–35 gaps surface. Structural errors, missing invariants, heuristic violations, undocumented decisions. Resolve each one with rationale.
Pass 2 — Resolution
Update the specification with your resolutions and run the gap report again. This is where real architectural decisions get made — boundary placements, aggregate decompositions, scope tradeoffs. Typical result: 5–10 gaps remain. The hard questions are now visible and documented.
Pass 3 — Convergence
Final pass. Residual tradeoffs are resolved or consciously accepted. Deferred items get explicit scope notes. Typical result: 0–3 gaps. When zero remain, every question the model raised has been answered. The specification is implementation-ready.
Typical trajectory across 8 production domains: 18 → 5 → 0 gaps over three passes. Pass 1 takes 2–4 hours, Pass 2 takes 1–2 hours, Pass 3 takes 30–60 minutes. Simple domains may converge in two passes. Complex domains with regulatory concerns or event-sourced architectures may need four or five.
What it catches
Every gap falls into one of four categories. Each category targets a different kind of incompleteness in the domain specification.
Structural Gaps (SG)
Missing or malformed elements. Binary: the element exists or it doesn't. Aggregates without invariants, commands without corresponding events, bounded contexts with no declared relationships, projections consuming events from undeclared upstream contexts.
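Because structural gaps are binary, they are the easiest to check mechanically. A sketch of two of the checks named above, assuming a simple dict representation of the specification (the shape of `spec` is an assumption, not a defined SDD format):

```python
def structural_gaps(spec: dict) -> list[str]:
    """Flag commands with no corresponding event and aggregates with no invariants."""
    gaps = []
    events = set(spec.get("events", []))
    # spec["commands"] maps each command to the events it emits (assumed shape).
    for cmd, produced in spec.get("commands", {}).items():
        if not set(produced) & events:
            gaps.append(f"SG: command '{cmd}' has no corresponding event")
    # spec["aggregates"] maps each aggregate to its list of invariants (assumed shape).
    for agg, invariants in spec.get("aggregates", {}).items():
        if not invariants:
            gaps.append(f"SG: aggregate '{agg}' has no invariants")
    return gaps
```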
Heuristic Gaps (HG)
Pattern violations with measurable thresholds from DDD literature. Aggregate command density exceeding Vernon's 6-command threshold, missing sagas in multi-step cross-aggregate processes, policies chained in sequence, contexts with too many aggregates.
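The command-density heuristic is a threshold comparison, which is what makes it measurable. A sketch of that single check (the function name and input shape are illustrative):

```python
COMMAND_DENSITY_THRESHOLD = 6  # Vernon's command-count heuristic, as cited above

def heuristic_gaps(commands_per_aggregate: dict[str, int]) -> list[str]:
    """Flag aggregates whose command count exceeds the density threshold."""
    return [
        f"HG: aggregate '{agg}' has {n} commands (> {COMMAND_DENSITY_THRESHOLD})"
        for agg, n in commands_per_aggregate.items()
        if n > COMMAND_DENSITY_THRESHOLD
    ]
```

Like every gap, a hit here is a signal to investigate, not an order to decompose the aggregate.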
Language Gaps (LG)
Ubiquitous language inconsistencies. The same term meaning different things in different contexts without a declared homonym. Implicit concepts referenced in multiple places but never formally named. Glossary entries missing for terms used in commands or events.
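The undeclared-homonym check compares glossary definitions across contexts. A minimal sketch, assuming each context carries a term-to-definition glossary (the data shapes are assumptions for illustration):

```python
def language_gaps(glossaries: dict[str, dict[str, str]],
                  declared_homonyms: set[str]) -> list[str]:
    """Flag terms defined differently in two contexts without a declared homonym."""
    seen: dict[str, tuple[str, str]] = {}  # term -> (first context, definition)
    gaps = []
    for ctx, glossary in glossaries.items():
        for term, definition in glossary.items():
            if (term in seen and seen[term][1] != definition
                    and term not in declared_homonyms):
                gaps.append(
                    f"LG: '{term}' means different things in "
                    f"'{seen[term][0]}' and '{ctx}'"
                )
            seen.setdefault(term, (ctx, definition))
    return gaps
```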
Decision Gaps (DG)
Undocumented or deferred architectural choices. Bounded context boundaries drawable in two reasonable ways with no documented rationale. Aggregate decompositions with undocumented tradeoffs. Scope decisions that exist only in someone's head. "It seemed right" is not a rationale.
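Closing a decision gap means recording the choice with its rationale. A sketch of what a resolution-log entry might enforce; the record shape is an assumption, but the rule it encodes is the one above: a choice without rationale stays a gap.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    question: str               # the architectural choice that was open
    options: list[str]          # the reasonable alternatives considered
    chosen: str
    rationale: str              # "it seemed right" is not a rationale

    def __post_init__(self):
        # Refuse to log a decision that carries no rationale.
        if not self.rationale.strip():
            raise ValueError(f"DG: '{self.question}' resolved without rationale")
```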
Why this matters
SDD replaces "this doesn't feel right" with specific, actionable gaps, each tied to a category and a recorded resolution. It works for solo practitioners without requiring a team, but scales to teams, where the specification becomes a shared artifact for review.
Every pass creates an artifact trail — specifications, gap reports, and resolution logs — that captures not just what you decided but why you decided it. Two weeks later, when you've forgotten the reasoning behind a boundary placement, the resolution log remembers.
The templates are structured enough for AI assistants to participate in gap identification and resolution drafting. The architect retains decision authority — AI surfaces signals, the architect investigates and decides. This was part of the original motivation: creating a methodology that works with AI as a modeling partner, not a replacement.