Put most code in class libraries so new hosts (API, worker, AI, mobile) plug in without duplicating rules. Controllers should stay thin; fat controllers make reuse and testing expensive.
Business rules and workflows live in class libraries. Entry points are thin shells that connect those libraries to HTTP, queues, schedules, chat, etc.
| Layer | Examples | Role |
|---|---|---|
| Domain | Entities, value objects, domain rules | No dependency on infrastructure frameworks |
| Application | Use cases, validation, orchestration, ports (interfaces) | “What the system does” |
| Infrastructure | EF, HTTP clients, messaging, file storage | Implements application ports |
| Entry points | ASP.NET Web API, worker service, Azure Functions, AI host | DI, routing, HTTP status, wiring — not the heart of the feature |
Mental model: treat class libraries as the product; APIs and workers are cables plugged into that product.
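As a sketch of how the layers in the table might look in code — all names here (`Order`, `IOrderRepository`, `CancelOrderHandler`) are hypothetical, invented for illustration:

```csharp
using System;

// Domain: plain types and rules, no framework references.
public enum OrderStatus { Open, Shipped, Cancelled }

public class Order
{
    public Guid Id { get; init; }
    public OrderStatus Status { get; set; } = OrderStatus.Open;
}

// Application: a port (interface) plus a use case that orchestrates it.
public interface IOrderRepository
{
    Order? Find(Guid id);
    void Save(Order order);
}

public record CancelOrderCommand(Guid OrderId);

public class CancelOrderHandler
{
    private readonly IOrderRepository _orders;
    public CancelOrderHandler(IOrderRepository orders) => _orders = orders;

    public bool Handle(CancelOrderCommand command)
    {
        var order = _orders.Find(command.OrderId);
        if (order is null || order.Status == OrderStatus.Shipped)
            return false; // the invariant lives here, not in a controller
        order.Status = OrderStatus.Cancelled;
        _orders.Save(order);
        return true;
    }
}
```

Infrastructure would supply an `IOrderRepository` backed by EF or an HTTP client; every host then calls the same `CancelOrderHandler`.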
- Entry points multiply; rules should not. Today it is one API; tomorrow a background worker, a new public API, an internal tool, or an AI agent. Each is a new host. Logic trapped in controllers gets copied or forked, and fixes ship in multiple places.
- One place for correctness. Use-case code (validation, authz, transactions, invariants) is hard enough to get right once. Libraries plus unit tests give one behavior everywhere.
- Adding a host is integration work, not archaeology. New project: reference the libraries → register DI → call the same application services the API already uses.
- Teams can evolve surfaces independently while keeping auditing and rules centralized.

Mobile note: clients often call HTTP APIs rather than referencing the same server DLLs (platform, security, trimming). Shared small netstandard models are optional; the server truth still lives in the libraries.
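The "reference libraries → register DI → call the same services" step can be sketched as a composition root for a new worker host on the .NET generic host. This is illustrative only: `IOrderRepository`, `CancelOrderHandler`, `EfOrderRepository`, and `CancelExpiredOrdersWorker` are assumed names, not types from an existing codebase.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Same application service the Web API already uses — no logic is duplicated.
builder.Services.AddScoped<IOrderRepository, EfOrderRepository>(); // infrastructure
builder.Services.AddScoped<CancelOrderHandler>();                  // application
builder.Services.AddHostedService<CancelExpiredOrdersWorker>();    // this host's trigger

builder.Build().Run();
```

Only the trigger (a hosted service instead of an HTTP route) and the wiring differ between hosts; the libraries are untouched.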
Libraries (large) → Entry points (thin)
┌─────────────────────────────────────────────────────────────┐
│ CLASS LIBRARIES — most code / business rules │
│ ┌──────────┐ ┌──────────────┐ ┌─────────────────────┐ │
│ │ Domain │ → │ Application │ ← │ Infrastructure │ │
│ └──────────┘ └──────────────┘ └─────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
↑ ↑
│ │
┌─────────┴────────────────────┴──────────────────────────────┐
│ ENTRY POINTS — thin adapters (routing, DI, HTTP status) │
│ [ Web API ] [ Worker ] [ AI agent ] [ Mobile → API ] │
└──────────────────────────────────────────────────────────────┘
Hosts should depend on Application (with the composition root wiring in Infrastructure) rather than cramming domain logic into controllers.
```mermaid
flowchart TB
    subgraph libs["Class libraries"]
        D[Domain]
        A[Application]
        I[Infrastructure]
        D --> A
        I --> A
    end
    subgraph hosts["Entry points — thin"]
        API[Web API]
        W[Worker]
        AI[AI host]
        Mob[Mobile client]
    end
    API --> A
    W --> A
    AI --> A
    Mob -.->|HTTP| API
```
- Libraries: reusable core, fast unit tests, consistent rules across hosts.
- Hosts: boring glue; new “cable” is cheap when the core already exists.
- “Extract later” sounds fine until duplicate behavior ships in production during the migration.
Why it drifts
- Controllers are the first place things work (routing, binding), so the next `if` lands there.
- Deadlines favor finishing in the action.
- Tutorials often show fat actions for brevity.
Why it hurts
- Reuse: workers and other hosts cannot cleanly call `OrdersController.Post`.
- Tests: proving rules requires HTTP + auth + routing instead of testing plain classes.
- Consistency: two actions can implement “cancel order” slightly differently.
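To make the testing point concrete: a rule kept in a plain class is provable with an ordinary unit test, with no HTTP server, auth, or routing in sight. This is an illustrative sketch with hypothetical names (`Order`, `TryCancel`), using xUnit:

```csharp
using Xunit;

public enum OrderStatus { Open, Shipped, Cancelled }

public class Order
{
    public OrderStatus Status { get; private set; } = OrderStatus.Open;

    public void MarkShipped() => Status = OrderStatus.Shipped;

    // The business rule: shipped orders cannot be cancelled.
    public bool TryCancel()
    {
        if (Status == OrderStatus.Shipped) return false;
        Status = OrderStatus.Cancelled;
        return true;
    }
}

public class OrderTests
{
    [Fact]
    public void Shipped_orders_cannot_be_cancelled()
    {
        var order = new Order();
        order.MarkShipped();

        Assert.False(order.TryCancel());
        Assert.Equal(OrderStatus.Shipped, order.Status);
    }
}
```

The equivalent proof through a fat controller would need a test server, a token, and a route — and would still be slower and flakier.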
Direction
| Controller | Application library |
|---|---|
| Status codes, routes, `[FromBody]` | Validation, rules, transactions |
| Map DTO ↔ command | Orchestration |
| `Ok` / `Problem` | Domain + infrastructure via abstractions |
Practical rule: if it would still be correct without HTTP (same inputs/outputs), it does not belong in the controller long-term.
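Applied to a controller, that rule leaves only HTTP translation behind. A sketch, assuming an application-layer `CancelOrderHandler` and `CancelOrderCommand` (hypothetical names):

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly CancelOrderHandler _cancelOrder;

    public OrdersController(CancelOrderHandler cancelOrder) => _cancelOrder = cancelOrder;

    [HttpPost("{id:guid}/cancel")]
    public IActionResult Cancel(Guid id)
    {
        // HTTP in, command out; the rule itself lives in the application library.
        var cancelled = _cancelOrder.Handle(new CancelOrderCommand(id));
        return cancelled
            ? NoContent()
            : Problem(statusCode: StatusCodes.Status409Conflict);
    }
}
```

A worker or AI host can call the same handler without this controller existing at all — which is the point.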