Written with Claude

Case Study: 74 Endpoints, Zero Backend Code

May 2026 · Case Study · Architecture · NpgsqlRest


Most of the blog posts on this site argue why you might want to build an application with NpgsqlRest. This one is different: it reports what happened when a team actually did, end-to-end, on a real product. The application is anonymized — it's a finance/visualization platform with time-series charts, expression-based metric computation, and per-user dashboards — but the numbers are unmodified, taken directly from the repository on the day this post was written.

It is, as far as we know, the first production-grade application built entirely on NpgsqlRest. No C#. No Python. No Node backend. The HTTP layer is the binary. Everything else lives in PostgreSQL.

What's in the repository

| Layer | Files | Lines of code |
| --- | --- | --- |
| Public API SQL (auto-exposed as HTTP) | 80 | 4,889 |
| System / migrations / helpers SQL | 53 | ~2,000 |
| pgTAP-style SQL tests | 110 | 4,756 |
| Auto-generated TypeScript API client | 24 | 5,679 |
| Hand-written frontend TypeScript | ~36 | ~1,700 |
| Svelte components | 43 | 14,419 |
| Hand-written backend host code (C#/Python/Node) | 0 | 0 |
| Backend host config (appsettings.json) | 1 | 217 |

Approximately 74 HTTP endpoints are exposed. There are no controllers, no DTO classes, no repository layer, no hand-written API client. The single appsettings.json replaces what would otherwise be a Program.cs plus DI registration plus route mapping plus auth pipeline configuration.

The application has shipped to production at build number 2.9.1517 — fifteen hundred production builds with no backend host code at any point in the project's history.

What NpgsqlRest is doing for them

Reading the repository, the workload NpgsqlRest absorbs is broader than most users probably realize from the docs:

  • All ~74 endpoints are auto-exposed from annotated PostgreSQL functions. HTTP verb, route, authorization, response shape, and per-module grouping are declared in SQL comments via HTTP POST / HTTP GET, authorize, tsclient_module = .... There is no route table.
  • The entire TypeScript client (~5,679 lines across 12 modules) is regenerated from the live database catalog on every dev-mode startup. The frontend imports from "./timeApi.ts", from "./holidaysApi.ts", etc. — files no human ever touches.
  • The full WebAuthn / passkey ceremony is implemented as nine named SQL commands wired up in appsettings.json. Challenge generation, attestation, authentication, sign-count validation — all in PL/pgSQL. The host process knows the protocol; the application logic lives in functions.
  • Data-protection key storage (the keys ASP.NET uses to encrypt cookies and other transient state) is two SQL commands, not a custom IXmlRepository implementation in C#.
  • Admin and observability endpoints (/stats/routines, /stats/tables, /stats/indexes, /stats/activity) are built in. The team gets per-routine performance stats, table stats, and active-query visibility without writing any of it.
  • Static file serving with claims templating into index.html injects user identity into the SPA shell. No separate Node server in front of the app.
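
For readers who haven't seen the annotation style, here is a minimal invented example. The function, table, and module name are hypothetical; the comment keywords (HTTP verb, authorize, tsclient_module) are the ones the bullets above describe:

```sql
-- Hypothetical endpoint: returns a metric's time series for the last N days.
create or replace function get_metric_series(p_metric text, p_days int default 30)
returns table (ts timestamptz, value numeric)
language sql stable as $$
    select mp.ts, mp.value
    from metric_points mp
    where mp.metric = p_metric
      and mp.ts >= now() - make_interval(days => p_days)
    order by mp.ts
$$;

-- The comment carries the endpoint definition: verb, auth requirement, client module.
comment on function get_metric_series(text, int) is '
HTTP GET
authorize
tsclient_module = metrics';
```

Everything HTTP-specific lives in that comment; the function body stays plain SQL.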

The blast radius of "stop using NpgsqlRest" is therefore much wider than just losing the routing layer. It would mean replacing roughly half a dozen pieces that are individually load-bearing.

The comparison: equivalent build in ASP.NET Core

The realistic alternative for a .NET shop building this application would be ASP.NET Core with Minimal APIs or controllers, plus Dapper or Npgsql for data access (an ORM doesn't fit the workload — most of the endpoints return composite records or aggregated time-series data, which fights ORMs). A typical per-endpoint cost in that stack:

  • Request DTO: 5–15 LOC
  • Response DTO: 5–15 LOC
  • Controller / Minimal API handler with validation, model binding, error mapping: 10–25 LOC
  • Repository or data-access method: 10–30 LOC
  • Service-layer method (often present even when not strictly necessary): 5–15 LOC
  • DI registration: 1–2 LOC

Conservatively: 40–80 lines of C# per endpoint, spread across 3–5 files. For 74 endpoints, that is ~3,000–6,000 LOC before any application-specific concern is addressed.

On top of the per-endpoint cost:

| Concern | NpgsqlRest | Hand-rolled ASP.NET Core (LOC est.) |
| --- | --- | --- |
| Program.cs / DI / routing | config | 150–300 |
| Cookie auth + claims transformation | config | 50–150 |
| WebAuthn / passkey ceremonies | 9 named SQL commands | 300–600 (Fido2.NET-Core plumbing) |
| Data-protection keys → PostgreSQL | 2 SQL commands | 50–100 (custom IXmlRepository) |
| TypeScript client generation + drift management | regenerated, free | NSwag / Kiota config + ongoing maintenance |
| Stats / activity / index admin endpoints | built-in | hand-rolled |
| Retry, forwarded headers, antiforgery, compression | config | 50–150 |

Realistic ASP.NET Core equivalent total: 5,000–9,000 lines of C# — plus a .csproj, a layered project structure, and a CI step to keep the generated TypeScript client in sync with the deployed API. None of this exists in the case-study repository.

A FastAPI / Python equivalent comes out leaner — Pydantic models plus path operations are denser than C# DTOs plus controllers — but the WebAuthn plumbing (py_webauthn) and data-protection equivalents still cost real lines. Estimated total: 3,500–6,000 LOC of Python.

How it scores on the four dimensions that matter

Productivity

The headline number isn't lines of code — it's that impedance mismatch is gone entirely. Both kinds.

The classic object–relational impedance mismatch — objects on one side, rows on the other, an ORM negotiating between them — doesn't exist here because there's no object layer. PostgreSQL composite types and JSON are the data model, end to end. The frontend receives the same shapes the database returns.

The less-discussed function impedance mismatch — the chain of translation layers between an HTTP request and the SQL that ultimately runs (controller → service → repository → ORM → SQL) — doesn't exist either. The SQL function is the endpoint. The generated TypeScript wrapper calls it directly. There are no intermediate functions whose signatures can drift from the layer below them, because there are no intermediate functions.

What this means in practice is that adding a column in a typical ASP.NET Core + EF Core path is: write a migration, update the entity, update the DTO, update the mapping, update the endpoint, regenerate the TypeScript client, fix any drift the regeneration surfaces. In this project, it is: edit the SQL function, restart the dev server. The TypeScript client rewrites itself; the type checker fails the build on every line of frontend code that no longer matches.

Adding a new endpoint is: write one annotated SQL function. Restart. Done. There is no parallel set of files in the host language to keep in sync, because there is no host language.
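
To make that loop concrete, here is a hedged sketch (function, table, and column names invented): adding a created_at field to an endpoint's response is a one-file edit, because the function is the endpoint.

```sql
-- Before this edit the function returned (id, title); adding created_at to the
-- return shape and the select list is the entire backend change.
create or replace function get_dashboards(p_user_id int)
returns table (id int, title text, created_at timestamptz)
language sql stable as $$
    select d.id, d.title, d.created_at
    from dashboards d
    where d.owner_id = p_user_id
    order by d.created_at desc
$$;
```

On the next dev restart the generated client's result type picks up the new field (the generator camel-cases identifiers on this project, as in get_time_window_ranges → getTimeWindowRanges), and any frontend code still holding the old shape fails type-checking.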

The empirical signal that this works at scale is build number 2.9.1517. A development loop that is genuinely faster is the only thing that gets a small team to that build count.

Lines of code saved

Net of the SQL the team would have written anyway (the business logic has to live somewhere), our estimate is 3,000–6,000 lines of host-language code eliminated, plus the 5,679-line generated TypeScript client which is purely a free byproduct.

A caveat worth being honest about: SQL functions tend to be denser than C# or Python, so a one-to-one LOC swap understates the effective savings. The eliminated code is overwhelmingly the low-value boilerplate — DTOs, mappers, route attributes, repository methods that are one query each — not the parts of an application where careful design pays off.

Performance

Strong, structurally. One database round-trip per request. No ORM materialization, no entity-graph allocation in the API process, no DTO mapping pass. Stored functions are precompiled in PostgreSQL. The transport layer is Kestrel + Npgsql, which is already at the top tier of TechEmpower-style benchmarks for the underlying primitives.

The performance-tuning work concentrates where it actually pays off: in the database. A recent optimization in this project rewrote one of the heaviest functions (get_time_window_ranges) from PL/pgSQL with temp tables to plain SQL with CTEs and produced a measured 5.6× speedup (1.96 ms → 0.35 ms over 500 calls × 20 mixed windows). That improvement is visible directly in pg_stat_statements. In a conventional stack, the same query is hidden behind ORM-generated SQL and the per-request cost is split across object materialization, JSON serialization, and HTTP middleware — all of which obscure where the actual time goes.
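
The real function's source isn't public, so the following is only a sketch of the before/after pattern (names, signatures, and logic invented): a PL/pgSQL body that materializes a temp table, replaced by a single plain-SQL function built on a CTE, which the planner can inline.

```sql
-- BEFORE (pattern, not the real code): PL/pgSQL with a temp table per call.
create or replace function window_ranges_old(p_from date, p_to date)
returns table (day date, is_open boolean)
language plpgsql as $$
begin
    create temp table tmp_days on commit drop as
        select d::date as day
        from generate_series(p_from, p_to, interval '1 day') d;
    return query
        select t.day, extract(isodow from t.day) < 6 from tmp_days t;
end $$;

-- AFTER (pattern): plain SQL with a CTE; no temp-table churn, inlinable.
create or replace function window_ranges_new(p_from date, p_to date)
returns table (day date, is_open boolean)
language sql stable as $$
    with days as (
        select d::date as day
        from generate_series(p_from, p_to, interval '1 day') d
    )
    select day, extract(isodow from day) < 6 from days
$$;
```

The win in this pattern comes from eliminating per-call DDL and letting the planner see through the function body, which is consistent with the kind of speedup reported above.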

That kind of rewrite — a full implementation swap, a different language paradigm inside the function, and two unrelated correctness fixes folded in along the way — is exactly the change that goes badly wrong without a safety net. It didn't here, because the safety net already existed: the 110 SQL test files and 4,756 lines of assertions covered later in this post. The optimization shipped in the same commit as new test coverage for the edge cases the rewrite exposed (degenerate date spans, empty resolved holiday sets, mixed pass-through and per-day windows). The tests run against the real PostgreSQL engine, against the real function being optimized, and finish in seconds. There is no faster correctness gate than that, and without it a 5.6× refactor is the sort of thing teams quietly decide isn't worth the risk. Performance work and test coverage are not two separate concerns in this architecture; they're the same loop.

Overall quality

End-to-end type safety is genuinely better than most hand-rolled stacks. PostgreSQL types map directly to TypeScript types via the generator. There is no DTO layer in the middle that can disagree with either the database or the frontend. Schema drift is impossible by construction: if a column is renamed, the generated client changes shape on the next dev restart, and every consumer fails to compile.

The schema is the single source of truth. There is no ORM mapping that can lie about it.

The test suite — 110 SQL test files totaling 4,756 lines — runs against the real PostgreSQL engine, not a mock or in-memory fake. Every assertion exercises the same code path that production uses. This is harder to do well than a conventional unit-test suite, but the tests that exist are categorically more truthful.

Honest tradeoffs

A case study that doesn't acknowledge tradeoffs is a sales pitch. After working through the obvious objections honestly, two remain:

  • Younger ecosystem. NpgsqlRest is newer and has a smaller community than ASP.NET Core or FastAPI. Fewer Stack Overflow answers, fewer third-party plugins, fewer "battle-tested by a thousand companies" reassurance signals. Mitigated by the fact that PostgreSQL itself — which does most of the heavy lifting in this architecture — is anything but new.
  • SQL fluency is a precondition, not a nice-to-have. Anyone working on the backend has to be comfortable in PL/pgSQL: not just SELECTs, but window functions, CTEs, procedural control flow, and reading EXPLAIN output. AI tooling has closed most of the on-ramp — modern coding assistants write competent PL/pgSQL on demand — but the architecture genuinely rewards teams that invest in SQL skill rather than treating it as a thing the ORM hides.

That's the honest list. A few other objections look like tradeoffs at first but don't survive scrutiny:

  • "Refactoring SQL across a schema lacks IDE support." In practice, the auto-generated TypeScript client gives every backend function a real, IDE-indexed symbol on the frontend (get_time_window_ranges → getTimeWindowRanges). "Find all references" on the generated function locates every cross-stack caller. Modern PostgreSQL IDEs (DataGrip, JetBrains DB tools) also index PL/pgSQL, and the database itself fails function creation if a referenced object is missing. The cross-stack refactoring story is, if anything, better than in conventional architectures.
  • "Logic-heavy SQL is harder to test." The opposite turned out to be true on this project. set constraints all deferred plus a transaction-scoped rollback gives clean isolation with no fixture framework. Putting drop function, create function, and the test in a single .sql file lets you re-run the entire cycle on save — a feedback loop measurably faster than rebuilding a .NET or Python project. Different idiom from xUnit / pytest, but not harder.
  • "Lock-in to NpgsqlRest." The exit path is to write the controller layer you skipped — i.e., to do the 5,000–9,000 LOC of host-language work this case study just argued against. That isn't lock-in in any meaningful sense; it's a choice that's reversible at exactly the price the alternative architecture charges upfront. The SQL stays portable.
  • "Calling external APIs requires a host language." It doesn't. NpgsqlRest's HTTP Types let you declare external REST calls in a PostgreSQL type comment using .http file syntax — function parameters substitute into URLs, headers, and request bodies, and the response is handed to your function as a parameter. The HTTP call is made from the NpgsqlRest tier, not from inside the database, so there's no pgsql-http extension to install and no database connection sitting blocked on a remote service. For full gateway scenarios, the reverse proxy @proxy annotation supports both passthrough and transform modes (response routed through a PG function for caching, enrichment, or transformation). External integration is one of the architecture's stronger stories, not a weak point.
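
The single-file test idiom from the second bullet can be sketched like this. Everything here is invented (it assumes a holidays(day, name) table exists); the mechanics — a transaction, deferred constraints so fixture insert order stops mattering for deferrable FKs, assertions against the real function, and a final rollback — are the ones the bullet describes.

```sql
-- Re-runnable on save: drop, create, seed, assert, roll back.
begin;
set constraints all deferred;   -- deferrable FK checks wait until commit

drop function if exists is_business_day(date);
create function is_business_day(d date) returns boolean
language sql stable as $$
    select extract(isodow from d) < 6
       and not exists (select 1 from holidays h where h.day = d)
$$;

-- Minimal fixture.
insert into holidays (day, name) values ('2026-01-01', 'New Year');

-- Assertions run against the real engine and the real function.
do $$
begin
    if is_business_day('2026-01-01') then
        raise exception 'holiday must not be a business day';
    end if;
    if not is_business_day('2026-01-02') then
        raise exception 'Friday 2026-01-02 must be a business day';
    end if;
end $$;

rollback;   -- nothing persists; the file can run again immediately
```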

What this case study is, and isn't

It isn't a benchmark. It isn't a controlled study. It's a single production application, observed honestly, with the numbers reported as they are.

What it does establish is that the architecture works — not as a demo, not as a "how-to," but as the substrate of a real product that has shipped 1,500+ builds. The combination most people assume can't be done without host code (WebAuthn, data-protection keys, an admin/observability surface, a generated frontend client, structured tests) is all here, in SQL, in production.

If you've been wondering whether a database-first architecture scales past CRUD demos, this is the existence proof.
