Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way (I hope there will be more in the future), and those carry a similar "Human Written" badge. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking and unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I did that manually too? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or maybe create this website manually, the old-fashioned way, just for you? While working a paid job, for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate it if you left your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. And I would ban your access if I knew how. Thank you for your attention to this matter.

Changelog v3.13.0 (2026-04-24)


Full Changelog

New: Auth Schemes (Named Additional Authentication Schemes)

Auth:Schemes is a named-dict section that registers additional ASP.NET Core authentication schemes alongside the main one. Each entry is a fully-fledged scheme of any of the three supported types — Cookies, BearerToken, or Jwt — with its own options. A login function selects which scheme to use by returning the scheme's name in its scheme column.

Use cases this unlocks:

  • Short-lived sensitive sessions for admin or payment flows (Cookies scheme with shorter CookieValid + CookieMultiSessions: false).
  • Per-scope JWT signing keys so a key leak has limited blast radius (separate JwtSecret per Jwt scheme).
  • Multiple bearer-token APIs with different expirations and refresh paths.
  • Single-session cookies for areas where parallel logins must be disallowed, alongside a normal long-lived session.
jsonc
"Auth": {
  "CookieAuth": true,
  "CookieValid": "14 days",
  "JwtAuth": true,
  "JwtSecret": "...root-secret-32+chars...",
  "Schemes": {
    "short_session": {
      "Type": "Cookies",
      "Enabled": true,
      "CookieValid": "1 hour",
      "CookieMultiSessions": false
    },
    "api_token": {
      "Type": "BearerToken",
      "Enabled": true,
      "BearerTokenExpire": "30 minutes",
      "BearerTokenRefreshPath": "/api/api-token/refresh"
    },
    "admin_jwt": {
      "Type": "Jwt",
      "Enabled": true,
      "JwtSecret": "...separate-admin-secret-32+chars...",
      "JwtExpire": "5 minutes",
      "JwtRefreshPath": "/api/admin-jwt/refresh"
    }
  }
}
sql
-- Standard login: returns 'Cookies' → 14-day persistent cookie
create function login(_user text, _pass text)
returns table (scheme text, name_identifier text, name text)
language sql security definer as $$
  select 'Cookies' as scheme, user_id::text, username from users where ...
$$;

-- Sensitive-area login: returns 'short_session' → 1-hour session-only cookie
create function admin_login(_user text, _pass text)
returns table (scheme text, name_identifier text, name text)
language sql security definer as $$
  select 'short_session' as scheme, user_id::text, username from users where ...
$$;

-- Admin JWT login: returns 'admin_jwt' → 5-minute JWT signed with the admin-only secret
create function admin_jwt_login(_user text, _pass text)
returns table (scheme text, name_identifier text, name text)
language sql security definer as $$
  select 'admin_jwt' as scheme, user_id::text, username from users where ...
$$;

Per-type override fields:

  • Cookies: CookieValid, CookieName, CookiePath, CookieDomain, CookieMultiSessions, CookieHttpOnly
  • BearerToken: BearerTokenExpire, BearerTokenRefreshPath
  • Jwt: JwtExpire, JwtRefreshExpire, JwtSecret, JwtIssuer, JwtAudience, JwtClockSkew, JwtRefreshPath, JwtValidateIssuer, JwtValidateAudience, JwtValidateLifetime, JwtValidateIssuerSigningKey

Common fields: Type (required, case-insensitive), Enabled (default true).

Inheritance. A scheme that overrides only one or two fields reuses everything else from the root Auth section, so blocks stay small. Setting CookieMultiSessions: false is the typical "single-session" override — the cookie's Max-Age becomes null (browser-session-only) while ExpireTimeSpan still bounds server-side validity. JWT schemes inherit JwtSecret from the root section if not set explicitly, so a per-scheme block can be just a shorter expiration.
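
As a sketch of how small an inheriting block can be (assuming the root Auth section defines JwtSecret, as in the example above), a Jwt scheme that only shortens the expiration could look like this; the scheme name is illustrative:

```jsonc
"Schemes": {
  "quick_jwt": {
    "Type": "Jwt",
    "JwtExpire": "2 minutes"
    // JwtSecret, issuer, audience, validation flags, etc.
    // are all inherited from the root Auth section
  }
}
```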

Validation at startup (fail-fast).

  • Scheme name must not collide with the main scheme names (CookieAuthScheme, BearerTokenAuthScheme, JwtAuthScheme).
  • Type must be one of Cookies, BearerToken, Jwt (case-insensitive). Missing or unsupported types throw with a clear message.
  • Explicit CookieName values must be distinct across all cookie schemes. When unset, ASP.NET's per-scheme .AspNetCore.<scheme> default automatically differs and is excluded from collision tracking.
  • Refresh paths (BearerTokenRefreshPath / JwtRefreshPath) must be unique across the main scheme and every scheme that defines one — two app.Use middlewares listening on the same path would race.
  • Jwt schemes require a secret either on the scheme or on the root section; JwtSecret must be ≥32 chars for HS256.
  • Invalid interval strings (CookieValid, BearerTokenExpire, JwtExpire, JwtRefreshExpire, JwtClockSkew) throw with the offending path and value.

Refresh middleware per scheme. Each BearerToken/Jwt scheme that declares a refresh path gets its own app.Use middleware listening on that path, with that scheme's tokens validated under that scheme's options. The main-scheme refresh middleware (root BearerTokenRefreshPath / JwtRefreshPath) continues to work for the main scheme.

Logout. The existing logout pipeline accepts a list of scheme names from the logout function's result columns and signs out each — additional schemes work without changes. To clear both main and additional cookies in one logout, return both scheme names from the function.
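
For example, a logout routine could clear both the main cookie and the short_session cookie from the example above. The exact result shape here is an assumption (one scheme name per row in a scheme column, mirroring the login examples), not a documented contract:

```sql
-- hypothetical sketch: sign out of the main scheme and one additional scheme
create function logout()
returns table (scheme text)
language sql security definer as $$
  select unnest(array['Cookies', 'short_session'])
$$;
```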

Breaking: legacy auth time-integer fields removed

The four integer-based time fields under Auth are removed in 3.13.0. Use the interval-notation equivalents instead:

Removed (3.12 and earlier) → use instead (3.13.0+):

  • Auth:CookieValidDays: 14 → Auth:CookieValid: "14 days"
  • Auth:BearerTokenExpireHours: 1 → Auth:BearerTokenExpire: "1 hour"
  • Auth:JwtExpireMinutes: 60 → Auth:JwtExpire: "60 minutes"
  • Auth:JwtRefreshExpireDays: 7 → Auth:JwtRefreshExpire: "7 days"

The new fields accept Postgres-interval syntax ("14 days", "12 hours", "30 minutes", "45 seconds", etc.) — finer-grained durations than the legacy integers permitted.

If you upgrade with any of the four removed fields still in your config, startup will fail with a clear migration message naming the offending field, the replacement field, and an example interval string. Failing fast is deliberate — silently ignoring the legacy field would mean an "I configured 30-day cookies" intent silently flips to the new field's default of 14 days, which would be very surprising.
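
In configuration terms, the migration is mechanical; a sketch covering two of the four fields:

```jsonc
// Before (3.12 and earlier):
"Auth": {
  "CookieValidDays": 14,
  "JwtExpireMinutes": 60
}

// After (3.13.0+):
"Auth": {
  "CookieValid": "14 days",
  "JwtExpire": "60 minutes"
}
```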

New: interval notation for auth time fields

Each of the four time-window settings under Auth (cookie validity, bearer-token expiration, JWT access-token expiration, JWT refresh-token expiration) is expressed as a Postgres interval string:

jsonc
"Auth": {
  "CookieAuth": true,
  "CookieValid": "12 hours",
  "JwtAuth": true,
  "JwtExpire": "5 minutes",
  "JwtRefreshExpire": "1 day"
}

Setting any of these to null falls back to the framework default (14 days / 1 hour / 60 minutes / 7 days respectively). Invalid interval values fail at startup with a clear error, rather than silently falling back. The shipped appsettings.json includes the explicit defaults so users see exactly what they're getting.

Breaking: RateLimiterOptions:Policies is now a dict, not an array

This section was previously an array of objects, each with an explicit "Name" property. It is now an object keyed by policy name, matching the existing ValidationOptions:Rules and the new CacheOptions:Profiles shape. Migration is mechanical:

jsonc
// Before (3.12 and earlier):
"Policies": [
  { "Name": "fixed",   "Type": "FixedWindow",  "Enabled": true,  "PermitLimit": 100, ... },
  { "Name": "sliding", "Type": "SlidingWindow", "Enabled": false, ... }
]

// After (3.13.0):
"Policies": {
  "fixed":   { "Type": "FixedWindow",  "Enabled": true,  "PermitLimit": 100, ... },
  "sliding": { "Type": "SlidingWindow", "Enabled": false, ... }
}

Move each policy's Name value to be the JSON key, then drop the Name field. No other field changes; runtime behavior is identical.

If you upgrade with the old array form still in your config, startup will fail with a clear InvalidOperationException telling you to migrate. (Failing fast is deliberate — silently registering policies under names like "0" and "1" would have made endpoint annotations referencing them stop matching, leading to silent loss of rate limiting.)

New: Per-User Rate Limiting (Partition on a policy)

Rate-limiter policies can now be partitioned at request time, so each request gets its own bucket based on a value derived from HttpContext (a claim, an IP, a header, or a static fallback). The classic use case is per-user throttling: each authenticated user gets their own quota instead of all users sharing one global bucket.

jsonc
"RateLimiterOptions": {
  "Enabled": true,
  "Policies": {
    "per_user": {
      "Type": "FixedWindow",
      "Enabled": true,
      "PermitLimit": 100,
      "WindowSeconds": 60,
      "Partition": {
        "Sources": [
          { "Type": "Claim", "Name": "name_identifier" },
          { "Type": "IpAddress" },
          { "Type": "Static", "Value": "anonymous" }
        ]
      }
    },
    "throttle_anon_only": {
      "Type": "FixedWindow",
      "Enabled": true,
      "PermitLimit": 10,
      "WindowSeconds": 60,
      "Partition": {
        "BypassAuthenticated": true,
        "Sources": [{ "Type": "IpAddress" }]
      }
    }
  }
}

Partition fields:

  • Sources — ordered list. Walked top-to-bottom at request time; the first source returning a non-empty value wins. Each source has a Type:

    • Claim — reads HttpContext.User.FindFirst(Name).Value. Name is required (the claim type, e.g., "name_identifier").
    • IpAddress — reads the client IP via HttpRequest.GetClientIpAddress(), which honors X-Forwarded-For / X-Real-IP ahead of Connection.RemoteIpAddress. No Name needed.
    • Header — reads HttpContext.Request.Headers[Name]. Name is required.
    • Static — always returns the configured Value. Useful as a terminal fallback (e.g., everyone unmatched shares the "anonymous" bucket).

    If no source resolves, partition resolution falls through to the literal key "unpartitioned" so the policy still rate-limits coherently.

  • BypassAuthenticated (bool, default false) — when true, signed-in users (HttpContext.User.Identity.IsAuthenticated) skip the limiter entirely. Evaluated before Sources, so use this for "throttle anonymous only" patterns. Authenticated users get an unlimited bucket; anonymous users still hit the partitioned limiter.

Behavior is unchanged for policies without a Partition block. Each non-partitioned policy still uses a single global bucket, exactly as in 3.12 and earlier.

Each Sources entry is validated at startup — invalid entries (e.g., Claim without Name, unknown Type) are logged at Warning and skipped. If a Partition block has no usable sources and BypassAuthenticated is false, the partition is dropped (with a Warning) and the policy reverts to a single global bucket.
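
The Header source type follows the same pattern as the sources shown above; a sketch that buckets requests by an API key header, falling back to a shared bucket when the header is absent (the header name, policy name, and limits are illustrative):

```jsonc
"per_api_key": {
  "Type": "FixedWindow",
  "Enabled": true,
  "PermitLimit": 1000,
  "WindowSeconds": 60,
  "Partition": {
    "Sources": [
      { "Type": "Header", "Name": "X-Api-Key" },
      { "Type": "Static", "Value": "no-key" }
    ]
  }
}
```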

New: Caching Profiles (CacheOptions.Profiles + @cache_profile annotation)

Named cache profiles allow you to maintain multiple distinct caching policies in one application — different backends, expirations, key shapes, or bypass conditions — and let endpoints opt into them via a single comment annotation.

jsonc
"CacheOptions": {
  "Enabled": true,
  "Type": "Memory",
  "Profiles": {
    "fast_memory": {
      "Enabled": false,
      "Type": "Memory",
      "Expiration": "30 seconds",
      "Parameters": ["user_id"]
    },
    "shared_redis": {
      "Enabled": false,
      "Type": "Redis",
      "Expiration": "1 hour"
    },
    "date_range_hybrid": {
      "Enabled": false,
      "Type": "Hybrid",
      "Expiration": "5 minutes",
      "Parameters": ["from", "to"],
      "When": [
        { "Parameter": "to", "Value": null, "Then": "skip" }
      ]
    }
  }
}
sql
comment on function get_orders("from" text, "to" text) is '
HTTP GET
@cache_profile date_range_hybrid
';

Profile fields:

  • Enabled (bool, default false) — disabled profiles are skipped at startup; flip to true to activate.

  • Type — Memory, Redis, or Hybrid. Backends are pooled: all profiles of the same type share one instance (one Memory cache, one Redis connection, one HybridCache singleton). A backend type is only instantiated if the root configuration or some enabled profile uses it.

  • Expiration — default expiration in PostgreSQL interval format. Used when the endpoint has no @cache_expires annotation.

  • Parameters — default cache-key parameter list:

    • null (or property omitted): use all routine parameters.
    • [] (empty array): URL-only cache (one entry per endpoint, regardless of inputs).
    • ["x", "y"]: only those named parameters as the key.

    The endpoint's @cached p1, p2 annotation overrides this.

  • When — list of conditional rules. Each rule has:

    • Parameter — the routine parameter name to inspect.
    • Value — match condition. Scalar (single match) or array (OR over the entries). JSON null matches .NET null/DBNull.Value (it does not match the empty string). All other values are stringified and compared case-insensitively.
    • Then — the literal string "skip" to bypass the cache for that request, OR a PostgreSQL interval (e.g. "30 seconds") to override the entry's TTL when writing.

    Rules evaluate in declaration order; first match wins. No match → fall through to the profile's Expiration.

    This unlocks scenarios that pure skip-on-condition couldn't express:

    • Skip-on-null: [{ "Parameter": "to", "Value": null, "Then": "skip" }]
    • Tiered TTL: [{ "Parameter": "tier", "Value": "free", "Then": "5 minutes" }, { "Parameter": "tier", "Value": "pro", "Then": "1 hour" }]
    • Status-aware caching: [{ "Parameter": "status", "Value": ["draft", null], "Then": "skip" }, { "Parameter": "status", "Value": "published", "Then": "1 hour" }]

    Validation: a rule whose Parameter is not in the resolved cache-key parameter list (Parameters or the endpoint's @cached) is dropped at startup with a Warning. This prevents the surprising case where two requests with different rule-matched values share the same cache entry.

Annotation: @cache_profile <name> selects a profile. It implies caching even without a separate @cached annotation. The existing @cached p1, p2 (overrides profile params) and @cache_expires <interval> (overrides profile expiration) annotations continue to work and take precedence over the profile's defaults.
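
Putting the precedence rules together, one comment can select a profile and then override both its key parameters and its expiration (the routine and parameter names here are illustrative):

```sql
comment on function get_report(_tenant text, _day date) is '
HTTP GET
@cache_profile shared_redis
@cached _tenant
@cache_expires 10 minutes
';
```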

Misconfiguration is loud at startup. Unknown profile names referenced by @cache_profile cause startup to fail with a single InvalidOperationException listing every unresolved name and the endpoints that referenced each — so typos surface immediately rather than silently disabling caching at runtime. Profiles that are registered but never referenced are logged at Information level. Bad Type or Expiration values warn and skip the profile. Empty or whitespace-only profile names are rejected.

Cache key isolation. Entries written under a profile are prefixed with the profile name, so two profiles sharing the same backend (e.g., two Memory profiles) cannot collide on the same routine + parameters. Endpoints without a profile have no prefix — existing cache entries stay wire-compatible across the upgrade.

The default appsettings.json ships with three disabled example profiles covering each Type and demonstrating a When rule. Flip "Enabled": true on the one(s) you need.

Never-expiring (infinite) cache entries

There is no explicit "forever" or "never" literal — instead, omit the Expiration field to get never-expiring entries. This applies at every level:

  • Endpoint without @cache_expires → entry never expires (the pre-3.13 behavior, unchanged).
  • Profile without Expiration field → entries written under that profile never expire.
  • Both @cache_expires and profile Expiration set → annotation wins (the explicit interval is used).

If you need a mix in one app — some profiles with TTL, others never-expiring — define a dedicated profile with no Expiration:

jsonc
"Profiles": {
  "static_lookup_data": {
    "Enabled": true,
    "Type": "Memory"
    // No Expiration → entries never expire (suitable for ISO codes, taxonomies, etc.)
  },
  "session_data": {
    "Enabled": true,
    "Type": "Redis",
    "Expiration": "1 hour"
  }
}

Endpoints opt into the appropriate profile via @cache_profile. This pattern handles the common cases (static reference data, immutable historical content) without needing a separate "force never expire" override.
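
Opting an endpoint into the never-expiring profile above is then a single annotation (the routine name is illustrative):

```sql
comment on function get_country_codes() is '
HTTP GET
@cache_profile static_lookup_data
';
```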

New: WrapInTransaction Option (Connection Pooler Compatibility)

When set to true, every request is wrapped in an explicit BEGIN ... COMMIT, and all set_config calls switch from session-scoped (is_local=false) to transaction-local (is_local=true).

This is required for connection poolers in transaction mode — including PgBouncer transaction-pool, AWS RDS Proxy in transaction mode, and Supabase Pooler. Previously, set_config(name, value, false) would set the GUC at the session level on the underlying PostgreSQL backend. With a transaction-mode pooler, the same backend is reused for unrelated client requests, which means session-scoped GUCs from one request could be visible to the next. With WrapInTransaction = true, GUCs are scoped to the request transaction and discarded on COMMIT.
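
The scoping difference is plain PostgreSQL behavior, observable outside this library:

```sql
-- session-scoped: survives COMMIT and stays on the backend connection
select set_config('myapp.user_id', '42', false);

-- transaction-local: discarded automatically when the transaction ends
begin;
select set_config('myapp.user_id', '99', true);
commit;

select current_setting('myapp.user_id', true);  -- back to '42'
```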

The default remains false to preserve existing behavior; it is safe to leave off when using Npgsql's native pool only (which issues DISCARD ALL on connection return).

jsonc
{
  "NpgsqlRest": {
    "WrapInTransaction": true
  }
}

New: BeforeRoutineCommands Option

A new option allowing arbitrary SQL commands to run after any context is set but before the main routine call. They run in the same batch as the context set_config calls, so there is no extra network round-trip.

Each entry can be either a raw SQL string (no parameters) or an object with Sql and Parameters. Each parameter has a Source (Claim, RequestHeader, or IpAddress) and an optional Name (claim type or header name). Parameter values are bound at request time from HttpContext — claim and header values are passed as parameterized SQL inputs (no string interpolation, no injection risk).

The most useful pattern is multi-tenant search_path setup driven by a JWT/cookie claim:

jsonc
{
  "NpgsqlRest": {
    "WrapInTransaction": true,
    "BeforeRoutineCommands": [
      "select set_config('app.request_time', clock_timestamp()::text, true)",
      {
        "Sql": "select set_config('search_path', $1, true)",
        "Parameters": [{ "Source": "Claim", "Name": "tenant_id" }]
      }
    ]
  }
}

Per-request execution order with this config:

  1. BEGIN
  2. Each BeforeRoutineCommand is added as a NpgsqlBatchCommand (with parameters bound from claims/headers/IP) and dispatched in a single batch.
  3. The main routine call.
  4. COMMIT.

Steps 1–3 share a single network round-trip.
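
With the config above, the per-request batch is roughly equivalent to the following (the parameter numbering, the context GUC name, and the routine call shape are illustrative):

```sql
begin;
-- framework context set_config calls (claims, headers, etc.)
select set_config('request.user_id', $1, true);
-- BeforeRoutineCommands, in declaration order
select set_config('app.request_time', clock_timestamp()::text, true);
select set_config('search_path', $2, true);
-- the main routine call
select * from my_routine($3);
commit;
```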

Fix: 400 Bad Request responses are no longer silent in logs

Endpoints that returned HTTP 400 were not being logged at all, making client-error problems invisible in production. Two independent paths produced silent 400s:

  1. Database exceptions mapped to 400 (P0001 raise exception, P0004 assert_failure, or any user-configured ErrorHandlingOptions mapping to 400). The exception handler in NpgsqlRestEndpoint explicitly skipped logging for status 400.
  2. Validation rule failures (ValidationOptions.Rules → 400). These were logged at Debug level, which is below the default minimum log level (Information), so they never appeared in production logs.

Fix: 400s are now logged at Warning level — visible by default but not raised to Error, since 400 is a client-side problem rather than a server fault. Genuine server errors (500, etc.) continue to be logged at Error with full stack traces.

Docker Images: Ubuntu 26.04 LTS Base

The native AOT Docker images (latest, latest-arm, latest-bun) now build on Ubuntu 26.04 "Resolute Wolf" LTS, up from Ubuntu 25.04 (a 9-month interim release that is reaching end of support). This extends the security-update window for the published images to the 5-year LTS support period and picks up a newer stack (Linux 7.0 kernel, newer OpenSSL, cgroup v2). No changes are required for consumers of the images — runtime dependencies (libssl3, libgssapi-krb5-2, ca-certificates) resolve under the same package names on 26.04.

NuGet Package Upgrades

NpgsqlRest (main library):

  • Microsoft.SourceLink.GitHub 10.0.201 → 10.0.203 (build-time only)

NpgsqlRestClient (client application):

  • Microsoft.AspNetCore.Authentication.JwtBearer 10.0.5 → 10.0.7
  • Microsoft.Extensions.Caching.Hybrid 10.4.0 → 10.5.0
  • Microsoft.Extensions.Caching.StackExchangeRedis 10.0.5 → 10.0.7
  • StackExchange.Redis 2.12.8 → 2.12.14
