Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. There are a few blog posts that were written by me manually, the old-fashioned way (I hope there will be more in the future), and those have a similar "Human Written" badge. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking and unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and fake it, to give you the illusion that I also did that manually? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or maybe create this website manually, the old-fashioned way, just for you, while working a paid job for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find some mistakes, inaccuracies or problems, there is a comment section at the bottom of every page, which I also made with the help of AI. And I would very much appreciate it if you left your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. And I would ban your access if I knew how. Thank you for your attention to this matter.

Changelog v3.1.0 (2025-12-13)

Full Changelog

HTTP Types

New feature that enables PostgreSQL functions to make HTTP requests to external APIs by using specially annotated composite types. When a function parameter uses a composite type with an HTTP definition comment, NpgsqlRest automatically invokes the HTTP request and populates the type fields with the response data before executing the function.

Creating an HTTP Type:

sql
-- Create a composite type with response fields
create type weather_api as (
    body text,
    status_code int,
    headers json,
    content_type text,
    success boolean,
    error_message text
);

-- Add HTTP definition as a comment (RFC 7230 format)
comment on type weather_api is 'GET https://api.weather.com/v1/current?city={_city}
Authorization: Bearer {_api_key}
timeout 30s';

Using the HTTP Type in a function:

sql
create function get_weather(
  _city text,
  _api_key text,
  _req weather_api
)
returns json
language plpgsql
as $$
begin
    if (_req).success then
        return (_req).body::json;
    else
        return json_build_object('error', (_req).error_message);
    end if;
end;
$$;

HTTP Definition Format:

The comment on the composite type follows a simplified HTTP message format similar to .http files:

code
METHOD URL [HTTP/version]
Header-Name: Header-Value
...

[request body]

Supported HTTP methods: GET, POST, PUT, PATCH, DELETE
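As an illustrative sketch only (not NpgsqlRest's actual parser), a definition comment of this shape can be split into its method, URL, headers, and optional body; timeout directives (described below) are skipped here:

```python
# Illustrative sketch: split a definition comment of the shape above into
# (method, url, headers, body). Not the library's actual implementation.
def parse_http_definition(comment: str):
    head, _, body = comment.partition("\n\n")   # a blank line separates the body
    lines = [
        ln for ln in head.splitlines()
        if ln.strip() and not ln.lower().lstrip("@").startswith("timeout")
    ]
    method, url, *_ = lines[0].split()          # optional HTTP/version is ignored
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return method, url, headers, (body or None)
```

Applied to the weather example above, this yields method `GET`, the URL with its `{_city}` placeholder still intact, and the `Authorization` header.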

Timeout Directives:

Timeout can be specified before the request line using various formats:

code
timeout 30
timeout 30s
timeout 00:00:30
@timeout 2 minutes
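The directive formats above can be normalized to a single duration. The sketch below is illustrative only (not NpgsqlRest's parser) and assumes a bare number means seconds and the colon form is an HH:MM:SS time span:

```python
import re
from datetime import timedelta

# Illustrative sketch: normalize the timeout directive formats shown above
# into a timedelta. Assumes a bare number means seconds.
UNITS = {"s": 1, "second": 1, "seconds": 1, "m": 60, "minute": 60, "minutes": 60}

def parse_timeout(directive: str) -> timedelta:
    # Strip the optional leading "@" and the "timeout" keyword
    value = re.sub(r"^@?timeout\s+", "", directive.strip(), flags=re.IGNORECASE)
    # "HH:MM:SS" time-span form
    if ":" in value:
        h, m, s = (int(part) for part in value.split(":"))
        return timedelta(hours=h, minutes=m, seconds=s)
    # "<number>[unit]" form, e.g. "30", "30s", "2 minutes"
    match = re.fullmatch(r"(\d+)\s*([a-z]*)", value, flags=re.IGNORECASE)
    if not match:
        raise ValueError(f"unrecognized timeout directive: {directive!r}")
    number, unit = int(match.group(1)), match.group(2).lower()
    return timedelta(seconds=number * UNITS.get(unit or "s", 1))
```

All four example directives above normalize to a concrete duration (the first three to 30 seconds, the last to 2 minutes).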

Response Fields:

The composite type fields are automatically populated based on their names (configurable via HttpClientOptions):

| Field Name | Type | Description |
|---|---|---|
| body | text | Response body content |
| status_code | int or text | HTTP status code (e.g., 200, 404) |
| headers | json | Response headers as JSON object |
| content_type | text | Content-Type header value |
| success | boolean | True for 2xx status codes |
| error_message | text | Error message if request failed |

Placeholder Substitution:

URLs, headers, and request body in the type comment can contain placeholders in the format {parameter_name}. These placeholders are automatically replaced with the values of other function parameters that share the same name.

In the example above, the function get_weather has parameters _city and _api_key. The HTTP type comment contains placeholders {_city} and {_api_key} which are substituted with the actual parameter values when the HTTP request is made:

sql
-- Type comment with placeholders
comment on type weather_api is 'GET https://api.weather.com/v1/current?city={_city}
Authorization: Bearer {_api_key}
timeout 30s';

-- Function with matching parameter names
create function get_weather(
  _city text,        -- Value substitutes {_city} placeholder
  _api_key text,     -- Value substitutes {_api_key} placeholder
  _req weather_api   -- HTTP type parameter (receives response)
)
...

When calling GET /api/get-weather?_city=London&_api_key=secret123, NpgsqlRest will:

  1. Substitute {_city} with London and {_api_key} with secret123
  2. Make the HTTP request to https://api.weather.com/v1/current?city=London with header Authorization: Bearer secret123
  3. Populate the _req parameter fields with the response data
  4. Execute the PostgreSQL function
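Step 1 is a plain token replacement, sketched below for illustration only (NpgsqlRest performs this internally; this is not the library's code):

```python
# Illustrative sketch of the substitution step: replace {name} tokens with
# the values of the same-named function parameters.
def substitute(template: str, params: dict) -> str:
    for name, value in params.items():
        template = template.replace("{" + name + "}", str(value))
    return template

params = {"_city": "London", "_api_key": "secret123"}
url = substitute("https://api.weather.com/v1/current?city={_city}", params)
header = substitute("Bearer {_api_key}", params)
```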

Configuration Options:

Enable HTTP types via NpgsqlRestOptions.HttpClientOptions or in the client configuration:

json
{
  "NpgsqlRest": {
    //
    // HTTP client functionality for annotated composite types.
    // Allows PostgreSQL functions to make HTTP requests by using specially annotated types as parameters.
    //
    "HttpClientOptions": {
      //
      // Enable HTTP client functionality for annotated types.
      //
      "Enabled": false,
      //
      // Default name for the response status code field within annotated types.
      //
      "ResponseStatusCodeField": "status_code",
      //
      // Default name for the response body field within annotated types.
      //
      "ResponseBodyField": "body",
      //
      // Default name for the response headers field within annotated types.
      //
      "ResponseHeadersField": "headers",
      //
      // Default name for the response content type field within annotated types.
      //
      "ResponseContentTypeField": "content_type",
      //
      // Default name for the response success field within annotated types.
      //
      "ResponseSuccessField": "success",
      //
      // Default name for the response error message field within annotated types.
      //
      "ResponseErrorMessageField": "error_message"
    }
  }
}

Routine Caching Improvements

Major improvements to the routine caching system for reliability, correctness, and expanded functionality:

Cache Key Generation Fixes:

  • Fixed potential hash collisions by switching from integer hash codes to string-based cache keys.
  • Added separator character (\x1F) between parameter values to prevent cache key collisions when parameter values concatenate to the same string (e.g., "ab" + "c" vs "a" + "bc" now produce different cache keys).
  • Added distinct null marker (\x00NULL\x00) to differentiate between null values and empty strings in cache keys.
  • Fixed array parameter serialization to properly include all array elements in the cache key with separators.
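The collision-safe key scheme described above can be sketched as follows. This is an illustration of the idea only, not the library's exact key format:

```python
SEP = "\x1f"            # separator between parameter values
NULL = "\x00NULL\x00"   # distinct marker for SQL NULL (vs. empty string)

# Illustrative sketch of the collision-safe cache key scheme described above.
def cache_key(routine: str, values: list) -> str:
    def encode(v):
        if v is None:
            return NULL
        if isinstance(v, (list, tuple)):      # arrays: include every element
            return SEP.join(encode(e) for e in v)
        return str(v)
    return routine + SEP + SEP.join(encode(v) for v in values)
```

With the separator in place, `("ab", "c")` and `("a", "bc")` produce different keys, and `NULL` no longer collides with the empty string.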

Extended Caching Support for Records and Sets:

Caching now works for set-returning functions and record types, not just single scalar values. When a cached function returns multiple rows, the entire result set is cached and returned on subsequent calls.

New Configuration Option:

Added MaxCacheableRows option to CacheOptions to limit memory usage when caching large result sets:

csharp
public class CacheOptions
{
    /// <summary>
    /// Maximum number of rows that can be cached for set-returning functions.
    /// If a result set exceeds this limit, it will not be cached (but will still be returned).
    /// Set to 0 to disable caching for sets entirely. Set to null for unlimited (use with caution).
    /// Default is 1000 rows.
    /// </summary>
    public int? MaxCacheableRows { get; set; } = 1000;
}

Configuration in appsettings.json:

json
{
  "NpgsqlRest": {
    "CacheOptions": {
      "MaxCacheableRows": 1000
    }
  }
}

Cache Key Hashing for Long Keys:

Added optional SHA256 hashing for long cache keys to improve performance, especially with Redis cache. When enabled, cache keys exceeding a configurable threshold are automatically hashed to a fixed 64-character string, reducing:

  • Memory usage for storing long cache keys
  • Network transfer overhead with Redis
  • Redis server memory consumption
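The opt-in behavior amounts to the following (an illustrative sketch, not the library's implementation): keys at or below the threshold stay readable, longer ones collapse to a fixed 64-character SHA-256 hex digest.

```python
import hashlib

# Illustrative sketch: keys longer than the threshold are replaced by a
# fixed 64-character SHA-256 hex digest; short keys stay as-is for debugging.
def effective_key(key: str, use_hashed: bool = True, threshold: int = 256) -> str:
    if use_hashed and len(key) > threshold:
        return hashlib.sha256(key.encode("utf-8")).hexdigest()
    return key
```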

New configuration options in CacheOptions:

csharp
public class CacheOptions
{
    /// <summary>
    /// When true, cache keys longer than HashKeyThreshold characters are hashed to a fixed-length SHA256 string.
    /// This reduces memory usage for long cache keys and improves Redis performance with large keys.
    /// Default is false (cache keys are stored as-is).
    /// </summary>
    public bool UseHashedCacheKeys { get; set; } = false;

    /// <summary>
    /// Cache keys longer than this threshold (in characters) will be hashed when UseHashedCacheKeys is true.
    /// Keys shorter than this threshold are stored as-is for better debuggability.
    /// Default is 256 characters.
    /// </summary>
    public int HashKeyThreshold { get; set; } = 256;
}

Configuration in appsettings.json:

json
{
  "CacheOptions": {
    "UseHashedCacheKeys": true,
    "HashKeyThreshold": 256
  }
}

This is particularly recommended when:

  • Using Redis cache with routines that have many or large parameters
  • Caching routines with long SQL expressions
  • High cache hit rates where memory efficiency matters

Cache Invalidation Endpoints:

Added support for programmatic cache invalidation via auto-generated invalidation endpoints. When InvalidateCacheSuffix is configured, NpgsqlRest automatically creates an invalidation endpoint for each cached endpoint.

For example, if you have a cached endpoint /api/get-user/ and set InvalidateCacheSuffix to "invalidate", NpgsqlRest will create /api/get-user/invalidate endpoint automatically.

Calling the invalidation endpoint with the same parameters as the cached endpoint removes the corresponding cache entry:

code
GET /api/get-user/?id=123           -> Returns cached user data
GET /api/get-user/invalidate?id=123 -> Removes cache entry, returns {"invalidated":true}
GET /api/get-user/?id=123           -> Fresh data (cache was cleared)

Key Features:

  • Same authentication and authorization as the original endpoint
  • Same parameter handling - no need to know the internal cache key format
  • Works correctly with hashed cache keys
  • Returns {"invalidated":true} if cache entry was removed, {"invalidated":false} if not found
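The invalidation semantics reduce to a keyed removal, sketched here for illustration (the `cache` dict and key format are hypothetical, not the library's internals):

```python
# Illustrative sketch of invalidation semantics: removing an existing entry
# reports {"invalidated": True}; a miss reports {"invalidated": False}.
cache = {"get-user:123": {"id": 123, "name": "Alice"}}  # hypothetical entry

def invalidate(key: str) -> dict:
    return {"invalidated": cache.pop(key, None) is not None}
```

Calling it twice with the same key shows both outcomes: the first call removes the entry, the second finds nothing.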

Configuration in CacheOptions:

csharp
public class CacheOptions
{
    /// <summary>
    /// When set, creates an additional invalidation endpoint for each cached endpoint.
    /// The invalidation endpoint has the same path with this suffix appended.
    /// Default is null (no invalidation endpoints created).
    /// </summary>
    public string? InvalidateCacheSuffix { get; set; } = null;
}

Configuration in appsettings.json:

json
{
  "CacheOptions": {
    "InvalidateCacheSuffix": "invalidate"
  }
}

Multi-Host Connection Support

Added support for PostgreSQL multi-host connections with failover and load balancing capabilities using Npgsql's NpgsqlMultiHostDataSource.

Features:

  • Automatic detection of multi-host connection strings (connection strings with comma-separated hosts like Host=server1,server2)
  • Configurable target session attributes per connection: Any, Primary, Standby, PreferPrimary, PreferStandby, ReadWrite, ReadOnly
  • Seamless integration with existing named connections - multi-host data sources take priority over connection strings

Configuration:

json
{
  "ConnectionSettings": {
    "MultiHostConnectionTargets": {
      // Default target for all multi-host connections
      "Default": "Any",
      // Per-connection overrides
      "ByConnectionName": {
        "readonly": "Standby",
        "primary": "Primary"
      }
    }
  }
}

Example Multi-Host Connection String:

json
{
  "ConnectionStrings": {
    "default": "Host=primary.db.com,replica1.db.com,replica2.db.com;Database=mydb;Username=app;Password=secret"
  }
}
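Multi-host detection boils down to checking whether the `Host=` entry lists comma-separated servers, as in the connection string above. A minimal sketch (not Npgsql's actual detection code):

```python
# Illustrative sketch: a connection string counts as multi-host when its
# Host= entry contains comma-separated servers.
def is_multi_host(connection_string: str) -> bool:
    for part in connection_string.split(";"):
        key, _, value = part.partition("=")
        if key.strip().lower() == "host":
            return "," in value
    return False
```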

Target Session Attributes:

| Value | Description |
|---|---|
| Any | Any successful connection is acceptable (default) |
| Primary | Server must not be in hot standby mode |
| Standby | Server must be in hot standby mode |
| PreferPrimary | Try primary first, fall back to any |
| PreferStandby | Try standby first, fall back to any |
| ReadWrite | Session must accept read-write transactions |
| ReadOnly | Session must not accept read-write transactions |

See Npgsql Failover and Load Balancing for more details.

New Options Property:

Added DataSources property to NpgsqlRestOptions for storing multi-host data sources:

csharp
/// <summary>
/// Dictionary of data sources by connection name. This is used for multi-host connection support.
/// When a connection name is specified in a routine endpoint, the middleware will first check
/// this dictionary for a data source. If not found, it falls back to the ConnectionStrings dictionary.
/// </summary>
public IDictionary<string, NpgsqlDataSource>? DataSources { get; set; }

Other Changes and Fixes

  • Fixed default value on ErrorHandlingOptions.RemoveTraceId configuration setting. Default is true as it should be.
  • Fixed PostgreSQL parameter and result type mapping when default search path is not public.
  • Fixed the return type in TypeScript client generation when returning an error. Errors now return a JSON object instead of a string.
  • Removed options.md, annotations.md, client.md and login-endpoints.md documentation files because dedicated website is now live: https://npgsqlrest.github.io/
  • Added missing CsvUploadKey with value "csv" in NpgsqlRest.UploadOptions.UploadHandlers configuration.
  • Moved the authorization check after parameter parsing. This allows the endpoint to return a proper 404 response code when a parameter is missing, instead of a 400 when authorization fails.
  • When using custom types in PostgreSQL function parameters (composite types, enums, etc.), and those parameters are not supplied in the request, they will now always default to NULL. The previous behavior was a 404 Not Found when the parameter was missing.
  • Fixed debug logging in ErrorHandlingOptions builder.
  • Fixed default mapping in ErrorHandlingOptions builder.
  • Added a guard clause that returns an error if serviceProvider is null when ServiceProviderMode is set.
  • Removed 5 duplicate HttpClientOptions.Enabled blocks (kept 1).
  • Replaced an un-awaited transaction?.RollbackAsync() with proper shouldCommit = false and uploadHandler?.OnError() handling for consistency with other error paths.
