Written with Claude
IMPORTANT

As you may have noticed, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way, and I hope there will be more in the future; those carry a similar "Human Written" badge.

This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking, unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I also did that manually? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or should I build this website manually, the old-fashioned way, just for you, while also working a paid job for a salary? Most of you wouldn't even get up in the morning for that. Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip.

Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never coming back. And good riddance; I would ban your access if I knew how. Thank you for your attention to this matter.

Changelog v3.2.0 (2025-12-22)

Version 3.2.0 (2025-12-22)

Full Changelog

Reverse Proxy Feature

Added reverse proxy support for NpgsqlRest endpoints. When an endpoint is marked as a proxy, incoming HTTP requests are forwarded to an upstream service, and the response can either be returned directly to the client (passthrough mode) or processed by the PostgreSQL function (transform mode).

Basic Usage:

```sql
-- Passthrough mode: forward request, return upstream response directly
create function get_external_data()
returns void
language sql as 'select';
comment on function get_external_data() is 'HTTP GET
proxy';

-- Transform mode: forward request, process response in PostgreSQL
create function get_and_transform(
    _proxy_status_code int default null,
    _proxy_body text default null,
    _proxy_headers json default null,
    _proxy_content_type text default null,
    _proxy_success boolean default null,
    _proxy_error_message text default null
)
returns json
language plpgsql as $$
begin
    if not _proxy_success then
        return json_build_object('error', _proxy_error_message);
    end if;
    return json_build_object(
        'status', _proxy_status_code,
        'data', _proxy_body::json
    );
end;
$$;
comment on function get_and_transform(int, text, json, text, boolean, text) is 'HTTP GET
proxy';
```

Proxy Annotations:

```sql
-- Basic proxy with default host from configuration
comment on function my_func() is 'proxy';

-- Proxy with custom host
comment on function my_func() is 'proxy https://api.example.com';
comment on function my_func() is 'proxy_host https://api.example.com';

-- Proxy with custom HTTP method
comment on function my_func() is 'proxy POST';
comment on function my_func() is 'proxy_method POST';

-- Combined host and method
comment on function my_func() is 'proxy https://api.example.com POST';
```

Response Parameters:

When the PostgreSQL function has parameters matching these names, the proxy response data is passed to the function:

| Parameter Name | Type | Description |
|---|---|---|
| `_proxy_status_code` | int | HTTP status code from upstream (e.g., 200, 404) |
| `_proxy_body` | text | Response body content |
| `_proxy_headers` | json | Response headers as JSON object |
| `_proxy_content_type` | text | Content-Type header value |
| `_proxy_success` | boolean | True for 2xx status codes |
| `_proxy_error_message` | text | Error message if request failed |
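
The mapping above can be illustrated with a small conceptual sketch (this is not NpgsqlRest source code, just a model of how an upstream response plausibly maps onto the `_proxy_*` parameters):

```python
# Conceptual sketch: build the argument set a transform-mode function
# would receive from an upstream HTTP response, per the table above.

def map_proxy_response(status_code, body, headers, content_type):
    """Map an upstream response to the _proxy_* parameter values."""
    return {
        "_proxy_status_code": status_code,
        "_proxy_body": body,
        "_proxy_headers": headers,                   # JSON object of headers
        "_proxy_content_type": content_type,
        "_proxy_success": 200 <= status_code < 300,  # true only for 2xx
        "_proxy_error_message": None,                # set when the request fails
    }

args = map_proxy_response(404, '{"detail": "not found"}',
                          {"Server": "nginx"}, "application/json")
print(args["_proxy_success"])  # False
```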

User Claims and Context Forwarding:

When user_params is enabled, user claim values are forwarded to the upstream proxy as query string parameters:

```sql
create function proxy_with_claims(
    _user_id text default null,        -- Forwarded as ?userId=...
    _user_name text default null,      -- Forwarded as ?userName=...
    _ip_address text default null,     -- Forwarded as ?ipAddress=...
    _user_claims json default null,    -- Forwarded as ?userClaims=...
    _proxy_status_code int default null,
    _proxy_body text default null
)
returns json language plpgsql as $$
begin
    return json_build_object('user', _user_id, 'data', _proxy_body);
end;
$$;
comment on function proxy_with_claims(text, text, text, json, int, text) is 'HTTP GET
authorize
user_params
proxy';
```
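
The query-string construction can be sketched as follows (an illustration, not the actual implementation; the snake_case-to-camelCase mapping is taken from the comments in the example above):

```python
# Illustration: append forwarded claim values to the upstream URL as
# query-string parameters, e.g. _user_id -> ?userId=... per the example above.
from urllib.parse import urlencode

def upstream_url(host, path, claims):
    """Build the upstream request URL with camelCase claim keys appended."""
    query = urlencode(claims)
    return f"{host}{path}?{query}" if query else f"{host}{path}"

url = upstream_url("https://api.example.com", "/data",
                   {"userId": "42", "userName": "alice"})
print(url)  # https://api.example.com/data?userId=42&userName=alice
```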

When user_context is enabled, user context values are forwarded as HTTP headers to the upstream proxy:

```sql
create function proxy_with_context(
    _proxy_status_code int default null,
    _proxy_body text default null
)
returns json language plpgsql as $$
begin
    return json_build_object('status', _proxy_status_code);
end;
$$;
comment on function proxy_with_context(int, text) is 'HTTP GET
authorize
user_context
proxy';
-- Headers forwarded: request.user_id, request.user_name, request.user_roles (configurable via ContextKeyClaimsMapping)
```
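
A rough sketch of the forwarded header set, using the default header names noted in the comment above (how multiple roles are serialized is an assumption here, not documented behavior):

```python
# Sketch only: build the context headers sent to the upstream service.
# Header names follow the defaults noted above; joining roles with a comma
# is an assumption for illustration.

def context_headers(user_id, user_name, roles):
    """Return the HTTP headers forwarded when user_context is enabled."""
    return {
        "request.user_id": user_id,
        "request.user_name": user_name,
        "request.user_roles": ",".join(roles),  # serialization format assumed
    }

headers = context_headers("42", "alice", ["admin", "editor"])
print(headers["request.user_roles"])  # admin,editor
```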

Upload Forwarding:

For upload endpoints with proxy, you can configure whether to process uploads locally or forward raw multipart data:

```json
{
  "NpgsqlRest": {
    "ProxyOptions": {
      "ForwardUploadContent": false
    }
  }
}
```

  • ForwardUploadContent: false (default): Uploads are processed locally; proxy receives parsed data
  • ForwardUploadContent: true: Raw multipart/form-data is streamed directly to upstream (memory-efficient)

Configuration:

```json
{
  "NpgsqlRest": {
    "ProxyOptions": {
      "Enabled": false,
      "Host": null,
      "DefaultTimeout": "30 seconds",
      "ForwardHeaders": true,
      "ExcludeHeaders": ["Host", "Content-Length", "Transfer-Encoding"],
      "ForwardResponseHeaders": true,
      "ExcludeResponseHeaders": ["Transfer-Encoding", "Content-Length"],
      "ResponseStatusCodeParameter": "_proxy_status_code",
      "ResponseBodyParameter": "_proxy_body",
      "ResponseHeadersParameter": "_proxy_headers",
      "ResponseContentTypeParameter": "_proxy_content_type",
      "ResponseSuccessParameter": "_proxy_success",
      "ResponseErrorMessageParameter": "_proxy_error_message",
      "ForwardUploadContent": false
    }
  }
}
```

Key Features:

  • Passthrough mode: No database connection opened when function has no proxy response parameters
  • Transform mode: Process upstream response in PostgreSQL before returning to client
  • User claims forwarding: Authenticated user claims passed as query parameters to upstream
  • User context headers: User context values passed as HTTP headers to upstream
  • Streaming uploads: Memory-efficient streaming for large file uploads when ForwardUploadContent is enabled
  • Timeout handling: Configurable per-request timeout with proper 504 Gateway Timeout responses
  • Header forwarding: Configurable request/response header forwarding with exclusion lists
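
The first two bullets amount to a mode decision based on the function's parameter names; a minimal sketch (illustrative only, parameter names taken from the configuration defaults above):

```python
# Illustrative decision logic: an endpoint runs in passthrough mode when its
# function declares none of the configured _proxy_* response parameters, so
# no database connection is needed to return the upstream response.

PROXY_PARAMS = {
    "_proxy_status_code", "_proxy_body", "_proxy_headers",
    "_proxy_content_type", "_proxy_success", "_proxy_error_message",
}

def proxy_mode(function_parameter_names):
    """Return 'transform' if any proxy response parameter is present."""
    if PROXY_PARAMS.intersection(function_parameter_names):
        return "transform"
    return "passthrough"

print(proxy_mode([]))               # passthrough
print(proxy_mode(["_proxy_body"]))  # transform
```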

Docker Image with Bun Runtime

Added new Docker image variant with pre-installed Bun runtime: vbilopav/npgsqlrest:latest-bun

This image includes the Bun JavaScript runtime alongside NpgsqlRest, enabling proxy endpoints to execute Bun scripts within the same container. Useful for scenarios where you need lightweight proxy handlers without external service calls.

Available tags:

  • vbilopav/npgsqlrest:3.2.1-bun - specific version with Bun
  • vbilopav/npgsqlrest:latest-bun - latest version with Bun

Configuration Default Fixes

Fixed multiple configuration default mismatches where code fallback values did not match the defaults defined in appsettings.json. When configuration keys were not present, the application would use incorrect fallback values instead of the documented defaults.

Fixed defaults:

| Section | Key | Was | Now |
|---|---|---|---|
| DataProtection | GetAllElementsCommand | "select data from get_all_data_protection_elements()" | "select get_data_protection_keys()" |
| DataProtection | StoreElementCommand | "call store_data_protection_element($1,$2)" | "call store_data_protection_keys($1,$2)" |
| Cors | AllowedOrigins | ["*"] | [] |
| CommandRetryOptions | Enabled | false | true |
| RateLimiterOptions.Concurrency | PermitLimit | 100 | 10 |
| Auth.BasicAuth | UseDefaultPasswordHasher | false | true |
| NpgsqlRest.HttpFileOptions | NamePattern | "{0}{1}" | "{0}_{1}" |
| NpgsqlRest.OpenApiOptions | FileOverwrite | false | true |
| NpgsqlRest.CrudSource | Enabled | false | true |
| StaticFiles.ParseContentOptions | Headers | null | ["Cache-Control: no-store, no-cache, must-revalidate", "Pragma: no-cache", "Expires: 0"] |
| NpgsqlRest | RequestHeadersMode | Ignore | Parameter |
| RateLimiterOptions.TokenBucket | ReplenishmentPeriodSeconds (log) | 1 | 10 |
| RateLimiterOptions.Concurrency | QueueLimit | 10 | 5 |
| RateLimiterOptions | Message (field name) | "Message" | "StatusMessage" |
| CacheOptions | UseRedisBackend (field name) | "UseRedisBackend" | "HybridCacheUseRedisBackend" |

Note: If you were relying on the previous (incorrect) fallback behavior, you may need to explicitly set these values in your configuration.
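
For example, to keep the old fallback behavior for a few of these keys, set the old values explicitly (values taken from the "Was" column above; the exact section nesting is an assumption based on the section names in the table, so verify against your full appsettings.json):

```json
{
  "Cors": {
    "AllowedOrigins": ["*"]
  },
  "CommandRetryOptions": {
    "Enabled": false
  },
  "RateLimiterOptions": {
    "Concurrency": {
      "PermitLimit": 100
    }
  }
}
```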
