Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way, and I hope there will be more in the future; those carry a similar "Human Written" badge. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking, unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I also did that manually? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or should I create this website manually, the old-fashioned way, just for you, while working a paid job for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate you leaving your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. And I would ban your access if I knew how. Thank you for your attention to this matter.

SQL File Source

Configuration for generating REST API endpoints from .sql files.

Overview

json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "Enabled": false,
      "FilePattern": "",
      "CommentsMode": "OnlyWithHttpTag",
      "CommentScope": "All",
      "ErrorMode": "Exit",
      "ResultPrefix": "result",
      "UnnamedSingleColumnSet": true,
      "NestedJsonForCompositeTypes": false,
      "LogCommandText": false
    }
  }
}

Settings

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| Enabled | bool | false | Enable or disable SQL file source endpoints. |
| FilePattern | string | "" | Glob pattern for SQL files. Empty string disables the feature. |
| CommentsMode | string | "OnlyWithHttpTag" | How comment annotations affect endpoint creation. |
| CommentScope | string | "All" | Which comments in the SQL file to parse as annotations. |
| ErrorMode | string | "Exit" | Behavior when a SQL file fails to parse or describe. |
| ResultPrefix | string | "result" | Prefix for result keys in multi-command JSON responses. |
| UnnamedSingleColumnSet | bool | true | Single-column queries return flat arrays instead of object arrays. |
| NestedJsonForCompositeTypes | bool | false | When true, composite type columns are serialized as nested JSON objects. When false (default), composite fields are flattened inline. Can also be enabled per-endpoint with the @nested annotation. |
| LogCommandText | bool | false | Include the full SQL text of multi-command endpoints in debug command logs. |

Enabled

Enable or disable SQL file source endpoints. Default is false — you must explicitly enable this feature.

json
"SqlFileSource": {
  "Enabled": true
}

FilePattern

Glob pattern for locating SQL files. Supports the following wildcards:

| Pattern | Description |
| --- | --- |
| * | Matches any characters within a single directory level |
| ** | Matches any characters including / (crosses directory boundaries) |
| ? | Matches a single character |

When ** is present in the pattern, * stops matching / (standard glob semantics). When no ** is present, * matches / for backward compatibility.
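These semantics can be sketched as a small pattern-to-regex translator. This is an illustration of the rules described above, not the library's actual implementation, and the function names are hypothetical:

```python
import re

def glob_to_regex(pattern: str) -> str:
    """Translate a FilePattern-style glob into a regex, following the rules
    above: '**' crosses directory boundaries; '*' stops matching '/' only
    when the pattern contains '**'; '?' matches exactly one character.
    Simplified sketch, not the library's implementation."""
    has_globstar = "**" in pattern
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i : i + 2] == "**":
            out.append(".*")  # '**' matches across '/'
            i += 2
        elif pattern[i] == "*":
            # legacy mode (no '**' anywhere in the pattern): '*' also matches '/'
            out.append(".*" if not has_globstar else "[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")  # exactly one character
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "".join(out)

def matches(pattern: str, path: str) -> bool:
    return re.fullmatch(glob_to_regex(pattern), path) is not None
```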

Examples

json
// All .sql files in the sql/ directory (non-recursive)
"FilePattern": "sql/*.sql"

// All .sql files in sql/ and all subdirectories (recursive)
"FilePattern": "sql/**/*.sql"

// Any .sql file at any depth
"FilePattern": "**/*.sql"

An empty string disables the feature (even if Enabled is true).

CommentsMode

Controls how comment annotations affect SQL file endpoint creation.

| Mode | Description |
| --- | --- |
| ParseAll | Every SQL file becomes an endpoint. Comments are parsed as annotations to modify endpoint behavior. |
| OnlyWithHttpTag | Only files containing an HTTP annotation become endpoints. (default) |
| Ignore | Every SQL file becomes an endpoint. All comments are ignored. |
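For example, under the default OnlyWithHttpTag mode, a file like the following (hypothetical name) becomes an endpoint because of its HTTP annotation; the same file without the -- HTTP GET line would be skipped:

```sql
-- sql/list_products.sql (hypothetical file)
-- HTTP GET
select id, name, price from products;
```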

TIP

The default OnlyWithHttpTag means only SQL files with an explicit HTTP annotation (e.g., -- HTTP GET) become endpoints. Use ParseAll if you want every SQL file in the matched pattern to become an endpoint automatically.

CommentScope

Controls which comments in the SQL file are parsed as annotations.

| Scope | Description |
| --- | --- |
| All | Parse every comment in the file, regardless of position. (default) |
| Header | Only parse comments that appear before the first SQL statement. |

Example

With CommentScope: "Header", only comments before the first statement are parsed:

sql
-- This IS parsed as an annotation
-- HTTP GET
-- @authorize admin

select * from users;

-- This is NOT parsed (after first statement)
-- @cached

With CommentScope: "All" (default), all comments are parsed regardless of position.

ErrorMode

Controls behavior when a SQL file fails to parse or when PostgreSQL reports an error during the describe phase.

| Mode | Description |
| --- | --- |
| Exit | Log the error and exit the process. Fail-fast: catches SQL errors at startup. (default) |
| Skip | Log the error, skip the file, and continue startup. Tolerates partial failures. |

All SQL file errors are logged at Error level. In Exit mode, a Critical log explains the exit and how to switch to Skip mode.

A warning is logged when the configured file pattern matches no files.

Errors caught at startup include:

  • Parse errors (malformed SQL, unclosed strings/quotes)
  • Describe errors (PostgreSQL syntax errors, invalid table/column references)
  • Parameter type conflicts in multi-command files

TIP

Use Exit (default) during development to catch SQL errors early. Use Skip in production to tolerate partial failures.

ResultPrefix

Prefix for result keys in multi-command JSON responses. Default keys are result1, result2, result3, etc.

json
// Default: result1, result2, ...
"ResultPrefix": "result"

// Custom: data1, data2, ...
"ResultPrefix": "data"

// Custom: query1, query2, ...
"ResultPrefix": "query"

Individual result keys can be overridden per-file using the @result annotation.
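The naming scheme can be sketched as follows; build_response is a hypothetical helper illustrating how each command's result set is keyed, not NpgsqlRest's actual code:

```python
def build_response(prefix: str, results: list) -> dict:
    """Key each command's result as prefix1, prefix2, ... in statement order.
    Sketch of the documented naming scheme, not the server's implementation."""
    return {f"{prefix}{i}": r for i, r in enumerate(results, start=1)}
```

With the default prefix, two result sets come back under result1 and result2; setting ResultPrefix to data yields data1 and data2 instead.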

UnnamedSingleColumnSet

When true (default), single-column queries return flat arrays instead of arrays of objects. This matches the behavior of PostgreSQL functions returning setof single values.

sql
-- sql/get_names.sql
select name from users;

With UnnamedSingleColumnSet: true (default):

json
["Alice", "Bob", "Charlie"]

With UnnamedSingleColumnSet: false:

json
[{"name": "Alice"}, {"name": "Bob"}, {"name": "Charlie"}]

This applies to both single-command endpoints and per-result in multi-command files.
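The shaping rule can be sketched like this (an illustration of the behavior described above; shape_result is a hypothetical function, not the server's implementation):

```python
def shape_result(columns, rows, unnamed_single_column_set=True):
    """Flatten single-column result sets into a plain array when the option
    is on; otherwise return an array of objects keyed by column name.
    Sketch of the documented behavior only."""
    if unnamed_single_column_set and len(columns) == 1:
        return [row[0] for row in rows]               # e.g. ["Alice", "Bob"]
    return [dict(zip(columns, row)) for row in rows]  # e.g. [{"name": "Alice"}]
```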

NestedJsonForCompositeTypes

Controls how composite type columns are serialized in SQL file endpoint responses.

Default (flat): Composite fields are spliced inline into the JSON row:

sql
-- sql/get_user_with_address.sql
-- HTTP GET
-- @param $1 user_id
select id, address from users where id = $1;
-- where address is: create type address_type as (street text, city text, zip text)
json
{"id": 1, "street": "123 Main St", "city": "New York", "zip": "10001"}

With NestedJsonForCompositeTypes: true or the @nested annotation, the composite is wrapped under its column name:

json
{"id": 1, "address": {"street": "123 Main St", "city": "New York", "zip": "10001"}}

Enable globally for all SQL file endpoints:

json
"SqlFileSource": {
  "Enabled": true,
  "FilePattern": "sql/**/*.sql",
  "NestedJsonForCompositeTypes": true
}

Or per-endpoint with the @nested annotation:

sql
-- sql/get_user_with_address.sql
-- HTTP GET
-- @nested
-- @param $1 user_id
select id, address from users where id = $1;

NULL composites are serialized as null in nested mode, or as individual null fields in flat mode.
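Both modes, including the NULL handling just described, can be sketched as follows (hypothetical helper; composite_columns maps each composite column to its field names):

```python
def serialize_row(row: dict, composite_columns: dict, nested: bool = False) -> dict:
    """Serialize one row whose composite-typed columns arrive as dicts.
    Flat mode splices composite fields inline (a NULL composite becomes
    individual null fields); nested mode keeps the composite under its
    column name. Sketch of the documented behavior, not NpgsqlRest's code."""
    out = {}
    for col, value in row.items():
        if col in composite_columns and not nested:
            if value is None:
                out.update({field: None for field in composite_columns[col]})
            else:
                out.update(value)  # splice e.g. street/city/zip inline
        else:
            out[col] = value
    return out
```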

TIP

This setting is also available in Routine Options for function/procedure endpoints. Each endpoint source has its own independent setting.

LogCommandText

Controls whether multi-command SQL file endpoints include the full SQL text in debug command logs.

Default (false): Only the file path and statement count are logged:

code
[DBG] -- POST http://127.0.0.1:8080/api/send-message
-- $1 text = 'hello'
SQL file: sql/send-message.sql (5 statements)

With LogCommandText: true: The full SQL body of all statements is logged.

json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "LogCommandText": true
    }
  }
}

Single-command SQL file endpoints always log the SQL text regardless of this setting. LogCommandText only applies when LogCommands is true.

Quick Start Example

  1. Enable the SQL file source in appsettings.json:
json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "Enabled": true,
      "FilePattern": "sql/**/*.sql"
    }
  }
}
  2. Create a SQL file:
sql
-- sql/get_users.sql
-- HTTP GET
-- @authorize
-- @param $1 active
select id, name, email from users where active = $1;
  3. The endpoint is available at GET /api/get-users?active=true
