Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way, and those carry a similar "Human Written" badge; I hope there will be more in the future. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking and unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI, then fake it to give you the illusion that I also did that manually? Like half of the internet is doing at this point? How does that make any sense? Is that even fair to you? Or maybe create this website manually, the old-fashioned way, just for you, while also working a paid job for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate it if you left your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. I would ban your access if I knew how. Thank you for your attention to this matter.

SQL File Endpoints

NpgsqlRest creates REST API endpoints directly from .sql files. No CREATE FUNCTION, no RETURNS TABLE, no LANGUAGE sql, no COMMENT ON FUNCTION — just the query itself.

Source Code: Every example in this guide comes from the examples repository. Each function-based example has a _sql_file counterpart.

How It Works

At startup, for each .sql file matched by the configured glob pattern:

  1. The file is parsed — comments are extracted as annotations, SQL is split into statements on ; boundaries
  2. Each statement is described via PostgreSQL's wire protocol (Parse → Describe → Sync with SchemaOnly) — parameter types and return columns are inferred without executing the query
  3. A REST endpoint is created with the URL path derived from the filename: get-users.sql becomes /api/get-users

This gives you static type checking — SQL errors are caught at startup, not at runtime. Your SQL files are validated against the actual database schema before the server accepts any requests:

code
SqlFileSource: /path/to/get-posts.sql:
error 42703: column u.id does not exist
  at line 3, column 12
  select u.id, u.name from users u
             ^

This is the default behavior (ErrorMode: "Exit"). Set ErrorMode: "Skip" to log errors and continue startup instead. Individual statements can bypass Describe entirely with @returns, which is necessary when the SQL references objects that don't exist at startup (e.g. temp tables created at runtime).

Configuration

Enable SQL File Source in appsettings.json:

json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "Enabled": true,
      "FilePattern": "sql/**/*.sql"
    }
  }
}

FilePattern uses glob syntax: ** crosses directories, * matches filenames, ? matches a single character.

By default (CommentsMode: "OnlyWithHttpTag"), only files containing an HTTP annotation become endpoints. This prevents accidental exposure of migration scripts or utility files. Set CommentsMode: "ParseAll" to make every matched file an endpoint.
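
For example, under the default mode only the second of these two sketch files would become an endpoint (file names and contents are illustrative; the example_2.users table is borrowed from a later example):

sql
-- sql/create-index.sql (no HTTP annotation: ignored under OnlyWithHttpTag)
create index if not exists users_email_idx on example_2.users (email);

-- sql/list-users.sql (annotated: becomes GET /api/list-users)
-- HTTP GET
select user_id, username from example_2.users;

With CommentsMode: "ParseAll", both files would be exposed, including the index-creation script.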

Other settings:

  • ErrorMode (default Exit): Exit fails fast at startup; Skip logs errors and continues
  • CommentScope (default All): All parses every comment; Header only parses comments before the first statement
  • UnnamedSingleColumnSet (default true): single-column queries return flat arrays (["a","b"]) instead of object arrays
  • ResultPrefix (default result): prefix for multi-command result keys (result1, result2, ...)
  • SkipNonQueryCommands (default true): transaction control (BEGIN, COMMIT, etc.), DO blocks, and SET/RESET are auto-skipped from the response

See SQL File Source Configuration for the complete reference.

Single-Command Files

A file with one SQL statement produces a standard endpoint:

sql
-- sql/get-users.sql
-- HTTP GET
select user_id, username, email, active from example_2.users;

GET /api/get-users returns an array of objects:

json
[{"userId": 1, "username": "alice", "email": "alice@example.com", "active": true}, ...]

Column names are converted to camelCase by the default NameConverter. Single-column queries return flat arrays — select name from users returns ["Alice","Bob"] not [{"name":"Alice"},...] (configurable via UnnamedSingleColumnSet).

HTTP Verb Detection

Without an explicit HTTP annotation, the verb is inferred from the SQL: SELECT → GET, INSERT → PUT, UPDATE → POST, DELETE → DELETE, DO block → POST. Mixed mutations → most destructive wins (DELETE > POST > PUT). An explicit annotation always overrides: -- HTTP POST.
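
As a sketch (the sessions table is hypothetical), these single-statement files would receive the verbs shown:

sql
-- sql/list-sessions.sql (SELECT: inferred as GET)
select id, started_at from sessions;

-- sql/delete-session.sql (DELETE statement: inferred as DELETE)
delete from sessions where id = $1;

-- sql/close-session.sql (explicit annotation overrides inference)
-- HTTP POST
delete from sessions where id = $1;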

Parameters

SQL files use PostgreSQL positional parameters ($1, $2, ...). The @param annotation gives them meaningful names and optionally overrides the type:

sql
-- sql/get-reports.sql
-- HTTP GET
-- @param $1 from_date
-- @param $2 to_date
select id, title, created_at
from reports
where created_at between $1 and $2;

GET /api/get-reports?from_date=2024-01-01&to_date=2024-12-31

Without @param, the raw positional names work: ?$1=2024-01-01&$2=2024-12-31.

Type Hints

When PostgreSQL can't infer a parameter's type from context (e.g. in select set_config('key', $1, true), $1 is ambiguous), add a type to @param:

sql
-- @param $1 user_id integer
-- @param $2 active boolean

The type is used during the Describe step so PostgreSQL can resolve the parameter.
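
A complete sketch combining the two hints above (the example_2.users table is borrowed from the earlier example; the filter itself is illustrative):

sql
-- sql/find-users.sql
-- HTTP GET
-- @param $1 min_id integer
-- @param $2 active boolean
select user_id, username
from example_2.users
where user_id >= $1 and active = $2;

GET /api/find-users?min_id=10&active=true then binds $1 as integer and $2 as boolean, both during the Describe step and at execution.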

Default Values

Positional parameters must always be bound — unlike function parameters, there's no native DEFAULT clause. The @param annotation fills this gap:

sql
-- @param $1 status default 'active'
-- @param $2 limit integer = 50

When a parameter with a default is not provided in the request, the default value is bound. Parameters with defaults become optional in generated TypeScript (? suffix) and OpenAPI (required: false).

Value syntax follows SQL conventions: null → SQL NULL, 'text' → string, 42 → number, true → boolean.
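
A full sketch (the items table is hypothetical):

sql
-- sql/list-items.sql
-- HTTP GET
-- @param $1 status default 'active'
-- @param $2 page_size integer = 50
select id, name from items
where status = $1
order by id
limit $2;

GET /api/list-items, GET /api/list-items?status=archived, and GET /api/list-items?status=archived&page_size=10 are then all valid requests; omitted parameters fall back to their defaults.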

Virtual Parameters (@define_param)

@define_param creates HTTP parameters that are not bound to the SQL query. They exist for annotation placeholders and claim mapping:

sql
-- @define_param format text
-- @table_format = {format}

The format parameter feeds into the {format} placeholder without appearing in the SQL. Default type is text.

Multi-Command Files

A file with multiple statements (separated by ;) becomes one endpoint that executes everything in a single database round-trip via NpgsqlBatch. From the first example:

sql
-- HTTP GET

-- @result first
select * from (
    values ('Hello, World!'), 
    ('This is my first SQL endpoint.'), 
    ('Enjoy coding in SQL!')
) as t(text);

-- @result second
-- @single
select current_query() as query_text, current_user as user, current_timestamp as timestamp;

Returns a JSON object with one key per statement:

json
{
  "first": ["Hello, World!", "This is my first SQL endpoint.", "Enjoy coding in SQL!"],
  "second": {"queryText": "...", "user": "postgres", "timestamp": "..."}
}

Result Rules

  • SELECT queries → JSON array of objects (or flat array for single-column with UnnamedSingleColumnSet)
  • INSERT/UPDATE/DELETE without RETURNING → rows-affected count (integer)
  • Transaction control (BEGIN, COMMIT, SET, DO, etc.) → auto-skipped from response (SkipNonQueryCommands: true)
  • All statements share the same parameters ($1, $2, ...) — the user sends each parameter once
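
The rules above combine in one sketch file (the audit_log table and app.actor setting are hypothetical):

sql
-- HTTP POST
select set_config('app.actor', $1, true); -- @skip
-- @result inserted
insert into audit_log (actor, note) values (current_setting('app.actor'), $2);
-- @result entries
select id, note from audit_log order by id desc limit 10;

The caller sends $1 and $2 once each. The response would be an object like {"inserted": 1, "entries": [...]}: the INSERT (no RETURNING) contributes a rows-affected count, the SELECT contributes an array, and the set_config call is excluded by @skip.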

Positional Annotations

Three annotations are positional — they apply to the next statement below them (or inline after ; on the same line):

  • @result name — rename result key (default: result1, result2, ...)
  • @single — return a single row as an object instead of an array
  • @skip — execute the statement but exclude it from the response
sql
select set_config('app.val', $1, true); -- @skip
-- @result data
-- @single
select current_setting('app.val') as value;

@void

@void forces the entire endpoint to return 204 No Content. All statements execute for side effects only — no JSON response, no result keys. This eliminates the need to @skip every individual statement.
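
A sketch of a side-effect-only endpoint (the sessions table and retention interval are hypothetical):

sql
-- sql/touch-session.sql
-- HTTP POST
-- @void
update sessions set last_seen = now() where id = $1;
delete from sessions where last_seen < now() - interval '30 days';

Both statements still execute and are still validated by Describe at startup; the client simply receives 204 No Content with an empty body.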

@returns — Skip Describe

@returns skips the PostgreSQL Describe step entirely for a statement and resolves return columns from a type instead. This is a positional annotation.

When to use it: when a statement references objects that don't exist at startup — typically temp tables created inside DO blocks.

sql
-- @returns my_result_type
-- @result data
-- @single
select * from _result;

The type must exist in the database at startup. Columns are resolved from pg_catalog.

Three forms:

  • @returns composite_type — resolve columns from the composite type definition
  • @returns scalar_type (e.g. @returns integer, @returns json) — single-column result
  • @returns void — no columns, void result (differs from @void which still runs Describe)

DO Blocks and Limitations

PostgreSQL DO blocks cannot receive $N parameters — this is a PostgreSQL language limitation, not an NpgsqlRest one. DO blocks also cannot return values. Multi-command SQL files work around this:

Passing parameters in: Use set_config() with true (transaction-local) to store values, then current_setting() inside the DO block. Or use a temp table bridge with @skip:

sql
begin;
select set_config('app.user_id', $1, true);
do $$ begin
    insert into logs (user_id) values (current_setting('app.user_id')::int);
end; $$;
end;

Getting results out: Create a temp table inside the DO block with ON COMMIT DROP, then SELECT from it with @returns to declare the return type.
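
Putting both directions together, a sketch (the logs table and my_result_type are hypothetical; the type would be created once beforehand, e.g. create type my_result_type as (total bigint)):

sql
-- HTTP GET
begin;
select set_config('app.user_id', $1, true); -- @skip
do $$
begin
    create temp table _result on commit drop as
    select count(*) as total
    from logs
    where user_id = current_setting('app.user_id')::int;
end; $$;
-- @returns my_result_type
-- @result data
-- @single
select * from _result;
end;

Because _result does not exist at startup, the final SELECT needs @returns to bypass Describe; begin and end are auto-skipped from the response by SkipNonQueryCommands.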

These are workarounds. When you need proper procedural logic with native parameters and return values, use a PostgreSQL function instead — that's what they're for.

Existing Features Work Unchanged

All NpgsqlRest features that existed before SQL File Source — authentication (@login, @logout, @authorize), file uploads (@upload), SSE (@sse), proxy (@proxy), HTTP custom types, CSV/Excel export (@raw, @table_format), caching (@cached), encryption, custom headers, composite types (@nested) — work identically in SQL files. The annotations are the same; only the endpoint source is different.

The examples repository has a _sql_file counterpart for most examples demonstrating this.

SQL Files vs Functions

Use SQL files when: the query is declarative, involves multi-statement workflows, or the team prefers plain SQL files over DDL.

Use functions when:

  • Procedural logic — functions receive parameters and return results natively. DO blocks require set_config/temp table workarounds.
  • Testing — functions support assert blocks inside repeatable migrations that run on every build, giving you database-level unit tests with rollback isolation. SQL files have no equivalent. See End-to-End Type Checking for examples.
  • Optimization — VOLATILE/STABLE/IMMUTABLE, COST, ROWS, PARALLEL hints.
  • Overloading — multiple function signatures per name.

Use both together. Each endpoint source is independently enabled. SQL files can call functions, and HTTP custom types can reference any endpoint regardless of source.
