Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way (I hope there will be more in the future), and those carry a similar "Human Written" badge. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking, unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I also did that manually? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or should I create this website manually, the old-fashioned way, just for you, while working a paid job, for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance; I would ban your access if I knew how. Thank you for your attention to this matter.

REST APIs from Plain SQL Files: The Complete Guide to SQL File Source

March 2026 · SQL Files · v3.12.0 · Tutorial · PostgreSQL · NpgsqlRest

NpgsqlRest has always been about collapsing the distance between your database and your API. Write a PostgreSQL function, get a REST endpoint. But functions still carry ceremony — CREATE FUNCTION, RETURNS TABLE, LANGUAGE sql, BEGIN ATOMIC, comments on functions for annotations.

NpgsqlRest 3.12.0 removes even that ceremony. Drop a .sql file in a directory. Get a REST endpoint. Parameters, return types, HTTP verbs — all inferred automatically from the SQL itself.

sql
-- sql/get-users.sql
-- HTTP GET
-- @authorize
-- @param $1 active
select id, name, email from users where active = $1;

That's it. GET /api/get-users?active=true is live. TypeScript client generated. OpenAPI spec updated. Authentication enforced.
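In wire terms, the exchange might look like this (an illustrative request/response transcript; the response body and server address are hypothetical, and the default JSON array shape is shown as described below):

```
GET /api/get-users?active=true HTTP/1.1
Host: 127.0.0.1:8080

HTTP/1.1 200 OK
Content-Type: application/json

[{"id": 1, "name": "alice", "email": "alice@example.com"}]
```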

Source Code: All SQL file examples are available in the examples repository on GitHub. Each function-based example has a _sql_file counterpart showing the same functionality implemented with SQL files.

Why SQL Files?

PostgreSQL functions are powerful. They give you parameter validation, default values, overloading, and optimization hints. But for many endpoints, they're overkill:

  • Simple queries — SELECT * FROM users WHERE active = $1 doesn't benefit from being wrapped in a function
  • Ad-hoc reports — queries that change frequently are easier to edit as plain files than as DDL migrations
  • Multi-statement workflows — transactions that span multiple queries are more readable as sequential SQL than as PL/pgSQL procedures
  • Team collaboration — developers who are comfortable with SQL but not PL/pgSQL can contribute endpoints directly

SQL File Source is not a replacement for functions. It's a third endpoint source alongside RoutineSource (functions/procedures) and CrudSource (tables/views). Use whichever fits the task.

Getting Started

1. Enable SQL File Source

Add to your config.json (or appsettings.json):

json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "Enabled": true,
      "FilePattern": "sql/**/*.sql"
    }
  }
}

The FilePattern uses glob syntax — ** matches directories recursively, * matches filenames.
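For instance, with the pattern above (file paths here are illustrative):

```
sql/hello.sql            → matched
sql/reports/monthly.sql  → matched (** descends into subdirectories)
scripts/cleanup.sql      → not matched (outside the sql/ root)
sql/README.md            → not matched (extension is not .sql)
```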

2. Create Your First SQL File

sql
-- sql/hello.sql
-- HTTP GET
select 'Hello, World!' as message;

Start the server. GET /api/hello returns:

json
[{"message": "Hello, World!"}]

No function definition. No migration. No registration. The file IS the endpoint.

3. URL Path Derivation

The endpoint path comes from the filename, with the same name conversion that is applied to function names:

| Filename | Endpoint Path |
|---|---|
| hello.sql | /api/hello |
| get-users.sql | /api/get-users |
| user_profile.sql | /api/user-profile |
| sql/reports/monthly.sql | /api/monthly |
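The derivation can be sketched roughly like this (a simplification, not NpgsqlRest's actual converter; the /api/ prefix and the hyphenation rule are taken from the examples above):

```typescript
// Sketch: derive an endpoint path from a SQL file name.
// Assumed rules: drop directories and the .sql extension,
// turn underscores into hyphens, prepend the route prefix.
function endpointPath(file: string, prefix = "/api/"): string {
    const base = file.split("/").pop()!.replace(/\.sql$/, "");
    return prefix + base.replace(/_/g, "-");
}

endpointPath("hello.sql");               // "/api/hello"
endpointPath("user_profile.sql");        // "/api/user-profile"
endpointPath("sql/reports/monthly.sql"); // "/api/monthly"
```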

Override with the @path annotation: -- @path /custom/path.

Parameters

SQL files use PostgreSQL positional parameters ($1, $2, ...). Without renaming, they work as-is:

code
GET /api/my-query?$1=hello&$2=42

But positional names in URLs are ugly. The @param annotation fixes that:

sql
-- sql/get-reports.sql
-- HTTP GET
-- @param $1 from_date
-- @param $2 to_date
select id, title, created_at
from reports
where created_at between $1 and $2;

Now: GET /api/get-reports?from_date=2024-01-01&to_date=2024-12-31

Type Override

Sometimes PostgreSQL can't infer the parameter type from context. Add a type to the @param annotation:

sql
-- @param $1 user_id integer
-- @param $2 active boolean
select * from users where id = $1 and active = $2;

Default Values

PostgreSQL functions support default parameters natively. SQL files don't — positional parameters must always be bound. The @param annotation fills this gap:

sql
-- @param $1 status default 'active'
-- @param $2 limit default 100
select * from users where status = $1 limit $2;

Now GET /api/users works without any parameters (using defaults), but GET /api/users?status=inactive&limit=50 overrides them.

Default value syntax follows SQL conventions:

  • default null — SQL NULL
  • default 'text value' — single-quoted string
  • default 42 — number
  • default true — boolean
  • default (no value) — NULL

Parameters with defaults become optional in generated TypeScript interfaces and OpenAPI specs.
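For the status/limit example above, the generated request type might look something like this (an illustrative sketch; actual generated names and shapes may differ):

```typescript
// Hypothetical generated request interface: parameters with
// defaults become optional fields, and the server applies the
// default value when a field is omitted.
interface IUsersRequest {
    status?: string; // default 'active'
    limit?: number;  // default 100
}
```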

You can also use = as shorthand for default:

sql
-- @param $1 _user_id text = null
-- @param $2 limit integer = 50

HTTP Verb Detection

When you don't specify an HTTP method, NpgsqlRest infers it from the SQL:

| SQL Statement | HTTP Verb |
|---|---|
| SELECT / WITH ... SELECT | GET |
| INSERT | PUT |
| UPDATE | POST |
| DELETE | DELETE |
| DO $$ ... $$ | POST |
| Mixed mutations | Most destructive wins |
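The inference can be sketched roughly as follows (illustrative only, not the actual implementation; the "most destructive wins" rule for mixed files is omitted here):

```typescript
// Sketch: infer the HTTP verb from the leading keyword of a
// statement, following the mapping in the table above.
function inferVerb(sql: string): "GET" | "PUT" | "POST" | "DELETE" {
    const head = sql.trimStart().toUpperCase();
    if (head.startsWith("SELECT") || head.startsWith("WITH")) return "GET";
    if (head.startsWith("INSERT")) return "PUT";
    if (head.startsWith("DELETE")) return "DELETE";
    return "POST"; // UPDATE, DO blocks, and everything else
}

inferVerb("select 1");                 // "GET"
inferVerb("insert into t values (1)"); // "PUT"
```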

An explicit HTTP GET, HTTP POST, etc. annotation always overrides auto-detection:

sql
-- HTTP POST
select * from users;

Multi-Command Endpoints

This is where SQL files truly shine. A single file with multiple statements (separated by ;) becomes one endpoint that executes everything in a single database round-trip:

sql
-- sql/process-order.sql
-- HTTP POST
-- @param $1 order_id
-- @result validate
select count(*) from orders where id = $1;
update orders set status = 'processing' where id = $1;
-- @result confirm
select id, status from orders where id = $1;

POST /api/process-order with {"order_id": 42} returns:

json
{
  "validate": [1],
  "result2": 1,
  "confirm": [{"id": 42, "status": "processing"}]
}

Each statement produces a result:

  • SELECT queries return JSON arrays of row objects
  • INSERT/UPDATE/DELETE return rows-affected counts
  • Result keys default to result1, result2, etc.
  • The @result annotation names them: @result validate

All statements share the same parameters — the user sends each parameter once.

Skipping Noise

Transaction control and session statements are auto-skipped from the response (while still being executed). But sometimes you need explicit control. The @skip annotation excludes a statement from the response:

sql
-- HTTP POST
-- @param $1 id
begin;
-- @skip
select set_config('app.user_id', $1::text, true);
update users set active = true where id = $1;
-- @result verification
select id, active from users where id = $1;
end;

Result: {"result1": 1, "verification": [{"id": 1, "active": true}]}

The set_config call executes but doesn't appear in the response.

Void Endpoints

When all statements are side-effect-only, use @void to return 204 No Content:

sql
-- sql/send-message.sql
/*
HTTP POST
@authorize
@void
@param $1 messageText text
@param $2 _user_id text = null
*/
select set_config('app.message', $1, true);
select set_config('app.user_id', $2, true);
do $$ begin
    insert into messages (user_id, text)
    values (current_setting('app.user_id')::int, current_setting('app.message'));
end; $$;

Without @void, this returns {"result1":"...","result2":"...","result3":-1}. With @void, it returns 204 — clean and simple.

Per-Command @single

The @single annotation can apply to individual commands in a multi-command file:

sql
-- HTTP GET
-- @param $1 id

-- @single
select id, name from users where id = $1;

select id, total from orders where user_id = $1;

Result: {"result1": {"id": 1, "name": "alice"}, "result2": [{"id": 10, "total": 99.99}, {"id": 11, "total": 42.00}]}

The first query returns an object (single record), the second returns an array (multiple rows).

Authentication and Authorization

Every annotation that works on function endpoints works on SQL files — including auth:

Login

sql
-- sql/login.sql
/*
HTTP POST
@login
@allow_anonymous
@param $1 username
@param $2 password
*/
select
    'cookies' as scheme,
    u.user_id::text as user_id,
    u.username,
    u.email
from users u
where u.username = $1
    and verify_password($2, u.password_hash);

Who Am I (Claim Mapping)

sql
-- sql/who-am-i.sql
/*
HTTP GET
@authorize
@user_parameters
@param $1 _user_id default null
@param $2 _username default null
@param $3 _email default null
*/
select $1 as user_id, $2 as username, $3 as email;

The @user_parameters annotation auto-fills parameters from authenticated user claims. Renaming $1 to _user_id maps it to the name_identifier claim. The default null ensures the parameters are always bindable.

Logout

sql
-- sql/logout.sql
-- HTTP POST
-- @logout
-- @authorize
select 'cookies';

Four lines. That's a complete logout endpoint.

Virtual Parameters with @define_param

Sometimes you need HTTP parameters that don't appear in the SQL query. @define_param creates virtual parameters for annotation placeholders and claim mapping:

sql
-- sql/users-report.sql
-- @define_param format text
-- @table_format = {format}
-- @param $1 department_id
select id, name, email from users where department_id = $1;

GET /api/users-report?department_id=5&format=excel — the format parameter controls the output format via the {format} placeholder without appearing in the SQL.

The @returns Annotation: Working with Temp Tables

SQL files enable patterns that functions can't — like creating temp tables in DO blocks and querying them. But temp tables don't exist at startup when NpgsqlRest describes the SQL. The @returns annotation solves this:

sql
-- HTTP GET
-- @param $1 val1 text
-- @param $2 val2 integer
begin;
select set_config('app.val1', $1, true); -- @skip
select set_config('app.val2', $2::text, true); -- @skip
do $$ begin
    create temp table _result on commit drop as
    select current_setting('app.val1') as val1,
           current_setting('app.val2')::int as val2,
           true as active;
end; $$;
-- @returns my_result_type
-- @result data
-- @single
select * from _result;
end;

Where my_result_type is a composite type that mirrors the temp table structure:

sql
create type my_result_type as (
    val1 text,
    val2 integer,
    active boolean
);

@returns completely skips the Describe step for that statement — the SQL is never sent to PostgreSQL during startup. Instead, column information is resolved from the composite type definition in pg_catalog.

@returns also supports scalar types (@returns integer) and void (@returns void).
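For example, a count over a temp table created earlier in the same file could declare a scalar return type (a hypothetical sketch; _staging is an illustrative temp table name):

```sql
-- @result total
-- @returns bigint
select count(*) from _staging;
```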

Real-World Example: Financial Dashboard with HTTP Types

SQL files support composite type parameters, including HTTP custom types that make external API calls automatically:

sql
-- sql/financial-dashboard.sql
/*
HTTP GET /financial-dashboard
@authorize

@param $1 _base_currency text
@param $2 _target_currencies_csv text
@param $3 _crypto_ids_csv text
@param $4 _vs_currencies_csv text

@param $5 _exchange_rate_response example_9.exchange_rate_api
@param $6 _crypto_response example_9.crypto_price_api
*/

begin;

-- @skip
create temp table _var on commit drop as
select 
    $1::text as base_currency,
    $5::example_9.exchange_rate_api as exchange_rate_response,
    $6::example_9.crypto_price_api as crypto_response;

do $$ 
declare
    _exchange_rate_response example_9.exchange_rate_api 
        = (select exchange_rate_response from _var);
    -- ... process API responses, build result
begin
    -- Business logic here
end; $$;

-- @result dashboard
-- @single
-- @returns example_9.financial_dashboard_result
select * from _result_out;
end;

Parameters $5 and $6 are HTTP custom types — NpgsqlRest makes the external API calls automatically and passes the responses as composite values. The SQL processes them with full PostgreSQL power. No middleware, no HTTP client code.

TypeScript Code Generation

SQL file endpoints get the same TypeScript client generation as function endpoints:

Single-command endpoint:

typescript
export async function getUsers(
    request: { active: boolean }
): Promise<{ id: number; name: string; email: string }[]> { ... }

Multi-command endpoint:

typescript
interface IMyFirstQueryResponse {
    first: string[];
    second: { queryText: string; user: string; timestamp: string };
}

export async function myFirstQuery(): Promise<IMyFirstQueryResponse> { ... }

Multi-command endpoints generate typed response interfaces with one property per result. Void commands are typed as number (rows affected). Parameters with defaults get ? suffix (optional).

Wire Protocol Introspection

How does NpgsqlRest know the parameter types and return columns without executing anything?

At startup, each statement goes through PostgreSQL's Parse → Describe → Sync cycle:

  1. Parse — sends the SQL to PostgreSQL's parser
  2. Describe — asks for parameter types and result column metadata
  3. Sync — completes the cycle

This is the same mechanism that PREPARE uses. No query planning, no execution — roughly the cost of SELECT 1. Parameter types come from ParameterDescription messages (authoritative OIDs). Return columns come from RowDescription messages.
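You can observe the same Describe metadata yourself in psql: preparing a statement populates pg_prepared_statements with the inferred parameter types (table and column names below are illustrative):

```sql
prepare q as select id, name from users where active = $1;
select name, parameter_types from pg_prepared_statements;
```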

For multi-command files, each statement is described individually and parameter types are merged. Same $N with same type → use that type. Conflicting types → startup error with a clear message (fix with @param $1 name type).
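As a hypothetical illustration: if one statement would describe $1 as integer and another as text, startup fails with a conflict. Pinning the type with an annotation, and casting where the integer context needs it, makes both statements agree (table and column names are illustrative):

```sql
-- @param $1 ref text
select * from users where id = $1::int;   -- $1 described as text, cast to int
select * from audit_log where note = $1;  -- $1 described as text
```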

Errors produce compiler-like messages:

code
SqlFileSource: /path/to/get-posts.sql:
error 42703: column u.id does not exist
  at line 3, column 12
  select u.id, u.name from users u
             ^

SQL errors are caught at startup, not at runtime. This is one of the key advantages — your SQL files are validated against the actual database schema before the server accepts any requests.

Configuration Reference

json
{
  "NpgsqlRest": {
    "SqlFileSource": {
      "Enabled": true,
      "FilePattern": "sql/**/*.sql",
      "CommentsMode": "OnlyWithHttpTag",
      "CommentScope": "All",
      "ErrorMode": "Exit",
      "ResultPrefix": "result",
      "UnnamedSingleColumnSet": true,
      "NestedJsonForCompositeTypes": false
    }
  }
}

| Setting | Default | Description |
|---|---|---|
| Enabled | false | Must be explicitly enabled |
| FilePattern | "" | Glob pattern for SQL files. Empty = disabled |
| CommentsMode | OnlyWithHttpTag | OnlyWithHttpTag requires an explicit HTTP annotation. ParseAll makes every file an endpoint. Ignore makes every file an endpoint but ignores comments |
| CommentScope | All | All parses every comment. Header only parses comments before the first statement |
| ErrorMode | Exit | Exit fails fast at startup. Skip logs the error and continues |
| ResultPrefix | "result" | Prefix for multi-command result keys (result1, result2, ...) |
| UnnamedSingleColumnSet | true | Single-column queries return flat arrays (["a","b"]) instead of object arrays |
| NestedJsonForCompositeTypes | false | Composite type columns as nested JSON objects. Also per-endpoint with @nested |

Disabling Other Sources

For SQL-files-only deployments:

json
{
  "NpgsqlRest": {
    "RoutineOptions": {
      "Enabled": false
    },
    "SqlFileSource": {
      "Enabled": true,
      "FilePattern": "sql/**/*.sql"
    }
  }
}

This skips the PostgreSQL catalog query for routines entirely.

Full Annotation Support

SQL file endpoints support every annotation available to function endpoints:

CategoryAnnotations
Auth@authorize, @allow_anonymous, @login, @logout, @user_parameters
Caching@cached, @cache_expires_in
Output@raw, @single, @void, @table_format, @nested, @sse
Parameters@param, @define_param, @request_param_type
Routing@path, @tag
Security@encrypt, @decrypt
Multi-command@result, @skip, @returns
Performance@timeout, @buffer_rows

Comments work in both line and block format:

sql
-- Line comments are annotations
/* Block comments are annotations too */

-- Inline annotations work after semicolons:
select * from users; -- @result users

SQL Files vs Functions: When to Use Which

| Aspect | SQL Files | Functions |
|---|---|---|
| Setup | Drop a file, done | CREATE FUNCTION + COMMENT ON |
| Parameters | Positional ($1, $2) + @param rename | Named parameters natively |
| Defaults | Via @param default | Native DEFAULT clause |
| Multi-statement | Natural — just separate with ; | Requires PL/pgSQL procedure |
| Temp tables | First-class with @returns | Supported in PL/pgSQL |
| Optimization | N/A | VOLATILE/STABLE/IMMUTABLE, COST, ROWS |
| Overloading | One file = one endpoint | Multiple signatures per name |
| Schema organization | File system directories | PostgreSQL schemas |
| Migrations | File changes = instant | Requires CREATE OR REPLACE |

Use SQL files when: the query is simple, changes frequently, involves multi-statement workflows, or the team prefers plain SQL.

Use functions when: you need optimization hints, parameter defaults natively, overloading, or the logic is complex enough to benefit from PL/pgSQL control flow.

Use both together. SQL files and functions coexist. Each endpoint source is independently enabled. Mix and match based on what fits each endpoint.

Try It Yourself

Every function-based example in the examples repository has a _sql_file counterpart:

bash
git clone https://github.com/NpgsqlRest/npgsqlrest-docs.git
cd npgsqlrest-docs/examples

# Install NpgsqlRest binary
bun install

# Try the simplest SQL file example
cd 1_my_first_function_sql_file
bun run db:up
bun run dev

# Visit http://127.0.0.1:8080

Compare the function-based and SQL file versions side by side to see how each feature translates.

What's Next
