Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way, and those carry a similar "Human Written" badge; I hope there will be more of them in the future. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking and unpaid work in my own free time. A story that, hopefully, I will tell someday.

But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I did that manually too? Like half of the internet is doing at this point? How does that make any sense? Is that even fair to you? Or should I create this website manually, the old-fashioned way, just for you? While working a paid job, for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip.

Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies, or problems, there is a comment section at the bottom of every page (which I also made with the help of AI), and I would very much appreciate your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. And I would ban your access if I knew how. Thank you for your attention to this matter.

Changelog v3.4.0 (2025-01-16)

Full Changelog

Composite Type Support

Added automatic JSON serialization support for PostgreSQL composite types in two scenarios:

1. Arrays of Composite Types

When a function returns a column that is an array of a composite type (or table type), the array elements are now automatically serialized as JSON arrays of objects instead of PostgreSQL's text representation.

Example:

```sql
create type book_item as (
    book_id int,
    title text,
    author_id int
);

create function get_authors_with_books()
returns table(
    author_id int,
    author_name text,
    books book_item[]
)
language sql as $$
select * from (values
    (1, 'George Orwell', array[
        row(1, '1984', 1)::book_item,
        row(2, 'Animal Farm', 1)::book_item
    ])
) as t(author_id, author_name, books);
$$;
```

Previous behavior:

```json
[{"authorId":1,"authorName":"George Orwell","books":["(1,1984,1)","(2,Animal Farm,1)"]}]
```

New behavior:

```json
[{"authorId":1,"authorName":"George Orwell","books":[{"bookId":1,"title":"1984","authorId":1},{"bookId":2,"title":"Animal Farm","authorId":1}]}]
```

This feature is automatic and requires no annotations. It works with:

  • Custom composite types (create type)
  • Table types (arrays of table row types)
  • Composite types containing NULL values
  • Empty arrays and NULL arrays
  • Multiple array columns in the same result set
  • Primitive arrays inside composite types (e.g., int[] field) - properly serialized as JSON arrays
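Since the array elements now arrive as real JSON objects, a client can deserialize the response above directly into typed values. A minimal sketch, assuming the exact payload from the example (the interface names here are illustrative, not generated by the library):

```typescript
// Hedged sketch: deserializing the example get_authors_with_books response.
// BookItem and AuthorRow are illustrative names, not library output.
interface BookItem {
    bookId: number;
    title: string;
    authorId: number;
}

interface AuthorRow {
    authorId: number;
    authorName: string;
    books: BookItem[];
}

const payload =
    '[{"authorId":1,"authorName":"George Orwell","books":' +
    '[{"bookId":1,"title":"1984","authorId":1},' +
    '{"bookId":2,"title":"Animal Farm","authorId":1}]}]';

const rows: AuthorRow[] = JSON.parse(payload);
console.log(rows[0].books.map(b => b.title).join(", ")); // → 1984, Animal Farm
```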

Limitations (with ResolveNestedCompositeTypes: false):

When ResolveNestedCompositeTypes is disabled, the array composite serialization works for one level only. Nested structures have the following behavior:

| Scenario | Output |
|----------|--------|
| Nested composite (composite inside composite) | Inner composite serialized as PostgreSQL tuple string: "(1,x)" instead of {"id":1,"name":"x"} |
| Array of composites inside composite | Array of tuple strings: ["(1,a)","(2,b)"] instead of [{"id":1,"name":"a"},...] |

Note: These limitations do not apply when ResolveNestedCompositeTypes: true (the default). See the ResolveNestedCompositeTypes documentation in version 3.4.4 for full nested composite support.

For complex nested structures with the option disabled, use PostgreSQL's json_build_object/json_agg functions to construct the JSON directly in your query.
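If you cannot change the query and the option stays disabled, the tuple strings can also be split apart on the client. This is only a sketch for the simplest case: it assumes unquoted fields, while real PostgreSQL composite output may also contain double-quoted fields with escapes, which this does not handle.

```typescript
// Hedged sketch: minimally parsing a simple PostgreSQL tuple string such as
// "(1,a)" on the client when ResolveNestedCompositeTypes is disabled.
// Only unquoted fields are supported; quoted/escaped fields are not.
function parseTupleString(tuple: string): (string | null)[] {
    return tuple
        .replace(/^\(/, "")
        .replace(/\)$/, "")
        .split(",")
        .map(field => (field === "" ? null : field)); // empty field means NULL
}

console.log(parseTupleString("(1,a)"));  // → [ '1', 'a' ]
console.log(parseTupleString("(2,,b)")); // → [ '2', null, 'b' ]
```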

2. Nested JSON for Composite Type Columns (Opt-in)

When a function returns a composite type column, by default the composite type fields are expanded into separate columns (existing behavior preserved for backward compatibility).

To serialize composite type columns as nested JSON objects, you can either:

  1. Enable globally via the configuration option NestedJsonForCompositeTypes (default is false):

```json
{
  "NpgsqlRest": {
    "RoutineOptions": {
      "NestedJsonForCompositeTypes": true
    }
  }
}
```

  2. Enable per-endpoint via a comment annotation (nested, nested_json, or nested_composite):

```sql
comment on function get_user_with_address() is 'nested';
-- or: 'nested_json'
-- or: 'nested_composite'
```

Example:

```sql
create type address_type as (
    street text,
    city text,
    zip_code text
);

create function get_user_with_address()
returns table(
    user_id int,
    user_name text,
    address address_type
)
language sql as $$
select 1, 'Alice', row('123 Main St', 'New York', '10001')::address_type;
$$;
```

comment on function get_user_with_address() is 'nested';

Default behavior (expanded columns):

```json
[{"userId":1,"userName":"Alice","street":"123 Main St","city":"New York","zipCode":"10001"}]
```

With nested annotation or NestedJsonForCompositeTypes: true:

```json
[{"userId":1,"userName":"Alice","address":{"street":"123 Main St","city":"New York","zipCode":"10001"}}]
```
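With nested serialization enabled, a client reads the address as a sub-object rather than as flattened columns. A small sketch assuming the payload above (interface names are illustrative):

```typescript
// Hedged sketch: reading the nested-address response shown above.
// Address and UserRow are illustrative names.
interface Address { street: string; city: string; zipCode: string; }
interface UserRow { userId: number; userName: string; address: Address; }

const users: UserRow[] = JSON.parse(
    '[{"userId":1,"userName":"Alice","address":' +
    '{"street":"123 Main St","city":"New York","zipCode":"10001"}}]'
);
console.log(users[0].address.city); // → New York
```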

Multidimensional Array Support

Added proper JSON serialization for multidimensional PostgreSQL arrays. Previously, multidimensional arrays were serialized incorrectly, producing invalid JSON. Now they are properly converted to nested JSON arrays.

Example:

```sql
create function get_2d_int_array()
returns table(
    matrix int[][]
)
language sql as $$
select array[[1,2,3],[4,5,6]];
$$;
```

Previous behavior (invalid JSON):

```
[{"matrix":[{1,2,3,{4,5,6]}]
```

New behavior:

```json
[{"matrix":[[1,2,3],[4,5,6]]}]
```

This feature is automatic and requires no configuration. It works with:

  • 2D arrays: {{1,2},{3,4}} → [[1,2],[3,4]]
  • 3D arrays: {{{1,2},{3,4}},{{5,6},{7,8}}} → [[[1,2],[3,4]],[[5,6],[7,8]]]
  • Higher dimensional arrays
  • All primitive types (int, text, boolean, numeric, etc.)
  • NULL values within multidimensional arrays

Limitation: Multidimensional arrays of composite types are serialized as nested arrays of PostgreSQL tuple strings, not as fully expanded JSON objects. For example, a 2D array of composites {{"(1,a)","(2,b)"},{"(3,c)","(4,d)"}} becomes [["(1,a)","(2,b)"],["(3,c)","(4,d)"]]. The data is preserved but not fully parsed. For complex nested structures, consider using PostgreSQL's json_build_object/json_agg functions instead.

JSON Escaping Fix for Arrays and Tuple Strings

Fixed JSON escaping issues that could produce invalid JSON output when PostgreSQL arrays or composite types contain special characters. The fix ensures all special characters are properly escaped:

Characters now properly escaped:

  • Quotes (") - escaped as \"
  • Backslashes (\) - escaped as \\
  • Newlines - escaped as \n
  • Tabs - escaped as \t
  • Carriage returns - escaped as \r
  • Combined special characters in the same string

Example:

```sql
create function get_text_array()
returns text[]
language sql as $$
select array['hello "world"', 'path\to\file', E'line1\nline2'];
$$;
```

Previous behavior (could produce invalid JSON):

```
["hello \"world\"", "path\to\file", "line1
line2"]
```

New behavior (valid JSON):

```json
["hello \"world\"","path\\to\\file","line1\nline2"]
```

This fix applies to:

  • Simple text arrays with special characters
  • Multidimensional arrays (2D, 3D, etc.)
  • Nested composite types serialized as tuple strings
  • Arrays of composite types with special characters in field values
  • Unicode characters and emoji (preserved correctly)
  • Empty strings and whitespace-only strings
  • JSON-like string content (properly escaped, not parsed)
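The escaping rules listed above match what any standard JSON encoder produces, which can be checked against JSON.stringify:

```typescript
// Hedged check: standard JSON encoding of the same three example strings
// produces exactly the escaped output shown in "New behavior" above.
const values = ['hello "world"', "path\\to\\file", "line1\nline2"];
console.log(JSON.stringify(values));
// → ["hello \"world\"","path\\to\\file","line1\nline2"]
```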

TsClient Plugin: Composite Type Interface Generation

The TsClient plugin now generates proper TypeScript interfaces for composite types:

Generated TypeScript:

```typescript
interface IBooks {
    bookId: number | null;
    title: string | null;
    authorId: number | null;
}

interface IAddress {
    street: string | null;
    city: string | null;
    zipCode: string | null;
}

interface IGetAuthorsWithBooksResponse {
    authorId: number | null;
    authorName: string | null;
    books: IBooks[] | null;  // Array of composite type
}

interface IGetUserWithAddressResponse {
    userId: number | null;
    userName: string | null;
    address: IAddress | null;  // Nested composite type
}
```

Features:

  • Separate interfaces generated for each unique composite type structure
  • Array composite columns typed as InterfaceName[]
  • Nested composite columns typed as InterfaceName
  • Interfaces are deduplicated when the same composite structure appears in multiple functions

TsClient Limitation - Multidimensional Arrays:

PostgreSQL normalizes multidimensional array types (int[][], int[][][]) to single-dimensional (integer[]) in all catalog views. This is a PostgreSQL limitation—there is no way to retrieve the original array dimensionality from metadata.

Consequence: Multidimensional arrays are typed as single-dimensional in TypeScript:

  • int[][] → number[] (instead of number[][])
  • int[][][] → number[] (instead of number[][][])

The runtime JSON is always correct (e.g., [[1,2],[3,4]]), but the TypeScript type won't match. For strict TypeScript projects, manually cast the response type when using multidimensional arrays.
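The manual cast can look like the following sketch, which assumes a generated response type for the get_2d_int_array example (the interface name is illustrative):

```typescript
// Hedged sketch: casting a multidimensional array response to its true shape.
// The generated type declares matrix as number[] (per the limitation above),
// but the runtime JSON is genuinely two-dimensional.
interface IGet2dIntArrayResponse {
    matrix: number[] | null;
}

const response: IGet2dIntArrayResponse = JSON.parse('{"matrix":[[1,2,3],[4,5,6]]}');
// Cast through unknown because number[] and number[][] do not overlap.
const matrix = response.matrix as unknown as number[][];
console.log(matrix[1][2]); // → 6
```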

Optional @ Prefix Extended to Annotation Parameters

The optional @ prefix for comment annotations (introduced in 3.3.1) now also works with annotation parameters using the key = value syntax.

Both syntaxes are now equivalent:

```sql
-- Without @ prefix
comment on function my_func() is '
HTTP GET
raw = true
timeout = 30s
my_custom_param = custom_value
';

-- With @ prefix
comment on function my_func() is '
HTTP GET
@raw = true
@timeout = 30s
@my_custom_param = custom_value
';
```

This applies to all annotation parameters including raw, timeout, buffer, connection, user_context, user_parameters, SSE settings, basic auth settings, and custom parameters.

Custom parameters with @ prefix are stored without the prefix (e.g., @my_param = value is stored as my_param).
