Written with Claude
IMPORTANT

As you may notice, this page and pretty much the entire website were obviously created with the help of AI. I wonder how you could tell? Was it the big "Written With Claude" badge on every page? I moved it to the top now (with the help of AI, of course) to make it even more obvious. A few blog posts were written by me manually, the old-fashioned way (I hope there will be more in the future), and those carry a similar "Human Written" badge. This project (not the website), on the other hand, is a very, very different story. It took me more than two years of painstaking and unpaid work in my own free time. A story that, hopefully, I will tell someday. But meanwhile, what would you like me to do? Create a complex documentation website with a bunch of highly technical articles with the help of AI and then fake it, to give you the illusion that I did all of that manually? Like half the internet is doing at this point? How does that make any sense? Is that even fair to you? Or should I create this website manually, the old-fashioned way, just for you? While working a paid job, for a salary most of you wouldn't even get up in the morning for? Would you like me to sing you a song while we're at it? For your personal entertainment? Seriously, get a grip. Do you find this information less valuable because of the way this website was created? I do my best to keep the information as accurate as possible, and I think it is very accurate at this point. If you find mistakes, inaccuracies or problems, there is a comment section at the bottom of every page, which I also made with the help of AI, and I would very much appreciate it if you left your feedback there. Look, I'm just a guy who likes SQL, that's all. If you don't approve of how this website was constructed and of the use of AI tools, I suggest closing this page and never, ever coming back. And good riddance. And I would ban your access if I knew how. Thank you for your attention to this matter.

Changelog v3.4.4 (2025-01-17)


Full Changelog

Deep Nested Composite Type Resolution (ResolveNestedCompositeTypes)

By default, NpgsqlRest resolves nested composite types to any depth. When a composite type contains another composite type (or an array of composites), the inner composites are serialized as proper JSON objects/arrays instead of PostgreSQL tuple strings.

Example:

sql
create type inner_type as (id int, name text);
create type outer_type as (label text, inner_val inner_type);
create type with_array as (group_name text, members inner_type[]);

create function get_nested_data()
returns table(data outer_type, items with_array)
language sql as $$
select
    row('outer', row(1, 'inner')::inner_type)::outer_type,
    row('group1', array[row(1,'a')::inner_type, row(2,'b')::inner_type])::with_array;
$$;

Output:

json
[{
  "data": {"label":"outer","innerVal":{"id":1,"name":"inner"}},
  "items": {"groupName":"group1","members":[{"id":1,"name":"a"},{"id":2,"name":"b"}]}
}]
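Note that the snake_case SQL field names (inner_val, group_name) appear camel-cased in the JSON output. A minimal Python sketch of that kind of conversion, with the naming convention inferred from the output above rather than taken from the library's code:

```python
def to_camel(name: str) -> str:
    # snake_case -> camelCase: keep the first segment, capitalize the rest
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel("inner_val"))   # → innerVal
print(to_camel("group_name"))  # → groupName
```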

Configuration:

json
{
  "NpgsqlRest": {
    "RoutineOptions": {
      "ResolveNestedCompositeTypes": true
    }
  }
}

Default: true - nested composites are fully resolved.

How it works:

At application startup, when ResolveNestedCompositeTypes is enabled:

  1. Type Cache Initialization: Queries pg_catalog to build a cache of all composite types in the database, including their field names, field types, and nested relationships.

  2. Metadata Enrichment: For each routine that returns composite types, the field descriptors are enriched with nested type information from the cache.

  3. Runtime Serialization: During request processing, the serializer checks each field's metadata. If the field is marked as a composite type (or array of composites), it recursively parses the PostgreSQL tuple string and outputs a proper JSON object/array.
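The recursive parsing in step 3 can be sketched as follows. This is an illustrative Python sketch, not NpgsqlRest's actual implementation; the metadata dictionaries stand in for the enriched field descriptors from step 2, and only the common escaping rules of PostgreSQL composite literals (doubled quotes and backslash escapes inside quoted fields, empty unquoted field meaning NULL) are handled:

```python
def split_fields(body):
    """Split the body of a PostgreSQL composite literal (without the
    surrounding parentheses) into raw field strings; None means SQL NULL."""
    fields, i, n = [], 0, len(body)
    while True:
        if i < n and body[i] == '"':            # quoted field
            i += 1
            buf = []
            while i < n:
                c = body[i]
                if c == '"' and i + 1 < n and body[i + 1] == '"':
                    buf.append('"'); i += 2     # doubled quote -> literal quote
                elif c == '"':
                    i += 1; break               # closing quote
                elif c == '\\':
                    buf.append(body[i + 1]); i += 2  # backslash escape
                else:
                    buf.append(c); i += 1
            fields.append("".join(buf))
        else:                                   # unquoted field; empty means NULL
            j = i
            while j < n and body[j] != ',':
                j += 1
            fields.append(None if i == j else body[i:j])
            i = j
        if i >= n:
            return fields
        i += 1                                  # skip the comma separator

def resolve(value, ftype):
    """Recursively turn a composite tuple string into a dict, guided by
    per-field type metadata (a dict of field name -> type)."""
    if value is None:
        return None
    if isinstance(ftype, dict):                 # nested composite type
        parts = split_fields(value[1:-1])       # strip '(' and ')'
        return {name: resolve(part, t)
                for (name, t), part in zip(ftype.items(), parts)}
    return int(value) if ftype == "int" else value

# Hypothetical metadata mirroring outer_type from the example above
outer_meta = {"label": "text", "innerVal": {"id": "int", "name": "text"}}
print(resolve('(outer,"(1,inner)")', outer_meta))
# → {'label': 'outer', 'innerVal': {'id': 1, 'name': 'inner'}}
```

The key point the sketch illustrates: because inner composites arrive quoted inside the outer tuple string, the serializer only needs quote-aware splitting plus a recursive call per composite-typed field.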

When to disable (ResolveNestedCompositeTypes: false):

Scenario | Reason
Large schemas with thousands of composite types | Reduces startup time by skipping the type cache initialization query
No nested composites in your schema | If your composites don't contain other composites, the cache provides no benefit
Memory-constrained environments | The type cache consumes memory proportional to the number of composite types
Backward compatibility | If you depend on the old tuple string format "(1,x)" in your client code

Performance considerations:

  • Startup cost: One additional query to pg_catalog at startup to build the type cache
  • Memory: Cache size is proportional to: (number of composite types) × (average fields per type)
  • Runtime: Negligible - just a dictionary lookup per composite field

PostgreSQL version compatibility:

Tested and works on PostgreSQL 13 through 17. The feature uses standard pg_catalog views that are stable across PostgreSQL versions.

Edge cases handled:

  • Empty arrays of composites → []
  • NULL composite elements in arrays → [{"id":1},null,{"id":2}]
  • Composites with all NULL fields → {"id":null,"name":null}
  • Empty string vs NULL distinction → "" vs null
  • Unicode characters (emoji, Chinese, Arabic) → preserved correctly
  • Deeply nested structures (4+ levels) → fully resolved
  • Self-referencing types → cycle detection prevents infinite loops
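Array handling follows the same idea one level up, at the array-literal text. A self-contained Python sketch (again illustrative, not the library's code) that splits PostgreSQL composite-array text into elements, covering the empty-array and NULL-element cases listed above:

```python
def split_array(text):
    """Split PostgreSQL array-literal text like '{"(1,a)",NULL}' into raw
    element strings; None represents a NULL element."""
    body = text[1:-1]                  # strip '{' and '}'
    if body == "":
        return []                      # empty array -> []
    elems, i, n = [], 0, len(body)
    while True:
        if body[i] == '"':             # quoted element (composites always are)
            i += 1
            buf = []
            while i < n:
                c = body[i]
                if c == '"' and i + 1 < n and body[i + 1] == '"':
                    buf.append('"'); i += 2
                elif c == '"':
                    i += 1; break
                elif c == '\\':
                    buf.append(body[i + 1]); i += 2
                else:
                    buf.append(c); i += 1
            elems.append("".join(buf))
        else:                          # unquoted token; bare NULL means NULL
            j = i
            while j < n and body[j] != ',':
                j += 1
            tok = body[i:j]
            elems.append(None if tok.upper() == "NULL" else tok)
            i = j
        if i >= n:
            return elems
        i += 1                         # skip comma

print(split_array('{"(1,a)",NULL,"(2,b)"}'))  # → ['(1,a)', None, '(2,b)']
print(split_array("{}"))                      # → []
```

Each raw element string would then be fed through the same recursive composite parsing used for top-level fields.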

Bug Fixes

  • Fixed "permission denied for schema" error in the metadata query when a database user with limited privileges runs the routine discovery. The error occurred when a user with only USAGE permission on specific schemas tried to discover routines, but the database contained other schemas with composite types that the user couldn't access. The ::regtype cast in the metadata query would fail when attempting to resolve type names from unauthorized schemas. Added has_schema_privilege checks to filter out:
    • Array element types from schemas the user cannot access
    • Schemas the user cannot access from the schema aggregation
    • Routines that return types from schemas the user cannot access
