Planet PostgreSQL
Latest Blog Posts
All Your GUCs in a Row: application_name
Posted by Christophe Pettus in pgExperts on 2026-04-24 at 01:00
Most GUCs in this series will be operationally irrelevant to most readers. This one is not. application_name is the single cheapest piece of observability infrastructure PostgreSQL ships, and an astonishing number of production databases are running with it unset or stuck at a client library’s…
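(Not from the post, just the mechanics for reference; 'billing-worker' is an arbitrary example name.)
-- Label the connection; most client libraries also accept application_name as a connection parameter.
SET application_name = 'billing-worker';
-- The label is then visible per backend (and in log_line_prefix via %a).
SELECT pid, application_name, state
  FROM pg_stat_activity
 WHERE application_name = 'billing-worker';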
PREEMPT_NONE Is Dead; Your Postgres Probably Doesn’t Care
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 19:00
A benchmark came out of AWS earlier this month showing PostgreSQL throughput on Linux 7.0 dropping to 0.51x what the same workload produced on Linux 6.x. The Phoronix headline wrote itself. Hacker News did what Hacker News does. By the end of the week, I had been asked by three separate clients w…
Postgres Conference 2026
Posted by Dave Stokes on 2026-04-23 at 14:46
Postgres Conference 2026 was held in San Jose, California, and once again, I was lucky to be invited to speak. This is a great show for the 'hallway track', where you talk to members of the community and discover many interesting things.
I had a brief conversation with two early contributors to the original PostgreSQL project. One said he was surprised by how much of his code was still in the code base after FORTY YEARS.
AI and MCP are fully interlaced in many projects. The one that struck me the most was pgEdge's AI DBA Workbench. It features three tiers of detection and an agentic AI assistant to watch it all. I was talking with CEO David Mitchell about it and told him about my past frustrations with Percona's PMM. He then told me the author of the AI DBA Workbench also wrote PMM. And pgAdmin4.
Observability seems to be the hot, underlying theme this year. Take a look at pg_collector. It is an SQL script that gathers valuable database information and consolidates it into a single HTML file, providing a convenient way to view and navigate the report's sections. One issue with PostgreSQL management is that it is a very complex product with about 400 different settings, and nobody can know in depth how they all work. Pg_collector covers a lot of the territory by showing duplicate indexes, invalid indexes, unused indexes, autovacuum parameters, sequences with less than 10% of their values remaining, orphaned prepared transactions, connections without SSL, excessive logging parameters, the track_counts parameter, the synchronous_commit parameter, invalid databases, tables with more than 20% dead rows, transaction ID (XID) wraparound, tables with autovacuum_enabled=off at the table level, inactive replication slots, logical replication spill files, and outdated extensions.
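(As an aside, checks of this kind are each a few lines of SQL. A hedged sketch in the same spirit, not pg_collector's own code: find tables whose dead-tuple share exceeds 20%.)
SELECT schemaname, relname, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
  FROM pg_stat_user_tables
 WHERE 100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0) > 20
 ORDER BY dead_pct DESC;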
For more on observability, check out Ryan Booz's session from the second day, the one not about beekeeping.
This is being written on the morning of the last of three days. I needed to capture some of the highlights of the show before th
[...]
Async I/O in PostgreSQL 19: The Year After
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 13:00
PostgreSQL 18 shipped asynchronous I/O. The dominant flavor on Linux was io_uring; everything else fell back to a worker pool controlled by io_method=worker. Early benchmarks from pganalyze, Aiven, and Better Stack showed real wins on read-heavy workloads with large sequential scans. They also sh…
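(For reference, the knob in question on a PostgreSQL 18 instance; note that changing io_method takes effect only after a server restart.)
SHOW io_method;                           -- 'worker' is the default in 18
ALTER SYSTEM SET io_method = 'io_uring';  -- Linux only, when built with liburing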
All your GUCs in a row: allow_system_table_mods
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 01:00
Here is a GUC that ships with a warning label. The docs, which are normally restrained to the point of parody, state plainly that setting this parameter wrong can cause “irretrievable data loss or seriously corrupt the database system.” When the PostgreSQL docs raise their voice, list…
Give Us Access, Already
Posted by Christophe Pettus in pgExperts on 2026-04-22 at 13:00
If you’re going to hire a PostgreSQL consultant, hire one. That means access to the database. I’m writing this because the “we hired you but you can’t touch the thing” conversation happens at the start of roughly one in four PGX engagements, and I would like to have …
Introducing the AI DBA Workbench: PostgreSQL Monitoring That Diagnoses, Not Just Reports
Posted by Antony Pegg in pgEdge on 2026-04-22 at 05:54
PostgreSQL is dominating the database market, and the monitoring tools haven't noticed.
More teams run Postgres in production every year. More of those deployments are distributed, multi-region, and mission-critical. And the tooling most of those teams rely on was built for a simpler world: a single instance, a handful of threshold alerts, and a senior DBA who can interpret what the graphs mean at 3 AM. That works when you have one cluster and one person who knows where the bodies are buried. It falls apart the moment you scale past either of those constraints.
We built the pgEdge AI DBA Workbench to close that gap, and today it's entering public beta. We think it's the best PostgreSQL monitoring and management platform you've seen, and the rest of this post explains why. It works with any Postgres 14+: local installs, self-hosted enterprise estates, Supabase, Amazon RDS - if you can connect to it, you can monitor it.
Three Services, One Platform
The Workbench is a self-hosted platform that combines three services into a single deployment. A collector gathers metrics from every monitored PostgreSQL instance. An alerter evaluates those metrics against threshold rules and a layered anomaly detection system. A server ties everything together through a web UI, a REST API, and a Model Context Protocol (MCP) endpoint that lets AI tools talk directly to your databases.
It runs on your infrastructure, ships under the PostgreSQL license, and stores nothing outside your network.
34 Probes, Zero Agents
The collector is the foundation. Point it at your PostgreSQL instances (any PostgreSQL 14 or later, not just pgEdge) and it starts pulling metrics across 34 built-in probes covering query performance, replication health, active connections, WAL throughput, vacuum activity, checkpoints, database conflicts, IO statistics, system-level CPU and memory, disk usage, and more.
Two things matter about how collection works. First, the collector connects remotely over standard PostgreSQL connections. There are no agents to install on your datab[...]
All your GUCs in a row: allow_in_place_tablespaces
Posted by Christophe Pettus in pgExperts on 2026-04-22 at 01:00
allow_in_place_tablespaces exists so the PostgreSQL test suite can test replication. That’s it. If you’re reading this as an operator, you will never touch it. But it’s in the alphabet, so here we are. When off (the default), CREATE TABLESPACE requires a LOCATION that points to …
Polish configuration for TSearch
Posted by Hubert 'depesz' Lubaczewski on 2026-04-21 at 21:06
Some time ago someone posted on Reddit that they have problems adding Polish configuration to their PostgreSQL. While checking it, I found some interesting news. And of course figured out how to have Polish configuration… First, the news: apparently in PostgreSQL 19, we will automatically get Polish dictionary, thanks to this commit from 5th of …
Continue reading "Polish configuration for TSearch"
Hints, Part 3: Advice, Not Orders
Posted by Christophe Pettus in pgExperts on 2026-04-21 at 13:00
Robert Haas’s pg_plan_advice patch set, proposed for PostgreSQL 19, is where the twenty-year argument from Part 2 has landed — or is trying to. It is not pg_hint_plan brought into core. It is a different thing, with different mechanics, a different scope, and a different answer to the “…
Waiting for PostgreSQL 19 – Add CONCURRENTLY option to REPACK
Posted by Hubert 'depesz' Lubaczewski on 2026-04-21 at 10:40
On 6th of April 2026, Álvaro Herrera committed patch: Add CONCURRENTLY option to REPACK   When this flag is specified, REPACK no longer acquires access-exclusive lock while the new copy of the table is being created; instead, it creates the initial copy under share-update-exclusive lock only (same as vacuum, etc), and it follows an MVCC …
Continue reading "Waiting for PostgreSQL 19 – Add CONCURRENTLY option to REPACK"
How to Use the pgEdge Control Plane: From Zero to Multi-Master and Beyond
Posted by Antony Pegg in pgEdge on 2026-04-21 at 09:37
A couple of months back, the CEO challenged product and marketing to revamp the developer experience on our website in three weeks. I vibe-coded a proof of concept full of "try it now" buttons and interactive guides, the CEO loved it, and then I had to deal with almost every one of those interactive guides being a placeholder card. Engineering was fully booked, and the Control Plane product I needed to write guides for was one I knew inside out at the architecture level but had never personally operated end-to-end through the API.
So I sat down and learned the pgEdge Control Plane the hard way: by using it. What follows is what I found, organized as the guide I wish I'd had when I started. If you're evaluating Control Plane, deploying it for the first time, or trying to understand what Day 2 operations actually look like, this is for you.
What Is the Control Plane?
pgEdge Control Plane is a lightweight orchestrator for PostgreSQL. It manages the full database lifecycle (creation, replication, failover, backup, restore, scaling) through a declarative REST API. You describe the database you want in a JSON spec, POST it, and Control Plane handles the rest: configuration, networking, Spock multi-master replication, Patroni for high availability, pgBackRest for backups. All of it.
The important thing to understand is that setup is only half the story. There are enough tools out there that can get you a running cluster if you know what you're doing. The hard part, the part where most tools leave you on your own, is Day 2. Modifying a running HA cluster. Adding a node to a live distributed database. Performing a rolling upgrade without downtime. Restoring from backup while keeping replication intact across the remaining nodes. That's where the complexity lives, and that's where Control Plane earns its keep.
Getting Started: Zero to Multi-Master in Five Minutes
Caveat
I’m not promising that every line of code in here will run as-is - it's real, but as I learned the hard way while building the interactive guid[...]
Finding invisible use-after-free bugs in the PostgreSQL planner
Posted by Andrei Lepikhov in pgEdge on 2026-04-20 at 19:57
On a PostgreSQL build with assertions enabled, run the standard make check-world suite with a small debugging extension called pg_pathcheck loaded. It will report on pointers to freed memory in the planner's path lists. Such dangling pointers exist even in core Postgres right now. They are harmless today. But the word today is what makes this worth writing about.
A production story
This story started in July 2021. At the time, I was finishing a sharding solution built on top of postgres_fdw. During testing, our engineers sent me an example query that would crash periodically with a SEGFAULT. One look at the plan told me something was very off.
Append
  ->  Nested Loop
        Output: data_1.b
        Join Filter: (g1.a = g2.a)
        ...
  ->  Materialize
        Output: g2.a, data_2.b
        ->  Hash Join
              Output: g2.a, data_2.b
              Hash Cond: (data_2.b = g2.a)
              ...
The first obvious question: how did a Materialize node end up as a direct child of an Append? The second, more interesting one: how can one Append combine two sources with different tuple widths? No wonder the query was crashing — and to make it worse, the failure was intermittent; sometimes the very same query produced a perfectly reasonable plan.
On paper, the bug shouldn't have been possible: the optimiser doesn't work that way. A few days of debugging pointed the finger at dangling pointers. While building one of the alternative Append paths, the optimiser adds a cheaper path to a child RelOptInfo’s pathlist and evicts the one that was there before. But the previously constructed Append still holds a pointer to that now-freed slot. A step or two later, the allocator hands the exact same chunk back out for a new Path higher up the tree, for, say, an enclosing JOIN. The result is a plan that makes no semantic sense at all.
Where dangling pointers come from
PostgreSQL builds each relation's pathlist incrementally through add_path(). When a newly arrived path dominates an e
[...]
All your GUCs in a row: allow_alter_system
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 18:00
We begin at allow_alter_system, which is both new and politically fraught — so let’s start with a fight. ALTER SYSTEM was added in 9.4 as a quality-of-life improvement: set GUCs from an SQL prompt, have the values written into postgresql.auto.conf, no shell access required. It was immediate…
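(For readers who have not used it, ALTER SYSTEM in two lines, plus roughly what the new GUC turns it into.)
ALTER SYSTEM SET work_mem = '64MB';  -- written to postgresql.auto.conf
SELECT pg_reload_conf();             -- picked up without a restart for reloadable GUCs
-- With allow_alter_system = off (PostgreSQL 17+), the first statement fails
-- with an error along the lines of: ALTER SYSTEM is not allowed in this environment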
Hints, Part 2: Features We Do Not Want
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 13:00
For most of PostgreSQL’s history, the official community position on query hints has been a polite version of “no, and stop asking.” The position isn’t subtle. The PostgreSQL wiki maintains a page titled Not Worth Doing, and “Oracle-style optimizer hints” is li…
Understanding PostgreSQL REPACK Through repack.c
Posted by Chao Li in Highgo Software on 2026-04-20 at 08:55
REPACK is a new PostgreSQL 19 feature for physically compacting a table by rewriting it into new storage. Like VACUUM, it deals with the space left behind by dead tuples, but it does so by building a fresh table file instead of mostly cleaning pages in place. Ordinary VACUUM can mark space reusable inside the table and may truncate some empty pages at the end, but it usually cannot fully return bloat to the operating system. REPACK, like VACUUM FULL, rewrites the table into a compact file and swaps that storage into place. The important difference from VACUUM FULL is that REPACK CONCURRENTLY keeps the table usable for most of the operation by copying a snapshot and replaying concurrent changes before a short final lock-and-swap phase.
REPACK code is interesting because it sits between several difficult subsystems: table rewrites, index rebuilds, relfilenode swaps, logical decoding, background workers, snapshots, and lock management. Reading repack.c is a good way to understand how PostgreSQL can physically rebuild a table while preserving the table’s logical identity.
At a high level, REPACK creates a new physical copy of a table, fills it with live tuples from the old table, rebuilds or swaps indexes, and then swaps the physical storage underneath the original relation OID. The user still sees the same table OID, privileges, dependencies, inheritance relationships, and catalog identity, but the heap file is new and compact.
The file comment at the top of repack.c summarizes the two modes:
- non-concurrent mode: take AccessExclusiveLock, rewrite the table, swap storage, drop the old storage
- concurrent mode: take ShareUpdateExclusiveLock, copy the table while writes continue, decode concurrent changes from WAL, replay them into the new heap, briefly upgrade to AccessExclusiveLock, apply remaining changes, then swap
That split drives almost every design choice in the file.
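(A hedged sketch of the two modes from SQL; the option spelling is an assumption based on the development branch, so verify it against the final PostgreSQL 19 documentation.)
REPACK mytable;                  -- non-concurrent: AccessExclusiveLock for the whole rewrite
REPACK (CONCURRENTLY) mytable;   -- concurrent: copy, replay WAL changes, brief final lock to swap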
Entry Point
The main SQL entry point is ExecRepack() in repack.c. It parses options like VERBOSE, ANALYZ
[...]
Contributions for week 15, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-04-20 at 08:07
The London PostgreSQL Meetup Group met on April 14, 2026, organized by Valeria K. (Data Egret), Chris Ellis, Alastair Turner, and Michael Christofides.
Monica Sarbu spoke at The San Francisco Bay Area PostgreSQL Meetup Group, which met virtually on April 14, 2026, organized by Katharine Saar, Stacey Haysler, and Christophe Pettus.
On April 15, 2026, the Postgres Meetup for All met virtually, organized by Elizabeth Christensen. Speakers: Arun Kumar Samayam, Phani Kadambari, Y V Ravi Kumar.
The Barcelona (+Valencia) PostgreSQL User Group met on April 17, 2026. Organizers: Marcelo Diaz, Valeria Haro, Laura Minen, Martín Marqués. Speakers: Marcelo Díaz, Javier Vela.
On April 15, 2026, Ellyne Phneah delivered a PostgreSQL talk at the Digital Marketing Europe 2026 conference.
Community Blog Posts:
Andreas Scherbaum wrote about PGConf India 2026 - Review.
Jesús Espino wrote about his book publication "My Book Is Out: Deep Dive into a SQL Query" (https://www.linkedin.com/pulse/my-book-out-deep-dive-sql-query-jes%C3%BAs-espino-arcqe/).
Book Publications:
Jesús Espino published a PostgreSQL book: Deep dive into a SQL query - A Journey Through PostgreSQL's Query Processing.
The Postgres Performance Triangle
Posted by Richard Yen on 2026-04-20 at 08:00
Everyone who’s gone at least knee-deep in photography knows there’s this idea of the exposure triangle: aperture, shutter speed, and ISO. Depending on what you’re going for artistically, you adjust the three parameters, knowing that there are trade-offs in doing so. After working on a few cases, and presenting solutions to customers, I’ve started to think about Postgres performance tuning in a similar way – there are basic parameters that can be tuned, and there are trade-offs for the choices DBAs make:
Memory Allocation
Disk I/O
Concurrency
Each of these (in broad strokes) affects throughput – how much work your system gets done.
Caveat: I know that in the academic sense, “throughput” doesn’t quite capture the balance of these concepts, but please bear with me!
Let’s talk about how each of these three works together with the whole system, and what the trade-offs look like.
Memory Allocation
When you increase memory allocation in Postgres, whether it’s shared_buffers or work_mem, things tend to feel smoother. Most notably, queries spill to disk less often, sorts and joins stay in memory, and cache hit rates improve. But there’s a trade-off that’s easy to miss at first, especially with these two parameters. A single complex query can consume multiple chunks of work_mem (see Laetitia’s excellent post about it). Multiply that across concurrent queries, and you begin to see the OS consuming swap space, churning at checkpoints, and even the OOM killer getting invoked. So while more memory can make things faster, it also quietly reduces how much concurrency your system can safely handle.
I’d relate this to aperture – you can throw money at some fast glass, but you also get shallower depth of field (in an annoying way).
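(You can watch the spill behaviour directly; a minimal sketch using a throwaway sort.)
SET work_mem = '64kB';
EXPLAIN (ANALYZE, COSTS OFF)
SELECT g FROM generate_series(1, 100000) AS g ORDER BY g;
-- reports something like: Sort Method: external merge  Disk: ...kB
SET work_mem = '64MB';
-- the same query now reports: Sort Method: quicksort  Memory: ...kB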
Disk I/O
Disk is where things go when memory isn’t enough, or when an access pattern requires it. We see examples of this in sequential scans, random index lookups, and temporary files from sorts or hashes. Lowering work_mem might increase disk I/O due to so
[...]
Hints, Part 1: The State of the Art Everywhere But Here
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 06:00
pg_plan_advice is expected to land in PostgreSQL 19. That makes this a good moment to look at query hints — what they are, what every other major database does with them, and how PostgreSQL ended up being the obvious outlier. Three parts. This is the first. What a hint is A query hint is an instr…
PostgreSQL MVCC, Byte by byte
Posted by Radim Marek on 2026-04-17 at 13:15
You run SELECT * FROM orders in one psql session and see 50 million rows. A colleague in another session runs the same query at the same moment and sees 49,999,999. Neither of you is wrong, and neither is seeing stale data. You are both reading the same 8KB heap pages, the same bytes on disk.
This is the promise of PostgreSQL's MVCC (Multi-Version Concurrency Control), and it's the reason readers never block writers and writers never block readers. It is also one of the most misunderstood pieces of the storage engine. People know "there are multiple versions of a row" and leave it at that. So how can the same bytes yield two different answers?
The answer lives in eight bytes on every single tuple.
xmin and xmax: the only two XIDs that matter
If you've read Inside the 8KB Page, you know that every tuple starts with a 23-byte header. The first eight bytes of that header are two 32-bit transaction IDs: t_xmin (the transaction that inserted this version) and t_xmax (the transaction that deleted or updated it, or 0 if it's still live).
That's the core of MVCC at the storage level. PostgreSQL does not keep a separate "current version" table. It does not mark rows as latest. Every tuple carries its own two-field timestamp, and when your query reads a page, PostgreSQL has to decide, tuple by tuple, whether your transaction is allowed to see it.
A minimal demo:
CREATE TABLE mvcc_demo (id int, val text);
INSERT INTO mvcc_demo VALUES (1, 'alpha'), (2, 'beta');
Peek at the raw page with pageinspect:
SELECT lp, t_xmin, t_xmax, t_ctid
  FROM heap_page_items(get_raw_page('mvcc_demo', 0));
 lp | t_xmin | t_xmax | t_ctid
----+--------+--------+--------
  1 |    100 |      0 | (0,1)
  2 |    100 |      0 | (0,2)
(2 rows)
Two tuples. Both stamped with t_xmin = 100 (the transaction that ran the INSERT) and t_xmax = 0 (nobody has deleted them). At this moment, every session on the database will see these rows, because everyone's snapshot agrees that transaction 100 has committed.
Now open two concurrent sessions. Session A runs an UPDATE without committ
[...]
Enforcing Constraints Across Postgres Partitions
Posted by Shaun Thomas in pgEdge on 2026-04-17 at 05:48
Postgres table partitioning is one of those features that feels like a superpower right up until it isn't. Just define a partition key, carve up data into manageable chunks, and everything hums along beautifully. And what's not to love? Partition pruning in query plans, smaller tables, faster maintenance, easy archiving of old data; it's a smorgasbord of convenience.
Then you try to enforce a unique constraint without including the partition key, and Postgres behaves as if you just asked it to divide by zero. Well... about that.
The Rule Nobody Reads Until It's Too Late
The Postgres documentation on partitioning spells it out pretty clearly in the limitations section: To create a unique or primary key constraint on a partitioned table, the partition keys must not include any expressions or function calls and the constraint's columns must include all of the partition key columns.
Read that again. The constraint's columns must include all of the partition key columns. Not "should." Not "it would be nice if." Must. And the reasoning is maddeningly justified: each partition maintains its own index, and a local index can only enforce uniqueness within its own partition. Postgres has no concept of a global index that spans all partitions simultaneously, so it has no mechanism to check whether some value in partition A already exists in partition B.
Other database engines (Oracle, for instance) have global indexes that solve this at the storage layer. Postgres does not, and there's been no serious movement on the mailing lists to add them. So we're left to our own devices.
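(The rule is easy to see in action; a minimal sketch with hypothetical table and column names.)
CREATE TABLE events (
    id        bigint GENERATED ALWAYS AS IDENTITY,
    dedup_key text NOT NULL
) PARTITION BY RANGE (id);
-- rejected: the constraint's columns do not include the partition key (id)
ALTER TABLE events ADD CONSTRAINT events_dedup_uniq UNIQUE (dedup_key);
-- accepted, but uniqueness of dedup_key alone is no longer guaranteed across partitions
ALTER TABLE events ADD CONSTRAINT events_dedup_composite UNIQUE (id, dedup_key);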
When Theory Meets the Event Pipeline
Consider a fairly common (if somewhat contrived) scenario: an events table partitioned by range on an identity column. The table includes a deduplication key that the application uses to prevent duplicate event processing. Naturally, that should be unique across all partitions. Now try adding that unique constraint to the table without an error. The partition key is the identity column, and the deduplication key doesn't include it, so Postgres rejects the constraint. You could make a composit[...]
Postgres 19 Release Notes
Posted by Bruce Momjian in EDB on 2026-04-15 at 21:15
I have just completed the first draft of the Postgres 19 release notes. It has had little developer community feedback so far and still needs more XML markup and links. This year I have created a wiki page explaining the process I use.
The release note feature count is 212, which includes a strong list of administrative and monitoring features. Postgres 19 Beta 1 should be released in a few months. The final release is planned for September/October of this year.
Waiting for PostgreSQL 19 – Online enabling and disabling of data checksums
Posted by Hubert 'depesz' Lubaczewski on 2026-04-15 at 18:05
On 3rd of April 2026, Daniel Gustafsson committed patch: Online enabling and disabling of data checksums   This allows data checksums to be enabled, or disabled, in a running cluster without restricting access to the cluster during processing.   Data checksums could prior to this only be enabled during initdb or when the cluster is …
Continue reading "Waiting for PostgreSQL 19 – Online enabling and disabling of data checksums"
Introducing Xata OSS: Postgres platform with branching, now Apache 2.0
Posted by Tudor Golubenco in Xata on 2026-04-15 at 12:30
Xata core is now available as open source under the Apache 2 license. It adds copy-on-write branching and scale-to-zero compute to Postgres.
pgEdge Vectorizer and RAG Server: Bringing Semantic Search to PostgreSQL (Part 2)
Posted by Ahsan Hadi in pgEdge on 2026-04-15 at 06:29
In my previous blog, I walked through setting up the pgEdge MCP Server with a distributed PostgreSQL cluster, and connecting Claude to live database data through natural language. In this blog I want to look at a different problem: how do you build AI-powered search over your own content, without adding a separate vector database to your infrastructure?
This is where the pgEdge Vectorizer and RAG Server come in. Together, they give you a complete open-source Retrieval-Augmented Generation (RAG) pipeline that runs entirely inside PostgreSQL. In this blog, I'll explain what each component does, how they work together, and walk through working examples that you can follow on your own PostgreSQL instance. I am following the same pattern here as in my other blogs: explain each component, then provide real-world working examples to help the reader better understand these concepts.
Please note: I am using my Rocky Linux VM for this installation and testing, and the Ollama embedding provider (installed on my VM) to generate the embeddings.
Background: The Problem With Keeping Vector Search In Sync
Most teams building AI-powered search hit the same wall. You set up a vector search pipeline, load your documents, generate embeddings, and everything works. Then someone updates a document or adds a new one - suddenly you need a process to detect the change, re-chunk the content, regenerate the embeddings, and update the index. Teams typically solve this with custom scripts, message queues, or external orchestration tools - all of which need to be built, maintained, and monitored separately from the database.
The pgEdge Vectorizer eliminates that problem entirely. It runs as a PostgreSQL background worker. Once you enable Vectorizer on a table, it monitors the source data through triggers, chunks and embeds new or modified rows automatically, and keeps the search index in sync without any external orchestration. The same transactional guarantees that PostgreSQL gives y[...]
Postgres performance regression: are we there yet?
Posted by Lætitia AVROT on 2026-04-15 at 00:00
Every year, PostgreSQL gets faster. Researchers benchmarking the optimizer from version 8 through 16 found an average 15% performance improvement per major release. That’s a decade of consistent, measurable progress. The project has been doing this since 1996. So when a headline claimed Linux 7.0 just halved PostgreSQL throughput, DBAs, sysadmins, and DevOps started panicking (in particular, those working with Ubuntu 26.04 LTS, which plans to ship Linux kernel 7.
AI-Ready PostgreSQL 18 Is Out: Why AI Applications Win or Lose at the Seams
Posted by Vibhor Kumar on 2026-04-14 at 23:29
Most AI projects do not fail because the model is weak. They fail because the seams around the model break under real-world constraints such as data truth, governance, and production reality.
If you have shipped anything beyond a demo, you have seen the pattern. The embeddings look plausible, the chatbot sounds confident, and the prototype “works.” Then a user asks a normal question like: “Show me something like a leather jacket but lighter, under $150, and available right now.” If the system cannot enforce current pricing, availability reality, and access rules, the experience becomes untrustworthy. When trust breaks, architecture often splinters into extra systems, sync pipelines, and brittle glue code.
That is the motivation behind AI-Ready PostgreSQL 18: Building Intelligent Data Systems with Transactions, Analytics, and Vectors, which I coauthored with Marc Linster, with a foreword by Ed Boyajian.
This book is built as a field guide. It includes working schemas, scripts, and production patterns—not just concepts—so builders can ship semantic search, recommendations, and assistants without splitting truth across systems.
This post is not a sales pitch. This post explains the core idea, shows a minimal hands-on demo using the open-source scripts, and gives you a practical checklist for what “AI-ready” means in production.
What you will get from this post
By the end of this post, you will understand three things clearly:
Why semantic search fails in production when it is not paired with relational truth.
What the “hybrid pattern” looks like: semantic candidates + SQL constraints in one flow.
How to try a working demo that returns both evidence rows and an LLM-generated explanation grounded in those rows.
TL;DR
AI systems succeed when meaning and truth stay close.
Vectors provide semantic recall (“what feels similar”). SQL enforces operational truth (“what is valid, current, allowed, and sellable”). When you keep embeddings in PostgreSQL with pgvec
[...]
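(A schematic of that hybrid pattern in a single statement, with hypothetical table and column names; $1 stands for the query embedding, and <=> is pgvector's distance operator.)
SELECT id, name, price
  FROM products
 WHERE price < 150            -- operational truth: SQL constraints
   AND in_stock               -- current availability
 ORDER BY embedding <=> $1    -- semantic recall: nearest embeddings first
 LIMIT 10;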
PGConf India 2026 - Review
Posted by Andreas Scherbaum on 2026-04-14 at 22:00
This was my first time attending PGConf.India. That is a conference I wanted to visit for quite a while, having heard good things about it, but never had a chance before. During past years it overlapped with another conference I’m attending in Germany - but this year it worked out! Overall this is the 9th Indian PostgreSQL Conference, with no signs of slowing down.
[Photo: stage at PGConf India 2026]
The conference is well attended, and very vibrant.
Owning the pipe: physical replication, cloud neutrality, and the escape from DBaaS lock-in
Posted by Gabriele Bartolini in EDB on 2026-04-14 at 00:32
This article examines how managed database services deliberately suppress access to the physical replication stream, turning operational convenience into permanent lock-in. It makes the case for a cloud-neutral stack — PostgreSQL, Kubernetes, and CloudNativePG — as the only architecture that returns full operational sovereignty to the organisation that owns the data.
ParadeDB is Officially on Railway
Posted by Ming Ying in ParadeDB on 2026-04-14 at 00:00
Deploy ParadeDB on Railway with one click. Full-text search, vector search, and hybrid search over Postgres — now available on your favorite cloud platform.
Top posters
Number of posts in the past two months
Hubert 'depesz' Lubaczewski - 11
Christophe Pettus (pgExperts) - 10
Vibhor Kumar - 9
Cornelia Biacsics (postgres-contrib.org) - 9
Shaun Thomas (pgEdge) - 8
Richard Yen - 7
Lætitia AVROT - 7
Radim Marek - 6
Antony Pegg (pgEdge) - 5
Henrietta Dombrovskaya - 4
Top teams
Number of posts in the past two months
pgEdge - 22
EDB - 13
pgExperts - 10
postgres-contrib.org - 9
Stormatics - 5
Cybertec - 4
Percona - 2
Xata - 2
Snowflake - 2
Postgres Professional - 2
Feeds
All feeds (full entries)
All feeds (short entries)
Planet Policy for being listed on Planet PostgreSQL.
Add your blog to Planet PostgreSQL.
List of all subscribed blogs.
Manage your registration.
Contact: Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.