Lots of good improvements; my favorites are OAuth, NOT NULL constraints with NOT VALID, uuidv7, and RETURNING with old/new. And I think the async I/O will bring performance benefits, although maybe not so much immediately.
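For anyone who hasn't tried those two yet, a minimal sketch of the new syntax (table and column names are made up):

```sql
-- PostgreSQL 18: attach a NOT NULL constraint as NOT VALID (no full-table
-- scan at ADD time), then validate it later under a much weaker lock.
ALTER TABLE accounts ADD CONSTRAINT accounts_email_not_null NOT NULL email NOT VALID;
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_email_not_null;

-- PostgreSQL 18: RETURNING can reference the old and new row versions.
UPDATE accounts SET balance = balance - 100 WHERE id = 1
RETURNING old.balance AS before, new.balance AS after;
```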
PostgreSQL Gains a Built-in UUIDv7 Generation Function for Primary Keys (many interesting details)
https://habr.com/en/news/950340/
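For example (a minimal sketch; the table is made up):

```sql
-- PostgreSQL 18: uuidv7() yields time-ordered UUIDs, which keep b-tree
-- primary-key indexes far more compact than random uuidv4 values.
CREATE TABLE events (
    id      uuid PRIMARY KEY DEFAULT uuidv7(),
    payload text NOT NULL
);
```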
A quiet addition here that I'm very excited about is the extension_control_path configuration, which will make adding extensions in environments like Kubernetes a whole lot nicer (see e.g. https://cloudnative-pg.io/documentation/1.27/imagevolume_ext...).
Those are some great features. More than I remembered, and it's only been a year.
Personally, I'm very happy to see parallel builds for GIN indexes get released - (re)index time for those indexes is always a pain. I'm looking forward to further improvements on that front, as there is still some relatively low-hanging fruit that could improve build times even more.
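A quick sketch of what that looks like in practice (table and index names are made up):

```sql
-- PostgreSQL 18: GIN index builds can use parallel workers, bounded by
-- the existing max_parallel_maintenance_workers setting.
SET max_parallel_maintenance_workers = 4;
CREATE INDEX docs_body_gin ON docs USING gin (to_tsvector('english', body));
```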
Does this release have the TAM (Table Access Methods) patch set from orioledb? Or at least parts of it?
I don't think so. What do you need it for, if I may ask? If some people actually need a patch, it is much more likely that someone will work on making it committable.
TAM is a prerequisite for OrioleDB. And I am guessing OP, like me, is looking forward to OrioleDB being used as the default or even upstreamed.
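For context, a table access method surfaces at the SQL level through the USING clause; a sketch assuming the orioledb extension is available:

```sql
-- A table access method replaces the default heap storage per table;
-- OrioleDB plugs in as such a method.
CREATE EXTENSION orioledb;
CREATE TABLE t (
    id    int PRIMARY KEY,
    value text
) USING orioledb;
```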
OrioleDB is very far from being usable as the default or upstreamed. Nobody is working on this right now.
fwiw, we have a team working on OrioleDB at supabase, with the plan to get to GA later this year or early next year. we'll continue to submit patches upstream for the TAM, and of course that will take as long as it takes for the community to accept them. Our focus right now is reliability and compatibility, so that the community can gain confidence in the implementation.
I was wondering how far away OrioleDB is from becoming a pure extension instead of a Postgres fork. I'm not an expert by any means on TAM, but I was curious whether the OrioleDB team managed to upstream some parts of their fork.
imho: not soon.
Most alternative PG storage engines have stumbled, and OrioleDB touches a lot of core surfaces. The sensible order is: first make OrioleDB rock-solid and bug-free (https://github.com/orioledb/orioledb/issues?q=is%3Aissue%20s...); then, using real-world feedback and perf data, refactor and carve out patches that are upstream-ready. That’s a big lift for both the OrioleDB team and core PG.
From what I understand, they’re aiming for a release candidate in the next ~6 months; meaningful upstreaming would come after that.
In other words: make it work --> validate the plan --> upstream the final patches.
Doesn't look like it, no. At least not to the degree that it has updated the table AM interface.
Eh... interesting features, yet the Docker upgrade is still a huge PITA because upgrading between major versions is not supported :/
I love PostgreSQL, but this is utterly annoying and discourages using it…
That is mostly an issue with the official Docker image, not a PostgreSQL issue. Upgrading to a new major version is easy if you use the Debian packages and is fine with a couple of minutes of downtime. There is no reason a Docker image could not do the same.
The PostgreSQL quirk, which the Debian packages handle fine but Docker Hub's image does not, is that you need the executables for both the old and the new major version of PostgreSQL when upgrading. This just works with the Debian packages but not with the Docker image.
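Concretely, pg_upgrade has to be pointed at both installations; a sketch with illustrative Debian-style paths (Debian's pg_upgradecluster wraps this for you):

```
/usr/lib/postgresql/18/bin/pg_upgrade \
  --old-bindir=/usr/lib/postgresql/17/bin \
  --new-bindir=/usr/lib/postgresql/18/bin \
  --old-datadir=/var/lib/postgresql/17/main \
  --new-datadir=/var/lib/postgresql/18/main
```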
> That is mostly an issue with the official Docker image, not a PostgreSQL issue.
I wish Docker would stop calling their unofficial Postgres images “official”. :-/ (They're “official Docker”, but not “official Postgres”. The naming is deeply misleading for everyone who is not a Docker expert.)
I'm fully aware of why it's an issue with the official image. But even with Debian (or any other packaging) there are hoops to jump through due to said missing upgrade handling, which IMHO is very annoying.
They do handle minor version upgrades, so the code for handling upgrades is there, but the devs seem quite adamant against adding major version upgrades. I (well, a lot of people, judging from the votes and comments in https://github.com/docker-library/postgres/issues/37 and the stars on https://github.com/tianon/docker-postgres-upgrade) would love that, and support only between subsequent versions would be more than enough…
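For reference, tianon's image runs pg_upgrade across two mounted data directories; a sketch following the pattern in that repo's README (the 17-to-18 tag is an assumption):

```
docker run --rm \
  -v "$PWD/17/data":/var/lib/postgresql/17/data \
  -v "$PWD/18/data":/var/lib/postgresql/18/data \
  tianon/postgres-upgrade:17-to-18
```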
> They do handle minor version upgrades, so the code for handling upgrades is there, but the devs seem quite adamant against adding major version upgrades
Minor versions of PostgreSQL have constraints that major versions don't: in principle, minor versions don't see new features added. This allows minor versions of the same major release to run against the same data directory without modifications.
However.
Major versions add those new features, at the cost of changes to internals that show up in things like the catalog layout. This changes how on-disk data is interpreted, making it incompatible, and unlike minor releases it requires specialized upgrade handling.
Well, yeah. But virtually all other software can perform the data upgrade with a single/new binary; only PostgreSQL requires the old binaries to be available to run `pg_upgrade`...
I suggest you look into what pg_upgrade actually does, because the "code handling upgrading" does not exist other than as pg_upgrade. Writing it would be a ton of work, would likely halt a lot of innovation in PostgreSQL, and would make it no longer a competitive database, unless someone did a major re-architecting of PostgreSQL, which would likely break all extensions.
The root cause is how PostgreSQL has implemented the system catalog.
I'm super curious about what special features/extensions you're using. The only painful upgrade for me was from PostgreSQL 7 to 9 (2010 IIRC; building from scratch, with SSL library issues). My next upgrade (exact same database, evolved a lot in complexity with stored queries and views, ~500x the size) was from 9 to 11 (2022), and it was totally painless. A few weeks ago I migrated it from 11 to 17, zero issues.
Edit: Oh, for reference, I'm not using docker, just an AWS EC2 Amazon Linux instance, with a mirror on an Alpine Linux server. PostgreSQL installed via package repo, very little custom configuration.
this is not postgres' fault.
once the latest pgautoupgrade is available, you can do it with this:
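(A sketch of the usual invocation; the image tag and volume name here are assumptions:)

```
docker run --rm \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  pgautoupgrade/pgautoupgrade:18-bookworm
```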
I recommend copying the volume first, so you can roll back if anything goes wrong.