Upgrading our PostgreSQL image notes


These are just very rough notes for the moment, taken while we investigate moving to a newer (supported) version of PostgreSQL.


The problem is that we're using an ancient version of PostgreSQL (9.5-alpine). Because of that, we're stuck using very old versions of some Python libraries, which in turn force us to pin other extremely old Python libraries:

https://github.com/getredash/redash/blob/ee601ec20676b0705e352bfb9b6726a0e53e3b12/requirements.txt#L28-L31

# We can't upgrade SQLAlchemy-Searchable version as newer versions require PostgreSQL > 9.6, but we target older versions at the moment.
SQLAlchemy-Searchable==0.10.6
# We need to pin the version of pyparsing, as newer versions break SQLAlchemy-Searchable 0.10.6 (newer versions of SQLAlchemy-Searchable no longer depend on it)
pyparsing==2.3.0

Many of our existing data sources have newer versions of their own libraries available that we can't use because of this, and it's very common for other data sources to be entirely unusable for us due to these ancient pinned libraries.

So, we must upgrade PostgreSQL sooner rather than later. This week if possible.


While it's fairly trivial for us to just bump the PostgreSQL version number in our docker-compose.yml file, that would automatically break the Redash installation for all of our existing users when they upgrade.
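
For illustration, the naive bump is just a one-line change to the postgres service in docker-compose.yml (a sketch only; the service layout here is from memory of our compose file, not verified):

postgres:
  image: postgres:13-alpine    # was postgres:9.5-alpine; on its own this change breaks existing data dirs
  restart: always
  volumes:
    - /opt/redash/postgres-data:/var/lib/postgresql/data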

The official Docker "postgres" images don't handle upgrades at all, instead erroring out completely (and thus entering a permanent restart loop) when presented with older PostgreSQL database files.


The existing postgres:9.5-alpine Docker image we use can actually have PostgreSQL 13 easily installed via apk add postgresql. However, the 9 -> 13 upgrade is non-trivial if we want to use pg_upgrade, and it requires the old PG (v9) server process to be stopped.

Due to the docker-entrypoint.sh approach the official images use, stopping the server means the docker container automatically exits.

The exiting-when-PG-stops problem is trivial to solve if we're allowed to modify the existing docker entrypoint script, e.g. by adding an unbounded sleep loop:

# Keep PID 1 alive so the container doesn't exit when the PG server stops
while :; do
  sleep 5
done

With that in place, the entrypoint script can run the PG daemons in the background and have them start/stop/etc without terminating the docker container.
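
A minimal sketch of what the entrypoint could then do (untested; the pg_ctl paths assume the default image layout plus the /usr/local/pg15 prefix used further below):

# Stop the old v9 server, run the upgrade, start the new server,
# all while the sleep loop above keeps the container alive.
su postgres -c '/usr/local/bin/pg_ctl -D /var/lib/postgresql/data -w stop'
su postgres -c 'cd /var/lib/postgresql && /usr/local/pg15/bin/pg_upgrade ...'   # full command further below
su postgres -c '/usr/local/pg15/bin/pg_ctl -D /var/lib/postgresql/data-pg15 -w start'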

Getting PG v9 and PG v13 both callable at the same time looks to be problematic though: they each need libpq.so, but the two versions are incompatible with each other and live in different directories. That's fine when we can set LD_LIBRARY_PATH separately for each invocation, but getting a single pg_upgrade run to use both at once seems impossible.
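
For example, each of these works on its own (the lib paths are assumptions: the official 9.5 image builds PG into /usr/local, and our PG 15 build below uses the /usr/local/pg15 prefix):

$ LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/psql --version
$ LD_LIBRARY_PATH=/usr/local/pg15/lib /usr/local/pg15/bin/psql --version

But pg_upgrade launches binaries from both the -b (old) and -B (new) directories in the same environment, so a single LD_LIBRARY_PATH can't satisfy both sets.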


For Alpine images, there is an alpine-sdk package that can be added to easily pull in the useful developer tools (e.g. compiler, make, etc). If we need to compile PG ourselves, that'd probably be the first package to install.


Experimenting with building PG 15.3 in the PG 9.5-alpine container, just to see how it goes. Compile options being used so far:

# apk add alpine-sdk linux-headers readline-dev zlib-dev
# ./configure --prefix=/usr/local/pg15
# make -j4              # takes about 1.5 mins to compile
# make install
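
A quick sanity check that the build landed under the new prefix:

# /usr/local/pg15/bin/pg_config --version
PostgreSQL 15.3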

Let's set up timezones and collation, then do the initdb:

# apk update
# apk add musl-locales tzdata
# cp /usr/share/zoneinfo/US/Eastern /etc/localtime
# echo "US/Eastern" > /etc/timezone
# su - postgres
$ export PATH=/usr/local/pg15/bin/:$PATH
$ rm -rf /var/lib/postgresql/data-pg15    # clear out any previous attempt
$ export TZ=US/Eastern
$ export LANG=en_US.utf8
$ export LANGUAGE=en_US.utf8
$ export LC_ALL=en_US.utf8
$ /usr/local/pg15/bin/initdb /var/lib/postgresql/data-pg15
...
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
...

Let's try the pg_upgrade:

$ /usr/local/pg15/bin/pg_upgrade -d /var/lib/postgresql/data -D /var/lib/postgresql/data-pg15 -b /usr/local/bin -B /usr/local/pg15/bin
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for user-defined encoding conversions              ok
Checking for user-defined postfix operators                 ok
Checking for incompatible polymorphic functions             ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for invalid "unknown" user columns                 ok
Checking for roles starting with "pg_"                      ok
Creating dump of global objects                             ok
Creating dump of database schemas                           ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_clog to new server                           ok
Setting oldest XID for new cluster                          ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster               ok
Copying user relation files                                 ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to delete old cluster                       ok
Checking for hash indexes                                   ok
Checking for extension updates                              ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
    /usr/local/pg15/bin/vacuumdb --all --analyze-in-stages

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

That part worked. :)
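
For completeness, the post-upgrade housekeeping suggested in the output above would look something like this (run as postgres, with the new binaries on the PATH):

$ /usr/local/pg15/bin/pg_ctl -D /var/lib/postgresql/data-pg15 -w start
$ /usr/local/pg15/bin/vacuumdb --all --analyze-in-stages
$ ./delete_old_cluster.sh     # only once we're confident the new cluster is good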

After updating the PG 15 postgresql.conf to listen on all network interfaces, and updating pg_hba.conf to allow connections, Redash worked after a simple docker-compose up -d for the rest of the services.
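
The config changes were along these lines (the pg_hba.conf auth method and address range here are assumptions; they should match whatever the existing 9.5 setup allows):

# in /var/lib/postgresql/data-pg15/postgresql.conf
listen_addresses = '*'

# in /var/lib/postgresql/data-pg15/pg_hba.conf
host    all    all    0.0.0.0/0    trust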


From reading further online, build-base might be the better package to grab for the Alpine compile pieces instead of alpine-sdk. TBD.


This is an interesting idea for simply combining the PG pieces of two Alpine images into a new Docker image:

https://github.com/docker-library/postgres/issues/37#issuecomment-1131288227

We might be able to use that for creating our own PG image, without having to spend a lot of time maintaining it afterwards.
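
A rough sketch of that approach (image tags and paths here are assumptions, and this is untested):

# Copy the old 9.5 binaries into a newer image alongside the new ones,
# so pg_upgrade can see both versions in a single container.
FROM postgres:9.5-alpine AS pg95

FROM postgres:15-alpine
COPY --from=pg95 /usr/local /usr/local/pg9.5
# pg_upgrade would then point at both sets of binaries:
#   pg_upgrade -b /usr/local/pg9.5/bin -B /usr/local/bin ...

The libpq.so incompatibility noted earlier would still need solving here too (the old binaries would find the new libs first), so this is just a starting point.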
