6 minutes reading posted in linux
TT-RSS to Docker [Part Two]
In the previous post we got a Tiny Tiny RSS feed reader running in a Docker environment, but as a fresh installation without any data. Because migrating the database takes more effort, we will discuss it in this post: restoring the old database into Docker and upgrading it.
Database restoration and upgrade
The next step, after getting the base installation running, was to import the old ttrss database into the db container. At this point things become messy, for several reasons. First, the Docker installation routine of TT-RSS does not support a separate database user. Why? Because in startup.sh it tries to create a database extension for the given database variable within that database.
Extensions within Postgres are similar to modules: they provide additional features on top of a default installation. For instance, it is possible to write functions in C using the corresponding extension. Allowing a database within Postgres to make use of an extension is something only the superuser is allowed to do.
Not all extensions require superuser rights. For ttrss we need the pg_trgm extension, for which the CREATE privilege should be enough. However, this fails for some reason and startup.sh aborts. startup.sh validates the existence of the given extension by creating it if it is not present already. To work around the issue I tried to alter the installation script, with no success. Probably because I do not use the latest version of Postgres; with more time and effort I could get to the bottom of this, however, I want to be done.
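A manual workaround is to create the extension once as the Postgres superuser, so that the attempt in startup.sh later succeeds as a no-op. A minimal sketch, assuming the ttrss database is the default postgres one:

```sql
-- connect as the postgres superuser to the database ttrss uses,
-- then create the extension; IF NOT EXISTS makes this idempotent,
-- so the CREATE EXTENSION in startup.sh will not fail afterwards
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```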
I conclude this is bad practice, and furthermore it adds to my dislike of running a DB service via Docker. It feels unnatural because the container makes a lot of assumptions about its operation. For example: it assumes that it will run in an 'embedded' fashion without direct user access. That means you're not supposed to log into the postgres database via a shell; however, that is exactly what postgres requires you to do for maintenance tasks like restoring a database. This adds quite some pain to the process of importing an older backup into a fresh Docker installation.
Instead of modifying the installation script to respect some older database, I decided to do this in a one-shot act. I copied the backup dump into the volume of the running container and restored it into the default postgres database that the Docker installation uses.
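That step can be sketched like this; the container name docker-db_1 and the dump filename are assumptions from my setup, adjust them to yours:

```shell
# copy the dump into the db container, then restore it there
$ docker cp rss_backup.sql docker-db_1:/tmp/rss_backup.sql
$ docker exec -i docker-db_1 psql -U postgres -f /tmp/rss_backup.sql
```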
The restoration process did not work because my original database dump used a separate Postgres role for access. This led to some errors while writing data with that role. More errors appeared because the old data was written into the already created table structure, which consisted of a newer schema version than the one the data had been created for. For example, some keys had become unique in the newer release of TT-RSS and therefore made the restoration inconsistent. The schema_version of my dump was:
```
rss=# SELECT * FROM ttrss_version;
 schema_version
----------------
            126
```
The latest one is:
```
postgres=# SELECT * FROM ttrss_version;
 schema_version
----------------
            140
```
To get the dump correctly restored, the right database with the right role is necessary. Only then is it possible to upgrade. Unfortunately, the Postgres dump did not include the creation of the database and role. So what can we do? At this point I was thinking of altering the dump and replacing the name of the role to fit the current database. The pity here is that the role name and the database name are the same: rss.
That seems dumb in hindsight, so a simple search and replace of the keyword is not well suited. Besides, the database table names often include the phrase, you guessed it, rss. Hence renaming everything in the database dump to use a separate role or username seemed to be even more work.
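For reference, the difference matters here: pg_dump exports a single database without any role definitions, while pg_dumpall also emits the CREATE ROLE statements. A sketch, with filenames as examples:

```shell
# dumps only the rss database, no roles included
$ pg_dump -U postgres rss > rss_only.sql

# dumps all databases plus CREATE ROLE statements
$ pg_dumpall -U postgres > everything.sql

# or just the roles and other globals, to combine with a pg_dump
$ pg_dumpall -U postgres --globals-only > roles.sql
```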
However, I remembered one thing: instead of hacking my way around the missing user, why not just extract the data from the VM? Fortunately, I still had the original volume of the database around. After some magic with the VM volume and starting it in a chroot, I created a dump from the Postgres database of the old VM. As a note for later: that's why you're using pg_dumpall.
Looking at the output of pg_dumpall, I got an idea for the installation routine: create the user, make it a superuser, let it set up the database extension, and THEN drop the privilege. Not sure if I should do this.
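That idea would look roughly like this, sketched in SQL (untested, and only worth it if the extension really needs superuser rights):

```sql
-- create the application role with superuser rights, temporarily
CREATE ROLE rss WITH LOGIN SUPERUSER PASSWORD 'rss';

-- ... let the installation routine create the extension as this role ...

-- then drop the privilege again
ALTER ROLE rss WITH NOSUPERUSER;
```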
Anyway, back to the migration:
```
postgres=# CREATE ROLE rss;
CREATE ROLE
postgres=# ALTER ROLE rss WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION PASSWORD 'rss';
ALTER ROLE
```
Once the database and role are present, restoring the data is as easy as eating cake.
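With the role in place, the restore is a single command; a sketch assuming the container name and dump file from my setup:

```shell
# feed the dump to psql as the rss role, into the rss database
$ docker exec -i docker-db_1 psql -U rss -d rss < rss_backup.sql
```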
Migration to the newer release
After the first step comes the second: migrating the database to the new schema. For this step we need to reconfigure the ttrss configuration to use the freshly restored rss database instead of the postgres one. We do not have to enter the container with docker exec, because the file lives within the volume's mount point on the filesystem. After it is changed, we can test it by executing update_daemon2.php from within a running container:
```
/ # sudo -u app /usr/bin/php /var/www/html/tt-rss/update_daemon2.php
[16:54:00/84] Spawn interval: 120 sec
Schema version is wrong, please upgrade the database.
```
Here a note about fuckups in your system environment: they make you question what's going on, until you notice that the disk ran full and the system had almost shut down. After some restart trouble, a broken btrfs issue, a reboot, and a reset of all of Docker, I got back to it and could bring the app to the upgrade page. It started to migrate the database to the new schema and stopped at version 134. The problem was that the table ttrss_feeds had an entry that was invalid for the migration:
```
alter table ttrss_feeds add constraint ttrss_feeds_feed_url_owner_uid_key unique (feed_url, owner_uid)
```
TT-RSS was kind enough to display an error with the row that was duplicated. Once the duplicated row was deleted, the database migration finished without issue.
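If TT-RSS had not shown the offending row, a query like this would find the entries that violate the new unique constraint (a sketch against the ttrss_feeds table):

```sql
-- list (feed_url, owner_uid) pairs that occur more than once
SELECT feed_url, owner_uid, count(*)
FROM ttrss_feeds
GROUP BY feed_url, owner_uid
HAVING count(*) > 1;
```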
At this stage I was able to log in to the instance with my old user. After some cleanup I decided to run the feed update by executing the update_daemon2.php script. It goes through all feeds in the database, checks their servers for changes, and imports them. This way the server knows when an RSS feed has an update. The first run took quite some time until everything was updated.
So what was necessary to get it going? Get the containers running, update the configuration file after the deployment to point to the new database, import the database with the proper role pre-existing, make sure that the feeds are not duplicated, and then keep it running.
So how does it look in the end?
```
$ docker-compose ps
        Name                      Command               State                 Ports
------------------------------------------------------------------------------------------------------
docker-db_1            docker-entrypoint.sh postgres    Up      5432/tcp
docker-nginx-proxy_1   /app/docker-entrypoint.sh ...    Up      0.0.0.0:80->80/tcp
docker-ttrss_1         /bin/sh -c /startup.sh           Up      0.0.0.0:9000->9000/tcp, 0.0.0.0:9001->9001/tcp
docker-web_1           /bin/parent caddy --conf / ...   Up      2015/tcp, 443/tcp, 80/tcp
```
so far, akendo