1 August 2019: 2ndQuadrant is proud to announce the release of Barman version 2.9, a Backup and Recovery Manager for PostgreSQL.
This minor release natively supports PostgreSQL 12, which introduces major changes in the way point-in-time recovery and replicas are managed. PostgreSQL 12 removes the recovery.conf file and manages recovery settings as GUC options. It also introduces two signal files, recovery.signal and standby.signal, which determine the recovery and standby state, respectively.
For older versions of PostgreSQL (11 and earlier), Barman continues to transparently map its configuration and run-time options to the underlying PostgreSQL system using the traditional recovery.conf-based method, while using the new GUCs for version 12 and future versions.
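As a rough illustration of what the new GUC-based method produces, the sketch below writes recovery settings into postgresql.auto.conf and creates an empty recovery.signal file, as PostgreSQL 12 expects. The helper name and file layout are assumptions for illustration, not Barman's actual code:

```python
import os

def write_recovery_config(pgdata, restore_command, target_time=None):
    """Hypothetical helper: emit PostgreSQL 12-style recovery settings.

    The former recovery.conf options are now plain GUCs, so they go into
    postgresql.auto.conf; the presence of an empty recovery.signal file is
    what triggers targeted recovery (standby.signal would instead start
    the instance as a standby).
    """
    lines = [f"restore_command = '{restore_command}'"]
    if target_time:
        lines.append(f"recovery_target_time = '{target_time}'")
    with open(os.path.join(pgdata, "postgresql.auto.conf"), "a") as f:
        f.write("\n".join(lines) + "\n")
    # Empty file: its mere presence requests recovery at server start.
    open(os.path.join(pgdata, "recovery.signal"), "w").close()
```

On PostgreSQL 11 and earlier, the same settings would instead be written to a dedicated recovery.conf file in the data directory.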
Experimental support for JSON output of Barman commands has been added, facilitating integration with external monitoring and management tools. This release also delivers minor UI improvements and bug fixes.
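A monitoring tool might consume this output along the following lines. Note the sketch is based on assumptions: the exact flag for selecting the JSON writer and the shape of the emitted objects are not documented contracts of this experimental feature, and the helper names are hypothetical:

```python
import json
import subprocess

def parse_barman_json(raw):
    """Decode a JSON document emitted by Barman's JSON output writer
    (assumed to be a single JSON object per command invocation)."""
    return json.loads(raw)

def barman_json(*args):
    """Run a Barman command with JSON output and return the parsed result.

    Assumes a global option such as "-f json" selects the JSON writer;
    check `barman --help` for the flag your version actually accepts.
    """
    out = subprocess.run(
        ("barman", "-f", "json") + args,
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_barman_json(out)
```

Because the parsing step is separated from the subprocess call, the same code can be exercised against captured output files in tests or log pipelines.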
For a complete list of changes, see the “Release Notes” section below.
- Transparently support PostgreSQL 12, by supporting the new way of managing recovery and standby settings through GUC options and signal files (recovery.signal and standby.signal)
- --bwlimit command line option to set bandwidth limitation for the backup and recover commands
- Ignore WAL archive failure for the check command in case the latest backup is WAITING_FOR_WALS
- --target-lsn option to set recovery target Log Sequence Number for the recover command with PostgreSQL 10 or higher
- New option for barman-wal-restore so that users can change the spool directory location from the default, avoiding conflicts in case of multiple PostgreSQL instances on the same server (thanks to Drazen Kacar)
- JSON output writer to export command output as JSON objects and facilitate integration with external tools and systems (thanks to Marcin Onufry Hlybin). Experimental in this release.
- Fixed: replication-status doesn’t show streamers with no slot (GH-222)
- When checking that a connection is alive (“SELECT 1” query), preserve the status of the PostgreSQL connection (GH-149). This fixes cases where connections terminated by an idle-in-transaction timeout caused concurrent backups to fail.
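In the spirit of the GH-149 fix, a liveness probe should not commit, roll back, or otherwise change the connection's transaction status as a side effect. The following hypothetical helper illustrates the idea for any DB-API-style connection (psycopg2 in Barman's case); it is a sketch of the principle, not Barman's actual implementation:

```python
def connection_is_alive(conn):
    """Probe a DB-API connection with "SELECT 1".

    Deliberately avoids commit/rollback so the probe leaves the
    connection's transaction status exactly as it found it.
    """
    try:
        cur = conn.cursor()
        try:
            cur.execute("SELECT 1")
            return cur.fetchone()[0] == 1
        finally:
            cur.close()
    except Exception:
        # Any error (closed socket, terminated backend, ...) means not alive.
        return False
```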