When you use the streaming replication feature, you can use the pg_is_in_recovery() function to check the server status.
The function returns t if the server is a standby, or f if the server is a primary.
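For example, you can run the function through psql on each server (the port and database name below are placeholders):

psql -p 5432 -d postgres -c "SELECT pg_is_in_recovery();"
# Returns 't' when run on a standby server and 'f' when run on a primary server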
In Fujitsu Enterprise Postgres, when you use Mirroring Controller to build the cluster system, you can use the Mirroring Controller command mc_ctl to determine which server is the primary and which is the standby, as follows:
mc_ctl status -M /mcdir/inst1
The output will look like this:
mirroring status
-----------------
switchable
 server_id  host_role  host           host_status  db_proc_status  disk_status
------------------------------------------------------------------------------
 inst1p     standby    XXX.XX.XX.156  normal       normal          normal
 inst1s     primary    XXX.XX.XX.161  normal       normal          normal
For more information on how to use the Mirroring Controller command mc_ctl to check the server role, refer to Fujitsu Enterprise Postgres Cluster Operation Guide - Database Multiplexing > Chapter 3 - Operations in database multiplexing mode > 3.3 - Checking the database multiplexing mode status [Linux version] [Windows version].
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.5
Architecture: X86, S390x
Operating System: Linux
If you promote the standby server while the source (master) is still active, the standby server will become an updatable instance (client connections other than replication connections can update its database cluster).
This situation, where both database instances of a cluster are updatable, is referred to as a split-brain scenario. Some clients can make changes to the original master but not to the original standby, while other clients can make changes to the original standby but not to the master, resulting in two separate database clusters containing different sets of data. At this point, no matter which database cluster you connect to, some data will be missing (as it exists only on the other database instance).
Recovering from a split-brain scenario is difficult because you need to identify the differences and ensure that missing data on one server is recovered from the other server. A new standby will need to be created from the newly recovered master before a high availability architecture can again be achieved.
Before promoting a standby server to master, it is vital that the master database (source) is fenced, i.e., that it is made inaccessible to any clients. For more details on this topic, look up STONITH (Shoot The Other Node In The Head).
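As an illustration only, assuming pg_ctl is used to manage both instances and /data/primary and /data/standby are their data directories, the old master is stopped (fenced) first and only then is the standby promoted:

# On the old master: stop the instance so no client can continue to update it
pg_ctl stop -m fast -D /data/primary

# On the standby: promote it to become the new, updatable master
pg_ctl promote -D /data/standby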
For details on how to avoid split brains when using pgpool-II, refer to our PostgreSQL Insider article PostgreSQL High Availability using pgpool-II.
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.5
Architecture: X86, S390x
Operating System: Linux
No, the recommended approach is to use a physical (binary) backup, such as one taken with pg_basebackup. This ensures that the necessary information is written to the database data files and WAL files so that WAL replay can begin from the correct position.
The risk of using a logical dump is that data written after the backup started is not included in it, and that a restore cannot identify the correct position in the WAL from which to start synchronising.
A restore of a binary backup by itself may not produce a consistent database (specifically when the database is being used while the backup is being executed), so the WAL files must be used to bring the database into a consistent state during restore.
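As an illustration, assuming the replication user and directories below, a base backup taken with pg_basebackup can stream the WAL generated while the backup runs, so the restored copy can be brought to a consistent state:

# Copy the primary's data directory and stream the WAL produced during the copy
pg_basebackup -h primary-host -U repl_user -D /data/standby -X stream -P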
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.5
Architecture: X86, S390x
Operating System: Windows, Linux
There is no window during which database updates are allowed but not replicated. The standby will re-establish its connection to the master, and replication of all outstanding updates will then occur.
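You can verify that the standby has reconnected and caught up by checking the replication views on both sides (a minimal check, assuming a single standby; pg_stat_wal_receiver is available from version 9.6):

# On the master: one row per connected standby, with state 'streaming' once it has caught up
psql -d postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"

# On the standby: shows the WAL receiver status and its connection string
psql -d postgres -c "SELECT status, conninfo FROM pg_stat_wal_receiver;"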
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.4
Architecture: X86, S390x
Operating System: Windows, Linux
Follow the steps below:
Note that the settings mentioned above are specified differently depending on the database version: versions prior to 12 use the recovery.conf file, while versions from 12 onwards use the postgresql.conf file together with the recovery.signal signal file.
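Purely to illustrate that difference (assuming the steps involve archive recovery parameters such as restore_command; the values are placeholders), the same setting would be placed as follows:

# Versions prior to 12: recovery.conf in the data directory
restore_command = 'cp /archive/%f %p'

# Versions from 12 onwards: the same parameter goes in postgresql.conf,
# and recovery is triggered by an empty recovery.signal file in the data directory
restore_command = 'cp /archive/%f %p'
touch $PGDATA/recovery.signal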
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
Follow the steps below:
Note that the settings mentioned above are specified differently depending on the database version: versions prior to 12 use the recovery.conf file, while versions from 12 onwards use the postgresql.conf file together with the recovery.signal signal file.
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
The safest course of action in this situation is to rebuild the standby database from the primary one.
Follow the steps below:
If the configuration files in the data directory being replaced are still suitable, they can be backed up beforehand and used to replace the ones restored from the backup.
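A minimal sketch of such a rebuild, assuming pg_basebackup is used and /data/standby is the standby's data directory:

# On the standby host: stop the instance and keep a copy of its configuration
pg_ctl stop -m fast -D /data/standby
cp /data/standby/postgresql.conf /tmp/postgresql.conf.keep
rm -rf /data/standby/*

# Rebuild the standby from the primary (-R writes the replication connection settings automatically)
pg_basebackup -h primary-host -U repl_user -D /data/standby -X stream -R -P

# Put the kept configuration back if it is still suitable, then start the standby
cp /tmp/postgresql.conf.keep /data/standby/postgresql.conf
pg_ctl start -D /data/standby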
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
pg_basebackup is a utility that provides a less error-prone method of building a streaming replication environment by performing the following activities automatically:
While other options are available, such as taking down the database or using pg_start_backup/pg_stop_backup and then manually copying the data directory, pg_basebackup offers the simplest and safest approach.
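For instance, a single invocation along these lines (host, user, and directory are placeholders) copies the data directory, streams the WAL generated during the copy, and, with -R, also writes the replication connection settings for the new standby:

pg_basebackup -h primary-host -U repl_user -D /data/standby -X stream -R -P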
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.4
Architecture: X86, S390x
Operating System: Windows, Linux
The configuration settings that you need to be aware of for streaming replication, and their meanings, are:
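As a rough, non-exhaustive sketch of the kind of parameters involved (the values are placeholders):

# On the primary (postgresql.conf)
wal_level = replica              # 'hot_standby' on versions prior to 9.6
max_wal_senders = 5              # number of concurrent WAL sender processes allowed
# synchronous_standby_names = 'standby1'   # only for synchronous replication

# On the standby (postgresql.conf)
hot_standby = on                 # allow read-only queries during recovery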
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.4
Architecture: X86, S390x
Operating System: Windows, Linux
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
Standby servers connect to primary servers in order to receive data updates that are streamed from the transaction log. This means that the primary instance needs to be configured to allow connections from the standby instance. To do this, appropriate settings should be configured in the pg_hba.conf file (the credentials and host that the standby will connect with).
The standby instance also needs to be configured with the connection information for the master instance. This is slightly different depending on which version of the database you are using:
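As an illustration (addresses, user, and password are placeholders): the primary's pg_hba.conf permits the replication connection, and the standby's connection string goes in recovery.conf on versions prior to 12 or in postgresql.conf from version 12 onwards:

# Primary: pg_hba.conf - allow the standby host to connect for replication
host  replication  repl_user  192.0.2.10/32  md5

# Standby, versions prior to 12: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=192.0.2.1 port=5432 user=repl_user password=secret'

# Standby, versions from 12 onwards: postgresql.conf,
# plus an empty standby.signal file in the data directory
primary_conninfo = 'host=192.0.2.1 port=5432 user=repl_user password=secret'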
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.4
Architecture: X86, S390x
Operating System: Windows, Linux
Changes made on the primary instance by the VACUUM command are written to the primary instance's transaction log; these changes are then replicated to the standby instance.
The VACUUM command is not run on the standby, and does not need to be.
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
The operation flow of the VACUUM command is as follows:
If, for some reason, a VACUUM operation is performed on a primary table while replication is not running, the optimized database information is stored on the primary; once replication is restored, that information is transferred to and applied on the secondary.
Both the VACUUM command and table update operations (UPDATE, INSERT, DELETE, etc.) result in database changes, which are treated as differential information via WAL, so the flow of synchronization between the primary and the secondary is the same when replication is being performed.
Product: Fujitsu Enterprise Postgres SE, Fujitsu Enterprise Postgres AE, Fujitsu Enterprise Postgres for Kubernetes, PostgreSQL
Versions: from 9.2
Architecture: X86, S390x
Operating System: Windows, Linux
On the standby database server, set hot_standby to on in the postgresql.conf file. This allows you to connect and run queries on the standby server during recovery.
Note that if the standby is in synchronous streaming replication mode, the master will not complete requests until the standby database instance (not the server itself) is restarted; just reloading the configuration is not enough.
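For example (the data directory is a placeholder), after setting the parameter you restart the standby instance rather than just reloading it:

# postgresql.conf on the standby
hot_standby = on

# Restart the standby instance for the change to take effect
pg_ctl restart -m fast -D /data/standby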
Product: Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.5 and 9.6
Architecture: X86, S390x
Operating System: Windows, Linux
Replication slots ensure that the primary server keeps the WAL necessary for standby recovery. This is useful in cases where the standby server needs to be kept up-to-date even after being disconnected from the master server for a long period.
To drop a replication slot, follow the steps below:
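For reference, the slot can be listed and then dropped with the built-in functions (the slot name is a placeholder):

# List existing replication slots and check whether they are active
psql -d postgres -c "SELECT slot_name, slot_type, active FROM pg_replication_slots;"

# Drop the slot once it is no longer in use
psql -d postgres -c "SELECT pg_drop_replication_slot('standby_slot');"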
Product: Fujitsu Enterprise Postgres AE, PostgreSQL
Versions: from 9.5
Architecture: X86, S390x
Operating System: Windows, Linux