Example of Setting Up a Synchronous Replica
WARNING!
This section provides a final example of setting up a database cluster with synchronous replication. Note that there are some parameters (such as host IP addresses or database directories) that may not match yours.
Initial Conditions
Operating system on servers — CentOS 7.
172.31.101.70 — IP of the server designated as the primary (master).
172.31.101.71 — IP of the standby server (replica).
The DBMS is installed from RPM packages (see the Download page: QHB for CentOS 7 and 8).
Database location directory — PG_DATA = /u01/qhb/db.
/usr/local/qhb/ — default QHB installation path.
systemd services are configured to match the paths listed above.
Setting Up the Primary Server
- Create a QHB user for replication:
create user repluser replication password 'repluser';
- Set the necessary parameters in the /u01/qhb/db/qhb.conf configuration file:
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 2
max_replication_slots = 2
hot_standby = on
hot_standby_feedback = on
logging_collector = on
log_directory = './log'
synchronous_commit = on
synchronous_standby_names = '*'
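The parameter list above can be applied and checked with a short script. This is a sketch: the scratch path /tmp/qhb.conf.example is an assumption used here so it can be run safely anywhere; in a real setup CONF would point at /u01/qhb/db/qhb.conf.

```shell
#!/bin/sh
# Sketch: append the replication parameters from this section to a config file
# and verify they are all present. CONF points at a scratch file here; in a
# real setup it would be /u01/qhb/db/qhb.conf.
CONF="${CONF:-/tmp/qhb.conf.example}"

cat >> "$CONF" <<'EOF'
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 2
max_replication_slots = 2
hot_standby = on
hot_standby_feedback = on
logging_collector = on
log_directory = './log'
synchronous_commit = on
synchronous_standby_names = '*'
EOF

# Confirm that every parameter made it into the file
for p in listen_addresses wal_level max_wal_senders max_replication_slots \
         hot_standby hot_standby_feedback logging_collector log_directory \
         synchronous_commit synchronous_standby_names
do
    grep -q "^$p " "$CONF" || { echo "missing: $p"; exit 1; }
done
echo "all parameters present"
```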
- In the /u01/qhb/db/qhb_hba.conf configuration file, set the parameters that allow the standby machine to access the database:
host replication repluser 172.31.101.70/32 md5
host replication repluser 172.31.101.71/32 md5
WARNING!
The existing host replication entry should not be modified.
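The two access rules can likewise be appended and verified with a sketch like the following; /tmp/qhb_hba.conf.example is an assumed scratch path standing in for /u01/qhb/db/qhb_hba.conf.

```shell
#!/bin/sh
# Sketch: append the replication access rules for both servers and confirm
# they are present. HBA points at a scratch file here; in a real setup it
# would be /u01/qhb/db/qhb_hba.conf.
HBA="${HBA:-/tmp/qhb_hba.conf.example}"

cat >> "$HBA" <<'EOF'
host    replication    repluser    172.31.101.70/32    md5
host    replication    repluser    172.31.101.71/32    md5
EOF

# Both the primary (172.31.101.70) and the standby (172.31.101.71)
# must be allowed to connect for replication
grep -q '172\.31\.101\.70/32' "$HBA" || { echo "primary rule missing"; exit 1; }
grep -q '172\.31\.101\.71/32' "$HBA" || { echo "standby rule missing"; exit 1; }
echo "both replication rules present"
```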
- Restart the DBMS service:
systemctl restart qhb && systemctl status qhb
Setting Up a Standby Server
- Stop the DBMS service on the replica server:
systemctl stop qhb && systemctl status qhb
- Delete the directory with the database data:
rm -rf /u01/qhb/db/* # Complete data deletion
IMPORTANT!
Deleting this directory is irreversible: all data, configuration files, and logs will be lost with no possibility of recovery.
- If circumstances allow, restore the data from a backup.
If the database is large, it is recommended to use an existing copy of the data, if one is available.
The backup utility is run as the QHB system user:
# Option summary:
#   -h 172.31.101.70   master server IP address
#   -p 5432            default port; change it if the configuration was changed
#   -U repluser        the user created earlier on the primary server with replication permissions
#   -D /u01/qhb/db/    DBMS data directory
#   -Fp -Xs -P -R      the set of options required for replication
sudo -u qhb /usr/local/qhb/bin/qhb_basebackup \
    -h 172.31.101.70 \
    -p 5432 \
    -U repluser \
    -D /u01/qhb/db/ \
    -Fp -Xs -P -R
For details, see the descriptions of the -Fp, -Xs, -P, and -R options in the Options section of the qhb_basebackup manual.
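The standby rebuild steps above can be gathered into one script. This is a sketch under the guide's assumptions (paths, addresses, service name); with DRY_RUN=1, the default here, it only prints what it would do instead of executing it.

```shell
#!/bin/sh
# Sketch: the full standby rebuild sequence from this section.
# DRY_RUN=1 (default) prints the commands instead of executing them.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

rebuild_standby() {
    run systemctl stop qhb
    run rm -rf /u01/qhb/db/*        # irreversible: wipes the old cluster data
    run sudo -u qhb /usr/local/qhb/bin/qhb_basebackup \
        -h 172.31.101.70 -p 5432 -U repluser -D /u01/qhb/db/ -Fp -Xs -P -R
    run systemctl start qhb
}

rebuild_standby
```

Running it with DRY_RUN=0 would execute the real commands, so the destructive rm step should be double-checked first.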
If the copy was made with qhb_basebackup, the connection parameters for the replica are written to the file qhb.auto.conf. Make sure the resulting connection string is correct.
If the database copy was deployed from a file (or by another valid method not covered in this guide), then several parameters must be adjusted before starting:
- Start the DBMS server in standby mode. To do this, create the file standby.signal in the database cluster directory (next to qhb.conf):
sudo -u qhb touch /u01/qhb/db/standby.signal
ls -la standby.signal
#-rw------- 1 qhb qhb 0 Nov 25 15:34 standby.signal
- In the qhb.conf configuration file, specify the connection string to the primary server:
primary_conninfo = 'user=repluser password=repluser host=172.31.101.70 port=5432 sslmode=disable sslcompression=0 gssencmode=disable krbsrvname=postgres target_session_attrs=any'
Note
The connection string is long, and some console editors may wrap it, breaking its visual integrity; make sure it remains a single line in the file.
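A quick sanity check of the connection string can be sketched as follows. The scratch path and the shortened sample conninfo line are assumptions for illustration; in a real setup CONF would point at /u01/qhb/db/qhb.conf.

```shell
#!/bin/sh
# Sketch: verify that primary_conninfo in qhb.conf carries the fields this
# guide relies on (user, host, port). CONF points at a scratch file here;
# in a real setup it would be /u01/qhb/db/qhb.conf.
CONF="${CONF:-/tmp/qhb.conf.example}"

# Seed a sample line if the scratch file has none, so the check can run anywhere
grep -q '^primary_conninfo' "$CONF" 2>/dev/null || \
    echo "primary_conninfo = 'user=repluser password=repluser host=172.31.101.70 port=5432'" >> "$CONF"

line=$(grep '^primary_conninfo' "$CONF" | tail -n 1)
for field in user= host= port=; do
    case "$line" in
        *"$field"*) ;;                       # field present
        *) echo "missing $field"; exit 1 ;;
    esac
done
echo "primary_conninfo looks complete"
```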
- In both cases, the DBMS parameters in the configuration file are copied from the master, so some values should be changed to suit the replica.
Set the following parameters on the replica:
vi /u01/qhb/db/qhb.conf
listen_addresses = '*'
logging_collector = on
log_directory = 'log' # Make sure the directory exists and you have required permissions.
- Only now can you start the DBMS on the second server, the replica:
systemctl start qhb && systemctl status qhb
Checking
- Check on the master:
/usr/local/qhb/bin/psql -x -c "select * from pg_stat_replication"
One replication connection should appear:
pid | 15566
usesysid | 16385
usename | repluser
application_name | walreceiver
client_addr | 172.31.101.71
client_hostname |
client_port | 40516
backend_start | 2020-11-25 09:45:22.249802-03
backend_xmin |
state | streaming
sent_lsn | 0/30001F8
write_lsn | 0/30001F8
flush_lsn | 0/30001F8
replay_lsn | 0/30001F8
write_lag |
flush_lag |
replay_lag |
sync_priority | 1
sync_state | sync
^^^^^^^
reply_time | 2020-11-25 09:47:22.710523-03
The sync_state parameter must have the value sync (synchronous replication).
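For monitoring, this sync_state check can be automated. The sketch below feeds a sample value into the parsing logic so it can be exercised without a live server; the psql command in the comment is what would supply real input.

```shell
#!/bin/sh
# Sketch: check that at least one standby is attached synchronously.
# Real input would come from something like:
#   psql -At -c "SELECT sync_state FROM pg_stat_replication"
# (one value per line); a sample value is piped in here instead.
check_sync() {
    grep -qx 'sync'      # succeeds if any input line is exactly "sync"
}

if printf 'sync\n' | check_sync; then
    echo "synchronous standby attached"
else
    echo "no synchronous standby attached" >&2
    exit 1
fi
```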
- Create a table on the master:
CREATE TABLE rtest (vname VARCHAR(40));
- Insert a million rows with random values:
insert into rtest select substr(md5(random()::text), 0, 40) from generate_series(1,1000000);
- Check reading from the table on the standby:
qhb=# select count(*) from rtest;
count
---------
1000000
(1 row)
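The read check above can be turned into a simple consistency test. The counts below are sample values, an assumption standing in for the output of a count query run against each server, so the comparison itself can run anywhere.

```shell
#!/bin/sh
# Sketch: compare a row count taken on the master with one taken on the replica.
# In practice each number would come from something like:
#   psql -At -h <server-ip> -c "SELECT count(*) FROM rtest"
# Sample values are used here so the comparison logic can run anywhere.
master_count=1000000
replica_count=1000000

if [ "$master_count" -eq "$replica_count" ]; then
    echo "replica is in sync: $replica_count rows"
else
    echo "row count mismatch: master=$master_count replica=$replica_count" >&2
    exit 1
fi
```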
Additional Settings
In some cases, it is recommended to specify the replication slot explicitly.
On the replica, add the parameter primary_slot_name = 'standby_slot2_qhb' to the qhb.auto.conf file.
On the master, execute the SQL command:
SELECT pg_create_physical_replication_slot('standby_slot2_qhb');
A query for active replication slots should now show our replica on the master:
SELECT * FROM pg_replication_slots WHERE active=true;
See Also
qhb_basebackup: external manual with instructions and explanations.