qhb_upgrade

qhb_upgrade — upgrade a QHB server instance

Synopsis

qhb_upgrade -b oldbindir [-B newbindir] -d oldconfigdir -D newconfigdir [option...]

Description

qhb_upgrade allows data stored in QHB data files to be upgraded to a later QHB major version without the data dump/restore typically required for major version upgrades, e.g., from 1.1.0 to 1.3.1 or from 1.4 to 1.5. It is not required for minor version upgrades, e.g., from 1.3.0 to 1.3.1.

Major QHB releases regularly add new features that often change the layout of the system tables, but the internal data storage format rarely changes. qhb_upgrade uses this fact to perform rapid upgrades by creating new system tables and simply reusing the old user data files. If a future major release ever changes the data storage format in a way that makes the old data format unreadable, qhb_upgrade will not be usable for such upgrades.

qhb_upgrade does its best to make sure the old and new clusters are binary-compatible, e.g., by checking for compatible compile-time settings, including 32/64-bit binaries. It is important that any external modules are also binary compatible, though this cannot be checked by qhb_upgrade.

qhb_upgrade supports upgrades from 1.1.X and later to the current major release of QHB, including snapshot and beta releases.

WARNING
Upgrading a cluster causes the destination to execute arbitrary code of the source superusers' choice. Ensure that the source superusers are trusted before upgrading.

The table below shows the source and target version combinations for which qhb_upgrade is applicable.

Source Database    | QHB 1.1.x | QHB 1.2.x | QHB 1.3.x | QHB 1.4.x | QHB 1.5.x
-------------------|-----------|-----------|-----------|-----------|-----------
QHB 1.1.x          |           |     +     |     +     |     +     |     +
QHB 1.2.x          |           |           |     +     |     +     |     +
QHB 1.3.x          |           |           |           |     +     |     +
QHB 1.4.x          |           |           |           |           |     +
PostgreSQL 12.x    |     +     |     +     |     +     |     +     |     +
PostgreSQL 13.x    |           |           |           |           |     +
PostgreSQL 14.x    |           |           |           |           |     +

WARNING!
Because QHB 1.4.0 introduced generated keys for encryption, qhb_upgrade cannot upgrade from QHB 1.3.0 and earlier versions to QHB 1.4.0 and later versions if the clusters use encryption (qss_mode = 1).


Options

qhb_upgrade accepts the following command-line arguments:

-b bindir
--old-bindir=bindir
The old QHB executable directory; environment variable PGBINOLD.

-B bindir
--new-bindir=bindir
The new QHB executable directory; default is the directory where qhb_upgrade resides; environment variable PGBINNEW.

-c
--check
Check clusters only, don't change any data.

-d configdir
--old-datadir=configdir
The old database cluster configuration directory; environment variable PGDATAOLD.

-D configdir
--new-datadir=configdir
The new database cluster configuration directory; environment variable PGDATANEW.

-j njobs
--jobs=njobs
Number of simultaneous processes or threads to use.

-k
--link
Use hard links instead of copying files to the new cluster (this option can be used for upgrade only; it should not be used for migration from PostgreSQL).

-N
--no-sync
By default, qhb_upgrade will wait for all files of the upgraded cluster to be written safely to disk. This option causes qhb_upgrade to return without waiting, which is faster, but means that a subsequent operating system crash can leave the data directory corrupt. Generally, this option is useful for testing but should not be used on a production installation.

-o options
--old-options options
Options to be passed directly to the old qhb command; multiple option invocations are appended.

-O options
--new-options options
Options to be passed directly to the new qhb command; multiple option invocations are appended.

-p port
--old-port=port
The old cluster port number; environment variable PGPORTOLD.

-P port
--new-port=port
The new cluster port number; environment variable PGPORTNEW.

-r
--retain
Retain SQL and log files even after successful completion.

-s dir
--socketdir=dir
Directory to use for qhbmaster sockets during upgrade; default is current working directory; environment variable PGSOCKETDIR.

-U username
--username=username
Cluster's install user name; environment variable PGUSER.

-v
--verbose
Enable verbose internal logging.

-V
--version
Display version information, then exit.

--clone
Use efficient file cloning (also known as “reflinks” on some systems) instead of copying files to the new cluster. This can result in near-instantaneous copying of the data files, giving the speed advantages of -k/--link while leaving the old cluster untouched.
File cloning is only supported on some operating systems and file systems. If it is selected but not supported, the qhb_upgrade run will error. At present, it is supported on Linux (kernel 4.5 or later) with Btrfs and XFS (on file systems created with reflink support), and on macOS with APFS.

--copy
Copy files to the new cluster. This is the default. (See also --link and --clone.)

-?
--help
Show help, then exit.


Usage

These are the steps to perform an upgrade with qhb_upgrade:

  1. Optionally move the old cluster

If you are using a version-specific installation directory, e.g., /opt/QHB/1.5.1, you do not need to move the old cluster. The graphical installers all use version-specific installation directories.

If your installation directory is not version-specific, e.g., /usr/local/qhb, it is necessary to move the current QHB install directory so it does not interfere with the new QHB installation. Once the current QHB server is shut down, it is safe to rename the QHB installation directory; assuming the old directory is /usr/local/qhb, you can do:

mv /usr/local/qhb /usr/local/qhb.old

to rename the directory.

  2. For source installs, build the new version

Build the new QHB source with configure flags that are compatible with the old cluster. qhb_upgrade will check qhb_controldata to make sure all settings are compatible before starting the upgrade.

  3. Install the new QHB binaries

Install the new server's binaries and support files. qhb_upgrade is included in a default installation.

For source installs, if you wish to install the new server in a custom location, use the prefix variable:

make prefix=/usr/local/qhb.new install

  4. Initialize the new QHB cluster

Initialize the new cluster using qhb_bootstrap (or initdb). Again, use compatible qhb_bootstrap (or initdb) flags that match the old cluster. Many prebuilt installers do this step automatically. There is no need to start the new cluster.
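As a sketch (the paths and flags below are hypothetical, and qhb_bootstrap is assumed to accept initdb-style options), initializing the new cluster with settings matching the old one might look like:

```shell
# Hypothetical paths; reuse the same locale/encoding flags the old cluster was created with.
/usr/local/qhb/bin/qhb_bootstrap -D /opt/QHB/1.5.3 --locale=en_US.UTF-8 --encoding=UTF8
```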

  5. Install extension shared object files

Many extensions and custom modules, whether from share/extension, qhb-contrib or another source, use shared object files (or DLLs), e.g., pgcrypto.so. If the old cluster used these, shared object files matching the new server binary must be installed in the new cluster, usually via operating system commands. Do not load the schema definitions, e.g., CREATE EXTENSION pgcrypto, because these will be duplicated from the old cluster. If extension updates are available, qhb_upgrade will report this and create a script that can be run later to update them.
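For a module shipped as a shared object, installation typically amounts to copying the file built against the new server into the new installation's library directory. A sketch with hypothetical paths:

```shell
# The .so must be compiled against the NEW server's headers, not the old one's.
cp pgcrypto.so /usr/local/qhb/lib/
```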

  6. Copy custom full-text search files

Copy any custom full text search files (dictionary, synonym, thesaurus, stop words) from the old to the new cluster.

  7. Adjust authentication

qhb_upgrade will connect to the old and new servers several times, so you might want to set authentication to peer in qhb_hba.conf or use a ~/.pgpass file (see Section The Password File).
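Assuming qhb_hba.conf uses the familiar pg_hba.conf record format (an assumption; consult the QHB documentation on client authentication), a temporary entry allowing passwordless local connections might look like:

```
# TYPE  DATABASE  USER  METHOD
local   all       all   peer
```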

  8. Stop both servers

Make sure both database servers are stopped using, on Unix, e.g.:

qhb_ctl -D /opt/QHB-1.1.0 stop
qhb_ctl -D /opt/QHB-1.5.3 stop

Streaming replication and log-shipping standby servers must be running during this shutdown so they receive all changes.

  9. Prepare for standby server upgrades

If you are upgrading standby servers using methods outlined in section Step 11, verify that the old standby servers are caught up by running qhb_controldata against the old primary and standby clusters. Verify that the “Latest checkpoint location” values match in all clusters. Also, make sure wal_level is not set to minimal in the qhb.conf file on the new primary cluster.
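For example (data directory paths are illustrative), the checkpoint positions can be compared by running qhb_controldata against each cluster:

```shell
# Run on the primary and on each standby; the reported values must match everywhere.
qhb_controldata /opt/QHB/1.1.0 | grep 'Latest checkpoint location'
```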

  10. Run qhb_upgrade

Always run the qhb_upgrade binary of the new server, not the old one. qhb_upgrade requires the specification of the old and new cluster's data and executable (bin) directories. You can also specify user and port values, and whether you want the data files linked or cloned instead of the default copy behavior.

If you use link mode, the upgrade will be much faster (no file copying) and use less disk space, but you will not be able to access your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the same file system. (Tablespaces and pg_wal can be on different file systems.) Clone mode provides the same speed and disk space advantages but does not cause the old cluster to be unusable once the new cluster is started. Clone mode also requires that the old and new data directories be in the same file system. This mode is only available on certain operating systems and file systems.

The --jobs option allows multiple CPU cores to be used for copying/linking of files and to dump and restore database schemas in parallel; a good place to start is the maximum of the number of CPU cores and tablespaces. This option can dramatically reduce the time to upgrade a multi-database server running on a multiprocessor machine.
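Putting these options together, a typical invocation might look like the following sketch (directories and job count are illustrative); the first run verifies compatibility only, the second performs the upgrade:

```shell
# Verification pass: checks only, no data is modified.
qhb_upgrade --old-bindir=/usr/local/qhb.old/bin --new-bindir=/usr/local/qhb/bin \
            --old-datadir=/opt/QHB/1.1.0 --new-datadir=/opt/QHB/1.5.3 \
            --link --jobs=4 --check

# Actual upgrade: same options, without --check.
qhb_upgrade --old-bindir=/usr/local/qhb.old/bin --new-bindir=/usr/local/qhb/bin \
            --old-datadir=/opt/QHB/1.1.0 --new-datadir=/opt/QHB/1.5.3 \
            --link --jobs=4
```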

WARNING!
You should disable logon_jobs by setting it to off and restart the database system before upgrading the cluster; otherwise the stability of the upgrade process cannot be guaranteed (see Chapter Security Profiles).

WARNING!
You should disable integrity_checks by setting it to off and restart the database system before upgrading the cluster with qhb_upgrade; otherwise the stability of the upgrade process cannot be guaranteed.

Once started, qhb_upgrade will verify the two clusters are compatible and then do the upgrade. You can use qhb_upgrade --check to perform only the checks, even if the old server is still running. qhb_upgrade --check will also outline any manual adjustments you will need to make after the upgrade. If you are going to be using link or clone mode, you should use the option --link or --clone with --check to enable mode-specific checks. qhb_upgrade requires write permission in the current directory.

Obviously, no one should be accessing the clusters during the upgrade. qhb_upgrade defaults to running servers on port 50432 to avoid unintended client connections. You can use the same port number for both clusters when doing an upgrade because the old and new clusters will not be running at the same time. However, when checking an old running server, the old and new port numbers must be different.

If an error occurs while restoring the database schema, qhb_upgrade will exit and you will have to revert to the old cluster as outlined in Step 17 below. To try qhb_upgrade again, you will need to modify the old cluster so the qhb_upgrade schema restore succeeds. If the problem is a share/extension module, you might need to uninstall the share/extension module from the old cluster and install it in the new cluster after the upgrade, assuming the module is not being used to store user data.

  11. Upgrade streaming replication and log-shipping standby servers

If you used link mode and have Streaming Replication (see Section Streaming Replication) or Log-Shipping (see Section Log-Shipping Standby Servers) standby servers, you can follow these steps to quickly upgrade them. You will not be running qhb_upgrade on the standby servers, but rather rsync on the primary. Do not start any servers yet.

If you did not use link mode, do not have or do not want to use rsync, or want an easier solution, skip the instructions in this section and simply recreate the standby servers once qhb_upgrade completes and the new primary is running.

  • Install the new QHB binaries on standby servers
    Make sure the new binaries and support files are installed on all standby servers.

  • Make sure the new standby data directories do not exist
    Make sure the new standby data directories do not exist or are empty. If qhb_bootstrap (or initdb) was run, delete the standby servers' new data directories.

  • Install extension shared object files
    Install the same extension shared object files on the new standbys that you installed in the new primary cluster.

  • Stop standby servers
    If the standby servers are still running, stop them now using the above instructions.

  • Save configuration files
    Save any configuration files from the old standbys' configuration directories you need to keep, e.g., qhb.conf (and any files included by it), qhb.auto.conf, qhb_hba.conf, because these will be overwritten or removed in the next step.

  • Run rsync
    When using link mode, standby servers can be quickly upgraded using rsync. To accomplish this, from a directory on the primary server that is above the old and new database cluster directories, run this on the primary for each standby server:

rsync --archive --delete --hard-links --size-only --no-inc-recursive old_cluster new_cluster remote_dir
where old_cluster and new_cluster are relative to the current directory on the primary, and remote_dir is above the old and new cluster directories on the standby. The directory structure under the specified directories on the primary and standbys must match. Consult the rsync manual page for details on specifying the remote directory, e.g.:

rsync --archive --delete --hard-links --size-only --no-inc-recursive /opt/QHB/1.1.0 \
      /opt/QHB/1.5.3 standby.example.com:/opt/QHB

You can verify what the command will do using rsync's --dry-run option. While rsync must be run on the primary for at least one standby, it is possible to run rsync on an upgraded standby to upgrade other standbys, as long as the upgraded standby has not been started.

What this does is to record the links created by qhb_upgrade's link mode that connect files in the old and new clusters on the primary server. It then finds matching files in the standby's old cluster and creates links for them in the standby's new cluster. Files that were not linked on the primary are copied from the primary to the standby. (They are usually small.) This provides rapid standby upgrades. Unfortunately, rsync needlessly copies files associated with temporary and unlogged tables because these files don't normally exist on standby servers.

If you have tablespaces, you will need to run a similar rsync command for each tablespace directory, e.g.:

rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tblsp/QHB_1.1.0_201510051 \
      /vol1/pg_tblsp/QHB_1.5.3_201608131 standby.example.com:/vol1/pg_tblsp

If you have relocated pg_wal outside the data directories, rsync must be run on those directories too.
  • Configure streaming replication and log-shipping standby servers
    Configure the servers for log shipping. (You do not need to run pg_backup_start() and pg_backup_stop() or take a file system backup as the standbys are still synchronized with the primary.) Replication slots are not copied and must be recreated.

  12. Restore qhb_hba.conf

If you modified qhb_hba.conf, restore its original settings. It might also be necessary to adjust other configuration files in the new cluster to match the old cluster, e.g., qhb.conf (and any files included by it), qhb.auto.conf.

  13. Start the new server

The new server can now be safely started, and then any rsync'ed standby servers.

  14. Post-upgrade processing

If any post-upgrade processing is required, qhb_upgrade will issue warnings as it completes. It will also generate script files that must be run by the administrator. The script files will connect to each database that needs post-upgrade processing. Each script should be run using:

psql --username=qhb --file=script.sql qhb

The scripts can be run in any order and can be deleted once they have been run.

WARNING!
In general it is unsafe to access tables referenced in rebuild scripts until the rebuild scripts have run to completion; doing so could yield incorrect results or poor performance. Tables not referenced in rebuild scripts can be accessed immediately.

  15. Statistics

Because optimizer statistics are not transferred by qhb_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster.
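qhb_upgrade prints the exact command to run; if QHB provides an equivalent of PostgreSQL's vacuumdb tool (an assumption here; follow qhb_upgrade's own output), the staged form would look like:

```shell
# Regenerate optimizer statistics for all databases, from coarse to fine stages.
vacuumdb --all --analyze-in-stages --port=5432 --username=qhb
```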

  16. Delete old cluster

Once you are satisfied with the upgrade, you can delete the old cluster's data directories by running the script mentioned when qhb_upgrade completes. (Automatic deletion is not possible if you have user-defined tablespaces inside the old data directory.) You can also delete the old installation directories (e.g., bin, share).

  17. Reverting to old cluster

If, after running qhb_upgrade, you wish to revert to the old cluster, there are several options:

  • If the --check option was used, the old cluster was unmodified; it can be restarted.

  • If the --link option was not used, the old cluster was unmodified; it can be restarted.

  • If the --link option was used, the data files might be shared between the old and new cluster:

    • If qhb_upgrade aborted before linking started, the old cluster was unmodified; it can be restarted.

    • If you did not start the new cluster, the old cluster was unmodified except that, when linking started, a .old suffix was appended to $PGDATA/global/pg_control. To reuse the old cluster, remove the .old suffix from $PGDATA/global/pg_control.old; you can then restart the old cluster.

    • If you did start the new cluster, it has written to shared files and it is unsafe to use the old cluster. The old cluster will need to be restored from backup in this case.
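For the case where the new cluster was never started, restoring the control file is a single rename (assuming $PGDATA points at the old cluster's data directory):

```shell
# Put the old cluster's control file back so the old server can start again.
mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"
```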


Notes

qhb_upgrade creates various working files, such as schema dumps, stored within qhb_upgrade_output.d in the directory of the new cluster. Each run creates a new subdirectory named with an ISO 8601 timestamp (%Y%m%dT%H%M%S), where all its generated files are stored. qhb_upgrade_output.d and the files it contains will be removed automatically if qhb_upgrade completes successfully, but in the event of trouble the files there may provide useful debugging information.

qhb_upgrade launches short-lived qhbmasters in the old and new data directories. Temporary Unix socket files for communication with these qhbmasters are, by default, made in the current working directory. In some situations the path name for the current directory might be too long to be a valid socket name. In that case you can use the -s option to put the socket files in some directory with a shorter path name. For security, be sure that directory is not readable or writable by any other users.

All failure, rebuild, and reindex cases will be reported by qhb_upgrade if they affect your installation; post-upgrade scripts to rebuild tables and indexes will be generated automatically. If you are trying to automate the upgrade of many clusters, you should find that clusters with identical database schemas require the same post-upgrade steps for all cluster upgrades; this is because the post-upgrade steps are based on the database schemas, and not user data.

For deployment testing, create a schema-only copy of the old cluster, insert dummy data, and upgrade that.

qhb_upgrade does not support upgrading of databases containing table columns using these reg* OID-referencing system data types:

regcollation
regconfig
regdictionary
regnamespace
regoper
regoperator
regproc
regprocedure
(regclass, regrole, and regtype can be upgraded.)

If you want to use link mode and you do not want your old cluster to be modified when the new cluster is started, consider using the clone mode. If that is not available, make a copy of the old cluster and upgrade that in link mode. To make a valid copy of the old cluster, use rsync to create a dirty copy of the old cluster while the server is running, then shut down the old server and run rsync --checksum again to update the copy with any changes to make it consistent. (--checksum is necessary because rsync only has file modification-time granularity of one second.) You might want to exclude some files, e.g., qhbmaster.pid, as documented in Section Making a Base Backup Using the Low Level API. If your file system supports file system snapshots or copy-on-write file copies, you can use that to make a backup of the old cluster and tablespaces, though the snapshot and copies must be created simultaneously or while the database server is down.
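The two-pass rsync procedure described above can be sketched as follows (paths are hypothetical):

```shell
# Pass 1: take a dirty copy while the old server is still running.
rsync --archive /opt/QHB/1.1.0/ /backup/qhb-old-copy/

# Stop the old server, then make the copy consistent; --checksum is required
# because rsync's modification-time granularity is only one second.
qhb_ctl -D /opt/QHB/1.1.0 stop
rsync --archive --checksum --exclude=qhbmaster.pid /opt/QHB/1.1.0/ /backup/qhb-old-copy/
```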


See Also

qhb_bootstrap, initdb, qhb_ctl, qhb_dump, qhb Instance