Monitoring Database Activity

A database administrator frequently wonders, “What is the system doing right now?” This chapter discusses how to find that out.

Several tools are available for monitoring database activity and analyzing performance. Most of this chapter is devoted to describing QHB's cumulative statistics system, but one should not neglect regular Unix monitoring programs such as ps, top, iostat, and vmstat. Also, once a poorly performing query has been identified, further investigation might be needed using QHB's [EXPLAIN] command. Section Using EXPLAIN discusses EXPLAIN and other methods for understanding the behavior of an individual query.



Standard Unix Tools

On most Unix platforms, QHB modifies its command title as reported by ps, so that individual server processes can readily be identified. A sample display is

$ ps auxww | grep ^qhb
qhb  15551  0.0  0.1  57536  7132 pts/0    S    18:02   0:00 qhb -i
qhb  15554  0.0  0.0  57536  1184 ?        Ss   18:02   0:00 qhb: background writer
qhb  15555  0.0  0.0  57536   916 ?        Ss   18:02   0:00 qhb: checkpointer
qhb  15556  0.0  0.0  57536   916 ?        Ss   18:02   0:00 qhb: walwriter
qhb  15557  0.0  0.0  58504  2244 ?        Ss   18:02   0:00 qhb: autovacuum launcher
qhb  15582  0.0  0.0  58772  3080 ?        Ss   18:04   0:00 qhb: joe runbug 127.0.0.1 idle
qhb  15606  0.0  0.0  58772  3052 ?        Ss   18:07   0:00 qhb: tgl regression [local] SELECT waiting
qhb  15610  0.0  0.0  58772  3056 ?        Ss   18:07   0:00 qhb: tgl regression [local] idle in transaction

(The appropriate invocation of ps varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the primary server process. The command arguments shown for it are the same ones used when it was launched. The next four processes are background worker processes automatically launched by the primary process. (The “autovacuum launcher” process will not be present if you have set the system not to run autovacuum.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form

qhb: user database host activity

The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. The activity can be idle (i.e., waiting for a client command), idle in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT. Also, waiting is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. (Process 15610 must be the blocker, because there is no other active session. In more complicated cases it would be necessary to look into the pg_locks system view to determine who is blocking whom.)
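
For more complicated cases, blocking relationships can be queried directly. The following sketch assumes that QHB provides a PostgreSQL-compatible pg_blocking_pids() function; if it does not, the same information can be derived by joining pg_locks against itself on the lock identity columns.

```sql
-- List sessions that are waiting on a lock, together with the PIDs of the
-- sessions holding the conflicting locks.
-- Assumes a PostgreSQL-compatible pg_blocking_pids() function.
SELECT pid,
       usename,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

In the ps example above, such a query would show process 15606 with 15610 in its blocked_by column.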

If cluster_name has been configured, the cluster name will also be shown in the ps output:

$ psql -c 'SHOW cluster_name'
 cluster_name
--------------
 server1
(1 row)


$ ps aux|grep server1
qhb   27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 qhb: server1: background writer
...

If you have turned off update_process_title then the activity indicator is not updated; the process title is set only once when a new process is launched. On some platforms this saves a measurable amount of per-command overhead; on others it's insignificant.

Tip
Solaris requires special handling. You must use /usr/ucb/ps, rather than /bin/ps. You also must use two w flags, not just one. In addition, your original invocation of the qhb command must have a shorter ps status display than that provided by each server process. If you fail to do all three things, the ps output for each server process will be the original qhb command line.



The Cumulative Statistics System

QHB's cumulative statistics system supports collection and reporting of information about server activity. Presently, accesses to tables and indexes in both disk-block and individual-row terms are counted. The total number of rows in each table, and information about vacuum and analyze actions for each table are also counted. If enabled, calls to user-defined functions and the total time spent in each one are counted as well.

QHB also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the cumulative statistics system.


Statistics Collection Configuration

Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in qhb.conf. (See Chapter Server Configuration for details about setting configuration parameters.)

The parameter track_activities enables monitoring of the current command being executed by any server process.

The parameter track_counts controls whether cumulative statistics are collected about table and index accesses.

The parameter track_functions enables tracking of usage of user-defined functions.

The parameter track_io_timing enables monitoring of block read and write times.

The parameter track_wal_io_timing enables monitoring of WAL write times.

Normally these parameters are set in qhb.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the [SET] command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET.)
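
For example, a superuser could enable the optional counters for a single session before running a workload, roughly as follows (parameter values shown follow PostgreSQL's conventions and are assumed to apply to QHB):

```sql
-- Requires superuser privileges; affects only the current session.
SET track_io_timing = on;      -- time block reads and writes
SET track_functions = 'all';   -- track both SQL and procedural-language functions
SHOW track_functions;          -- verify the session-local setting
```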

Cumulative statistics are collected in shared memory. Every QHB process collects statistics locally, then updates the shared data at appropriate intervals. When a server, including a physical replica, shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that statistics can be retained across server restarts. In contrast, when starting from an unclean shutdown (e.g., after an immediate shutdown, a server crash, starting from a base backup, or point-in-time recovery), all statistics counters are reset.


Viewing Statistics

Several predefined views, listed in Table 1, are available to show the current state of the system. There are also several other views, listed in Table 2, available to show the accumulated statistics. Alternatively, one can build custom views using the underlying cumulative statistics functions, as discussed in Section Statistics Functions.

When using the cumulative statistics views and functions to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process flushes out accumulated statistics to shared memory just before going idle, but not more frequently than once per PGSTAT_MIN_INTERVAL milliseconds (1 second unless altered while building the server); so a query or transaction still in progress does not affect the displayed totals and the displayed information lags behind actual activity. However, current-query information collected by track_activities is always up-to-date.

Another important point is that when a server process is asked to display any of the accumulated statistics, the accessed values are, in the default configuration, cached until the end of its current transaction. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you.

When analyzing statistics interactively, or with expensive queries, the time delta between accesses to individual statistics can lead to significant skew in the cached statistics. To minimize skew, stats_fetch_consistency can be set to snapshot, at the price of increased memory usage for caching statistics data that may not be needed. Conversely, if it is known that the statistics will be accessed only once, caching them is unnecessary and can be avoided by setting stats_fetch_consistency to none. You can invoke pg_stat_clear_snapshot() to discard the current transaction's statistics snapshot or cached values (if any). The next use of statistical information will (in snapshot mode) cause a new snapshot to be built or (in cache mode) accessed statistics to be cached.
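
In practice, snapshot mode might be used like this (a sketch assuming PostgreSQL-compatible behavior of stats_fetch_consistency and pg_stat_clear_snapshot()):

```sql
BEGIN;
SET LOCAL stats_fetch_consistency = 'snapshot';
SELECT sum(seq_scan) FROM pg_stat_user_tables;  -- builds a consistent snapshot
SELECT sum(idx_scan) FROM pg_stat_user_tables;  -- served from the same snapshot
SELECT pg_stat_clear_snapshot();                -- discard the snapshot
SELECT sum(seq_scan) FROM pg_stat_user_tables;  -- builds a fresh snapshot
COMMIT;
```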

A transaction can also see its own statistics (not yet flushed out to the shared memory statistics) in the views pg_stat_xact_all_tables, pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction.
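
The difference can be observed by comparing the transaction-local and cumulative counters inside a single transaction. In this sketch, mytable is a hypothetical user table:

```sql
BEGIN;
INSERT INTO mytable VALUES (1);   -- mytable is a hypothetical table

-- The transaction-local view already counts the insert:
SELECT n_tup_ins FROM pg_stat_xact_user_tables WHERE relname = 'mytable';

-- The cumulative view does not reflect it until after commit:
SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'mytable';
COMMIT;
```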

Some of the information in the dynamic statistics views shown in Table 1 is security restricted. Ordinary users can see all the information only about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and roles with privileges of the built-in role pg_read_all_stats (see also Section Predefined Roles) can see all the information about all sessions.

Table 1. Dynamic Statistics Views

View Name - Description
pg_stat_activity - One row per server process, showing information related to the current activity of that process, such as state and current query. See pg_stat_activity for details.
pg_stat_replication - One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See pg_stat_replication for details.
pg_stat_wal_receiver - Only one row, showing statistics about the WAL receiver from that receiver's connected server. See pg_stat_wal_receiver for details.
pg_stat_recovery_prefetch - Only one row, showing statistics about blocks prefetched during recovery. See pg_stat_recovery_prefetch for details.
pg_stat_subscription - At least one row per subscription, showing information about the subscription workers. See pg_stat_subscription for details.
pg_stat_ssl - One row per connection (regular and replication), showing information about SSL used on this connection. See pg_stat_ssl for details.
pg_stat_gssapi - One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection. See pg_stat_gssapi for details.
pg_stat_progress_analyze - One row for each backend (including autovacuum worker processes) running ANALYZE, showing current progress. See Section ANALYZE Progress Reporting.
pg_stat_progress_create_index - One row for each backend running CREATE INDEX or REINDEX, showing current progress. See Section CREATE INDEX Progress Reporting.
pg_stat_progress_vacuum - One row for each backend (including autovacuum worker processes) running VACUUM, showing current progress. See Section VACUUM Progress Reporting.
pg_stat_progress_cluster - One row for each backend running CLUSTER or VACUUM FULL, showing current progress. See Section CLUSTER Progress Reporting.
pg_stat_progress_basebackup - One row for each WAL sender process streaming a base backup, showing current progress. See Section Base Backup Progress Reporting.
pg_stat_progress_copy - One row for each backend running COPY, showing current progress. See Section COPY Progress Reporting.

Table 2. Collected Statistics Views

View Name - Description
pg_stat_archiver - One row only, showing statistics about the WAL archiver process's activity. See pg_stat_archiver for details.
pg_stat_bgwriter - One row only, showing statistics about the background writer process's activity. See pg_stat_bgwriter for details.
pg_stat_wal - One row only, showing statistics about WAL activity. See pg_stat_wal for details.
pg_stat_database - One row per database, showing database-wide statistics. See pg_stat_database for details.
pg_stat_database_conflicts - One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. See pg_stat_database_conflicts for details.
pg_stat_all_tables - One row for each table in the current database, showing statistics about accesses to that specific table. See pg_stat_all_tables for details.
pg_stat_sys_tables - Same as pg_stat_all_tables, except that only system tables are shown.
pg_stat_user_tables - Same as pg_stat_all_tables, except that only user tables are shown.
pg_stat_xact_all_tables - Similar to pg_stat_all_tables, but counts actions taken so far within the current transaction (which are not yet included in pg_stat_all_tables and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view.
pg_stat_xact_sys_tables - Same as pg_stat_xact_all_tables, except that only system tables are shown.
pg_stat_xact_user_tables - Same as pg_stat_xact_all_tables, except that only user tables are shown.
pg_stat_all_indexes - One row for each index in the current database, showing statistics about accesses to that specific index. See pg_stat_all_indexes for details.
pg_stat_sys_indexes - Same as pg_stat_all_indexes, except that only indexes on system tables are shown.
pg_stat_user_indexes - Same as pg_stat_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_tables - One row for each table in the current database, showing statistics about I/O on that specific table. See pg_statio_all_tables for details.
pg_statio_sys_tables - Same as pg_statio_all_tables, except that only system tables are shown.
pg_statio_user_tables - Same as pg_statio_all_tables, except that only user tables are shown.
pg_statio_all_indexes - One row for each index in the current database, showing statistics about I/O on that specific index. See pg_statio_all_indexes for details.
pg_statio_sys_indexes - Same as pg_statio_all_indexes, except that only indexes on system tables are shown.
pg_statio_user_indexes - Same as pg_statio_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_sequences - One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See pg_statio_all_sequences for details.
pg_statio_sys_sequences - Same as pg_statio_all_sequences, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.)
pg_statio_user_sequences - Same as pg_statio_all_sequences, except that only user sequences are shown.
pg_stat_user_functions - One row for each tracked function, showing statistics about executions of that function. See pg_stat_user_functions for details.
pg_stat_xact_user_functions - Similar to pg_stat_user_functions, but counts only calls during the current transaction (which are not yet included in pg_stat_user_functions).
pg_stat_slru - One row per SLRU, showing statistics of operations. See pg_stat_slru for details.
pg_stat_replication_slots - One row per replication slot, showing statistics about the replication slot's usage. See pg_stat_replication_slots for details.
pg_stat_subscription_stats - One row per subscription, showing statistics about errors. See pg_stat_subscription_stats for details.

The per-index statistics are particularly useful to determine which indexes are being used and how effective they are.
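
For example, indexes that have never been scanned are candidates for review. This sketch assumes PostgreSQL-compatible columns in pg_stat_user_indexes and a pg_relation_size() function:

```sql
-- Indexes on user tables that have never been used by an index scan,
-- largest first (size functions assumed PostgreSQL-compatible).
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Note that such a query reflects only activity since the counters were last reset, so an index may appear unused simply because the statistics are young.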

The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the entire story: due to the way in which QHB handles disk I/O, data that is not in the QHB buffer cache might still reside in the kernel's I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more detailed information on QHB I/O behavior are advised to use the QHB statistics views in combination with operating system utilities that allow insight into the kernel's handling of I/O.
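
A per-table buffer cache hit ratio can be computed along these lines (column names assume PostgreSQL-compatible definitions of pg_statio_user_tables):

```sql
-- Tables with the most block accesses, and the fraction of those
-- accesses served from QHB's buffer cache rather than read from the kernel.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC
LIMIT 10;
```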


pg_stat_activity

The pg_stat_activity view will have one row per server process, showing information related to the current activity of that process.

Table 3. pg_stat_activity View

Column Type
Description
datid oid
OID of the database this backend is connected to
datname name
Name of the database this backend is connected to
pid integer
Process ID of this backend
leader_pid integer
Process ID of the parallel group leader, if this process is a parallel query worker. NULL if this process is a parallel group leader or does not participate in parallel query.
usesysid oid
OID of the user logged into this backend
usename name
Name of the user logged into this backend
application_name text
Name of the application that is connected to this backend
client_addr inet
IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum.
client_hostname text
Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
client_port integer
TCP port number that the client is using for communication with this backend, or -1 if a Unix socket is used. If this field is null, it indicates that this is an internal server process.
backend_start timestamp with time zone
Time when this process was started. For client backends, this is the time the client connected to the server.
xact_start timestamp with time zone
Time when this process's current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start column.
query_start timestamp with time zone
Time when the currently active query was started, or if state is not active, when the last query was started
state_change timestamp with time zone
Time when the state was last changed
wait_event_type text
The type of event for which the backend is waiting, if any; otherwise NULL. See Table 4.
wait_event text
Wait event name if backend is currently waiting, otherwise NULL. See Table 5 through Table 13.
state text
Current overall state of this backend. Possible values are:
  • active: The backend is executing a query.
  • idle: The backend is waiting for a new client command.
  • idle in transaction: The backend is in a transaction, but is not currently executing a query.
  • idle in transaction (aborted): This state is similar to idle in transaction, except one of the statements in the transaction caused an error.
  • fastpath function call: The backend is executing a fast-path function.
  • disabled: This state is reported if track_activities is disabled in this backend.
backend_xid xid
Top-level transaction identifier of this backend, if any.
backend_xmin xid
The current backend's xmin horizon.
query_id bigint
Identifier of this backend's most recent query. If state is active, this field shows the identifier of the currently executing query. In all other states, it shows the identifier of the last query that was executed. Query identifiers are not computed by default, so this field will be null unless the compute_query_id parameter is enabled or a third-party module that computes query identifiers is configured.
query text
Text of this backend's most recent query. If state is active, this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter track_activity_query_size.
backend_type text
Type of current backend. Possible types are autovacuum launcher, autovacuum worker, logical replication launcher, logical replication worker, parallel worker, background writer, client backend, checkpointer, archiver, startup, walreceiver, walsender and walwriter. In addition, background workers registered by extensions may have additional types.

Note
The wait_event and state columns are independent. If a backend is in the active state, it may or may not be waiting on some event. If the state is active and wait_event is non-null, it means that a query is being executed, but is being blocked somewhere in the system.
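
A simple way to see where active sessions are currently blocked is to sample these columns together:

```sql
-- Active backends that are currently waiting somewhere in the system.
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE state = 'active' AND wait_event IS NOT NULL;
```

Because this is a point-in-time sample, repeated sampling (e.g., from a cron job or monitoring agent) gives a better picture of where time is actually being spent.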

Table 4. Wait Event Types

Wait Event Type - Description
Activity - The server process is idle. This event type indicates a process waiting for activity in its main processing loop. wait_event will identify the specific wait point; see Table 5.
BufferPin - The server process is waiting for exclusive access to a data buffer. Buffer pin waits can be protracted if another process holds an open cursor that last read data from the buffer in question. See Table 6.
Client - The server process is waiting for activity on a socket connected to a user application. Thus, the server expects something to happen that is independent of its internal processes. wait_event will identify the specific wait point; see Table 7.
Extension - The server process is waiting for some condition defined by an extension module. See Table 8.
IO - The server process is waiting for an I/O operation to complete. wait_event will identify the specific wait point; see Table 9.
IPC - The server process is waiting for some interaction with another server process. wait_event will identify the specific wait point; see Table 10.
Lock - The server process is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension. wait_event will identify the type of lock awaited; see Table 11.
LWLock - The server process is waiting for a lightweight lock. Most such locks protect a particular data structure in shared memory. wait_event will contain a name identifying the purpose of the lightweight lock. (Some locks have specific names; others are part of a group of locks each with a similar purpose.) See Table 12.
Timeout - The server process is waiting for a timeout to expire. wait_event will identify the specific wait point; see Table 13.

Table 5. Wait Events of Type Activity

Activity Wait Event - Description
ArchiverMain - Waiting in main loop of archiver process.
AutoVacuumMain - Waiting in main loop of autovacuum launcher process.
BgWriterHibernate - Waiting in background writer process, hibernating.
BgWriterMain - Waiting in main loop of background writer process.
CheckpointerMain - Waiting in main loop of checkpointer process.
LogicalApplyMain - Waiting in main loop of logical replication apply process.
LogicalLauncherMain - Waiting in main loop of logical replication launcher process.
RecoveryWalStream - Waiting in main loop of startup process for WAL to arrive, during streaming recovery.
SysLoggerMain - Waiting in main loop of syslogger process.
WalReceiverMain - Waiting in main loop of WAL receiver process.
WalSenderMain - Waiting in main loop of WAL sender process.
WalWriterMain - Waiting in main loop of WAL writer process.

Table 6. Wait Events of Type BufferPin

BufferPin Wait Event - Description
BufferPin - Waiting to acquire an exclusive pin on a buffer.

Table 7. Wait Events of Type Client

Client Wait Event - Description
ClientRead - Waiting to read data from the client.
ClientWrite - Waiting to write data to the client.
GSSOpenServer - Waiting to read data from the client while establishing a GSSAPI session.
LibPQWalReceiverConnect - Waiting in WAL receiver to establish connection to remote server.
LibPQWalReceiverReceive - Waiting in WAL receiver to receive data from remote server.
SSLOpenServer - Waiting for SSL while attempting connection.
WalSenderWaitForWAL - Waiting for WAL to be flushed in WAL sender process.
WalSenderWriteData - Waiting for any activity when processing replies from WAL receiver in WAL sender process.

Table 8. Wait Events of Type Extension

Extension Wait Event - Description
Extension - Waiting in an extension.

Table 9. Wait Events of Type IO

IO Wait Event - Description
BaseBackupRead - Waiting for base backup to read from a file.
BaseBackupSync - Waiting for data written by a base backup to reach durable storage.
BaseBackupWrite - Waiting for base backup to write to a file.
BufFileRead - Waiting for a read from a buffered file.
BufFileWrite - Waiting for a write to a buffered file.
BufFileTruncate - Waiting for a buffered file to be truncated.
ControlFileRead - Waiting for a read from the pg_control file.
ControlFileSync - Waiting for the pg_control file to reach durable storage.
ControlFileSyncUpdate - Waiting for an update to the pg_control file to reach durable storage.
ControlFileWrite - Waiting for a write to the pg_control file.
ControlFileWriteUpdate - Waiting for a write to update the pg_control file.
CopyFileRead - Waiting for a read during a file copy operation.
CopyFileWrite - Waiting for a write during a file copy operation.
DSMFillZeroWrite - Waiting to fill a dynamic shared memory backing file with zeroes.
DataFileExtend - Waiting for a relation data file to be extended.
DataFileFlush - Waiting for a relation data file to reach durable storage.
DataFileImmediateSync - Waiting for an immediate synchronization of a relation data file to durable storage.
DataFilePrefetch - Waiting for an asynchronous prefetch from a relation data file.
DataFileRead - Waiting for a read from a relation data file.
DataFileSync - Waiting for changes to a relation data file to reach durable storage.
DataFileTruncate - Waiting for a relation data file to be truncated.
DataFileWrite - Waiting for a write to a relation data file.
LockFileAddToDataDirRead - Waiting for a read while adding a line to the data directory lock file.
LockFileAddToDataDirSync - Waiting for data to reach durable storage while adding a line to the data directory lock file.
LockFileAddToDataDirWrite - Waiting for a write while adding a line to the data directory lock file.
LockFileCreateRead - Waiting to read while creating the data directory lock file.
LockFileCreateSync - Waiting for data to reach durable storage while creating the data directory lock file.
LockFileCreateWrite - Waiting for a write while creating the data directory lock file.
LockFileReCheckDataDirRead - Waiting for a read during recheck of the data directory lock file.
LogicalRewriteCheckpointSync - Waiting for logical rewrite mappings to reach durable storage during a checkpoint.
LogicalRewriteMappingSync - Waiting for mapping data to reach durable storage during a logical rewrite.
LogicalRewriteMappingWrite - Waiting for a write of mapping data during a logical rewrite.
LogicalRewriteSync - Waiting for logical rewrite mappings to reach durable storage.
LogicalRewriteTruncate - Waiting for truncate of mapping data during a logical rewrite.
LogicalRewriteWrite - Waiting for a write of logical rewrite mappings.
RelationMapRead - Waiting for a read of the relation map file.
RelationMapSync - Waiting for the relation map file to reach durable storage.
RelationMapWrite - Waiting for a write to the relation map file.
ReorderBufferRead - Waiting for a read during reorder buffer management.
ReorderBufferWrite - Waiting for a write during reorder buffer management.
ReorderLogicalMappingRead - Waiting for a read of a logical mapping during reorder buffer management.
ReplicationSlotRead - Waiting for a read from a replication slot control file.
ReplicationSlotRestoreSync - Waiting for a replication slot control file to reach durable storage while restoring it to memory.
ReplicationSlotSync - Waiting for a replication slot control file to reach durable storage.
ReplicationSlotWrite - Waiting for a write to a replication slot control file.
SLRUFlushSync - Waiting for SLRU data to reach durable storage during a checkpoint or database shutdown.
SLRURead - Waiting for a read of an SLRU page.
SLRUSync - Waiting for SLRU data to reach durable storage following a page write.
SLRUWrite - Waiting for a write of an SLRU page.
SnapbuildRead - Waiting for a read of a serialized historical catalog snapshot.
SnapbuildSync - Waiting for a serialized historical catalog snapshot to reach durable storage.
SnapbuildWrite - Waiting for a write of a serialized historical catalog snapshot.
TimelineHistoryFileSync - Waiting for a timeline history file received via streaming replication to reach durable storage.
TimelineHistoryFileWrite - Waiting for a write of a timeline history file received via streaming replication.
TimelineHistoryRead - Waiting for a read of a timeline history file.
TimelineHistorySync - Waiting for a newly created timeline history file to reach durable storage.
TimelineHistoryWrite - Waiting for a write of a newly created timeline history file.
TwophaseFileRead - Waiting for a read of a two phase state file.
TwophaseFileSync - Waiting for a two phase state file to reach durable storage.
TwophaseFileWrite - Waiting for a write of a two phase state file.
VersionFileWrite - Waiting for the version file to be written while creating a database.
WALBootstrapSync - Waiting for WAL to reach durable storage during bootstrapping.
WALBootstrapWrite - Waiting for a write of a WAL page during bootstrapping.
WALCopyRead - Waiting for a read when creating a new WAL segment by copying an existing one.
WALCopySync - Waiting for a new WAL segment created by copying an existing one to reach durable storage.
WALCopyWrite - Waiting for a write when creating a new WAL segment by copying an existing one.
WALInitSync - Waiting for a newly initialized WAL file to reach durable storage.
WALInitWrite - Waiting for a write while initializing a new WAL file.
WALRead - Waiting for a read from a WAL file.
WALSenderTimelineHistoryRead - Waiting for a read from a timeline history file during a walsender timeline command.
WALSync - Waiting for a WAL file to reach durable storage.
WALSyncMethodAssign - Waiting for data to reach durable storage while assigning a new WAL sync method.
WALWrite - Waiting for a write to a WAL file.

Table 10. Wait Events of Type IPC

IPC Wait Event: Description
AppendReady: Waiting for subplan nodes of an Append plan node to be ready.
ArchiveCleanupCommand: Waiting for archive_cleanup_command to complete.
ArchiveCommand: Waiting for archive_command to complete.
BackendTermination: Waiting for the termination of another backend.
BackupWaitWalArchive: Waiting for WAL files required for a backup to be successfully archived.
BgWorkerShutdown: Waiting for background worker to shut down.
BgWorkerStartup: Waiting for background worker to start up.
BtreePage: Waiting for the page number needed to continue a parallel B-tree scan to become available.
BufferIO: Waiting for buffer I/O to complete.
CheckpointDone: Waiting for a checkpoint to complete.
CheckpointStart: Waiting for a checkpoint to start.
ExecuteGather: Waiting for activity from a child process while executing a Gather plan node.
HashBatchAllocate: Waiting for an elected Parallel Hash participant to allocate a hash table.
HashBatchElect: Waiting to elect a Parallel Hash participant to allocate a hash table.
HashBatchLoad: Waiting for other Parallel Hash participants to finish loading a hash table.
HashBuildAllocate: Waiting for an elected Parallel Hash participant to allocate the initial hash table.
HashBuildElect: Waiting to elect a Parallel Hash participant to allocate the initial hash table.
HashBuildHashInner: Waiting for other Parallel Hash participants to finish hashing the inner relation.
HashBuildHashOuter: Waiting for other Parallel Hash participants to finish partitioning the outer relation.
HashGrowBatchesAllocate: Waiting for an elected Parallel Hash participant to allocate more batches.
HashGrowBatchesDecide: Waiting to elect a Parallel Hash participant to decide on future batch growth.
HashGrowBatchesElect: Waiting to elect a Parallel Hash participant to allocate more batches.
HashGrowBatchesFinish: Waiting for an elected Parallel Hash participant to decide on future batch growth.
HashGrowBatchesRepartition: Waiting for other Parallel Hash participants to finish repartitioning.
HashGrowBucketsAllocate: Waiting for an elected Parallel Hash participant to finish allocating more buckets.
HashGrowBucketsElect: Waiting to elect a Parallel Hash participant to allocate more buckets.
HashGrowBucketsReinsert: Waiting for other Parallel Hash participants to finish inserting tuples into new buckets.
LogicalSyncData: Waiting for a logical replication remote server to send data for initial table synchronization.
LogicalSyncStateChange: Waiting for a logical replication remote server to change state.
MessageQueueInternal: Waiting for another process to be attached to a shared message queue.
MessageQueuePutMessage: Waiting to write a protocol message to a shared message queue.
MessageQueueReceive: Waiting to receive bytes from a shared message queue.
MessageQueueSend: Waiting to send bytes to a shared message queue.
ParallelBitmapScan: Waiting for parallel bitmap scan to become initialized.
ParallelCreateIndexScan: Waiting for parallel CREATE INDEX workers to finish heap scan.
ParallelFinish: Waiting for parallel workers to finish computing.
ProcArrayGroupUpdate: Waiting for the group leader to clear the transaction ID at end of a parallel operation.
ProcSignalBarrier: Waiting for a barrier event to be processed by all backends.
Promote: Waiting for standby promotion.
RecoveryConflictSnapshot: Waiting for recovery conflict resolution for a vacuum cleanup.
RecoveryConflictTablespace: Waiting for recovery conflict resolution for dropping a tablespace.
RecoveryEndCommand: Waiting for recovery_end_command to complete.
RecoveryPause: Waiting for recovery to be resumed.
ReplicationOriginDrop: Waiting for a replication origin to become inactive so it can be dropped.
ReplicationSlotDrop: Waiting for a replication slot to become inactive so it can be dropped.
RestoreCommand: Waiting for restore_command to complete.
SafeSnapshot: Waiting to obtain a valid snapshot for a READ ONLY DEFERRABLE transaction.
SyncRep: Waiting for confirmation from a remote server during synchronous replication.
WalReceiverExit: Waiting for the WAL receiver to exit.
WalReceiverWaitStart: Waiting for startup process to send initial data for streaming replication.
XactGroupUpdate: Waiting for the group leader to update transaction status at end of a parallel operation.

Table 11. Wait Events of Type Lock

Lock Wait Event: Description
advisory: Waiting to acquire an advisory user lock.
extend: Waiting to extend a relation.
frozenid: Waiting to update pg_database.datfrozenxid and pg_database.datminmxid.
object: Waiting to acquire a lock on a non-relation database object.
page: Waiting to acquire a lock on a page of a relation.
relation: Waiting to acquire a lock on a relation.
spectoken: Waiting to acquire a speculative insertion lock.
transactionid: Waiting for a transaction to finish.
tuple: Waiting to acquire a lock on a tuple.
userlock: Waiting to acquire a user lock.
virtualxid: Waiting to acquire a virtual transaction ID lock.

Table 12. Wait Events of Type LWLock

LWLock Wait Event: Description
AddinShmemInit: Waiting to manage an extension's space allocation in shared memory.
AutoFile: Waiting to update the qhb.auto.conf file.
Autovacuum: Waiting to read or update the current state of autovacuum workers.
AutovacuumSchedule: Waiting to ensure that a table selected for autovacuum still needs vacuuming.
BackgroundWorker: Waiting to read or update background worker state.
BtreeVacuum: Waiting to read or update vacuum-related information for a B-tree index.
BufferContent: Waiting to access a data page in memory.
BufferMapping: Waiting to associate a data block with a buffer in the buffer pool.
CheckpointerComm: Waiting to manage fsync requests.
CommitTs: Waiting to read or update the last value set for a transaction commit timestamp.
CommitTsBuffer: Waiting for I/O on a commit timestamp SLRU buffer.
CommitTsSLRU: Waiting to access the commit timestamp SLRU cache.
ControlFile: Waiting to read or update the pg_control file or create a new WAL file.
DynamicSharedMemoryControl: Waiting to read or update dynamic shared memory allocation information.
LockFastPath: Waiting to read or update a process's fast-path lock information.
LockManager: Waiting to read or update information about “heavyweight” locks.
LogicalRepWorker: Waiting to read or update the state of logical replication workers.
MultiXactGen: Waiting to read or update shared multixact state.
MultiXactMemberBuffer: Waiting for I/O on a multixact member SLRU buffer.
MultiXactMemberSLRU: Waiting to access the multixact member SLRU cache.
MultiXactOffsetBuffer: Waiting for I/O on a multixact offset SLRU buffer.
MultiXactOffsetSLRU: Waiting to access the multixact offset SLRU cache.
MultiXactTruncation: Waiting to read or truncate multixact information.
NotifyBuffer: Waiting for I/O on a NOTIFY message SLRU buffer.
NotifyQueue: Waiting to read or update NOTIFY messages.
NotifyQueueTail: Waiting to update limit on NOTIFY message storage.
NotifySLRU: Waiting to access the NOTIFY message SLRU cache.
OidGen: Waiting to allocate a new OID.
OldSnapshotTimeMap: Waiting to read or update old snapshot control information.
ParallelAppend: Waiting to choose the next subplan during Parallel Append plan execution.
ParallelHashJoin: Waiting to synchronize workers during Parallel Hash Join plan execution.
ParallelQueryDSA: Waiting for parallel query dynamic shared memory allocation.
PerSessionDSA: Waiting for parallel query dynamic shared memory allocation.
PerSessionRecordType: Waiting to access a parallel query's information about composite types.
PerSessionRecordTypmod: Waiting to access a parallel query's information about type modifiers that identify anonymous record types.
PerXactPredicateList: Waiting to access the list of predicate locks held by the current serializable transaction during a parallel query.
PgStatsData: Waiting for shared memory stats data access.
PgStatsDSA: Waiting for stats dynamic shared memory allocator access.
PgStatsHash: Waiting for stats shared memory hash table access.
PredicateLockManager: Waiting to access predicate lock information used by serializable transactions.
ProcArray: Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session's transaction ID).
RelationMapping: Waiting to read or update a pg_filenode.map file (used to track the filenode assignments of certain system catalogs).
RelCacheInit: Waiting to read or update a pg_internal.init relation cache initialization file.
ReplicationOrigin: Waiting to create, drop or use a replication origin.
ReplicationOriginState: Waiting to read or update the progress of one replication origin.
ReplicationSlotAllocation: Waiting to allocate or free a replication slot.
ReplicationSlotControl: Waiting to read or update replication slot state.
ReplicationSlotIO: Waiting for I/O on a replication slot.
SerialBuffer: Waiting for I/O on a serializable transaction conflict SLRU buffer.
SerializableFinishedList: Waiting to access the list of finished serializable transactions.
SerializablePredicateList: Waiting to access the list of predicate locks held by serializable transactions.
SerializableXactHash: Waiting to read or update information about serializable transactions.
SerialSLRU: Waiting to access the serializable transaction conflict SLRU cache.
SharedTidBitmap: Waiting to access a shared TID bitmap during a parallel bitmap index scan.
SharedTupleStore: Waiting to access a shared tuple store during parallel query.
ShmemIndex: Waiting to find or allocate space in shared memory.
SInvalRead: Waiting to retrieve messages from the shared catalog invalidation queue.
SInvalWrite: Waiting to add a message to the shared catalog invalidation queue.
SubtransBuffer: Waiting for I/O on a sub-transaction SLRU buffer.
SubtransSLRU: Waiting to access the sub-transaction SLRU cache.
SyncRep: Waiting to read or update information about the state of synchronous replication.
SyncScan: Waiting to select the starting location of a synchronized table scan.
TablespaceCreate: Waiting to create or drop a tablespace.
TwoPhaseState: Waiting to read or update the state of prepared transactions.
WALBufMapping: Waiting to replace a page in WAL buffers.
WALInsert: Waiting to insert WAL data into a memory buffer.
WALWrite: Waiting for WAL buffers to be written to disk.
WrapLimitsVacuum: Waiting to update limits on transaction id and multixact consumption.
XactBuffer: Waiting for I/O on a transaction status SLRU buffer.
XactSLRU: Waiting to access the transaction status SLRU cache.
XactTruncation: Waiting to execute pg_xact_status or update the oldest transaction ID available to it.
XidGen: Waiting to allocate a new transaction ID.

Note
Extensions can add LWLock types to the list shown in Table 12. In some cases, the name assigned by an extension will not be available in all server processes; so an LWLock wait event might be reported as just “extension” rather than the extension-assigned name.

Table 13. Wait Events of Type Timeout

Timeout Wait Event: Description
BaseBackupThrottle: Waiting during base backup when throttling activity.
CheckpointWriteDelay: Waiting between writes while performing a checkpoint.
PgSleep: Waiting due to a call to pg_sleep or a sibling function.
RecoveryApplyDelay: Waiting to apply WAL during recovery because of a delay setting.
RecoveryRetrieveRetryInterval: Waiting during recovery when WAL data is not available from any source (pg_wal, archive or stream).
RegisterSyncRequest: Waiting while sending synchronization requests to the checkpointer, because the request queue is full.
VacuumDelay: Waiting in a cost-based vacuum delay point.
VacuumTruncate: Waiting to acquire an exclusive lock to truncate off any empty pages at the end of the table being vacuumed.

Here is an example of how wait events can be viewed:

SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event IS NOT NULL;
 pid  | wait_event_type | wait_event
------+-----------------+------------
 2540 | Lock            | relation
 6644 | LWLock          | ProcArray
(2 rows)
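When a session shows a wait event of type Lock, the next question is usually which session holds the conflicting lock. Assuming the server provides the PostgreSQL-compatible pg_blocking_pids function, a query along these lines identifies the blockers:

```sql
-- For each backend waiting on a heavyweight lock, list the PIDs
-- of the sessions that are blocking it.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```

The blocked_by column is an array of process IDs; those PIDs can in turn be looked up in pg_stat_activity to see what the blocking sessions are doing.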

pg_stat_replication

The pg_stat_replication view will contain one row per WAL sender process, showing statistics about replication to that sender's connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers.

Table 14. pg_stat_replication View

Column Type
Description
pid integer
Process ID of a WAL sender process
usesysid oid
OID of the user logged into this WAL sender process
usename name
Name of the user logged into this WAL sender process
application_name text
Name of the application that is connected to this WAL sender
client_addr inet
IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine.
client_hostname text
Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
client_port integer
TCP port number that the client is using for communication with this WAL sender, or -1 if a Unix socket is used
backend_start timestamp with time zone
Time when this process was started, i.e., when the client connected to this WAL sender
backend_xmin xid
This standby's xmin horizon reported by hot_standby_feedback.
state text
Current WAL sender state. Possible values are:
  • startup: This WAL sender is starting up.
  • catchup: This WAL sender's connected standby is catching up with the primary.
  • streaming: This WAL sender is streaming changes after its connected standby server has caught up with the primary.
  • backup: This WAL sender is sending a backup.
  • stopping: This WAL sender is stopping.
sent_lsn pg_lsn
Last write-ahead log location sent on this connection
write_lsn pg_lsn
Last write-ahead log location written to disk by this standby server
flush_lsn pg_lsn
Last write-ahead log location flushed to disk by this standby server
replay_lsn pg_lsn
Last write-ahead log location replayed into the database on this standby server
write_lag interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that synchronous_commit level remote_write incurred while committing if this server was configured as a synchronous standby.
flush_lag interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that synchronous_commit level on incurred while committing if this server was configured as a synchronous standby.
replay_lag interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that synchronous_commit level remote_apply incurred while committing if this server was configured as a synchronous standby.
sync_priority integer
Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication.
sync_state text
Synchronous state of this standby server. Possible values are:
  • async: This standby server is asynchronous.
  • potential: This standby server is currently asynchronous, but can become synchronous if one of the current synchronous standbys fails.
  • sync: This standby server is synchronous.
  • quorum: This standby server is considered as a candidate for quorum standbys.
reply_time timestamp with time zone
Send time of last reply message received from standby server

The lag times reported in the pg_stat_replication view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the replay_lag column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL.

Lag times work automatically for physical replication. Logical decoding plugins may optionally emit tracking messages; if they do not, the tracking mechanism will simply display NULL lag.

Note
The reported lag times are not predictions of how long it will take for the standby to catch up with the sending server assuming the current rate of replay. Such a system would show similar times while new WAL is being generated, but would differ when the sender becomes idle. In particular, when the standby has caught up completely, pg_stat_replication shows the time taken to write, flush and replay the most recent reported WAL location rather than zero as some users might expect. This is consistent with the goal of measuring synchronous commit and transaction visibility delays for recent write transactions. To reduce confusion for users expecting a different model of lag, the lag columns revert to NULL after a short time on a fully replayed idle system. Monitoring systems should choose whether to represent this as missing data, zero or continue to display the last known value.
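On the sending server, the lag columns can be inspected directly. The following query, using only columns from Table 14, lists each connected standby with its measured lag at each stage:

```sql
-- One row per WAL sender: how far behind each standby is
-- at the write, flush and replay stages.
SELECT application_name, client_addr, state, sync_state,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
```

For a synchronous standby, flush_lag (for synchronous_commit = on) or replay_lag (for remote_apply) approximates the commit delay that synchronous replication is currently adding.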


pg_stat_replication_slots

The pg_stat_replication_slots view will contain one row per logical replication slot, showing statistics about its usage.

Table 15. pg_stat_replication_slots View

Column Type
Description
slot_name text
A unique, cluster-wide identifier for the replication slot
spill_txns bigint
Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded logical_decoding_work_mem. The counter gets incremented for both top-level transactions and subtransactions.
spill_count bigint
Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times.
spill_bytes bigint
Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning logical_decoding_work_mem.
stream_txns bigint
Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded logical_decoding_work_mem. Streaming only works with top-level transactions (subtransactions can't be streamed independently), so the counter is not incremented for subtransactions.
stream_count bigint
Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times.
stream_bytes bigint
Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune logical_decoding_work_mem.
total_txns bigint
Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled.
total_bytes bigint
Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled.
stats_reset timestamp with time zone
Time at which these statistics were last reset
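A quick way to see whether logical decoding is running up against logical_decoding_work_mem is to compare the spill and stream counters for each slot; steadily growing spill_bytes suggests that raising that setting may help:

```sql
-- Per-slot spill and stream activity since the last stats reset.
SELECT slot_name,
       spill_txns, spill_count, spill_bytes,
       stream_txns, stream_bytes,
       total_txns
FROM pg_stat_replication_slots;
```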

pg_stat_wal_receiver

The pg_stat_wal_receiver view will contain only one row, showing statistics about the WAL receiver from that receiver's connected server.

Table 16. pg_stat_wal_receiver View

Column Type
Description
pid integer
Process ID of the WAL receiver process
status text
Activity status of the WAL receiver process
receive_start_lsn pg_lsn
First write-ahead log location used when WAL receiver is started
receive_start_tli integer
First timeline number used when WAL receiver is started
written_lsn pg_lsn
Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks.
flushed_lsn pg_lsn
Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started
received_tli integer
Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started
last_msg_send_time timestamp with time zone
Send time of last message received from origin WAL sender
last_msg_receipt_time timestamp with time zone
Receipt time of last message received from origin WAL sender
latest_end_lsn pg_lsn
Last write-ahead log location reported to origin WAL sender
latest_end_time timestamp with time zone
Time of last write-ahead log location reported to origin WAL sender
slot_name text
Replication slot name used by this WAL receiver
sender_host text
Host of the QHB instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with /.)
sender_port integer
Port number of the QHB instance this WAL receiver is connected to.
conninfo text
Connection string used by this WAL receiver, with security-sensitive fields obfuscated.

pg_stat_recovery_prefetch

The pg_stat_recovery_prefetch view will contain only one row. The columns wal_distance, block_distance and io_depth show current values, and the other columns show cumulative counters that can be reset with the pg_stat_reset_shared function.

Table 17. pg_stat_recovery_prefetch View

Column Type
Description
stats_reset timestamp with time zone
Time at which these statistics were last reset
prefetch bigint
Number of blocks prefetched because they were not in the buffer pool
hit bigint
Number of blocks not prefetched because they were already in the buffer pool
skip_init bigint
Number of blocks not prefetched because they would be zero-initialized
skip_new bigint
Number of blocks not prefetched because they didn't exist yet
skip_fpw bigint
Number of blocks not prefetched because a full page image was included in the WAL
skip_rep bigint
Number of blocks not prefetched because they were already recently prefetched
wal_distance int
How many bytes ahead the prefetcher is looking
block_distance int
How many blocks ahead the prefetcher is looking
io_depth int
How many prefetches have been initiated but are not yet known to have completed

pg_stat_subscription


The pg_stat_subscription view will contain one row per subscription worker process, showing statistics about that worker's activity.
Table 18. pg_stat_subscription View

Column Type
Description
subid oid
OID of the subscription
subname name
Name of the subscription
pid integer
Process ID of the subscription worker process
relid oid
OID of the relation that the worker is synchronizing; null for the main apply worker
received_lsn pg_lsn
Last write-ahead log location received, the initial value of this field being 0
last_msg_send_time timestamp with time zone
Send time of last message received from origin WAL sender
last_msg_receipt_time timestamp with time zone
Receipt time of last message received from origin WAL sender
latest_end_lsn pg_lsn
Last write-ahead log location reported to origin WAL sender
latest_end_time timestamp with time zone
Time of last write-ahead log location reported to origin WAL sender

pg_stat_subscription_stats

The pg_stat_subscription_stats view will contain one row per subscription.

Table 19. pg_stat_subscription_stats View

Column Type
Description
subid oid
OID of the subscription
subname name
Name of the subscription
apply_error_count bigint
Number of times an error occurred while applying changes
sync_error_count bigint
Number of times an error occurred during the initial table synchronization
stats_reset timestamp with time zone
Time at which these statistics were last reset

pg_stat_ssl

The pg_stat_ssl view will contain one row per backend or WAL sender process, showing statistics about SSL usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.

Table 20. pg_stat_ssl View

Column Type
Description
pid integer
Process ID of a backend or WAL sender process
ssl boolean
True if SSL is used on this connection
version text
Version of SSL in use, or NULL if SSL is not in use on this connection
cipher text
Name of SSL cipher in use, or NULL if SSL is not in use on this connection
bits integer
Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection
client_dn text
Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than NAMEDATALEN (64 characters in a standard build).
client_serial numeric
Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers).
issuer_dn text
DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like client_dn.

pg_stat_gssapi

The pg_stat_gssapi view will contain one row per backend, showing information about GSSAPI usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.

Table 21. pg_stat_gssapi View

Column Type
Description
pid integer
Process ID of a backend
gss_authenticated boolean
True if GSSAPI authentication was used for this connection
principal text
Principal used to authenticate this connection, or NULL if GSSAPI was not used to authenticate this connection. This field is truncated if the principal is longer than NAMEDATALEN (64 characters in a standard build).
encrypted boolean
True if GSSAPI encryption is in use on this connection

pg_stat_archiver

The pg_stat_archiver view will always have a single row, containing data about the archiver process of the cluster.

Table 22. pg_stat_archiver View

Column Type
Description
archived_count bigint
Number of WAL files that have been successfully archived
last_archived_wal text
Name of the WAL file most recently successfully archived
last_archived_time timestamp with time zone
Time of the most recent successful archive operation
failed_count bigint
Number of failed attempts for archiving WAL files
last_failed_wal text
Name of the WAL file of the most recent failed archival operation
last_failed_time timestamp with time zone
Time of the most recent failed archival operation
stats_reset timestamp with time zone
Time at which these statistics were last reset

Normally, WAL files are archived in order, oldest to newest, but that is not guaranteed, and it does not hold under special circumstances such as when promoting a standby or after crash recovery. Therefore it is not safe to assume that all files older than last_archived_wal have also been successfully archived.
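Archiver health can be checked by comparing the success and failure counters and noting how long ago the last successful archive completed; a large or growing interval indicates that WAL is piling up in pg_wal:

```sql
-- Archiver status at a glance: counts plus time since last success.
SELECT archived_count,
       failed_count,
       last_archived_wal,
       now() - last_archived_time AS since_last_success,
       last_failed_wal
FROM pg_stat_archiver;
```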


pg_stat_bgwriter

The pg_stat_bgwriter view will always have a single row, containing global data for the cluster.

Table 23. pg_stat_bgwriter View

Column Type
Description
checkpoints_timed bigint
Number of scheduled checkpoints that have been performed
checkpoints_req bigint
Number of requested checkpoints that have been performed
checkpoint_write_time double precision
Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds
checkpoint_sync_time double precision
Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds
buffers_checkpoint bigint
Number of buffers written during checkpoints
buffers_clean bigint
Number of buffers written by the background writer
maxwritten_clean bigint
Number of times the background writer stopped a cleaning scan because it had written too many buffers
buffers_backend bigint
Number of buffers written directly by a backend
buffers_backend_fsync bigint
Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)
buffers_alloc bigint
Number of buffers allocated
stats_reset timestamp with time zone
Time at which these statistics were last reset
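One common use of this view is to estimate what fraction of dirty-buffer writes backends had to perform themselves, rather than the checkpointer or background writer; a persistently high share can suggest tuning the background writer or checkpoint settings. A sketch:

```sql
-- Share of buffer writes done directly by backends
-- since the last stats reset.
SELECT buffers_checkpoint, buffers_clean, buffers_backend,
       round(100.0 * buffers_backend /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0),
             1) AS backend_write_pct
FROM pg_stat_bgwriter;
```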

pg_stat_wal

The pg_stat_wal view will always have a single row, containing data about WAL activity of the cluster.

Table 24. pg_stat_wal View

Column Type
Description
wal_records bigint
Total number of WAL records generated
wal_fpi bigint
Total number of WAL full page images generated
wal_bytes numeric
Total amount of WAL generated in bytes
wal_buffers_full bigint
Number of times WAL data was written to disk because WAL buffers became full
wal_write bigint
Number of times WAL buffers were written out to disk via XLogWrite request. See Section WAL Configuration for more information about the internal WAL function XLogWrite.
wal_sync bigint
Number of times WAL files were synced to disk via issue_xlog_fsync request (if fsync is on and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero). See Section WAL Configuration for more information about the internal WAL function issue_xlog_fsync.
wal_write_time double precision
Total amount of time spent writing WAL buffers to disk via XLogWrite request, in milliseconds (if track_wal_io_timing is enabled, otherwise zero). This includes the sync time when wal_sync_method is either open_datasync or open_sync.
wal_sync_time double precision
Total amount of time spent syncing WAL files to disk via issue_xlog_fsync request, in milliseconds (if track_wal_io_timing is enabled, fsync is on, and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero).
stats_reset timestamp with time zone
Time at which these statistics were last reset

pg_stat_database

The pg_stat_database view will contain one row for each database in the cluster, plus one for shared objects, showing database-wide statistics.

Table 25. pg_stat_database View

Column Type
Description
datid oid
OID of this database, or 0 for objects belonging to a shared relation
datname name
Name of this database, or NULL for shared objects.
numbackends integer
Number of backends currently connected to this database, or NULL for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.
xact_commit bigint
Number of transactions in this database that have been committed
xact_rollback bigint
Number of transactions in this database that have been rolled back
blks_read bigint
Number of disk blocks read in this database
blks_hit bigint
Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the QHB buffer cache, not the operating system's file system cache)
tup_returned bigint
Number of live rows fetched by sequential scans and index entries returned by index scans in this database
tup_fetched bigint
Number of live rows fetched by index scans in this database
tup_inserted bigint
Number of rows inserted by queries in this database
tup_updated bigint
Number of rows updated by queries in this database
tup_deleted bigint
Number of rows deleted by queries in this database
conflicts bigint
Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.)
temp_files bigint
Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.
temp_bytes bigint
Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.
deadlocks bigint
Number of deadlocks detected in this database
checksum_failures bigint
Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled.
checksum_last_failure timestamp with time zone
Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled.
blk_read_time double precision
Time spent reading data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)
blk_write_time double precision
Time spent writing data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)
session_time double precision
Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included)
active_time double precision
Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states active and fastpath function call in pg_stat_activity)
idle_in_transaction_time double precision
Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states idle in transaction and idle in transaction (aborted) in pg_stat_activity)
sessions bigint
Total number of sessions established to this database
sessions_abandoned bigint
Number of database sessions to this database that were terminated because connection to the client was lost
sessions_fatal bigint
Number of database sessions to this database that were terminated by fatal errors
sessions_killed bigint
Number of database sessions to this database that were terminated by operator intervention
stats_reset timestamp with time zone
Time at which these statistics were last reset
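For example, a query such as the following (a sketch using only columns described above, plus the standard pg_size_pretty function) compares temporary-file usage and deadlock counts across databases, which can help identify databases whose queries spill to disk frequently:

SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written,
       deadlocks
    FROM pg_stat_database
    ORDER BY temp_bytes DESC;

Databases near the top of this list may benefit from a larger work_mem setting or from query tuning.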

pg_stat_database_conflicts

The pg_stat_database_conflicts view will contain one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view will only contain information on standby servers, since conflicts do not occur on primary servers.

Table 26. pg_stat_database_conflicts View

Column Type
Description
datid oid
OID of a database
datname name
Name of this database
confl_tablespace bigint
Number of queries in this database that have been canceled due to dropped tablespaces
confl_lock bigint
Number of queries in this database that have been canceled due to lock timeouts
confl_snapshot bigint
Number of queries in this database that have been canceled due to old snapshots
confl_bufferpin bigint
Number of queries in this database that have been canceled due to pinned buffers
confl_deadlock bigint
Number of queries in this database that have been canceled due to deadlocks

pg_stat_all_tables

The pg_stat_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The pg_stat_user_tables and pg_stat_sys_tables views contain the same information, but filtered to only show user and system tables respectively.

Table 27. pg_stat_all_tables View

Column Type
Description
relid oid
OID of a table
schemaname name
Name of the schema that this table is in
relname name
Name of this table
seq_scan bigint
Number of sequential scans initiated on this table
seq_tup_read bigint
Number of live rows fetched by sequential scans
idx_scan bigint
Number of index scans initiated on this table
idx_tup_fetch bigint
Number of live rows fetched by index scans
n_tup_ins bigint
Number of rows inserted
n_tup_upd bigint
Number of rows updated (includes HOT updated rows)
n_tup_del bigint
Number of rows deleted
n_tup_hot_upd bigint
Number of rows HOT updated (i.e., with no separate index update required)
n_live_tup bigint
Estimated number of live rows
n_dead_tup bigint
Estimated number of dead rows
n_mod_since_analyze bigint
Estimated number of rows modified since this table was last analyzed
n_ins_since_vacuum bigint
Estimated number of rows inserted since this table was last vacuumed
last_vacuum timestamp with time zone
Last time at which this table was manually vacuumed (not counting VACUUM FULL)
last_autovacuum timestamp with time zone
Last time at which this table was vacuumed by the autovacuum daemon
last_analyze timestamp with time zone
Last time at which this table was manually analyzed
last_autoanalyze timestamp with time zone
Last time at which this table was analyzed by the autovacuum daemon
vacuum_count bigint
Number of times this table has been manually vacuumed (not counting VACUUM FULL)
autovacuum_count bigint
Number of times this table has been vacuumed by the autovacuum daemon
analyze_count bigint
Number of times this table has been manually analyzed
autoanalyze_count bigint
Number of times this table has been analyzed by the autovacuum daemon
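As an illustration, the following query (a sketch built only on columns from Table 27) lists the tables with the most dead rows, together with the last time autovacuum processed them; tables with many dead rows and no recent autovacuum may be bloating:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;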

pg_stat_all_indexes

The pg_stat_all_indexes view will contain one row for each index in the current database, showing statistics about accesses to that specific index. The pg_stat_user_indexes and pg_stat_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.

Table 28. pg_stat_all_indexes View

Column Type
Description
relid oid
OID of a table for this index
indexrelid oid
OID of this index
schemaname name
Name of the schema this index is in
relname name
Name of the table for this index
indexrelname name
Name of this index
idx_scan bigint
Number of index scans initiated on this index
idx_tup_read bigint
Number of index entries returned by scans on this index
idx_tup_fetch bigint
Number of live table rows fetched by simple index scans using this index

Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics, because the optimizer statistics might be stale.

Note
The idx_tup_read and idx_tup_fetch counts can be different even without any use of bitmap scans, because idx_tup_read counts index entries retrieved from the index while idx_tup_fetch counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan.
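As a simple example, the following query (a sketch using only columns from Table 28) lists indexes that have never been scanned; such indexes are candidates for closer inspection, since they incur update overhead without serving reads:

SELECT schemaname, relname, indexrelname
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY schemaname, relname;

Note that an index may still be useful even with idx_scan = 0, for example if it enforces a unique constraint.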


pg_statio_all_tables

The pg_statio_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The pg_statio_user_tables and pg_statio_sys_tables views contain the same information, but filtered to only show user and system tables respectively.

Table 29. pg_statio_all_tables View

Column Type
Description
relid oid
OID of a table
schemaname name
Name of the schema that this table is in
relname name
Name of this table
heap_blks_read bigint
Number of disk blocks read from this table
heap_blks_hit bigint
Number of buffer hits in this table
idx_blks_read bigint
Number of disk blocks read from all indexes on this table
idx_blks_hit bigint
Number of buffer hits in all indexes on this table
toast_blks_read bigint
Number of disk blocks read from this table's TOAST table (if any)
toast_blks_hit bigint
Number of buffer hits in this table's TOAST table (if any)
tidx_blks_read bigint
Number of disk blocks read from this table's TOAST table indexes (if any)
tidx_blks_hit bigint
Number of buffer hits in this table's TOAST table indexes (if any)
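One common use of this view is to compute a per-table buffer cache hit ratio (a sketch using only columns from Table 29; nullif guards against division by zero for tables that have never been read):

SELECT relname,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_hit + heap_blks_read, 0), 1) AS hit_pct
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC
    LIMIT 10;

A low hit percentage on a heavily-read table suggests the working set does not fit in shared buffers, although blocks counted as reads here may still be served from the operating system's cache.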

pg_statio_all_indexes

The pg_statio_all_indexes view will contain one row for each index in the current database, showing statistics about I/O on that specific index. The pg_statio_user_indexes and pg_statio_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.

Table 30. pg_statio_all_indexes View

Column Type
Description
relid oid
OID of a table for this index
indexrelid oid
OID of this index
schemaname name
Name of the schema this index is in
relname name
Name of the table for this index
indexrelname name
Name of this index
idx_blks_read bigint
Number of disk blocks read from this index
idx_blks_hit bigint
Number of buffer hits in this index

pg_statio_all_sequences

The pg_statio_all_sequences view will contain one row for each sequence in the current database, showing statistics about I/O on that specific sequence.

Table 31. pg_statio_all_sequences View

Column Type
Description
relid oid
OID of a sequence
schemaname name
Name of the schema this sequence is in
relname name
Name of this sequence
blks_read bigint
Number of disk blocks read from this sequence
blks_hit bigint
Number of buffer hits in this sequence

pg_stat_user_functions

The pg_stat_user_functions view will contain one row for each tracked function, showing statistics about executions of that function. The track_functions parameter controls exactly which functions are tracked.

Table 32. pg_stat_user_functions View

Column Type
Description
funcid oid
OID of a function
schemaname name
Name of the schema this function is in
funcname name
Name of this function
calls bigint
Number of times this function has been called
total_time double precision
Total time spent in this function and all other functions called by it, in milliseconds
self_time double precision
Total time spent in this function itself, not including other functions called by it, in milliseconds
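For example, once track_functions is enabled, the most expensive functions by time spent in their own code (excluding callees) can be listed with a query like this, using only columns from Table 32:

SELECT funcname, calls, total_time, self_time
    FROM pg_stat_user_functions
    ORDER BY self_time DESC
    LIMIT 10;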

pg_stat_slru

QHB accesses certain on-disk information via SLRU (simple least-recently-used) caches. The pg_stat_slru view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages.

Table 33. pg_stat_slru View

Column Type
Description
name text
Name of the SLRU
blks_zeroed bigint
Number of blocks zeroed during initializations
blks_hit bigint
Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache)
blks_read bigint
Number of disk blocks read for this SLRU
blks_written bigint
Number of disk blocks written for this SLRU
blks_exists bigint
Number of blocks checked for existence for this SLRU
flushes bigint
Number of flushes of dirty data for this SLRU
truncates bigint
Number of truncates for this SLRU
stats_reset timestamp with time zone
Time at which these statistics were last reset
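A hit ratio can be computed for each SLRU cache in the same way as for tables (a sketch using only columns from Table 33):

SELECT name,
       round(100.0 * blks_hit
             / nullif(blks_hit + blks_read, 0), 1) AS hit_pct
    FROM pg_stat_slru
    ORDER BY blks_read DESC;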

Statistics Functions

Other ways of looking at the statistics can be set up by writing queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in psql you could issue \d+ pg_stat_activity.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. The functions for per-function statistics take a function OID. Note that only tables, indexes, and functions in the current database can be seen with these functions.

Additional functions related to the cumulative statistics system are listed in Table 34.

Table 34. Additional Statistics Functions

Function
Description
pg_backend_pid () → integer
Returns the process ID of the server process attached to the current session.
pg_stat_get_activity ( integer ) → setof record
Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if NULL is specified. The fields returned are a subset of those in the pg_stat_activity view.
pg_stat_get_snapshot_timestamp () → timestamp with time zone
Returns the timestamp of the current statistics snapshot, or NULL if no statistics snapshot has been taken. A snapshot is taken the first time cumulative statistics are accessed in a transaction if stats_fetch_consistency is set to snapshot.
pg_stat_get_xact_blocks_fetched ( oid ) → bigint
Returns the number of block read requests for the table or index with the given OID, in the current transaction. This number minus pg_stat_get_xact_blocks_hit gives the number of kernel read() calls; the number of actual physical reads is usually lower due to kernel-level buffering.
pg_stat_get_xact_blocks_hit ( oid ) → bigint
Returns the number of block read requests for the table or index with the given OID, in the current transaction, that were found in cache (and so did not trigger kernel read() calls).
pg_stat_clear_snapshot () → void
Discards the current statistics snapshot or cached information.
pg_stat_reset () → void
Resets all statistics counters for the current database to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_shared ( text ) → void
Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be bgwriter to reset all the counters shown in the pg_stat_bgwriter view, archiver to reset all the counters shown in the pg_stat_archiver view, wal to reset all the counters shown in the pg_stat_wal view, or recovery_prefetch to reset all the counters shown in the pg_stat_recovery_prefetch view.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_single_table_counters ( oid ) → void
Resets statistics for a single table or index, either in the current database or shared across all databases in the cluster, to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_single_function_counters ( oid ) → void
Resets statistics for a single function in the current database to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_slru ( text ) → void
Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the pg_stat_slru view for all SLRU caches are reset. The argument can be one of CommitTs, MultiXactMember, MultiXactOffset, Notify, Serial, Subtrans, or Xact to reset the counters for only that entry. If the argument is other (or indeed, any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_replication_slot ( text ) → void
Resets statistics of the replication slot defined by the argument. If the argument is NULL, resets statistics for all the replication slots.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_subscription_stats ( oid ) → void
Resets statistics for a single subscription shown in the pg_stat_subscription_stats view to zero. If the argument is NULL, resets statistics for all subscriptions.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

WARNING
Using pg_stat_reset() also resets the counters that autovacuum uses to determine when to trigger a vacuum or an analyze. Resetting these counters can cause autovacuum to skip necessary work, which can lead to problems such as table bloat or outdated table statistics. A database-wide ANALYZE is recommended after the statistics have been reset.
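For example, a reset followed by the recommended database-wide ANALYZE looks like this:

SELECT pg_stat_reset();
ANALYZE;

The ANALYZE refreshes the planner statistics and gives autovacuum a consistent baseline for the counters that were just zeroed.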

pg_stat_get_activity, the underlying function of the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these are shown in Table 35. These access functions use a backend ID number, which ranges from one to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for invoking these functions. For example, to show the PIDs and current queries of all backends:

SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_activity(s.backendid) AS query
    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;

Table 35. Per-Backend Statistics Functions

Function
Description
pg_stat_get_backend_idset () → setof integer
Returns the set of currently active backend ID numbers (from 1 to the number of active backends).
pg_stat_get_backend_activity ( integer ) → text
Returns the text of this backend's most recent query.
pg_stat_get_backend_activity_start ( integer ) → timestamp with time zone
Returns the time when the backend's most recent query was started.
pg_stat_get_backend_client_addr ( integer ) → inet
Returns the IP address of the client connected to this backend.
pg_stat_get_backend_client_port ( integer ) → integer
Returns the TCP port number that the client is using for communication.
pg_stat_get_backend_dbid ( integer ) → oid
Returns the OID of the database this backend is connected to.
pg_stat_get_backend_pid ( integer ) → integer
Returns the process ID of this backend.
pg_stat_get_backend_start ( integer ) → timestamp with time zone
Returns the time when this process was started.
pg_stat_get_backend_userid ( integer ) → oid
Returns the OID of the user logged into this backend.
pg_stat_get_backend_wait_event_type ( integer ) → text
Returns the wait event type name if this backend is currently waiting, otherwise NULL. See Table 4 for details.
pg_stat_get_backend_wait_event ( integer ) → text
Returns the wait event name if this backend is currently waiting, otherwise NULL. See Table 5 through Table 13.
pg_stat_get_backend_xact_start ( integer ) → timestamp with time zone
Returns the time when the backend's current transaction was started.


Viewing Locks

Another useful tool for monitoring database activity is the pg_locks system table. It allows the database administrator to view information about the outstanding locks in the lock manager. For example, this capability can be used to:

  • View all the locks currently outstanding, all the locks on relations in a particular database, all the locks on a particular relation, or all the locks held by a particular QHB session.

  • Determine the relation in the current database with the most ungranted locks (which might be a source of contention among database clients).

  • Determine the effect of lock contention on overall database performance, as well as the extent to which contention varies with overall database traffic.

Details of the pg_locks view appear in Section pg_locks. For more information on locking and managing concurrency with QHB, refer to Chapter Concurrency Control.
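As a quick illustration, ungranted lock requests, which indicate sessions currently blocked waiting for a lock, can be listed with a query like the following (a sketch assuming the standard pg_locks columns locktype, relation, mode, granted, and pid, which are documented in Section pg_locks):

SELECT locktype, relation::regclass AS relation, mode, pid
    FROM pg_locks
    WHERE NOT granted;

Joining on pid against pg_stat_activity then shows which queries are waiting.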



Progress Reporting

QHB has the ability to report the progress of certain commands during command execution. Currently, the only commands which support progress reporting are ANALYZE, CLUSTER, CREATE INDEX, VACUUM, COPY, and BASE_BACKUP (i.e., the replication command that [qhb_basebackup] issues to take a base backup). This may be expanded in the future.


ANALYZE Progress Reporting

Whenever ANALYZE is running, the pg_stat_progress_analyze view will contain a row for each backend that is currently running that command. The tables below describe the information that will be reported and provide information about how to interpret it.

Table 36. pg_stat_progress_analyze View

Column Type
Description
pid integer
Process ID of backend.
datid oid
OID of the database to which this backend is connected.
datname name
Name of the database this backend is connected to.
relid oid
OID of the table being analyzed.
phase text
Current processing phase. See Table 37.
sample_blks_total bigint
Total number of heap blocks that will be sampled.
sample_blks_scanned bigint
Number of heap blocks scanned.
ext_stats_total bigint
Number of extended statistics.
ext_stats_computed bigint
Number of extended statistics computed. This counter only advances when the phase is computing extended statistics.
child_tables_total bigint
Number of child tables.
child_tables_done bigint
Number of child tables scanned. This counter only advances when the phase is acquiring inherited sample rows.
current_child_table_relid oid
OID of the child table currently being scanned. This field is only valid when the phase is acquiring inherited sample rows.

Table 37. ANALYZE Phases

Phase
Description
initializing
The command is preparing to begin scanning the heap. This phase is expected to be very brief.
acquiring sample rows
The command is currently scanning the table given by relid to obtain sample rows.
acquiring inherited sample rows
The command is currently scanning child tables to obtain sample rows. Columns child_tables_total, child_tables_done, and current_child_table_relid contain the progress information for this phase.
computing statistics
The command is computing statistics from the sample rows obtained during the table scan.
computing extended statistics
The command is computing extended statistics from the sample rows obtained during the table scan.
finalizing analyze
The command is updating pg_class. When this phase is completed, ANALYZE will end.

Note
Note that when ANALYZE is run on a partitioned table, all of its partitions are also recursively analyzed. In that case, progress is reported first for the parent table, during which its inheritance statistics are collected, and then for each partition in turn.


CREATE INDEX Progress Reporting

Whenever CREATE INDEX or REINDEX is running, the pg_stat_progress_create_index view will contain one row for each backend that is currently creating indexes. The tables below describe the information that will be reported and provide information about how to interpret it.

Table 38. pg_stat_progress_create_index View

Column Type
Description
pid integer
Process ID of backend.
datid oid
OID of the database to which this backend is connected.
datname name
Name of the database this backend is connected to.
relid oid
OID of the table on which the index is being created.
index_relid oid
OID of the index being created or reindexed. During a non-concurrent CREATE INDEX, this is 0.
command text
The command that is running: CREATE INDEX, CREATE INDEX CONCURRENTLY, REINDEX, or REINDEX CONCURRENTLY.
phase text
Current processing phase of index creation. See Table 39.
lockers_total bigint
Total number of lockers to wait for, when applicable.
lockers_done bigint
Number of lockers already waited for.
current_locker_pid bigint
Process ID of the locker currently being waited for.
blocks_total bigint
Total number of blocks to be processed in the current phase.
blocks_done bigint
Number of blocks already processed in the current phase.
tuples_total bigint
Total number of tuples to be processed in the current phase.
tuples_done bigint
Number of tuples already processed in the current phase.
partitions_total bigint
When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created. This field is 0 during a REINDEX.
partitions_done bigint
When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been created. This field is 0 during a REINDEX.
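For phases that report block counts, the view can be turned into a rough percent-complete figure (a sketch using only columns from Table 38; nullif avoids division by zero in phases where blocks_total is not set):

SELECT pid, command, phase,
       round(100.0 * blocks_done / nullif(blocks_total, 0), 1)
           AS blocks_pct
    FROM pg_stat_progress_create_index;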

Table 39. CREATE INDEX Phases

Phase
Description
initializing
CREATE INDEX or REINDEX is preparing to create the index. This phase is expected to be very brief.
waiting for writers before build
CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially see the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
building index
The index is being built by the access method-specific code. In this phase, access methods that support progress reporting fill in their own progress data, and the subphase is indicated in this column. Typically, blocks_total and blocks_done will contain progress data, as well as potentially tuples_total and tuples_done.
waiting for writers before validation
CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially write into the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
index validation: scanning index
CREATE INDEX CONCURRENTLY is scanning the index searching for tuples that need to be validated. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the index) and blocks_done contain the progress information for this phase.
index validation: sorting tuples
CREATE INDEX CONCURRENTLY is sorting the output of the index scanning phase.
index validation: scanning table
CREATE INDEX CONCURRENTLY is scanning the table to validate the index tuples collected in the previous two phases. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the table) and blocks_done contain the progress information for this phase.
waiting for old snapshots
CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions that can potentially see the table to release their snapshots. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
waiting for readers before marking dead
REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before marking the old index dead. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
waiting for readers before dropping
REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before dropping the old index. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.

VACUUM Progress Reporting

Whenever VACUUM is running, the pg_stat_progress_vacuum view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. Progress for VACUUM FULL commands is reported via pg_stat_progress_cluster because both VACUUM FULL and CLUSTER rewrite the table, while regular VACUUM only modifies it in place. See Section CLUSTER Progress Reporting.

Table 40. pg_stat_progress_vacuum View

Column Type
Description
pid integer
Process ID of backend.
datid oid
OID of the database to which this backend is connected.
datname name
Name of the database this backend is connected to.
relid oid
OID of the table being vacuumed.
phase text
Current processing phase of vacuum. See Table 41.
heap_blks_total bigint
Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this VACUUM.
heap_blks_scanned bigint
Number of heap blocks scanned. Because the visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to heap_blks_total when the vacuum is complete. This counter only advances when the phase is scanning heap.
heap_blks_vacuumed bigint
Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is vacuuming heap. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments.
index_vacuum_count bigint
Number of completed index vacuum cycles.
max_dead_tuples bigint
Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on maintenance_work_mem.
num_dead_tuples bigint
Number of dead tuples collected since the last index vacuum cycle.

Table 41. VACUUM Phases

Phase
Description
initializing
VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief.
scanning heap
VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing activity. The heap_blks_scanned column can be used to monitor the progress of the scan.
vacuuming indexes
VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if maintenance_work_mem (or, in the case of autovacuum, autovacuum_work_mem if set) is insufficient to store the number of dead tuples found.
vacuuming heap
VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed.
cleaning up indexes
VACUUM is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed.
truncating heap
VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes.
performing final cleanup
VACUUM is performing final cleanup. During this phase, VACUUM will vacuum the free space map, update statistics in pg_class, and report statistics to the cumulative statistics system. When this phase is completed, VACUUM will end.
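Running vacuums can be monitored with a query like the following (a sketch using only columns from Table 40), which estimates how far each scan has progressed:

SELECT pid, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1)
           AS scanned_pct,
       index_vacuum_count
    FROM pg_stat_progress_vacuum;

Because heap_blks_vacuumed only advances during the vacuuming heap phase, heap_blks_scanned is usually the more useful overall progress indicator.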

CLUSTER Progress Reporting

Whenever CLUSTER or VACUUM FULL is running, the pg_stat_progress_cluster view will contain a row for each backend that is currently running either command. The tables below describe the information that will be reported and provide information about how to interpret it.

Table 42. pg_stat_progress_cluster View

Column Type
Description
pid integer
Process ID of backend.
datid oid
OID of the database to which this backend is connected.
datname name
Name of the database this backend is connected to.
relid oid
OID of the table being clustered.
command text
The command that is running. Either CLUSTER or VACUUM FULL.
phase text
Current processing phase. See Table 43.
cluster_index_relid oid
If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero.
heap_tuples_scanned bigint
Number of heap tuples scanned. This counter only advances when the phase is seq scanning heap, index scanning heap or writing new heap.
heap_tuples_written bigint
Number of heap tuples written. This counter only advances when the phase is seq scanning heap, index scanning heap or writing new heap.
heap_blks_total bigint
Total number of heap blocks in the table. This number is reported as of the beginning of seq scanning heap.
heap_blks_scanned bigint
Number of heap blocks scanned. This counter only advances when the phase is seq scanning heap.
index_rebuild_count bigint
Number of indexes rebuilt. This counter only advances when the phase is rebuilding index.

Table 43. CLUSTER and VACUUM FULL Phases

Phase
Description
initializing
The command is preparing to begin scanning the heap. This phase is expected to be very brief.
seq scanning heap
The command is currently scanning the table using a sequential scan.
index scanning heap
CLUSTER is currently scanning the table using an index scan.
sorting tuples
CLUSTER is currently sorting tuples.
writing new heap
CLUSTER is currently writing the new heap.
swapping relation files
The command is currently swapping newly-built files into place.
rebuilding index
The command is currently rebuilding an index.
performing final cleanup
The command is performing final cleanup. When this phase is completed, CLUSTER or VACUUM FULL will end.

Base Backup Progress Reporting

Whenever an application like qhb_basebackup is taking a base backup, the pg_stat_progress_basebackup view will contain a row for each WAL sender process that is currently running the BASE_BACKUP replication command and streaming the backup. The tables below describe the information that will be reported and provide information about how to interpret it.

Table 44. pg_stat_progress_basebackup View

Column Type
Description
pid integer
Process ID of a WAL sender process.
phase text
Current processing phase. See Table 45.
backup_total bigint
Total amount of data that will be streamed. This is estimated and reported as of the beginning of the streaming database files phase. Note that this is only an approximation since the database may change during the streaming database files phase and WAL log may be included in the backup later. This is always the same value as backup_streamed once the amount of data streamed exceeds the estimated total size. If the estimation is disabled in qhb_basebackup (i.e., the --no-estimate-size option is specified), this is NULL.
backup_streamed bigint
Amount of data streamed. This counter only advances when the phase is streaming database files or transferring wal files.
tablespaces_total bigint
Total number of tablespaces that will be streamed.
tablespaces_streamed bigint
Number of tablespaces streamed. This counter only advances when the phase is streaming database files.

Table 45. Base Backup Phases

PhaseDescription
initializingThe WAL sender process is preparing to begin the backup. This phase is expected to be very brief.
waiting for checkpoint to finishThe WAL sender process is currently performing pg_backup_start to prepare to take a base backup, and waiting for the start-of-backup checkpoint to finish.
estimating backup sizeThe WAL sender process is currently estimating the total amount of database files that will be streamed as a base backup.
streaming database filesThe WAL sender process is currently streaming database files as a base backup.
waiting for wal archiving to finishThe WAL sender process is currently performing pg_backup_stop to finish the backup, and waiting for all the WAL files required for the base backup to be successfully archived. If either --wal-method=none or --wal-method=stream is specified in qhb_basebackup, the backup will end when this phase is completed.
transferring wal filesThe WAL sender process is currently transferring all WAL logs generated during the backup. This phase occurs after waiting for wal archiving to finish phase if --wal-method=fetch is specified in qhb_basebackup. The backup will end when this phase is completed.
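As a rough illustration, the ratio of backup_streamed to backup_total can be turned into a percent-complete figure. The helper below is a sketch (the function name and NULL handling are illustrative, not part of QHB), keeping in mind that backup_total is NULL when --no-estimate-size is used, and that it is raised to match backup_streamed once the estimate is exceeded:

```python
def backup_progress_percent(backup_streamed, backup_total):
    """Estimate percent complete from pg_stat_progress_basebackup columns.

    backup_total is None (NULL) when qhb_basebackup was run with
    --no-estimate-size, in which case no percentage can be computed.
    """
    if backup_total is None or backup_total == 0:
        return None
    # backup_total is bumped up to backup_streamed once the estimate is
    # exceeded, so this ratio never rises above 100%.
    return backup_streamed / backup_total * 100
```
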

COPY Progress Reporting

Whenever COPY is running, the pg_stat_progress_copy view will contain one row for each backend that is currently running a COPY command. The table below describes the information that will be reported and provides information about how to interpret it.

Table 46. pg_stat_progress_copy View

Column Type
Description
pid integer
Process ID of backend.
datid oid
OID of the database to which this backend is connected.
datname name
Name of the database this backend is connected to.
relid oid
OID of the table on which the COPY command is executed. It is set to 0 if copying from a SELECT query.
command text
The command that is running: COPY FROM, or COPY TO.
type text
The I/O type that the data is read from or written to: FILE, PROGRAM, PIPE (for COPY FROM STDIN and COPY TO STDOUT), or CALLBACK (used, for example, during the initial table synchronization in logical replication).
bytes_processed bigint
Number of bytes already processed by the COPY command.
bytes_total bigint
Size of the source file for the COPY FROM command, in bytes. It is set to 0 if not available.
tuples_processed bigint
Number of tuples already processed by the COPY command.
tuples_excluded bigint
Number of tuples not processed because they were excluded by the WHERE clause of the COPY command.
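When copying from a file, bytes_total is known, so a percent-complete value can be derived from the view. A minimal sketch (the helper name is illustrative), remembering that bytes_total is 0 when the source size is not available:

```python
def copy_progress_percent(bytes_processed, bytes_total):
    """Percent complete for a COPY FROM of a file, from pg_stat_progress_copy.

    Returns None when bytes_total is 0, i.e., the source size is unknown
    (COPY FROM STDIN, a PROGRAM source, or a callback source).
    """
    if bytes_total == 0:
        return None
    return bytes_processed / bytes_total * 100
```
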


Dynamic Tracing

QHB provides facilities to support dynamic tracing of the database server. This allows an external utility to be called at specific points in the code and thereby trace execution.

A number of probes or trace points are already inserted into the source code. These probes are intended to be used by database developers and administrators. By default the probes are not compiled into QHB; the user needs to explicitly tell the configure script to make the probes available.

Currently, the DTrace utility is supported, which, at the time of this writing, is available on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for the macros.


Compiling for Dynamic Tracing

By default, probes are not available, so you will need to explicitly tell the configure script to make them available in QHB. To include DTrace support, specify --enable-dtrace to configure.


Built-in Probes

A number of standard probes are provided in the source code, as shown in Table 47; Table 48 shows the types used in the probes. More probes can certainly be added to enhance QHB's observability.

Table 47. Built-in DTrace Probes

NameParametersDescription
transaction-start(LocalTransactionId)Probe that fires at the start of a new transaction. arg0 is the transaction ID.
transaction-commit(LocalTransactionId)Probe that fires when a transaction completes successfully. arg0 is the transaction ID.
transaction-abort(LocalTransactionId)Probe that fires when a transaction completes unsuccessfully. arg0 is the transaction ID.
query-start(const char *)Probe that fires when the processing of a query is started. arg0 is the query string.
query-done(const char *)Probe that fires when the processing of a query is complete. arg0 is the query string.
query-parse-start(const char *)Probe that fires when the parsing of a query is started. arg0 is the query string.
query-parse-done(const char *)Probe that fires when the parsing of a query is complete. arg0 is the query string.
query-rewrite-start(const char *)Probe that fires when the rewriting of a query is started. arg0 is the query string.
query-rewrite-done(const char *)Probe that fires when the rewriting of a query is complete. arg0 is the query string.
query-plan-start()Probe that fires when the planning of a query is started.
query-plan-done()Probe that fires when the planning of a query is complete.
query-execute-start()Probe that fires when the execution of a query is started.
query-execute-done()Probe that fires when the execution of a query is complete.
statement-status(const char *)Probe that fires anytime the server process updates its pg_stat_activity.status. arg0 is the new status string.
checkpoint-start(int)Probe that fires when a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
checkpoint-done(int, int, int, int, int)Probe that fires when a checkpoint is complete. (The probes listed next fire in sequence during checkpoint processing.) arg0 is the number of buffers written. arg1 is the total number of buffers. arg2, arg3 and arg4 contain the number of WAL files added, removed and recycled respectively.
clog-checkpoint-start(bool)Probe that fires when the CLOG portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
clog-checkpoint-done(bool)Probe that fires when the CLOG portion of a checkpoint is complete. arg0 has the same meaning as for clog-checkpoint-start.
subtrans-checkpoint-start(bool)Probe that fires when the SUBTRANS portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
subtrans-checkpoint-done(bool)Probe that fires when the SUBTRANS portion of a checkpoint is complete. arg0 has the same meaning as for subtrans-checkpoint-start.
multixact-checkpoint-start(bool)Probe that fires when the MultiXact portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
multixact-checkpoint-done(bool)Probe that fires when the MultiXact portion of a checkpoint is complete. arg0 has the same meaning as for multixact-checkpoint-start.
buffer-checkpoint-start(int)Probe that fires when the buffer-writing portion of a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
buffer-sync-start(int, int)Probe that fires when we begin to write dirty buffers during checkpoint (after identifying which buffers must be written). arg0 is the total number of buffers. arg1 is the number that are currently dirty and need to be written.
buffer-sync-written(int)Probe that fires after each buffer is written during checkpoint. arg0 is the ID number of the buffer.
buffer-sync-done(int, int, int)Probe that fires when all dirty buffers have been written. arg0 is the total number of buffers. arg1 is the number of buffers actually written by the checkpoint process. arg2 is the number that were expected to be written (arg1 of buffer-sync-start); any difference reflects other processes flushing buffers during the checkpoint.
buffer-checkpoint-sync-start()Probe that fires after dirty buffers have been written to the kernel, and before starting to issue fsync requests.
buffer-checkpoint-done()Probe that fires when syncing of buffers to disk is complete.
twophase-checkpoint-start()Probe that fires when the two-phase portion of a checkpoint is started.
twophase-checkpoint-done()Probe that fires when the two-phase portion of a checkpoint is complete.
buffer-read-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)Probe that fires when a buffer read is started. arg0 and arg1 contain the fork and block numbers of the page (but arg1 will be -1 if this is a relation extension request). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read.
buffer-read-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool)Probe that fires when a buffer read is complete. arg0 and arg1 contain the fork and block numbers of the page (if this is a relation extension request, arg1 now contains the block number of the newly added block). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read. arg7 is true if the buffer was found in the pool, false if not.
buffer-flush-start(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires before issuing any write request for a shared buffer. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
buffer-flush-done(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a write request is complete. (Note that this just reflects the time to pass the data to the kernel; it's typically not actually been written to disk yet.) The arguments are the same as for buffer-flush-start.
buffer-write-dirty-start(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a server process begins to write a dirty buffer. (If this happens often, it implies that shared_buffers is too small or the background writer control parameters need adjustment.) arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
buffer-write-dirty-done(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a dirty-buffer write is complete. The arguments are the same as for buffer-write-dirty-start.
wal-buffer-write-dirty-start()Probe that fires when a server process begins to write a dirty WAL buffer because no more WAL buffer space is available. (If this happens often, it implies that wal_buffers is too small.)
wal-buffer-write-dirty-done()Probe that fires when a dirty WAL buffer write is complete.
wal-insert(unsigned char, unsigned char)Probe that fires when a WAL record is inserted. arg0 is the resource manager (rmid) for the record. arg1 contains the info flags.
wal-switch()Probe that fires when a WAL segment switch is requested.
smgr-md-read-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int)Probe that fires when beginning to read a block from a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.
smgr-md-read-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)Probe that fires when a block read is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually read, while arg7 is the number requested (if these are different it indicates trouble).
smgr-md-write-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int)Probe that fires when beginning to write a block to a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.
smgr-md-write-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)Probe that fires when a block write is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually written, while arg7 is the number requested (if these are different it indicates trouble).
sort-start(int, bool, int, int, bool, int)Probe that fires when a sort operation is started. arg0 indicates heap, index or datum sort. arg1 is true for unique-value enforcement. arg2 is the number of key columns. arg3 is the number of kilobytes of work memory allowed. arg4 is true if random access to the sort result is required. arg5 indicates serial when 0, parallel worker when 1, or parallel leader when 2.
sort-done(bool, long)Probe that fires when a sort is complete. arg0 is true for external sort, false for internal sort. arg1 is the number of disk blocks used for an external sort, or kilobytes of memory used for an internal sort.
lwlock-acquire(char *, LWLockMode)Probe that fires when an LWLock has been acquired. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-release(char *)Probe that fires when an LWLock has been released (but note that any released waiters have not yet been awakened). arg0 is the LWLock's tranche.
lwlock-wait-start(char *, LWLockMode)Probe that fires when an LWLock was not immediately available and a server process has begun to wait for the lock to become available. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-wait-done(char *, LWLockMode)Probe that fires when a server process has been released from its wait for an LWLock (it does not actually have the lock yet). arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire(char *, LWLockMode)Probe that fires when an LWLock was successfully acquired when the caller specified no waiting. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire-fail(char *, LWLockMode)Probe that fires when an LWLock was not successfully acquired when the caller specified no waiting. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lock-wait-start(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)Probe that fires when a request for a heavyweight lock (lmgr lock) has begun to wait because the lock is not available. arg0 through arg3 are the tag fields identifying the object being locked. arg4 indicates the type of object being locked. arg5 indicates the lock type being requested.
lock-wait-done(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)Probe that fires when a request for a heavyweight lock (lmgr lock) has finished waiting (i.e., has acquired the lock). The arguments are the same as for lock-wait-start.
deadlock-found()Probe that fires when a deadlock is found by the deadlock detector.

Table 48. Defined Types Used in Probe Parameters

TypeDefinition
LocalTransactionIdunsigned int
LWLockModeint
LOCKMODEint
BlockNumberunsigned int
Oidunsigned int
ForkNumberint
boolunsigned int

Using Probes

The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting pg_stat_database before and after a performance test:

#!/usr/sbin/dtrace -qs

qhb$1:::transaction-start
{
      @start["Start"] = count();
      self->ts  = timestamp;
}

qhb$1:::transaction-abort
{
      @abort["Abort"] = count();
}

qhb$1:::transaction-commit
/self->ts/
{
      @commit["Commit"] = count();
      @time["Total time (ns)"] = sum(timestamp - self->ts);
      self->ts=0;
}

When executed with the process ID of a server process as its argument (for example, as obtained with pgrep -n qhb, or given explicitly as <PID>), the example D script gives output such as:

# ./txn_count.d `pgrep -n qhb`
^C

Start                                          71
Commit                                         70
Total time (ns)                        2312105013

Note
SystemTap uses a different notation for trace scripts than DTrace does, even though the underlying trace points are compatible. One point worth noting is that at this writing, SystemTap scripts must reference probe names using double underscores in place of hyphens. This is expected to be fixed in future SystemTap releases.

Remember that DTrace scripts need to be carefully written and debugged; otherwise, the trace information collected might be meaningless. In most cases where problems are found, it is the instrumentation that is at fault, not the underlying system. When discussing information found using dynamic tracing, be sure to include the script used, so that it too can be checked and discussed.


Defining New Probes

New probes can be defined within the code wherever the developer desires, though this will require a recompilation. Below are the steps for inserting new probes:

  1. Decide on probe names and data to be made available through the probes

  2. Add the probe definitions to /usr/local/qhb/backend/utils/probes.d

  3. Include pg_trace.h if it is not already present in the module(s) containing the probe points, and insert TRACE_QHB probe macros at the desired locations in the source code

  4. Recompile and verify that the new probes are available

Example:

Here is an example of how you would add a probe to trace all new transactions by transaction ID.

  1. Decide that the probe will be named transaction-start and requires a parameter of type LocalTransactionId

  2. Add the probe definition to /usr/local/qhb/backend/utils/probes.d:

    probe transaction__start(LocalTransactionId);
    

Note the use of the double underscore in the probe name. In a DTrace script using the probe, the double underscore needs to be replaced with a hyphen, so transaction-start is the name to document for users.

  3. At compile time, transaction__start is converted to a macro called TRACE_QHB_TRANSACTION_START (notice the underscores are single here), which is available by including pg_trace.h. Add the macro call to the appropriate location in the source code. In this case, it looks like the following:

    TRACE_QHB_TRANSACTION_START(vxid.localTransactionId);
    
  4. After recompiling and running the new binary, check that your newly added probe is available by executing the following DTrace command. You should see similar output:

    # dtrace -ln transaction-start
       ID    PROVIDER      MODULE           FUNCTION NAME
    18705    qhb49878       qhb     StartTransactionCommand transaction-start
    18755    qhb49877       qhb     StartTransactionCommand transaction-start
    18805    qhb49876       qhb     StartTransactionCommand transaction-start
    18855    qhb49875       qhb     StartTransactionCommand transaction-start
    18986    qhb49873       qhb     StartTransactionCommand transaction-start
    

There are a few things to be careful about when adding trace macros to the C/Rust code:

  • You should take care that the data types specified for a probe's parameters match the data types of the variables used in the macro. Otherwise, you will get compilation errors.

  • On most platforms, if QHB is built with --enable-dtrace, the arguments to a trace macro will be evaluated whenever control passes through the macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, consider protecting the macro with a check to see if the trace is actually enabled:

    if (TRACE_QHB_TRANSACTION_START_ENABLED())
        TRACE_QHB_TRANSACTION_START(some_function(...));
    

    Each trace macro has a corresponding ENABLED macro.



QHB Metrics

Overview

QHB metrics are currently accumulated at the database cluster level and apply to all QHB processes, whether they are background processes or processes serving user connections. Metrics data are sent to metricsd, the metric collector and aggregator; a short description of metricsd can be found in the chapter [Metrics Server]. The metrics are then aggregated and stored in Graphite, with Grafana used as the interface to it.


Metric Types

  • Gauge — a non-negative integer value. It can be set, increased, or decreased by a given number.
  • Counter — a non-negative integer value. It can only be increased by a given number.
  • Timer — a non-negative integer value holding a duration in nanoseconds. Only the raw values are stored; statistical characteristics are computed during aggregation:
    • sum — sum of values
    • count — number of stored values
    • max — maximum of stored values
    • min — minimum of stored values
    • mean — arithmetic mean
    • median — median value
    • std — standard deviation
    • a list of percentiles, if needed

Metric Groups

sys

The sys group includes various system metrics.

There are two global configuration parameters that control the collection of system metrics:

  • qhb_os_monitoring — Boolean, the default is off. When set to on, it enables periodic collection of system metrics.
  • qhb_os_stat_period — the interval in seconds between collections of system statistics; the default is 30 seconds.
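For example, system metrics collection might be enabled in the qhb.conf configuration file like this (a sketch; the exact file location depends on your installation):

```
qhb_os_monitoring = on      # collect system metrics periodically (default: off)
qhb_os_stat_period = 30     # seconds between collections (default: 30)
```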

Table 49. Load and CPU Metrics

NameDescriptionType
sys.load_average.1minLoad average over the last minutegauge
sys.load_average.5minThe same but over the last 5 minutesgauge
sys.load_average.15minThe same but over the last 15 minutesgauge
sys.cpu.1minCPU load percent over the last minutegauge
sys.cpu.5minThe same but over the last 5 minutesgauge
sys.cpu.15minThe same but over the last 15 minutesgauge

Note
Load average values carry two decimal places of precision; however, due to technical constraints of data transmission, the raw values are multiplied by 100 and sent as integers. When displaying the data, take this into account and divide the resulting values by 100.

The load average is the average number of running plus waiting threads over the specified period of time (1, 5, and 15 minutes). The corresponding values are normally displayed by the uptime or top commands, or with cat /proc/loadavg. See the article “Load Average in Linux” for details about this metric.

If the 1-minute average is higher than the 5- or 15-minute averages, load is increasing; if the 1-minute average is lower than the 5- or 15-minute averages, load is decreasing. However, this metric means little by itself without the total CPU count. The additional sys.cpu.Nmin metrics for CPU load (derived from load average) show an approximate CPU load percentage that takes the number of CPUs into account, and are calculated using the following simplified formula:

sys.cpu.<N>min = sys.load_average.<N>min / cpu_count * 100

where cpu_count is the number of CPUs in the system and N takes the values 1, 5, or 15.

The CPU count is calculated as the product of the number of physical sockets, cores per socket, and threads per core. The lscpu command shows all the necessary data. For example:

Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1

In this case, cpu_count = 2 * 4 * 1 = 8.

An alternative and simpler method is to obtain this value with the nproc command.

Thus, in this case, 100% load is reached when the load average approaches 8. However, as the article referenced above shows, these calculations and values are rather approximate and even arbitrary.
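The formula and the value scaling described above can be sketched as follows (the helper names are illustrative and not part of QHB):

```python
def decode_gauge(raw: int) -> float:
    """Load average metrics arrive as integers multiplied by 100."""
    return raw / 100

def cpu_load_percent(load_average: float, cpu_count: int) -> float:
    """sys.cpu.<N>min = sys.load_average.<N>min / cpu_count * 100"""
    return load_average / cpu_count * 100

# CPU count from the lscpu output above:
# threads per core * cores per socket * sockets
cpu_count = 2 * 4 * 1

print(decode_gauge(173))                 # raw value 173 -> load average 1.73
print(cpu_load_percent(8.0, cpu_count))  # load average 8 on 8 CPUs -> 100.0
```
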

Table 50. Memory Usage Metrics

NameDescriptionType
sys.mem.totalTotal amount of installed RAMgauge
sys.mem.usedMemory in usegauge
sys.mem.freeUnused memorygauge
sys.mem.availableMemory available for starting applications (not including swap, but including potentially reclaimable memory occupied by the page cache)gauge
sys.swap.totalTotal size of the swap filegauge
sys.swap.freeUnused swap file memorygauge

The metric values correspond to fields in the output of the free utility (values are reported in kilobytes, i.e., they match the output of free -k).

Table 51. Correspondence of Metric Values to free Output Fields

MetricField of the free utility
sys.mem.totalMem:total
sys.mem.usedMem:used
sys.mem.freeMem:free
sys.mem.availableMem:available
sys.swap.totalSwap:total
sys.swap.freeSwap:free

The value corresponding to Mem:buff/cache in the free utility's output can be calculated with the formula:

Mem:buff/cache = Mem:total - Mem:used - Mem:free

Thus, in Grafana this value can be calculated and displayed from the other available data using the diffSeries function.

The Mem:shared value (data of the tmpfs virtual file system) is not exposed through metrics.

The Swap:used value can be calculated with the formula:

Swap:used = Swap:total - Swap:free

This value, too, can be displayed in Grafana as a derived quantity via the diffSeries function.
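Both derived values are simple differences; in code (illustrative helpers mirroring what diffSeries computes in Grafana):

```python
def mem_buff_cache(mem_total: int, mem_used: int, mem_free: int) -> int:
    """Mem:buff/cache = Mem:total - Mem:used - Mem:free (kilobytes)."""
    return mem_total - mem_used - mem_free

def swap_used(swap_total: int, swap_free: int) -> int:
    """Swap:used = Swap:total - Swap:free (kilobytes)."""
    return swap_total - swap_free
```
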

A more detailed description of these values is available in the operating system's reference documentation for the free utility (run man free).

Table 52. Disk Space Usage Metrics

NameDescriptionType
sys.disk_space.totalSize of the disk system holding the data directory (in bytes)gauge
sys.disk_space.freeFree space on the disk system holding the data directory (in bytes)gauge

The metrics refer to the disk system on which the directory with the database files is located. This directory is specified by the -D command-line option when the database is started, or by the $PGDATA environment variable. The data_directory parameter in the qhb.conf configuration file can override the data directory location.

Table 53. Other System Metrics

NameDescriptionType
sys.processesTotal number of processes running in the systemgauge
sys.uptimeNumber of seconds since the system was startedgauge

db_stat

The db_stat group comprises block read and write metrics at the QHB instance level.

Table 54. Block Read and Write Metrics

NameDescriptionType
qhb.db_stat.numbackendsNumber of active backend processesgauge
qhb.db_stat.blocks_fetchedNumber of blocks fetched on readcounter
qhb.db_stat.blocks_hitNumber of blocks found in the cache on readcounter
qhb.db_stat.blocks_read_timeTime spent reading blocks, in millisecondscounter
qhb.db_stat.blocks_write_timeTime spent writing blocks, in millisecondscounter
qhb.db_stat.conflicts.tablespaceNumber of queries canceled due to dropped tablespacescounter
qhb.db_stat.conflicts.lockNumber of queries canceled due to lock timeoutscounter
qhb.db_stat.checksum_failuresNumber of data page checksum mismatchescounter
qhb.db_stat.conflicts.snapshotNumber of queries canceled due to old snapshotscounter
qhb.db_stat.conflicts.bufferpinNumber of queries canceled due to pinned bufferscounter
qhb.db_stat.conflicts.startup_deadlockNumber of queries canceled due to deadlockscounter
qhb.db_stat.tuples.returnedNumber of rows returned by sequential scanscounter
qhb.db_stat.tuples.fetchedNumber of rows fetched by index scanscounter
qhb.db_stat.tuples.insertedNumber of rows inserted into the databasecounter
qhb.db_stat.tuples.updatedNumber of rows updated in the databasecounter
qhb.db_stat.tuples.deletedNumber of rows deleted from the databasecounter

Per-database versions of these metrics are also generated; the metric names include the database identifier, for example: qhb.db_stat.db_16384.numbackends.

The cache hit ratio is calculated from the qhb.db_stat.blocks_fetched and qhb.db_stat.blocks_hit metrics:

k = blocks_hit / blocks_fetched * 100%

A value above 90% is usually considered good. If the ratio is significantly below that mark, consider increasing the size of the buffer cache.
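The ratio can be computed directly from the two counters; a sketch (the helper name is illustrative), guarding against division by zero when no blocks have been fetched yet:

```python
def cache_hit_ratio(blocks_hit: int, blocks_fetched: int):
    """k = blocks_hit / blocks_fetched * 100%; None if nothing fetched yet."""
    if blocks_fetched == 0:
        return None
    # A result above 90% is usually considered good.
    return blocks_hit / blocks_fetched * 100
```
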


bgwr

The bgwr group comprises background writer metrics.

Table 55. Background Writer Metrics

NameDescriptionType
qhb.bgwr.checkpoints_timedNumber of scheduled checkpoints that have been performedcounter
qhb.bgwr.checkpoints_reqNumber of requested checkpoints performed outside the scheduled sequencecounter
qhb.bgwr.checkpoint_write_timeTime spent in the checkpoint processing phase where files are written to disk, in millisecondscounter
qhb.bgwr.checkpoint_sync_timeTime spent in the checkpoint processing phase where files are synchronized to disk, in millisecondscounter
qhb.bgwr.buffers_checkpointNumber of buffers written during checkpointscounter
qhb.bgwr.buffers_cleanNumber of buffers written by the background writercounter
qhb.bgwr.maxwritten_cleanNumber of times the background writer stopped flushing dirty pages because it had written too many bufferscounter
qhb.bgwr.buffers_backendNumber of buffers written directly by a backend processcounter
qhb.bgwr.buffers_backend_fsyncNumber of times a backend process had to execute fsync itself (normally the background writer handles these, even when the backend does its own write)counter
qhb.bgwr.buffers_allocNumber of buffers allocatedcounter

Data for these metrics is displayed in the “Checkpoints and Buffer Operations” section of the QHB dashboard. Normally, when a scheduled checkpoint runs, information about the start of the checkpoint is recorded first, blocks are then flushed to disk over some period of time, and at the end of the checkpoint the durations of writing and syncing the data are recorded. For a scheduled checkpoint, block writes are spread evenly over time according to the configuration parameters, to reduce the impact of this process on overall I/O. For checkpoints requested by a command, blocks are flushed immediately, without artificial delay.

Per-database versions of these metrics are also generated; the metric names include the database identifier, for example: qhb.bgwr.db_16384.buffers_backend.


mem_stat

К группе mem_stat относятся метрики для отслеживания размера памяти work area.

Память work area может быть приблизительно описана как память, используемая внутренними операциями сортировки и хеш-таблицами. Размер этой памяти контролируется глобальным параметром конфигурации work_mem: при превышении лимита вместо оперативной памяти начинают использоваться временные файлы на диске. Параметр work_mem ограничивает память, используемую единичной операцией.

В группу mem_stat входит три разновидности метрик с общим именем work_area:

  • qhb.mem_stat.work_area — total work area size, in bytes, across the whole cluster
  • qhb.mem_stat.db_<number>.work_area — work area size, in bytes, for a single database
  • qhb.mem_stat..work_area — work area size, in bytes, for a single worker process

Metrics of the second and third kinds are created dynamically.

All of these metrics are of type gauge, that is, their values can decrease as previously occupied memory is released.


temp_stat

The temp_stat group contains metrics on QHB temporary files and temporary tables.

Table 56. Temporary file and table metrics

| Metric name | Description | Metric type |
|---|---|---|
| qhb.temp_stat.temp_files | Number of temporary files created during the aggregation period | counter |
| qhb.temp_stat.temp_bytes | Total size, in bytes, of temporary files created during the aggregation period | counter |
| qhb.temp_stat.temp_tables | Number of temporary tables created during the aggregation period | counter |
| qhb.temp_stat.temp_table_bytes | Total size, in bytes, of temporary tables created during the aggregation period | counter |

Data is collected over the metrics collector's aggregation period. The data is updated when the corresponding operation completes; for example, for a temporary table created with ON COMMIT DROP, the statistics are updated after the COMMIT command executes and the temporary table is dropped. In some cases, the space occupied by temporary files and tables can be quite substantial.
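
For example, the following transaction creates a transaction-scoped temporary table; the temp_tables and temp_table_bytes counters are updated once the COMMIT completes (a minimal sketch — the table name and data are arbitrary, and it assumes the usual generate_series function is available):

```sql
BEGIN;
-- The table is dropped automatically at COMMIT; statistics are updated at that point
CREATE TEMP TABLE tmp_demo (id int, payload text) ON COMMIT DROP;
INSERT INTO tmp_demo SELECT g, repeat('x', 100) FROM generate_series(1, 1000) AS g;
COMMIT;
```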

Per-database versions of these metrics are also generated; the database identifier is included in the metric name, for example: qhb.temp_stat.db_16384.temp_bytes.


wal

The wal group contains metrics of the write-ahead log archiving process.

Table 57. WAL archiving metrics

| Metric name | Description | Metric type |
|---|---|---|
| qhb.wal.archived | Number of successfully completed WAL file archiving operations | counter |
| qhb.wal.failed | Number of archiving attempts that ended in failure | counter |
| qhb.wal.archive_time | Time spent copying WAL files, in nanoseconds | counter |

These metrics are active only if write-ahead log archiving is configured. To enable it, set the archive_mode parameter to on and define an archiving command in the archive_command parameter.


transaction

The transaction group contains transaction metrics.

Table 58. Transaction metrics

| Metric name | Description | Metric type |
|---|---|---|
| qhb.transaction.commit | Number of committed transactions | counter |
| qhb.transaction.rollback | Number of rolled-back transactions | counter |
| qhb.transaction.deadlocks | Number of deadlocks | counter |

Metric data is collected directly when transaction commit and abort commands are executed, at the level of the whole database cluster.

The transaction commit ratio is calculated as the percentage of committed transactions relative to the sum of commits and rollbacks:

k = commit / (commit + rollback) * 100%

The value normally approaches 100%, since most transactions complete successfully. A substantial share of rolled-back transactions may indicate problems in the system.
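
Since the Zabbix template described below draws on the standard pg_stat_database view, the same ratio can also be computed directly in SQL (a sketch, assuming that view is available in QHB):

```sql
-- Commit ratio per database; nullif avoids division by zero on idle databases
SELECT datname,
       round(100.0 * xact_commit / nullif(xact_commit + xact_rollback, 0), 2)
         AS commit_ratio_pct
FROM pg_stat_database
WHERE datname IS NOT NULL;
```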

Deadlocks occur when different sessions are each waiting for data locked by the other. Once a deadlock is detected automatically, one of the transactions is rolled back.


wait

The wait group contains wait event metrics.

This set of metrics corresponds exactly to the set of standard wait events. Metric names have the qhb.wait prefix, followed by the wait event class and then, after a dot, the wait event name. In the current release, metric names are limited in length to at most 31 characters. All wait event metrics have type counter, but the value represents the amount of time, in microseconds, that these waits took in total across all sessions during the collection period.

Note
If many user connections were in a waiting state during the observation period, the total wait time can exceed that period many times over. For example, if 1000 sessions each spent one second waiting within a 10-second window, the total wait time is 1000 seconds.

Metrics with the wait event types Lock, LWLock, IO, and IPC are displayed in the "Wait Events" section of the QHB dashboard. Metric values are shown in microseconds (automatically converted to larger units as values grow). The existing graphs do not show every wait event, only the five largest on each graph. Different wait events can differ greatly in duration, and significant fluctuations in the values may indicate emerging problems.

Table 59. Most significant wait events

| Wait event name | Description |
|---|---|
| Lock.extend | Waiting to extend a relation. Becomes noticeable when tables grow actively: allocating new blocks introduces delays, which are reflected in this metric. |
| Lock.transactionid | Waiting for a transaction to finish. Occurs when a transaction has to wait until preceding transactions finish processing and are confirmed as complete. |
| Lock.tuple | Waiting to acquire a lock on a tuple. Occurs when several transactions work with the same data concurrently. |
| LWLock.WALWriteLock | Waiting for WAL buffers to be written to disk. Often the leader among wait events of this type, since disk operations are the slowest in this group of wait events. |
| LWLock.wal_insert | Waiting to insert WAL into a memory buffer. |
| LWLock.buffer_content | Waiting to read or write a data page in memory. Arises and becomes significant under intensive I/O. |
| LWLock.buffer_mapping | Waiting to associate a data block with a buffer in the buffer pool. |
| LWLock.lock_manager | Waiting to add or examine locks for backend processes. Becomes significant when transactions are frequent. |
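
These cumulative wait metrics can be complemented by a point-in-time view of what sessions are waiting on right now; a sketch using the standard pg_stat_activity view, which the Zabbix template below also queries (assuming it is available in QHB):

```sql
-- Current wait events across all sessions, most frequent first
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;
```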

bufmgr

The bufmgr group contains buffer manager metrics related to the memory management machinery. These metrics are collected during data read operations.

Table 60. Buffer manager metrics

| Metric name | Description | Metric type | Aggregate |
|---|---|---|---|
| qhb.bufmgr.BufferAlloc | Number of times a buffer lookup was performed | timer | count |
| qhb.bufmgr.BufferAlloc | Total time spent looking up buffers | timer | sum |
| qhb.bufmgr.happy_path | Number of times a buffer was found immediately | timer | count |
| qhb.bufmgr.happy_path | Total time spent on lookups where the buffer was found immediately | timer | sum |
| qhb.bufmgr.cache_miss | Number of buffer cache misses | timer | count |
| qhb.bufmgr.cache_miss | Total time spent handling buffer cache misses | timer | sum |
| qhb.bufmgr.disk_read | Number of page reads from disk (asynchronous) | timer | count |
| qhb.bufmgr.flush_dirty | Number of page flushes to disk (asynchronous) | timer | count |
| qhb.bufmgr.retry_counter | Number of repeated miss handlings | counter | |
| qhb.bufmgr.strategy_pop_cnt | Number of times the special buffer acquisition or eviction strategy was triggered | counter | |
| qhb.bufmgr.strategy_reject_cnt | Number of buffers proposed by the special strategy and rejected | counter | |
| tarq_cache.allocate | Number of lookups performed in TARQ | timer | count |
| tarq_cache.allocate | Total time spent on lookups in TARQ | timer | sum |
| tarq_cache.allocate_new | Number of times an eviction victim block was selected in TARQ | timer | count |
| tarq_cache.rollback | Number of eviction rollbacks in TARQ | timer | count |
| tarq_cache.rollback | Total time spent on eviction rollbacks in TARQ | timer | sum |
| tarq_cache.touch | Total time spent accounting for popular pages in TARQ | timer | sum |

lmgr

The lmgr group contains lock manager metrics. Locks come in two kinds: regular and predicate. Predicate locks are called "locks" for historical reasons; nowadays they do not block anything, but are used to track data dependencies between transactions.

The lock manager metrics track the number of locks in use and of serializable transactions (transactions at the highest isolation level, Serializable), as well as the number of free slots in the hash tables for each of these. These metrics exist because the hash tables that store locks and serializable transactions are limited in size, and exceeding the limit can produce an out-of-memory error.

These metrics are collected during data read operations.

Table 61. Lock manager metrics

| Metric name | Description | Metric type |
|---|---|---|
| qhb.lmgr.locks | Number of locks in use | counter |
| qhb.lmgr.locks_available | Number of locks that can still be created | counter |
| qhb.lmgr.proc_locks | Number of PGPROC locks in use | counter |
| qhb.lmgr.proc_locks_available | Number of PGPROC locks that can still be created | counter |
| qhb.lmgr.serializable_xids | Number of active serializable transactions | counter |
| qhb.lmgr.serializable_xids_available | Number of serializable transactions that can still be created | counter |
| qhb.lmgr.pred_locks | Number of predicate "locks" in use | counter |
| qhb.lmgr.pred_locks_available | Number of predicate "locks" that can still be created | counter |
| qhb.lmgr.pred_lock_targets | Number of predicate target "locks" in use | counter |
| qhb.lmgr.pred_lock_targets_available | Number of predicate target "locks" that can still be created | counter |
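
For a point-in-time cross-check of qhb.lmgr.locks, the standard pg_locks view (also used by the Zabbix template below) can be queried directly; a sketch, assuming the view is available in QHB:

```sql
-- Number of currently granted locks, roughly corresponding to qhb.lmgr.locks
SELECT count(*) AS granted_locks FROM pg_locks WHERE granted;
```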

queryid

The queryid group contains metrics that make it possible to track planning and execution statistics for (almost) all SQL statements executed by the server.

This group is distinctive in that its metrics are created dynamically when the engine starts processing a new statement. A unique name, or query ID, is added to each new metric name. For example, the metric qhb.queryid.c92e145f160e7b9e.exec_calls reflects the number of executions of a particular SQL statement. The text of the statement itself can be obtained from the qhb_queryid system table with a query such as SELECT * FROM qhb_queryid WHERE qid = 'c92e145f160e7b9e'.

Configuring the queryid metric group is described in more detail in [Configuring queryid metrics].

Table 62. queryid metrics

| Metric name | Description | Metric type |
|---|---|---|
| qhb.queryid.id.plan_calls | Number of times the statement was planned | counter |
| qhb.queryid.id.total_plan_time | Total time spent planning the statement, in nanoseconds | counter |
| qhb.queryid.id.min_plan_time | Minimum time spent planning the statement, in nanoseconds | gauge |
| qhb.queryid.id.max_plan_time | Maximum time spent planning the statement, in nanoseconds | gauge |
| qhb.queryid.id.mean_plan_time | Mean time spent planning the statement, in nanoseconds | gauge |
| qhb.queryid.id.stddev_plan_time | Standard deviation of the time spent planning the statement, in nanoseconds | gauge |
| qhb.queryid.id.exec_calls | Number of times the statement was executed | counter |
| qhb.queryid.id.total_exec_time | Total time spent executing the statement, in nanoseconds | counter |
| qhb.queryid.id.min_exec_time | Minimum time spent executing the statement, in nanoseconds | gauge |
| qhb.queryid.id.max_exec_time | Maximum time spent executing the statement, in nanoseconds | gauge |
| qhb.queryid.id.mean_exec_time | Mean time spent executing the statement, in nanoseconds | gauge |
| qhb.queryid.id.stddev_exec_time | Standard deviation of the time spent executing the statement, in nanoseconds | gauge |
| qhb.queryid.id.shared_blks_hit | Total number of shared block cache hits by the statement | counter |
| qhb.queryid.id.shared_blks_read | Total number of shared blocks read by the statement | counter |
| qhb.queryid.id.shared_blks_dirtied | Total number of shared blocks dirtied by the statement | counter |
| qhb.queryid.id.shared_blks_written | Total number of shared blocks written by the statement | counter |
| qhb.queryid.id.local_blks_hit | Total number of local block cache hits by the statement | counter |
| qhb.queryid.id.local_blks_read | Total number of local blocks read by the statement | counter |
| qhb.queryid.id.local_blks_dirtied | Total number of local blocks dirtied by the statement | counter |
| qhb.queryid.id.local_blks_written | Total number of local blocks written by the statement | counter |
| qhb.queryid.id.temp_blks_read | Total number of temporary blocks read by the statement | counter |
| qhb.queryid.id.temp_blks_written | Total number of temporary blocks written by the statement | counter |
| qhb.queryid.id.exec_rows | Total number of rows retrieved or affected by the statement | counter |
| qhb.queryid.id.blk_read_time | Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) | counter |
| qhb.queryid.id.blk_write_time | Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) | counter |
| qhb.queryid.id.wal_records | Total number of WAL records generated by the statement | counter |
| qhb.queryid.id.wal_fpi | Total number of WAL full page images generated by the statement | counter |
| qhb.queryid.id.wal_bytes | Total amount of WAL generated by the statement, in bytes | counter |

QCP Connection Pool Metrics

The connection pool metrics below characterize the operation of the pool. They are collected during query execution.

Table 63. QCP connection pool metrics

| Metric name | Description | Metric type |
|---|---|---|
| qcp.queue | Number of requests currently in the queue | gauge |
| qcp.obtain_backend | Time spent waiting for a backend to be assigned to execute a client request | timer |
| qcp.obtain_backend_failed | Maximum wait time for assigning a backend to execute a client request was exceeded | timer |
| <DBMS address>.in_use | Number of database connections (backends) in use | gauge |

Enabling and Disabling Recording of Metric Group Values

QHB provides a mechanism for enabling and disabling the sending of metric groups. The data directory contains the configuration file metrics_config.toml, in which each line corresponds to a metric group:

[is_enabled]
default = true
sys = true
db_stat = true
bgwr = true
wal = true
transaction = true
wait = true
bufmgr = true
mem_stat = true
temp_stat = true
lmgr = true
queryid = false

The variables take the value true (sending metrics of the group is allowed) or false (sending is disabled).

Table 64. Configuring recording of values for metric groups

| Metric group | Description | Default value |
|---|---|---|
| default | Default group (miscellaneous metrics not assigned to a dedicated group) | true |
| sys | Operating system metrics | true |
| db_stat | Block read and write metrics | true |
| bgwr | Background writer metrics | true |
| mem_stat | Work area memory size metrics | true |
| temp_stat | Temporary file and table metrics | true |
| wal | WAL file archiving metrics | true |
| transaction | Transaction metrics | true |
| wait | Wait event metrics | true |
| bufmgr | Memory management (buffer manager) metrics | true |
| lmgr | Lock metrics (regular and predicate locks) | true |
| queryid | SQL statement planning and execution metrics | false |

Note
To collect operating system metrics, the qhb_os_monitoring parameter must also be set to on.

To view the list of metric groups and their current state, use the metrics_config_show SQL function. Sample output:

SELECT * FROM metrics_config_show();
 group_name  | enabled
-------------+---------
 bgwr        | t
 bufmgr      | t
 db_stat     | t
 default     | t
 lmgr        | t
 mem_stat    | t
 queryid     | f
 sys         | t
 temp_stat   | t
 transaction | t
 wait        | t
 wal         | t
(12 rows)

Besides setting these values through the parameter file, they can be changed through functions. Changes are reflected in the configuration file immediately.

Table 65. Functions for enabling and disabling metric sending

Function
Description
metrics_config_enable_group ( group_name: cstring, respect_transient_settings: boolean ) → void
Enables sending of values for the metrics of the specified group.
metrics_config_disable_group ( group_name: cstring, respect_transient_settings: boolean ) → void
Disables sending of values for the metrics of the specified group.
metrics_config_transient_set ( backend_pid: int4, group_name: cstring, option_value: boolean ) → void
Sets a transient setting that enables or disables sending of the specified metric group's values for the given backend process.

The second parameter, respect_transient_settings, of metrics_config_enable_group and metrics_config_disable_group specifies whether priority should be given to the metric group sending settings established at the level of individual backend processes.

The metrics_config_transient_set function enables or disables sending of the values of a given metric group for a specific backend process. The setting stays in effect for the lifetime of the backend process, or until metrics_config_enable_group or metrics_config_disable_group is called with the second parameter respect_transient_settings = false, which extends the effect of that call to the backend processes that had set their own value.

When a backend process starts with an identifier that was used before, all transient settings for that identifier are reset.
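
Putting the functions together, a session might toggle groups like this (a sketch based on the signatures above; the PID is illustrative):

```sql
-- Disable the queryid group cluster-wide, overriding per-backend transient settings
SELECT metrics_config_disable_group('queryid', false);

-- Re-enable it, leaving any per-backend transient settings in force
SELECT metrics_config_enable_group('queryid', true);

-- Transiently enable the wait group for the backend with PID 15606 only
SELECT metrics_config_transient_set(15606, 'wait', true);
```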

Note
Everything said here about disabling the recording of metric groups does not apply to metrics submitted through [SQL functions][Functions for managing metrics and annotations]. Metrics submitted through those functions are always recorded.



QHB Metrics Dashboards for Grafana

The QHB dashboards for Grafana are available in the repository at the following link.

QHB ships with a metrics server that writes metric data to Graphite, with Grafana serving as the interface to these metrics. The current set of Grafana dashboards is provided as self-documenting samples, from which users can build dashboards better suited to their needs. Of course, the supplied dashboards can also be used as they are.


Importing the Dashboards

The JSON descriptions of the dashboards were exported from Grafana 6.7.2.

Before importing the JSON descriptions, decide whether metric names will carry the host name as a prefix. This is how the metric names inside the dashboards are constructed, and keeping this scheme is recommended. Metric names begin with the variable $server_name, whose default value is your_host_name. Before importing, you can replace this value in the JSON files with the name of one of your hosts. Later, through the Grafana interface, you can list in this variable, separated by commas, all the host names from which metrics will be collected; this makes it possible to switch quickly between hosts when viewing metrics. If this scheme will not be used (because metrics will be viewed for a single host only), you can instead delete the $server_name prefix from all metric names in the JSON files before importing. This is, however, more laborious and is not recommended.

To import the dashboard descriptions, perform the following steps:

  1. In the Dashboards menu of your Grafana site, choose Manage.
  2. In the list of folders and dashboards that opens, choose an existing folder or create a new one.
  3. Inside the chosen folder, choose Import in the upper-right part of the page.
  4. On the page that opens, either click the Upload json file button in the upper right and upload the file, or paste the contents of the JSON file into the field under the Or paste JSON heading and click Load.
  5. Then fill in the required parameters and complete the import of the JSON description.

The "Operating System" Dashboard

This dashboard presents the main system indicators:

  • QHB instance uptime;
  • Load average;
  • RAM usage;
  • Memory usage;
  • Usage of the disk subsystem hosting the database directory.

The "QHB" Dashboard

This dashboard contains several sections:

  • Transactions;
  • Block reads and writes;
  • Wait events;
  • Checkpoints and buffer operations;
  • WAL archiving.

Each section presents a set of thematic panels reflecting the main indicators.

Note
By default, no event tags are defined, so the display of additional annotations has to be configured manually:

  • add the desired comments to the metric data with the qhb_annotation function (see the Annotations section);
  • configure the display of the available annotations in the Grafana dashboard.


QHB Monitoring for Zabbix

This is an example of using a Zabbix 6.2 server to monitor a network host running QHB and a Zabbix agent.

Required files:

  • template_db_qhb.yaml — the "QHB by Zabbix agent" template file, to be imported into the Zabbix 6.2 server;
  • template_db_qhb.conf — the file with the Zabbix agent user parameters for polling QHB;
  • the qhb/ directory, containing the SQL files referenced by the user parameters.

The archive for Zabbix 6.2 or later is available in the repository at the following link.

Installation

Note
For more details, see the Zabbix documentation on working with agent templates.

  1. Install the Zabbix agent on the network host running QHB.

  2. Copy the qhb/ directory into the Zabbix agent home directory /var/lib/zabbix. If /var/lib/ does not contain a zabbix/ directory, create it. The qhb/ directory contains the SQL files needed to obtain metrics from QHB:

    # mkdir -p /var/lib/zabbix/qhb
    # cd /var/lib/zabbix/qhb
    # wget <server>/zabbix/qhb/qhb.tar
    # tar -xvf qhb.tar
    # chmod -R 707 /var/lib/zabbix/qhb
    # rm -rf qhb.tar
    
  3. Copy the template_db_qhb.conf file into the Zabbix agent configuration directory /etc/zabbix/zabbix_agentd.d/:

    # wget <server>/zabbix/template_db_qhb.conf
    
  4. Create the zbx_monitor user with read-only privileges and access to the QHB cluster:

    CREATE USER zbx_monitor WITH PASSWORD '<PASSWORD>' INHERIT;
    GRANT pg_monitor TO zbx_monitor;
    
  5. Edit the qhb_hba.conf file to allow the monitoring user to connect. Open it in a text editor (for example, nano) and add the following line:

    host all zbx_monitor 127.0.0.1/32 trust
    
  6. Restart QHB and the Zabbix agent:

    # systemctl restart qhb
    # systemctl restart zabbix_agentd   
    
  7. Import the template_db_qhb.yaml file of the "QHB by Zabbix agent" template on the Zabbix server. For details, see the Zabbix documentation on template import.

  8. Set the {$PG.HOST}, {$PG.PORT}, {$PG.USER}, {$PG.PASSWORD}, and {$PG.DB} macros for the network host running QHB.

  9. Attach the "QHB by Zabbix agent" template to the network host running QHB.

Collected Parameters

Table 66. Collected parameters

| Group | Name | Description |
|---|---|---|
| QHB | Bgwriter: Buffers allocated per second | Number of buffers allocated per second |
| QHB | Bgwriter: Buffers written directly by a backend per second | Number of buffers per second written directly by a backend |
| QHB | Bgwriter: Buffers backend fsync per second | Number of times per second a backend had to execute its own fsync call (normally the background writer handles these, even when the backend does its own writes) |
| QHB | Bgwriter: Buffers written during checkpoints per second | Number of buffers written per second during checkpoints |
| QHB | Bgwriter: Buffers written by the background writer per second | Number of buffers written per second by the background writer |
| QHB | Bgwriter: Requested checkpoints per second | Number of requested checkpoints performed per second |
| QHB | Bgwriter: Scheduled checkpoints per second | Number of scheduled checkpoints performed per second |
| QHB | Bgwriter: Checkpoint sync time | Total amount of time spent syncing files to disk during checkpoint processing |
| QHB | Bgwriter: Checkpoint write time | Total amount of time spent writing files to disk during checkpoint processing, in milliseconds |
| QHB | Bgwriter: Max written per second | Number of times per second the background writer stopped its cleaning scan because it had written too many buffers |
| QHB | Status: Cache hit ratio % | Cache hit ratio |
| QHB | Status: Config hash | QHB configuration hash |
| QHB | Connections sum: Active | Total number of connections executing queries |
| QHB | Connections sum: Idle | Total number of connections waiting for a new client command |
| QHB | Connections sum: Idle in transaction | Total number of connections that are in a transaction but not executing a query |
| QHB | Connections sum: Prepared | Total number of prepared transactions |
| QHB | Connections sum: Total | Total number of connections |
| QHB | Connections sum: Total % | Total number of connections as a percentage |
| QHB | Connections sum: Waiting | Total number of waiting transactions |
| QHB | Status: Ping time | Response time |
| QHB | Status: Ping | Connectivity check |
| QHB | Replication: standby count | Number of standby servers |
| QHB | Replication: lag in seconds | Replication lag behind the primary server, in seconds |
| QHB | Replication: recovery role | Replication role: 1 — recovery is still in progress (standby mode), 0 — primary mode |
| QHB | Replication: status | Replication status: 0 — streaming is down, 1 — streaming is up, 2 — primary mode |
| QHB | Transactions: Max active transaction time | Current maximum duration of an active transaction |
| QHB | Transactions: Max idle transaction time | Current maximum duration of an idle transaction |
| QHB | Transactions: Max prepared transaction time | Current maximum duration of a prepared transaction |
| QHB | Transactions: Max waiting transaction time | Current maximum duration of a waiting transaction |
| QHB | Status: Uptime | Total system uptime |
| QHB | Status: Version | QHB version |
| QHB | WAL: Segments count | Number of WAL segments |
| QHB | WAL: Bytes written | Amount of WAL written, in bytes |
| QHB | DB {#DBNAME}: Database size | Size of this database |
| QHB | DB {#DBNAME}: Blocks hit per second | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary |
| QHB | DB {#DBNAME}: Disk blocks read per second | Total number of disk blocks read in this database |
| QHB | DB {#DBNAME}: Detected conflicts per second | Total number of queries canceled due to conflicts with recovery in this database |
| QHB | DB {#DBNAME}: Detected deadlocks per second | Total number of deadlocks detected in this database |
| QHB | DB {#DBNAME}: Temp_bytes written per second | Total amount of data written to temporary files by queries in this database |
| QHB | DB {#DBNAME}: Temp_files created per second | Total number of temporary files created by queries in this database |
| QHB | DB {#DBNAME}: Tuples deleted per second | Total number of rows deleted by queries in this database |
| QHB | DB {#DBNAME}: Tuples fetched per second | Total number of rows fetched by queries in this database |
| QHB | DB {#DBNAME}: Tuples inserted per second | Total number of rows inserted by queries in this database |
| QHB | DB {#DBNAME}: Tuples returned per second | Total number of rows returned by queries in this database |
| QHB | DB {#DBNAME}: Tuples updated per second | Total number of rows updated by queries in this database |
| QHB | DB {#DBNAME}: Commits per second | Number of transactions committed in this database |
| QHB | DB {#DBNAME}: Rollbacks per second | Total number of transactions rolled back in this database |
| QHB | DB {#DBNAME}: Frozen XID before autovacuum % | Percentage of frozen transaction IDs before autovacuum |
| QHB | DB {#DBNAME}: Frozen XID before stop % | Percentage of frozen transaction IDs before stop |
| QHB | DB {#DBNAME}: Locks total | Total number of locks in this database |
| QHB | DB {#DBNAME}: Queries slow maintenance count | Number of slow maintenance queries |
| QHB | DB {#DBNAME}: Queries max maintenance time | Maximum duration of a maintenance query |
| QHB | DB {#DBNAME}: Queries sum maintenance time | Total duration of maintenance queries |
| QHB | DB {#DBNAME}: Queries slow query count | Number of slow queries |
| QHB | DB {#DBNAME}: Queries max query time | Maximum query duration |
| QHB | DB {#DBNAME}: Queries sum query time | Total duration of queries |
| QHB | DB {#DBNAME}: Queries slow transaction count | Number of slow transactional queries |
| QHB | DB {#DBNAME}: Queries max transaction time | Maximum duration of a transactional query |
| QHB | DB {#DBNAME}: Queries sum transaction time | Total duration of transactional queries |
| QHB | DB {#DBNAME}: Index scans per second | Number of index scans in this database |
| QHB | DB {#DBNAME}: Sequential scans per second | Number of sequential scans in this database |
| Zabbix master items | QHB: Get bgwriter | Background writer activity statistics |
| Zabbix master items | QHB: Get connections sum | Collects all metrics from pg_stat_activity |
| Zabbix master items | QHB: Get dbstat | Collects all metrics from pg_stat_database for each database |
| Zabbix master items | QHB: Get locks | Collects all metrics from pg_locks for each database |
| Zabbix master items | QHB: Get queries | Collects all query execution time metrics |
| Zabbix master items | QHB: Get transactions | Collects transaction execution time metrics |
| Zabbix master items | QHB: Get WAL | Master item for collecting WAL metrics |
| Zabbix master items | DB {#DBNAME}: Get frozen XID | Number of frozen transaction IDs |
| Zabbix master items | DB {#DBNAME}: Get scans | Number of scans performed on tables/indexes in this database |

Triggers

Table 67. Triggers

| Name | Description |
|---|---|
| QHB: Required checkpoints occurs too frequently | Requested checkpoints are occurring too frequently |
| QHB: Cache hit ratio too low | The cache hit ratio is too low |
| QHB: Configuration has changed | The configuration has changed |
| QHB: Total number of connections is too high | The total number of connections is too high |
| QHB: Response too long | The response time is too long |
| QHB: Service is down | QHB is not running |
| QHB: Streaming lag with {#MASTER} is too high | The replication lag behind the primary server is too high |
| QHB: Replication is down | Replication is not functioning |
| QHB: Service has been restarted | QHB uptime is less than 10 minutes |
| QHB: Version has changed | The QHB version has changed |
| DB {#DBNAME}: Too many recovery conflicts | Too many conflicts between the primary and standby servers during recovery |
| DB {#DBNAME}: Deadlock occurred | A deadlock has occurred |
| DB {#DBNAME}: VACUUM FREEZE is required to prevent wraparound | VACUUM FREEZE must be run to prevent transaction ID wraparound |
| DB {#DBNAME}: Number of locks is too high | The number of locks is too high |
| DB {#DBNAME}: Too many slow queries | Too many slow queries |
| QHB: Failed to get items | Zabbix has not received data for these items in the last 30 minutes |

Configuring Metrics Collection

For the dashboards to display metric data, some configuration is required.

Configuring the Metrics Server

Configuring the metrics server is described in the [Metrics Server] chapter.

It is recommended to set the prefix parameter in the metrics server configuration file /etc/metricsd/config.yaml to the name of the host it runs on. If this is done for every server, all metrics are organized hierarchically, with servers forming the first level of the hierarchy. The $server_name variable present in the metric names of the supplied dashboards serves this purpose. It is assumed that only one database cluster runs on the host.


Configuring Database Parameters

Before QHB release 1.3.0, configuring metric sending required setting the metrics_collector_id parameter in qhb.conf to the value with which the metrics collector is started, for example 1001. Starting with QHB release 1.3.0, the metrics_collector_path parameter is used instead of metrics_collector_id; its default value is @metrics-collector (a Unix-domain socket path), and the metrics server listens on this address by default.

To configure annotation sending, set the following parameters in qhb.conf:

  • grafana.address — the Grafana address, for example http://localhost:3000
  • grafana.token — a token obtained from Grafana at http://localhost:3000/org/apikeys

Example qhb.conf settings for sending metrics and annotations:

# Before QHB release 1.3.0
# metrics_collector_id = 1001

# Starting with QHB release 1.3.0:
metrics_collector_path = '@metrics-collector'

grafana.address = 'http://localhost:3000'
grafana.token = 'eyJrIjoiNGxTaloxMUNTQkFUMTN0blZqUTN6REN6OWI5YjM1MzMiLCJuIjoidGVzdCIsImlkIjoxfQ=='

Note
If you need metric data written to CSV files, set metrics_collector_path to the path of a Unix-domain socket file, e.g. /tmp/metrics-collector.sock. The same value must be specified in the bind_addr parameter of the collection section of the metrics server settings (/etc/metricsd/config.yaml).
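In that case the two sides must agree on the socket path. A sketch of the matching pair of settings (the /tmp/metrics-collector.sock path is the example from the note; the config.yaml layout beyond the collection/bind_addr keys named above is an assumption):

```
# qhb.conf
metrics_collector_path = '/tmp/metrics-collector.sock'

# /etc/metricsd/config.yaml
collection:
  bind_addr: /tmp/metrics-collector.sock
```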

To collect system metrics (the "Operating System" dashboard), set the qhb_os_monitoring parameter to on (enabled). You can also set the system statistics collection period qhb_os_stat_period (default: 30 seconds). Setting this parameter too low is not recommended, since collecting system statistics incurs some overhead.

You can add the following to the configuration file:

qhb_os_monitoring = on
qhb_os_stat_period = 60 # if the default period of 30 seconds does not suit you

Or execute the commands:

ALTER SYSTEM SET qhb_os_monitoring = ON;
ALTER SYSTEM SET qhb_os_stat_period = 60;
SELECT pg_reload_conf();

Examples of Using Metrics in SQL Functions

In addition to the built-in metrics, users can record their own metrics via the following SQL functions.

The Timer Metric Type

Used to record a time interval; the unit of measurement is nanoseconds.

SELECT qhb_timer_report('qhb.timer.nano', 10000000000 /* 10 seconds in nanoseconds */);
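In practice the interval is usually measured rather than hard-coded. A minimal sketch that times a piece of work and reports the elapsed nanoseconds (only qhb_timer_report comes from this document; clock_timestamp, pg_sleep, and the PL/pgSQL DO block are standard SQL facilities assumed to be available):

```sql
DO $$
DECLARE
    t0 timestamptz := clock_timestamp();
    elapsed_ns bigint;
BEGIN
    PERFORM pg_sleep(0.5);  -- the work being measured

    -- Convert the elapsed interval to nanoseconds
    elapsed_ns := (EXTRACT(EPOCH FROM clock_timestamp() - t0) * 1e9)::bigint;
    PERFORM qhb_timer_report('qhb.timer.nano', elapsed_ns);
END $$;
```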

The Counter Metric Type

Used to record the number of events that occurred over a period of time.

SELECT qhb_counter_increase_by('qhb.example.counter', 10);

The Gauge Metric Type

Used to set some static indicator to a particular value, or to change it.

SELECT qhb_gauge_update('qhb.gauge_example.value', 10); /* Set the value */
SELECT qhb_gauge_add('qhb.gauge_example.value', 1); /* Increment the value */
SELECT qhb_gauge_sub('qhb.gauge_example.value', 1); /* Decrement the value */

Annotations

Used to attach a comment to the metric data. The first function parameter is the comment text; the subsequent parameters are tags.

SELECT qhb_annotation('Test run started', 'test', 'billing'); /* Annotation text and two tags */