Autonomous Transactions — PL/pgSQL Language Extension

"The legitimate real-world use of autonomous transactions is exceedingly rare. If you find them to be a feature you are using constantly, you'll want to take a long, hard look at why."

-- Tom Kyte, Oracle expert

Overview

Autonomous transactions provide a special way to control transactions in stored procedures. An autonomous transaction is a subtransaction that is isolated from the parent transaction: it can commit (COMMIT) or roll back (ROLLBACK) its changes regardless of the state of the parent transaction.



Rationale

  • Logging (audit). The need to store information about the execution of some operation even if the transaction in which it was performed is rolled back.

  • Debugging tool. Similar to the previous case, trace information is recorded even if the transaction being debugged fails.

  • Simplifying business logic in some cases, for example, executing a series of operations (notification, update) without the risk of losing all results if one of them fails.

Note that using autonomous transactions increases the workload on the database: they consume additional resources and require context switching. The logical complexity of the application may also grow, making maintenance more difficult. It is better to avoid autonomous transactions altogether, and if they are still necessary, to restrict yourself to simple usage patterns.



Usage

The main syntax block of stored procedure code (including functions, anonymous blocks, and triggers) may be marked with the optional AUTONOMOUS keyword, meaning that the entire function is executed in a single implicit autonomous transaction. Such a function is called an autonomous function. Autonomous functions can be nested; in this case, the background worker of the autonomous session starts a background worker for a new autonomous session. Note that only the main BEGIN..END block of a function can be AUTONOMOUS. Attempting to make any other code block autonomous results in a syntax error.

A plain usage example (assuming that autonomous_pool_capacity has already been set to a non-zero value):

CREATE TABLE test_table (a int);

-- Define a function containing an autonomous transaction
CREATE OR REPLACE FUNCTION func() RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN AUTONOMOUS
    INSERT INTO test_table VALUES (1);
    INSERT INTO test_table VALUES (2);
END;
$$;

-- Execute a function and rollback the parent transaction
START TRANSACTION;
SELECT func();
ROLLBACK;

-- Checking that an autonomous subtransaction has been committed
SELECT * FROM test_table;
 a
---
 1
 2
(2 rows)
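As noted above, anonymous blocks can be autonomous as well. A minimal sketch under the same setup (the test_table created above and a non-zero autonomous_pool_capacity):

```sql
-- Run an anonymous block as an autonomous transaction
START TRANSACTION;
DO $$
BEGIN AUTONOMOUS
    INSERT INTO test_table VALUES (3);
END;
$$;
ROLLBACK;

-- The inserted row survives the rollback of the parent transaction
SELECT * FROM test_table WHERE a = 3;
```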


Optimization

Autonomous transactions are performed in a separate session, implemented using the background worker mechanism with its own database connection. Starting an autonomous session from scratch requires more resources and time than starting a normal subtransaction, because a new background worker is launched and a new database connection is established.

Therefore, an optimization is used in the form of a pool of autonomous sessions: a session remains in the pool upon completion of its transaction and can be reused if the new parent session is compatible with it with respect to the database connection attributes. An autonomous session can be in one of two states: Active or Idle. In the Idle state, an autonomous session is available for reuse by other compatible main sessions. A backend is considered compatible if it represents a connection to the same database with the same role as the Idle session.

Note that an autonomous session in the pool, whether Active or Idle, occupies one database "connection slot". No more than max_connections connections can exist at the same time.

The current state of the autonomous session pool can be inspected with the autonomous_pool_show() function.

SELECT * FROM autonomous_pool_show();

Example of the possible autonomous_pool_show() output for a pool size of 5:

 slot | state  | worker | owner | owner_type | database | auth_user | cnt
------+--------+--------+-------+------------+----------+-----------+-----
    0 | Idle   |   1863 |     0 |            |        5 |     16385 |   2
    1 |        |        |       |            |          |           |
    2 | Active |   1880 |  1865 | Backend    |       12 |        10 |   1
    3 |        |        |       |            |          |           |
    4 | Idle   |   1872 |     0 |            |        5 |        10 |   3
(5 rows)

Slots 1 and 3 are free, which is shown as NULL in all columns.

The database and auth_user columns determine the compatibility of a process and an autonomous session. A new backend (regular session) can reuse only those autonomous sessions that have identical connection attributes.

The last column cnt contains the current usage count for the autonomous session (see autonomous_session_max_uses configuration parameter below).
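Since autonomous_pool_show() is an ordinary set-returning function, its output can be filtered like any table. For example, a sketch that counts free slots (assuming, as in the output above, that free slots show NULL in the state column):

```sql
-- Count free slots in the autonomous session pool
SELECT count(*) AS free_slots
FROM autonomous_pool_show()
WHERE state IS NULL;
```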



Configuration

Table 1. Configuration parameters

 Name                        | Default | GUC context
-----------------------------+---------+-------------
 autonomous_pool_capacity    | 0       | qhbmaster
 autonomous_session_max_uses | 5       | sighup

The autonomous session pool is located in shared memory; the pool size is limited by the autonomous_pool_capacity setting (the default is 0, which means autonomous transactions are unavailable). You must restart QHB after changing this parameter value. The value of this parameter cannot exceed max_worker_processes; otherwise, a fatal error will occur at QHB startup. Since background workers are also used for other purposes, in practice max_worker_processes should exceed autonomous_pool_capacity with some reserve. Accordingly, when increasing the autonomous session pool capacity, you may also need to increase max_worker_processes.

It is strongly recommended to limit the autonomous session lifetime; otherwise, a session will accumulate a lot of RAM due to caching while executing SQL queries in the worker. The autonomous_session_max_uses parameter is provided for this purpose (the default is 5; it can be changed without restarting QHB). When the usage count is exceeded, the autonomous session is terminated, freeing its resources.
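Because autonomous_session_max_uses has the sighup context, it can be changed on a running server. A sketch, assuming QHB retains the standard ALTER SYSTEM and pg_reload_conf() mechanisms:

```sql
-- Raise the usage limit for pooled autonomous sessions without a restart
ALTER SYSTEM SET autonomous_session_max_uses = 20;
SELECT pg_reload_conf();

-- Verify the new value
SHOW autonomous_session_max_uses;
```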



Limitations

  • The number of simultaneously running autonomous sessions is limited by the autonomous_pool_capacity setting.

  • An autonomous transaction can only connect to the same database as the parent session.

  • An autonomous transaction is synchronous: the calling transaction always waits for the autonomous transaction to complete.

  • The autonomous transaction isolation level cannot be overridden; it is inherited from the calling transaction.

  • Calling a procedure (CALL) within an autonomous transaction is prohibited: executing COMMIT or ROLLBACK in a procedure (both are prohibited in a function) could conflict with the active autonomous transaction, and it is generally impossible to guarantee that no conflict will occur.

  • Parallel queries are not supported.

  • Cursors are not supported.

  • Prepared statements are not supported.

  • Dynamic SQL with parameter placeholders is not supported.

  • Errors and notices are forwarded to the parent session and re-raised there.

  • Exceptions are sent to the parent session. This means that exception handler code executes in the parent transaction, even though the exception itself occurred in the autonomous transaction. You cannot handle an exception inside an autonomous transaction.
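To illustrate the last point, a minimal sketch (the function name fail_autonomously is made up for the example): a division-by-zero error raised inside the autonomous block is handled by the caller, in the parent transaction:

```sql
-- An error inside the autonomous block...
CREATE OR REPLACE FUNCTION fail_autonomously() RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN AUTONOMOUS
    PERFORM 1/0;
END;
$$;

-- ...is handled in the parent session
DO $$
BEGIN
    PERFORM fail_autonomously();
EXCEPTION WHEN division_by_zero THEN
    RAISE NOTICE 'handled in the parent transaction';
END;
$$;
```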


Examples

The example results are shown for the configuration with autonomous transactions enabled:

echo "autonomous_pool_capacity = 5" >> $PGDATA/qhb.conf
echo "autonomous_session_max_uses = 100" >> $PGDATA/qhb.conf
qhb_ctl restart

Example of Using Autonomous Transactions to Audit or Debug Actions

Let's say we have a table titles, and operations with its data should be automatically recorded in a special log table log.

CREATE TABLE titles (id INT, title_name VARCHAR);
CREATE TABLE log (moment TIMESTAMPTZ, user_name VARCHAR, action VARCHAR);

To define a trigger function that writes to log in an autonomous transaction:

CREATE OR REPLACE FUNCTION log_insert() RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
    action VARCHAR;
BEGIN AUTONOMOUS
    IF (TG_OP = 'INSERT') THEN
        action := 'Added id=' || NEW.id::text;
    ELSIF (TG_OP = 'UPDATE') THEN
        action := 'Updated id=' || NEW.id::text;
    ELSIF (TG_OP = 'DELETE') THEN
        action := 'Deleted id=' || OLD.id::text;
    END IF;
    INSERT INTO log VALUES (now(), current_user, action);
    RETURN NEW;
END;
$$;

To ensure that this function is executed by a trigger:

CREATE OR REPLACE TRIGGER log_trigger
AFTER INSERT OR UPDATE OR DELETE
ON titles
FOR EACH ROW
EXECUTE PROCEDURE log_insert();

To perform some operations on the titles table and roll back the transaction:

START TRANSACTION;
INSERT INTO titles VALUES (8001, 'First');
INSERT INTO titles VALUES (8002, 'Second');
INSERT INTO titles VALUES (8003, 'Third');
DELETE FROM titles WHERE id = 8001;
ROLLBACK;
SELECT * FROM titles;
  id  | title_name
------+------------
(0 rows)

To make sure that the operation information is preserved despite the rollback:

SELECT * FROM log;
            moment             | user_name |     action
-------------------------------+-----------+-----------------
 2025-03-07 19:22:45.32366+03  | qhb       | Added id=8001
 2025-03-07 19:22:45.326015+03 | qhb       | Added id=8002
 2025-03-07 19:22:45.326327+03 | qhb       | Added id=8003
 2025-03-07 19:22:45.326749+03 | qhb       | Deleted id=8001
(4 rows)

If the trigger function text did not contain the AUTONOMOUS keyword, the log table would also be empty.


Example of Using Autonomous Transactions to Simplify Logic

Let's assume that a large number of independent "elementary" operations are to be performed in a single transaction, and some (rare) ones may fail. By the terms of the problem, the loss of a few elementary operations is not critical, and we do not want to roll back the entire transaction because of them. Performing each operation in its own transaction would be unproductive.

Let's simulate an elementary operation by inserting a data row from the work table into the backup table.

CREATE TABLE work (id INT, name VARCHAR);
INSERT INTO work VALUES (1, 'First');
INSERT INTO work VALUES (2, 'Second');
INSERT INTO work VALUES (3, 'Third');
INSERT INTO work VALUES (4, 'Fourth');
INSERT INTO work VALUES (5, 'Fifth');
INSERT INTO work VALUES (6, 'Sixth');
INSERT INTO work VALUES (7, 'Seventh');
INSERT INTO work VALUES (8, 'Eighth');
INSERT INTO work VALUES (9, 'Ninth');
INSERT INTO work VALUES (10, 'Tenth');
CREATE TABLE backup (id INT, name VARCHAR);

Let a failure occur for every odd identifier greater than 1.

CREATE OR REPLACE FUNCTION process_row(id INT, name VARCHAR) RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN AUTONOMOUS
    IF id = 1 OR id % 2 = 0 THEN
        INSERT INTO backup VALUES (id, name);
    ELSE
        PERFORM 1/0;  -- simulate a failure
    END IF;
END;
$$;

We will ignore exceptions when executing the main transaction.

CREATE OR REPLACE FUNCTION do_backup() RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id, name FROM work
    LOOP
        BEGIN
            PERFORM process_row(r.id, r.name);
        EXCEPTION WHEN OTHERS THEN
            NULL;  -- ignore the failure and continue
        END;
    END LOOP;
END;
$$;

We start the main transaction.

START TRANSACTION;
SELECT do_backup();
COMMIT;

And we make sure that the “non-failing” data has been copied.

SELECT * FROM backup;
 id |  name  
----+--------
  1 | First
  2 | Second
  4 | Fourth
  6 | Sixth
  8 | Eighth
 10 | Tenth
(6 rows)

Here, the use of autonomous transactions helped to simplify the application program code.

If you remove AUTONOMOUS from the process_row text, then the backup table will be empty after executing do_backup.