Postgres deadlocks... now what?
Trying to debug a deadlock issue? Make sure you positively know everything mentioned on this page, otherwise you will painfully re-discover it yourself.
Not debugging deadlocks? It will probably happen to you at some point, so at least skim this article.
A deadlock is a situation in which a DBMS cannot complete all of the transactions running concurrently at a given point in time. It must, and will, automatically kill some of them in order to complete the others.
The basic case is two transactions waiting for each other to complete. Since both are waiting, neither can make any progress, so the DBMS can either kill one of them and let the other finish, or wait forever. Since the latter option is less desirable, the DBMS will happily kill one as soon as the situation is detected (by the way: Postgres chooses which one, and you can't know the choice in advance).
There are theoretical and practical reasons why a DBMS can't "magically" guarantee deadlock-free transactions, so this is something that just happens from time to time and we have to live with it for the foreseeable future.
DBMSs try their best to let multiple transactions work concurrently without interfering with one another, but sometimes this is simply not possible. To avoid data corruption, Postgres conceptually maintains one lock for every table and every row in the database, and grants it to only one transaction at a time, typically the first one that starts working on the object. Any transactions that come afterwards have to wait until the former one finishes and its locks are released. Since waiting transactions are simply suspended, no data corruption can happen, which is good!
But what if transaction T1 owns a lock L1 that T2 wants, while T2 happens to own a lock L2 that T1 is interested in? Then T1 waits for T2 while T2 waits for T1, and both will wait forever. You got it... it's a deadlock!
Since deadlocks can only happen when locks are granted, one strategy to mitigate them is to minimize lock use, and Postgres of course attempts to do that. This is also good for performance, since fewer locks mean less time wasted waiting.
Postgres implements a mechanism called MVCC that avoids locking in a number of cases; it is similar to copy-on-write in filesystems. Every time you `UPDATE` a row, Postgres does not overwrite it; instead it creates a new copy and keeps the old version around. The same happens when you `DELETE` a row (more details on MVCC are available here).
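As a minimal illustration of this copy-on-write behavior (assuming a scratch table you can experiment with in `psql`), the hidden `ctid` system column shows that an `UPDATE` writes a brand new row version:

```sql
CREATE TABLE t(i INT);
INSERT INTO t VALUES (1);
SELECT ctid, i FROM t;  -- e.g. (0,1): physical location of the current row version
UPDATE t SET i = 2;
SELECT ctid, i FROM t;  -- e.g. (0,2): a new version was written; the old one
                        -- lingers until VACUUM reclaims it
```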
This has a lot of design consequences, but mainly:
- you need to run the `VACUUM` command from time to time, to get rid of very old rows no longer in use. `AUTOVACUUM` is enabled by default, so you typically don't need to worry;
- read statements (`SELECT`) never block other read statements (`SELECT`);
- read statements (`SELECT`) never block write statements (`INSERT`, `UPDATE` or `DELETE`);
- write statements (`INSERT`, `UPDATE` or `DELETE`) never block read statements (`SELECT`).
So locks are only relevant, and deadlocks can only happen, when two transactions run write statements (`INSERT`, `UPDATE` or `DELETE`) that conflict with each other.
Locks can also be explicitly requested by the application (for example with `LOCK TABLE` or `SELECT ... FOR UPDATE`); in that case, of course, a deadlock can also happen.
Despite being possible only in write-write situations as explained above, deadlocks happen in a surprising number of creative ways.
Simple case example:
```sql
CREATE TABLE a(i INT PRIMARY KEY);
CREATE TABLE b(i INT PRIMARY KEY);
```
Transaction T1 does:
```sql
BEGIN;
INSERT INTO a VALUES(1);
```
Then transaction T2 does:
```sql
BEGIN;
INSERT INTO b VALUES(1);
INSERT INTO a VALUES(1);
-- waits for T1 here, specifically for row 1 in table a
```
Then transaction T1 does:
```sql
INSERT INTO b VALUES(1);
-- deadlock here!
```
Some other possibilities:
- `UPDATE`s instead of `INSERT`s in the above examples;
- `SELECT ... FOR UPDATE` instead of `UPDATE` (more info here; see the sketch after this list);
- chains of three, four or more transactions, each waiting for the following one;
- `SELECT function()`, if `function` writes to any table, or calls any other function that does...;
- `TRIGGER`s, just like functions;
- `UPDATE`s to rows that refer to other rows via `FOREIGN KEY`. More about that later.
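As an example of the `SELECT ... FOR UPDATE` case, here is a minimal sketch assuming table `a` from above already contains rows 1 and 2:

```sql
-- Transaction T1 does:
BEGIN;
SELECT * FROM a WHERE i = 1 FOR UPDATE;

-- Then transaction T2 does:
BEGIN;
SELECT * FROM a WHERE i = 2 FOR UPDATE;
SELECT * FROM a WHERE i = 1 FOR UPDATE;
-- waits for T1 here

-- Then transaction T1 does:
SELECT * FROM a WHERE i = 2 FOR UPDATE;
-- deadlock here!
```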
When a deadlock actually happens, a suggested course of action is:
- look at Postgres logs and try to get a rough idea of what is going on;
- if at all possible, try to trigger the problem in a reproducible way. Deadlocks are time and load dependent, so this is impossible in most cases, yet every effort should be made to have the problem happening again predictably;
- if at all possible, get SSH access to the affected server;
- determine what was going on at the exact time the deadlock happened. Was the WebUI in use? Which page(s) were being loaded or submitted? Were there any Taskomatic jobs running? Did any client use XMLRPC APIs for whatever reason? Check all logs as thoroughly as possible to get this information;
- based on what was going on, determine which transactions were in progress and which queries they were running. This can be daunting, especially because Hibernate will typically generate a lot of queries behind the scenes. Upstream recommends having two people in a room with a whiteboard for a long time, going through each line of code manually and coming up with a list of updated tables. Having a clear picture of what was going on is very hard, but needed!
- ideally, reproduce the issue by manually issuing queries in different pgAdmin III windows or `psql` terminals. It might be hard to catch the exact ordering and timing by hand, so:
  - try to reproduce the deadlock programmatically. In Java, for example, you can write one or more unit tests that execute the same query set, run them repeatedly for some time, see if you can get them to deadlock, and check that the stack trace corresponds to the original error;
- if you identified the involved tables and queries and can reproduce the exact issue reliably, congratulations! You will probably already know how to change the ordering of queries so that deadlocks don't happen anymore. If you can't change the ordering, use `SELECT ... FOR UPDATE` to take locks up front on whatever rows you need to change later (see the sketch after this list).
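A minimal sketch of that last technique, assuming the two tables from the earlier example and that both rows already exist: every transaction takes its row locks first, always in the same agreed-upon order, so no cycle can form:

```sql
BEGIN;
-- take all needed row locks first, always in the same order (a before b):
SELECT i FROM a WHERE i = 1 FOR UPDATE;
SELECT i FROM b WHERE i = 1 FOR UPDATE;
-- the actual writes can now happen in any order without deadlocking:
UPDATE a SET i = i WHERE i = 1;  -- placeholder write
UPDATE b SET i = i WHERE i = 1;  -- placeholder write
COMMIT;
```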
Typical deadlock error output, which can be found in `/var/lib/pgsql/data/pg_log/*.log`, looks like this:
```
ERROR: deadlock detected
DETAIL: Process 21535 waits for AccessExclusiveLock on relation 342640 of database 41454; blocked by process 21506.
Process 21506 waits for AccessExclusiveLock on relation 342637 of database 41454; blocked by process 21535.
HINT: See server log for query details.
CONTEXT: SQL statement "UPDATE ..."
```
Notable points:
- `AccessExclusiveLock` is the kind of lock involved, in this case a rather strict one. More on lock types later;
- "relation 342640" and "relation 342637" refer to internal Postgres Object Identifiers. You can retrieve legible table names with the following query (the reverse lookup is sketched after this list):

```
SELECT 342640::regclass, 342637::regclass;
 regclass | regclass
----------+----------
 b        | a
(1 row)
```
- the SQL statement reported after `CONTEXT` is the one that has been interrupted and killed by Postgres. If there is more than one, they represent the full "stack trace" of functions that were running at the time of the kill.
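Incidentally, the reverse lookup also works, should you need the OID of a known table (here table `a` from the earlier example):

```sql
SELECT 'a'::regclass::oid;  -- returns the OID that the log refers to
```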
In this case the situation is pretty clear: a transaction in process 21535 wants to acquire an exclusive lock on table `b`, which is locked by a second transaction in process 21506, which in turn wants to lock `a`.
That's not the most common form of deadlock output, though. In most cases the output is more cryptic:
```
ERROR: deadlock detected
DETAIL: Process 5455 waits for ShareLock on transaction 44101; blocked by process 27458.
Process 27458 waits for ShareLock on transaction 43490; blocked by process 5455.
Process 5455: select * from rhn_channel.subscribe_server($1, $2, 1, $3) as result
Process 27458: select * from rhn_channel.update_needed_cache($1) as result
HINT: See server log for query details.
CONTEXT: SQL statement "UPDATE ..."
```
Notable points:
- in `DETAIL` you can find the top-level queries that originated this deadlock, while `CONTEXT` shows the stack trace (omitted here for brevity);
- locks are of a different type with respect to the case above, `ShareLock`. More on that later;
- the lock is not on a table, and not even on a row, but on a transaction.
What's the meaning of a transaction lock?
On a logical level, Postgres has two kinds of locks, table locks and row locks. Row locks obviously only apply to specific rows, while table locks lock entire tables.
Implementation-wise, it turns out that row locks are implemented on top of transaction locks. This is actually an optimization, as transaction locks are less expensive than real row locks.
Here's how it works:
- the very first thing every transaction does when it is created is to take an exclusive lock on itself, which is released upon completion;
- every table has some extra system columns used to implement MVCC; among those there's `xmax`;
- `xmax` is an integer, which can be either 0 or a transaction ID;
- if it is 0, then no transaction has `UPDATE`d or `DELETE`d that row so far; in other words, it has been freshly `INSERT`ed;
- if it is not 0, then it is the ID of the transaction which last `UPDATE`d that row;
- whenever a transaction wants to `UPDATE` or `DELETE` a row, it will check the row's `xmax` first:
  - if it is 0, it can proceed by first updating the `xmax` value with its own ID and then the rest of the row;
  - if it is the ID of a transaction that has finished, it can proceed by overwriting `xmax`;
  - if it is the ID of a transaction that has not finished yet, then the row is locked and it has to wait. The wait is implemented by requesting a lock on the transaction whose ID is `xmax`.
Since all transactions hold exclusive locks on themselves, when the original transaction finishes it releases that lock and the row becomes editable again.
Note that this "trick" only works for the first transaction that waits for a row: if two or more are waiting, those beyond the first get a regular row lock.
A fuller explanation is available here.
Now we can explain the previous example:
```
DETAIL: Process 5455 waits for ShareLock on transaction 44101; blocked by process 27458.
Process 27458 waits for ShareLock on transaction 43490; blocked by process 5455.
```
This really means that transaction 43490 wants to edit some row previously edited by transaction 44101, while transaction 44101 wants to edit some row previously edited by transaction 43490. Since only two transactions are involved, no "real" row locks are requested, only the less expensive transaction locks.
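If you want to see such transaction locks with your own eyes, a sketch like the following (reusing table `a` from above, across three `psql` sessions) should show them in `pg_locks`:

```sql
-- Session 1:
BEGIN;
UPDATE a SET i = i WHERE i = 1;  -- writes this transaction's ID into the row's xmax

-- Session 2 (this statement blocks):
BEGIN;
UPDATE a SET i = i WHERE i = 1;

-- Session 3, meanwhile:
SELECT pid, locktype, transactionid, mode, granted
FROM pg_locks
WHERE locktype = 'transactionid';
-- session 1 holds an ExclusiveLock on its own transaction ID (granted = t),
-- while session 2 waits for a ShareLock on that same ID (granted = f)
```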
How can I determine what rows are causing this problem?
In general it is difficult, as Postgres does not track that (see above). If you know which tables and queries are involved, you can try reproducing the situation manually and take a look at `xmax` values, for example:
```
SELECT id, name, xmax FROM rhnServer;
     id     |              name               |  xmax
------------+---------------------------------+---------
 1000010000 | sles11swtest.lab.dus.novell.com | 2962002
(1 row)
```
You can retrieve the current transaction's ID by running this query:
```
susemanager=# SELECT txid_current();
 txid_current
--------------
      2962002
(1 row)
```
In this case, you can see that `rhnServer`'s only row, `1000010000`, was updated by the current transaction.
If you have to figure out what locks were requested on some occasion, your best friend is pgAdmin III's `Tools` -> `Server Status` window (full explanation with screenshot here).
Important things to notice:
- the Activity tab lists active processes/transactions;
- the Locks table lists all currently active locks.
Note that this view does not show table names, just OIDs - you can use the following query to get roughly the same information with proper table names:
```sql
SELECT virtualtransaction, relation::regclass, locktype, page, tuple, mode, granted, transactionid
FROM pg_locks
ORDER BY granted, virtualtransaction;
```
If you do this, you will probably be surprised by the lock types. Lock names are actually very confusing - for example `RowExclusive` is actually the name of a table lock type! You are thus strongly encouraged to read the lock types page of the Postgres manual thoroughly before concluding that Postgres is doing something stupid. It usually isn't.
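To also see which queries are holding or waiting for those locks, you can join against `pg_stat_activity` (column names here assume Postgres 9.2 or later):

```sql
SELECT a.pid, l.locktype, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
ORDER BY l.granted, a.pid;
```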
If you can reproduce a deadlock issue but still cannot comprehend how it works, you might be interested in having Postgres log all queries and all of their parameters. Needless to say, this log level is a bit extreme and looking at the results might be overwhelming; still, it can be useful.
To activate it, edit `/var/lib/pgsql/data/postgresql.conf` and add the following line:

```
log_statement = 'all'
```
You might also be interested in the Process ID that originated each query, to make sense of the flows. In that case, also change the following line:

```
log_line_prefix = '%t %d %u %p '
```
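Both settings can be picked up without a full server restart; assuming a superuser connection, a reload should be enough:

```sql
SELECT pg_reload_conf();  -- re-reads postgresql.conf; both settings are reloadable
```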
This is an example of the output:

```
2015-02-25 16:44:51 CET susemanager susemanager 31444 LOG: execute S_1: BEGIN
2015-02-25 16:44:51 CET susemanager susemanager 31444 LOG: execute <unnamed>: select * from logging.clear_log_id() as result
[...]
2015-02-25 16:44:52 CET susemanager susemanager 31445 LOG: execute S_1: BEGIN
2015-02-25 16:44:52 CET susemanager susemanager 31445 LOG: execute <unnamed>: select * from logging.clear_log_id() as result
```
Note that under normal conditions this output could well be totally unreadable. You might want to stop some services like Taskomatic or, ideally, have some unit test code that can trigger the condition and stop everything else, for best clarity.