
Locking issues under transaction on Postgres


I recently got an “Out of shared memory – You might need to increase max_locks_per_transaction” error.

This is a long-running process in which I import a bunch of CSV files after some preprocessing.
I loop through them, open a transaction for each file, process it, and when I’m done with the file I close the transaction and move on to the next one.

These files are not big, the biggest clocking in at around 12K lines.

The processing I do, as far as PG is concerned, amounts to doing a few SELECTs (a couple per line, no joins or anything) and then an UPDATE of one existing row (at most one per line).
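
Schematically, the SQL traffic for each file boils down to something like this (just a sketch; the SELECT is only representative, and the table and column names are the ones from the UPDATE shown further down):

BEGIN;
-- for each line in the CSV file:
SELECT foo FROM t WHERE bar = 'hey' AND baz = 'ho';  -- a couple of lookups like this per line
UPDATE t SET foo = COALESCE(foo, 0) + 23.4 WHERE bar = 'hey' AND baz = 'ho';  -- at most one per line
COMMIT;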

The problem is that pg_locks fills up with entries that have no relation, are of locktype transactionid, and have a mode of ExclusiveLock.
I have seen as many as 20K rows like that in pg_locks and, besides the ones that refer, as expected, to the two tables I’m using, the overwhelming majority look like this (full dump in the pastebin link below):

"locktype" "database" "relation" "page" "tuple" "virtualxid" "transactionid" "classid" "objid" "objsubid" "virtualtransaction" "pid" "mode" "granted" "fastpath"

"virtualxid" "" "" "" "" "11/18291" "" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "t"
"transactionid" "" "" "" "" "" "61840165" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61843843" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61833173" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61835511" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61846000" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61838308" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61832936" "" "" "" "11/18291" "6308" "ExclusiveLock" "t" "f"

http://pastebin.com/V4K3AvHD
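
For what it’s worth, a breakdown like the above can be pulled with a simple aggregate over pg_locks (6308 is the importing backend’s pid from the rows above):

SELECT locktype, mode, count(*)
FROM pg_locks
WHERE pid = 6308
GROUP BY locktype, mode
ORDER BY count(*) DESC;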

Basically, none of those records have either a database or a relation, only a transaction id.

The only different row is the first one, which I take to be the transaction holding a lock on its own virtual transaction ID.

All the locks are being acquired by the same connection, and apparently only when executing the UPDATE, which is really nothing more complicated than

UPDATE t SET foo = COALESCE(foo, 0) + 23.4 WHERE bar = 'hey' AND baz = 'ho'

and can affect at most one row.

When I’m done processing the file and commit the transaction, all those thousands of entries just disappear from pg_locks, and the count goes back to a more normal 15 to 30.
It’s as if each UPDATE somehow acquired its own lock, making the pg_locks row count grow by roughly one entry every second or two.
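
Watching this from a second session while the import runs is as simple as counting the transactionid entries held by the importing backend (pid 6308 above); the number just keeps climbing until the commit:

SELECT count(*)
FROM pg_locks
WHERE pid = 6308 AND locktype = 'transactionid';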

I assume I must be doing something wrong, but I can’t figure out what.

As a temporary workaround I did increase max_locks_per_transaction, but I would really like to actually solve the problem.
I don’t think increasing max_locks_per_transaction should be necessary for a use case as simple as this one.
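
For reference, that kind of bump lives in postgresql.conf and only takes effect after a server restart; the value below is just an example, not a recommendation:

max_locks_per_transaction = 256    # default is 64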

Also, note that nobody else is accessing the two tables I’m using. Nobody is even accessing any table in the same schema.

I’m on Windows Web Server 2008 R2.

PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit

Connecting with Npgsql 2.2.5

Can anyone shed some light?

I thought about possible indexing issues, but the table in question is not huge by any means (fewer than 50K rows), and doing a SELECT on it with the same criteria used by the UPDATE is very fast.
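
To rule out a bad plan I can also run the same predicate through EXPLAIN ANALYZE, e.g.:

EXPLAIN ANALYZE
SELECT * FROM t WHERE bar = 'hey' AND baz = 'ho';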

Thanks in advance.

