I have an UPDATE query that updates a single int4 column in a table with around 220,000 rows. The subquery that pulls the new values for this column executes in approximately 10 seconds on its own. When I run the full UPDATE:
UPDATE mytable
SET mycol = sq.mycol
FROM (
    ...
) sq
WHERE mytable.id = sq.id;
It takes somewhere in the area of 5 minutes to execute. I think the locking kicks in around the time Postgres gets to the COMMIT, and that is when the rest of my SQL statements start backing up.
This table is pretty heavily hit on a live website - so things get backed up pretty quickly - and if it gets too bad, I have no choice but to restart the database.
I am running this statement from within a plpgsql function, and I have even tried looping through the rows and running smaller UPDATE statements:
FOR row IN EXECUTE SQL1 LOOP
This seems to run better, in that performance degrades slowly, whereas with the first approach everything runs fairly quickly and then at some point it all starts backing up. I attempted to explicitly COMMIT smaller batches of records, but plpgsql doesn't allow for this.
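For reference, the looping version looks roughly like this (the function and variable names here are just placeholders, and the '...' stands in for the real subquery):

CREATE OR REPLACE FUNCTION batch_update_mycol() RETURNS void AS $$
DECLARE
    rec record;
    sql1 text := 'SELECT id, mycol FROM ...';  -- the ~10-second subquery
BEGIN
    -- update one row at a time instead of all 220,000 in a single statement
    FOR rec IN EXECUTE sql1 LOOP
        UPDATE mytable
           SET mycol = rec.mycol
         WHERE id = rec.id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

The whole function still runs inside a single transaction, which is why I was looking for a way to COMMIT as it goes.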
A few questions, any of which might lead me to a good solution:
1) Is there just an overall better way to approach this, in plpgsql?
2) Is there some way to tell the function's execution to pause for a number of seconds, to allow some of the backed-up queries to complete? (There is a sketch of what I had in mind after this list.)
3) Is there a way to use a READ UNCOMMITTED isolation level or disable locking for this execution? I don't care about any issues associated with dirty/phantom reads while dealing with this table.
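For question 2, this fragment is roughly what I had in mind, assuming pg_sleep() is even the right tool and that pausing actually lets the backed-up queries through (n_done is a hypothetical counter I would maintain in the loop):

        -- every 1000 rows, pause briefly so other queries can run
        IF n_done % 1000 = 0 THEN
            PERFORM pg_sleep(2);  -- sleep for 2 seconds inside the loop
        END IF;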
Thanks in advance!