SELECT FOR UPDATE postgres

I am working on a multi-threaded application that uses a table to keep track of what each thread is doing. This is something I have done with MySQL quite a bit, but it seems PostgreSQL behaves differently.

Here is my select:
SELECT
  m.membid AS xmembid,
  m.demoids AS xdemoids,
  m.areaids AS xareaids,
  ma.hzip AS xhzip,
  c.ddaccount AS xddaccount,
  z.latitude AS lat,
  z.longitude AS lon,
  mb.partid
FROM members m
LEFT JOIN creditcards c ON m.membid = c.membid
LEFT JOIN membaddr ma ON m.membid = ma.membid
LEFT JOIN zipcodes z ON ma.hzip = z.zipcode
LEFT JOIN membpart mb ON mb.membid = m.membid
LEFT JOIN targeted_offers_scores_1367949828 t ON m.membid = t.membid
WHERE m.cobrid = 116
  AND m.membtype != 'anon'
  AND m.active = 't'
  AND t.processed = 'f'
ORDER BY m.membid
LIMIT 10



Based on the results of this query I want to update targeted_offers_scores_1367949828 and set processed to true. However, I want to lock those rows in targeted_offers_scores_1367949828 so that other threads can grab different rows.
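In PostgreSQL the row locks come from adding FOR UPDATE inside a transaction, and since FOR UPDATE cannot be applied to the nullable side of an outer join, the most direct route is to lock the rows in the scores table itself. A minimal sketch using the table and column names from the query above (as the discussion below shows, this alone does not spread work across threads, because other workers simply block on the same locked rows):

BEGIN;

-- Lock a batch of unprocessed rows; other transactions that try to lock
-- the same rows will wait until this transaction commits or rolls back.
SELECT membid
FROM targeted_offers_scores_1367949828
WHERE processed = 'f'
ORDER BY membid
LIMIT 10
FOR UPDATE;

-- ... join out to members/creditcards/etc. and do the work, then:
UPDATE targeted_offers_scores_1367949828
SET processed = 't'
WHERE membid = $1;   -- once per row handled by this worker

COMMIT;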
skione asked:

earth man2 commented:
http://www.postgresql.org/docs/9.1/static/explicit-locking.html#ADVISORY-LOCKS

SELECT pg_advisory_lock(id) FROM foo WHERE id > 12345 LIMIT 100; -- danger!
SELECT pg_advisory_lock(q.id) FROM
(
  SELECT id FROM foo WHERE id > 12345 LIMIT 100
) q; -- ok
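
Applied to the table from the question, that docs pattern might look like the sketch below (an assumption about the schema, not tested): each worker takes an advisory lock keyed on membid, skips rows another worker has already claimed, and releases its locks with pg_advisory_unlock(membid) or pg_advisory_unlock_all() once the batch is marked processed.

-- Claim up to 10 unprocessed rows for this worker. pg_try_advisory_lock()
-- returns false instead of blocking when another session already holds the
-- lock for that membid, so contested rows are simply filtered out.
SELECT q.membid
FROM (
  SELECT membid
  FROM targeted_offers_scores_1367949828
  WHERE processed = 'f'
  ORDER BY membid
  LIMIT 10
) q
WHERE pg_try_advisory_lock(q.membid);

Because every worker sees the same first ten candidates, a colliding worker can come back with fewer than ten rows per pass, so the claiming loop should simply run again until the queue drains.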
 
lcohan (Database Analyst) commented:
I suggest you wrap your code in an explicit BEGIN/COMMIT transaction so all the code either succeeds or fails as an "all-or-nothing" operation.


http://www.postgresql.org/docs/8.3/static/tutorial-transactions.html
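
In PostgreSQL that wrapper is just BEGIN ... COMMIT, with ROLLBACK on failure. A minimal sketch of the all-or-nothing behaviour, using made-up membid values:

BEGIN;

UPDATE targeted_offers_scores_1367949828 SET processed = 't' WHERE membid = 101;
UPDATE targeted_offers_scores_1367949828 SET processed = 't' WHERE membid = 102;

-- If anything in the batch fails, the application issues ROLLBACK instead
-- of COMMIT and neither row stays marked as processed.
COMMIT;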
 
skione (Author) commented:
I understand transactions. I have since found out that PSQL doesn't handle LIMIT the way MySQL does, so I need a different approach; pointing me to documentation I have already read doesn't really help.

 
Ray Paseur commented:
According to this, it looks like the PostgreSQL LIMIT clause works just like MySQL's.
http://www.postgresql.org/docs/8.1/static/queries-limit.html
 
skione (Author) commented:
It says so, but I think that when LIMIT is combined with FOR UPDATE/SHARE it doesn't work. At least not in my experiments.
 
Ray Paseur commented:
PostgreSQL row-level locking appears to be an inherent component of a transaction.
http://www.postgresql.org/docs/9.1/static/explicit-locking.html
 
skione (Author) commented:
Perhaps I didn't explain my goals. I am trying to do something I have done with MySQL many times: build a queue-like system. I grab a SELECTion of rows, process them, and then update them, marking them as complete. The select grabs a batch of rows (let's say 1000 at a time) so I can multi-thread the process. I will eventually need to process 10 million rows in under 24 hours, and I can process 2 rows per second, so I need many threads to achieve that goal.

The way PSQL handles LIMIT is different from MySQL. Normally I would just do a SELECT ... LIMIT 100 FOR UPDATE, but I can't, since PSQL applies the LIMIT from row 0 (even with an offset) when you combine it with FOR UPDATE/SHARE. So how can I get this done?
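
A workaround that is often used for this is to keep the LIMIT in a subquery and apply FOR UPDATE only in the outer query, so the batch is chosen first and then locked. A sketch against the question's table (names taken from the original query):

-- The inner query chooses the batch; the outer query locks exactly those rows.
SELECT t.membid
FROM targeted_offers_scores_1367949828 t
WHERE t.membid IN (
    SELECT membid
    FROM targeted_offers_scores_1367949828
    WHERE processed = 'f'
    ORDER BY membid
    LIMIT 1000
)
FOR UPDATE;

By itself this still does not keep workers apart: a second worker that picks the same candidate ids simply blocks on the first worker's locks instead of moving on to the next batch. Keeping workers on disjoint rows needs either the advisory locks shown earlier or, on PostgreSQL 9.5 and later, FOR UPDATE SKIP LOCKED.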
 
Ray Paseur commented:
Ahh, now we are maybe starting to peel the onion.  Please show us your test data set, including the SSCCE.  Perhaps we can use your test data to simulate the issues and help you find a way to the answer.
 
skione (Author) commented:
So I actually have this working without the need to lock rows but I still want to know how to get this done.

Ray_Paseur:
So in MySQL I would do something like this:

START TRANSACTION;

SELECT m.id
     , c.id AS cid
     , m.sid AS sid
     , m.type AS type
     , m.title AS title
     , m.email AS email
     , m.phone AS phone
     , m.message AS message
     , s.server AS server
     , s.username AS username
     , s.password AS password
     , c.category_name AS category_name
     , m.mid AS mid
FROM bm_message_queue m
LEFT JOIN bm_categories c ON m.cid = c.id
LEFT JOIN bm_servers s ON c.server = s.id
WHERE m.stamp <= '{current time}'   -- substituted by the application
  AND m.type = 'sms'
  AND (mid = 0 OR mid IS NULL)
  AND sent = 0
LIMIT 100 FOR UPDATE;

-- execute, then loop through the result set; for each row:
UPDATE bm_message_queue SET sent = 1 WHERE id = {current row ID};

COMMIT;



So this pseudocode is pulled directly from a working example; there is some error checking and rollback handling, but this is the heart of my goal. I select 100 rows and lock them, loop through them, presumably doing something (like sending a message), then mark each one as sent. Nothing gets done until I commit the transaction, but since the rows are locked, no other thread can touch them.
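
A compact PostgreSQL counterpart to that claim-and-mark loop is UPDATE ... RETURNING, which marks the batch and hands the rows back in a single statement. A sketch against the question's table (if the per-row work fails, ROLLBACK un-marks the whole batch):

BEGIN;

-- Claim a batch: mark the rows as processed and return them to this worker
-- in the same statement.
UPDATE targeted_offers_scores_1367949828 t
SET processed = 't'
WHERE t.membid IN (
    SELECT membid
    FROM targeted_offers_scores_1367949828
    WHERE processed = 'f'
    ORDER BY membid
    LIMIT 100
)
RETURNING t.membid;

-- ... send the messages / do the per-row work in the application ...

COMMIT;   -- or ROLLBACK to release the batch if anything failed

As with the other sketches, concurrent workers can still pick overlapping candidate sets, so with many threads this is best combined with the advisory-lock filter shown earlier (or with SKIP LOCKED on newer PostgreSQL versions).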