• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 527

Deleting records from 2 tables (VFP)

I am running a capacity planning database. A routine copies all relevant records from Oracle into a table called capacity; it runs several times a day, and the capacity figure rises as orders are added in Oracle. I can report that at, say, 8am there were 5000 orders and at 5pm there were 5500, but I have been challenged to provide the details.
My solution is to create a copy of the capacity table (called lastcap) before the update, and another after it (called thiscap). What I need to do is 'remove' those records that are in both tables (or read only those that are in the later table). Does anybody know the best way of achieving this?
Richard Teasdale
1 Solution
Each table record should have its own unique ID, e.g. OrderID. That is the only requirement for the following query:

SELECT * FROM ThisCap
WHERE OrderID NOT IN (SELECT OrderID FROM LastCap)

5000 orders isn't many, so this will be fast even without indexes.
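A minimal sketch of this snapshot diff, using Python's sqlite3 as a stand-in for VFP (the table and column names thiscap, lastcap, and OrderID follow the question; the data is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two snapshots of the capacity table: before and after the Oracle copy run.
cur.execute("CREATE TABLE lastcap (OrderID INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE thiscap (OrderID INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO lastcap VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO thiscap VALUES (?)", [(1,), (2,), (3,), (4,), (5,)])

# Records present in the later snapshot but not in the earlier one.
new_orders = cur.execute(
    "SELECT OrderID FROM thiscap "
    "WHERE OrderID NOT IN (SELECT OrderID FROM lastcap) "
    "ORDER BY OrderID"
).fetchall()
print(new_orders)  # -> [(4,), (5,)]
```

The same NOT IN subquery runs unchanged in VFP's SQL dialect against the two DBF copies.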
CaptainCyril, Founder, Software Engineer, Data Scientist, commented:
SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2)
If you don't have a unique ID then you have to use a timestamp, but you must make sure the timestamp values can do the job: for example, two orders with the same timestamp where only one of them was exported the first time will produce an incorrect result.

SELECT * FROM Capacity
WHERE TimeStamp > (SELECT MAX(TimeStamp) FROM LastCap)
Richard Teasdale, Financial Controller (Author), commented:
Thanks a lot for the very prompt and effective answers! I asked a question about Python 2 months ago and am still waiting! Thanks to CaptainCyril, too.
Olaf Doschke, Software Developer, commented:
What strikes me is that you compare lastcap and thiscap, two copies of the capacity table taken before and after the update.

But there is an update routine (in Oracle?) which merges the Oracle data into capacity.dbf, so at that stage it is already known which records were already in capacity.dbf and which had to be added.

So I'd say your problem is already solved at that point, and the comparison redoes that work. Even if it's not much work now, at a growth rate of around 500 records per 3 hours you will sooner or later see a slowdown; just making the table copies before and after each merge will take longer and longer.

What I would question is whether the only difference is added records, because then you'd only need to store the record count before and after the data merge to know how many records are new in the DBF. And since new records are always appended at the end of a DBF file, you'd also know the new records run from oldreccount+1 to EOF.

And that doesn't need two DBFs to compare.
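The record-count idea above can be sketched in a few lines; plain Python lists stand in for the DBF here, and the variable names (capacity, old_reccount) are illustrative, not from the original:

```python
# Sketch of the record-count approach: if new rows are always appended,
# the rows added by a merge are simply everything past the old count.
capacity = [101, 102, 103]        # stand-in for capacity.dbf before the merge
old_reccount = len(capacity)      # like RECCOUNT() taken before merging

capacity.extend([104, 105])       # the merge appends new orders

new_orders = capacity[old_reccount:]  # records oldreccount+1 .. EOF in VFP terms
print(new_orders)  # -> [104, 105]
```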

Bye, Olaf.

PS: If this wasn't Oracle but T-SQL, with the merging done inside SQL Server, I'd suggest merging the data and outputting the changed or newly added orders in one step via the OUTPUT clause of a MERGE statement (an SQL command VFP does not have, though, and I don't know about Oracle).

And there are even better options, such as logging changes via triggers and then exporting that changelog table. The transaction log is also a source of such change data, even without any triggers; again, I only know that for SQL Server, where you can make use of DBCC LOG.
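The trigger-based changelog idea can be illustrated with SQLite (whose trigger syntax differs from both VFP's and SQL Server's; the table names capacity and changelog are invented for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("CREATE TABLE capacity (OrderID INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE changelog (OrderID INTEGER, action TEXT)")

# Log every insert into a changelog table; exporting the changelog then
# replaces the before/after snapshot comparison entirely.
cur.execute("""
    CREATE TRIGGER log_insert AFTER INSERT ON capacity
    BEGIN
        INSERT INTO changelog VALUES (NEW.OrderID, 'insert');
    END
""")

cur.executemany("INSERT INTO capacity VALUES (?)", [(1,), (2,)])
changes = cur.execute("SELECT OrderID, action FROM changelog").fetchall()
print(changes)  # -> [(1, 'insert'), (2, 'insert')]
```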