
Solved

Not able to Append Records to Table

Posted on 2011-04-24
Medium Priority
516 Views
Last Modified: 2012-05-11
Issue: A table in our system stopped accepting new records. Every attempt to append threw a duplicate-key error even though the records were clearly not duplicates.

Background:
The table has been in production for about two years. At any given time it contains 15-17 million records. It has an AutoNumber field [ID] as the primary key. There is another unique index defined on the table, a combination of two other fields. None of the records being added (and flagged as duplicates) are duplicates based on the unique two-field index.

The AutoNumber counter in the [ID] field is up to 72 million, and the total size of the file is close to, but under, 1GB.

According to the documentation, an AutoNumber is unique to 4 billion and the maximum MDB size is 2GB. The table/MDB (there is only one table in the MDB) is far below either of those limits.

A compact/repair is run on the MDB as part of routine maintenance, at intervals no longer than two weeks.

I tried appending the records through code and with an append query. In all cases, every record was rejected with a duplicate-key violation.

I solved the problem as I describe below, but I want to understand what happened so I can monitor for it in the future.

My question is: why was each and every record being appended to the table flagged as a duplicate?


Here is how I addressed the situation.  
1. I copied the original MDB (CIOrders.MDB) as another name (CIOrders_Locked.MDB)
2. I deleted all of the records from the original table (tblOrders) in the original MDB (CIOrders.MDB).
3. I ran a compact repair on the original MDB (CIOrders)
4. Linked the table (tblOrders) in MDB (CIOrders_Locked) into MDB (CIOrders).
5. Wrote an append query appending all of the records in the linked table into (tblOrders) in the original MDB (CIOrders); a sketch of that query follows below.
6. Executed the Query
7. Problem was solved.

All of the same records are in tblOrders as were there originally. After deleting all of the records in Step 2 and compacting/repairing the MDB in Step 3, the AutoNumber counter started over when I appended the records in Steps 5 and 6.
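
For reference, a minimal sketch of the Step 5 append query, assuming the linked table is named tblOrders_Linked and illustrative column names (Test, OrderID, OrderDate), not the actual schema. The [ID] AutoNumber column is deliberately left out of the column list, which is what lets the counter start over:

' Sketch only -- tblOrders_Linked and the column list are assumptions.
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "INSERT INTO tblOrders (Test, OrderID, OrderDate) " & _
           "SELECT Test, OrderID, OrderDate FROM tblOrders_Linked", dbFailOnError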
Question by:mlcktmguy
9 Comments
 
LVL 44

Expert Comment

by:GRayL
ID: 35457607
With a table that size, pressing the limits, two weeks is too long between compact and repairs. I would be doing it daily, or every time the mdb is shut down.
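
If it helps, a minimal sketch of scripting that from a separate maintenance mdb (the paths are placeholders; DBEngine.CompactDatabase cannot compact the database it is running from):

' Sketch only -- run this from a different mdb; the paths are assumptions.
Dim src As String, dst As String
src = "C:\Data\CIOrders.mdb"
dst = "C:\Data\CIOrders_compact.mdb"
DBEngine.CompactDatabase src, dst   ' writes a compacted copy
Kill src                            ' delete the original
Name dst As src                     ' swap the compacted copy into place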
 
LVL 44

Expert Comment

by:GRayL
ID: 35457611
BTW, when you have the mdb open and running correctly, go to File > Database Properties > General tab and tell us what the Size is.
 
LVL 1

Author Comment

by:mlcktmguy
ID: 35457652
Size is 972MB.

 
LVL 44

Expert Comment

by:GRayL
ID: 35457691
I'm thinking the size is approaching limits for one user.  How many concurrent users?
 
LVL 1

Author Comment

by:mlcktmguy
ID: 35457717
There may be a few users at a time, but no one is using it when the records are being appended. We have other single-table MDBs larger than this one and used in the same manner, though they have fewer than the 15-17 million records here. No problems with those.
 
LVL 26

Expert Comment

by:Nick67
ID: 35458217
<There is another unique index defined on the table which is a combination of two other fields.>

That's an unusual thing. Do your problem-free tables have the same structure?
What data types are involved in those index fields?
Numeric, text, or both?
Is the composite index needed to enforce uniqueness, or for searching or other reasons?
What is its function?

If you drop that index from the table, compact-and-repair, and then recreate it, can you then append?
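
A sketch of that, with placeholder names (ixTestOrder, FieldA, and FieldB are guesses, not your actual index or field names):

' Sketch only -- index and field names are assumptions.
CurrentDb.Execute "DROP INDEX ixTestOrder ON tblOrders"
' ...compact-and-repair here...
CurrentDb.Execute "CREATE UNIQUE INDEX ixTestOrder ON tblOrders (FieldA, FieldB)"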

If the autonumber is at 72M and the number of records is 15-17M, a lot of append-delete is going on.
Large batches or small?
More or less frequently than in the problem-free tables/mdbs?
Does the badness occur after x number of append attempts, an append of x number of records, or an autonumber going north of x?

What about throwing away the ORIGINAL mdb?

1. create a new mdb CIOrders_New
2. import in just the table structure, and any other objects
3. create a link table to the old mdb
4. append the data
5. rename the old mdb to CIOrders_Old
6. rename CIOrders_New to CIOrders

If the corruption is in the system tables, this will hopefully deep-six it.
Maybe you've covered all this ground, but it was worth saying aloud, just to see if you've thought of everything obvious already.
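
A minimal sketch of steps 3 and 4, assuming hypothetical paths and that the linked copy is called tblOrders_Old:

' Sketch only -- paths and names are assumptions.
' Step 3: link tblOrders from the old mdb into CIOrders_New
DoCmd.TransferDatabase acLink, "Microsoft Access", _
    "C:\Data\CIOrders.mdb", acTable, "tblOrders", "tblOrders_Old"
' Step 4: append the data into the fresh table structure
' (SELECT * carries the old [ID] values along; drop [ID] from the
' column list instead if you want the AutoNumber to reseed)
CurrentDb.Execute "INSERT INTO tblOrders SELECT * FROM tblOrders_Old", dbFailOnError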
 
LVL 1

Author Comment

by:mlcktmguy
ID: 35495197
Sorry for the delay; I was pulled off onto other priorities.

Once the 'badness' started, every record was rejected as a duplicate, whether added through the daily processing logic in ADO (.AddNew) or appended from a query.

The second unique index is not unusual in our applications. How else could you keep the users from entering duplicates into the file with only a unique index on an AutoNumber field? On this file the secondary unique index is on Test and OrderID.

The Daily Process:
A comma-delimited file containing the records for the day (20k-50k) is created from another in-house mainframe system. The created file is imported into this system and the records are processed one by one using ADO and added to the respective file in this system. The users are known to make mistakes and import and process the same comma-delimited file multiple times. We need the secondary unique key to reject those genuine duplicate records or the integrity of the file is compromised.
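
For context, a minimal sketch of that import step, with a hypothetical staging table and file path:

' Sketch only -- the table name and path are assumptions.
' True = the first row of the CSV holds the field names.
DoCmd.TransferText acImportDelim, , "tblDailyImport", _
    "C:\Data\orders_20110424.csv", True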

When the purge process is run, about once a week, records over a certain age are deleted from the file using a SQL statement.

The file is then compact/repaired.
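
The purge statement is along these lines, with a hypothetical date field and retention window:

' Sketch only -- OrderDate and the 90-day window are assumptions.
CurrentDb.Execute "DELETE FROM tblOrders WHERE OrderDate < Date() - 90", dbFailOnError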
 
LVL 26

Accepted Solution

by:
Nick67 earned 2000 total points
ID: 35495385
Ok,

That's some answers. I agree that an autonumber keeps the autonumber key unique, but nothing else.
A multi-field key is still a little unusual, but OK.
<How else could you keep the users from entering duplicates into the file>
This is at the edge of what I'd do with recordset work, because 20-50K records is getting big.
On the other hand, if you have the CSV file as a linked table, then it's not so hard to build a query
"select * from tblCSV inner join TheBigTable on (tblCSV.Test = TheBigTable.test) AND (tblCSV.OrderID=TheBigTable.OrderID)"
By rights, these would be duplicates.

You'd have to test that!
You could then whack those and not have any duplicates entered in the first place.
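
Going one step further, a sketch of appending only the unmatched rows in a single statement (the classic left-join pattern; the column list is an assumption):

' Sketch only -- appends just the CSV rows whose Test/OrderID pair
' is not already in TheBigTable; column names are assumptions.
CurrentDb.Execute _
    "INSERT INTO TheBigTable (Test, OrderID) " & _
    "SELECT c.Test, c.OrderID FROM tblCSV AS c " & _
    "LEFT JOIN TheBigTable AS b " & _
    "ON (c.Test = b.Test) AND (c.OrderID = b.OrderID) " & _
    "WHERE b.OrderID Is Null", dbFailOnError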

<the records processed one by one using ADO>
If you are already processing one by one, are you opening and closing the recordset to check for duplicates?
I do DAO, but the idea is similar:

'Pseudo code (DAO); assumes Test and OrderID are both numeric --
'wrap the values in quote delimiters if either field is text
Dim db As DAO.Database, rs As DAO.Recordset
Set db = CurrentDb
'For each row in the CSV:
Set rs = db.OpenRecordset("SELECT * FROM theBigTable WHERE OrderID = " & _
                          CSV.OrderID & " AND Test = " & CSV.Test)
If rs.RecordCount <> 0 Then
    'bail, because the Test/OrderID combination already exists
Else
    'rs.AddNew ... populate the fields ... rs.Update
End If
rs.Close
'Next row

If you are, does the badness happen in spite of the checking?

Does a unique multi-field combination ever come back from the dead?
i.e. this week Test = 1 and OrderID = 1,
next week it gets deleted,
two weeks from now it re-occurs?

Have you tried just dropping and re-creating the index to see what happens?

Lots of questions!
 
LVL 26

Expert Comment

by:Nick67
ID: 35703709
Out of curiosity, what part of the suggestions was the solution?

