Not able to Append Records to Table

Posted on 2011-04-24
Last Modified: 2012-05-11
Issue: A table in our system would not append any new records.  Every attempt to append threw a duplicate-key error even though the records were clearly not duplicates.

The table has been in production for about two years.  At any given time it contains 15 to 17 million records.  It has an AutoNumber field [ID] as the primary key.  There is another unique index defined on the table, a combination of two other fields.  None of the records being added (and flagged as duplicates) are duplicates under that two-field unique index.

The AutoNumber counter in the [ID] field is up to 72 million and the total size of the file is close to, but under, 1GB.

According to the documentation an AutoNumber (a Long Integer) is unique to roughly 2 billion values and the maximum MDB size is 2GB.  The table/MDB (there is only one table in the MDB) is far below either of those limits.

A compact/repair is routinely run on the MDB as part of routine maintenance, at intervals no longer than two weeks.

I tried appending the records through code and using an append query.  In all cases, all records were rejected with duplicate-key violations.

I solved the problem as described below, but I want to understand what happened so I can monitor for it in the future.

My question is: why was each and every record being appended to the table flagged as a duplicate?

Here is how I addressed the situation:
1. I copied the original MDB (CIOrders.MDB) under another name (CIOrders_Locked.MDB).
2. I deleted all of the records from the original table (tblOrders) in the original MDB (CIOrders.MDB).
3. I ran a compact/repair on the original MDB (CIOrders).
4. I linked the table (tblOrders) in MDB (CIOrders_Locked) into MDB (CIOrders).
5. I wrote an append query appending all of the records in the linked table into (tblOrders) in the original MDB (CIOrders).
6. I executed the query.
7. Problem solved.

All of the same records are in tblOrders as were there originally.  After deleting all of the records in Step 2 and compacting/repairing the MDB in Step 3, the AutoNumber started over when I appended the records in Steps 5 and 6.
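The seed reset described above can be reproduced in miniature with SQLite standing in for Jet (table and values here are made up for the demo; in Access it is the compact/repair in Step 3, not the delete itself, that resets the AutoNumber seed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# ID plays the role of the AutoNumber primary key
con.execute("CREATE TABLE tblOrders (ID INTEGER PRIMARY KEY, OrderID INTEGER)")
con.executemany("INSERT INTO tblOrders (OrderID) VALUES (?)", [(10,), (20,), (30,)])

con.execute("DELETE FROM tblOrders")                              # empty the table (Step 2)
con.execute("INSERT INTO tblOrders (OrderID) VALUES (?)", (40,))  # re-append (Steps 5-6)

# with the table emptied, the counter starts over at 1
new_id = con.execute("SELECT ID FROM tblOrders").fetchone()[0]
```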
Question by:mlcktmguy
    LVL 44

    Expert Comment

With a table that size, pressing limits, two weeks is too long between compacts.  I would be doing a compact and repair daily, or every time the mdb is shut down.
    LVL 44

    Expert Comment

BTW, when you have the mdb open and running correctly, go to File, Database Properties, General tab, and tell us what the size is.
    LVL 1

    Author Comment

    Size is 972MB.
    LVL 44

    Expert Comment

    I'm thinking the size is approaching limits for one user.  How many concurrent users?
    LVL 1

    Author Comment

There may be a few users at a time, but no one is using it when the records are being appended.  We have other single-table MDBs larger than this one and used in the same manner.  They have fewer than 15 - 17 million records.  No problems with those.
    LVL 26

    Expert Comment

    <There is another unique index defined on the table which is a combination of two other fields.>

That's an unusual thing.  Do your problem-free tables have the same structure?
What data types are involved in those index fields?
Numeric or text or both?
Is the composite index needed to enforce uniqueness, for searching, or for other reasons?
What is its function?

    If you drop that index from the table, compact-and-repair, and then recreate it, can you then append?

If the AutoNumber is at 72M and the number of records is at 15-17M, a lot of append-delete is going on.
Large batches or small?
More or less frequently than in the other problem-free tables/mdb's?
Does the badness occur after x number of append attempts, an append of x number of records, or an AutoNumber going north of x?

    What about throwing away the ORIGINAL mdb?

    1. create a new mdb CIOrders_New
    2. import in just the table structure, and any other objects
    3. create a link table to the old mdb
    4. append the data
5. rename the old mdb to CIOrders_Old
    6. rename CIOrders_New to CIOrders

    If the corruption is in the system tables, this will hopefully deep-six it.
    Maybe you've covered all this ground, but it was worth saying aloud, just to see if you've thought of everything obvious already.
    LVL 1

    Author Comment

    Sorry for the delay, pulled off on other priorities.  

Once the 'badness' started, every record was rejected as a duplicate, whether added through the daily processing logic in ADO (.AddNew) or appended from a query.

The second unique index is not unusual in our applications.  How else could you keep the users from entering duplicates into the file, when the only other unique index is on an AutoNumber field?  On this file the secondary unique index is on Test and OrderID.

    The Daily Process:
A comma-delimited file containing the records for the day (20K - 50K) is created from another in-house mainframe system.  The created file is imported into this system and the records are processed one by one using ADO and added to the respective file in this system.  The users are known to make mistakes and import and process the same comma-delimited file multiple times.  We need the secondary unique key to reject genuine duplicate records, or the integrity of the file is compromised.
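A minimal sketch of that daily import, with SQLite standing in for the MDB (the (Test, OrderID) pairs are made up): the composite unique index rejects every row the second time the same batch is imported, which is the behavior being relied on here.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tblOrders ("
    "  ID INTEGER PRIMARY KEY,"          # the AutoNumber stand-in
    "  Test INTEGER, OrderID INTEGER,"
    "  UNIQUE (Test, OrderID))"          # the secondary composite unique index
)

batch = [(1, 1001), (1, 1002), (2, 1001)]  # hypothetical (Test, OrderID) rows

def import_batch(rows):
    """Add rows one by one; rows violating the composite key are rejected."""
    added = 0
    for test, order_id in rows:
        try:
            con.execute("INSERT INTO tblOrders (Test, OrderID) VALUES (?, ?)",
                        (test, order_id))
            added += 1
        except sqlite3.IntegrityError:
            pass  # duplicate (Test, OrderID): skip it, keep processing
    return added

first = import_batch(batch)   # fresh file: all rows go in
second = import_batch(batch)  # same file imported again: all rejected
```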

When the purge process is run, about once a week, records over a certain age are deleted from the file using an SQL statement.

    The file is then compact/repaired.
    LVL 26

    Accepted Solution


That's some answers.  I agree that an AutoNumber keeps the AutoNumber key unique, but nothing else.
A multi-field key is still a little unusual, but OK.
    <How else could you keep the users from entering duplicates into the file>
    This is at the edge of what I'd do with recordset stuff because 20-50K is getting big.
    On the other hand, if you have the CSV file as a linked table, then it's not so hard to build a query
    "select * from tblCSV inner join TheBigTable on (tblCSV.Test = TheBigTable.test) AND (tblCSV.OrderID=TheBigTable.OrderID)"
    By rights, these would be duplicates.

    You'd have to test that!
    You could then whack those and not have any duplicates entered in the first place
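That set-based idea can be sketched like this, with SQLite standing in for Jet SQL (table and column names follow the thread; the rows are invented, and Jet's own syntax differs slightly): stage the CSV rows, then append only those with no matching (Test, OrderID) already in the big table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TheBigTable (Test INTEGER, OrderID INTEGER,
                              UNIQUE (Test, OrderID));
    CREATE TABLE tblCSV (Test INTEGER, OrderID INTEGER);
""")
con.executemany("INSERT INTO TheBigTable VALUES (?, ?)", [(1, 1001), (1, 1002)])
# today's import repeats one existing row and brings one new one
con.executemany("INSERT INTO tblCSV VALUES (?, ?)", [(1, 1002), (2, 2001)])

# append only the non-duplicates, in one statement
con.execute("""
    INSERT INTO TheBigTable (Test, OrderID)
    SELECT c.Test, c.OrderID
    FROM tblCSV AS c
    WHERE NOT EXISTS (
        SELECT 1 FROM TheBigTable AS b
        WHERE b.Test = c.Test AND b.OrderID = c.OrderID
    )
""")
rows = con.execute("SELECT COUNT(*) FROM TheBigTable").fetchone()[0]
```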

    <the records processed one by one using ADO>
    If you are already processing one by one, are you opening and closing the recordset to check for duplicates?
    I do DAO but the idea is similar

    'pseudo code (DAO; only add when no match exists, otherwise skip the duplicate)
    For Each row In the CSV
        Set rs = db.OpenRecordset("SELECT * FROM theBigTable WHERE OrderID = " _
            & CSV.OrderID & " AND Test = " & CSV.Test)
        If rs.RecordCount = 0 Then
            rs.AddNew
            'yadda yadda: populate the fields, then rs.Update
        Else
            'bail; it already exists
        End If
        rs.Close
    Next row

    If you are, does the badness happen in spite of the checking?

    Does a unique multi-field combination ever come back from the dead?
    ie this week test = 1 and orderid =1
    next week it gets deleted
    two weeks from now it re-occurs?

    Have you tried just dropping and re-creating the index to see what happens?

    Lots of questions!
    LVL 26

    Expert Comment

    Out of curiosity, what part of the suggestions was the solution?
