State of play:
Databases ensure data integrity and relationships through primary and foreign keys, and MS Access is no exception.
Well, that's not completely true ...
A primary key value should be used only once, and this is where MS Access fails, as the following sample demonstrates:
Surprise! The newly generated ID isn't 11, as we would expect, but 10 again (remember, you deleted it?).
Why, you may ask?
Microsoft Access internally keeps track of the last ID used, but this counter is reset to the maximum ID + 1 when you compact the database. This effectively breaks the third rule of primary keys.
What can we do about this?
Data macros to the rescue
In MS Access 2010, Microsoft introduced the Data Macro, which can perform various actions before data is changed or deleted, and after data is inserted, updated, or deleted.
These macros (known as triggers in other database engines) are executed automatically whenever you add, update, or delete data, whether manually, through forms, or programmatically. They are mostly used to validate data (i.e., ensure data integrity), but we can use them to generate a true ID number that will never - ever - be re-used, even if the database is compacted.
To achieve this, we first need a table that keeps track of the maximum ID used. We also need data macros that pull the max ID before a new row is added, increase the max ID after a new row is saved, and prevent any update to the ID.
So, in this example, create a table and name it IDtrackers, with a TableName column and a maxID column (Long Integer), as referenced by the macros below. Then open your data table, create a 'Before Change' data macro, and copy and paste the following code:
<?xml version="1.0" encoding="UTF-16" standalone="no"?>
<DataMacros xmlns="http://schemas.microsoft.com/office/accessservices/2009/11/application">
  <DataMacro Event="BeforeChange">
    <Statements>
      <ConditionalBlock>
        <If>
          <Condition>[IsInsert]</Condition>
          <Statements>
            <Comment>New record, pull the primary key value from IDtrackers table</Comment>
            <LookUpRecord>
              <Data>
                <Reference>IDtrackers</Reference>
                <WhereCondition>[TableName]="MyTable"</WhereCondition>
              </Data>
              <Statements>
                <Action Name="SetField">
                  <Argument Name="Field">myTable.ID</Argument>
                  <Argument Name="Value">[IDtrackers].[maxID]</Argument>
                </Action>
              </Statements>
            </LookUpRecord>
          </Statements>
        </If>
        <Else>
          <Statements>
            <Comment>Updated record, don't allow the primary key to be updated.</Comment>
            <ConditionalBlock>
              <If>
                <Condition>Updated("ID")</Condition>
                <Statements>
                  <Action Name="RaiseError">
                    <Argument Name="Number">5001</Argument>
                    <Argument Name="Description">Primary key can't be updated</Argument>
                  </Action>
                </Statements>
              </If>
            </ConditionalBlock>
          </Statements>
        </Else>
      </ConditionalBlock>
    </Statements>
  </DataMacro>
</DataMacros>
Close and save.
This data macro will automatically fill the ID column when we insert a new record. It will also raise an error if we attempt to modify an existing ID.
In the ribbon, on the Table tab, choose the 'After Insert' event to create a new data macro.
Copy and paste the following code:
<?xml version="1.0" encoding="UTF-16" standalone="no"?>
<DataMacros xmlns="http://schemas.microsoft.com/office/accessservices/2009/11/application">
  <DataMacro Event="AfterInsert">
    <Statements>
      <Comment>Find the primary key value and update it</Comment>
      <LookUpRecord>
        <Data Alias="Found">
          <Reference>IDtrackers</Reference>
          <WhereCondition>[IDtrackers].[TableName]="MyTable"</WhereCondition>
        </Data>
        <Statements>
          <EditRecord>
            <Data/>
            <Statements>
              <Action Name="SetField">
                <Argument Name="Field">Found.maxID</Argument>
                <Argument Name="Value">[Found].[maxID]+1</Argument>
              </Action>
            </Statements>
          </EditRecord>
        </Statements>
      </LookUpRecord>
    </Statements>
  </DataMacro>
</DataMacros>
Close and save again.
This data macro will update maxID in the IDtrackers table whenever a record is added.
Finally, hide the IDtrackers table (this will keep novices from editing it).
Now you can compact the database, and IDs that were thrown away will never show up again.
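For readers who want to experiment with the pattern outside Access, the two data macros boil down to the following logic. Here is a minimal sketch in Python with SQLite (the table and column names are borrowed from this article; everything else is illustrative, not Access code):

```python
import sqlite3

# Sketch of the tracker-table pattern: one function plays the role of both
# data macros (read maxID before insert, bump it after insert).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE IDtrackers (TableName TEXT PRIMARY KEY, maxID INTEGER);
    CREATE TABLE MyTable (ID INTEGER PRIMARY KEY, Payload TEXT);
    INSERT INTO IDtrackers VALUES ('MyTable', 1);
""")

def insert_row(con, payload):
    """Insert a row using the next tracked ID, then advance the tracker."""
    with con:  # one transaction, so the read and the increment stay atomic
        (next_id,) = con.execute(
            "SELECT maxID FROM IDtrackers WHERE TableName = 'MyTable'"
        ).fetchone()
        con.execute("INSERT INTO MyTable (ID, Payload) VALUES (?, ?)",
                    (next_id, payload))
        con.execute(
            "UPDATE IDtrackers SET maxID = maxID + 1 "
            "WHERE TableName = 'MyTable'")
        return next_id

for p in ("a", "b", "c"):
    insert_row(con, p)
con.execute("DELETE FROM MyTable WHERE ID = 3")  # delete the last row...
print(insert_row(con, "d"))                      # prints 4: ID 3 is never reused
```

Because the tracker only moves forward, a deleted ID stays retired, which is exactly what the data macros achieve inside Access.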
Final note, for perfectionists:
You probably noticed that MS Access auto-numbers start at 1, effectively halving the quantity of numbers available (presumably an arbitrary decision by Microsoft; don't ask why).
You can set the starting value in the IDtrackers table to the minimum allowed by the Long data type, in other words:
-2147483648 (-2^31 or 0x80000000)
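The arithmetic, in plain Python, for anyone who wants to check the ranges:

```python
# A signed 32-bit Long spans -2**31 .. 2**31 - 1, about 4.29 billion values.
# Starting the counter at 1 leaves only 2**31 - 1 of them usable.
full_range = 2**32
from_one = 2**31 - 1
print(-2**31)                  # prints -2147483648, the lowest Long value
print(full_range // from_one)  # prints 2: starting at 1 halves the supply
```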
Hope you found this useful!
Those are all points/factors that would be helpful to your readers.
I'll test this with my Access 2010 database. The code looks innocuous.
I would say this is something that is never needed.
Why in the world? An ID value is used, e.g., to link tables together. If we set referential integrity between these tables (which also works in Access), then an ID value cannot be deleted in the main table as long as a child table uses it; with cascade deletion, deleting it also deletes all entries in the child tables. Either way, the ID value will no longer be used for anything, so it's absolutely OK to reuse the value after compacting.
By the way, you can also set the increment value in Access other than 1 using SQL to specify it.
Using a data macro, I would say, can hurt performance instead. Try an INSERT...SELECT. In SQL Server, for example, a trigger is executed only once for such a command (once for all inserted records), so there you need to program the trigger so that it can handle all the inserted data at once. I don't know how Access does it with data macros, as I never use them (just because I would NEVER use what Access calls macros in general), but if it works the same way, you need to check whether the macro is called for each and every record or only once. In the first case, a bulk INSERT like INSERT...SELECT would be a performance bottleneck; in the second, you would need to check whether your code still works.
Additionally, if you ever need to upgrade the back-end from Access to a real database server, you will have fun converting such macros... the same goes for debugging such code, and I'm sure most developers would not think to look into data macros to finally find out why inserting records shows strange behaviour.
In regard to the fact that a primary key should be used only once:
Take an ID card as an example: it has a unique number.
According to what you say, when the owner dies, the number is free to be used again.
So, the next person asking for an ID card would receive a dead person's number. Hmmm, doesn't make sense to me.
Concerning performance issues, it depends on the database engine.
Concerning other database engines, the decision to use another database engine as a back-end is not the purpose of this article.
Concerning debugging, as with every new technology, developers will need to learn.
That's the old discussion about what to do with an ID. Normally, in a database, the ID is not used for "real-world" purposes, and then there is no problem with reusing any ID. What you are talking about is a counter value like an invoice number - such numbers would never be implemented with an AutoID; that's not their purpose.
An AutoID should only guarantee that each ID is unique and that each new ID is higher than the last, because it is usually used as the primary key, which is also usually the clustered index; that way no physical data rearrangement is needed when you compact, as the clustered key defines the physical order of the records.
To implement a normal counter, I would simply use a transaction: add a new row to a counter table, get the generated ID (not a separately maintained counter value), and store it in the table where I want to use it. Easy to follow, and nothing can happen between these two actions.
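That counter-table idea can be sketched like this, with Python and SQLite standing in for a database server (the table names are made up for illustration; SQLite's AUTOINCREMENT, like an identity column, never reuses a generated key, even after deletes):

```python
import sqlite3

# Sketch of the counter-table approach: each call inserts a row into a
# counter table inside a transaction and uses the generated key as the
# new ID. AUTOINCREMENT tells SQLite never to reuse a rowid.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Counter (ID INTEGER PRIMARY KEY AUTOINCREMENT)")
con.execute("CREATE TABLE Invoice (ID INTEGER PRIMARY KEY, Total REAL)")

def new_invoice(con, total):
    with con:  # single transaction: nothing can happen between the two steps
        new_id = con.execute("INSERT INTO Counter DEFAULT VALUES").lastrowid
        con.execute("INSERT INTO Invoice (ID, Total) VALUES (?, ?)",
                    (new_id, total))
        return new_id

print(new_invoice(con, 9.99))   # prints 1
print(new_invoice(con, 19.99))  # prints 2
```

Wrapping both statements in one transaction is what makes the scheme safe under concurrent inserts.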