Incremental IDs and adding new records - give me your thoughts

Using .NET 2.0 here.
This is a problem I have had for a year now and have never properly resolved. I prefer to have all my database records identified by integers rather than GUIDs, incrementing to the next highest value when added to the database. But there are a couple of issues with this approach that I would like advice on, and perhaps some code to supplement it.

1) When I open an existing record from, say, my customer list, it opens a detailed form view by calling new formCustomer(CustomerID). I have set the data source for the 'details' form to be that one customer rather than pulling in the whole table, and have done this through a stored procedure called through the customerTableAdapter. Usage: this.customerTableAdapter.FillCustomerForm(this.erpDBDataSet.Customer, formID, "CustomerID");
The form is bound to this via a customerBindingSource. Clicking the 'New' button calls customerBindingSource.AddNew(). I have set the CustomerID in the Customer DataTable to auto-increment. This is a very handy function, except for one thing - because the form was bound to a single record, and I have maybe viewed CustomerID 150 out of 200, the ID of the new record is set to 151 instead of 201, because that is all the form can see.
What should I do to retrieve the proper highest value, i.e. 201? I was thinking that if I called new formCustomer(0), the form could detect that a new record is being added and set the BindingSource to pull in every customer. That way it would get the right ID, but it is very inefficient.
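To make the symptom above concrete: this is a minimal standalone sketch (not the actual app code - the table and column names just mirror the question) showing that a DataTable's AutoIncrement column only knows about rows it has loaded, not what is in the database.

```csharp
using System;
using System.Data;

class AutoIncrementDemo
{
    static void Main()
    {
        DataTable customer = new DataTable("Customer");
        DataColumn id = customer.Columns.Add("CustomerID", typeof(int));
        id.AutoIncrement = true;
        id.AutoIncrementSeed = 1;
        id.AutoIncrementStep = 1;

        // Simulate FillCustomerForm loading a single record, ID 150.
        customer.Rows.Add(150);

        // NewRow()/AddNew() picks the next value after the highest
        // loaded row, not the highest value in the database.
        DataRow fresh = customer.NewRow();
        Console.WriteLine(fresh["CustomerID"]); // prints 151, not 201
    }
}
```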

2) The above approach would also be very bad if a user clicked New and got 201 assigned to the new record, but then took an hour before saving it - any number of people could have added new customer records in the meantime. When the first person goes to save, would they not overwrite a record which has just been saved as 201? Would it therefore be better not to assign any ID at all until the record is first saved? Or would this be dealt with in some way by the TableAdapter class? If so, how would I go about it?

Or if any of you have better suggestions altogether for this approach of auto-incrementing IDs, that would be even better.
with an identity column ...

CREATE TABLE foo (
    pkFooId INT IDENTITY(1,1) PRIMARY KEY,
    bar INT
)

INSERT INTO foo (bar) VALUES (1)

it will automatically handle errors etc. ... no need for a transaction for the single statement, and it's actually bad practice to wrap an atomic operation in a transaction (performance)

if you also need the id back ...

ExecuteScalar("INSERT INTO foo (bar) VALUES (1); SELECT SCOPE_IDENTITY();")
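Fleshed out into .NET 2.0 ADO.NET, that one-liner could look like the sketch below. The connection string and the foo/bar names are illustrative, not from the original post.

```csharp
using System;
using System.Data.SqlClient;

class InsertWithIdentity
{
    // Hypothetical connection string - substitute your own.
    const string ConnStr =
        "Data Source=.;Initial Catalog=erpDB;Integrated Security=True";

    static int InsertFoo(int bar)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO foo (bar) VALUES (@bar); SELECT SCOPE_IDENTITY();",
            conn))
        {
            cmd.Parameters.AddWithValue("@bar", bar);
            conn.Open();
            // SCOPE_IDENTITY() comes back as decimal, so convert explicitly.
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}
```

Note SCOPE_IDENTITY() rather than @@IDENTITY: it returns the identity generated in the current scope, so a trigger inserting into another identity table elsewhere cannot give you the wrong value.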

Why not just use Identity columns at the database level?

Alexandre Simões (Manager / Technology Specialist) commented:

I also use table IDs as auto-incremented ints.
I only assign the Id value upon save. I create a transaction to make sure that no other user can add records while I'm saving mine, save the record, and commit the transaction.

You don't specify how you save the record... I usually use a stored procedure.
Using datasets, it may be slower but the row will also have the Id filled after save.

You shouldn't assign IDs (even those not auto-incremented) anywhere other than inside the save transaction.
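Alexandre doesn't show his code, but one way to read this advice is the sketch below: the insert and anything that depends on the new ID happen inside a single SqlTransaction, and the ID only exists once the save runs. Table and column names are illustrative.

```csharp
using System;
using System.Data.SqlClient;

class CustomerSave
{
    // Assumes an open-able connection; commits only if every step succeeds.
    static int SaveCustomer(SqlConnection conn, string name)
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            try
            {
                SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Customer (Name) VALUES (@name);" +
                    " SELECT SCOPE_IDENTITY();",
                    conn, tx);
                cmd.Parameters.AddWithValue("@name", name);
                int newId = Convert.ToInt32(cmd.ExecuteScalar());

                // ...any child-table inserts that use newId go here,
                // on the same connection and transaction...

                tx.Commit();
                return newId;
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}
```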

Alex :p

"I only assign the Id value upon save. I create a transaction to make sure that no other user can add reconds while I'm saving mine, save the record and commit the transaction."

If you are using identity columns this process is assured to be atomic (no need for a transaction)
Alexandre Simões (Manager / Technology Specialist) commented:
But it's not assured that the INSERT code won't throw an error...
I just mentioned the transaction as a best-practice.

Alex :p
smickell (Author) commented:
I save records by:

I did not have the ID columns set as Identity columns; I have done now. I had, however, set the corresponding DataTable in the DataSet to be the equivalent of an identity column, i.e. AutoIncrement = True, AutoIncrementStep = 1. If I saved to the DataSet and ran a TableAdapter.Update() straight after, would this have been OK?

Ugh. I was testing as I wrote this reply, and it turns out my SQL DB is not updating at all, nor was it ever. Only the local DataSet was being updated. This is a relatively new application and I assumed the data was being correctly written away to the DB when it actually wasn't. Even if I do the most basic procedure of creating a new form and dragging a customer table onto it, I can edit & save to the dataset but not the actual db. Should this be posted as a separate question?
The code in question is (the Fill line is the standard designer-generated call the TODO comment refers to; the save handler body is omitted here):

private void Form1_Load(object sender, EventArgs e)
{
    // TODO: This line of code loads data into the 'erpDBDataSet.Customer' table. You can move, or remove it, as needed.
    this.customerTableAdapter.Fill(this.erpDBDataSet.Customer);
}

private void customerBindingNavigatorSaveItem_Click(object sender, EventArgs e)
{
    // clicking this should save to the database, but doesn't
}
smickell (Author) commented:
*EDIT - Forget the problem in the last post. It turns out my database was copying a fresh version of the MDF file every time I did a rebuild :-O

Also, the other problems seem to have been resolved after setting the columns in SQL Server to be identity columns. They were already set, as I said, to the equivalent (auto-increment = true) in the DataSet model.
It's definitely working. I still have it set to generate a unique ID once a new record is initiated - I clicked 'New' a few times and opened a few entry forms and they all sat at ID 196. But when I clicked 'Save' in each form, the DataSet was able to pick up the most recent update and assigned the next records 197, 198, 199 and so on :)  
Is there any down side to having auto-increment set at both the dataset and database level? I'm guessing there's not given that if you just drag a table with identity columns onto the Dataset designer it implements this functionality anyway.
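One commonly recommended refinement when auto-increment is set in both places (this is a well-known ADO.NET pattern, not something from this thread): give the client-side column a negative seed and step, so the temporary IDs the DataSet hands out can never collide with real identities the server assigns, and let TableAdapter.Update() refresh the rows with the database's values. A minimal sketch:

```csharp
using System;
using System.Data;

class NegativeSeedDemo
{
    static void Main()
    {
        DataTable customer = new DataTable("Customer");
        DataColumn id = customer.Columns.Add("CustomerID", typeof(int));
        id.AutoIncrement = true;
        // Negative, decreasing temporary IDs on the client: -1, -2, -3...
        id.AutoIncrementSeed = -1;
        id.AutoIncrementStep = -1;

        DataRow a = customer.NewRow(); customer.Rows.Add(a);
        DataRow b = customer.NewRow(); customer.Rows.Add(b);

        // prints: -1, -2
        Console.WriteLine("{0}, {1}", a["CustomerID"], b["CustomerID"]);

        // After TableAdapter.Update() (with the INSERT returning
        // SCOPE_IDENTITY()), these rows would hold the real server IDs.
    }
}
```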
greg - executescalar method also noted. Thanks.
If you are looking at performance, you could use ExecuteNonQuery and return an OUTPUT variable instead of using ExecuteScalar. There's not much to choose between them. ExecuteScalar arguably requires a little less code. Both ExecuteScalar and ExecuteNonQuery use a SqlDataReader. ExecuteScalar uses it to get the value, ExecuteNonQuery just opens it and then closes it.

ExecuteNonQuery has the advantage that you declare the type of the output variable, so you are signing up to a contract. You can also use it to return more than one output variable. The corollary is that you can use ExecuteReader to "return more than one output variable" without having to declare lots of parameters. It's horses for courses, and when you weigh it all up I prefer ExecuteNonQuery, as it is clearer in its intentions.
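The OUTPUT-parameter alternative described above could be sketched as follows. The stored procedure name and shape are assumptions for illustration, e.g. CREATE PROCEDURE InsertFoo @bar INT, @newId INT OUTPUT AS BEGIN INSERT INTO foo (bar) VALUES (@bar); SET @newId = SCOPE_IDENTITY(); END

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class OutputParamDemo
{
    // Assumes an already-open connection and the hypothetical
    // InsertFoo procedure described above.
    static int InsertFoo(SqlConnection conn, int bar)
    {
        using (SqlCommand cmd = new SqlCommand("InsertFoo", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@bar", bar);

            // Declaring @newId as INT is the "contract" mentioned above:
            // a wrong type fails loudly instead of coming back as object.
            SqlParameter newId = cmd.Parameters.Add("@newId", SqlDbType.Int);
            newId.Direction = ParameterDirection.Output;

            cmd.ExecuteNonQuery();
            return (int)newId.Value;
        }
    }
}
```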

gregoryyoung: in situations that need to be ACID (parent-child table inserts, where the child inserts use the generated parent IDENTITY value as part of the next insert), would you put the INSERT with the IDENTITY column inside a transaction or not?

"no need for a transaction for the SINGLE statement and its actually bad practice to run an atomic operation within a transaction (performance)"

Question has a verified solution.
