Right now we have a concern about an identity column's data type, and we would like to change it from int to bigint. From a replication point of view we want a larger range on the identity column, so that when replicating, the identity values generated on different hosts in merge replication do not conflict with each other.
Something like this:
CREATE TABLE [dbo].[header_transaction_fee](
    [vms_header_transaction_fee_id] [bigint] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL
    -- ... remaining columns omitted ...
)
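As a hedged sketch of the type change itself (using the table and column names from the example above, and assuming no index, constraint, or replication article references the column yet), SQL Server lets you widen an identity column in place; the identity property is preserved:

```sql
-- Widen the identity column from int to bigint.
-- This is a size-of-data operation: the table is rewritten and locked
-- for the duration, so it should be scheduled in a maintenance window.
ALTER TABLE [dbo].[header_transaction_fee]
    ALTER COLUMN [vms_header_transaction_fee_id] bigint NOT NULL;
```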
But if the data type changes from int to bigint, and we later add an index on this bigint column, will building the index be slower, and will ongoing performance also be hurt, since indexes are supposed to work best on small int keys? I assume any such problem would be with a clustered index on this column rather than a nonclustered index. Am I right?
We are replicating this kind of table to 3 servers and need to create an acceptable identity range on each server. But someone proposed that if we stay with int, we have a limit of about 2 billion values, which is roughly 650 million per server. In most cases this would be fine for perhaps 10 years; for some tables it may only be good for 2 years. I guess it comes down to what is being loaded into the table and how often.
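The 650-million-per-server split described above could be set up manually by giving each server its own starting point in the int range (a sketch with illustrative boundary values; note that merge replication can also manage identity ranges automatically when automatic range management is enabled on the article):

```sql
-- Server A keeps the default seed, so it uses 1 .. 650,000,000.

-- Server B: reseed so the next inserted row gets 650,000,001.
DBCC CHECKIDENT ('dbo.header_transaction_fee', RESEED, 650000000);

-- Server C: reseed so the next inserted row gets 1,300,000,001.
DBCC CHECKIDENT ('dbo.header_transaction_fee', RESEED, 1300000000);
```

With NOT FOR REPLICATION on the identity column, rows arriving via replication keep their original values and do not advance the local seed, so each server only consumes its own slice of the range.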
1) But why is extending the identity to bigint related to how many years the table can store data? Is it just because of the identity range assigned for merge replication on that table? In the case of merge replication, how can we deal with running out of range on a table that keeps receiving inserts and whose data is expected to stay in the database for a long time, e.g. 7 years?
2) If all we can do is enlarge the identity range, is there any way to solve this kind of problem other than increasing the range again and again? Is there any permanent solution?
3) If the type is bigint anyway, is there really a performance impact when creating an index on that column? Or do we usually not create an index on an identity column at all? And if we do create one, should it be a nonclustered index rather than a clustered index? Is that the way it should be done?
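For reference, the two indexing options in question 3 can be written as below (the constraint and index names are illustrative, not from the original schema). A bigint key is 8 bytes versus 4 for int, so the index is somewhat wider, but an ever-increasing identity is a common clustered-key choice because new rows append at the end of the B-tree rather than causing page splits:

```sql
-- Option A: clustered primary key on the identity column.
ALTER TABLE [dbo].[header_transaction_fee]
    ADD CONSTRAINT [PK_header_transaction_fee]
    PRIMARY KEY CLUSTERED ([vms_header_transaction_fee_id]);

-- Option B: nonclustered index on the same column instead.
CREATE NONCLUSTERED INDEX [IX_header_transaction_fee_id]
    ON [dbo].[header_transaction_fee] ([vms_header_transaction_fee_id]);
```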