jisha123 asked:

Maximum Foreign key references problem in SQL Server

Hi,
   One of the limitations of SQL Server is that it allows only 253 foreign key references per table. In one of my C# applications I am facing this problem: I have a Users table, and the number of foreign key references to its primary key (UserId) is 393, which exceeds the permitted limit of 253. At run time, whenever I insert a new record into that table and then delete it immediately, I get a SQL stack overflow exception.
I would like to know about the ways for solving this problem.

Jisha.
ASKER CERTIFIED SOLUTION
Guy Hengel [angelIII / a3]
(solution text available to Experts Exchange members only)

<<I would like to know about the ways for solving this problem.>>
One way is to normalize your schema.  253 columns means your schema is *not* normalized.
twoboats:

Yeah - Racimo is on it. Sounds like you are using columns that should be rows.

So, for example, if you have a column in a table for each user, you should instead have a user table with a row for each one.

In the user_table you have a column UserId.

So child tables of users each have a UserId column, and the FK references user_table.UserId.

In a good relational model, I think you'd struggle to find any entity that is related to that many others.



User                          Account

UserId    Name                UserId    Balance
1         Fred                1         100
2         Jack                1         200
                              2         50
                              2         25
                              2         300


So Account.UserId is an FK to User.UserId. Fred has 2 accounts, Jack 3. And integrity is enforced with a single FK.
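
For illustration, here is a minimal T-SQL sketch of that shape (names taken from the example above; the data types are just placeholders):

-- USER is a reserved word in T-SQL, hence the brackets
CREATE TABLE [User] (
    UserId int PRIMARY KEY,
    Name   varchar(50) NOT NULL
)

CREATE TABLE Account (
    UserId  int NOT NULL REFERENCES [User](UserId),  -- the single FK
    Balance money NOT NULL
)

-- Fred (UserId 1) gets two accounts, Jack (UserId 2) gets three
INSERT INTO [User] VALUES (1, 'Fred')
INSERT INTO [User] VALUES (2, 'Jack')
INSERT INTO Account VALUES (1, 100)
INSERT INTO Account VALUES (1, 200)
INSERT INTO Account VALUES (2, 50)
-- any Account row whose UserId is not in [User] is rejected by that one FK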


or, we got all on the wrong track...
maybe the data type of the column UserID is tinyint, and hence only allows values up to 255...
in which case the solution is to change from tinyint to int for example.
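
If that turns out to be the case, the fix is roughly the following sketch (assuming the table is called Users; in practice the primary key and any existing FK constraints on the column have to be dropped and recreated around the change):

-- widen the key column from tinyint to int
ALTER TABLE Users ALTER COLUMN UserId int NOT NULL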

jisha123, can you clarify, please?
2 common errors in design are to believe:

--> that data must be stored the same way it should look on the screen.  Codd specifically mentioned that the user view and the internal representation of data are separate concepts...
--> that denormalization optimizes anything.  It just makes things worse.

Normalization = optimization.  Any denormalized schema is necessarily underoptimized.

Hope this helps...
<<or, we got all on the wrong track...
maybe the data type of the column UserID is tinyint, and hence only allows values up to 255...
in which case the solution is to change from tinyint to int for example.>>
I think the problem is simple: the questioner is obviously using SQL Server beyond its limits:

> One should limit as much as possible the creation/deletion of objects and columns at run time... Tables are *not* RAM arrays, and databases should have fixed structures...
> A normalized schema should not have 253 columns...

Hope this helps...
jisha123 (Asker):

Hi All,
      Thanks to all for your valuable comments. Here is my situation: I have a Users table which has UserId as the primary key. I also have another 250 tables which have CreatedUserId and UpdatedUserId fields, which are nothing but foreign key references to the UserId field in the Users table. So the Users table's UserId primary key has approximately 250 * 2 = 500 references, which exceeds the per-table limit of 253. Kindly tell me the best approach to solve the above problem.

Jisha.
>Kindly tell me the best approach to solve the above problem.

I think I gave you the only approach by using the intermediate table...
<<...So the Users table's UserId primary key has approximately 250 * 2 = 500 references, which exceeds the per-table limit of 253. Kindly tell me the best approach to solve the above problem.>>
I am just curious... why would you need that many UserId foreign keys?
Hi Racimo,
              Do you mean to say we should not set foreign keys to CreatedUserId and UpdatedUserId fields ?
Jisha.
If CreatedUserId and UpdatedUserId are columns in another table, then each only needs one FK reference to UserID in the User table.

You don't need a FK for each row.
Hi,
    Yes. CreatedUserId and UpdatedUserId each need only one foreign key to UserId. But we have 250 tables, all of which have CreatedUserId and UpdatedUserId as FKs to UserId in the Users table, so there will be around 500 foreign key references to the UserId column.

Jisha
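
To make the described layout concrete, here is a sketch of two of the ~250 tables (Orders and Invoices are hypothetical names; only the relevant columns are shown):

CREATE TABLE Orders (
    OrderId       int PRIMARY KEY,
    CreatedUserId int REFERENCES Users(UserId),
    UpdatedUserId int REFERENCES Users(UserId)
)

CREATE TABLE Invoices (
    InvoiceId     int PRIMARY KEY,
    CreatedUserId int REFERENCES Users(UserId),
    UpdatedUserId int REFERENCES Users(UserId)
)

-- each table carries only 2 FK constraints of its own, but every one of them
-- counts as an incoming reference against Users: 250 tables * 2 = ~500 references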
250 tables? How come?

Sounds like those 250 tables need optimising.

What do they store?
<<Do you mean to say we should not set foreign keys to CreatedUserId and UpdatedUserId fields ?>>
Just checking if you really must have 250 foreign keys on the same table... From what I read, the answer is no... 250 foreign keys on the same table is not the same as 250 foreign keys on 250 separate tables. In that case, BOL do not specify any upper limit... In any case, I do not believe there should be 250 tables... If these tables are system generated, then chances are you need to consolidate your schema...

Hope this helps...
<<Sounds like those 250 tables need optimising.>>
In other words the schema needs normalizing ;)
Quite. But if you've got 250 tables that relate to 1, you probably don't know what normalising is ;)

(note - the correct English spelling of normalising, rather than the sub-optimal american version ;)

<<note - the correct English spelling of normalising, rather than the sub-optimal american version>>
Actually, the correct etymology *is* normaliZation.  Codd deliberately used a Z to express the new concept and differentiate it from the general-sense word when he introduced normal forms.  From the 1970 paper *A Relational Model of Data for Large Shared Data Banks*...

//A relation whose domains are all simple can be represented in storage by a two-dimensional column-homogeneous array of the kind discussed above. Some more complicated data structure is necessary for a relation with one or more nonsimple domains. For this reason (and others to be cited below) the possibility of eliminating nonsimple domains appears worth investigating. [see note 4] There is, in fact, a very simple elimination procedure, which we shall call *normalization*.// --> First time used for what it means relationally...

Regards...
Funny thing is Codd was indeed British ;))
You got the American version of the book - it was changed to accommodate ;)

Sorry to author - off topic. It is Friday though....
<<You got the American version of the book - it was changed to accommodate ;) >>
I don't think so...;)
First, it is *not* a book, but a publication made inside the IBM Research Center... Second, I already gave you the reasons for the Z (check out the original transcription): it has nothing to do with accommodation... It *does* say Normalization.  Normalization is not normalisation.  It has a very specific meaning.

Regards...

>> In that case, BOL do [sic] not specify any upper limit <<

You must have a customiZed/incomplete BOL -- the ones I have clearly state that 253 is the limit, for SQL 7.0, 2000 *and* 2005 (2k5 notes that FKs *from* a table are technically unlimited, but strongly recommends not going past 253).

Sadly, it looks as if you really do need this on all those tables, so you will have to add a trigger for each table past the first 126 to verify the userids.

<<You must have a customiZed/incomplete BOL -->>
I draw this information from...

http://msdn2.microsoft.com/en-us/library/ms143432.aspx
//
Foreign key table references *per table* = 253
+
Although a table can contain an unlimited number of FOREIGN KEY constraints, the recommended maximum is 253. Depending on the hardware configuration hosting SQL Server, specifying additional foreign key constraints may be expensive for the query optimizer to process.

To make it short...

per/table <> per/schema
recommended <> absolute

I am curious as to which version of BOL states that 253 is the upper limit per schema...???
Racimo: your link shows that MS recommends not having more than 253 foreign keys in one table. I am still puzzled where the original poster got the idea that more than 253 foreign keys pointing to the same table is bad (as they are not a constraint on the referenced table).
<<I am still puzzled where the original poster got the idea that more than 253 foreign keys pointing to the same table is bad (as they are not a constraint on the referenced table)>>
I don't know.  But my guess is that confusion may arise between the per-table limit/recommendation on the max number of FKs on one hand, and a per-schema limit/recommendation on the other...

Whether to say it's bad or not is risky, because we have only a glimpse of the business requirements.  However, having too many FKs on one table suggests a strong probability of a totally unnormalized structure, as does having too many FKs pointing to the same PK.

Hope this helps...
"Foreign key table references *per table* = 253 + ...<footnote>..."

I, too, took the stated quote to mean that a single table can only reference 253 other tables (the footnote is only for SQL 2005).

Therefore, when this comment appears *later* in the *same* list of limitations:
"REFERENCES per table    253"
I understood that to mean that only 253 references can be made *to* the same table.

I'm guessing it's not that your BOL is incomplete, since you found the earlier limitation; it's that you didn't bother to read it.


>> "<<I am still puzzled where the original poster got the idea that more than 253 foreign keys pointing to the same table is bad (as they are not a constraint on the referenced table)>>
I don't know ...blah blah blah"

Maybe because he got a SQL error when he tried it?  I'm not puzzled by that at all -- when the DBMS gives you an error stating something very clearly, it's almost always true.
<<I, too, took the stated quote to mean that a single table can only reference 253 other tables (the footnote is only for SQL 2005).>>
A terminology issue.

*Foreign key table references PER TABLE* (1st line) mentioning the limitation of 253 means that no more than 253 foreign keys are advised to be put on the SAME table... Else why would MS add the corresponding footnote *Although a table can contain an unlimited number of FOREIGN KEY constraints, the recommended maximum is 253.*, given that more than 253 PER TABLE would create performance problems (referring to INSERT overhead)!!!  Quite frankly, I do not see any confusion here.  253 FK per table means 253 foreign keys on the same table.

For the second line, I am not sure what MS means.  So I will do some testing and let the questioner know.

<<I don't know ...blah blah blah">>
I won't comment on this sarcasm.  I have put a formal complaint about that behavior to Community Help and Support...

https://www.experts-exchange.com/?qid=22715726


<<Maybe because he got a SQL error when he tried it?  I'm not puzzled by that at all -- when the DBMS gives you an error stating something very clearly, it's almost always true.>>
It seems the problem is linked to the fact that 500 foreign keys reference the same primary key, which would exceed the limit in the second line.  Not sure, it seems unclear... I have never had that case...


AFAIK, the FK issue is a SQL limitation and there is no direct way around it.  I don't think the documentation and message are unclear, I think SQL simply has no support for > 253 FKs pointing to one table.

Given that, you will have to use one of the kludgy workarounds:

1) Standard method is to add a trigger for each table past the first 126 to verify the userids; triggers were used before SQL directly supported FK references.  (A minimal sketch of such a trigger follows after this list.)

2) Create intermediate table(s) that get referenced by the actual data tables; the intermediate table(s) then reference back to the master lookup table.  I haven't tried this so can't confirm that it will work.
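
For approach 1), here is a minimal sketch of what such a trigger could look like on one of the child tables (Orders is an illustrative table name; CreatedUserId and UpdatedUserId are the columns described earlier in the thread). Note that, unlike a real FK, this only protects the child side; deletes from Users would need a corresponding trigger there.

CREATE TRIGGER trg_Orders_CheckUserIds
ON Orders
AFTER INSERT, UPDATE
AS
BEGIN
    -- reject the change if any inserted/updated row points at a non-existent user
    IF EXISTS (
        SELECT 1
        FROM inserted i
        WHERE (i.CreatedUserId IS NOT NULL
               AND NOT EXISTS (SELECT 1 FROM Users u WHERE u.UserId = i.CreatedUserId))
           OR (i.UpdatedUserId IS NOT NULL
               AND NOT EXISTS (SELECT 1 FROM Users u WHERE u.UserId = i.UpdatedUserId))
    )
    BEGIN
        RAISERROR('Invalid CreatedUserId or UpdatedUserId', 16, 1)
        ROLLBACK TRANSACTION
    END
END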
ScottPletcher: "AFAIK, the FK issue is a SQL limitation and there is no direct way around it.  I don't think the documentation and message are unclear, I think SQL simply has no support for > 253 FKs pointing to one table."
Here's my script:

USE tempdb

-- referenced ("parent") table with a single row
CREATE TABLE Master (ID int PRIMARY KEY)

INSERT INTO Master VALUES (1)

-- generate CREATE TABLE statements for 32000 child tables,
-- each with an FK pointing back to Master(ID)
DECLARE @i smallint, @iStr varchar(5)
SET @i = 1
WHILE @i < 32001
BEGIN
    SET @iStr = CAST(@i AS varchar(5))
    PRINT 'CREATE TABLE Child' + @iStr + ' (masterID int CONSTRAINT FK_' + @iStr
        + ' FOREIGN KEY (masterID) REFERENCES Master(ID))'
    SET @i = @i + 1
END

After I copy its output into a query window and execute it, I have 32000 child tables all pointing to the same Master table - with no problems (except that the user tables tree loads forever :) )
But the whole point is, when E-R modelling, you are going to be hard pressed to find a necessity for more than a handful of FKs - if you do, your logical model is likely wrong

Good logical model -> good physical model -> well optimised database = good performance
twoboats,

<<But the whole point is, when E-R modelling, you are going to be hard pressed to find a necessity for more than a handful of FKs - if you do, your logical model is likely wrong>>
Actually, as you study relational concepts further, you will realize that ERD modelling can be skipped entirely...

You may want to take a look at the following thread about ERD modelling and the relational model.  I think you will find it interesting.

http://groups.google.com/group/comp.databases.theory/browse_frm/thread/c5de101bc81de3a2/cff9bd1d818844ad#cff9bd1d818844ad

<<Good logical model -> good physical model -> well optimised database = good performance>>
Almost there... Allow me to suggest:
Good conceptual model -> good logical model (fully normalized schema) -> good physical implementation -> good performance

AND

Subjective physical implementation -> poor performance

Hope this helps...