I'm currently auditing my table definitions to try to improve the responsiveness of the server in terms of memory consumption and I/O.
In that respect I have the following question.
Take the same dataset for a character field whose possible values are evenly distributed in length between 1 and 10 characters, but that could occasionally contain longer values as well.
What is the difference in impact on the server (and why) between defining this field with a tight maximum length and a much wider one, where the second definition buys me more breathing space if a larger value has to be inserted in that field?
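Concretely, a hypothetical pair of definitions of the kind I mean (the column name and the two sizes are just illustrative):

```sql
-- Illustrative only: same data, two different maximum lengths.
CREATE TABLE dbo.CodesTight (code varchar(10) NOT NULL);  -- fits the typical 1-10 character values exactly
CREATE TABLE dbo.CodesRoomy (code varchar(50) NOT NULL);  -- extra breathing space for the occasional longer value
```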
To take it to the extreme, a varchar(8000) definition would buy me even more breathing space... (or, so as not to exceed the 8000-bytes-per-page limit, varchar(x) where every varchar field gets size x = (8000 - size of non-varchar fields) / number of varchar fields).
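That last sizing rule can be sketched as a quick calculation (the 8000-byte figure and the example row layout are my assumptions, ignoring per-row overhead):

```python
# Sketch of the sizing rule above: split the page space left over after the
# fixed-width columns evenly across the varchar columns.
PAGE_LIMIT = 8000  # assumed usable bytes per page, ignoring row overhead

def max_even_varchar_size(non_varchar_bytes: int, varchar_column_count: int) -> int:
    """Largest x such that varchar_column_count columns of varchar(x)
    plus the fixed-width columns still fit within PAGE_LIMIT bytes."""
    return (PAGE_LIMIT - non_varchar_bytes) // varchar_column_count

# Hypothetical row: 200 bytes of fixed-width columns and 3 varchar columns.
print(max_even_varchar_size(200, 3))  # 2600
```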