Clarification on SQL Data Size

chokka (asker):

When assigning varchar to the FirstName field, the size is set to 60, but I wish to make it 2000.

What is the difference between increasing the size of a datatype to its maximum versus keeping it at what is actually needed?
Gary4:

If you set the size to 60, the number of characters will be limited to 60. If you set it to 2000, it will be limited to 2000. The actual amount of storage used will be the same in either case, and will vary depending on the number of characters stored.
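This can be checked directly. A minimal T-SQL sketch (table and column names are illustrative): DATALENGTH reports the bytes actually stored, which is the same regardless of the declared maximum.

```sql
-- Storage for a varchar value depends on the data, not the declared max.
CREATE TABLE dbo.NameTest (
    Name60   varchar(60),
    Name2000 varchar(2000)
);

INSERT INTO dbo.NameTest (Name60, Name2000)
VALUES ('John', 'John');

-- Both columns store the same 4 bytes for 'John'.
SELECT DATALENGTH(Name60)   AS Bytes60,
       DATALENGTH(Name2000) AS Bytes2000
FROM dbo.NameTest;
```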
Aneesh:
When you query the field, SQL has to allocate the maximum number of bytes, in this case 2000, whether the field contains 1 character or 2000 characters. That's a waste of memory, so it's best practice to keep the size to the minimum needed.
Aneesh,

re: "that's a waste of memory"
That is true before an entry is made into the field, but not after it is updated, true?

So, say we start with 20 columns, all varchar(2000); as we enter data into these 20 columns, the database size ought to shrink for the first few rows. Does this make sense?
I think Aneesh is talking about when a query is run, not about the data in the storage file. I was talking about the storage file.
chokka (asker):

@eghtebas - Can you please elaborate on your explanation?
chokka (asker):

Our database is growing drastically; a few tables have more than a million records. I wonder how this impacts performance, size, etc.
You can't create a usable index on a varchar(2000) column; SQL Server limits index key columns to 900 bytes (1700 bytes for nonclustered indexes in SQL Server 2016 and later).
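A short sketch of what actually happens (names are illustrative): SQL Server lets the CREATE INDEX statement succeed with a warning, but any insert whose key value exceeds the limit then fails at run time.

```sql
CREATE TABLE dbo.Wide (FirstName varchar(2000));

-- Succeeds, but raises a warning that the maximum key length
-- (900 bytes here) could be exceeded; an INSERT of a value longer
-- than the limit into FirstName will then fail.
CREATE INDEX IX_Wide_FirstName ON dbo.Wide (FirstName);
```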
@chokka,

My post was a question to Aneesh (sorry that I was not clear about it).

He writes that varchar(2000) is a waste of memory (I agree). But I am asking how long the memory space remains wasted, because as soon as we enter a string like "ABC" into it, 1997 spaces are released back (= 2000 - 3).

I am asking Aneesh for his feedback/comment on this:

If we start our database with 20 columns, all varchar(2000), we have lots of wasted memory space.
But as we make entries, all with a few characters, we then have lots of memory space redeemed. All this is not to provide new information; I am just reacting and responding to Aneesh's statement. I myself am looking for an answer, because I am following this question and in the process trying to help you out.

Mike
Say I have a table with one integer (4 bytes) and one varchar(1020) field; discarding all the header information, let us assume varchar(1020) takes the full 1020 bytes. So the maximum row size is 1024 bytes.

Say you have 100 records in that table, of variable sizes from 1 character up to, say, 60 characters. When you query that table, the SQL engine will allocate 100 times 1024 bytes of memory, that is 100 KB.
Depending on how much memory your server has and the current transactions, to free up that much memory the query engine may sometimes need to evict other information from the cache, which involves another operation.
So it's always better to define proper datatypes with an appropriate size.
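The memory a query actually reserves can be observed while it runs. A minimal sketch using a standard SQL Server DMV (output depends entirely on your workload):

```sql
-- Shows how much workspace memory each active query requested
-- versus what it was actually granted.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb
FROM sys.dm_exec_query_memory_grants;
```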
SOLUTION by Gary4 (content available to members only).
@Aneesh

That doesn't sound very efficient. Are you talking about data fields or key fields?

If data fields, are you talking about all fields in the table or only those in the SELECT statement?
Just the data fields; the table I mentioned has only 2 fields and, say, 100 records.

Only the fields you mention in the query; in my example, I want all the records.
SOLUTION (content available to members only).
SOLUTION (content available to members only).
That's right, Gary. You need to keep them minimal. Usually, for up to 10 characters I use char(10) instead of varchar(10); beyond that size, it's varchar(n) or varchar(max).
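The trade-off between char and varchar for short values can be seen with DATALENGTH: char pads to its full declared length with trailing spaces, while varchar stores only the data (plus a small length overhead per value).

```sql
DECLARE @c char(10)    = 'ABC';
DECLARE @v varchar(10) = 'ABC';

-- char(10) always stores 10 bytes (padded with spaces);
-- varchar(10) stores only the 3 bytes of 'ABC'.
SELECT DATALENGTH(@c) AS CharBytes,    -- 10
       DATALENGTH(@v) AS VarcharBytes; -- 3
```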
SQL Server doesn't automatically allocate memory for the maximum possible size; instead it uses estimates and/or table statistics to determine how much memory to allocate. However, a large number of oversized columns certainly could affect this estimate, causing it to reserve more memory than it really needs.

Also, the max length is a form of documentation for those who use the table. Arbitrarily using some giant number will make it very difficult for anyone to get even a rough estimate of the maximum size of a column.
SOLUTION (content available to members only).
ASKER CERTIFIED SOLUTION (content available to members only).
chokka (asker):

Thank you all.