Clarification on SQL Data Size

When assigning VARCHAR to the FirstName field, the size is set to 60. I wish to keep the size as 2000.

What is the difference between increasing the size of a datatype to its maximum and keeping it at the actual need?

If you set the size to 60, the number of characters will be limited to 60.  If you set it to 2000, it will be limited to 2000.  The actual amount of storage used will be the same, and will vary depending on the number of characters stored.
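A quick way to see this (a sketch, assuming SQL Server and a hypothetical scratch temp table) is to compare DATALENGTH across two declared sizes:

```sql
-- Hypothetical temp table: storage depends on the data, not the declared maximum
CREATE TABLE #NameTest (
    name60   VARCHAR(60),
    name2000 VARCHAR(2000)
);

INSERT INTO #NameTest (name60, name2000)
VALUES ('John', 'John');

-- Both columns report 4 data bytes, regardless of the declared length
SELECT DATALENGTH(name60)   AS bytes60,
       DATALENGTH(name2000) AS bytes2000
FROM #NameTest;

DROP TABLE #NameTest;
```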
Aneesh RetnakaranDatabase AdministratorCommented:
When you query the field, SQL has to allocate the maximum number of bytes, in this case 2000, no matter whether the field contains 1 character or 2000 characters.  That's a waste of memory, so it's best practice to keep the size to a minimum.
Mike EghtebasDatabase and Application DeveloperCommented:

re:> that's a waste of memory
True before an entry is made into this field, but not after it is updated, true?

So, say we start with 20 columns, all varchar(2000); as we enter data in these 20 columns, the db size ought to shrink for the first few rows. Does this make sense?

I think Aneesh is talking about when a query is run, not about the data in the storage file.  I was talking about the storage file.
chokkaStudentAuthor Commented:
@eghtebas - Can you please elaborate on your explanation?
chokkaStudentAuthor Commented:
Our database is growing drastically; a few tables have more than a million records. I wonder how this impacts the performance, size, etc.
Aneesh RetnakaranDatabase AdministratorCommented:
You can't create an index on a VARCHAR(2000) column.
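For context, SQL Server of this era limits index keys to 900 bytes, so an index on a wide varchar column is at best fragile. A sketch against a hypothetical table (the CREATE INDEX itself succeeds with a warning, but inserts that push the key past 900 bytes will fail):

```sql
-- Hypothetical table to illustrate the 900-byte index key limit
CREATE TABLE WideKey (
    id   INT IDENTITY PRIMARY KEY,
    note VARCHAR(2000)
);

-- Raises a warning: the key CAN exceed the 900-byte maximum, and any
-- INSERT/UPDATE that makes the key longer than 900 bytes will then fail
CREATE INDEX IX_WideKey_note ON WideKey (note);
```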
Mike EghtebasDatabase and Application DeveloperCommented:

My post was a question to Aneesh (sorry that I was not clear about it).

He writes that varchar(2000) is a waste of memory (I agree). But I am asking how long the memory space remains wasted, because as soon as we enter a string like "ABC" in it, 1997 bytes are released back (= 2000 - 3).

I am asking Aneesh for his feedback/comment on the following:

If we start our database with 20 columns, all varchar(2000), then we have lots of wasted memory space.
But as we make entries, all with few characters, we then have lots of memory space redeemed.  All this is not to provide new information; I am just reacting and responding to the statement from Aneesh. I myself am looking for an answer because I am following this question, while of course trying to help you out in the process.

Aneesh RetnakaranDatabase AdministratorCommented:
Say I have a table with one integer (4 bytes) and a varchar(1020) field. Discarding all the header information, let us assume varchar(1020) takes the full 1020 bytes, so the max row size is 1024 bytes.

Say you have 100 records in that table of variable sizes, from 1 character through, say, 60 characters. When you query that table, the SQL engine will allocate 100 times 1024 bytes of memory, that is, 100 KB.
Depending on how much memory you have on your server and the transactions, sometimes to free up this much memory the query engine may need to evict some other information from the cache. So this could involve another operation.
So it's always better to define proper datatypes with an appropriate size.
Ok, you are not asking me, but I stand by what I said regarding storage.

varchar [ ( n | max ) ]
    Variable-length, non-Unicode string data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes.

I take that to mean that a null value, having a length of zero, would only use 2 bytes.

Now, what happens during a query I do not know; it would only last for the duration of the query and probably only use local memory for that query.

I really don't see a lot of wasted space in the storage file.
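As a rough check of the quoted storage rule (a sketch using a hypothetical temp table): DATALENGTH reports only the data bytes actually stored, so the fixed per-value overhead is not visible there, but short values clearly do not consume the declared maximum:

```sql
-- NULL vs empty string vs a short value in a VARCHAR(2000) column
CREATE TABLE #NullTest (v VARCHAR(2000));

INSERT INTO #NullTest (v)
VALUES (NULL), (''), ('ABC');

-- DATALENGTH returns NULL for NULL, 0 for '', and 3 for 'ABC':
-- only the characters actually stored consume data bytes
SELECT v, DATALENGTH(v) AS data_bytes
FROM #NullTest;

DROP TABLE #NullTest;
```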

That doesn't sound very efficient.  Are you talking about data fields or key fields?

If data fields, are you talking about all fields in the table or only those in the select statement?
Aneesh RetnakaranDatabase AdministratorCommented:
Just talking about the data fields. The mentioned table has only 2 fields and, say, 100 records.

Only the fields you mention in the query; in my example, I want all the records.
Mike EghtebasDatabase and Application DeveloperCommented:
We mixed up the storage issue Aneesh first brought up with performance (query). I hope someone can comment on the scenario I built around the storage aspect of this question. The performance issue is a totally different issue.

Hmm, something to consider.  I don't go around allocating unrealistically large sizes, but I haven't had a reason to limit them to very small sizes either.

So this would be an argument to keep them down, as well as to limit the size of the result set.
Aneesh RetnakaranDatabase AdministratorCommented:
That's right, Gary. You need to keep them minimal. Usually, for up to 10 characters I use CHAR(10) instead of VARCHAR(10); beyond that size, it's VARCHAR(n) and VARCHAR(MAX).
Scott PletcherSenior DBACommented:
SQL doesn't automatically allocate memory space for the maximum possible size; instead it uses estimates and/or table stats to determine how much memory to allocate.  However, a large number of oversized columns certainly could affect this estimate, causing it to reserve more memory than it really needs.

Also, the max length is a form of documentation for those who use the table.  Arbitrarily using some giant number will make it very difficult for anyone to get even a rough estimate of the max size of a column.
Aneesh RetnakaranDatabase AdministratorCommented:
Here is an example; you need the Actual Execution Plan.

use tempdb;

--- Create two identical tables, differing only in the declared varchar length
CREATE TABLE T2000 (
    number INT,
    name2000 VARCHAR(2000)
);

CREATE TABLE T60 (
    number INT,
    name60 VARCHAR(60)
);

-- Populate both tables with the same data
INSERT INTO T2000 (number, name2000)
SELECT number, name
FROM master..spt_values;

INSERT INTO T60 (number, name60)
SELECT number, name
FROM master..spt_values;

--- Show the Actual Execution Plan before running these
SELECT number, name2000
FROM T2000
ORDER BY number;

SELECT number, name60
FROM T60
ORDER BY number;

On the execution plan, click the 'SELECT' operators on both queries and look for Memory Grant. Do the same for the 'Sort' operator and check the Estimated Row Size.

Scott PletcherSenior DBACommented:
The estimated row size is ~1000 bytes, NOT 2000+ bytes.  As I stated, SQL does not automatically assume the max length but uses an estimated size; a larger declared size will, of course, result in a larger estimate.

chokkaStudentAuthor Commented:
Thank you all ..
Microsoft SQL Server 2008
