Understanding database size, available space, initial size, autogrowth

Posted on 2008-10-22
Medium Priority
Last Modified: 2012-06-27
I'm a bit confused by four properties on the SQL Server database properties page. Currently our database has the following properties:

In General tab:
Database Size: 35000MB
Available space: 9500MB

Files tab:
Initial size for mdf: 29000MB
Initial size for ldf: 6000MB

I understood that the initial size of the mdf + ldf = database size.
When I increase the initial size of the mdf by 10MB, the Database Size and Available Space also increase by 10MB.

Question 1: Is the following formula correct? Database Size - Available Space = actual used physical disk space for the mdf and ldf files.

Also, about autogrowth: this database is currently set to autogrow by 1MB, which doesn't seem right to me because it may result in frequent expansion of the database. Currently, new records add around 20MB per day. So I guess I either have to turn off autogrowth and grow the database manually by changing Initial Size for the MDF, or set a higher autogrowth value such as 100MB. We already have enough available space (9GB), but I'd like to know how.

Question 2: To manually increase the database size, instead of using autogrowth, do I just increase the Initial Size of the MDF in the Files tab?

Question 3: How much latency does the end user experience when the database grows by 100MB via autogrowth? I tried increasing the size by 100MB through Management Studio; it took less than 1 second, and from the end-user application I didn't notice any freezing or error messages. So from the end user's perspective, the duration of a size increase doesn't seem much different from changing it in Management Studio. I've seen lots of comments from other users here saying that autogrowth, or growing the file manually, may cause a performance lag because the database freezes until the resize is done, so they recommend increasing the size manually during maintenance windows, but based on my test that doesn't seem to apply to me.
Question by:crcsupport
LVL 25

Accepted Solution

DBAduck - Ben Miller earned 800 total points
ID: 22783039
Question 1: The database size is the total disk space used by both the mdf and ldf files (plus any .ndf or secondary files). The available space is the portion of that allocated disk space that is still empty within the files.
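To verify those figures outside the properties dialog, a query along these lines should report the same numbers (the database name here is a placeholder for yours):

```sql
-- Reports total database size and unallocated space for the
-- current database context.
USE YourDatabase;  -- placeholder name
EXEC sp_spaceused;

-- Per-file breakdown: allocated size vs. space actually used.
SELECT name,
       size / 128.0 AS allocated_mb,                        -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files;
```

The difference between `allocated_mb` and `used_mb` per file adds up to the "Available space" shown in the General tab.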

Question 2: That is correct; you just put a larger size in the Initial Size field and it will grow that file.
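The same manual growth can be done in T-SQL. This is a sketch with placeholder database and logical file names (look up your actual logical file name in sys.database_files):

```sql
-- Grow the data file to 30,000 MB, equivalent to raising
-- "Initial Size" in the Files tab of SSMS.
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, SIZE = 30000MB);
```

Note that SIZE here is the new total size, not an increment, and it cannot be smaller than the file's current size.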

Question 3: Autogrowth is or should be used as a fail safe, but not in my opinion should it be used as a normal way to grow files.  Normally the growth is by percentage or by MB and that can and will promote fragmentation of the disk file instead of contiguousness.  The freezing or impact of autogrowth is not that it takes a long time to do it, but that it needs to have a file handle lock to grow the file while it is supposed to be being owned by SQL Server and accessed.  The actual initialization of the file is not the consequential part.  It is the interruption of a highly transacted system and having an exclusive file growth operation.  It is costly even though it may not seem so from the outside.
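To keep autogrowth purely as a fail-safe with a sensible fixed increment, something like this would replace the 1 MB setting (names are placeholders):

```sql
-- Replace the 1 MB autogrowth with a fixed 100 MB increment,
-- keeping autogrow only as a safety net behind manual sizing.
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 100MB);
```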

These answers are from my experience and background, and I welcome any differing or concurring opinions from others.
LVL 51

Assisted Solution

by:Mark Wills
Mark Wills earned 800 total points
ID: 22783608
OK, those database files have two sizes: one is the amount of physical disk they have been allocated, and the other is how much of that allocation they actually consume. From a "disk free" perspective, if space has been allocated to the files, then it is not free.

You are quite correct about pre-allocating size, and also don't disable autogrow; just design/calculate the size to minimise autogrowth issues. The last thing you want is a high dependency on the slowest part of the system, i.e. disk IO. So try to accurately calculate the number and types of transactions that happen over the course of a period (I use weekly, and have a weekly maintenance plan to reorganise, reindex, and partially shrink, etc.), and try to accommodate that activity, keeping in mind that in-place updates consume log space but not so much database space, whereas appending new transactions will consume both.

Now, some of the allocated space also depends on your recovery model. For example, SIMPLE pretty much takes care of the log file for you, but your recovery is only from the last full backup; FULL offers high recoverability (assuming frequent log backups during the day), but means you need to manage the transaction log file yourself (or through maintenance plans).

You can adjust size and growth in SSMS by right-clicking the database and going into Properties > Files. For immediate attention to size issues (e.g. a "full transaction log"), you can go into (right-click on the database) Tasks > Shrink > Files and use it interactively...
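That interactive shrink task corresponds roughly to a DBCC command like the one below; treat it as a sketch with placeholder names, and use shrinking sparingly since it promotes fragmentation:

```sql
-- Shrink the log file down toward a 1,000 MB target.
-- The logical file name is a placeholder; find yours in sys.database_files.
DBCC SHRINKFILE (YourDatabase_Log, 1000);
```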

There are quite a few postings on EE about transaction logs and database size - might be worth having a look...


Assisted Solution

by:Hemantgiri S. Goswami
Hemantgiri S. Goswami earned 400 total points
ID: 22784720
Question 1: IMO, Database Size - Available Space = the physical disk space actually used within the mdf and ldf files' allocation.
Question 3: IMO, we should keep autogrow enabled (provided you created the database with enough initial size) and set it to a fixed number of MB. In your case it won't take much time, as the database itself is not big; but consider a situation where the database is hundreds of GB or a TB, where autogrow by percentage would cause a panic. I suggest referring to http://www.extremeexperts.com/SQL/Articles/AutoGrowth.aspx for the autogrow options.

LVL 51

Expert Comment

by:Mark Wills
ID: 22786414
OK, back again... I realised my previous posting kind of answered these, but not those specific questions...

Q1: Hopefully answered - physical disk = allocated size, and the amount of space remaining in that allocation (for growth) is the available space.
Q2: Right-click the database and go into Properties; a window pops up. On the left inside that window is the "Files" option - change it there. Or you can use T-SQL code.
Q3: Autogrowth can cause problems because it locks various resources while it runs - it is a bit different from simply allocating more room as you can do interactively, and it depends on a wide range of other activities. In the real world, it has memory, log files, and a whole pile of concurrency to contend with. So on a relatively lightly loaded system you will not notice; on medium-to-heavy days, if it starts autogrowing, it might be doing so often depending on your settings, and that can lead to heavy fragmentation - not just of the physical disk, but also of things like index pages - and then you start to get heavier resource usage and the whole thing becomes impacted. You might not notice for a while, but when users start to complain about performance it really is a bit late, and then you have to do some real work - reactively rather than proactively managing the system...

So, yes, look at disk activity, get some statistics on transactional volumes, try to predict growth, and design your disk allocation with that growth in mind; then set some realistic autogrow characteristics.

The other thing you must consider is the recovery model of the database. With FULL recovery - which is the default - you really must run regular transaction log backups. That will release some of the consumed allocation, but only back to the last "dirty" page - it does not reorganise pages as such, just commits and releases that contiguous space. Transaction logs can reuse that space very effectively, so you need to keep reasonably high "available" space; the log can grow on you even though it may appear there is sufficient "available" space. Run the full backup nightly and include the transaction log as part of that full backup in a maintenance plan; that will help keep it under control. With FULL recovery you MUST do frequent transaction log backups (like every quarter, half, or full hour - whatever granularity of recovery is required); those logs are the most verbose and can therefore get big. But it also means your calculation is a bit different now: it must be able to handle the peak growth within any one transaction log backup interval (then double it).
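A scheduled log backup under FULL recovery boils down to a statement like this (the database name and backup path are placeholders):

```sql
-- Back up the transaction log, which lets SQL Server reuse the
-- inactive portion of the log file instead of growing it.
BACKUP LOG YourDatabase
TO DISK = 'D:\Backups\YourDatabase_log.trn';
```

Running this on the interval you choose (e.g. every 30 minutes) is what keeps the log's consumed space in check.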

But there is one more thing to consider, and that is the size of TEMPDB; it too is impacted by certain database activities, so you need to consider its sizing as well when looking at disk capacities. Now, the system databases are all pretty much SIMPLE recovery (even master, which can be set to bulk-logged or full, still acts as simple), so log files are not quite the issue there, but you still want to optimise the tempdb size to match your individual database activities.


Author Closing Comment

ID: 31509060
I thank each one of you. My decision was to set autogrowth to 100MB, which covers about 4-5 days, as a fail-safe, to check the file sizes on a regular basis, and to increase the size manually so that the db doesn't get impacted by autogrowth during office hours. I also recently changed the recovery model to simple, so the transaction log issue no longer complicates things for me. I raised the points from 250 to 500 and split them among you all.
