
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 10292

Understanding database size, available space, initial size, autogrowth

I'm a bit confused by four properties on the SQL Server database properties page. Currently our database has the following properties:

In General tab:
Database Size: 35000MB
Available space: 9500MB

Files tab:
Initial size for mdf: 29000MB
Initial size for ldf: 6000MB

I understood that initial size of mdf + initial size of ldf = database size.
When I increase Initial size for mdf by 10MB, the Database Size and Available Space are also increased by 10MB.

Question 1: Is the following formula correct? Database Size - Available Space = actual physical disk space used by the mdf and ldf files.

And also, regarding autogrowth: this database is currently set to autogrow by 1MB, which doesn't seem right to me because it may result in frequent expansion of the database. Currently the daily volume of new records being entered is around 20MB. So I guess I either have to turn off autogrowth and grow manually by changing Initial Size for the mdf, or set a higher autogrowth value such as 100MB. We already have enough available space (9GB), but I just want to know how this should work.

Question 2: To manually increase the database size, without using autogrowth, do I just increase the Initial Size of the MDF in the Files tab?

Question 3: How much latency does the end user experience when the database is grown by 100MB via autogrowth? I tried increasing the size by 100MB through Management Studio, and it took less than a second; from the end-user application I also didn't notice any freezing or error messages. So from the end user's perspective, the duration of the resize doesn't seem much different from a manual change in Management Studio. I've seen lots of comments from other users here saying that autogrowth (or growing manually) may cause a performance lag because the database freezes until the resize is done, and recommending manual growth during maintenance windows, but based on my testing that doesn't seem to apply to me.
3 Solutions
DBAduck - Ben MillerPrincipal ConsultantCommented:
Question 1: The database size is the total disk space used by both the mdf and ldf files (plus any .ndf or secondary files). The available space is the space within that allocated disk space that is left empty in the files.
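A quick way to verify these numbers outside the properties dialog is to query the file metadata directly. A minimal sketch (run it in the context of the database in question; `size` and `SpaceUsed` are reported in 8 KB pages, so dividing by 128 converts to MB):

```sql
-- Allocated vs. used space per file, converted from 8 KB pages to MB.
SELECT
    name,
    type_desc,
    size / 128.0                                     AS allocated_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;
```

`EXEC sp_spaceused;` gives a similar summary, and the totals should line up with the Database Size and Available Space fields in SSMS.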

Question 2: That is correct - you just put a larger size in Initial Size and it will grow that file.
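The same change can be made in T-SQL. A sketch, where `MyDb` and the logical file name `MyDb_Data` are placeholders (look up the real logical names in `sys.database_files`):

```sql
-- Grow the data file manually; SIZE is the new total size and must be
-- larger than the file's current size.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 30000MB);
```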

Question 3: Autogrowth is, or should be, used as a fail-safe; in my opinion it should not be the normal way to grow files. Growth is normally by percentage or by MB, and that can and will promote fragmentation of the disk file instead of contiguous allocation. The freezing or impact of autogrowth is not that it takes a long time; it is that SQL Server needs an exclusive file-handle lock to grow the file while that file is supposed to be owned and actively accessed by SQL Server. The actual initialization of the file is not the consequential part - it is the interruption of a highly transacted system by an exclusive file-growth operation. It is costly even though it may not seem so from the outside.
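If you keep autogrowth as a fail-safe, a fixed-MB increment avoids the problems of percentage growth. A sketch, with placeholder database and logical file names:

```sql
-- Set a fixed 100 MB growth increment on the data and log files
-- (database and logical file names are placeholders).
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 100MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  FILEGROWTH = 100MB);
```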

These answers are from my experience and background and I welcome any differing or same opinions by others.
Mark WillsTopic AdvisorCommented:
OK, those database files have two sizes: one is the amount of physical disk they have been allocated, and the other is how much of that allocation they actually consume. From a "disk free" perspective, if space has been allocated to the files, it is not free.

You are quite correct about pre-allocating size, and also don't rule out autogrow - just design/calculate the size to minimise autogrowth issues. The last thing you want is a high dependency on the slowest part of the system, i.e. disk IO. So try to accurately calculate the number and types of transactions that happen over the course of a period (I use weekly, and have a weekly maintenance plan to re-organise, reindex, and "part" shrink etc), and try to accommodate that activity - keeping in mind that in-place updates consume log space but not so much database space, whereas appending new transactions will consume both.

Now, some of the allocated space also depends on your recovery model. SIMPLE pretty much takes care of the log file for you, but your recovery point is the last full backup. FULL offers high recoverability (assuming frequent log backups during the day), but means you need to manage the transaction log file yourself (or through maintenance plans).

You can adjust size and growth in SSMS by right-clicking the database and going into Properties > Files. For immediate attention to size issues (e.g. a "full transaction log"), you can right-click the database, go to Tasks > Shrink > Files, and use it interactively...
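For reference, the interactive shrink has a T-SQL equivalent. A sketch with placeholder names (the target size is in MB; use shrink sparingly, since it can fragment the file):

```sql
USE MyDb;
-- Shrink the log file down to roughly 1000 MB.
DBCC SHRINKFILE (MyDb_Log, 1000);
```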

There are quite a few postings on EE about transaction logs and database size - it might be worth having a look...

Hemantgiri S. GoswamiModerator - MSDN ForumsCommented:
Question 1: IMO, Database Size - Available Space = the physical disk space actually used within the allocated mdf and ldf files.
Question 3: IMO, we should keep autogrow enabled (provided you created the database with a large enough initial size), and set it to a fixed number of MB. In your case it won't take much time, as the database itself is not big; but consider a situation where the database is hundreds of GB or a TB - autogrow by percentage would be a panic. I suggest referring to http://www.extremeexperts.com/SQL/Articles/AutoGrowth.aspx for the autogrow options.

Mark WillsTopic AdvisorCommented:
OK, back again... and realised my previous posting kind of answered, but not specifically those questions...

Q1: hopefully answered - physical disk = allocated size, and the amount of space remaining in that allocation (for growth) is the available space.
Q2: right-click the database and go into Properties - a window pops up. On the left inside that window is the "Files" page - change it there... Or you can use T-SQL code.
Q3: autogrowth can cause problems because it locks various resources while it runs - it is a bit different from simply allocating more room interactively, and the impact depends on a wide range of other activity. In the real world it has memory, log files, and a whole pile of concurrency to contend with. On a relatively lightly loaded system you will not notice it. On medium-to-heavy days, if the database starts autogrowing, it might do so often depending on your settings, and that can lead to heavy fragmentation - not just of the physical disk, but also of things like index pages - and then you start to get heavier resource usage and the whole system becomes impacted. You might not notice for a while, but by the time users start to complain about performance it really is a bit late, and then you have to do some real work - reacting rather than proactively managing the system...
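One way to see how often autogrowth is actually firing is to read the autogrow events from the default trace, which is enabled out of the box. A sketch:

```sql
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, DatabaseName, FileName,
       EventClass,
       Duration / 1000 AS duration_ms,        -- Duration is in microseconds
       IntegerData * 8 / 1024 AS growth_mb    -- IntegerData is pages grown
FROM sys.fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass IN (92, 93)
ORDER BY StartTime DESC;
```

Frequent small entries here are a sign the growth increment is too small.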

So, yes: look at disk activity, get some statistics on transactional volumes, try to predict growth, design your disk allocation with that growth in mind, and then set some realistic autogrow characteristics.

The other thing you must consider is the recovery model of the database. With FULL recovery - which is the default - you really must run regular transaction log backups. That will release some of the consumed allocation, but only back to the last "dirty" page - it does not reorganise pages as such, it just commits and releases that contiguous space. Transaction logs can re-use that space very effectively, so you need to keep reasonably high "available" space; the log can grow on you even though it may appear there is sufficient "available" space. Run the full backup nightly and include the transaction log as part of that full backup in a maintenance plan - that will help keep it under control. With FULL recovery, you MUST do frequent transaction log backups (every quarter, half, or full hour - whatever granularity of recovery is required); those logs are the most verbose, and therefore can get big. It also means your calculation is a bit different: the log must be able to handle the peak growth within any one of the transaction log backup intervals (then double it).
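A minimal sketch of such a log backup under FULL recovery (database name and path are placeholders; a real schedule would use unique file names per backup, or a maintenance plan):

```sql
-- Back up the transaction log so its internal space can be reused.
BACKUP LOG MyDb
TO DISK = N'D:\Backups\MyDb_log.trn';
```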

But there is also one more thing to consider, and that is the size of TEMPDB; it too is impacted by certain database activities, so you need to consider its sizing as well when looking at disk capacities. Now, the system databases are all pretty much SIMPLE recovery (even master, which can be set to bulk-logged or full, still acts as simple), so log files are not quite the issue there, but you still want to optimise the tempdb size to match your individual database activities.
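To check how tempdb is currently sized and how it grows, a sketch:

```sql
-- tempdb file sizes and growth settings; growth is in 8 KB pages
-- unless is_percent_growth = 1.
SELECT name,
       size / 128.0 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```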

crcsupportAuthor Commented:
I thank each one of you. My decision was to set autogrowth to 100MB, which covers about 4-5 days, as a fail-safe; check the file sizes on a regular basis; and increase the size manually so the database doesn't get impacted by autogrowth during office hours. I also recently changed the recovery model to SIMPLE, so the transaction log is no longer a complication for me. I lifted the points from 250 to 500 and split them among you all.
