• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 716

SQL database autogrowth

I'm setting up some management processes for a SQL Server 2008 database that a client has been using to store data from several legacy systems so it can be accessed by a BI program they have.

The database is currently set to grow by 10 MB at a time, but they only upload their legacy data about once a month, and during that process they load about 1.5 GB of new data.

Should I simply increase the AutoGrowth size to 1,500 MB, or is there a way to grow the database manually at the beginning of the process, via a stored procedure or a pass-through query?
Asked by: Dale Fye
2 Solutions
 
DcpKingCommented:
Ouch! That means that it'll grab 10 MB from the OS 150 times each go! Quite aside from being really slow, just imagine the fragmentation that it'll produce! I'll bet that you've got a similarly small autogrow increment for the transaction log (1%, maybe).
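
You can check what both increments are currently set to straight from the catalog - a quick sketch against sys.database_files (run it in the database in question):

    -- Current size and autogrowth setting for each file in this database.
    -- size and growth are in 8 KB pages (1280 pages = 10 MB) unless
    -- is_percent_growth = 1, in which case growth is a percentage.
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;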

Setting the autogrow to 2 GB would be a reasonable way to go. Does the data stay there forever, so the DB is continually growing, or does it go away during the course of the month, only to be replaced by a new batch? If the latter, then expanding the file by maybe 3 GB, letting the expansion happen, and then setting the autogrow back to something small that would never be used might be a good way (a sketch follows below the link). However, get the database into shape first, or it won't make much difference. This article by Greg Robidoux may help:

http://www.mssqltips.com/sqlservertip/1165/managing-sql-server-database-fragmentation/
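
For the expand-then-throttle-back approach, something along these lines would do it. The database and logical file names here are placeholders for your own, and note that SIZE is an absolute target, not an increment:

    -- One-time manual expansion before the monthly load.
    -- SIZE is the new total size, so set it to current size + expected load.
    ALTER DATABASE LegacyBI
        MODIFY FILE (NAME = LegacyBI_Data, SIZE = 5GB);

    -- Afterwards, drop the autogrow back to a small safety net.
    ALTER DATABASE LegacyBI
        MODIFY FILE (NAME = LegacyBI_Data, FILEGROWTH = 100MB);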

hth

Mike
 
Dale FyeAuthor Commented:
Mike,

I've been doing Access development for my client for a couple of years, but their accounting department brought someone in to build this SQL database for them, and he did what I would consider a terrible job. So I'm trying to clean up his mess. Both the database and transaction log autogrowth increments were set to 10 MB, and there were no indices on any of the tables.

The data is generally loaded early in the month, and they plan on keeping it all. Once I get a little more familiar with how they are using the BI software and what they are actually looking at, I might recommend that they migrate some of the older data out into an archive, but for now they plan on letting it grow.

It's been a while since I've used SS (maybe 2002) and even then, we had a DBA to do most of the technical stuff, so I'm new to the management part of all this.  I'll take a closer look at the reindex stuff.

But my real question was: is it better to let SQL Server grow the file when it reaches the point that it needs to, or, knowing the volume of data that will be loaded, is it better to manually grow the file at the beginning of the upload process?
 
JAT-DBACommented:
I would pre-allocate the space you will need for the year. We have similar environments where we load data from a legacy system into SQL Server.
The good thing here is that you know the size you need. Now it's up to you to decide how far ahead you want to pre-allocate your space. Based on your figure of 1,500 MB of new data monthly, you are looking at roughly 17.6 GB per year.
So you may consider a pre-allocation of 18 GB with a 2 GB autogrowth, just to be on the safe side (a sketch follows below).
Then after the year, do another pre-allocation by adding another year's worth to the current size.
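
The pre-allocation itself is just one statement - the database and logical file names here are placeholders for your own:

    -- Pre-allocate a year's worth of space up front. SIZE is absolute,
    -- not an increment; FILEGROWTH is the safety-net autogrow.
    ALTER DATABASE LegacyBI
        MODIFY FILE (NAME = LegacyBI_Data, SIZE = 18GB, FILEGROWTH = 2GB);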
 
DcpKingCommented:
Grow the files in one fell swoop to what you expect they will need over, say, a year - plus a 10% margin! Then set the autogrow to the same amount. The idea is that the file never grows in little dribs and drabs, but in one contiguous lump that it can then work in, because otherwise the SQL Server system is at the mercy of the operating system and whatever the fragmentation history of the disk is.

Ideally, try to give SQL Server one huge file to play with that isn't fragmented at all.
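
To work out how big that one file needs to be, compare what's allocated against what's actually in use - roughly like this, run inside the database itself:

    -- Allocated vs. used space per file, in MB (both figures are 8 KB pages).
    SELECT name,
           size * 8 / 1024 AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
    FROM sys.database_files;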

hth

Mike
 
Dale FyeAuthor Commented:
I initially did the 2 GB thing, and that worked well, but I have subsequently decided to go with a 10 GB autogrow.

Given that the initial 9 months of data was created using the 10MB autogrowth, I assume that data is fragmented all over the place.  Is there a way to reorganize that data at this point?
 
DcpKingCommented:
Well, there's fragmentation of the physical file, and fragmentation of the content. Let's just address the first.

Firstly, please go and read this article by Brad McGehee
http://www.bradmcgehee.com/2011/06/do-you-ever-physically-defragment-your-sql-server-mdf-ldf-files/

It doesn't really matter how you achieve the defrag - obviously the perfect option would be to move everything off the drive, reformat it, and copy back all the files, largest first. There are lots of defrag programs, and they're all slow - it's just a side effect of what they do - so the emptier the disk when you start, the quicker you'll be done.


After that, read this article where Kevin Kline has some good advice.

http://www.sqlmag.com/article/news2/does-your-database-need-a-defrag-

This kind of defragging is internal to the database itself, and you can and should do it periodically as part of your regular maintenance; a starting point is sketched below.
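
Something like this will show you where the internal fragmentation is. The usual rule of thumb is REORGANIZE for moderate fragmentation (roughly 5-30%) and REBUILD above that - those thresholds are conventions, not hard rules, and the index and table names in the last two statements are examples only:

    -- Fragmentation per index in the current database (LIMITED mode is cheap).
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id = ips.index_id
    WHERE ips.page_count > 100   -- ignore trivially small indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Then, per index:
    ALTER INDEX IX_Example ON dbo.SomeTable REORGANIZE;   -- lighter, online
    ALTER INDEX IX_Example ON dbo.SomeTable REBUILD;      -- heavier, fresh pages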

Hope this all helps

Mike
