Convert SQL 2000 DB/LOG from autogrow/autoshrink to fixed size

Hello,

I have a SQL 2000 Server where all the databases have been set to autogrow and autoshrink. I am trying to eke every last drop of performance out of the box, so I want to change the databases and logs to a fixed size.

Can anyone help me with the procedure to do this? I have plenty of disk space: the OS sits on a RAID1, the logs sit on a RAID1, and the DBs sit on a RAID10. The app was coded so long ago that it is impossible to upgrade it to 2005, 2008, or 2012.

Thanks much!
LVL 1
hoffmanconsultingllcAsked:
QlemoBatchelor, Developer and EE Topic AdvisorCommented:
I agree on all but one point. After setting the starting size of each DB, run a Rebuild Index maintenance task. For the recurring task you can use an index reorganize (defragment) if run time matters and avoiding lock contention is important, or a Rebuild Index otherwise (which is better, as each index is built from scratch every time).

If you want to set up the size of each DB via SQL:
ALTER DATABASE [YourDB] MODIFY FILE ( NAME = N'YourDB_dat', SIZE = 4096MB , FILEGROWTH = 1024MB );


where YourDB_dat is the logical name of your MDF file, which by default consists of the DB name with "_dat" appended. It's similar for the transaction log.
But since this is a one-timer, you might be better off with doing it manually in Enterprise Manager or Management Studio.
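For the transaction log the statement has the same shape; a minimal sketch, assuming the default logical log name "YourDB_log" (verify the actual logical names first, since they may differ):

```sql
-- Run in the context of YourDB to see the real logical file names:
-- EXEC sp_helpfile

-- Pre-size the log and give it a fixed-size growth increment
ALTER DATABASE [YourDB]
MODIFY FILE ( NAME = N'YourDB_log', SIZE = 2048MB, FILEGROWTH = 512MB );
```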

Don't expect much from the size change alone - your main performance boost will come from the first Rebuild Index you perform, as your DBs are likely to be very fragmented at this point.
0
 
QlemoBatchelor, Developer and EE Topic AdvisorCommented:
Setting fixed sizes is dangerous, because sooner or later the size you chose turns out to be too small, DB writes fail, and you may not notice quickly enough. I cannot recommend it.

Of course you can (and should) enlarge the file size to something reasonable if you expect some growth. Setting the autogrowth increment to a bigger number is what you should do. My increments for data files are usually 512 MB (but 64 MB is OK, too). Log files may grow by, e.g., 64 MB. That way you do not get many expansion events, which keeps fragmentation low, while each expansion does not take too long (file growth blocks other operations while it runs).
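Set as a one-liner per file, this looks like the following sketch (the logical names YourDB_dat and YourDB_log are assumptions - check yours with sp_helpfile):

```sql
-- Fixed-size growth increments instead of the default percentage growth
ALTER DATABASE [YourDB]
MODIFY FILE ( NAME = N'YourDB_dat', FILEGROWTH = 512MB );  -- data file

ALTER DATABASE [YourDB]
MODIFY FILE ( NAME = N'YourDB_log', FILEGROWTH = 64MB );   -- log file
```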

An absolute no-go for a production database is autoshrink. Never ever enable it if the database is perpetually written and appended to. Files will get highly fragmented, data moved across the data files, pages put into reversed (!) order, and so on.
If you have to shrink files, do it manually. If you shrink data files with moving pages, recreate indexes.
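A manual shrink-then-reindex in SQL 2000 terms could look like this sketch (YourDB_dat and dbo.YourTable are placeholder names; repeat the DBCC DBREINDEX for each affected table):

```sql
-- Shrink the data file to a target size of 4096 MB
DBCC SHRINKFILE (N'YourDB_dat', 4096);

-- Shrinking moves pages and fragments indexes, so rebuild them afterwards;
-- fill factor 0 keeps each index's existing fill factor
DBCC DBREINDEX (N'dbo.YourTable', '', 0);
```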
0
 
ZberteocCommented:
If you have plenty of disk space, why are you concerned about the growth? Anyway, to do that, set the growth to ... 0.

This will bite you, because sooner or later the log will need to grow, unless you make the log file so big that it will never be filled.
0

 
hoffmanconsultingllcAuthor Commented:
Neither of the above comments provides direction or assistance on how to accomplish this. I want to set the databases and log files to be big enough to handle several years of growth, as I have plenty of disk space. I am concerned about the ongoing fragmentation of the DBs and logs that comes with using autogrow and autoshrink.

Can anyone help me with the procedures to do this?

Thanks.
0
 
ZberteocCommented:
OK, make it as big as you can and then set the growth increment to 1 MB. That should do it. If it is big enough it will never need to grow, and even if it does, the growth will be very small. Anyway, your goal will be achieved by the fact that the large size of the log file will make growth events extremely rare, if they ever happen.
0
 
ZberteocCommented:
Actually, if the log file is big enough, the size of the growth increment will be irrelevant, as it will probably never get to that point.
0
 
hoffmanconsultingllcAuthor Commented:
Do I need to do anything to the DB after this is done to defrag it and/or reclaim space? My end goal is to increase performance.
0
 
ZberteocCommented:
Schedule a reindex job every week if necessary, which is the equivalent of a defrag for the database file. The frequency you choose depends on how often deletes and inserts are performed. Updates don't affect the data layout as much. So if you have frequent inserts/deletes, once per week should do it; if not, you can stretch it to two weeks or even once a month.

Schedule the job for a weekend night when the least activity is expected. The reindex locks the tables, one at a time, and it could affect resources and performance if there are big tables involved.
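On SQL 2000 such a job can be sketched as a cursor over the user tables calling DBCC DBREINDEX for each one (this is an illustrative sketch, not a hardened production script):

```sql
-- Rebuild all indexes on every user table in the current database
DECLARE @tbl sysname

DECLARE tbl_cur CURSOR FOR
    SELECT QUOTENAME(USER_NAME(uid)) + '.' + QUOTENAME(name)
    FROM sysobjects
    WHERE xtype = 'U'          -- user tables only

OPEN tbl_cur
FETCH NEXT FROM tbl_cur INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX (@tbl, '', 0)   -- '' = all indexes, 0 = keep fill factor
    FETCH NEXT FROM tbl_cur INTO @tbl
END
CLOSE tbl_cur
DEALLOCATE tbl_cur
```

Wrap this in a SQL Server Agent job scheduled for the low-activity window.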
0
Question has a verified solution.
