MrVault asked:

How to add free space to index files?

We have plenty of disk space free, but our indexes have 0 free space in them. I'd like to add more so they aren't auto-growing so often. I would give them more space than I know they'll use for a long time. Is this possible? Or do I just set "autogrowth" to the amount I want them to grow by and, once they grow, change it back? If I change autogrowth, do I need to restart?
ASKER CERTIFIED SOLUTION
by Guy Hengel [angelIII / a3] (Luxembourg)
[This solution is only available to Experts Exchange members.]
MrVault (ASKER):
GUI is fine. T-SQL would be cool to learn too, if you want to show that.
MrVault (ASKER):
Sorry, I didn't mean adding more index files for the same index. I'm not sure how that works or whether it helps. I was thinking of just giving the file for that index more space. How does it work with multiple files? They will be on the same RAID spindle set.
Not sure, but you might want to first introduce yourself to what a filegroup is in SQL Server:
http://msdn.microsoft.com/en-us/library/ms179316.aspx
Possibly you know already; I just have the impression you don't, or that English is not your native language ...

So, did you create several filegroups in your database?
And if yes, did you create the table and the index in different filegroups?
If still yes, you should know that you can add more files to the "index" filegroup ...
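For reference, the setup being described can be sketched in T-SQL as follows; the database, filegroup, file, and path names here are hypothetical placeholders:

```sql
-- Create a dedicated filegroup for an index (names are illustrative):
ALTER DATABASE MyDb ADD FILEGROUP IX_Orders_FG;

-- Add a file to that filegroup, pre-sized generously to avoid autogrow:
ALTER DATABASE MyDb
ADD FILE (
    NAME = N'IX_Orders_01',
    FILENAME = N'E:\Data\IX_Orders_01.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP IX_Orders_FG;

-- Place a nonclustered index on that filegroup:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON IX_Orders_FG;
```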
MrVault (ASKER):
I'm laughing here because English is my first and only language. I'm in such a hurry today, multitasking, that my writing reads like shorthand.

Anyway, I know what a filegroup is, and we created 5: one for the main MDF file, one for the clustered index, and one for each of the non-clustered indexes. Each filegroup has a single file in it. We did this so we could see which of the NCIs has the highest IO needs and put that one on spindles that can provide it, while putting the others on less needy sets. This has worked well for us across many other duplicates of the same setup.

In this instance, though, Avg. Disk sec/Transfer is high (as is Avg. Disk Queue Length). We are taking measures to help with faster disks, more of them, bigger cache, plus database tweaks. But in the meantime I know that having to constantly auto-grow can degrade performance.

I have heard about having multiple files for an index, but have not done that before. If the extra file is going to go on the same set of disks out of necessity, does that still help matters? Also, does it matter that the data in the index file changes almost completely every 10 to 14 days due to the nature of our product?
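The "which NCI has the highest IO needs" measurement above can be done directly from SQL Server's per-file IO statistics. A sketch using the sys.dm_io_virtual_file_stats DMV (counters are cumulative since the last service restart):

```sql
-- Per-file cumulative IO stats for the current database;
-- the file with the largest io_stall totals is the hot one.
SELECT
    mf.name               AS logical_file,
    mf.type_desc,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;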
SOLUTION (members only)
MrVault (ASKER):
How does having extra files lead to more space in total?

I don't have enough spindles to dedicate to each file.

So basically the question becomes: is there any benefit to splitting an index into multiple files if they will all be on the same SAN LUN and set of physical disks?
SOLUTION (members only)
MrVault (ASKER):
Thanks. So the total backup and restore time of the same index in 2 files will be shorter than in 1 file? What magnitude are we talking about, relatively? I'm aware it depends on size, hardware performance, etc., but are we talking 5% faster or 25% faster? An article to link to would be great.

Thanks!
In regards to the performance gains: I have no real numbers, and found no quick article.
But I do know that if I have a big database with just 1 database file, say 1 TB, the backup will run against that single file for, say, 1 hour.
If the db is split into 5 database files of 200 GB each, the backup can run 5 threads in parallel, which can then finish in a bit more than 1/5th of the hour.
Especially with a SAN behind it, your read performance for the backup will not be limited to a single spindle, so you can do more or less that math.
Given the overhead of several files, and write performance usually being the bottleneck on the destination disks, you could still expect it to be done in 15 minutes instead of 60 ...

Side note on considerations for filegroup usage:
http://www.sql-server-performance.com/2006/filegroups/
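As an aside, the write-side bottleneck on the destination can be attacked separately by striping the backup itself across multiple destination files. This is a distinct technique from splitting the data into multiple files; the paths below are hypothetical, and WITH COMPRESSION requires SQL Server 2008 or later:

```sql
-- Stripe one backup across two destination files
-- (ideally on different physical disks):
BACKUP DATABASE MyDb
TO  DISK = N'F:\Backup\MyDb_1.bak',
    DISK = N'G:\Backup\MyDb_2.bak'
WITH COMPRESSION, STATS = 10;
```

Note that a restore then needs all stripe files present.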
SOLUTION (members only)
MrVault (ASKER):
Thanks for that article link, angelIII. I saw the part where it says reads will also be faster since SQL can read across multiple files. Here's a very important question, though:

How do I measure the impact on performance? How do I see what the most intensive queries in our database are (by total calls, total duration across all calls, duration of a single run, CPU time) and measure their current run time, so that I can measure the impact of splitting the filegroup into multiple files?

Right now we have 12 x 600 GB 15K SAS drives. We originally created 6 RAID 1 sets, then tried 3 RAID 10 sets, giving a third of the highest-IO indexes to each RAID set. It turns out some are significantly more intense than others, so 4 disks was not enough for that index. We tried 8 and 4, etc., but couldn't find a good setup. Finally we created a single large 12-disk RAID 10 set, and the average seconds per read or write dropped dramatically. So that's where we're at right now.

So my question becomes: if we have multiple high-IO indexes (with 1 or 2 significantly higher than the others), how do we know how many files we should split the index filegroup's file into? 2, 4, 12? No offense, but there are a ton of articles out there proclaiming how to speed up my SQL Server, and very few that provide step-by-step instructions on how to measure whether it's actually getting better. I'm a jack of all trades, so I don't have the time to become a seasoned DBA right now. I still have to manage AD, DNS, firewalls, VPN, SAN, switches, workstations, and people across multiple sites. I'd love extra help, but there's no money for it. Anyway, enough complaining :)
SOLUTION (members only)
MrVault (ASKER):
I thought SQL tracks performance stats of queries somewhere that can be analyzed, like top-run queries, top queries by time, etc.
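It does: since SQL Server 2005, the plan cache exposes per-query counters through sys.dm_exec_query_stats (calls, elapsed time, CPU time, logical reads, all cumulative while the plan stays cached). A sketch of ranking queries by total elapsed time; the TOP count and ordering column are arbitrary choices:

```sql
-- Top 10 cached queries by total elapsed time
-- (times are in microseconds in the DMV; shown here in ms).
SELECT TOP (10)
    qs.execution_count,
    qs.total_elapsed_time / 1000                       AS total_elapsed_ms,
    qs.total_worker_time  / 1000                       AS total_cpu_ms,
    qs.total_elapsed_time / qs.execution_count / 1000  AS avg_elapsed_ms,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

Running this before and after a change (and comparing avg_elapsed_ms for the same statements) gives the before/after measurement asked about above.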
Of course, the tricky part of using multiple files in a filegroup is that you don't control what SQL puts into each file.

That is, AFAIK, you can only tell SQL to put a table into a certain file*group*; you *can't* tell SQL to put parts of the same table into different *files*.
SOLUTION (members only)
MrVault (ASKER):
That's it. Thanks!
SOLUTION (members only)
MrVault (ASKER):
The solution for the original question was to change the initial size of the file in the filegroup (which I did via the GUI, but I'm sure there's a T-SQL command for it), then restart the services.
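The T-SQL equivalent is ALTER DATABASE ... MODIFY FILE; the database and logical file names below are hypothetical. Note that resizing a file or changing its FILEGROWTH setting this way takes effect immediately, so a service restart is not normally required:

```sql
-- Find your logical file names first:
--   SELECT name, size / 128 AS size_mb FROM sys.database_files;

-- Grow the file up front so it doesn't autogrow for a long time:
ALTER DATABASE MyDb
MODIFY FILE (NAME = N'IX_Orders_01', SIZE = 50GB);

-- Use a fixed growth increment rather than a percentage,
-- so any remaining autogrow events are predictable:
ALTER DATABASE MyDb
MODIFY FILE (NAME = N'IX_Orders_01', FILEGROWTH = 1GB);
```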