Kellylee22
asked on
TEMPDB is growing to 30 gigs by running a query
How do I prevent the TempDb from growing while running a query? It is a large query, but 30 GB of growth is causing an issue.
Regards,
Kellylee
Change the DB recovery model to "simple".
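Note that tempdb itself always uses the simple recovery model, so this suggestion would apply to the user database the query runs against. A sketch, with a hypothetical database name:

```sql
-- YourUserDb is a placeholder; tempdb is always in SIMPLE recovery already.
ALTER DATABASE YourUserDb SET RECOVERY SIMPLE;
```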
30 GB? Access max is 2 GB ... ...
mx
>Access max is 2 GB ...
Tempdb is SQL Server
Make sure there are no uncommitted transactions lingering about.
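One way to check for those lingering transactions, sketched with standard DMVs (available since SQL Server 2005):

```sql
-- Oldest open transaction in the current database
DBCC OPENTRAN;

-- Sessions that currently hold an open transaction, instance-wide
SELECT st.session_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
  ON at.transaction_id = st.transaction_id;
```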
Post the query and the schema for all the tables involved.
Try to analyse what the space is used for.
http://msdn.microsoft.com/en-us/library/ms176029.aspx (see 'Monitoring space used by queries')
Then look for how to limit the usage. To help, we need the info acperkins asked for, and a rough count of the records involved would also help.
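As one way to do that monitoring, a sketch using `sys.dm_db_session_space_usage`, which tracks tempdb allocations per session (pages are 8 KB each):

```sql
-- Tempdb pages allocated per session: user objects (temp tables, table
-- variables) vs. internal objects (sorts, hashes, spools)
SELECT session_id,
       user_objects_alloc_page_count     * 8 / 1024 AS user_mb,
       internal_objects_alloc_page_count * 8 / 1024 AS internal_mb
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;
```

Run this while the query executes to see whether the growth comes from explicit temp objects or from internal worktables.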
>> How do I prevent the TempDb from growing while running a query? <<
As jogos implied, you cannot do this directly. SQL Server will use tempdb space to satisfy a query when it needs to. And as jogos stated, to reduce it, you will have to figure out why the query is using that tempdb space and adjust the query to prevent it.
That will take some time at best, if it can be done at all.
You really should pre-allocate enough space to tempdb to handle this query. 30 GB is not a lot in today's disk-space terms; that's too tight a constraint for tempdb. Keep in mind that if tempdb runs out of available space, almost all activity in that SQL instance is likely to stop.
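A sketch of that pre-allocation; `tempdev` is the default logical name of the primary tempdb data file, so check your actual file names first:

```sql
-- Check the actual logical file names and current sizes first
SELECT name, size * 8 / 1024 AS size_mb FROM tempdb.sys.database_files;

-- Pre-size the data file so the query doesn't trigger autogrow mid-run
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);
```

Pre-sizing avoids repeated autogrow events during the query, which are themselves slow.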
" 30GB is not a lot in today's disk space terms ..."
That's right, but here one query seems to be responsible for pushing that limit. If you can get it to use 30% less tempdb, that could make a huge difference for anything else that needs tempdb at the same time.
And in the same effort, check whether there are multiple tempdb files and what storage they are on ...
I agree that tuning the query is the best thing to do ... but that will take time, possibly a lot of time (depending on the expertise of those doing the tuning and/or the complexity of the query).
In the meantime, I believe it is wise to pre-allocate enough space to tempdb: shrink some other log file, remove some backup files, whatever, or just buy another drive.
For lack of info, we don't know whether there is a fire or just an occasional disturbance from running that query at a bad moment.
Performance tuning of resource bottlenecks is always a matter of what we can do right now to keep things runnable, plus a list of actions that gain 3% here, 5% there ... and maybe 30% or 90% from some other optimisation in the long run. And the 90% gain on some query is probably not the one that gives you the most profit, because it only runs once a day outside office hours; the 5% gain on a high-frequency query is the one that could improve the users' perception of performance.
So, what specifically are you suggesting the requestor do?
@Kellylee22
Is this a stored proc or dynamic SQL?
ASKER
Okay, so I'm pulling out my hair. The situation has gotten worse. No one at my company has any idea what is causing this issue. I am running a stored proc that truncates and repopulates a table with 120 columns and 4 million rows of data. I've been running this for the past year with no issues until recently.
I am sitting here watching the tempdb database (not the log file) filling up at lightning speed. Please give me some idea what I should do next.
Regards,
Kellylee
See the link given above to monitor what kind of tempdb usage it is.
Ex. 1: How much of that 30 GB is your process, and how much belongs to processes blocked while waiting for your process to finish? If those processes only query, READ UNCOMMITTED can help prevent them from being blocked.
Ex. 2: If you use table variables or temp tables, the link shows how to monitor the size they are using.
And if you can share the procedure, we can elaborate on getting it to use less tempdb.
But a small calculation: with sorting, DISTINCTs, etc. on a table with 4 million rows, you quickly use GBs of tempdb.
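For Ex. 1, a sketch of running readers at READ UNCOMMITTED so they aren't blocked while the proc runs; `dbo.BigTable` is a placeholder name, and note this allows dirty reads, so it only suits queries that can tolerate inconsistent data:

```sql
-- Session-level: all reads in this session ignore locks held by the proc
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.BigTable;

-- Or per-table, with the equivalent hint
SELECT COUNT(*) FROM dbo.BigTable WITH (NOLOCK);
```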
Please post the T-SQL.
Right, I requested that here http:#a37357333 nearly two weeks ago with nothing forthcoming.
And what's the specific 'issue'? Using a lot of space is not an issue if everything works fine.
Another day, another hint.
Another day, another hint.
One new value in a table used in a join can double the number of records in the join. Maybe it does not change the result, because it is covered by a UNION or GROUP BY, but exactly that GROUP BY or UNION uses tempdb for its intermediate results ... and again, with 4 million rows, the bytes fill up fast.
The query can be the same, but are there changes in the indexes or statistics? And do you do anything special for indexes or statistics?
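A sketch for checking how stale the statistics are on the table the proc repopulates; `dbo.BigTable` is again a placeholder:

```sql
-- When was each statistic on the table last updated?
SELECT name,
       STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.BigTable');

-- Refresh with a full scan if they are stale (truncate/reload can leave
-- statistics that no longer match the data, producing a bad plan)
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
```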