Tempdb grows to 30 GB while running a query

How do I prevent tempdb from growing while running a query?  It is a large query, but 30 GB is causing an issue.



change DB recovery model to "simple"
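For reference, a minimal sketch of that suggestion. Note that tempdb itself always uses the SIMPLE recovery model and cannot be changed; this applies to the user database (the name `YourDatabase` below is a placeholder):

```sql
-- Switch the user database to SIMPLE recovery to limit log growth.
-- 'YourDatabase' is a hypothetical name; substitute your own.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

-- Verify the current recovery model of every database:
SELECT name, recovery_model_desc
FROM sys.databases;
```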
DatabaseMX (Joe Anderson - Microsoft Access MVP)Database Architect / Systems AnalystCommented:
30 GB?  Access max is 2 GB ... ...


>Access max is 2 GB ...
Tempdb is SQL Server

Make sure there are no uncommitted transactions lingering about.
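One quick way to check for lingering transactions, assuming sysadmin-level access:

```sql
-- Oldest active transaction in tempdb, if any:
DBCC OPENTRAN ('tempdb');

-- All sessions with an open transaction, instance-wide:
SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
  ON st.transaction_id = at.transaction_id;
```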


Anthony PerkinsCommented:
Post the query and the schema for all the tables involved.
Try to analyse what the space is used for.
http://msdn.microsoft.com/en-us/library/ms176029.aspx — see 'Monitoring space used by queries'.

Then look for ways to limit the usage. To help, we need the info acperkins asked for, and a rough count of the number of records involved would also help.
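A sketch of the kind of monitoring the linked article describes, using the tempdb space-usage DMVs (page counts are 8 KB pages):

```sql
-- Tempdb pages allocated per session, biggest internal consumers first
-- (internal objects = sorts, hash joins, spools; user objects = temp tables).
SELECT su.session_id,
       su.user_objects_alloc_page_count     AS user_obj_pages,
       su.internal_objects_alloc_page_count AS internal_obj_pages
FROM tempdb.sys.dm_db_session_space_usage AS su
ORDER BY su.internal_objects_alloc_page_count DESC;

-- Overall free space remaining in tempdb, in MB:
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```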
Scott PletcherSenior DBACommented:
>> How do I prevent the TempDb from growing while running a query? <<

As jogos implied, you cannot do this directly.  SQL will use tempdb space to satisfy a query when it needs to.  And as jogos stated, to reduce this, you will have to figure out why the query is using that tempdb space and adjust the query to prevent it.

That will take some time, at least, if it is ever finished at all.

You really should pre-allocate enough space to tempdb to handle this query.  30 GB is not a lot in today's disk-space terms ... constricting tempdb that tightly is risky.  Keep in mind that if tempdb runs out of available space, you're likely to stop almost all activity in that SQL instance.
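A sketch of that pre-allocation. `tempdev` is the default logical name of the primary tempdb data file, but check yours first; the 30 GB / 512 MB figures are illustrative:

```sql
-- Check the logical file names and current sizes in tempdb:
SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Pre-grow the primary data file and set a sane fixed growth step,
-- so the query is not throttled by repeated autogrow events:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 512MB);
```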
" 30GB is not a lot in today's disk space terms ..."
That's right, but here one query seems to be responsible for pushing that limit.  If you can get it to use 30% less tempdb, that could make a huge difference for everything else needing tempdb at the same time.
And in the same effort, check whether tempdb has multiple files and what storage they sit on ...
Scott PletcherSenior DBACommented:
I agreed that tuning the query is the best thing to do ... but that will take time, possibly a lot of time (depending on the expertise of those doing the tuning and/or the complexity of the query).

In the meantime, I believe it is wise to pre-allocate tempdb to have enough space ... shrink some other log file, remove some backup files, whatever, or just buy another drive .
With the lack of info we don't know whether there is a fire, or whether we are dealing with 'there is sometimes a disturbance' because that query runs at a bad moment.
And performance tuning / resource bottlenecks are always a matter of what we can do right now to keep things runnable, plus a list of actions that gain 3% here, 5% there ... and maybe 30 or 90% from some other optimisation in the long run.  And it's not necessarily the 90% gain on some query that gives you the most profit, because that one may run only once a day outside office hours; the 5% gain on a high-frequency query may be the one that improves the users' perception of performance.
Scott PletcherSenior DBACommented:
So, what specifically are you suggesting the requestor do?
"It is a large query but 30 gigs is causing an issue. "
Define 'causing an issue'.  Too much I/O, disk full, ...?  Also, still no response on which query, the number of records involved, how memory is used, ...

30 GB can seem a lot, but what else is busy at that moment: users or batches, and against databases of what size?  And does your big query block other processes?  If so, ever more blocked processes consume more tempdb until they can finally continue ...


Is this a stored proc or dynamic SQL?
Kellylee22Author Commented:
Okay, so I'm pulling out my hair. The situation has gotten worse.  No one at my company has any idea what is causing this issue.  I am running a stored proc that truncates and repopulates a table with 120 columns and 4 million rows of data.  I've been running this for the past year with no issues until recently.

I am sitting here watching the tempdb database (not the log file) filling up at lightning speed.  Please give me some idea what I should do next.


See the link given earlier to monitor what kind of usage tempdb gets.
Ex.1.  How much of that 30 GB is your process, and how much is tempdb used by processes blocked while waiting for your process to finish?  If those processes just query, READ UNCOMMITTED can help prevent them from being blocked.
Ex.2.  If you use table variables or temp tables, the link shows how to monitor the size they are using.

And if you can share the procedure, we can elaborate on getting it to use less tempdb.

But a small calculation: when sorting, doing DISTINCTs, ... on a table with 4 million rows, you can quickly use gigabytes of tempdb.
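Sketches of both examples. For Ex.1, note that READ UNCOMMITTED accepts dirty reads, so it is only safe for non-critical reporting; `dbo.BigTable` is a hypothetical name. For Ex.2, the task-level DMV shows tempdb allocated by currently running statements:

```sql
-- Ex.1: let a pure reporting query read past locks held by the big load.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.BigTable;   -- hypothetical reporting query

-- Ex.2: tempdb pages allocated by currently running tasks
-- (internal objects cover sorts/hashes; user objects cover temp tables).
SELECT session_id,
       SUM(internal_objects_alloc_page_count) * 8 AS internal_kb,
       SUM(user_objects_alloc_page_count) * 8     AS user_kb
FROM tempdb.sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_kb DESC;
```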
please post the T-SQL
Anthony PerkinsCommented:
Right, I requested that here http:#a37357333 nearly two weeks ago with nothing forthcoming.
And what's the specific 'issue'?  Using a lot of space is not an issue if everything works fine.
Another day, another hint.

One new value in a table used in a join can multiply the number of records the join produces.  Maybe it does not change the final result because it is covered by a UNION or GROUP BY, but exactly that GROUP BY or UNION uses tempdb for its intermediate results ... and with 4 million rows, bytes fill fast.

The query can be the same, but are there changes in the indexes or statistics?  And do you do anything special for indexes or statistics?
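A quick way to check the statistics angle, assuming the repopulated table is called `dbo.YourTable` (a placeholder):

```sql
-- When each index's statistics on the target table were last updated:
SELECT i.name AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS stats_last_updated
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.YourTable');

-- Refresh them if they look stale:
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
```

Stale statistics after a truncate-and-reload can make the optimizer pick a plan with far larger sorts or hash joins than before, which would show up exactly as sudden tempdb growth.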

Microsoft SQL Server 2008