  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 41

SQL Server 2012, insert-heavy database: how to keep the transaction log file small

Hello experts,

We run SQL Server 2012. Since 2015 it has hosted a support system that keeps growing: it collects information from a large number of computers, and those computers never stop sending.

This results in a lot of INSERTs into the database: about 1.5 GB of data per day across 62,363 submissions, each processed in about 250 ms and each containing roughly 30-60 rows that have to be inserted into a table.

As you might imagine, this has quite an impact on our transaction log file, which has now grown to 17 GB.
Our recovery model is set to "simple", which is not "perfect", but it keeps the size of the log file down.

We understand that every backup truncates the transaction log, which does not free disk space but does free space inside the log file. So we believe we could reduce the required transaction log size by running the backup more often than once every 24 hours.
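
For reference, the share of the log file actually in use can be checked with the standard built-in command (a minimal check, nothing specific to our system):

    -- Reports, for every database, the log file size in MB and the percentage currently in use.
    DBCC SQLPERF(LOGSPACE);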

Does anyone know what else we can do to keep the transaction log file small?
Do you have any Pluralsight course, Udemy course, book, or web recommendation that could help?
Or are we perhaps doing something wrong here? Why should 1.5 GB of data transfer cause 17 GB of transaction log file?

Thank you very much for any idea you might have!
Asked by: Stefan Hoffmann
5 Solutions
 
Vitor Montalvão, MSSQL Senior Engineer, commented:
Our recovery model is set to "simple"
We understand that every backup truncates the transaction log, which does not free disk space but does free space inside the log file.
That's only true for the Bulk-Logged and Full recovery models, because those are the ones in which you can perform transaction log backups. Under Simple recovery, the transaction log is truncated automatically every time a CHECKPOINT occurs.
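
A quick way to confirm what is currently preventing log reuse (a minimal check; the database name is a placeholder):

    -- log_reuse_wait_desc shows why log space cannot currently be reused,
    -- e.g. NOTHING, CHECKPOINT, ACTIVE_TRANSACTION, ...
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';  -- placeholder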

Why should 1.5 GB of data transfer cause 17 GB of transaction log file?
How big is the database? Are you performing regular reindexing on it? (Index rebuilds can generate a lot of transaction log.)
 
ste5an, Senior Developer, commented:
So we believe we could reduce the required transaction log size by running the backup more often than once every 24 hours.

This belief is wrong.

The size of your transaction log reflects the log space your actual load needs. If you shrink it, the next load will simply request that space from the OS again, and that regrowth can be a serious performance bottleneck.
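
The usual approach is therefore to size the log once for the workload and then leave it alone. A sketch, with placeholder database and file names:

    -- Check the logical name and current size of the log file (size is stored in 8 KB pages).
    SELECT name, size * 8 / 1024 AS size_mb
    FROM sys.master_files
    WHERE database_id = DB_ID(N'YourDatabase') AND type_desc = 'LOG';

    -- Pre-grow the log to its working size once, so the load never waits on auto-growth.
    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 17GB);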

Why should 1.5 GB of data transfer cause 17 GB of transaction log file?
That depends on how the input is processed.
 
Megan Brooks, SQL Server Consultant, commented:
When the simple recovery model is selected for a database, SQL Server does indeed truncate the log as it is able. There is no need to dump it periodically -- that is why the model is called "simple." Your application appears to be similar to one that I have been supporting in recent years, and we used simple recovery and did not have this issue. So there would seem to be something "interesting" happening behind the scenes.

The first thing that comes to mind is that transactions might be held open for extended periods of time. Log space for an operation can only be released after the transaction commits. It's hard to know where to start looking without seeing your application and how it is structured. Does it do massive inserts within transactions? That can result either from a single huge insert, which runs as one autocommit transaction, or from a series of smaller inserts running inside a single BEGIN TRAN / COMMIT TRAN block. And COMMIT TRAN doesn't actually commit if it is nested, so some outer transaction block may be holding the transaction open even when there appear to be frequent COMMITs; see the sketch below.
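
To illustrate the nesting pitfall (the table name is made up purely for illustration):

    BEGIN TRAN;                      -- outer transaction, @@TRANCOUNT = 1
        BEGIN TRAN;                  -- "nested" transaction, @@TRANCOUNT = 2
            INSERT INTO dbo.Submissions (ComputerId, Payload)  -- hypothetical table
            VALUES (42, N'...');
        COMMIT TRAN;                 -- only decrements @@TRANCOUNT to 1; nothing is durable yet
        -- the log records written by the INSERT are still pinned at this point
    COMMIT TRAN;                     -- the real commit; only now can that log space be reused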

Before you spend too much time looking at your application, you should also make sure that the log growth occurs as a result of your regular transactional processing and not due to some periodic maintenance operation carried out inside a transaction. If third-party software and support are involved, anything is possible. You can review auto-growth events to see when the growth is happening; a sketch for that follows.
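
For example, SQL Server 2012 runs a default trace out of the box that records auto-growth events; something along these lines reads them back:

    -- Read file auto-growth events from the default trace.
    DECLARE @path NVARCHAR(260);
    SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

    SELECT te.name AS event_name,                 -- 'Data File Auto Grow' / 'Log File Auto Grow'
           t.DatabaseName, t.FileName, t.StartTime,
           t.IntegerData * 8 / 1024 AS growth_mb  -- IntegerData is the growth in 8 KB pages
    FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
    JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
    WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
    ORDER BY t.StartTime DESC;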

Does any of this so far sound like it might be a reason for the size of your log?
 
Zberteoc commented:
If your recovery model is set to Simple, the frequency of FULL database backups has no effect whatsoever on the log file. The log is truncated only once the transactions in it have committed and become permanent; until then, while a transaction is open, the size will remain whatever the load needs. In Simple recovery mode you can NOT take transaction log backups.

On the other hand, with inserts this big and this frequent, keeping the database in Simple recovery is not recommended for exactly that reason: the SQL engine repeatedly truncates the log only to ask the OS for space to grow it again as new data is inserted, which results in performance issues and resource pressure.

I would recommend changing the database to FULL recovery model, making sure you back up the log file every 10 or 15 minutes, or even more frequently, and not worrying about the log size. It will grow to the maximum the transactions need, but that space is released (not shrunk/truncated) after every log backup and then reused. This way the server does not waste time truncating and regrowing the log file. At the same time, make sure the inserts are not executed as overly large transactions and that the data is committed after every batch.
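
In outline (paths and names below are placeholders; schedule the log backup through SQL Server Agent or your backup tool):

    ALTER DATABASE [YourDatabase] SET RECOVERY FULL;

    -- A full backup is needed first: until one exists, the log still behaves as in Simple.
    BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backup\YourDatabase_full.bak';

    -- Then run this every 10-15 minutes; each run frees the inactive part of the log for reuse.
    BACKUP LOG [YourDatabase] TO DISK = N'D:\Backup\YourDatabase_log.trn';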
 
Megan Brooks, SQL Server Consultant, commented:
If the cause of the large log file can be traced to a vendor application, you may just have to live with it (assuming the file reaches a certain huge size and stops growing). The same applies if it is due to an in-house application that is considered too costly to fix. But if it is something that can readily be fixed, or that is going to lead to more problems than just disk space, figure out what it is and fix it.

By the way, with either simple or full recovery, open transactions will prevent the log from being truncated. Full truncates on a schedule, at each log backup (barring open transactions), while with simple you don't control the timing of truncation. Personally, I haven't had a database where the timing of truncation mattered either way.
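
A quick way to check for a long-running open transaction (the database name is a placeholder):

    -- Reports the oldest active transaction in the database, if there is one.
    DBCC OPENTRAN ('YourDatabase');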

Simple recovery has a number of uses. One is to eliminate maintenance overhead for databases that capture data which has already been archived in some other way before entering the database, especially when it is streaming in at high volume, as in this system. If there is a fast, straightforward way to replay the incoming data from the archive, then full recovery may not provide a significant benefit.
 
Stefan Hoffmann (Author) commented:
Thanks for your comments so far.
You have mentioned a lot of things that I now need to research.
It will take a while, at least a few days.
 
Vitor Montalvão, MSSQL Senior Engineer, commented:
Stefan, any news about this issue?
 
Vitor Montalvão, MSSQL Senior Engineer, commented:
I recommend closing this question by accepting the above comments.
