Note: This is a continuation of a question that was asked last week. I am adding more points because we still do not have an actual solution though earlier comments were useful. So here we go...
Our transaction log is filling up: it grows about 2.5 GB in 2 hours against a 6 GB log file. The end result has been backup failures. The database is in the SIMPLE recovery model, but the log cannot be truncated while a backup is running, so it fills up before the backup completes.
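For reference, this is roughly how we have been watching log usage and checking what is blocking truncation. These are standard system views, nothing specific to our schema:

```sql
-- Why can't the log be reused right now?
-- (e.g. ACTIVE_BACKUP_OR_RESTORE, ACTIVE_TRANSACTION, NOTHING)
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = DB_NAME();

-- Log size and percentage of log space currently in use, per database
DBCC SQLPERF(LOGSPACE);
```

During the backup window, `log_reuse_wait_desc` shows `ACTIVE_BACKUP_OR_RESTORE`, which is consistent with the symptom above.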
The process that seems to be the problem only appears to be writing about 20,000 records a DAY - under 1,000 records per hour - and while the rows are wide (about 100 columns), I cannot figure out why that volume should fill a 6 GB log.
We ran a Profiler trace during the backup and saw 17 MILLION log writes in 2 hours. What does Profiler actually count as a "transaction log write"? How can there be 17 million writes when only 20,000 records are being created?
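To see which operations are actually generating all those log records, one thing we can run is the query below. Caveat: `fn_dblog` is undocumented and unsupported, but it is commonly used for exactly this kind of diagnosis, and this is a sketch rather than something we have fully validated:

```sql
-- Count and size log records by operation type over the active log.
-- fn_dblog(NULL, NULL) = no start/end LSN filter, i.e. the whole active log.
SELECT Operation,
       Context,
       COUNT(*)                 AS log_records,
       SUM([Log Record Length]) AS total_bytes
FROM   fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY total_bytes DESC;
```

If the top rows are dominated by something like `LOP_MODIFY_ROW` against an index context rather than plain inserts, that would suggest the 20,000 inserts are not the real source of the log volume (index maintenance, triggers, or repeated updates can each log far more records than the row count implies).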
The procedure that is running calls a procedure, which in turn calls another procedure. I know this nesting can cause performance problems when variables are passed between levels, but I have already resolved those from a performance perspective. Is there some reason this "stacking" of procedures would be a problem for transactions?
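One thing I want to confirm about the stacking: in SQL Server, nested `BEGIN TRAN` statements do not create independent transactions - they only increment `@@TRANCOUNT`, and the log cannot be truncated past the start of the oldest active transaction. A minimal sketch of what I mean (`OuterProc` and `MiddleProc` are hypothetical names, not our actual procedures):

```sql
-- Nested BEGIN TRAN only bumps @@TRANCOUNT; inner COMMITs decrement it.
-- Log space held by the outer transaction is not released until the
-- outermost COMMIT (or the whole thing is rolled back).
CREATE PROCEDURE dbo.OuterProc AS
BEGIN
    BEGIN TRAN;              -- @@TRANCOUNT = 1
    EXEC dbo.MiddleProc;     -- any BEGIN TRAN inside goes to 2, its COMMIT back to 1
    COMMIT;                  -- only this commit actually ends the transaction
END;
```

So if the outermost procedure opens a transaction and the inner procedures do long-running work inside it, the whole call chain is one long transaction from the log's point of view - could that alone explain the log growth during the backup window?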