dakota5 (United States of America) asked:
transaction log full in Simple Recovery Model

I'm doing a fairly large transfer of data from one table to another (selecting data from one table into another). The database is using the simple recovery model, which I thought eliminated the transaction log and prevented it from filling. But I'm getting a transaction log full error.

Is this possibly related to the data transfer not being committed until the end of the huge transfer?
How do you get large transactions to commit automatically? I don't care about being able to step backwards or back out of the data transfer.

The error is: The transaction log for database 'ARICASES' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

Where is this column?  Which sys database is it referring to?
chapmandew (United States of America) replied:

How big is the log file before the transfer?  How much space do you have before the transfer?
dakota5 (Asker) replied:
The transaction log was 1.2 GB; it was allowed to grow to 2 GB, and it did grow to 2 GB. But I thought that the transaction log did not need to grow at all if the recovery model was SIMPLE.

chapmandew replied:

It will need to grow because a large transaction is taking place, but the log records will not be kept afterwards. Does that make sense?
ASKER CERTIFIED SOLUTION by chapmandew (members only; content not shown)
dakota5 (Asker) replied:
I understand-- the transaction log is going to grow temporarily even in the Simple Recovery model.

But isn't there a setting to get SQL Server to commit automatically after each row (or some number of rows), or an option for the INSERT statement that will do this? This involves about 150 million rows of data:

INSERT INTO C_VS_KEYS (INTERNALCASEID, EZINDEX, RECORDTIME)
SELECT INTERNALCASEID, EZINDEX, RECORDTIME
FROM C_VITALSIGNS
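[Editor's note: there is no per-statement autocommit setting for a single INSERT ... SELECT; a common workaround is to break the copy into batches so that each batch commits and, under simple recovery, its log space becomes reusable at the next checkpoint. A minimal sketch, assuming the pair (INTERNALCASEID, EZINDEX) uniquely identifies a source row -- adjust the key columns and batch size to the actual schema:]

    WHILE 1 = 1
    BEGIN
        -- Copy one batch of rows that are not yet in the target table
        INSERT INTO C_VS_KEYS (INTERNALCASEID, EZINDEX, RECORDTIME)
        SELECT TOP (500000) s.INTERNALCASEID, s.EZINDEX, s.RECORDTIME
        FROM C_VITALSIGNS AS s
        WHERE NOT EXISTS (SELECT 1 FROM C_VS_KEYS AS k
                          WHERE k.INTERNALCASEID = s.INTERNALCASEID
                            AND k.EZINDEX = s.EZINDEX)

        IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy

        CHECKPOINT   -- under simple recovery, marks committed log space as reusable
    END

Each loop iteration is its own transaction, so the log only has to hold one batch at a time instead of all 150 million rows. The NOT EXISTS restart logic benefits from an index on the key columns in both tables.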
SOLUTION (members only; content not shown)

SOLUTION (members only; content not shown)
dakota5 (Asker) replied:
Thank you to sudhirkrishanan-- I now know what the error message was referring to.
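[Editor's note: for anyone else hitting this error, the column it mentions lives in the sys.databases catalog view, which is visible from any database on the instance; a quick check might look like:]

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'ARICASES'

The log_reuse_wait_desc value (e.g. NOTHING, ACTIVE_TRANSACTION, CHECKPOINT) names the reason the log space cannot currently be reused.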

To the main question--
I'm just writing and executing the SQL command from a query window in SQL Server Management Studio. I have SQL Server Business Intelligence Development Studio, which builds SSIS packages, but I did not want to get into that.
SOLUTION (members only; content not shown)
I believe that in the simple recovery model, when the log file gets about 70% full, a checkpoint should occur, flushing dirty pages to disk and freeing the log space for reuse, so the log should be able to accommodate the load. Please let me know if my thoughts are correct.
dakota5 (Asker) replied:
I'm not sure what you mean by "when it gets filled by 70% it should get flushed out to disk." Do you mean that SQL Server automatically reduces the size of the transaction log? Where does the information go?
I noticed that after I increased the max size of the transaction log slightly, it grew and then stayed constant, as if the simple recovery model had stopped using the transaction log.
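[Editor's note: a checkpoint marks log space inside the file as reusable; it does not shrink the file on disk, which is why the file size stays constant once it is big enough. One way to watch the internal usage, as opposed to the file size, while the transfer runs:]

    DBCC SQLPERF(LOGSPACE)

    -- Returns one row per database with Log Size (MB) and Log Space Used (%).
    -- Under simple recovery, the used percentage should drop after each
    -- checkpoint even though the file size on disk does not change.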