(SQL Server 2008)
I have a database about 12 GB in size. It is composed of a number of tables holding data organized into "scenarios": a single small master table with several dozen detail tables. A scenario is typically about 2 GB of data (that's roughly how much the database grows when one is added).
I recently added foreign-key relationships from the master table to the many detail tables with ON UPDATE/DELETE CASCADE, because I need to be able to delete a scenario.
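For context, each relationship looks roughly like this (table and column names here are stand-ins; the real schema has several dozen detail tables):

```sql
-- Hypothetical names; the real schema differs.
ALTER TABLE dbo.ScenarioDetail
ADD CONSTRAINT FK_ScenarioDetail_Scenario
    FOREIGN KEY (ScenarioName)
    REFERENCES dbo.Scenario (ScenarioName)
    ON UPDATE CASCADE
    ON DELETE CASCADE;
```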
To test, I renamed one scenario by changing its name in the master table. After maybe 40 minutes the transaction log file, which started at about 1 MB, had grown to 14 GB, and I aborted the operation. The database uses the simple recovery model.
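The test was essentially this (again with stand-in names), watching log usage from a second session:

```sql
-- Hypothetical names; the rename cascades to every detail table.
UPDATE dbo.Scenario
SET ScenarioName = 'NewName'
WHERE ScenarioName = 'OldName';

-- Run from another session to watch the log fill up:
DBCC SQLPERF (LOGSPACE);
```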
My first thought was that, because the PKs on these tables are clustered and the field being updated on the child tables is part of the key, the cascading update was effectively re-creating entire tables, and some of them are huge. So I redefined most of the PKs as non-clustered, but the result is the same.
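The change I tried was along these lines (sketch only, with hypothetical names):

```sql
-- Hypothetical names; drop the clustered PK and re-add it non-clustered.
ALTER TABLE dbo.ScenarioDetail DROP CONSTRAINT PK_ScenarioDetail;

ALTER TABLE dbo.ScenarioDetail
ADD CONSTRAINT PK_ScenarioDetail
    PRIMARY KEY NONCLUSTERED (ScenarioName, DetailId);
```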
I don't understand how changing this one value causes the log to grow larger than the entire database (with no end in sight), when the data involved is maybe a quarter of the database.
I don't want to get into why the tables or keys are designed this way; the database was designed by others and I have to work with it. I can't undertake a major redesign at this time.
What's going on here? Any ideas?