I have a large database (the .mdf file is 597 GB). I discovered that certain tables were being loaded again and again, when in reality they should have been deleted and reloaded on each run. As an example, one table had 660 million rows as the result of 3 loads; it should have had 220 million.
So I truncated the tables in question and reran the ETL.
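(The cleanup step was essentially the following; the table name here is just a placeholder, not one of my real tables:)

```sql
-- Placeholder table name; the real ETL touches several tables
TRUNCATE TABLE dbo.FactLoads;   -- drop the duplicated rows
-- ...then rerun the ETL job so the table is loaded once, from scratch
```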
My .bak backup file appropriately shrank to 150 GB, but the .mdf remained at 597 GB, so I am running a database shrink on that file.
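(For clarity, the shrink I kicked off is along these lines; the database name is a placeholder and the 10% free-space target is just the value I chose:)

```sql
-- Placeholder database name; 10 = leave roughly 10% free space in the file
DBCC SHRINKDATABASE (MyBigDb, 10);
```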
This step (the shrink) has been running now for over 24 hours.
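A rough progress check I can run while it works, assuming the shrink shows up with a Dbcc* command name in sys.dm_exec_requests:

```sql
-- percent_complete is populated for shrink operations
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';
```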
My question is: can that, and perhaps should that, be expected with a file this big?