We have a large database containing a 1.5 TB table partitioned by year (2003 to 2014). DBCC CHECKDB (physical only) detected about 80 errors, all confined to TWO of the partitions. Because the database runs in the SIMPLE recovery model, we could not simply restore the damaged files or pages.
Running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS is taking forever (more than 36 hours), and it has failed 5 times so far, each time asking us to expand a different filegroup -- most of which had no reported errors. Each attempt runs several hours longer than the previous one and then fails, asking for expansion of yet another filegroup.
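For reference, this is roughly what we are running (the database name `BigDb` is a placeholder):

```sql
-- REPAIR_ALLOW_DATA_LOSS requires the database to be in single-user mode
ALTER DATABASE BigDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Full-database check with the repair option; this is the step that keeps
-- failing after many hours, demanding more space in various filegroups
DBCC CHECKDB ('BigDb', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE BigDb SET MULTI_USER;
```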
I can expand one filegroup, but two runs later -- after it has failed asking for other filegroups to be expanded -- it fails again asking for expansion of a filegroup I have already expanded.
1. Why is the repair involving filegroups that DBCC CHECKDB did not originally report as damaged?
2. Every time it fails, does it lose everything it accomplished in the hours spent before the failure?
3. Is there some way to do this piecemeal, one partition at a time?
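On question 3, what we were hoping for is something along these lines -- narrowing the check to the two damaged filegroups instead of the whole database (the filegroup, database, and table names below are placeholders; this assumes each yearly partition lives in its own filegroup):

```sql
-- Check only the filegroup holding one damaged partition.
-- Note: CHECKFILEGROUP does not accept repair options, so this only
-- limits the scan, it cannot fix the errors by itself.
DBCC CHECKFILEGROUP ('FG_2009') WITH PHYSICAL_ONLY;

-- Repair at table granularity instead of database granularity.
-- This still scans all partitions of the table, but skips every
-- other object in the database.
ALTER DATABASE BigDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKTABLE ('dbo.BigTable', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE BigDb SET MULTI_USER;
```

Is something like this a supported way to avoid repeatedly re-running the 36-hour whole-database repair?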