I have done everything I can think of to get this query to perform well, and it is just not getting there.
The odd part of the execution plan is that, early in the plan tree, the query (not written by me) does a merge join that drastically increases the number of rows. It carries these millions of rows all the way up the tree until, right before the SELECT, a hash match join filters them down to about 50K. I have not looked at many execution plans, but I have seen a few, and this merge join is very stark in Sentry Plan Explorer, where the number of rows suddenly jumps.
So, as-is, this query runs for hours (up to 7 so far). I did manage to grab an actual execution plan, which shows pretty much the same thing, except with a lot of warnings about how much estimated and actual row counts varied. I thought it might be statistics, so I ran a query to check: the stats were pretty well out of date. Some had never been updated, some hadn't been touched in several years, and some were recent, which looks to be a result of my recent defragmentation efforts. After updating them, still no change.
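For reference, this is roughly the kind of check I ran for statistics age (using `sys.dm_db_stats_properties`; the table name here is a placeholder, not one of my real tables):

```sql
-- Check when statistics were last updated for one of the joined tables
-- (dbo.MyBigTable is a placeholder name)
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.MyBigTable')
ORDER BY sp.last_updated;

-- And the update itself, with a full scan for the best histogram
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;
```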
Now, if I hint one of the joins (my choice of which was admittedly rather random) as an INNER LOOP JOIN, the query runs in under a minute. The row count is the same, and I even kept the data from the original run in case I need to compare.
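To illustrate the hint I mean (the real query joins 6 tables; this two-table sketch with made-up names just shows the syntax):

```sql
-- Simplified sketch; table/column names are placeholders
SELECT b.Id, s.SomeColumn
FROM dbo.BigTable AS b
INNER LOOP JOIN dbo.SmallTable AS s  -- forces nested loops for this join
    ON s.BigTableId = b.Id
WHERE b.SomeFilterColumn = @value;
```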
Finally, I have narrowed it down to the particular WHERE clause predicate where things go south. It's the last one that would complete the columns in all of the clustered indexes (did I mention there are 6 joined tables?).
I am baffled: I have a solution that is not really a solution, and I am not sure where to go from here.