I have an Analysis Services database that contains 5 cubes. It's 8GB and we're having a few speed problems with it. However, I've seen reports all over the net that much larger cubes (100GB+) can work without a problem, so I have been asked to increase the size of the cube before working out why it's slow: there's no point fixing the current speed issues if performance becomes unreasonable again once we add more data, even though it should be able to handle it. (Make sense? If not, post and I'll try to explain again.)
However, it's not that simple, is it? What makes someone's cube 100GB? Is it just raw data? Aggregations? Dimensions?
I'm after some comparisons, so if anyone can take a look at one of their Analysis Services databases and post a breakdown of where the disk space goes, that would be very helpful!
For mine: the fact table in the biggest cube has 1.4 million rows and comes to about 1GB. Three of the other cubes are about 500MB each, and one is a whole 84MB.
There are 60 dimensions. Most are only a few MB; the biggest is 1.6GB.
I don't think there are any aggregations (too many dimensions, I suspect, and the wizard didn't bother - not 100% sure as I didn't build it).
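If anyone does want to post a breakdown, one rough way to see where the disk goes is simply to total the file sizes under the database's folder in the SSAS data directory (there's one subfolder per cube, dimension, etc.). A quick sketch - the path here is just an example, use whatever your instance's DataDir is set to:

    import os
    from pathlib import Path

    # Example path only -- point this at your own instance's DataDir / database folder.
    DATA_DIR = Path(r"C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\MyWarehouse.0.db")

    def folder_size(path: Path) -> int:
        """Total size in bytes of all files under a folder."""
        return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

    # One subfolder per object; report the biggest first.
    sizes = sorted(
        ((folder_size(p), p.name) for p in DATA_DIR.iterdir() if p.is_dir()),
        reverse=True,
    )

    for size, name in sizes:
        print(f"{size / 2**20:8.1f} MB  {name}")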
From your answers I hope to be able to bloat my cube to a size larger than we'll actually need, so I can prove it won't fall over 6 months down the line.
Note: I tried increasing the main fact table to 30 million rows and it didn't bloat the cube that much.
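(For that test I just padded the fact table with random rows spread across the existing dimension keys. Roughly like the sketch below, though the column names here are made up, not our actual schema - generating the file takes a while at 30 million rows:)

    import csv
    import random
    from datetime import date, timedelta

    # Hypothetical fact-table columns -- swap in your real keys and measures.
    TARGET_ROWS = 30_000_000   # scale up from the current 1.4M rows
    START = date(2000, 1, 1)

    with open("fact_sales_bloat.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["DateKey", "ProductKey", "CustomerKey", "SalesAmount", "Quantity"])
        for _ in range(TARGET_ROWS):
            writer.writerow([
                (START + timedelta(days=random.randrange(3650))).strftime("%Y%m%d"),
                random.randrange(1, 5000),       # stay within existing dimension members
                random.randrange(1, 100_000),
                round(random.uniform(1, 500), 2),
                random.randrange(1, 10),
            ])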