This question is pretty long-winded, so please bear with me...
Two people have estimated a quantity, call it X, with results A and B. (To put a practical perspective on it, X could be the cost of a software project or the weight of the fuel for the space shuttle.)
A and B are range estimates (a1, a2) and (b1, b2), with means am and bm and variances av and bv, respectively.
Assume you know from controlled tests that:
- A and B are normally distributed.
- A and B have a correlation of 0.
- (a1, a2) and (b1, b2) are 90% confidence intervals (i.e., there is a 90% chance that a1 <= X <= a2, and a 90% chance that b1 <= X <= b2).
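For concreteness, here is how I'm getting means and variances out of the intervals (a sketch, assuming each interval is symmetric about its mean; for a normal variable a two-sided 90% interval spans the mean plus or minus about 1.645 standard deviations):

```python
# Sketch: recover mean and variance from a symmetric 90% confidence
# interval on a normally distributed estimate.
Z90 = 1.6449  # two-sided 90% critical value for the standard normal

def ci_to_mean_var(lo, hi, z=Z90):
    mean = (lo + hi) / 2
    sigma = (hi - lo) / (2 * z)  # half-width = z * sigma
    return mean, sigma ** 2

am, av = ci_to_mean_var(250, 260)   # the example interval for A below
bm, bv = ci_to_mean_var(0, 1000)    # the example interval for B below
```

The function names and the 1.6449 constant are my own choices here, not anything from the problem statement.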
In order to get the best estimate for xm, you want to use a weighted average of am and bm, i.e.,
xm = (1-k)am + kbm
But (and this is my question, finally), what should k be?
NOTE: Intuitively, if av=bv, then k should be 0.5; but, also intuitively, if av<bv then k should be <0.5 (i.e., more weight on the tighter estimate A). To illustrate the last point, say (a1, a2) is (250,260) and (b1, b2) is (0,1000). Wouldn't you put more weight on am?
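To check that intuition numerically: since A and B are independent, the variance of xm = (1-k)am + k*bm is (1-k)^2 * av + k^2 * bv, so I can just sweep k over [0, 1] for my example intervals and see where the variance is smallest. A quick sketch (the variances come from the interval half-widths, as above):

```python
# Sketch: for the example intervals, find the k in [0, 1] that
# minimises Var(xm) = (1-k)**2 * av + k**2 * bv (A, B independent).
Z90 = 1.6449  # two-sided 90% critical value for the standard normal

def ci_var(lo, hi, z=Z90):
    return ((hi - lo) / (2 * z)) ** 2

av = ci_var(250, 260)   # narrow estimate A
bv = ci_var(0, 1000)    # wide estimate B

ks = [i / 1000 for i in range(1001)]
best_k = min(ks, key=lambda k: (1 - k) ** 2 * av + k ** 2 * bv)
```

For these numbers best_k comes out essentially 0, i.e., nearly all the weight goes on am, which matches the intuition above.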
PS. I'm not a student, and this is not a homework question.