Greetings everyone,

My question may be a bit of a challenge. I am looking for a more efficient way to perform the calculation given by the formula below.

I have a column with 7,000 data points ordered in time. I am not just calculating the average of the top 50 values for the entire range in one shot. I am doing a trending analysis in which the first 49 values are ignored: once I reach the 50th value in time, I calculate the average of the top 50 values so far. At the 51st value I calculate the average of the top 50 out of 51; at the 52nd, the top 50 out of 52, and so on. The upper limit of the range grows by one with each new value in time.
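To make the calculation concrete, here is a minimal restatement of it outside Excel. This is just a naive sketch, assuming the column has been read into a plain list (the name `values` is hypothetical), and it mirrors the formula exactly: re-sort everything seen so far at each step.

```python
def running_top50_averages(values, k=50):
    """Naive sketch: for each point from the k-th onward, average the
    k largest values seen up to and including that point."""
    averages = []
    for i in range(k, len(values) + 1):
        # Re-sort the entire prefix each time, exactly as the
        # spreadsheet formula effectively does -- O(n log n) per point.
        top_k = sorted(values[:i], reverse=True)[:k]
        averages.append(sum(top_k) / k)
    return averages
```

The repeated sorting of an ever-growing prefix is what makes the spreadsheet version so slow as the range grows.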

The formula I am using (entered as an array formula with Ctrl+Shift+Enter and filled down) is:
=SUM(LARGE($C$2:$C2,ROW(INDIRECT("1:50"))))/50

The above formula has worked on a file of around 7,000 data points (ordered in time), where I calculated the average of the top 50 values up to each point in time. I can then plot this average against time and compare it with other known information (such as a change in pressure, temperature, etc.) to see whether there is a corresponding change in the average.

For those 7,000 data points, the above formula takes about 15 minutes to calculate on my Pentium 4 (2.6 GHz, 512 MB of RAM).

My problem: I have another data file with 55,000 data points and a third with over 120,000 data points (which I know Excel will not be able to handle on one sheet, but that is a separate issue). Since I don't have access to IBM's BlueGene/L or Japan's MDGrape-3, is there a more efficient way to perform this calculation in Excel?
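For what it's worth, the calculation itself doesn't require re-sorting everything at each step. A sketch of the idea, assuming the column can be exported to a plain list (the name `values` is hypothetical): keep a min-heap holding only the 50 largest values seen so far, plus a running sum of the heap's contents, so each new point costs one heap operation instead of a full re-sort.

```python
import heapq

def running_top_k_averages(values, k=50):
    """Incremental sketch: maintain a min-heap of the k largest values
    seen so far and a running sum, so each step is O(log k)."""
    heap = []       # min-heap of the current top-k values
    total = 0.0     # running sum of the heap's contents
    averages = []
    for x in values:
        if len(heap) < k:
            heapq.heappush(heap, x)
            total += x
        elif x > heap[0]:
            # New value displaces the smallest of the current top k.
            total += x - heap[0]
            heapq.heapreplace(heap, x)
        if len(heap) == k:
            averages.append(total / k)
    return averages
```

Something along these lines could be done in VBA as well, which would sidestep the slow array formula entirely; the heap idea carries over unchanged.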

For the record, I don't have access to Matlab (nor do I know how to use it), but I am sure it would handle this in a breeze.

Thoughts? Suggestions?

Thanks!
Richard