[Numpy-discussion] numpy.percentile multiple arrays


questions anon

Jan 24, 2012, 7:22:52 PM
to Discussion of Numerical Python
I need some help understanding how to loop through many arrays to calculate the 95th percentile.
I can do this easily by using numpy.concatenate to build one big array and then calling numpy.percentile on it, but this causes a memory error when I run it on hundreds of netCDF files (see code below).
Any alternative methods will be greatly appreciated.
 

import os
import numpy as N
from netCDF4 import Dataset

all_TSFC = []
for (path, dirs, files) in os.walk(MainFolder):
    for ncfile in files:
        if ncfile.endswith('.nc'):
            print "dealing with ncfile:", ncfile
            ncpath = os.path.join(path, ncfile)   # don't reuse the name ncfile
            nc = Dataset(ncpath, 'r')             # read-only; 'r+' is not needed
            TSFC = nc.variables['T_SFC'][:]
            nc.close()
            all_TSFC.append(TSFC)

big_array = N.ma.concatenate(all_TSFC)
Percentile95th = N.percentile(big_array, 95, axis=0)


Marc Shivers

Jan 24, 2012, 7:55:48 PM
to Discussion of Numerical Python
This is probably not the best way to do it, but I think it would work:

You could take two passes through your data: first calculate and store the median and the number of elements for each file.  From those, you can get a lower bound on the 95th percentile of the combined dataset.  For example, if all the files are the same size and you've got 100 of them, then the 95th percentile of the full dataset would be at least as large as the 90th percentile of the individual file median values.  Once you've got that cut-off value, go back through your files and pull out only the values larger than the cut-off.  Then you'd just need to figure out which percentile of this subset corresponds to the 95th percentile of the full dataset.
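A rough sketch of this two-pass idea (illustrative only, not tested on netCDF data: the `make_iter` helper, which must return a fresh iterator over the per-file arrays on each call, the equal-file-size assumption behind the cutoff, and the rank arithmetic in the second pass are all my own additions):

```python
import numpy as np

def percentile_two_pass(make_iter, q=95.0):
    # Pass 1: per-file medians and element counts.
    medians, sizes = [], []
    for a in make_iter():
        medians.append(np.median(a))
        sizes.append(a.size)
    total = sum(sizes)
    # Lower bound on the q-th percentile of the combined data, assuming
    # roughly equal file sizes: with q = 95 this is the 90th percentile
    # of the per-file medians (2*q - 100 in general).
    cutoff = np.percentile(medians, 2 * q - 100)
    # Pass 2: keep only the values above the cutoff.
    kept = np.sort(np.concatenate([a[a > cutoff] for a in make_iter()]))
    n_below = total - kept.size          # number of discarded values
    # Map the q-th percentile rank of the full dataset into the kept
    # subset, using the same linear interpolation numpy uses.
    rank = q / 100.0 * (total - 1) - n_below
    lo = int(np.floor(rank))
    frac = rank - lo
    hi = min(lo + 1, kept.size - 1)
    return kept[lo] * (1 - frac) + kept[hi] * frac
```

Because every discarded value lies below the cutoff (and hence below the target percentile), the result matches what numpy.percentile would return on the concatenated data, while only the tail above the cutoff is ever held in memory at once.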

HTH,
Marc



_______________________________________________
NumPy-Discussion mailing list
NumPy-Di...@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Brett Olsen

Jan 24, 2012, 9:26:28 PM
to Discussion of Numerical Python

If the range of your data is known and limited (i.e., you have a
comparatively small number of possible values, each repeated many
times), then you could do this by keeping a running cumulative
distribution function as you go through each of your files. For each
file, calculate a cumulative distribution function --- at each
possible value, record the fraction of that population strictly less
than that value --- and then it's straightforward to combine the
cumulative distribution functions from two separate files:
cumdist_both = (cumdist1 * N1 + cumdist2 * N2) / (N1 + N2)

Then, once you've gone through all the files, look for the value where
your cumulative distribution function equals 0.95. If your data
isn't structured with repeated values, though, this won't work,
because your cumulative distribution function will become too big to
hold in memory. In that case, I would take an iterative
approach: approximate the exact function by keeping only a
fraction of the possible values, which brackets the percentile you
want within a limited range; then walk through the files again,
calculating the function more exactly within that limited range, and
repeat until you have the value to the desired precision.
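As an illustration of the counting approach (my own sketch, not Brett's code: it assumes discrete data, merges per-file value counts, which is equivalent to the weighted CDF combination above, and returns the smallest value whose cumulative fraction reaches q/100 rather than numpy's interpolated percentile):

```python
import numpy as np
from collections import Counter

def percentile_from_counts(files, q=95.0):
    # Accumulate a running histogram of value -> count over all files;
    # `files` is any iterable yielding one array per file.
    counts = Counter()
    for a in files:
        vals, cnts = np.unique(np.asarray(a), return_counts=True)
        for v, c in zip(vals, cnts):
            counts[float(v)] += int(c)
    values = np.array(sorted(counts))
    # Cumulative fraction of all data <= each distinct value.
    cum = np.cumsum([counts[v] for v in values]) / sum(counts.values())
    # Smallest value whose cumulative fraction reaches q/100
    # (inverse-CDF convention, not interpolated).
    return values[np.searchsorted(cum, q / 100.0)]
```

Memory use is proportional to the number of distinct values, not the number of samples, which is exactly why this only pays off when values repeat heavily.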

~Brett

questions anon

Jan 24, 2012, 10:49:46 PM
to Discussion of Numerical Python
Thanks for your responses.
Because of the size of the dataset I will still end up with the memory error if I calculate the median for each file, and additionally the files are not all the same size. I believe this memory problem will also arise with the cumulative distribution calculation. I'm not sure I understand the second suggestion about the iterative approach, but I will have a go.
Thanks again

Olivier Delalleau

Jan 24, 2012, 11:00:10 PM
to Discussion of Numerical Python
Note that if you are OK with an approximate solution, and you can assume your data is somewhat shuffled, a simple online algorithm that uses essentially no memory consists of:
- choosing a small step size delta
- initializing your percentile estimate p to a more or less arbitrary value (a meaningful guess is better, though)
- iterating through your samples, updating p after each one: p += 19 * delta if sample > p, and p -= delta otherwise

The idea is that the 95th percentile is such that 5% of the data is higher, and 95% (19 times more) is lower, so if p is equal to this value, on average it should remain constant through the online update.
You may do multiple passes if you are not confident in your initial value, possibly reducing delta over time to improve accuracy.

-=- Olivier
