There are better ways to do this.
If you want the exact 95th percentile, you can keep an accumulator that counts the number of elements seen so far, plus a priority queue holding the top 5% of the data. Size the priority queue just a bit larger than necessary so that most elements can be rejected out of hand with a single comparison against the queue's minimum.
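In case a concrete version helps, here is a minimal sketch of that idea in Python. The class and parameter names are mine, and it assumes you know roughly how many elements will arrive (expected_n) so the heap can be sized a little above 5% of that total:

    import heapq
    import math

    class StreamingPercentile:
        # Min-heap of the largest elements seen so far; the heap minimum acts
        # as a threshold that rejects ~95% of incoming values immediately.
        def __init__(self, expected_n, quantile=0.95, slack=1.2):
            # Capacity a bit above the expected size of the top 5%.
            self.capacity = int(expected_n * (1.0 - quantile) * slack) + 1
            self.quantile = quantile
            self.count = 0      # accumulator: elements seen so far
            self.heap = []      # min-heap of the largest `capacity` elements

        def add(self, x):
            self.count += 1
            if len(self.heap) < self.capacity:
                heapq.heappush(self.heap, x)
            elif x > self.heap[0]:
                heapq.heapreplace(self.heap, x)  # x displaces the current minimum
            # else: x cannot be in the top 5%, so it is rejected out of hand

        def result(self):
            # Nearest-rank definition: the 95th percentile of n values is the
            # ceil(0.95 * n)-th smallest, i.e. this many positions from the top.
            rank_from_top = self.count - math.ceil(self.quantile * self.count) + 1
            if rank_from_top > len(self.heap):
                raise ValueError("stream was longer than expected_n allowed for")
            return heapq.nlargest(rank_from_top, self.heap)[-1]

The heap only ever holds around 5-6% of the data, and every rejected element costs a single comparison.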
This approach can become unwieldy as the number of elements grows. There are a variety of advanced algorithms that give good approximations, but one of the simplest is to keep several queues, each holding a progressively more down-sampled version of your data stream. I usually keep 5-8 of these queues and down-sample each one 10x more severely than the last.
Then to compute an extreme quantile (say, the 99.999th percentile), you compute where that percentile would fall in each of your queues and take the value from the least down-sampled queue that resolves it as your estimate.
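Here is a rough sketch of that multi-queue scheme. The specifics (random 1-in-10^i sampling per level, a fixed per-queue capacity, and ignoring any queue that has overflowed) are details I filled in, not the only way to do it:

    import random

    class MultiResolutionQuantiles:
        # Several bounded queues; level i keeps roughly 1 element in 10**i.
        def __init__(self, levels=6, capacity=10000):
            self.capacity = capacity
            self.rates = [10.0 ** -i for i in range(levels)]
            self.queues = [[] for _ in range(levels)]
            self.overflowed = [False] * levels

        def add(self, x):
            for i, rate in enumerate(self.rates):
                if random.random() < rate:
                    if len(self.queues[i]) < self.capacity:
                        self.queues[i].append(x)
                    else:
                        # This level no longer holds a complete down-sample
                        # of the stream, so skip it when answering queries.
                        self.overflowed[i] = True

        def quantile(self, q):
            # Use the least down-sampled level that still covers the whole
            # stream; it has the most samples, hence the best resolution at q.
            for queue, bad in zip(self.queues, self.overflowed):
                if queue and not bad:
                    data = sorted(queue)
                    return data[min(len(data) - 1, int(q * len(data)))]
            return None

I used random sampling here rather than taking every tenth element; that choice is what keeps the retained samples a fair draw from the stream regardless of the order the data arrives in.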
This algorithm is not as accurate as the more clever algorithms available, but it has two major advantages:
1) it is simple enough to get right the first time you code it
2) it can't be fooled by data that is in a diabolical order
Hope this helps.