--
You received this message because you are subscribed to the Google Groups "Sparrow Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sparrow-scheduler-users+unsub...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Any centralized scheme will have some throughput limit -- but the number 1500 is not fundamental.

-Kay
On Fri, Oct 14, 2016 at 11:16 AM, Max <yuha...@gmail.com> wrote:
Hi Kay,

Thanks for your quick response. So, is this a design restriction of Spark, or will any centralized scheme be unable to achieve a throughput higher than 1500 tasks/second?

Thanks again,
Max
On Friday, October 14, 2016 at 2:11:32 PM UTC-4, Kay Ousterhout wrote:
Hi Max,

The queue has some throughput limit (for Spark, it's about 1500 tasks/second). When tasks are submitted at a higher rate than this, the scheduler can't keep up with the rate at which new things are being added to the queue, so the queue gets longer and longer, and the queuing time approaches infinity.

-Kay
On Fri, Oct 14, 2016 at 10:54 AM, Max <yuha...@gmail.com> wrote:
Hi,

I have a question regarding Figure 12 in the Sparrow paper -- what's the main factor causing the Spark native scheduler to go to infinite scheduling time? If it's just a FIFO queue, I'm not sure why it would have infinite queuing time. I thought it only needs to call "pop" on the FIFO queue, and the execution time should be really short.

Thanks in advance,
Max
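Kay's explanation above can be sketched with a small fluid-model simulation: when the task arrival rate exceeds the scheduler's service rate, the backlog grows every second without bound, so queuing time diverges even though each individual "pop" is cheap. The specific rates below (other than the ~1500 tasks/second figure from the thread) are illustrative assumptions, not measurements.

```python
# Sketch of why queuing time diverges once arrival rate > service rate.
# Rates other than the ~1500 tasks/s Spark limit mentioned in the thread
# are made up for illustration.

def queue_length_over_time(arrival_rate, service_rate, seconds):
    """Return the queue length at the end of each second (fluid approximation)."""
    backlog = 0.0
    lengths = []
    for _ in range(seconds):
        backlog += arrival_rate                 # tasks submitted this second
        backlog -= min(backlog, service_rate)   # tasks the scheduler drains
        lengths.append(backlog)
    return lengths

# Below the limit, the scheduler keeps up and the queue stays empty.
stable = queue_length_over_time(arrival_rate=1400, service_rate=1500, seconds=10)

# Above it, each second leaves 500 tasks the scheduler can never drain,
# so the backlog (and hence queuing time) grows without bound.
overloaded = queue_length_over_time(arrival_rate=2000, service_rate=1500, seconds=10)
```

This is why the limit shows up as a sharp knee in a plot like Figure 12: the system is fine right up to the service rate, then queuing time shoots toward infinity just past it.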