Basically I am looking to gracefully exit on a kill signal. I want to
make sure I unschedule all tasks and wait until all running jobs have finished.
Hello Tim,
once stopped, the scheduler simply won't reschedule anything
afterwards. It doesn't take care of killing the running tasks for you.
> Basically I am looking to gracefully exit on a kill signal. I want to
> make sure I unschedule all tasks and wait until all running jobs have finished.
The scheduler doesn't know what your tasks are about. It just wants to
know when to fire them.
Maybe it would be interesting to have a hook, a method of the job that
gets called when the task is unscheduled. Most people schedule tasks
with Ruby blocks; adding a hook to a Schedulable would be easy, but for
a block it would require some thinking. Maybe you have suggestions.
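For illustration, such a hook could look like this. A hypothetical sketch, not rufus-scheduler's actual API: the `on_unschedule` method name, the `ReportTask` class and the scheduler-side `unschedule` helper are all invented here.

```ruby
# Hypothetical sketch of an "unschedule hook" on a Schedulable-style
# object (invented names, not rufus-scheduler's real API).
class ReportTask
  attr_reader :cleaned_up

  def call
    # the task's real work would go here
  end

  # hook the scheduler would invoke when the job gets unscheduled
  def on_unschedule
    @cleaned_up = true
  end
end

# scheduler side: fire the hook only if the schedulable defines one,
# so plain blocks and objects without the hook keep working unchanged
def unschedule(job)
  # ...remove the job from the job list here...
  job.on_unschedule if job.respond_to?(:on_unschedule)
end

task = ReportTask.new
unschedule(task)
```

For jobs scheduled with a plain block, an equivalent might be a second block passed at schedule time, but as said, that needs more thought.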
Best regards,
--
John Mettraux - http://jmettraux.wordpress.com
I first attempted to simply stop the scheduler and check the job count,
but that didn't work because "in" jobs drop off the list as soon as
they start (this is different from the old behavior, BTW).
In the end I ended up looking at Thread.list and waiting until that
number dropped to 1. I don't know if this approach would work with EM;
probably not.
It seems to me that if you only removed jobs from the list after they
were done (like the old way), instead of when they start, I could keep
polling the list until they were done.
Another option might be to create a "running_jobs" list, where you
could move a job from the jobs list to the running jobs list.
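For reference, the Thread.list wait loop boils down to something like this. A simplified sketch: a plain sleeping thread stands in for a job thread, and the timeout value is arbitrary.

```ruby
# Simplified sketch of waiting for job threads to finish by polling
# Thread.list (a plain thread stands in for a running "in" job).
job = Thread.new { sleep 0.2 }

# after stopping the scheduler, poll until only the main thread is
# left, with an arbitrary timeout as a safety net
deadline = Time.now + 2
sleep 0.05 while Thread.list.size > 1 && Time.now < deadline
```

As noted, this breaks down under EM, where the reactor owns threads that never go away.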
Hello Tim,
yes, this is different from the old behaviour.
> In the end I ended up looking at Thread.list and waiting till that
> number was 1. I don't know if this approach would work with EM,
> probably not.
Indeed.
> It seems to me if you only removed the jobs from the list after they
> were done (like the old way) instead of when they start I could keep
> polling the list till they were done.
>
> another option might be to create a "running_jobs" list where you
> could move the job from the jobs list to the running jobs list.
I will have a look at that. IIRC, the "running jobs" list idea would
be simpler:
http://github.com/jmettraux/rufus-scheduler/issues/#issue/1
Here is a bit of code my friend Aries wrote.
module Rufus
  module Scheduler

    class SchedulerCore

      # expose the running jobs alongside the scheduled ones
      def running_jobs
        @jobs.running_jobs
      end

      alias_method :all_jobs_old, :all_jobs

      def all_jobs
        all_jobs_old.merge(running_jobs)
      end
    end

    class JobQueue

      alias_method :initialize_old, :initialize

      def initialize
        initialize_old
        @running_jobs = []
      end

      # running jobs keyed by job_id, like the scheduled-jobs hashes
      def running_jobs
        @running_jobs.inject({}) { |h, j| h[j.job_id] = j; h }
      end

      alias_method :job_to_trigger_old, :job_to_trigger

      def job_to_trigger
        @mutex.synchronize do

          # drop jobs whose threads have died
          @running_jobs = @running_jobs.select { |j|
            j.job_thread && j.job_thread.alive?
          }

          if @jobs.size > 0 && Time.now.to_f >= @jobs.first.at
            running_job = @jobs.shift
            # replace a previous run of the same job, if any
            old_job = @running_jobs.find { |j| j.job_id == running_job.job_id }
            @running_jobs.delete(old_job) if old_job
            @running_jobs << running_job
            running_job
          else
            nil
          end
        end
      end
    end
  end
end
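Stripped of the rufus-scheduler internals, the bookkeeping idea above reduces to the following standalone sketch. `TinyQueue` and its hash-based jobs are invented for illustration; only the move-to-running-list-and-prune-dead-threads logic mirrors Aries' patch.

```ruby
# Standalone sketch of the "running_jobs" bookkeeping: a triggered job
# moves from the pending queue to a running list, and falls off the
# running list once its thread is dead.
class TinyQueue
  def initialize
    @jobs = []          # pending jobs, sorted by trigger time
    @running_jobs = []  # jobs whose threads are (or were) running
  end

  def schedule(job)
    @jobs << job
  end

  # prune finished jobs, then move the next due job to the running list
  def job_to_trigger(now)
    @running_jobs.select! { |j| j[:thread] && j[:thread].alive? }
    return nil if @jobs.empty? || now < @jobs.first[:at]
    job = @jobs.shift
    @running_jobs << job
    job
  end

  attr_reader :running_jobs
end

q = TinyQueue.new
q.schedule(at: Time.now.to_f, thread: nil)
job = q.job_to_trigger(Time.now.to_f + 1)  # the job is now due
job[:thread] = Thread.new { sleep 0.1 }    # simulate triggering it
```

A graceful exit then becomes: stop scheduling new work, then poll `running_jobs` until it is empty.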
OK, thanks for the inspiration.