The comment on the sched_exec() function says:
"sched_exec - execve() is a valuable balancing opportunity, because at
this point the task has the smallest effective memory and cache footprint."
Right, but when execve() is called, the task is about to start executing (it
will not be waiting on the runqueue as TASK_RUNNING/WAKING; it will get the
CPU). At this point, what is the need to try to balance it?
By focusing on the "smallest effective memory and cache footprint", aren't we
missing the point that we are unnecessarily pushing a task just as it is
about to execute?
Isn't that so? Or am I missing something?
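For reference, a paraphrased sketch of what sched_exec() does at exec time
(not the literal kernel source; helper names and locking vary by version,
and migrate_task_to() below is just a placeholder for the real migration
machinery):

/*
 * Sketch of sched_exec(): at exec time, ask the scheduler where this
 * task should run (flagged as an exec-time balance) and migrate it if
 * a better CPU is found.
 */
void sched_exec(void)
{
	struct task_struct *p = current;
	int dest_cpu;

	/* Pick a target CPU, with the SD_BALANCE_EXEC hint. */
	dest_cpu = select_task_rq(p, SD_BALANCE_EXEC);

	/* Already on the best CPU: nothing to do. */
	if (dest_cpu == smp_processor_id())
		return;

	/* Move the (cache-cold) task before the new image builds up
	 * any cache footprint on this CPU. */
	migrate_task_to(p, dest_cpu);
}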
Rakib,
Well, if there's an imbalance the 'slow' load-balancer will move it
around eventually anyway, and since it will by then have built up a larger
cache footprint, the move will be even more expensive.
So moving it when it's cheapest is the best all-round trade-off, isn't
it?
> So moving it when it's cheapest is the best all-round trade-off, isn't
> it?
I don't have any argument with the cache footprint issue.

thanks,
There is no overloaded task; it's the runqueue that is overloaded with
respect to the other runqueues. The load-balancer has to pick a 'random'
task and pray.
Current heuristics try to pick a task that hasn't been on the cpu for a
while, because for those the effective cache footprint is minimal.
> Why the _current_ task?
Because at exec it has an effectively zero cache footprint, and is thus an
ideal victim to move about.
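Roughly, that check looks like this (a simplified sketch of the kernel's
task_hot() test; the real function has more cases and varies by version):

/*
 * Simplified sketch of the "cache hot" test used when picking
 * migration victims: a task that ran on this CPU recently is assumed
 * to still have useful cache state there, so the balancer prefers
 * tasks that have been off the CPU for a while.
 */
static int task_hot_sketch(struct task_struct *p, u64 now)
{
	s64 delta = now - p->se.exec_start;	/* time since p last ran */

	/* Tunable threshold, see sysctl_sched_migration_cost. */
	return delta < (s64)sysctl_sched_migration_cost;
}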
By saying "overloaded task" I didn't mean any particular task; I meant a
runqueue with an excessive number of tasks relative to the other runqueues
(sorry for misleading you).
> Current heuristics try to pick a task that hasn't been on the cpu for a
> while, because for those the effective cache footprint is minimal.
>
Yes - the current heuristics do this, to make sure a task doesn't have to
wait too long. They push a process onto another (probably less loaded)
runqueue just so that it will get the CPU a bit more quickly. But once a task
has got the CPU, we should keep it out of the equation. The point of moving a
task is that it has to wait less, and at exec the current task doesn't have
to wait to get the CPU.
No, moving tasks isn't (primarily) about latency, it is about ensuring a
fair proportion of service time.
Do you have a particular workload you worry about or are you merely
trying to satisfy your curiosity?
> Do you have a particular workload you worry about or are you merely
> trying to satisfy your curiosity?
>
No, I don't have any particular workload.
Anyway, look at it this way, suppose you have 4 tasks on 2 cpus, cpu0
has 3 tasks and cpu1 has 1 task.
The currently running task on cpu0 does exec and gets moved to cpu1;
even though it gives up time on cpu0, it gains time on cpu1, because it
was eligible for 1/3 of cpu0's time, whereas it is eligible for 1/2 of
cpu1's time.
So it's a win, right?
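To make the arithmetic concrete (a toy userspace calculation, assuming
equal task weights):

#include <stdio.h>

/* Toy model of the example above: with equal-weight tasks, a task's
 * fair share of a CPU is simply 1 / nr_running on that runqueue. */
int main(void)
{
	double before = 1.0 / 3;	/* 3 tasks on cpu0: ~33% each      */
	double after  = 1.0 / 2;	/* moved to cpu1, now 2 tasks: 50% */

	printf("share before exec-time balance: %.1f%%\n", 100 * before);
	printf("share after  exec-time balance: %.1f%%\n", 100 * after);
	return 0;
}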
> Yes - a fair win. But what if the load balancer moves another task from
> the runqueue (the 2nd or 3rd task from your example)? That way we can also
> achieve 1/2 of cpu1's time, right? Those waiting tasks could have an
> effectively zero cache footprint too, if they were never run before -
> right?
Could have, but that's very unlikely, and here we have one we know for sure.