Linux has ionice(1) to "get/set program io scheduling class
and priority".
In FreeBSD we have nice(1), renice(8), and even rtprio(1)/idprio(1), but
if I understand correctly, they relate to CPU priority only, not to
I/O.
Is there some ionice(1) equivalent in FreeBSD?
--
I must not fear. Fear is the mind-killer. Fear is the little-death that
brings total obliteration. I will face my fear. I will permit it to pass
over me and through me. And when it has gone past I will turn the inner
eye to see its path. Where the fear has gone there will be nothing. Only
I will remain.
Bene Gesserit Litany Against Fear.
_______________________________________________
freebsd...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stabl...@freebsd.org"
> Is there some ionice(1) equivalent in FreeBSD?
No.
- Andrew
There is no IO scheduler in FreeBSD outside of some experimental patches
at
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/stable/2009-01/msg00316.html
(I have no idea of their status)
--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
-- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
That's not entirely true.
A thread's CPU priority is still going to affect its ability to be
scheduled on the CPU, and if it's waiting in the read() or write()
syscalls, then this will make a difference to how quickly it can
complete the next call.
However, it doesn't explicitly affect relative I/O prioritization. This
is another story entirely. I suspect that in a lot of cases adding a
weight to per-thread I/O isn't going to make much difference for disk
I/Os which are being sorted for the geometry (e.g. AHCI NCQ).
So I guess my question is, 'why do you need I/O scheduling, and what
aspect of system performance are you trying to solve with it' ?
Yes, that's what I assumed.
> However, it doesn't explicitly affect relative I/O prioritization. This
> is another story entirely. I suspect in a lot of cases adding a weight
> to per thread I/O, isn't going to make much difference for disk I/Os
> which are being sorted for the geometry (e.g. AHCI NCQ).
>
> So I guess my question is, 'why do you need I/O scheduling, and what
> aspect of system performance are you trying to solve with it' ?
Shell scripts based on dd or rsync, for example. Even a daily
antivirus (ClamAV) scan generates extensive I/O.
--
Even just renicing the processes should help here. The thread(s) will
still be scheduled when they need to run, but renicing will reduce the
rate at which the process(es) involved can saturate the kernel with
I/O requests.
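The renicing suggestion can be sketched as a small wrapper. This is a sketch, not anything from the thread: the `run_lowpri` name and the ClamAV example are assumptions, and since idprio(1) exists only on FreeBSD, the sketch falls back to plain nice(1) elsewhere:

```shell
#!/bin/sh
# Run a command at the lowest practical scheduling priority.
# idprio(1) (FreeBSD) puts the process in the idle class, so it runs
# only when nothing else wants the CPU -- which also slows the rate at
# which it can issue read()/write() calls. nice(1) is a portable fallback.
run_lowpri() {
    if command -v idprio >/dev/null 2>&1; then
        idprio 31 "$@"     # idle class, lowest idle priority
    else
        nice -n 19 "$@"    # maximum niceness on most systems
    fi
}

# Hypothetical nightly job from the thread (command is a placeholder):
run_lowpri echo "clamscan -r /home would run here"
```

On FreeBSD, idprio requires root (or membership in an allowed group) since an idle-class process can be starved indefinitely.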
Currently the FreeBSD kernel doesn't really make a distinction between
I/O transactions per process, because of how the unified VM buffer cache
works. read() and write() are satisfied from VM; VFS will cause a
vm_object to be created for each opened vnode, so read() will be
satisfied by the same set of vm_page's as for mmap().
The vnode_pager's getpages() routine will eventually read into physical
pages, using BIO requests (although it's usually the filesystem which
actually does this). The net effect is that VFS shares its buffers with
VM, and this does have some piecemeal benefit as the BIO subsystem will
read from the physical medium in large chunks.
It isn't impossible to account for I/O per-process. The Xen hypervisor
has a similar problem with per-domain I/O accounting. Currently, Domain
0 is responsible for block I/O, and it can be difficult for its
scheduler to tell things apart for similar reasons.
There have been previous research forks of FreeBSD to implement I/O
scheduling; Eclipse/BSD from Bell Labs was one of them. It might be a
good Google Summer of Code project for an interested computer science
student.
cheers,
BMS
There is actually some good and current code at
http://info.iet.unipi.it/~luigi/FreeBSD/
http://info.iet.unipi.it/~luigi/FreeBSD/geom_sched-20090307.tgz
It should hopefully still work on RELENG_7, as long as you refrain
from removing the scheduler from a live mounted fs. I used it on
RELENG_7; with minor changes it should work on 8.x/HEAD, and I hope to
come up with updated versions by the end of the month, when I am
done with the dummynet rewrite.
cheers
luigi
I would say it is pretty solid. I used it on my main workstation
and desktop for a few months last year without a glitch.
cheers
luigi
I appreciate your work on this -- truly I do -- but the above statement
is incredible. This is not meant as a flame-inducer, but there's really
no other way to phrase it:
This IS NOT what "production-ready" means to the rest of us,
particularly those of us in the server world. A single developer
running such code on their workstation for a few months is in no way
equivalent to running it on a heavily I/O-bound server.
I thought freebsd.org (or maybe ISC?) offered some test/development
boxes on the 'net available to developers who could test such code +
perform stress tests over long periods of time? I'm probably mistaken,
but I was under that impression.
--
| Jeremy Chadwick j...@parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |
Exactly - I said "pretty solid", not "production ready".
There are known issues with multiple-disk arrangements (gvinum etc.)
due to the reuse of a field in a structure.
These are solved in 8.x.
cheers
luigi
> I appreciate your work on this -- truly I do -- but the above
> statement is incredible. This is not meant as a flame-inducer, but
> there's really no other way to phrase it:
>
> This IS NOT what "production-ready" means to the rest of us,
> particularly those of us in the server world. A single developer
> running such code on their workstation for a few months is in no way
> identical to that of a heavily I/O-bound server.
>
> I thought freebsd.org (or maybe ISC?) offered some test/development
> boxes on the 'net available to developers who could test such code +
> perform stress tests over long periods of time? I'm probably
> mistaken, but I was under that impression.
>
Luigi wrote:
> i would say it is pretty solid. I used it on my main workstation
> and desktop for a few months last year without a glitch.
In what twilight zone does that mean 'Yes, it is production ready' to
warrant such a nice diatribe?
--
Alexander Kabaev
I can't help but think every program that can use too much I/O should
have its own I/O-rate switch of some sort.
I can only hope that, as *nix evolves, all programs that can
overuse I/O will offer a switch to slow themselves down, as rsync does.
While ionice can be a useful feature, it can also be said that
there are too many instances where it's being used as a hack to deal
with a program that isn't offering all the functionality it should.
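The rsync switch referred to above is --bwlimit, which takes a rate in KB/s. A minimal sketch follows; the paths are placeholders, and the demo copies a local tree so it is self-contained (a real run would target a remote host, as in the comment), with a plain cp fallback in case rsync is not installed:

```shell
#!/bin/sh
# Demonstrate rsync's built-in rate limit (--bwlimit, in KB/s).
mkdir -p /tmp/bwdemo_src /tmp/bwdemo_dst
echo data > /tmp/bwdemo_src/file

if command -v rsync >/dev/null 2>&1; then
    # Real use would look like:
    #   rsync -a --bwlimit=2000 /var/db/ backuphost:/backups/db/
    rsync -a --bwlimit=2000 /tmp/bwdemo_src/ /tmp/bwdemo_dst/
else
    cp -R /tmp/bwdemo_src/. /tmp/bwdemo_dst/   # fallback so the demo still runs
fi
```

Because the program throttles itself at the source, the kernel never sees the burst of I/O requests in the first place, which is exactly the per-program switch being argued for.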
Cheers,
Mike
In this thread, with due respect to the OP, the following might be
considered a crude hack, but it works!
Piping a process's output through dd(1) is a pretty fair
temporary solution if a program does not offer that capability itself.
For instance, I don't know if you are familiar with dump(8) at all, but
I use -P, or a pipe from that process to dd(1), to slow down the traffic
it tries to write over the network for backup purposes, and I also give
dump(8) a different nice level so it plays along.
So even if you just cat your output and read it back from fd(4) using
dd(1), you still have a chance of slowing things down a little, or of
writing in smaller increments that won't impact your environment as hard.
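The dd(1)-in-the-pipeline trick can be sketched as below. The dump(8)/ssh line in the comment uses placeholder hostnames and filesystems; the runnable part just pushes text through dd with a small output block size, which is the mechanism being described:

```shell
#!/bin/sh
# A small obs= makes dd issue many small writes instead of one large
# burst, smoothing what a producer like dump(8) would otherwise push
# at the disk or the network. Real use (placeholders):
#   dump -0 -a -f - /usr | dd obs=32k | ssh backuphost 'cat > usr.dump'
printf 'pretend this is dump output\n' \
    | dd obs=512 2>/dev/null > /tmp/dddemo.out
cat /tmp/dddemo.out
```

Note this shapes the write pattern rather than the rate; combining it with a nice level on the producer, as described above, is what slows things down overall.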
;)
--
jhell
Port: throttle-1.2
Path: /usr/ports/sysutils/throttle
Info: A pipe bandwidth throttling utility
Maint: po...@FreeBSD.org
B-deps:
R-deps:
WWW: http://klicman.org/throttle/
Might work too.
Vince