As dormando said, if you have a clear distinction between "need ASAP" and "when we get to it", you can just run ASAP jobs as HIGH priority and batch jobs as LOW, monitor both queues, and make sure you always have some spare worker capacity to get to those LOW jobs. Beware, though: these are *preemptive* priorities, which means HIGH jobs will *all* be sent to workers before any NORMAL or LOW jobs are.
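A toy illustration of that dispatch behavior (this is a simulation with heapq, not gearmand's actual internals; the job names and priority values are made up):

```python
import heapq

# Strict-priority dispatch: every HIGH job is handed to workers before
# any NORMAL or LOW job, regardless of submission order. Lower number
# sorts first in the heap.
HIGH, NORMAL, LOW = 0, 1, 2

queue = []
counter = 0  # tie-breaker so jobs at the same priority stay FIFO

def submit(priority, job):
    global counter
    heapq.heappush(queue, (priority, counter, job))
    counter += 1

submit(LOW, "nightly-report")
submit(HIGH, "user-request-1")
submit(LOW, "batch-resize")
submit(HIGH, "user-request-2")

# Pop everything: both HIGH jobs come out before either LOW job,
# even though a LOW job was submitted first.
dispatched = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(dispatched)
```

If your LOW queue ever fills faster than your spare capacity drains it, those batch jobs can starve indefinitely, which is exactly why you monitor both queues.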
However, if you don't actually have a clear and easy way to make that distinction at submit time, then the problem isn't gearmand: RabbitMQ and Kafka are going to work the same way if applied in the same simple manner. The mechanics of a single FIFO queue are your problem.
You may need a Quality of Service algorithm.
https://en.wikipedia.org/wiki/Network_scheduler has a bunch of them listed. A token bucket filter might work. Basically, wherever you're submitting jobs directly to workers, you'll want to precede that with a QoS check of some kind. Some algorithms buffer and delay; some drop/reject. You have to figure out what works for your codebase and economics.
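To make the token bucket idea concrete, here's a minimal sketch of what that pre-submit QoS check could look like. The class and its parameters are illustrative, not part of any queue library; the injectable clock is just there so you can test it deterministically:

```python
import time

class TokenBucket:
    """Minimal token-bucket gate: allows bursts up to `capacity`
    jobs, then throttles to `rate` jobs per second. The caller
    decides whether to buffer or reject when allow() returns False.
    """

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)          # tokens refilled per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start full
        self.clock = clock
        self.last = clock()

    def allow(self, cost=1.0):
        # Refill proportionally to elapsed time, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Deterministic demo with a fake clock instead of wall time.
fake_now = [0.0]
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: fake_now[0])

burst = [bucket.allow(), bucket.allow(), bucket.allow()]
fake_now[0] = 1.0  # one second later, one token has refilled
after_wait = bucket.allow()
print(burst, after_wait)
```

The check sits in front of whatever does the actual submit: `if bucket.allow(): submit_job(...) else: defer_or_reject(...)`. That's the "buffer and delay vs. drop/reject" decision from above, and it's the part you have to fit to your own economics.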
However, in both cases, you might be being penny-wise and pound-foolish here. Cloud resources are cheap and virtually limitless; software development resources are the opposite. It's almost trivial to expand your worker pool elastically as demand rises and falls, but not so much to debug a custom scheduling algorithm.