On Monday, August 17, 2015 at 8:23:41 AM UTC-3, Jens Rantil wrote:
Hi,
I ran some stress tests on beanstalkd some years ago using Solaris as the host. I pushed millions and millions of jobs until beanstalkd started to refuse new ones - on that server, the process had allocated about 4 GB of memory before it began refusing jobs.
The jobs are actively refused: your producer(s) is (are) notified and should be able to cache its (their) jobs until the server starts accepting them again.
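For illustration, here is a minimal sketch of what such a buffering producer could look like. The `send_job` callback is hypothetical (your beanstalkd client library would provide the real put call); the point is just the cache-and-retry shape. In the actual beanstalkd protocol, a `put` is answered with `INSERTED <id>` on success and with `OUT_OF_MEMORY` or `DRAINING` when the server refuses it.

```python
from collections import deque

def make_buffering_producer(send_job):
    """Wrap a raw send function so refused jobs are cached locally
    and retried in FIFO order once the server accepts jobs again.

    send_job(job) is a hypothetical callback that returns True when
    the server accepted the job and False when it refused it
    (e.g. it answered OUT_OF_MEMORY or DRAINING)."""
    backlog = deque()  # jobs the server has not accepted yet

    def produce(job):
        backlog.append(job)
        # Flush the backlog in order; stop at the first refusal
        # so job ordering is preserved.
        while backlog:
            if send_job(backlog[0]):
                backlog.popleft()
            else:
                break
        return len(backlog)  # jobs still waiting locally

    return produce
```

The local `deque` is simply the producer-side cache mentioned above; once the server (with consumers draining it) accepts jobs again, the backlog empties on the next call.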
Once I enabled the consumers, beanstalkd started accepting new jobs normally. I spawned some extra consumers to catch up, and then beanstalkd started giving memory back to the system. No memory leaks. No accepted job was lost.
I don't remember my job sizes, but they certainly didn't reach 256 KB.
Since then, I have never stressed beanstalkd that hard again - so your mileage may vary nowadays.
--
Lisias