Queuing a lot of jobs in batch gives me GEARMAN_COULD_NOT_CONNECT errors


Geoffroy R

Oct 22, 2018, 4:31:01 AM
to Gearman

I'm using Gearman to parse and import an XML feed in PHP.

One job parses the XML (400,000 items) and, for each item, creates a new job to import it.
But if I just call "doBackground" in a loop, only the first jobs reach the gearman server; after a few thousand jobs, the server seems unable to process the incoming requests as quickly as needed and rejects them for a while before accepting them again.

Error I get:
"PHP Warning:  GearmanClient::doLowBackground(): send_packet(GEARMAN_COULD_NOT_CONNECT) Failed to send server-options packet -> libgearman/connection.cc:433"

I tried adding some sleep between the calls ("usleep(5000);"), but this only seems to postpone the issue a bit.
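For reference, here is a sketch of the loop with a retry/backoff added. The helper name and the backoff values are just placeholders I made up, not something from the Gearman docs:

```php
<?php
// Sketch only: submit a background job, retrying with exponential backoff
// when gearmand refuses the packet. submitWithRetry() and the backoff
// numbers are placeholders, not an official recipe.
function submitWithRetry($client, $function, $payload, $maxRetries = 5)
{
    $delayUs = 5000; // start at 5 ms, double after each failure
    for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
        $client->doBackground($function, $payload);
        if ($client->returnCode() === GEARMAN_SUCCESS) {
            return true;
        }
        usleep($delayUs);
        $delayUs *= 2;
    }
    return false; // still failing after all retries
}

// Only run when the pecl gearman extension is loaded.
if (class_exists('GearmanClient')) {
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);
    foreach ($items ?? [] as $item) { // $items: the 400,000 parsed entries
        submitWithRetry($client, 'import_item', serialize($item));
    }
}
```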

- Is this the expected behaviour?
- How can I tune the Gearman server so it accepts jobs faster?
- Should I add another Gearman server?
- Should I increase the wait time between queuing requests?

Thanks for your help.

Brian Moon

Oct 22, 2018, 9:46:46 AM
to gea...@googlegroups.com, Geoffroy R


You need to determine why Gearmand is rejecting things. Is it running
out of memory? Is it running out of ports? What is the status of your
gearmand when the issues start?
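gearadmin (shipped with gearmand) will give you that snapshot; a sketch, assuming the default host/port:

```shell
#!/bin/sh
# Snapshot gearmand's internal state (assumes the default 127.0.0.1:4730).
if command -v gearadmin >/dev/null 2>&1; then
    # Per function: name, queued jobs, currently running jobs, available workers
    gearadmin --status || echo "could not reach gearmand"
    # Connected workers and the functions they have registered
    gearadmin --workers || echo "could not list workers"
else
    echo "gearadmin not installed"
fi
```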



Geoffroy R

Oct 22, 2018, 11:54:38 AM
to Gearman

Thanks for the quick response.

I don't see any errors in the gearmand log (verbosity is set to ERROR).

- Running out of memory? No, there is still plenty of memory available.
- Running out of ports? I don't think so, since it is the same process that is pushing all the jobs.
- What is the status? How can I check that? I still see the processes running, and I suppose it is distributing the jobs to the workers, since at some point it starts accepting jobs again.
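To rule out port exhaustion anyway, I guess something like this would show it on Linux (ss is from iproute2; 4730 is gearmand's default port):

```shell
#!/bin/sh
# Linux-only: look for signs of ephemeral-port exhaustion.
# A large number of TIME_WAIT sockets is the usual symptom.
ss -tan state time-wait | wc -l
# The ephemeral-port range the kernel can hand out
cat /proc/sys/net/ipv4/ip_local_port_range
# Connections involving gearmand's default port 4730
ss -tan '( sport = :4730 or dport = :4730 )' || true
```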

As I understand it, this should not happen, so I will investigate further and try to reproduce it on purpose.
I'm using the Redis persistent queue, so maybe I will try to reproduce it without any persistence.


Edward J. Sabol

Oct 22, 2018, 12:24:48 PM
to gea...@googlegroups.com, gryc...@gmail.com
On Oct 22, 2018, at 11:54 AM, Geoffroy R <gryc...@gmail.com> wrote:
> I don't have any error in the gearmand log. (verbose is set as ERROR)

Try setting the logging level to DEBUG.

When your process stops being able to submit jobs, can a *different* process submit jobs at that point? Knowing that would isolate whether the problem is with the client or the server. If it's with the server, my guess is that the server is running out of memory. There's at least one open issue on GitHub concerning a memory leak.
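The gearman command-line client that ships with gearmand is an easy way to make that check; a sketch, with a placeholder function name:

```shell
#!/bin/sh
# Submit one background job from a separate process while your PHP client
# is failing, to see whether gearmand itself still accepts work.
# "import_item" is a placeholder function name.
if command -v gearman >/dev/null 2>&1; then
    echo "test-payload" | gearman -b -f import_item || echo "submit failed"
else
    echo "gearman CLI not installed"
fi
```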

Are you using database persistence? Because that opens another can of worms....

When you have some DEBUG logs and have determined the problem is with the server, please file an issue at https://github.com/gearman/gearmand/issues. Please be sure to mention your operating system and distribution, whether you compiled from source or installed some package, your compiler version (if you compiled from source), and your gearmand command line arguments.

