--
You received this message because you are subscribed to the Google Groups "Mojolicious" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mojolicious+unsubscribe@googlegroups.com.
To post to this group, send email to mojol...@googlegroups.com.
Visit this group at https://groups.google.com/group/mojolicious.
For more options, visit https://groups.google.com/d/optout.
Finished my first attempt to implement a Redis backend.
My benchmark results
Clean start with 10000 jobs
Enqueued 10000 jobs in 9.98849105834961 seconds (1001.152/s)
29165 has started 4 workers
4 workers finished 1000 jobs each in 155.357006072998 seconds (25.747/s)
29165 has started 4 workers
4 workers finished 1000 jobs each in 121.667289972305 seconds (32.877/s)
Requesting job info 100 times
Received job info 100 times in 0.120177030563354 seconds
(832.106/s)
Requesting stats 100 times
Received stats 100 times in 0.540313005447388 seconds (185.078/s)
Repairing 100 times
Repaired 100 times in 9.60826873779297e-05 seconds (1040770.223/s)
And Dan's benchmark result on my environment:
Clean start with 10000 jobs
Enqueued 10000 jobs in 228.140287876129 seconds (43.833/s)
29268 has started 4 workers
4 workers finished 1000 jobs each in 295.22328209877 seconds (13.549/s)
29268 has started 4 workers
4 workers finished 1000 jobs each in 224.983679056168 seconds (17.779/s)
Requesting job info 100 times
Received job info 100 times in 3.12703800201416 seconds (31.979/s)
Requesting stats 100 times
Received stats 100 times in 0.699573993682861 seconds (142.944/s)
Repairing 100 times
Repaired 100 times in 1.10982012748718 seconds (90.105/s)
Some explanations:
I used the Redis EXPIRE feature to automatically delete jobs and workers, so the repair results here are irrelevant; I just wrote an empty function.
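The expiry idea above can be sketched like this. To keep the example runnable without a Redis server, a tiny in-memory stand-in mimics the two commands used; the key name and the two-day retention window are hypothetical, not the actual backend's. Against a real server, redis-py exposes the same-named `hset` and `expire` calls.

```python
import time

class FakeRedis:
    """Tiny in-memory stand-in so the example runs without a server."""
    def __init__(self):
        self.data, self.expiry = {}, {}
    def hset(self, key, mapping):
        self.data.setdefault(key, {}).update(mapping)
    def expire(self, key, seconds):
        self.expiry[key] = time.time() + seconds
    def exists(self, key):
        # Lazily drop the key once its TTL has elapsed, like Redis would.
        if key in self.expiry and time.time() >= self.expiry[key]:
            self.data.pop(key, None)
            return False
        return key in self.data

r = FakeRedis()
# Store a finished job and let Redis forget it after the retention window,
# instead of scanning for stale jobs in repair().
r.hset("minion:job:42", mapping={"state": "finished", "task": "demo"})
r.expire("minion:job:42", 172800)  # hypothetical two-day retention period
print(r.exists("minion:job:42"))   # still present until the TTL elapses
```

Because Redis reclaims expired keys itself, repair() has nothing left to do, which is why the repair numbers above are effectively free.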
Next, I used MessagePack instead of JSON (with the XS implementation on the Perl side) and Lua scripts instead of transactions.
My next step is to profile the Lua scripts (I want to try this:
https://stackoverflow.com/questions/16370333/can-i-profile-lua-scripts-running-in-redis).
So far, the only reason to store jobs as hashes is the ability to
check the parents and delayed fields. I think I can find another way
to store jobs, which could significantly improve performance.
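One alternative for the delayed field, sketched under my own assumptions (the key names and the stand-in client are hypothetical, not the author's design): index job ids in a sorted set scored by their ready-time, so a dequeue fetches every runnable id with one ZRANGEBYSCORE instead of reading each job's hash.

```python
import time

class FakeRedis:
    """In-memory stand-in for the two sorted-set commands used below."""
    def __init__(self):
        self.zsets = {}
    def zadd(self, key, mapping):
        self.zsets.setdefault(key, {}).update(mapping)
    def zrangebyscore(self, key, lo, hi):
        # Return members whose score falls in [lo, hi], like Redis does.
        return sorted(k for k, s in self.zsets.get(key, {}).items()
                      if lo <= s <= hi)

r = FakeRedis()
now = time.time()
# Job 7 became ready 5 seconds ago; job 8 is delayed another hour.
r.zadd("minion:delayed", {"7": now - 5, "8": now + 3600})
print(r.zrangebyscore("minion:delayed", 0, now))  # only job 7 is ready
```

Against a real server the same `zadd`/`zrangebyscore` calls exist in redis-py, and the range query could run inside one of the Lua scripts.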
So far it looks like it's worth the effort to continue. Enqueue is
already a bit faster than the current Pg backend (700 j/s for enqueue and
170 j/s for dequeue), so I want to believe there is a light at the
end of the tunnel and it's not the train.