extending Redis server


Raghava Mutharaju

Apr 17, 2011, 3:31:20 PM
to redi...@googlegroups.com
Hello all,

I use Redis with a Java binding, Jedis. I am running Redis over a cluster (an instance on each node), so each node in the cluster has two running processes -- the Redis server and my application. In Redis, since the value for a key can be a set, it acts as a queue (the key being the queue identifier and the set members being the queue items).
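
For concreteness, a "queue" here is just a Redis set accessed through Jedis. A minimal illustration (the key and item names are made up):

    import java.util.Set;
    import redis.clients.jedis.Jedis;

    public class QueueAsSet {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);
            jedis.sadd("queue:q1", "item1");                 // enqueue an item
            Set<String> items = jedis.smembers("queue:q1");  // read all queue items
            System.out.println(items);
        }
    }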

Now, my application's responsibility is to process all the queues on the node on which it is running. Processing an element in a queue could result in further insertions into local queues or into queues maintained on other nodes. The termination condition (for my application running on each node) is that all queues should be empty. Some sort of synchronization is required here because a local check for queue emptiness is not sufficient. For example, the following scenario can happen:

1) My application process P1, running on node N1, checks that all its queues are empty and terminates.
2) Process P2, running on another node N2, then inserts elements into the queues of N1.

My solution involves keeping track of insertion timestamps on each of the nodes and, after a point (when all the nodes say they are done), checking whether there are any newer insertions. So I was thinking of creating a proxy server, which takes the insertion request, logs the timestamp, and forwards the insertion request to the Redis server.

If I have to create such a proxy server, where should I start w.r.t code & design? Are there any other alternative solutions to the problem?

Thank you.

Regards,
Raghava.


Josiah Carlson

Apr 18, 2011, 1:44:41 AM
to redi...@googlegroups.com
Have a global Redis instance that keeps queue counts for all nodes in
a zset (you could also pick one instance as the "global" one). The
counts would be updated by each of the queue processing nodes as they
process items, see new items, etc.

For a node to shut down, it merely needs to check that the maximum
queue size in the global queue size zset is 0.
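
A rough sketch of what I mean, in Java/Jedis (the key, member, and
host names are made up for illustration):

    import java.util.Set;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Tuple;

    public class QueueCounts {
        public static void main(String[] args) {
            Jedis global = new Jedis("global-host", 6379);

            // Each processing node updates its own count whenever
            // its local queue sizes change.
            global.zadd("queue:counts", 42, "node:N1");

            // A node may shut down once the highest count in the zset is 0.
            Set<Tuple> top = global.zrevrangeWithScores("queue:counts", 0, 0);
            boolean canShutDown = top.isEmpty()
                    || top.iterator().next().getScore() == 0;
            System.out.println(canShutDown);
        }
    }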

- Josiah


Raghava Mutharaju

Apr 18, 2011, 2:17:12 AM
to redi...@googlegroups.com
If I understand this correctly, then all the nodes need to keep polling the global zset.

I wanted to avoid this polling and let the global termination controller do the necessary communication because each processing node already has sufficient tasks to do.

Regards,
Raghava.

Josiah Carlson

Apr 18, 2011, 1:02:33 PM
to redi...@googlegroups.com
On Sun, Apr 17, 2011 at 11:17 PM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
> If I understand this correctly, then all the nodes need to keep polling the
> global zset.

With the method I described, yes.

> I wanted to avoid this polling and let the global termination controller do
> the necessary communication because each processing node already has
> sufficient tasks to do.

You've got a global controller that is pinging every queue to
determine how much work it has to do? Even if you remember to poll
all of your queues multiple times, you may be stopping queues
prematurely due to race conditions, etc. (the "C" part of CAP). Is
there any particular reason why you are using a queue on every box
instead of a single queue that they all make requests from (something
like queue items being large, so slow to transfer across the network
multiple times)?

- Josiah

Raghava Mutharaju

Apr 18, 2011, 9:43:31 PM
to redi...@googlegroups.com
>> You've got a global controller that is pinging every queue to determine how much work they have to do?
Yes, there would be a termination controller (TC) running on some node, and it can check the queues on all the nodes.

The following is what I am planning to do. After each node finishes processing its set of queues, it sends a "done" message to the TC with its timestamp and terminates. The TC, after receiving "done" messages from all the nodes, starts a verification phase to determine whether processing on all the nodes can indeed be terminated. It queries the Redis server on each node for the latest insertion timestamp and compares it with the timestamp it holds for that node. If the timestamp from the Redis server is newer, it indicates that there was an insertion into the queue(s) of that node after that node terminated, so the TC asks queue processing to restart on that node. The whole cycle is repeated until, on all nodes, the timestamp the TC holds equals the insertion timestamp from the Redis server.

In order to implement the above technique, I would need the insertion timestamp from the Redis server. For this I thought of writing a proxy server which notes the insertion timestamp and passes the actual command on to the Redis server. I also just came across the MONITOR command of Redis, and am wondering if I can put it to use without writing any wrappers over the Redis server.
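
For concreteness, what I have in mind is a thin wrapper along these lines (only a sketch; the key names are invented):

    import redis.clients.jedis.Jedis;

    public class TimestampingQueue {
        private final Jedis jedis;

        public TimestampingQueue(Jedis jedis) { this.jedis = jedis; }

        // Record the latest insertion time at a well-known key,
        // then perform the actual insertion.
        public void insert(String queueKey, String item) {
            jedis.set("last-insertion-ts",
                      String.valueOf(System.currentTimeMillis()));
            jedis.sadd(queueKey, item);
        }
    }

The TC would then read "last-insertion-ts" from each node's Redis server and compare it with the timestamp in that node's "done" message.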

>> Is there any particular reason why you are using a queue on every box
Data needs to be spread across the cluster for scalability reasons, and so each node maintains its share of that data in the form of queues.


Regards,
Raghava.

Josiah Carlson

Apr 19, 2011, 2:14:14 AM
to redi...@googlegroups.com
On Mon, Apr 18, 2011 at 6:43 PM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
>>> You've got a global controller that is pinging every queue to determine
>>> how much work they have to do?
> Yes, there would be a termination controller (TC) running in some node and
> it can check the queues running on all the nodes.
> The following is what I am planning to do. After each node finishes
> processing its set of queues, it sends a "done" message to the TC with
> its timestamp and terminates. The TC, after receiving "done" messages
> from all the nodes, starts a verification phase to determine whether
> processing on all the nodes can indeed be terminated. It queries the
> Redis server on each node for the latest insertion timestamp and
> compares it with the timestamp it holds for that node. If the
> timestamp from the Redis server is newer, it indicates that there was
> an insertion into the queue(s) of that node after that node
> terminated, so the TC asks queue processing to restart on that node.
> The whole cycle is repeated until, on all nodes, the timestamp the TC
> holds equals the insertion timestamp from the Redis server.
> In order to implement the above technique, I would need the insertion
> timestamp from the Redis server. For this I thought of writing a proxy
> server which notes the insertion timestamp and passes the actual
> command on to the Redis server. I also just came across the MONITOR
> command of Redis, and am wondering if I can put it to use without
> writing any wrappers over the Redis server.

Since the TC needs to ask for a restart, you don't need to worry about
timestamps. When the queue processor starts up, it inserts its
process id into Redis at a known key. When the TC checks, all it needs
to do is verify that the queue processor is still running (checking
the pid on the box) and, if not, whether there are any items in the
local queue. If there are items, it starts the local queue processor
back up.
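
Something along these lines, as a sketch (the key name and the
liveness check are assumptions):

    import java.lang.management.ManagementFactory;
    import redis.clients.jedis.Jedis;

    public class PidRegistration {
        public static void main(String[] args) {
            Jedis local = new Jedis("localhost", 6379);

            // On most JVMs the runtime name is "pid@hostname".
            String pid = ManagementFactory.getRuntimeMXBean()
                    .getName().split("@")[0];

            // The queue processor registers itself at a known key; the
            // TC later reads this, checks whether the process is still
            // alive (e.g. kill -0 <pid> on that box), and if it is
            // gone, checks the local queues before restarting it.
            local.set("queue-processor:pid", pid);
        }
    }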

No timestamps necessary, no proxy necessary.

>>> Is there any particular reason why you are using a queue on every box
> Data needs to spread across the cluster for scalability reasons and so each
> node maintains that data in the form of queues.

"Scalability reasons" is pretty ambiguous. A few thousand chunks of
data a second? Multi-kilobyte/megabyte chunks of data being passed
around? Too much data to be stored on a single node at once?

I only ask because having another piece of software controlling the
queue processors (starting them back up if necessary, if I understand
your architecture) is yet another piece of software where little bugs
can creep in.

Regards,

Raghava Mutharaju

Apr 19, 2011, 2:35:17 AM
to redi...@googlegroups.com
>> if not, whether there are any items in the local queue. No timestamps necessary
That is right. On each node I would have thousands of queues, which would mean that many emptiness checks. Instead, I wanted to make a single call asking for the timestamp and compare it with the timestamp (of that node) that the TC holds.

>> Too much data to be stored on a single node at once?
Yes, that is the basis of this work. Apart from that, instead of asking one node to take up processing of 600k+ queues, wouldn't it be better if 20 nodes split up the load and processed them? Having said that, you are right that data would be going around the cluster and that would take some time.

Regards,
Raghava.

Josiah Carlson

Apr 19, 2011, 3:45:43 AM
to redi...@googlegroups.com
On Mon, Apr 18, 2011 at 11:35 PM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
>>> if not, whether there are any items in the local queue. No timestamps
>>> necessary
> That is right. On each node I would have thousands of queues, which
> would mean that many emptiness checks. Instead, I wanted to make a
> single call asking for the timestamp and compare it with the
> timestamp (of that node) that the TC holds.
>>> Too much data to be stored on a single node at once?
> Yes, that is the basis of this work. Apart from that, instead of
> asking one node to take up processing of 600k+ queues, wouldn't it be
> better if 20 nodes split up the load and processed them? Having said
> that, you are right that data would be going around the cluster and
> that would take some time.

That is interesting. You have 600k queues? Why so many? Does each
queue processor take 1 queue, or do they handle many queues? Why not
just 1 queue per host? Can you tag your data so that you don't have as
many queues? Again, how much data is being passed through? Thousands
of items a second? How big is each item to be processed?

Even at 600k/20, that's 30k queues per host. That's still a lot of
keys to be checking for queue items. I'm sure there's a way to cut
that down.

- Josiah

Raghava Mutharaju

Apr 19, 2011, 1:26:32 PM
to redi...@googlegroups.com
One of the big datasets we have would generate that number of queues. The goal is to handle even bigger datasets.

>> Does each queue processor take 1 queue, or do they handle many queues?
It (the application process running on each node) handles many queues -- all the queues on that particular node. In the case of 600k/20, it would process 30k queues, i.e., there would be 30k keys and each key would have several values (in the 100s or 1000s).

Also note that not all the queues are for processing. Some of them are helper queues, so their elements need not be processed. I haven't actually calculated the ratio of processing vs. helper queues, but processing queues could be 60%-70% of the total.

>>  That's still a lot of keys to be checking for queue items
Do you think that this is a lot of load on one node? The number of nodes could be expanded using third party cloud services (Amazon EC2 etc). 

>> Why not just 1 queue per host?
That would be too little load on one host (node), wouldn't it?

>> Can you tag your data so that you don't have as many queues?
I am already using some tags. Jedis has a feature called key tags which I am making use of, so that related queues are assigned to one node.

>> Again, how much data is being passed through?
Is it the data that is passed around among the queues of different nodes? 

>> I'm sure there's a way to cut that down.
That would certainly help. We are looking into how to cut down the number of queues.

Regards,
Raghava.

Josiah Carlson

Apr 20, 2011, 2:33:35 PM
to redi...@googlegroups.com
Just to make sure that we are using the same terminology here, because
I might be a little confused.

When I say "queue", I mean a single list in Redis (or a zset, if you
do prioritization).
When I say "queue item", "item", or "data", I mean a string that has
information to process that piece of information. Something like the
json string: "{'method':'aggregate', 'name':'counter_1', 'value':24}",
which may, for example, mean that a counter needs to be incremented by
24.
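
In Jedis terms, that kind of queue looks roughly like this (names
illustrative):

    import redis.clients.jedis.Jedis;

    public class ClassicQueue {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);
            jedis.rpush("queue:work",
                    "{'method':'aggregate', 'name':'counter_1', 'value':24}");
            String next = jedis.lpop("queue:work"); // null when queue is empty
            System.out.println(next);
        }
    }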

On Tue, Apr 19, 2011 at 10:26 AM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
> One of the big datasets we have would generate that number of queues. The
> goal is to handle even bigger datasets.

In everything that I've used Redis (and queues in general) for, the
only time I've ever needed to create more than one queue is in the
case where I was pushing so many items onto the queues that one box
couldn't handle it. That said, I have moved 1000+ items per second
through a single Redis queue, sustained for weeks (over 1 billion
items processed without issue). When I did use multiple instances of
Redis, it was because I needed to move 50k items/second.

>>> Does each queue processor take 1 queue, or do they handle many queues?
> It (the application process running on each node) handles many
> queues -- all the queues on that particular node. In the case of
> 600k/20, it would process 30k queues, i.e., there would be 30k keys
> and each key would have several values (in the 100s or 1000s).
> Also note that not all the queues are for processing. Some of them
> are helper queues, so their elements need not be processed. I haven't
> actually calculated the ratio of processing vs. helper queues, but
> processing queues could be 60%-70% of the total.

I am still confused as to why you need so many queues. Do you have
priorities? Do you have one queue for each different type of item to
be processed?

>>>  That's still a lot of keys to be checking for queue items
> Do you think that this is a lot of load on one node? The number of nodes
> could be expanded using third party cloud services (Amazon EC2 etc).

That number of queues means that you need to check a lot of keys to
pull items from. That seems like a waste to me.

>>> Why not just 1 queue per host?
> That would be too little load on one host (node), wouldn't it?

Not if you take all of the items in all of the queues and put them in the 1 queue.

>>> Can you tag your data so that you don't have as many queues?
> I am already using some tags. Jedis has a feature called key tags
> which I am making use of, so that related queues are assigned to one
> node.

That's a different thing. What I mean is that if you have a json work
item like "{'name':'counter_1', 'count':24}" that you would typically
place in the "queue:aggregate" queue, you could alter the data to be
"{'queue':'aggregate', 'name':'counter_1', 'count':24}" and put that
work item, along with all of the others, in a single queue.
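
Sketched with Jedis (the key name and the dispatch step are
illustrative):

    import redis.clients.jedis.Jedis;

    public class SingleQueue {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);

            // Fold the target queue into the item itself...
            jedis.rpush("queue:all",
                    "{'queue':'aggregate', 'name':'counter_1', 'count':24}");

            // ...and have workers pop from the single queue and
            // dispatch on the embedded tag.
            String item = jedis.lpop("queue:all");
            System.out.println(item);
        }
    }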

>>> Again, how much data is being passed through?
> Is it the data that is passed around among the queues of different nodes?

Single node, all of your nodes, either; just say which one the number
refers to. Bytes per item, number of items, desired throughput
(number of items processed per second), the amount of processing time
(ignoring queue time) needed per item, etc. With that information,
there is some balancing that can be done to get you to the
destination you need.

>>> I'm sure there's a way to cut that down.
> That would certainly help. We are looking into how to cut down the
> number of queues.

Regards

Raghava Mutharaju

Apr 21, 2011, 5:52:53 PM
to redi...@googlegroups.com
sorry for the delay in replying. 

Regarding usage of terminology, we are on the same page; I might just have used it in a confusing way. I use regular sets (SADD), not zsets.

For example, my queues could be as follows:

p1: {f1, f2, f3, f4, f5}
p2: {m1, m2, m3, m4, m5}

p1 (key) represents a person and the values are his forefathers, including his father.
p2 (key) represents another person and the values are his foremothers, including his mother.

There is a certain kind of relationship between the values and the keys. During the processing of the queues, I need queries like: for a person p2, get me all the values. If I mix up all the values and put them in one very large queue, then the relationships are lost. This wouldn't be the case if I tagged each item with its key, but then I wouldn't be able to query in the way I specified above (get all values for p2).

During processing, values could be inserted back into the queues. So, with a single merged queue, multiple instances of my application across many nodes would all be accessing only 1 queue.

>> When I did use multiple instances of Redis, it was because I needed to move 50k items/second.
That is a very good number. How did you achieve that? Is it because of the high bandwidth of the network, or something else?

Here are some statistics:

-- Used 2 nodes. The statistics collector program ran on one of these nodes. Let that node be L and the other node be O.
-- There are 28k queues on each node.
-- The largest queue length across both nodes is 540 (i.e., some queue has that many queue items).
-- The max size of a queue item is 186 bytes (measured using the getBytes() method of the String class in Java). The items of a queue are not all the same size.
-- The average queue length is 2 (most of the queues have fewer than 10 values).
-- On node L, the time taken to iterate over all the local queues (and their members), just to collect these statistics, is 1.6 secs.
-- From node L, the time taken to fetch all the queues on node O and iterate over the queues and their members is 4.7 secs.

I expected the last two numbers to be larger than they are. I haven't yet measured the time taken to process each item in a queue. I will look into where the most time is being spent.

Thank you.

Regards,
Raghava.

Josiah Carlson

Apr 21, 2011, 10:29:53 PM
to redi...@googlegroups.com
On Thu, Apr 21, 2011 at 2:52 PM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
> sorry for the delay in replying.
> Regarding usage of terminology, we are on the same page; I might just
> have used it in a confusing way. I use regular sets (SADD), not
> zsets.
> For example, my queues could be as follows:
> p1: {f1, f2, f3, f4, f5}
> p2: {m1, m2, m3, m4, m5}
> p1 (key) represents a person and the values are his forefathers,
> including his father.
> p2 (key) represents another person and the values are his
> foremothers, including his mother.

This is where the confusion was stemming from. I see "queues" only as
a way of organizing tasks to be processed; in the case of Redis, using
RPUSH and LPOP on the LIST structure. You are actually using SETs as a
way of modeling relationships (paternal and maternal ancestry) as a
sort of directed graph, while also wanting the members to be
processed.

Different terminology, and different use-cases.

> There is a certain kind of relationship between the values and the
> keys. During the processing of the queues, I need queries like: for a
> person p2, get me all the values. If I mix up all the values and put
> them in one very large queue, then the relationships are lost. This
> wouldn't be the case if I tagged each item with its key, but then I
> wouldn't be able to query in the way I specified above (get all
> values for p2).

Looking back at your earlier problem description, you do the following:
1. pull all related items for a person, maybe inserting other items
into other sets
2. when all to-be-processed sets have been processed across all nodes,
shut down your processors

Question: how are each of your nodes discovering which sets need to be
processed, and how do you know what order to process them in?

Regards,
- Josiah

> During processing, values could be inserted back into the queues. So,
> with a single merged queue, multiple instances of my application
> across many nodes would all be accessing only 1 queue.
>>> When I did use multiple instances of Redis, it was because I needed to
>>> move 50k items/second.
> That is a very good number. How did you achieve that? Is it because
> of the high bandwidth of the network, or something else?

We were in EC2. It only took a couple boxes, as our queue items were small.

Raghava Mutharaju

Apr 21, 2011, 11:29:17 PM
to redi...@googlegroups.com
>> In the case of Redis, using RPUSH and LPOP on the LIST structure
aah, that is right. My usage of "queue" is not the classic FIFO data structure. You are right that I just use Redis sets.

>> Question: how are each of your nodes discovering which sets need to be processed, and how do you know what order to process them in?
At the time of loading the data into the queues, I maintain on each node another queue which keeps track of all the keys (local keys, i.e., keys within the same node) that that node should process. When processing starts, the queues associated with these keys are processed. Order does not matter, and the order of insertions/deletions into a queue also doesn't matter (that is why I didn't use RPUSH & LPOP), because there are some triggers (processing tasks) that get fired based on the type of item inserted. A rough sketch of the tracking set is below.
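
Roughly, with invented key names:

    import java.util.Set;
    import redis.clients.jedis.Jedis;

    public class KeyTracking {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);

            // While loading, register every local queue key in a
            // per-node tracking set...
            jedis.sadd("keys-to-process", "p1");
            jedis.sadd("keys-to-process", "p2");

            // ...and when processing starts, this one set drives the work.
            Set<String> keys = jedis.smembers("keys-to-process");
            System.out.println(keys);
        }
    }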

Regards,
Raghava.  

Geoffrey Hoffman

Apr 21, 2011, 11:33:00 PM
to redi...@googlegroups.com
> there are some triggers (processing tasks) that get fired based on the type of item inserted.

I'm curious how the trigger fires?

Raghava Mutharaju

Apr 22, 2011, 12:14:43 AM
to redi...@googlegroups.com
Excuse me if I messed up the terminology again :)

This is like a rule-processing system. The items in a queue can be of 3 types. Each item type corresponds to some subset of rules, i.e., when an item of that type has to be processed, I know which rules to fire. Each rule involves a certain set of tasks -- these tasks could result in further insertions into queues (local or on other nodes), which is what I mentioned previously.

Regards,
Raghava.

On Thu, Apr 21, 2011 at 11:33 PM, Geoffrey Hoffman <geoffrey...@gmail.com> wrote:
> there are some triggers (processing tasks) that get fired based on the type of item inserted.
>
> I'm curious how the trigger fires?


Geoffrey Hoffman

Apr 22, 2011, 1:08:44 AM
to redi...@googlegroups.com
No, I think what you said makes sense, but I was wondering whether you actually implemented a trigger system in the Redis C source (the thread is "extending Redis", after all), or whether you implemented additional method calls in your Java wrapper for when (immediately before or after) you (successfully) write a certain type of key. I'm not a C programmer, so I don't think I am much help in answering your question, but I thought your use case and this thread were interesting, that a trigger might be a rather useful feature for my application and perhaps others, and I was wondering how you approached it. For example, I could use something like

if ( $KEY matches PTN1* ) then LPUSH PTN1SET, $KEY
if ( $KEY matches PTN2* ) then LPUSH PTN2SET, $KEY
I do something like this in PHP code currently, and wondered whether a trigger type of feature for Redis could be made generic, or whether it has been suggested before. For example, from an earlier discussion:
<quote>
On Wed, Feb 9, 2011 at 1:40 PM, Demis Bellot <demis.bel...@gmail.com> wrote: 
> Salvatore, I understand that your bandwidth is very limited atm, but the 
> ability to have a trigger will allow client / redis community developers to 
> better provide enhanced / higher-level functionality. 

I don't like the idea of triggers for a number of reasons... it's too 
stateful for a database IMHO. 
But what I like is the idea of publishing changes in the key space via 
our built-in Pub/Sub, and that's pretty trivial to accomplish. 
</quote>

which leads me to want to read more on the pub/sub feature and why it is going to get deprecated.

Sergei Tulentsev

Apr 22, 2011, 3:54:02 AM
to redi...@googlegroups.com
>> which leads me to want to read more on the pub/sub feature and why it is going to get deprecated.

Is it? I want to read about that too. :-)
Best regards,
Sergei Tulentsev

Raghava Mutharaju

Apr 22, 2011, 8:44:50 AM
to redi...@googlegroups.com
No, I am not extending Redis to do the trigger-style processing. It is part of the code written on top of it.

I did start the thread with the intention of extending the Redis server, but Josiah and I ended up discussing alternative solutions (and later on, performance). It did get a bit sidetracked from the thread subject :).

Regards,
Raghava.

Josiah Carlson

Apr 25, 2011, 5:51:23 PM
to redi...@googlegroups.com
Back to the topic at hand; I don't really see a problem with your
heavy use of sets, either in terms of memory use, etc. Also, based on
your statement about the use of another set to determine which queues
need to be processed, I think that you can just query that one set to
determine whether you have any further work to do.

To go deeper, 600k sets really isn't all that many. There are some
people who have millions of sets in a single Redis instance, and
others with millions of hashes. The only real questions are whether
1. you have enough memory, and 2. you have enough processors to
process all of your data.

You stated earlier that you had something like:

p1: {f1, f2, f3, f4, f5}
p2: {m1, m2, m3, m4, m5}

If you wanted to reduce your number of sets, you can easily use a sorted set:
pf -> {1_f1 -> ..., 1_f2 -> ..., 1_f3 -> ..., ...}
pm -> {2_m1 -> ..., 2_m2 -> ..., 2_m3 -> ...}

The scores are derived from the "id" portion of p<id>, which is turned
into pf -> {<id>_item -> <score>, ...} in the way that I describe in
this thread:
https://groups.google.com/forum/#!topic/redis-db/zCv6C8RI3I4 . If you
can encode your ids into a binary string, that should be good for 256
billion unique sets (if necessary), and could reduce your number of
sets from 30k/host to presumably fewer than a dozen (put all of your
father information in the same zset, all of your mother information in
a different zset, etc.).

One thing to note: sorted sets use significantly more memory, so
before switching to them, you should verify that you have enough.
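
A hedged sketch of that layout in Jedis (the key name and the score
encoding are assumptions for illustration):

    import java.util.Set;
    import redis.clients.jedis.Jedis;

    public class MergedZset {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);

            // Fold the person id into each member, and use it as the score.
            long personId = 1;
            jedis.zadd("pf", personId, personId + "_f1");
            jedis.zadd("pf", personId, personId + "_f2");

            // All values for person 1 are then one range query away.
            Set<String> values = jedis.zrangeByScore("pf", 1, 1);
            System.out.println(values);
        }
    }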

- Josiah

Raghava Mutharaju

Apr 26, 2011, 1:02:57 PM
to redi...@googlegroups.com
Hello Josiah,

Thank you for the reply. Memory isn't a problem. For the data sets I have, the processing power (number of nodes) shouldn't be a problem either.

Have a look at the following 2 scenarios (one suggested by you and my current set up):

Current set up:

1) ID1 --> {val11, val12, val13}
    ID2 --> {val21, val22, val23}

Here the vals are related to the IDs and so are separated into separate queues (sets), i.e., val21 is related only to ID2, not to ID1.

Retrieval: I use smembers() on the key.

Suggested set up

2) ID1_2 --> {1_val1, 1_val2, 1_val3, 2_val1, 2_val2, 2_val3}
Here the 2 sets are merged, but the values are differentiated using value IDs and their scores.

Retrieval: I have to use ZRANGE, ZRANGEBYSCORE

The number of sets would definitely be reduced since we are merging sets, but would the processing speed improve? If anything, the use of ZRANGE/ZRANGEBYSCORE seems to be more expensive. In 2), I would also have the additional overhead of calculating scores.

Regards,
Raghava.

Josiah Carlson

Apr 26, 2011, 4:44:33 PM
to redi...@googlegroups.com
On Tue, Apr 26, 2011 at 10:02 AM, Raghava Mutharaju
<m.vijay...@gmail.com> wrote:
> Hello Josiah,
> Thank you for the reply. Memory isn't a problem. For the data sets I have,
> the processing power (number of nodes) shouldn't be a problem either.
> Have a look at the following 2 scenarios (one suggested by you and my
> current set up):
> Current set up:
> 1) ID1 --> {val11, val12, val13}
>     ID2 --> {val21, val22, val23}
> Here the vals are related to the IDs and so are separated into
> separate queues (sets), i.e., val21 is related only to ID2, not to
> ID1.
> Retrieval: I use smembers() on the key.
> Suggested set up

Not suggested; just an alternate method if your primary consideration
is "reduce the number of sets". You had mentioned the hassle of
checking many different sets for size, which I originally took as
"check thousands of sets on every host". If you have a single set that
lists the items that still need to be processed, then you are down to
1 per host already, and I'd say that you have a sufficient solution
(though I generally use sorted sets if I want unique items with the
ability to pull one item at a time).
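
For instance, pulling one item at a time from a sorted set might look
like the following sketch (the key name is invented, and note this is
not atomic without MULTI/WATCH or similar):

    import java.util.Set;
    import redis.clients.jedis.Jedis;

    public class PopOne {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);
            Set<String> first = jedis.zrange("pending", 0, 0);
            if (!first.isEmpty()) {
                String item = first.iterator().next();
                jedis.zrem("pending", item); // claim and remove the item
            }
        }
    }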

> 2) ID1_2 --> {1_val1, 1_val2, 1_val3, 2_val1, 2_val2, 2_val3}
> Here the 2 sets are merged, but the values are differentiated using
> value IDs and their scores.
> Retrieval: I have to use ZRANGE, ZRANGEBYSCORE
> The number of sets would definitely be reduced since we are merging
> sets, but would the processing speed improve? If anything, the use of
> ZRANGE/ZRANGEBYSCORE seems to be more expensive. In 2), I would also
> have the additional overhead of calculating scores.

Any latency differences are more than likely imperceptible unless you
were connecting over a unix domain socket. And even then, the
differences would probably be measured in microseconds per pull.

Regards,

Raghava Mutharaju

Apr 26, 2011, 7:40:34 PM
to redi...@googlegroups.com
I noticed that checking the size of many queues is actually pretty fast :). I am trying to reduce the number of sets by tweaking the algorithm; that way, the processing time would also come down. Thank you for all your suggestions.

Regards,
Raghava.