Regards,
- Josiah
It would be cool to see whether the 15X reduction in throughput (TPS, and
latency too?) going from 100 bytes to 100 KB holds on a 10 GigE LAN.
I suspect it does — throughput may be soft-interrupt bottlenecked — but
I don't have the hardware to test, so I am guessing entirely.
10 GigE performance is gradually going to matter more for Redis
... but it is taking its time :)
- jak
Your findings are pretty consistent with a bunch of testing I did almost
two years ago (though you tested more), so that makes me more confident
that my tests were legit.
As for 10GigE testing, it is possible to do on specialised EC2 instances
for not that much money, but then the sporadic EC2 performance behavior
must be taken into account.
Hopefully someone in the community will run the tests over 10GigE
and post the results.
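For anyone who wants to try, here is a minimal sketch of such a payload-size sweep with redis-benchmark. The hostname redis-10g is a placeholder for the 10GigE peer, not a real machine; the script just prints the commands so they can be reviewed before being piped to sh:

```shell
#!/bin/sh
# Build the redis-benchmark invocations for a payload-size sweep
# from 100 bytes up to 100 KB. "redis-10g" is a placeholder host;
# adjust -h/-p before actually running the commands.
cmds=""
for size in 100 1000 10000 100000; do
  cmds="$cmds""redis-benchmark -h redis-10g -p 6379 -t set,get -n 100000 -d $size -q
"
done
printf '%s' "$cmds"
```

The -d flag sets the data size of SET/GET payloads and -q keeps the output to one line per test, which makes results easy to tabulate across sizes.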
Thanks,
jak
On Nov 22, 4:08 pm, Didier Spezia <didier...@gmail.com> wrote:
> Hi Jak,
>
> The 15X reduction is in throughput (so TPS only) for local communication
> with an old low-range CPU. Over a real network, the reduction is much
> larger — which is expected, since throughput is then constrained by the
> maximum network bandwidth.
>
> The reduction factor depends on the hardware. For instance, on high-end
> boxes I can reach a 9X factor (my best result, using Unix domain sockets
> and optimal NUMA placement). On a 1 GigE network, on the same hardware,
> the reduction factor is 113X.
>
> Actually it is not so strange: the per-request cost is roughly constant
> until the payload reaches the MTU (about 1500 bytes). Beyond that point,
> the reduction is linear in the data size (due to the bandwidth limit). So
> throughput is almost the same for objects of 100 and 1000 bytes, but
> differs by a factor of about 100 between objects of 1000 and 100000 bytes.
>
> I have not tested them, but jumbo frames could change this behavior by
> extending the range of data sizes whose reduction factor is mostly
> constant (i.e. from 1500 up to 9000 bytes).
>
> Here are some results of redis-benchmark, and a chart showing the throughput
> per data size plotted over log scales:
>
> https://gist.github.com/1386203
> http://dl.dropbox.com/u/16335841/Data_size.png
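The constant-then-bandwidth-bound behavior Didier describes can be sketched with a toy model. The base_tps (CPU-bound rate for small payloads) and 1 GigE link speed below are illustrative assumptions, not measured numbers:

```python
def expected_tps(payload_bytes, base_tps=120_000,
                 link_bps=1_000_000_000, mtu=1500):
    """Toy model: throughput is capped by per-request CPU cost
    (base_tps) for small payloads, and by link bandwidth once the
    payload dominates the wire cost."""
    # Max requests/sec the link can carry for this payload size.
    bandwidth_tps = link_bps / 8 / max(payload_bytes, 1)
    return min(base_tps, bandwidth_tps)

for size in (100, 1000, 10_000, 100_000):
    print(size, round(expected_tps(size)))
# 100 120000
# 1000 120000
# 10000 12500
# 100000 1250
```

Consistent with the figures above, 100-byte and 1000-byte payloads land on the same (CPU-bound) plateau, while 1000 vs 100000 bytes differ by roughly a factor of 100 because both sit on the bandwidth-bound slope.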