Redis slower than SQLite and MySQL?


Gustavo Narea

unread,
Jun 1, 2009, 11:15:21 AM6/1/09
to Redis
Hello, everybody.

I maintain a Python library called repoze.what which has several back-ends to
store data in sources such as XML files, relational databases, and now Redis.

I'm benchmarking them with the same read-only operation and I was surprised to
find that Redis is slower than SQLite and MySQL databases and XML files:
* On my laptop [1]:
1.- xml: 0.000345015525818 seconds average.
2.- sqlite_memory: 0.00144035816193 seconds average.
3.- sqlite_file: 0.00148575305939 seconds average.
4.- mysql_myisam: 0.00195980072021 seconds average.
5.- mysql_innodb: 0.00263843536377 seconds average.
6.- redis: 0.00608263015747 seconds average.
* On my home server [2]:
1.- xml: 0.000507307052612 seconds average.
2.- sqlite_memory: 0.00229377746582 seconds average.
3.- sqlite_file: 0.00240569114685 seconds average.
4.- mysql_myisam: 0.00327970981598 seconds average.
5.- mysql_innodb: 0.00384981632233 seconds average.
6.- redis: 0.00525720119476 seconds average.
* On my online server [3]:
1.- xml: 0.000276575088501 seconds average.
2.- sqlite_memory: 0.00177877664566 seconds average.
3.- sqlite_file: 0.00190009117126 seconds average.
4.- redis: 0.00212898015976 seconds average.
5.- mysql_myisam: 0.00251481771469 seconds average.
6.- mysql_innodb: 0.00296341180801 seconds average.

I'm running the Redis server (revision
6a97a74f5ebd7d95dc241634e0982552418d5bb3) with the default settings
everywhere, except for:
* timeout = 0
* loglevel = warning

The Redis backend [4] uses the Python client included in the client-libraries
folder.

The Redis performance looks extremely odd to me, not only because it's very
slow compared to XML files and RDBs, but also because it's slightly faster on
my home server [2] than on my laptop [1], although the latter is a much faster
machine.

And it's worth mentioning that the SQLite and MySQL back-ends use an ORM on
top of the client library (SQLAlchemy), so their performance could be even
better.

What could be going wrong here? If somebody wants to download the benchmark
script, you can check out the SVN repository to install repoze.what [5], then
install the relevant plugins [6], and finally run the
./scripts/plugins-benchmarking/runbenchmarks.py script.

Thanks in advance!

[1] Ubuntu 9.04 (64-bit), Intel Core 2 Duo T9300 @ 2.50GHz, 4GiB @ 667MHz
(RAM), 7200rpm (hard disk).
[2] Ubuntu 9.04 (32-bit), Intel Core Duo T2400 @ 1.83GHz, 1280MiB (1GiB @
400MHz and 256MiB @ 533MHz) of RAM, 5400rpm (HD).
[3] Xen VPS running Ubuntu 8.10 (64-bit), Intel Core 2 Quad CPU Q6600 @
2.40GHz, 256MiB... Couldn't get more info about the RAM and HD.
[4] http://bitbucket.org/ares/repozewhatpluginsredis/overview/
[5] http://svn.repoze.org/repoze.what/branches/1.X/
[6] easy_install -U repoze.what.plugins.sql repoze.what.plugins.xml \
repoze.what.plugins.redis
--
Gustavo Narea <xri://=Gustavo>.
| Tech blog: =Gustavo/(+blog)/tech ~ About me: =Gustavo/about |

Salvatore Sanfilippo

unread,
Jun 1, 2009, 12:20:12 PM6/1/09
to redi...@googlegroups.com, Redis
Hello, sorry for the short reply, I'm using a phone: the Python lib does not
set TCP_NODELAY by default. Please enable it and retry if you can.

Cheers
Salvatore

Sent from my iPhone

On Jun 1, 2009, at 17:15, Gustavo Narea <m...@gustavonarea.net> wrote:

Gustavo Narea

unread,
Jun 1, 2009, 2:32:40 PM6/1/09
to redi...@googlegroups.com
Hello, Salvatore.

Thank you very much for your response.

Unfortunately, the result is basically the same on my laptop:
1.- xml: 0.000360012054443 seconds average.
2.- sqlite_memory: 0.0013888835907 seconds average.
3.- sqlite_file: 0.00153646469116 seconds average.
4.- mysql_myisam: 0.00192084312439 seconds average.
5.- mysql_innodb: 0.0021630525589 seconds average.
6.- redis: 0.00635347366333 seconds average.

Just in case, I set TCP_NODELAY in Redis.connect:
# ...
try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect((self.host, self.port))
except socket.error, e:
    # ...

Thanks,

- Gustavo.

Salvatore Sanfilippo

unread,
Jun 1, 2009, 3:57:03 PM6/1/09
to redi...@googlegroups.com
Hello again

I'll check this once I'm back. BTW, if you get sane numbers with
redis-benchmark, chances are you are measuring some kind of latency in one of
the layers. I'll reply with more info in a few days, thanks for reporting this!

Ciao
Salvatore

Sent from my iPhone

On Jun 1, 2009, at 20:32, Gustavo Narea

Gustavo Narea

unread,
Jun 1, 2009, 4:13:34 PM6/1/09
to redi...@googlegroups.com
Hi, Salvatore.

Salvatore said:
> I'll check this once I'm back. BTW, if you get sane numbers with
> redis-benchmark, chances are you are measuring some kind of latency in
> one of the layers.

I'm not dealing with any low-level stuff, except for the TCP_NODELAY socket
option you asked me to set, so I guess that if that's the case, it may be
happening in the Python library for Redis.


> I'll reply with more info in a few days, thanks for reporting this!

Perfect, you're welcome! :)

Cheers,

Valentino Volonghi

unread,
Jun 1, 2009, 4:22:41 PM6/1/09
to redi...@googlegroups.com

On Jun 1, 2009, at 8:15 AM, Gustavo Narea wrote:

> The Redis backend [4] uses the Python client included in the
> client-libraries folder.
>
> The Redis performance looks extremely odd to me, not only because it's
> very slow compared to XML files and RDBs, but also because it's slightly
> faster on my home server [2] than on my laptop [1], although the latter
> is a much faster machine.
>
> And it's worth mentioning that the SQLite and MySQL back-ends use an ORM
> on top of the client library (SQLAlchemy), so their performance could be
> even better.


SQLAlchemy is very fast, especially when there aren't complex queries with
many objects that need to be created from the query.

The main "problem" with the Python library is that it's implemented in pure
Python. The MySQL, SQLite, and I suppose also XML, clients are implemented
in C. For a single process this is a great boost. Furthermore, the Python
client is synchronous, which adds to the performance penalty (even using
multiple threads wouldn't use 100% of the potential speed).

You don't reveal much about your benchmarking code, but I'm going to assume
that everything is run in a tight loop in a single process. This is an
unrealistic scenario: it doesn't measure the speed of Redis but the relative
speed of each of the libraries you are using.

Try with multiple clients (let's say at least 10), each running your tight
loop. This level of concurrency will slow down everything except Redis,
which doesn't do synchronous persistence. This is a more realistic scenario.

If any of my assumptions are wrong then tell me :).

--
Valentino Volonghi aka Dialtone
Now running MacOS X 10.5
Home Page: http://www.twisted.it
http://www.adroll.com


Salvatore Sanfilippo

unread,
Jun 1, 2009, 4:44:24 PM6/1/09
to redi...@googlegroups.com
Agreed, this should be put into the FAQ. As a side note, when doing this type
of busy-loop benchmark, instead of looking at timings, check how many CPU
seconds every DB used. Not perfect, but better.
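
Something like this rough sketch would do it on Linux (the server PID, the
/proc layout and the run_benchmark() call are assumptions, not part of the
actual benchmark script):

import os
import resource

def cpu_seconds(pid):
    # utime + stime from /proc/<pid>/stat (fields 14 and 15, in clock ticks).
    fields = open("/proc/%d/stat" % pid).read().split()
    return (int(fields[13]) + int(fields[14])) / float(os.sysconf("SC_CLK_TCK"))

server_pid = 12345  # hypothetical redis-server (or mysqld, etc.) PID
server_before = cpu_seconds(server_pid)
client_before = resource.getrusage(resource.RUSAGE_SELF)

run_benchmark()  # stands in for the existing tight loop

client_after = resource.getrusage(resource.RUSAGE_SELF)
print "client CPU: %.3f s" % ((client_after.ru_utime - client_before.ru_utime)
                              + (client_after.ru_stime - client_before.ru_stime))
print "server CPU: %.3f s" % (cpu_seconds(server_pid) - server_before)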

Sent from my iPhone

On Jun 1, 2009, at 22:22, Valentino Volonghi <dial...@gmail.com> wrote:

Kless

unread,
Jun 8, 2009, 9:22:15 AM6/8/09
to Redis DB
I've also been very surprised by those results. I get a very similar
performance -- a very negative result for Redis:

***** Benchmark results *****
Group results:
* Results for action #1, starting by the fastest adapters:
1.- xml: 0.00114917755127 seconds average; total: 0.0114917755127 seconds.
2.- sqlite_memory: 0.00331170558929 seconds average; total: 0.0331170558929 seconds.
3.- sqlite_file: 0.00339241027832 seconds average; total: 0.0339241027832 seconds.
4.- mysql_innodb: 0.0041825056076 seconds average; total: 0.041825056076 seconds.
5.- mysql_myisam: 0.00430059432983 seconds average; total: 0.0430059432983 seconds.
6.- redis: 0.0083105802536 seconds average; total: 0.083105802536 seconds.

Permission results:
* Results for action #1, starting by the fastest adapters:
1.- xml: 0.000550293922424 seconds average; total: 0.00550293922424 seconds.
2.- sqlite_memory: 0.00292103290558 seconds average; total: 0.0292103290558 seconds.
3.- sqlite_file: 0.00318250656128 seconds average; total: 0.0318250656128 seconds.
4.- mysql_innodb: 0.00400106906891 seconds average; total: 0.0400106906891 seconds.
5.- mysql_myisam: 0.00495734214783 seconds average; total: 0.0495734214783 seconds.
6.- redis: 0.00885336399078 seconds average; total: 0.0885336399078 seconds.
*****
Ubuntu 9.04 64

$ sudo lshw -short
processor AMD Athlon(tm) 64 X2 Dual Core Proces
memory 2GiB System Memory
disk 250GB ST3250620AS Non-Raid-5 SATA

*****

Here are the commands to get everything installed and run the tests:

http://dpaste.com/hold/52771/

Gustavo Narea

unread,
Jun 8, 2009, 9:35:35 AM6/8/09
to redi...@googlegroups.com
Hello, Valentino et al.

I'm sorry for the delayed response, but I've been swamped with work and exams
at the university.

You're right about the single-threaded assumption. I started working on
running the benchmark in multiple threads and I hope to get it working by the
end of the week.

Thanks!

Talk to you soon,


- Gustavo.


Valentino said:
> SQLAlchemy is very fast, especially when there aren't complex queries with
> many objects that need to be created from the query.
>
> The main "problem" with the Python library is that it's implemented in pure
> Python. The MySQL, SQLite, and I suppose also XML, clients are implemented
> in C. For a single process this is a great boost. Furthermore, the Python
> client is synchronous, which adds to the performance penalty (even using
> multiple threads wouldn't use 100% of the potential speed).
>
> You don't reveal much about your benchmarking code, but I'm going to assume
> that everything is run in a tight loop in a single process. This is an
> unrealistic scenario: it doesn't measure the speed of Redis but the relative
> speed of each of the libraries you are using.
>
> Try with multiple clients (let's say at least 10), each running your tight
> loop. This level of concurrency will slow down everything except Redis,
> which doesn't do synchronous persistence. This is a more realistic scenario.
>
> If any of my assumptions are wrong then tell me :).
--

Salvatore Sanfilippo

unread,
Jun 8, 2009, 10:30:17 AM6/8/09
to redi...@googlegroups.com
On Mon, Jun 8, 2009 at 3:22 PM, Kless<jona...@googlemail.com> wrote:
>
> I've also been very surprised by those results. I get a very similar
> performance -- a very negative result for Redis:

Hello Kless,

I actually don't buy this test. You can check that Redis primitives are fast
with a number of simple tests, from redis-benchmark to hand-coded tests
running in 50/100 instances. Even the single-thread latency should be shorter
in Redis. So basically what I think is that the test itself is exposing some
other kind of latency in some other layer that I can't check, not being
skilled with Python.
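
For example, a hand-coded check of the raw GET latency against the bundled
Python client would bypass the repoze.what adapter entirely (just a sketch; it
assumes the Redis class and get/set methods of the redis.py client shipped in
client-libraries):

import time
import redis  # the redis.py client from Redis' client-libraries folder

r = redis.Redis(host='localhost', port=6379)
r.set('bench:key', 'some value')

iterations = 1000
start = time.time()
for i in xrange(iterations):
    r.get('bench:key')
print "%.6f seconds average per GET" % ((time.time() - start) / iterations)

If that average is far below the ~6 ms measured through the back-end, the
extra latency lives in the adapter or in how it drives the client, not in
Redis itself.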

Cheers,
Salvatore

--
Salvatore 'antirez' Sanfilippo
http://invece.org

"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay

Salvatore Sanfilippo

unread,
Jun 8, 2009, 10:36:02 AM6/8/09
to redi...@googlegroups.com
On Mon, Jun 8, 2009 at 3:35 PM, Gustavo Narea<m...@gustavonarea.net> wrote:

> You're right about the single-threaded assumption. I started working on
> running the benchmark in multiple threads and I hope to get it working
> by the end of the week.

Hello Gustavo,

it's not just about the single-threaded assumption, actually; I think it is
also important to understand why the latency for a single request is larger
with Redis. I didn't look at the code, but I have a hint: maybe the Redis
adapter performs more low-level requests to the server per iteration? If so,
you are measuring the round-trip time multiple times, and the multi-threaded
version will scale well.
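
If somebody more fluent in Python wants to check that hypothesis, wrapping the
client object the adapter uses in a counting proxy would show how many
commands each iteration issues (just a sketch; how the adapter receives its
client is an assumption):

class CountingProxy(object):
    """Wraps the Redis client object and counts every call made through it."""
    def __init__(self, client):
        self._client = client
        self.calls = 0

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if not callable(attr):
            return attr
        def counted(*args, **kwargs):
            self.calls += 1
            return attr(*args, **kwargs)
        return counted

# Hypothetical usage: hand proxy = CountingProxy(real_client) to the adapter
# instead of the real client, run one benchmark iteration, then read
# proxy.calls. N calls per iteration means paying the round trip N times.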

Another interesting test not involving coding, as already mentioned, is to
measure the CPU time taken by the different servers to run the test.

Valentino Volonghi

unread,
Jun 8, 2009, 6:01:52 PM6/8/09
to redi...@googlegroups.com

On Jun 8, 2009, at 6:35 AM, Gustavo Narea wrote:
> You're right about the single-threaded assumption. I started working on
> running the benchmark in multiple threads and I hope to get it working
> by the end of the week.


Multiple threads in Python are not really going to gain you that much. You
really want multiple processes (and maybe, at that point, also multiple
threads).

Using the multiprocessing module from Python 2.6 (or the easy_installable
processing backport for Python 2.5) is going to help you with this.
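
Something along these lines would do; just a sketch, where query_backend()
stands in for one pass of your existing tight loop:

import time
from multiprocessing import Pool

def query_backend():
    # Placeholder for a single repoze.what call against the back-end under test.
    pass

def worker(iterations):
    start = time.time()
    for i in xrange(iterations):
        query_backend()
    return time.time() - start

if __name__ == '__main__':
    processes, iterations = 10, 100
    pool = Pool(processes)
    # Ten concurrent clients, each running its own tight loop.
    elapsed = pool.map(worker, [iterations] * processes)
    print "%.6f seconds average per request under concurrency" % (
        sum(elapsed) / (processes * iterations))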


Kless

unread,
Jun 9, 2009, 8:53:17 AM6/9/09
to Redis DB
I've been looking for information and asking in #python, and it looks like
this problem is not solved with concurrency but with async I/O.

Async I/O could be managed with:

- The asyncore module [1]
- Twisted [2], the most established non-blocking networking framework,
although it's complex.
- Both concurrence [3] and eventlet [4], which are newer but easier to use.


[1] http://docs.python.org/library/asyncore.html
[2] http://twistedmatrix.com/
[3] http://opensource.hyves.org/concurrence/
[4] http://wiki.secondlife.com/wiki/Eventlet

Valentino Volonghi

unread,
Jun 9, 2009, 4:12:12 PM6/9/09
to redi...@googlegroups.com

On Jun 9, 2009, at 5:53 AM, Kless wrote:

> I've been looking for information and asking in #python, and it looks like
> this problem is not solved with concurrency but with async I/O.
>
> Async I/O could be managed with:
>
> - The asyncore module [1]
> - Twisted [2], the most established non-blocking networking framework,
> although it's complex.
> - Both concurrence [3] and eventlet [4], which are newer but easier to use.

I beg to differ... I'm a Twisted developer and it's a lot simpler to use than
asyncore. concurrence, from what I've seen, is not simpler at all, and
eventlet is actually simpler but requires greenlets and monkey-patches the
Python standard library, plus it is less mature, as you say.

Having implemented the Erlang client, I can tell you that the Twisted client
would be only marginally different in terms of length. The Redis protocol is
now a lot more regular than it used to be at the beginning and can be parsed
easily.

Anyway, for the sake of the benchmark, multiple processes would still allow
you to see how fast Redis is; the Python client will still be "slow" though.


Kless

unread,
Jun 10, 2009, 7:19:22 AM6/10/09
to Redis DB
Valentino,

following your comment, I'm using Twisted to build the async Redis client.

Kless

unread,
Jun 11, 2009, 3:43:37 AM6/11/09
to Redis DB
I thought it would be easier to work with Twisted from day zero, but it's
clear that it's not easy.

If anybody is interested and knows about Twisted, there is a project that
could help:

http://github.com/sophacles/pybeanstalk/tree/master

Ludovico Magnocavallo

unread,
Jun 11, 2009, 3:52:28 AM6/11/09
to redi...@googlegroups.com
Kless wrote:
> I thought it would be easier to work with Twisted from day zero, but
> it's clear that it's not easy.

Twisted is a religion, not a framework. Don't listen to Valentino when
he says it's easy... :)

L.

Valentino Volonghi

unread,
Jun 11, 2009, 2:13:29 PM6/11/09
to redi...@googlegroups.com


You still haven't managed to explain what's wrong. Every time you ask
for a way to do something in Twisted I answer with something that is
shorter and more portable than any alternative solution... :)

Let's not start a flame here


Valentino Volonghi

unread,
Jun 11, 2009, 3:06:15 PM6/11/09
to redi...@googlegroups.com

On Jun 11, 2009, at 12:43 AM, Kless wrote:

>
> I thought it would be easier to work with Twisted from day zero, but
> it's clear that it's not easy.
>
> If anybody is interested and knows about Twisted, there is a project
> that could help:
>
> http://github.com/sophacles/pybeanstalk/tree/master


Yes, it's actually not too bad in showing how to use a line protocol that can
handle multiple commands and switch to raw mode. But don't take too much from
it; it's slightly more complex than it could be.

The main problem is that the solutions without Twisted don't separate the
transport logic from the protocol logic. In that project the Twisted protocol
is split in two pieces because the non-Twisted one doesn't understand what a
line is; the author is essentially introducing complexity (which still makes
the code simpler in the library, but harder to understand in its structure)
in order to use LineReceiver.

A very simple and stupid implementation of the Twisted protocol follows. It's
incomplete (no multi-bulk reply), untested, and could be improved in many ways
(especially in how the state is managed: make the state-handling part richer
and isolated and everything will become easier).

I don't have time or need right now to implement this properly, though.

from twisted.internet import defer
from twisted.protocols import basic

EMPTY = "empty"
BULK = "bulk"
MULTIBULK = "multibulk"

class Response(object):
    def __init__(self):
        self.d = defer.Deferred()

class Redis(basic.LineReceiver):
    _state = EMPTY
    _nextSize = 0

    def __init__(self):
        self._queue = []
        self._buffer = ""

    def lineReceived(self, line):
        def _notImplemented(replyType, line):
            raise NotImplementedError(
                "What kind of state is %s?! Last line: %s" % (self._state, line))

        replyType, line = line[0], line[1:]
        if replyType == '$':
            self._state = BULK
        elif replyType == '*':
            self._state = MULTIBULK
        else:
            self._state = EMPTY

        getattr(self, self._state, _notImplemented)(replyType, line)

    def empty(self, replyType, line):
        # Single-line replies: status (+), integer (:) and error (-).
        response = self._queue.pop(0)
        if replyType == '+':
            response.d.callback(line)
        elif replyType == ':':
            response.d.callback(bool(int(line)))
        elif replyType == '-':
            response.d.errback(Exception(line))

    def bulk(self, replyType, line):
        self._nextSize = int(line)
        self.setRawMode()

    def rawDataReceived(self, data):
        if self._state == BULK:
            self._buffer += data
            if len(self._buffer) >= self._nextSize + 2:
                # Get the data and get rid of the useless ending \r\n
                data = self._buffer[:self._nextSize]
                remaining = self._buffer[self._nextSize + 2:]
                response = self._queue.pop(0)
                response.d.callback(data)
                self._state = EMPTY
                self._buffer = ""
                self.setLineMode(remaining)

    def sendSomething(self, command, *args, **kwargs):
        # serialize() (not shown) is assumed to build the Redis wire format.
        string = serialize(command, *args, **kwargs)
        self.transport.write(string)
        self._queue.append(Response())
        return self._queue[-1].d
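
A hypothetical way to poke at it from the same module, assuming a trivial
serialize() helper that emits old-style inline commands and a key that already
exists on the server:

from twisted.internet import protocol, reactor

def serialize(command, *args, **kwargs):
    # Assumed helper: old-style inline commands are enough for GET/PING.
    return "%s %s\r\n" % (command, " ".join(str(a) for a in args))

def show(value):
    print "GET returned:", value
    reactor.stop()

def connected(redis):
    # "redis" here is an instance of the protocol class sketched above.
    return redis.sendSomething("GET", "foo").addCallback(show)

cc = protocol.ClientCreator(reactor, Redis)
cc.connectTCP("localhost", 6379).addCallback(connected)
reactor.run()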


Ludovico Magnocavallo

unread,
Jun 11, 2009, 3:07:25 PM6/11/09
to redi...@googlegroups.com
Valentino Volonghi wrote:
>
> You still haven't managed to explain what's wrong. Every time you ask
> for a way to do something in Twisted I answer with something that is
> shorter and more portable than any alternative solution... :)

Nah, it's just that every time we discuss it you have already managed to
quaff down one beer too much, and see things through rose-tinted glasses. :)

> Let's not start a flame here

Yup.

L.

Kless

unread,
Jun 12, 2009, 3:28:34 PM6/12/09
to Redis DB

On Jun 11, 07:52, Ludovico Magnocavallo <l...@qix.it> wrote:
> Twisted is a religion, not a framework. Don't listen to Valentino when
> he says it's easy... :)

I suppose that's what the people who built Kamaelia [1] thought; it is a much
simpler framework.


[1] http://www.kamaelia.org/

Salvatore Sanfilippo

unread,
Jun 12, 2009, 6:23:09 PM6/12/09
to redi...@googlegroups.com
On Fri, Jun 12, 2009 at 9:28 PM, Kless<jona...@googlemail.com> wrote:
>
>
> On 11 jun, 07:52, Ludovico Magnocavallo <l...@qix.it> wrote:
>> Twisted is a religion, not a framework. Don't listen to Valentino when
>> he says it's easy... :)

About event-driven programming, I think Tcl was able to get it right years ago:

socket -server handler 9999
proc handler {fd clientaddr clientport} {
    set t [clock format [clock seconds]]
    puts $fd "Hello $clientaddr:$clientport, current date is $t"
    close $fd
}
vwait forever

That's a multiplexing "Hello" server :)
What Tcl lacks is the ability to easily carry state, but it's possible to
implement that on top of what it already provides. Btw, Ruby's EventMachine is
pretty cool IMHO. Simple and powerful at the same time (not as simple as
Tcl's, but almost, and it can carry state in an easy way, encapsulated into an
object).

Valentino Volonghi

unread,
Jun 12, 2009, 7:02:57 PM6/12/09
to redi...@googlegroups.com
On Jun 12, 2009, at 3:23 PM, Salvatore Sanfilippo wrote:

> About event-driven programming, I think Tcl was able to get it right
> years ago:
>
> socket -server handler 9999
> proc handler {fd clientaddr clientport} {
>     set t [clock format [clock seconds]]
>     puts $fd "Hello $clientaddr:$clientport, current date is $t"
>     close $fd
> }
> vwait forever

This is the same server (except I don't wanna type the
self.transport.getPeer().port etc in the line and make it too long):

from datetime import datetime
from twisted.internet import reactor, protocol

class HelloServer(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write("Hello you %s" % datetime.utcnow())
        self.transport.loseConnection()

f = protocol.ServerFactory()
f.protocol = HelloServer

reactor.listenTCP(9999, f)
reactor.run()

And this is the same thing on SSL (rest of the stuff is unchanged):

from OpenSSL import SSL

class ServerContextFactory:

    def getContext(self):
        """Create an SSL context.

        This is a sample implementation that loads a certificate from a
        file called 'server.pem'."""
        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_certificate_file('server.pem')
        ctx.use_privatekey_file('server.pem')
        return ctx

reactor.listenSSL(9999, f, ServerContextFactory())
reactor.run()

You can have both running in the same process.

putting:

from twisted.internet import gtk2reactor
gtk2reactor.install()

at the top will make this small thing usable inside a gtk2 application without
breaking anything; same for using qt4reactor. Using the kqueue, epoll, poll,
etc. reactors will use different polling strategies (and if you write
everything using the plugin system you don't even need to change your code to
swap reactors).

This is the same thing with a client instead of a server (everything
else is unchanged):

f = protocol.ClientFactory()
f.protocol = HelloServer

reactor.connectTCP("localhost", 9999, f)
reactor.run()

This is a client that reconnects on unexpected disconnects with exponential
backoff:

f = protocol.ReconnectingClientFactory()

Everything else is unchanged.

This is the same thing with a cap on ingress and egress bandwidth (and
policies.LimitConnectionsByPeer can similarly wrap a factory to limit the
number of connections per peer):

from twisted.protocols import policies

# 10Kb/s read and write for the wrapped factory
f = policies.ThrottlingFactory(f, readLimit=10000, writeLimit=10000)

Everything else is unchanged.

And I could go on and on and on for hours with other things (like the
same hello
protocol exposed through SSH or telnet etc etc).

My point being:

All these extensions (and many of the design patterns involved were actually
_invented_ in Twisted Matrix and then reused in other libraries like Boost)
show that the abstractions and the design used are sound. Having used Twisted
for more than 4 years now, I can also say that I can't go back to not having
it. That's not because I'm some kind of religious nut (I also know Erlang and
it's better than Twisted, but it's a different language), but because that's
the right way to do event-driven programming in Python.

> That's a multiplexing "Hello" server :)
> What Tcl lacks is the ability to easily carry state, but it's possible to
> implement that on top of what it already provides. Btw, Ruby's EventMachine
> is pretty cool IMHO. Simple and powerful at the same time (not as simple as
> Tcl's, but almost, and it can carry state in an easy way, encapsulated into
> an object).

EventMachine is a slightly broken ripoff of Twisted, which further shows how
sound Twisted Matrix is; they even badly copied deferreds, making them a
module and reusable (which is the part that is broken)...

PGP.sig

Aman Gupta

unread,
Jun 12, 2009, 7:08:15 PM6/12/09
to redi...@googlegroups.com

Since we're already so far off-topic.. as the maintainer of
EventMachine, I'm curious to hear more about why reusable deferrables
are broken and what else is wrong with EM.

Aman

Valentino Volonghi

unread,
Jun 12, 2009, 7:39:44 PM6/12/09
to redi...@googlegroups.com

On Jun 12, 2009, at 4:08 PM, Aman Gupta wrote:

> Since we're already so far off-topic.. as the maintainer of
> EventMachine, I'm curious to hear more about why reusable deferrables
> are broken and what else is wrong with EM.


Reusing deferreds is the part that is broken, IMHO. The reason is that they
have state: if functions that are called back from a deferred add other
callbacks to it, then it will keep growing indefinitely. Furthermore, they
contain the execution state of the current callbacks, so you'd have to reset
that state when another callback fires. Also, what happens when you add a
callback to a deferred that is reused? All the places that used it will be
triggered again, even if they have already finished execution and their
current state is inconsistent (because the client disconnected, for example).

These are the main reasons why deferreds should not be reused, and the
reasoning is very similar to why stack frames are not reused and why threading
is hard to get right. By reusing them you are almost re-creating dynamic
scoping, and dynamic scoping is very hard to use across multiple modules.
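
To make that concrete, Twisted's Deferreds are deliberately one-shot (a tiny
sketch):

from twisted.internet import defer

d = defer.Deferred()
d.addCallback(lambda result: result * 2)
d.addCallback(lambda doubled: doubled + 1)  # each callback sees the previous result
d.callback(20)  # fires the whole chain exactly once; the final value is 41
# Calling d.callback() again raises defer.AlreadyCalledError: the Deferred
# cannot be reused, so none of the problems above can happen.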

Other than that, EventMachine is very nice and I envy your AMQP
implementation :)
