So it's a cool idea but not a good fit for Redis at least in the short
(one year) timeframe.
Cheers,
Salvatore
> --
> You received this message because you are subscribed to the Google Groups "Redis DB" group.
> To post to this group, send email to redi...@googlegroups.com.
> To unsubscribe from this group, send email to redis-db+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/redis-db?hl=en.
>
>
--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotle
I don't like the idea of triggers for a number of reasons... it's too
stateful for a database IMHO.
But what I like is the idea of publishing changes in the key space via
our built-in Pub/Sub, and that's pretty trivial to accomplish.
We can add this in Redis unstable for sure. The idea is that you can
subscribe to keys for changes via the usual SUBSCRIBE, or even
PSUBSCRIBE. All the changes are PUBLISH-ed as the exact command and
arguments that modified the key.
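A toy in-process sketch of what that could look like (the class and method names here are invented for illustration, and this models only the behavior, not the actual Redis API or wire protocol):

```python
import fnmatch

class KeyspaceNotifier:
    """Toy model of publishing key changes over Pub/Sub channels.

    Each write is PUBLISH-ed on a channel named after the key, as the
    exact command and arguments that modified it; listeners subscribe
    by glob pattern, as with PSUBSCRIBE.
    """
    def __init__(self):
        self.patterns = []   # (glob_pattern, inbox) pairs
        self.data = {}

    def psubscribe(self, pattern):
        inbox = []
        self.patterns.append((pattern, inbox))
        return inbox

    def _publish(self, key, *command):
        for pattern, inbox in self.patterns:
            if fnmatch.fnmatch(key, pattern):
                inbox.append(command)

    def set(self, key, value):
        self.data[key] = value
        self._publish(key, "SET", key, value)

notifier = KeyspaceNotifier()
inbox = notifier.psubscribe("user:*")
notifier.set("user:1", "Alice")
notifier.set("other:1", "Bob")      # does not match the pattern
print(inbox)                        # [('SET', 'user:1', 'Alice')]
```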
Additional channels should provide information about the expiration of keys.
Cheers,
Salvatore
I certainly don't want to delay the advent of Cluster.
Pub/Sub is fire and forget; such a facility should use Pub/Sub as it
is, so it would not be persistent, and messages could indeed be lost.
I think that if you need a reliable way to do this it's better to use
the AOF as a stream of data, which is guaranteed to contain everything,
but the AOF is currently of limited use for this since the file grows
continuously. This may be solved once the AOF can split itself into parts.
I think that the way to go about it is a bit different: use
MULTI/EXEC to perform the operation and at the same time queue the
operation into a list, which is then replayed once the connection
with the master database is available again. There is no need to
intercept calls from code that is not aware of what is happening under
the hood, since to make it work well you need help from the
application anyway; otherwise merging the changes in a meaningful way
is more or less impossible.
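A rough sketch of that client-side pattern, with the Redis connection stood in for by a plain dict (all names here are made up; in real Redis the write and the queueing would go inside a MULTI/EXEC transaction):

```python
import json

class OfflineBuffer:
    """Queue writes locally while offline; replay them in order on reconnect."""
    def __init__(self):
        self.queue = []          # the local list operations are pushed into
        self.master = {}         # stand-in for the master database
        self.online = False

    def set(self, key, value):
        # In real Redis, MULTI/EXEC would make the write and the
        # queueing atomic; here we just do both steps back to back.
        op = json.dumps({"op": "set", "key": key, "val": value})
        if self.online:
            self.master[key] = value
        else:
            self.queue.append(op)

    def reconnect(self):
        self.online = True
        # Replay the queued operations against the master, oldest first.
        while self.queue:
            op = json.loads(self.queue.pop(0))
            if op["op"] == "set":
                self.master[op["key"]] = op["val"]

buf = OfflineBuffer()
buf.set("foo", "bar")            # queued while offline
buf.reconnect()                  # replayed against the master
print(buf.master)                # {'foo': 'bar'}
```

Note this only replays writes in order; resolving true conflicts (the same key changed on both sides) still needs application-level logic, which is exactly the point made above.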
I think it depends on what you want to achieve. Pub/Sub in Redis is
designed to be fire and forget; however, you may want to attach a
listener to key state changes in order to perform operations for which
missing some messages is not a problem.
Btw it's not clear whether this is really convenient or not, nor how
to do it, which is why I have not gone forward with this so far.
Redis can't work for all the use cases. I think that we can't force
Pub/Sub to be what it is not, nor can we force Redis to be what it is
not by trying to implement offline sync as an external tool. This is
the kind of feature that requires a lot of work to get right and that
IMHO can't work very well as an external component.
Especially since it is not a priority; and with the Redis data model,
having different nodes accept writes for the same key and then merging
the two versions is harder than usual, as we have complex data types.
So at this stage, why throw in complexity, and possibly open the
window to bad design, by rushing to implement something that is, to
start with, mostly out of scope?
> Not wanting to affect the spirit of redis to achieve this, but having a copy
> of the messages automatically added to destination LIST/SET IMHO is an
> elegant solution to the problem.
What's wrong with using MULTI/EXEC to LPUSH the change? If you want
to build such a complex system you can for sure have a wrapper between
Redis and the client.
So when you do Redis.set("foo","bar"), the lib actually does:
MULTI
SET foo bar
LPUSH dataset.changes <json for: op:set key:foo newval:bar>
EXEC
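A sketch of such a wrapper in Python, building the transaction above as a plain list of commands (the function name and the JSON change-record shape are made up for illustration; a real client library would send these over a MULTI/EXEC pipeline):

```python
import json

def set_with_changelog(key, value, changes_key="dataset.changes"):
    """Return the command sequence a wrapped SET would issue:
    the write itself plus an LPUSH of a JSON change record,
    both inside a MULTI/EXEC transaction."""
    record = json.dumps({"op": "set", "key": key, "newval": value})
    return [
        ("MULTI",),
        ("SET", key, value),
        ("LPUSH", changes_key, record),
        ("EXEC",),
    ]

for cmd in set_with_changelog("foo", "bar"):
    print(" ".join(cmd))
```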
Adding an automatic mechanism for this makes Redis more complex
without any real gain, I think.
Cheers,
Salvatore
> Redis can't work for all the use cases. I think that we can't force
> Pub/Sub to be what it is not, nor can we force Redis to be what it is
> not by trying to implement offline sync as an external tool.
From what I understand, because Git uses compression, the size of an
entire Git repo, even with a large history, can still be manageable.
I think for offline sync, you can forgo a lot of speed in exchange for
the new features, primarily the ability to resolve
multi-master conflicts.
Here is a quote from Planet CouchDB, referring to the Membase merger:
"At CouchOne we've been focusing on very different problems: mobile,
sync and offline use cases. We make it easy to build applications that
travel with you, allowing you access to your important data no matter
the network conditions. Slow and unreliable connectivity means many
businesses can't rely on the cloud for mission critical apps, all
their data is gone when their network is down. But with Couch powered
apps on your phone, tablet, putting data directly on the machines at
the edge of the network, you have your apps and data with you at all
times and safely backed up to the cloud."
So, IMHO, it's worth looking into.
And, sorry for getting so OT on this Redis group !
Cheers,
Aaron
On Wed, Feb 9, 2011 at 1:55 PM, Josiah Carlson <josiah....@gmail.com> wrote:
Thanks, Josiah. Appreciate the feedback.
So, how does zfs snapshot + rsync handle a key changing on both the
offline node and the server node, once the offline node comes back
online?