New issue 222 by vinuth.madinur: [Feature Request] Hooks, plugins or script support.
http://code.google.com/p/redis/issues/detail?id=222
It would be extremely useful to have some hooks so that some custom
functionality could be added. For example, hooks provided at following
places could be powerful and open up many new use cases for those who want
it, without reducing the efficiency of core redis.
1. Before and after execution of any command.
2. When a key's TTL expires.
3. When certain thresholds are met in statistics collection.
4. On Channel Open / Close, etc.
Some or all of these extension points could lead to many interesting use
cases for Redis.
Thoughts?
I'm not sure I like this concept. I greatly like the simplicity of
Redis. I think server-side code goes down a dangerous path, like
stored procedures in relational databases.
Yes to be honest for now I'm not totally convinced that scripting /
plugins are a great idea.
Actually after considering it I think they'll likely give more
problems than benefits. Some examples:
- Plugins stability will affect Redis stability. Likely we'll get tons
of bug reports related to broken plugins.
- Once users can implement their own commands:
1) we'll no longer have a common language of primitives (it's like
Lisp macros)
2) 90% of people will surely misuse that, claiming Redis is
slow/broken. There is in many cases a way to address the problems with
lock free algorithms, with MULTI/EXEC, or just combining in a sane way
the Redis primitives. But once you can implement your atomic
primitives I bet many users will instead start adding new
ill-conceived commands.
I'm all for making things more powerful and for letting users be
free to make their own mistakes, but Redis is a mission-critical piece of
code and has already built in all the tools to perform 99% of the
common things without requiring scripting. The last 1% can be achieved
anyway using locks.
Most of the time what users really want is to combine operations that
are not atomic per se. Before it was impossible; now it's often possible via
MULTI/EXEC, but sometimes you need to read values and
perform conditional logic or the like. When this is needed MULTI/EXEC
is not enough and the issue can be solved either by scripting or
locking, but I think the latter is better, as adding scripting *just*
for isolation/atomicity is not good IMHO.
So because I'm unsure I'll simply avoid adding this feature for the
next months, collecting more info, and rethinking about this issue
again in a few months :)
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
http://invece.org
"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay
One of my projects involves a great deal of number-crunching. Nothing
complicated, just averages, summation, and a bit of mapping/filtering
of values, but there is a lot of it. So, for me, I would want to use
an extension system to add some basic statistical commands which can
operate on lists.
Clearly, these commands would not belong in Redis core, but they could
be very useful in the right situation.
I don't have any deep thoughts on the actual implementation of the
extension system, but hopefully this use-case is interesting :)
Adam
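Adam's number-crunching use case can be made concrete with a toy sketch (plain Python, no real Redis involved; `smean` is a hypothetical extension command, not anything that exists in Redis):

```python
# A toy in-memory "list store" illustrating the kind of statistical
# command Adam describes. smean is hypothetical: it only sketches what
# an extension-registered command might do server-side.

class ToyStore:
    def __init__(self):
        self.lists = {}

    def rpush(self, key, *values):
        self.lists.setdefault(key, []).extend(float(v) for v in values)

    def lrange(self, key, start, stop):
        # Mimics LRANGE's inclusive stop index (-1 means "to the end").
        data = self.lists.get(key, [])
        stop = len(data) if stop == -1 else stop + 1
        return data[start:stop]

    def smean(self, key):
        # The hypothetical server-side command: the average is computed
        # next to the data, without shipping the whole list to the client.
        data = self.lists.get(key, [])
        return sum(data) / len(data) if data else None

store = ToyStore()
store.rpush("samples", 1, 2, 3, 4)

# Client-side today: fetch everything, then average locally.
values = store.lrange("samples", 0, -1)
client_side = sum(values) / len(values)

# Hypothetical extension command: same result, computed in-server.
assert store.smean("samples") == client_side == 2.5
```

The result is identical either way; the difference, as Adam notes, is only where the arithmetic runs and how much data crosses the wire.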
> Salvatore 'antirez' Sanfilippo
> http://invece.org
>> I'm not sure I like this concept. I greatly like the simplicity of
>> Redis. I think server-side code goes down a dangerous path, like
>> stored procedures in relational databases.
Actually I am not a big fan of stored procedures or triggers, but
they _work_: they are fast and efficient.
> Yes to be honest for now I'm not totally convinced that scripting /
> plugins are a great idea.
> Actually after considering it I think they'll likely give more
> problems than benefits. Some example:
"Groving pains?" ;)
> - Plugins stability will affect Redis stability. Likely we'll get tons
> of bug reports related to broken plugins.
Yes, that is true. Not sure if there is an architectural pattern that
can be followed to avoid such problems.
> - Once users can implement their own commands:
> 1) we'll no longer have a common language of primitives (it's like
> Lisp macros)
Ahhh, umm, yes?, no? Not sure what this means.
> 2) 90% of people will surely misuse that, claiming Redis is
> slow/broken. There is in many cases a way to address the problems with
For sure!
> lock free algorithms, with MULTI/EXEC, or just combining in a sane way
> the Redis primitives. But once you can implement your atomic
> primitives I bet many users will instead start adding new
> ill-conceived commands.
I don't see it as implementing my own commands, though that could
be useful, but as avoiding the roundtrip from Redis to the client
program.
Let's face it: if you are using Redis as a toy for a few thousand keys,
you will not find problems with data coming and going from clients.
But if you are using it for serious purposes, frequently processing
millions and millions of keys, things can be improved quite a bit just
because of the locality of data storage and processing.
> I'm all for making things more powerful and for letting users to be
> free of doing errors usually, but Redis is a mission critical piece of
I don't, users suck ;)
> code and has already built-in all the tools to perform 99% of the
> common things without requiring scripting. The last 1% can be achieved
> anyway using locks.
Again, at least in my case it is not about capabilities, but about
speed and taking away bottlenecks (IO).
> Most of the times what users really want is combining operations that
> are not atomic per se. Before it was impossible, now it's possible via
> MULTI/EXEC many times, but some time you need to read values
> performing conditional stuff or alike. When this is needed MULTI/EXEC
> is not enough and the issue can be solved either by scripting or
> locking, but I think the latter is better, as adding scripting *just*
> for isolation/atomicity is not good IMHO.
What about something "generic" like a MapReduce implementation? (Feel free
to start throwing stones.) (This would be related to clustering, data
distribution, etc.)
> So because I'm unsure I'll simply avoid adding this feature for the
> next months, collecting more info, and rethinking about this issue
> again in a few months :)
That's fair, you have shown a great amount of success as an Open
Source project manager, and I am sure there are things more important in
the roadmap (like clustering), but in my case Redis scripting is about
unleashing data processing speed, not adding more functionality.
Best regards,
--
Aníbal Rojas
Ruby on Rails Web Developer
http://www.google.com/profiles/anibalrojas
1) Drop in a scripting language (e.g. embed V8 inside Redis),
2) Enable something similar to "stored procedures", which can be executed in response to any of the available hook-points. In this case, the procedures themselves would be written using the existing, available Redis commands and NOT in a different language.
The two options listed above are very different beasts, and I think that they are being confused.
-Michael
Could you elaborate on why you would want these in the redis server itself?
For a bunch of these cases, I suspect that what people want is either
a natural integration into their client library so that they can have
the clarity of saying "SAVERAGEVALUES theset" to get the average of
all values in theset that are less than 5. Whether the averaging
happens on the client or the server doesn't seem to be very important
for correctness, and matters in perf to the extent that data transfer
time shows up in profiles. And if it does, then I think you want a way
to quickly get data to an "algorithms server" paired to your redis
"data structures server": Unix domain sockets to a sibling
daemon/proxy on the same machine seems like it could get a long way
there, and let you move the computational load to another core to
boot.
Extensibility is a double-edged sword; I've been biting my tongue as
people recommend it in place of a solid CHANNEL implementation, for
example. Once you have extensibility, you lose a lot of control over
your internal architecture -- the contact surface that represents your
contract with users of your software grows immensely, and you lose the
ability to reason locally (meaning: based on stuff you can see in your
repository) about changes. Doing something like virtual memory once
you have a meaningful extensibility system would likely go from "a
hard piece of software engineering, which we're all glad that
Salvatore did for us" to "a hard piece of software engineering coupled
with a ton of work to make sure that extensions can preserve their
important semantics, and people know how to deal with paged-out values
via that API, etc.".
I have some bias here: extensibility has been critical to the success
of the product I've spent most of my professional life working on, and
I have personally made more wrong decisions about extensibility points
than right ones. Salvatore is a much better API and systems designer
than I am, so I'm sure his record will be better. Nonetheless, I
really think that any serious discussion of extensibility points
should have a *lot* of data behind them -- especially about the
performance and complexity of doing something outside of the core
redis code/process -- before we ask Salvatore to
(I think one useful way to partition the set of possible extensions is
this: "if someone were to implement this extension by welding a proxy
to a redis server, would that solve my problem?" If the answer is
yes, or if the answer is "no, because of performance!!!" but doesn't
come with measurements, I think it falls on the "don't extend redis
core" side of the universe. And if we find that a lot of these)
I don't mean to tell Salvatore his business -- he has built something
that I couldn't have, in terms of its compactness and perfect
suitability to the task and delightful balance of richness and
simplicity. I have to admit, also, that I have entertained "oh man, if
only redis already had ZSORTBYKEYAGE" feature requests while playing
with my own projects! Given the number of (quite thoughtful) posts in
favour of extension mechanisms that I've seen on the list lately,
though, I thought I would weigh in in "defense" of redis continuing to
be the smallest thing that can do its job, because I think that that's
one big reason that it's so fantastic.
And, of course, it's open source, so if I really need LONLYWITHVOWELS
I can fork it to gain some local experience with it, and then
Salvatore will have no choice but to incorporate it after the sheer
awesomeness of my unicode-aware, sometimes-y extension makes it to the
top of reddit. :-)
Mike
sign up to carry the weight of an extension API forever, on our
collective behalf.
(Dammit, I always do that.)
Mike
People will send out reports directly to plugin writers - like people
do for language libraries.
> - Once users can implement their own commands:
> 1) we'll no longer have a common language of primitives (it's like
> Lisp macros)
I think plugins would make the primitives more clear. An example:
adding pub-sub support via plugins would result in the following:
- it won't pollute the core primitives of Redis, which for me are
database-related functionality (i.e. operations on lists, key-values,
sets etc.)
- would add pub-sub support instantly to all clients that implement
support for Redis plugins
- would make it easier to outsource the support and development of pub-
sub to other developers
> 2) 90% of people will surely misuse that, claiming Redis is
> slow/broken. There is in many cases a way to address the problems with
> lock free algorithms, with MULTI/EXEC, or just combining in a sane way
> the Redis primitives. But once you can implement your atomic
> primitives I bet many users will instead start adding new
> ill-conceived commands.
This is a valid point, but if people want to shoot themselves in the
foot then they can easily do it without plugins.
> I'm all for making things more powerful and for letting users to be
> free of doing errors usually, but Redis is a mission critical piece of
> code and has already built-in all the tools to perform 99% of the
> common things without requiring scripting. The last 1% can be achieved
> anyway using locks.
>
> Most of the times what users really want is combining operations that
> are not atomic per se. Before it was impossible, now it's possible via
> MULTI/EXEC many times, but some time you need to read values
> performing conditional stuff or alike. When this is needed MULTI/EXEC
> is not enough and the issue can be solved either by scripting or
> locking, but I think the latter is better, as adding scripting *just*
> for isolation/atomicity is not good IMHO.
>
> So because I'm unsure I'll simply avoid adding this feature for the
> next months, collecting more info, and rethinking about this issue
> again in a few months :)
I don't think plugins are that critical for Redis - something like the
VM is more important IMO. But plugins would make Redis much more
powerful, while providing a cleaner core and maybe a plugin community.
I could at least see myself scripting Redis and I don't think Redis
solves 99% of the tasks [ unfortunately :) ].
Regards,
amix
--
>> - Plugins stability will affect Redis stability. Likely we'll get tons
>> of bug reports related to broken plugins.
>
> People will send out reports directly to plugin writers - like people
> do for language libraries.
Unfortunately if a plugin is broken, Redis will crash in a random way,
often in a way that is completely impossible to correlate with a
given plugin. So I guess I'll have to start ignoring bug reports where
the trace shows a plugin was used? Like Linux kernel bugs reported
with closed source modules?
So I think it's much better to provide scripting capabilities instead
of a C API, all in all... if this is really needed.
With scripting it will still be possible to implement vertical stuff,
but without resorting to C coding.
Btw, why is using a Redis script different from using a language library?
Because with languages 99% of the times you download a library that is
used to perform some work that is at a different layer of abstraction,
like: parsing an RSS feed.
With Redis scripts you implement commands. This is why it's like macro
systems and not libraries IMHO.
It's as if many Ruby people started using libraries implementing
new control flow primitives, so reading some Ruby code you could find:

forever
  ...
end

instead of "while true ... end",
and many things like that. In languages with macros, macros are used
to extend and specialize the language. This is powerful but it turns a
language into something different. People working in a team on a
Lisp application using macros extensively have to learn both Lisp
*and* the macros being used to understand and work with the code base.
But at least with a programming language this makes a lot of sense,
while with Redis... I doubt it. Why? Because Redis exports a few
fundamental data types, more or less the same you find in a CS book,
and there is only a small set of really useful primitives against
these types. Having a "common language" of commands is important. We
talk on IRC about common patterns like LPUSH+LTRIM, and so forth. With
scripting, a team using Redis will end up with many user-implemented
commands that are a specific vertical language to learn. 5% of the
time this will be useful, 95% of the time completely not; at least
this is my experience from talking with people about Redis design and
usage. Most people still don't get what the data model really is.
>> - Once users can implement their own commands:
>> 1) we'll no longer have a common language of primitives (it's like
>> Lisp macros)
>
> I think plugins would make the primitives more clear. An example:
> adding pub-sub support via plugins would result in following:
> - won't pollute the core primitives of Redis, which for me are
> database related functionality (i.e. operation on lists, key-values,
> sets etc.)
Pub/Sub done the right way (with high performance) via plugins would
require such a complex and full-featured API in the Redis internals that
the code would be two times bigger :)
> This is a valid point, but if people want to shoot themselves in the
> foot then they can easily do it without plugins.
It's a lot harder: you have to build your things with the Legos you
got, instead of being able to create your own bricks with strange
shapes. And what's worse is that plugins will greatly vary in stability
and design. We can repeat an infinite number of times that Redis is
high quality, has an API, and that we are not involved with the plugins,
but people will download Redis + a few plugins (99% of the time not
needed), will see the server crash, and will complain about Redis.
Also, maintaining a stable API means less development speed and less
ability to change internals.
I'm pretty sure that the question here is whether or not to add scripting,
with the API exported to the scripting engine. The ability to
write your own C modules is not a good idea IMHO.
> I don't think plugins are that critical for Redis - something like the
> VM is more important IMO. But plugins would make Redis much more
> powerful, while providing a cleaner core and maybe a plugin community.
> I could at least see myself scripting Redis and I don't think Redis
> solves 99% of the tasks [ unfortunately :) ].
Yes you are right, I'll be more specific: when I say "99% of the
tasks" I really mean 99% of the tasks in the domain it was designed
for. If Redis is solving something in a bad way currently, it's not
because it lacks scripting IMHO, most of the time, but because it is
suited only for a limited number of tasks.
Btw I think there is a very big distinction between plugins and
scripting. Scripting makes users able to do almost everything that can
be done with an API, but without the stability concerns.
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
> Actually I am not a big fan of stored procedures or triggers, but
> they _work_: they are fast and efficient.
Hello Anibal,
Your use case makes sense indeed; you mean scripting can actually
make Redis faster in some use cases.
And I agree indeed.
No doubt that scripting makes 200 times more sense than a C-level API IMHO.
>> - Plugins stability will affect Redis stability. Likely we'll get tons
>> of bug reports related to broken plugins.
>
> Yes, that is true. Not sure if there is an architectural pattern that
> can be followed to avoid such problem.
No :) So indeed, I would like to focus this discussion on scripting,
as there is no reason to add an API, really.
I mean, scripts implementing simple primitives are likely to go the
same speed as the C API, just 10000 times safer.
An API for creating totally new things would be even worse, as I
currently consider Redis 2.0 near to feature-complete. The evolution
of Redis should not be adding more features starting from now, but
doing what it can do today much better: with VM, clustering, less
memory, faster, more durable, and so forth.
>> - Once users can implement their own commands:
>> 1) we'll no longer have a common language of primitives (it's like
>> Lisp macros)
>
> Ahhh, umm, yes?, no? Not sure what this means.
The set of current commands is a language today; once you start
implementing your own you lose this property. Your client lib code
starts to be full of calls to server-side commands that are not Redis
standard "language".
Today everybody can read code using Redis and understand in no time
what's going on. When you instead see calls like
redis.MyNewCommand() it will get more complex.
> I don't see it as implementing my own commands, something that could
> be useful. But about avoiding the roundrip from Redis to the client
> program.
>
> Let's face it if you are using Redis as a toy for a few thousand keys,
> you will not find problems with data coming and going from clients.
> But if you are using it for serious purposes, it is very possible that
> frequently processing millions and millions of keys, can be improved
> quite a bit just because locality of the data storage and processing.
Yes but there is one problem in this usage: Redis is single
threaded. If you perform non-trivial tasks in scripts, then it *will*
start being slower and non-responsive while many calls to a script
are performed.
But of course there are also gains. If you often do something like:

x = llength(list)
if (x < SOMENUMBER) lpush(list,element)

then scripting will help with latency, *a lot*.
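The check-then-push pattern above can be spelled out with a toy model (plain Python standing in for client/server round trips; `bounded_lpush` is a hypothetical scripted command, not a real Redis one, and `CAP` stands in for SOMENUMBER):

```python
# Toy illustration of the check-then-push pattern above. Each method
# call stands in for one network round trip to the server.

CAP = 3  # stands in for SOMENUMBER in the example above

class ToyServer:
    def __init__(self):
        self.lists = {}
        self.round_trips = 0

    def llen(self, key):
        self.round_trips += 1
        return len(self.lists.get(key, []))

    def lpush(self, key, element):
        self.round_trips += 1
        self.lists.setdefault(key, []).insert(0, element)

    def bounded_lpush(self, key, element, cap):
        # What a server-side script would do: one round trip, and the
        # check and the push cannot be interleaved with other clients.
        self.round_trips += 1
        lst = self.lists.setdefault(key, [])
        if len(lst) < cap:
            lst.insert(0, element)

server = ToyServer()

# Client-side: two round trips, and another client could push in between.
if server.llen("q") < CAP:
    server.lpush("q", "job1")

# "Scripted": one round trip, and atomic by construction.
server.bounded_lpush("q", "job2", CAP)

assert server.lists["q"] == ["job2", "job1"]
assert server.round_trips == 3  # 2 client-side, 1 scripted
```

This is exactly the latency (and atomicity) gain Salvatore describes, isolated from all the other costs of a real request.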
But another problem here is: will it be possible to support scripting
with Redis-cluster?
Should we add things that may not be easy to support in the
distributed version of Redis?
This is why I have mixed feelings about scripting: the right time to
think about it is *post* Redis-cluster.
While about the API, it's a 99% no anyway :)
Cheers,
Salvatore
> IMHO Redis operations should be more like axioms on which other more
> sophisticated compound operations can be created using MULTI/EXEC.
> Personally my vote is for more sophisticated transaction support; being able
> to declare and use temporary result variables in a transaction will, in my
> opinion, go a long way toward solving most people's requirements.
Well... honestly I think scripting is better than this, because
that *is* a form of scripting, just a limited one.
So for instance we would be able to do:

X = GET foo
SET foo1 $X
SET bar2 $X

but not:

X = GET foo
if X > 10
  SET foo1 $X
END

and if you add conditionals... you are building your own broken mini-language :)
Cheers,
Salvatore
Eg: SET key value channel1 channel2
3. A "Before Command Execute" hook can remove these channel names from the command and pass it on to Redis Core.
4. An "After Command Execute" hook can then automatically publish the executed command to "channel1" and "channel2" if Redis Core executed the command successfully.
--
I will just add some perspective and point out how the extension
system works in Tokyo Tyrant. Lua is used and each extension has a
global set of functions, such as _get, _set etc. You basically have
all Tokyo Tyrant commands, but you don't pay the price of latency or
of executing in a language such as Python that's a lot slower than
Lua/LuaJIT. The extension system makes it possible to implement incr
in the following way:
function incr(key, value)
  value = tonumber(value)
  if not value then
    return nil
  end
  local old = tonumber(_get(key))
  if old then
    value = value + old
  end
  if not _put(key, value) then
    return nil
  end
  return value
end
The performance penalty of implementing this via Lua is very small in
Tokyo Tyrant, and LuaJIT isn't even used. Is this useful? It is for
some things such as custom datatypes, but like I noted before I don't
think it's super critical for the future success of Redis.
So +1 on waiting for this feature and focusing on more vital parts of
Redis.
Regards,
amix
> The performance penalty of implementing this via Lua is very small in
> Tokyo Tyrant, and LuaJIT isn't even used. Is this useful? It is for
> some things such as custom datatypes, but like I noted before I don't
> think it's super critical for the future success of Redis.
>
> So +1 on waiting for this feature and focusing on more vital parts of
> Redis.
Indeed, that's the point: the CPU involvement in the processing of the
actual command in Redis is a small percentage of the whole work
performed for every request. So I think that while we may re-evaluate
scripting in the future (but after VM and after
Redis-cluster for sure), the API is a different matter, as it's
already pretty clear that speed will not be a huge problem.
Btw in Tokyo scripting is probably much more useful, as living with a
plain dictionary is *hard* indeed.
And... I'm starting to think of an interesting idea. When scripting is
used in order to perform atomic operations that are otherwise not
possible, there is probably a smart solution that solves the problem
with MULTI/EXEC without appearing like a dumb version of scripting,
that is: MULTI/EXEC with CAS.
Example:
GETTAG key1 key2
=> return value, two elements: 60b725f10c9c85c70d97880dfe8191b3
   2cd6ee2c70b0bde53fbe6cac3c8b8bb1

MULTICAS 60b725f10c9c85c70d97880dfe8191b3 2cd6ee2c70b0bde53fbe6cac3c8b8bb1
... commands
... commands
EXEC => will perform the transaction ONLY if all the key tags are the same.
Our classic example is implementing INCR:

GETTAG key1 => blablabla
GET key1
... increment the value client-side ...
MULTICAS blablabla
EXEC => will succeed only if we are in an isolated condition where "key1"
did not change

This is simple and solves nearly all our isolation problems. You can
implement atomic commands with this, and so forth.
What's the problem? We need an 8-byte field per key, and that is a lot of memory.
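The optimistic pattern behind this proposal can be sketched with a toy model (GETTAG and MULTICAS are only proposed in this thread, not real commands; the toy below uses an integer version counter per key in place of the proposed 8-byte tag):

```python
# Toy model of the GETTAG / MULTICAS idea above, using an integer
# version per key in place of the proposed tag.

class ToyCasStore:
    def __init__(self):
        self.values = {}
        self.tags = {}        # bumped on every write

    def get(self, key):
        return self.values.get(key, 0)

    def gettag(self, key):
        return self.tags.get(key, 0)

    def set(self, key, value):
        self.values[key] = value
        self.tags[key] = self.tags.get(key, 0) + 1

    def multicas(self, expected_tags, commands):
        # EXEC succeeds only if no watched key changed since GETTAG.
        if any(self.gettag(k) != t for k, t in expected_tags.items()):
            return False
        for key, value in commands:
            self.set(key, value)
        return True

def incr(store, key):
    # The thread's classic example: INCR built from GET plus a
    # conditional write, retried until no other client raced us.
    while True:
        tag = store.gettag(key)
        value = store.get(key) + 1          # increment client-side
        if store.multicas({key: tag}, [(key, value)]):
            return value

store = ToyCasStore()
store.set("key1", 10)
assert incr(store, "key1") == 11

# A concurrent write invalidates the tag; the transaction is rejected.
tag = store.gettag("key1")
store.set("key1", 99)                       # another client sneaks in
assert store.multicas({"key1": tag}, [("key1", 0)]) is False
```

This is the same retry-on-conflict shape that optimistic locking gives in general: no locks held, but a transaction only commits against an unchanged snapshot.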
Ok, just to be clear, I'm not arguing we want that stuff or that I'll
implement it ASAP ;) We are in feature freeze, but it's just to put some new
ideas on the table. Continuing to think just about scripting will not
help, as we already know pretty well how that can work :)
Cheers,
Salvatore
p.s. thank you very much to all the people involved in this discussion
Hello Vinuth!
> Some examples:
> 1. Pub / Sub system :
> I understand that a pub / sub system is already implemented. But
> theoretically, it could also be implemented as an extension, without
> touching Redis core.. How?
> 1. "Before Command Execute" hook can intercept SUBSCRIBE /
> UNSUBSCRIBE commands and make a data structure storing these clients.
> 2. Messages can be published by any client by adding an optional
> list of channel names at the end of each command..
>
Yes that's possible, but being able to implement arbitrary systems on
top of Redis with an API needs a really big rework of the internals.
The "API" is very generic; there are two main levels it can be
implemented at in Redis:
1) Just expose commands. So you can register a new command that is the
sum of other commands.
2) Allow building everything on the API side. For instance, provide an
API where the user has access to the "client" object, can produce output,
can block and resume it (you need this to implement BLPOP for
instance). Even allow access to the dataset directly.
Note that both 1 and 2 can be exposed in two ways: as scripting,
or as a C-level API.
So we can select from no API, to scripting, to very powerful
scripting, to C APIs directly exposed.
Different levels of complexity, for different concepts of what Redis
is. My point is that Redis is not the kind of software where
extensibility is very important currently IMHO; we are actually at the
max level of complexity already for it to still be "simple". A few
things more, and we could lose the simplicity... so why bother? That's
the point.
Especially since we have stronger priorities: 2.0 stable first.
Redis-cluster later. Less memory used, more speed, as a further step.
> 2. Virtual Memory and Redis' Main Memory concerns:
> A lot of folks may be discouraged from using Redis because they can't have
> all their keys in main memory, or have concerns about it. So these hooks can
> help them implement their own memory management. For example,
> 1. Add expiry time for all keys.
Well, you can still call EXPIRE after every command. But I see your
point: forget about just one, and boom, your memory will get full
(happened to me in production...)
But what we need for that is a simple config option IMHO.
> 2. Update the expiry time every time a key is accessed. (using a
> "before command execute" hook.)
Not possible... it doesn't play in a decent way with replication / AOF
(which is why it's documented in the EXPIRE command man page).
> 3. If a key eventually expires, it means it is not being accessed
> and not needed anymore.
> 4. Store the key in a database and delete it entirely from Redis.
> (using a "Key Expired" hook.)
That's application level business IMHO...
> 3. Clustering:
> A DHT based P2P clustering solution can be implemented as an extension
> without impacting core Redis. I guess a few solutions like Akka provide it
> by making Redis as a backend and acting as a middleware between Redis server
> and the client.. But it could also be done as an extension and there could
> be different ways of doing it.
Redis is very small and with *no* deps. Why would I need this as an extension?
When Redis supports clustering it will be optional.
Possibly even a decoupled daemon (not sure currently).
Like VM, just don't enable it if you don't want it. Redis is a small
system; even with all the features it is a small binary that you can
compile in 2 minutes. No need to compile-out stuff IMHO.
I think it's better to follow the easy path: provide a data structure
server with the very important features inside, stable and fast. It will
not cover everything, but it will do very well what it's able to do.
Cheers,
Salvatore
> 3. If a key eventually expires, it means it is not being accessed
> and not needed anymore.
> 4. Store the key in a database and delete it entirely from Redis.
> (using a "Key Expired" hook.)
> I think it's better to follow the easy path: provide a data structure
> server with the very important features inside, stable, fast.
+1
On Apr 6, 5:04 pm, Vinuth Madinur <vinuth.madi...@gmail.com> wrote:
>
> I also had similar needs as Adam and Anibal... computationally intensive
> stuff could be done faster by having it on Redis server.
>
That doesn't always scale, because as soon as you have lots of clients
(we patch Redis for 40,000+ client connections per server) then being
computationally expensive server-side severely limits your ability to
scale out (you need to scale up your hardware instead). I can see that for
some niche use-cases being able to do computation close to the data is a
good idea, but those situations will always be niche (at least
compared to the majority of other users).
Having the option to move computation closer to the data when needed,
and to disable scripting/procedures etc. if not needed, would be
excellent. It MUST be possible to disable it, or it must have ZERO
overhead when not in use, or else you ask every
high-traffic-but-using-simple-datatypes user to take a performance
hit. I am also not sure that the development overhead and complexity
that will arise from a scripting or procedure layer will be worth the
effort at this stage in the Redis project.
My 2 cents, cheers
--
+1 for trickie's response
Let's remember that Redis arose because of a desire for simple,
crazy-fast primitives. The fundamental goals of Redis are at odds
with high-level features like a procedural language.
Maybe somebody needs to create a separate Redis++ project that uses a
Redis core, and then adds on these types of features. That way, those
of us who need maximum bare-metal speed can use Redis core, and those
that want a more feature-rich version can use Redis++. Many other
projects use this approach.
-Nate
any updates?
I've done a quick sketch of how Lua scripts could be added to redis. You
can view the diff here:
https://github.com/georgebashi/redis/compare/master...lua-scripts
Don't try it out unless you're very brave, but it adds a "LUAEXEC" command
which lets you run arbitrary Lua code, and adds a Lua function "redis()"
which lets you call commands and receive back responses in protocol format.
For example, from redis cli:
redis> luaexec "redis('incrby', 'a', '5') ; return '+OK'"
+OK
redis> get a
"5"
As I said, this is a very early version - I've done some more work since,
tidying up the interface a bit, but haven't yet had the time to tidy up and
push that to Github - if anyone's interested, I'd be happy to do so over
the weekend.
When I spoke to Pieter about the patch on IRC, he raised the point about
how having code running server-side would interact with all the features
like replication, sharding, aof, etc. I'd be interested to hear opinions on
it.
Cheers,
George
I too would love to see this, possibly with the option of loading the
scripts from disk as opposed to receiving them over the wire.