If you are talking about custom scripts to perform master/slave
failover, please explain how you do it, so that we can offer advice to
make it better, or kudos for doing it well.
Regards,
- Josiah
But yeah. Don't make loops in your Redis slaving configuration. It is
never the right thing to do.
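For concreteness, here is a minimal sketch with redis-py of what a sane replication setup looks like (the host names are made up); replication should form a chain or tree, never a cycle:

    import redis

    # B replicates from A, and C replicates from B: a chain, no cycle.
    redis.StrictRedis(host='host-b').slaveof('host-a', 6379)
    redis.StrictRedis(host='host-c').slaveof('host-b', 6379)

    # Pointing A back at C would close the loop -- don't do this:
    # redis.StrictRedis(host='host-a').slaveof('host-c', 6379)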
Regards,
- Josiah
You can use the plain "SET <key> <value>" or MSET to set your values,
followed by "KEYS" to get all of your keys, then get all of your data
with "GET <key>" or "MGET <key1> <key2> ...".
You can also use HSET/HMSET to create a hash of all of your values,
then use HGETALL to get all of the key/value pairs.
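A minimal sketch of both approaches with redis-py; the key names ('user:1' and so on) are made up for illustration:

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    # Plain string keys:
    r.mset({'user:1:name': 'alice', 'user:1:email': 'alice@example.com'})
    keys = r.keys('user:1:*')
    values = r.mget(keys)

    # Or keep the same data in a single hash:
    r.hmset('user:1', {'name': 'alice', 'email': 'alice@example.com'})
    fields = r.hgetall('user:1')   # all field/value pairs as a dict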
That said, I've not seen an application where what you are proposing
makes sense. Maybe you have some new and interesting thing you are
doing, but more likely, you are just wasting memory. If you describe
your actual problem, we are usually pretty good at offering
suggestions for either getting the most out of Redis, or pointing you
off to a solution that may be better suited.
Regards,
- Josiah
You can use MSET for 1 round trip to set, KEYS for 1 round trip to get
the keys, and MGET for 1 round trip to get all of the values.
Alternatively, you can use HMSET for 1 round trip to set, and HGETALL
for 1 round trip to get.
Even if HMSET/HGETALL/MSET/MGET didn't exist, you could still use
non-transactional pipelines to batch the individual SET and GET calls,
so each batch costs 1 round trip (the KEYS call, and the pipelined GETs
that depend on its results, still need their own round trips).
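A rough sketch of that pipelined version with redis-py; 'myprefix:*' and the sample data are made up:

    import redis

    r = redis.StrictRedis()
    data = {'myprefix:a': '1', 'myprefix:b': '2'}

    pipe = r.pipeline(transaction=False)   # non-transactional pipeline
    for key, value in data.items():
        pipe.set(key, value)
    pipe.execute()                         # all SETs in one round trip

    keys = r.keys('myprefix:*')            # one round trip for the keys

    pipe = r.pipeline(transaction=False)
    for key in keys:
        pipe.get(key)
    values = pipe.execute()                # all GETs in one round trip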
> In my application, named Feed Ranking, I want to rank all of a user's
> feeds and then show only the top 200 in "Feed New". I store an index of
> "lambda" values representing each user's "time decay" for the day -->
> at the start of a new day, a service loads every user's lambda into a
> map <userId, lambda> so my application can read it in memory --> this
> speeds up my application.
You don't need to get all of the lambdas; you only need the "best" 200.
If you store them in a zset, which holds "members" and "scores", you can
get the members with the 200 best scores (whether "best" means highest
or lowest) with a single command. You can set the values over the course
of the day, and
have your app pull an updated list every few minutes, depending on how
quickly you want score changes to propagate.
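Something like the following, as a sketch with redis-py (the key names like 'feedrank:42' are made up, and the mapping form of zadd assumes redis-py 3.0+):

    import redis

    r = redis.StrictRedis()

    # Set/update member scores over the course of the day:
    r.zadd('feedrank:42', {'feed:1001': 3.7, 'feed:1002': 9.2})

    # One command to pull only the 200 best members (highest scores here;
    # use zrange instead if lower scores are better):
    top200 = r.zrevrange('feedrank:42', 0, 199, withscores=True)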
It is my finding (after having developed a few such scoring methods)
that any time you have scores that "degrade" over time, it's usually a
mistake, because if you are looking to generate a total ordering on
items, you have to re-score all of your items... which is a waste. You
can typically flip the score on its head and have scores grow over time
based on some event. Take the claimed Reddit scoring method:
http://amix.dk/blog/post/19588 . Any non-linearities in the score come
from upvotes/downvotes, not time (the t term grows linearly and is based
on the unix epoch, so you can just use the item's unix timestamp
directly). The Hacker News score, on the other hand, has its
non-linearities induced only by time: http://amix.dk/blog/post/19574 ,
which takes a bit of math to come up with an alternative that behaves
similarly.
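As a rough sketch, the Reddit-style score from that first link looks like this (the constants come from Reddit's open-sourced code and may have changed since):

    import math, time

    def hot_score(upvotes, downvotes, created_at=None):
        # Non-linear only in the votes; the time term grows linearly.
        s = upvotes - downvotes
        order = math.log10(max(abs(s), 1))
        sign = 1 if s > 0 else (-1 if s < 0 else 0)
        seconds = (created_at or time.time()) - 1134028003
        return round(sign * order + seconds / 45000, 7)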
If you can base your score on some event with a fixed time component,
the score can be calculated once and left alone until the next such
event occurs. You can also update scores arbitrarily throughout the day
as events come in, and you never have to deal with the "update the
world" work, which is painful and expensive.
Regards,
- Josiah