Also please attach a debugger by launching gdb, then:
attach <pid of your process>
bt
then send us the output of "bt" (backtrace).
Also please report which Redis version you are running.
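For completeness, a minimal session could look roughly like this (the pid 12345 is only a placeholder; "gdb -p" is equivalent to launching gdb and then attaching, and detaching at the end lets Redis keep running):

$ gdb -p 12345
(gdb) bt
(gdb) detach
(gdb) quit

The Redis version can be read from the redis_version field of redis-cli INFO.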
Thanks!
Salvatore
--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotele
I think that eventually it will start responding again, but this may
take a long time.
For this reason we are moving away from Virtual Memory in the next
version of Redis.
Cheers,
Salvatore
On Mon, Jan 24, 2011 at 4:15 PM, Jonathan Leibiusky <iona...@gmail.com> wrote:
> I think I might download diskstore branch and give it a try, what do you
> think?
What kind of data do you have inside your instance?
Possibly you can disable VM and use the 2.2 features that should be able to
hold the whole dataset in memory.
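As a sketch only (these are roughly the stock 2.2 directive names and values; whether the dataset then fits in RAM obviously depends on your data):

vm-enabled no
# 2.2 keeps small hashes/lists/sets in compact encodings in memory
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
list-max-ziplist-entries 128
list-max-ziplist-value 64
set-max-intset-entries 512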
The problem with diskstore is that you would first need to convert your
.rdb dump into the diskstore format, and currently that's not possible.
Other questions: how many keys do you have? What data is inside every
key? What is the average length?
Thanks
Cheers,
Salvatore
Ok I can confirm this. The VM is (very slowly) loading the dataset into memory.
> Let me know if there is more information I can provide. And thanks for your
> help!
Ok you are right, I guess... you know your data much better than I do :)
Diskstore is the way to go for your problem. Please try reindexing
into diskstore; I think this is the kind of application where it can
work very well, and we can get some awesome hints about how it is
working.
So please:
- Use the "unstable" branch on github
- Configure it with the best values of cache-max-memory and
cache-flush-delay you can find. That is, you can start with a max
memory value that is 60% of the RAM in your system in order to be
conservative, and with a flush delay that is as large as you can
tolerate; for instance 30 seconds would be cool, or even more if you can
live with such a delay for writes (an example config is sketched after this list).
- Reload the data into diskstore. Bulk writes will be slow of course,
as the cache will not be of much use, but the flush delay will
help *a lot* with this.
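As an example only, assuming a box with 16 GB of RAM (the path is the same example name used later in this thread, and 9gb assumes the same size-suffix syntax as the mb values shown there; the exact numbers are yours to tune):

diskstore-enabled yes
diskstore-path redis.ds
# about 60% of 16 GB, to stay conservative
cache-max-memory 9gb
cache-flush-delay 30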
Then the read experience should be great, as the most used indexes will
likely stay in RAM.
Also, restarts are basically zero-time.
I really look forward to seeing how this will work for you, and I'm
optimistic about it.
I'll provide all the help you need, as for me this is a great chance to
learn about diskstore's strengths and limits in this early stage of
development.
Thank you
But the error appears to be different in this case, as if Redis was
interrupted while creating the initial diskstore structure and then
restarted. Deleting the whole diskstore with rm -rf and restarting
Redis should be enough. Wait for the creation of the on-disk structure
before stopping it.
From now on it's possible to terminate Redis in bad ways without
problems for the diskstore, but the very first creation needs to build
65536 directories.
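For example, a rough recovery sequence could be (redis.ds and redis.conf are just the example names used elsewhere in this thread; 65536 matches a 256 x 256 two-level layout of hex-named directories such as d6/59):

rm -rf redis.ds
redis-server redis.conf
# wait for the directory tree to be fully created before stopping Redis again
find redis.ds -type d | wc -l   # should settle around 65536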
I'm here to help with any further problem.
Cheers,
Salvatore
Very strange. Can you please check whether you actually have the folder 'd6/59'?
Thanks,
There is no need for an explicit save, so I wonder what's happening.
Example, if I use this config:
diskstore-enabled yes
diskstore-path redis.ds
# cache-max-memory 10mb
cache-max-memory 5mb
cache-flush-delay 10
# loglevel notice
loglevel debug
And when I perform the "SET foo bar" operation I see in the log:
[18939] 24 Jan 20:16:38 . Scheduling key foo for saving
[18939] 24 Jan 20:16:39 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[18939] 24 Jan 20:16:39 - 1 clients connected (0 slaves), 936016 bytes in use
[18939] 24 Jan 20:16:44 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[18939] 24 Jan 20:16:44 - 1 clients connected (0 slaves), 936016 bytes in use
[18939] 24 Jan 20:16:48 . Creating IO save Job for key foo
[18939] 24 Jan 20:16:48 . Queued IO Job 0x1010001c0 type 1 about key 'foo'
[18939] 24 Jan 20:16:48 . 1 IO jobs to process
[18939] 24 Jan 20:16:48 . Thread 4301594624: new job type save:
0x1010001c0 about key 'foo'
[18939] 24 Jan 20:16:48 . Thread 4301594624 completed the job:
0x1010001c0 (key foo)
[18939] 24 Jan 20:16:48 . Processing I/O completed job
[18939] 24 Jan 20:16:48 . COMPLETED Job type save, key: foo
From the scheduling at 16:38, the key is saved 10 seconds later, at 16:48.
What do you see instead?
Is it possible that the next time you started Redis you did not specify
the config file, so it started in-memory?
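If that is what happened, a quick check (the path is only a placeholder) is to start Redis explicitly with the config file and watch whether keys end up on disk under the diskstore directory:

redis-server /path/to/redis.conf
redis-cli SET foo bar
# after the cache-flush-delay, the key should appear somewhere under redis.ds/
find redis.ds -type f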
On Mon, Jan 24, 2011 at 8:13 PM, Jonathan Leibiusky <iona...@gmail.com> wrote:
> didn't have those folders. the strange thing is that I deleted it once
> again, changed loglevel to notice and restarted and now it created the
> folders (so maybe I did something wrong before).
>
> now I have a different problem. just for testing I set the cache-flush-delay
> to 10 seconds.
> when the server is up I:
> SET foo bar
>
> wait 2 minutes, and I:
>
> SHUTDOWN
>
> after restarting the key is not there. so it seems like it is not storing
> it.
>
> do I have to explicitly SAVE?,
This is a known issue indeed: basically KEYS is not implemented as a
diskstore operation, so it will just show the entries currently in the
diskstore cache.
The same happens with RANDOMKEY and DBSIZE. All the other commands
should work as expected.
While KEYS will be implemented as a diskstore operation, I'm not sure
about DBSIZE and RANDOMKEY; those may just return an error like '-ERR not
supported in diskstore mode', or otherwise get a very slow
implementation.
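As an illustration only (the prompt and values are made up), right after a restart a session could plausibly look like:

redis> KEYS *
(empty list or set)
redis> GET foo
"bar"

GET reads the value back from disk and caches it, while KEYS only sees what is currently in the cache.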
Another interesting thing is that you can BGSAVE while using diskstore,
in order to obtain single-file backups.
But currently it's hard to restore these backups if the database is
bigger than RAM.
We'll soon have a .rdb -> diskstore conversion utility that can do the trick.
Diskstore databases can be hot-copied while the database is running
without problems.
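Sketching the two options mentioned here (the backup path is a placeholder):

redis-cli BGSAVE                   # single-file snapshot backup
cp -a redis.ds /backups/redis.ds   # or hot-copy the diskstore directory while Redis runs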
Cheers,
Awesome! Thank you, this is very appreciated and I'm eager to know how
it will work :)
In Italy now it's 9 pm so in the next hours I may not respond, but
please write anything and tomorrow morning I'll carefully reply to all
your emails.
Thanks!
Most likely the problem is that you are not creating the sets one after
the other, but are writing at random into many sorted sets at the same
time. So basically a given key can't be kept in memory for 360
seconds, because there are too many other sets changing at the
same time and Redis needs to flush data to disk.
After the first bulk load, write speed should start to be better, as
you will probably stress a smaller percentage of the working set.
You can obtain huge performance increases if you model your
program to re-create the dataset writing the first key, then the
second, and so forth, sequentially, but this is not always possible...
The work on the b-tree will provide better write speed; the current
filesystem-based implementation is reliable but very slow indeed...
I wonder if you have some numbers about the commands per second you
are able to obtain so far.
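One rough way to get that number, assuming the stock redis-cli and the load already running (the 10-second window is arbitrary), is to sample total_commands_processed from INFO twice:

redis-cli INFO | grep total_commands_processed
sleep 10
redis-cli INFO | grep total_commands_processed
# (second value - first value) / 10 ~= average commands per second over the window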
Cheers,
Salvatore
On Mon, Jan 24, 2011 at 10:26 PM, Jonathan Leibiusky <iona...@gmail.com> wrote:
> Storing data seems to be extremely slow with diskstore. Not sure why.
> I tried setting the cache time to 360 secs thinking it would give
> better results but without luck. Is it because it is creating a file
> per key? Not 100% sure, but it seems like a slow operation.
> Btw it is worth to mention I am using local HD for this test.