On Mon, Jun 11, 2012 at 1:23 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:
> (I want to be able to pass in an Executor/ExecutionContext)

The problem with that is the writer thread blocking behaviour: so you
cannot share it between Journals through a common executor.
There actually are two possible solutions:
1) Use a non-blocking Reactor, as already cited.
2) Use a fixed-time batching policy, meaning that the writer thread,
rather than blocking and waiting for batches, would pause for a
batching period and then batch everything out, which would allow you
to use a common (scheduled?) executor.
Now, #1 would take a lot of time, while #2 has performance implications.
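To make #2 a bit more concrete, here is a minimal sketch (all names are made up, this is not Journal.IO code) of a fixed-time batching writer driven by a shared scheduled executor:

```scala
import java.util.concurrent.{ConcurrentLinkedQueue, ScheduledExecutorService, TimeUnit}

// Fixed-time batching: instead of a dedicated writer thread blocking on a
// batch queue, a task on a shared ScheduledExecutorService wakes up every
// batchIntervalMillis and flushes whatever has accumulated in the meantime.
class TimedBatchWriter(scheduler: ScheduledExecutorService,
                       batchIntervalMillis: Long)(flush: Seq[Array[Byte]] => Unit) {

  private val pending = new ConcurrentLinkedQueue[Array[Byte]]()

  // Writers never block: they enqueue a record and return immediately.
  def write(record: Array[Byte]): Unit = pending.offer(record)

  // One periodic task per journal, but every journal can share the same scheduler.
  scheduler.scheduleWithFixedDelay(new Runnable {
    def run(): Unit = {
      val batch = Iterator.continually(pending.poll()).takeWhile(_ != null).toSeq
      if (batch.nonEmpty) flush(batch) // one write (and one sync) for the whole batch
    }
  }, batchIntervalMillis, batchIntervalMillis, TimeUnit.MILLISECONDS)
}
```

The performance implication mentioned above is the added latency of up to one batching interval per write.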
On Mon, Jun 11, 2012 at 2:39 PM, Sergio Bossa <sergio...@gmail.com> wrote:
> On Mon, Jun 11, 2012 at 1:23 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:
>> (I want to be able to pass in an Executor/ExecutionContext)
>
> The problem with that is the writer thread blocking behaviour: so you
> cannot share it between Journals through a common executor.

Of course you can, ExecutionContext has a "blocking" method that can take evasive actions when someone is evil and is doing blocking.

> There actually are two possible solutions:
> 1) Use a non-blocking Reactor, as already cited.

See above.

> 2) Use a fixed-time batching policy, meaning that the writer thread,
> rather than blocking and waiting for batches, would pause for a
> batching period and then batch everything out, which would allow you
> to use a common (scheduled?) executor.
> Now, #1 would take a lot of time, while #2 has performance implications.

Sure, but at least that is tunable from the user side, by providing a better Scheduler.
--
Viktor Klang
Akka Tech Lead
On Mon, Jun 11, 2012 at 1:42 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:
> Of course you can, ExecutionContext has a "blocking" method that can take
> evasive actions when someone is evil and is doing blocking.

Sorry, which class are you talking about? Any concrete examples you
can point at?
√iktor Ҡlang <viktor...@gmail.com> wrote:

But that's Scala! :)
Btw, I don't honestly see how that could solve the blocking problem; if a thing is blocking by design, that's it: you have to either starve others, or grow the pool size.
If you think it can be done differently, please speak loud :)
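For readers wondering which class is being referred to: a minimal sketch, assuming the Scala 2.10 scala.concurrent API, of the `blocking` construct mentioned above. A BlockContext-aware ExecutionContext (such as the default global one) can react by adding a compensating thread, which is the "grow the pool size" option.

```scala
import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingExample extends App {
  // Wrapping blocking work in `blocking { ... }` tells a BlockContext-aware
  // ExecutionContext that this thread is about to block, so it can add a
  // compensating thread instead of letting the rest of the pool starve.
  val result: Future[Unit] = Future {
    blocking {
      Thread.sleep(100) // stand-in for a blocking call, e.g. a journal sync
    }
  }
}
```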
Hiram Chirino
Software Fellow | FuseSource Corp.
chi...@fusesource.com | fusesource.com
skype: hiramchirino | twitter: @hiramchirino
blog: Hiram Chirino's Bit Mojo
BTW.. if you have lots of writers, you typically want to batch writes together if you're going to be doing disk syncs, because the disk sync is the most expensive part of storing data reliably. Creating a journal per mailbox might not be the best idea. Using one journal per disk spindle is typically the most performant way to go, hardware-wise.
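As an illustration of the group-commit idea described above (hypothetical classes, not ActiveMQ/Apollo code): many writers enqueue records, and a single flush pays for one disk sync covering the whole batch.

```scala
import java.nio.ByteBuffer
import java.nio.channels.FileChannel
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.{Future, Promise}

// Group commit: many writers enqueue records, a single writer drains them,
// appends everything, and pays for ONE force() (disk sync) covering the batch.
class GroupCommitLog(channel: FileChannel) {

  private case class Pending(record: Array[Byte], done: Promise[Unit])
  private val queue = new ConcurrentLinkedQueue[Pending]()

  // Called by any number of producer threads; never blocks on the disk.
  def append(record: Array[Byte]): Future[Unit] = {
    val p = Pending(record, Promise[Unit]())
    queue.offer(p)
    p.done.future // completed once a covering sync has happened
  }

  // Called by the single writer thread (or a scheduled task, as sketched earlier).
  def flush(): Unit = {
    val batch = Iterator.continually(queue.poll()).takeWhile(_ != null).toList
    if (batch.nonEmpty) {
      batch.foreach(p => channel.write(ByteBuffer.wrap(p.record)))
      channel.force(false)              // one sync for the whole batch
      batch.foreach(_.done.success(())) // acknowledge all writers at once
    }
  }
}
```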
I don't care how the flush is implemented, I just want to keep thread creation down and have the queues persistent.
On Tue, Jun 12, 2012 at 1:22 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:
> I don't care how the flush is implemented, I just want to keep thread creation down and have the queues persistent.

So you'd be okay using the same journal for different actors, your only problem is the way its threads are created ... right?
The mailbox requirements are then starting to sound similar to the requirements of a queue in a messaging server. In the Apollo messaging server I ended up having all the queues write to a single Journal for the purpose of having a fast-syncing write-ahead log whose updates get applied to a leveldb-based index (non-synced). The index is used to avoid having to scan the whole journal to get the entries of a specific queue.
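A rough sketch of that pattern (made-up names, not the actual Apollo/LevelDB store): one shared, synced journal, plus a non-synced index from queue id to journal positions, so reads never scan the whole log.

```scala
// Every queue appends to ONE shared, synced write-ahead log, while a cheap,
// non-synced index records where each queue's entries live. The index can
// always be rebuilt by replaying the journal, so it never needs its own sync.
class SharedWalIndex(appendToJournal: Array[Byte] => Long) {

  // Stand-in for the LevelDB-backed index: queue id -> journal positions.
  private var index = Map.empty[String, Vector[Long]]

  def enqueue(queueId: String, record: Array[Byte]): Unit = synchronized {
    val pos = appendToJournal(record) // durable, synced write-ahead log append
    index += queueId -> (index.getOrElse(queueId, Vector.empty) :+ pos)
  }

  // Reads consult the index and fetch only the relevant journal positions,
  // instead of scanning the whole journal for one queue's entries.
  def positions(queueId: String): Vector[Long] =
    synchronized(index.getOrElse(queueId, Vector.empty))
}
```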
On Tue, Jun 12, 2012 at 1:28 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:
> if Actors move around and share the same FS, then you'd end up having
> multiple VMs interacting with the same file, which I assume to be
> problematic.

That's problematic for sure, but if your actors live in different VMs
then you cannot use the same threadpool either, because they live in
different processes.
Btw, I think your use case is pretty clear to me, that is:
1) You want one Journal/File(s) per actor, so that any actor can
interact with it and move between VMs without breaking others.
2) You want all Journals opened on the same VM to share the same thread pool.
Obviously this means you cannot batch messages from multiple actors
together in order to avoid multiple syncs.
If that's correct, I'll think about a not-so-invasive solution for
that and get back here with thoughts.
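If that reading is right, the shape of the API being asked for might look roughly like this (HypotheticalJournal and setWriteExecutor are stand-ins, not existing Journal.IO API):

```scala
import java.util.concurrent.{ExecutorService, Executors}

// Hypothetical shape of the API under discussion: one journal per actor,
// all of them driven by a single shared executor instead of one thread each.
object PerActorJournals {

  // Sized to the hardware (e.g. disk spindles), not to the number of actors.
  val sharedWriters: ExecutorService = Executors.newFixedThreadPool(2)

  def journalFor(actorName: String): HypotheticalJournal = {
    val journal = new HypotheticalJournal(s"journals/$actorName")
    journal.setWriteExecutor(sharedWriters) // no dedicated thread per journal
    journal.open()
    journal
  }
}

// Minimal stand-in so the sketch is self-contained; the real Journal.IO class differs.
class HypotheticalJournal(directory: String) {
  def setWriteExecutor(executor: ExecutorService): Unit = ()
  def open(): Unit = ()
}
```

The trade-off is exactly the one noted above: with one journal per actor there is no way to batch writes from different actors behind a single sync.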
--
Martin Krasser

blog:    http://krasserm.blogspot.com
code:    http://github.com/krasserm
twitter: http://twitter.com/mrt1nz
Sooo, Akka is acquitted? ;-)
On 18.06.12 13:45, √iktor Ҡlang wrote:
> Sooo, Akka is acquitted? ;-)

I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)
On Mon, Jun 18, 2012 at 1:53 PM, Martin Krasser <kras...@googlemail.com> wrote:
> On 18.06.12 13:45, √iktor Ҡlang wrote:
>> Sooo, Akka is acquitted? ;-)
>
> I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)
I've spent years polishing that ;-)
On 18.06.12 13:56, √iktor Ҡlang wrote:
> On Mon, Jun 18, 2012 at 1:53 PM, Martin Krasser <kras...@googlemail.com> wrote:
>> On 18.06.12 13:45, √iktor Ҡlang wrote:
>>> Sooo, Akka is acquitted? ;-)
>>
>> I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)
>
> I've spent years polishing that ;-)

I know. I just should have spent more time investigating an issue before blah-blah-ing on a mailing list. Was a victim of my limited time :/ - no excuse.
On Mon, Jun 18, 2012 at 12:20 PM, Martin Krasser
<kras...@googlemail.com> wrote:
> The reason for the busy loop is this condition:
> https://github.com/sbtourist/Journal.IO/blob/wip-executors/src/main/java/journal/io/api/DataFileAppender.java#L263
> removing "!shutdown" makes the example run as expected ...

I suspected the reason was indeed inside Journal.IO; you know, Viktor is
always right ;)
So, looking at the code, that "!shutdown" condition isn't needed
anymore (it's a legacy of the previous blocking version, where a null
batch was sent to unblock after shutdown) and is actually dangerous:
it makes threads that never get a batch from the queue loop
infinitely. So I'll remove it and push back.
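For readers following along, here is an illustrative reconstruction of the kind of loop being described (not the actual DataFileAppender code; see the GitHub link above for that):

```scala
import java.util.concurrent.ConcurrentLinkedQueue

// Illustrative reconstruction only: the writer task keeps looping "while
// there is a batch OR we have not been shut down".
class BatchProcessor(queue: ConcurrentLinkedQueue[Array[Byte]]) {

  @volatile private var shutdown = false

  def processBatches(): Unit = {
    var batch = queue.poll()
    // With `!shutdown` in the condition, a task that never receives a batch
    // spins here until shutdown. In the old blocking design a null batch was
    // pushed on shutdown to wake the single writer thread, so the test made
    // sense there; dropping `|| !shutdown` makes the task process what is
    // queued and then simply return, which is what the executor-based
    // version expects.
    while (batch != null || !shutdown) {
      if (batch != null) writeAndSync(batch)
      batch = queue.poll()
    }
  }

  private def writeAndSync(batch: Array[Byte]): Unit = () // elided in this sketch
}
```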
I find it amusing that we discovered this with Akka dispatchers but
not with executors (and I'm still unable to replicate this in the test
cases).