Thread softly, danger is near


√iktor Klang

Jun 11, 2012, 4:48:04 AM
to Journal.IO
Hey,

I attempted to switch from our Kestrel file-based queue to Journal.IO
for our durable file-based mailboxes. However, since we can
potentially have millions of actors, it doesn't really work to have 2
threads created for every actor, so I ended up not being able to
switch to Journal.IO.

Not sure whether you can solve it if the design somehow relies on
having threads, but perhaps one could envision switching to Futures/
ExecutionContexts instead of creating threads?

Cheers,

Chris Vest

Jun 11, 2012, 7:05:39 AM
to jour...@googlegroups.com
Hi,

Your tweet on this prompted me to think about what an optimal design might look like, and I ended up starting work on a prototype based on a single IO-thread design. The IO-thread runs a loop where it drains a queue of IO commands, sorts them (reads in increasing order of file offset; reads before writes; writes kept in-order), syncs all touched files and then finally activates all callbacks.

The idea is that disks generally hate random IO because of the seeking, and love sequential IO. For the same reason they don't like concurrent IO, because it's really just serialized random IO. The hope is that having just one thread do all the IO - and, to the furthest extent possible, in-order - will speed things up. The mean latency might get higher because files are only sync'ed at the end, but I hope that throughput is also higher because there should be less sync'ing overall.
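A minimal, runnable sketch of that loop in Java (all names are invented for illustration; this is not the prototype's code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the single IO-thread design: drain queued commands, serve
// reads in increasing offset order before writes (which keep submission
// order), then sync and fire every callback at the end of the batch.
public class IoReactorSketch {
    public enum Kind { READ, WRITE }

    public static final class IoCommand {
        final Kind kind;
        final long offset;        // file offset, used to order reads
        final Runnable callback;  // runs only after the whole batch is synced
        public IoCommand(Kind kind, long offset, Runnable callback) {
            this.kind = kind; this.offset = offset; this.callback = callback;
        }
    }

    private final BlockingQueue<IoCommand> queue = new LinkedBlockingQueue<>();

    public void submit(IoCommand cmd) { queue.add(cmd); }

    // One iteration of the IO loop; returns commands in execution order.
    public List<IoCommand> drainAndExecuteOnce() {
        List<IoCommand> batch = new ArrayList<>();
        queue.drainTo(batch);
        List<IoCommand> reads = new ArrayList<>();
        List<IoCommand> writes = new ArrayList<>();
        for (IoCommand c : batch) (c.kind == Kind.READ ? reads : writes).add(c);
        reads.sort(Comparator.comparingLong(c -> c.offset)); // reads by offset
        List<IoCommand> ordered = new ArrayList<>(reads);    // reads first,
        ordered.addAll(writes);                              // writes in order
        // ... the real loop would perform the IO and sync touched files here ...
        for (IoCommand c : ordered) c.callback.run();        // callbacks last
        return ordered;
    }
}
```

The point of the ordering is that each drained batch turns arbitrary concurrent submissions into one mostly-sequential pass over the files, with a single sync point before any callback fires.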

The prototype isn't yet usable (I've only spent a couple of hours on it) so I don't yet know if it's a viable design. It's also not based on the Journal.IO source, so I don't know how hard it will be to retrofit. It's mostly just for inspiration, so maybe Sergio will be inspired :P

Cheers,
Chris

√iktor Ҡlang

Jun 11, 2012, 8:23:28 AM
to jour...@googlegroups.com
I only need logical "lanes" and do not really care how it's realized on file, but I need to have one logical persistent queue per Actor, and definitely no auxiliary threads.
(I want to be able to pass in an Executor/ExecutionContext)

Cheers,
--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Sergio Bossa

Jun 11, 2012, 8:31:43 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 12:05 PM, Chris Vest <mr.chr...@gmail.com> wrote:

> I ended up starting work on a prototype based on a single
> IO-thread design. The IO-thread runs a loop where it drains a queue of IO
> commands, sorts them (reads in increasing order of file offset; reads before
> writes; writes kept in-order), syncs all touched files and then finally
> activates all callbacks.

Yep, that's called Reactor and I used it in the past on Actorom (which
ended up being the first scheduler implementation for Akka so Viktor
may know about it).
By the way, there actually *is* one single writer thread per Journal,
so a Reactor would only improve things when shared among many Journals
(there was already some talk about this on the issue tracker, I
think), or am I missing your point?
Also, it wouldn't satisfy Viktor's needs, as he doesn't want any
threads at all :)

Thoughts?

--
Sergio Bossa
http://www.linkedin.com/in/sergiob

Sergio Bossa

Jun 11, 2012, 8:39:12 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 1:23 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> (I want to be able to pass in an Executor/ExecutionContext)

The problem with that is the writer thread's blocking behaviour: you
cannot share it between Journals through a common executor.
There actually are two possible solutions:
1) Use a non-blocking Reactor, as already cited.
2) Use a fixed-time batching policy, meaning that the writer thread,
rather than blocking and waiting for batches, would pause for a
batching period and then batch everything out, which would allow you
to use a common (scheduled?) executor.
Now, #1 would take a lot of time, #2 has performance implications.
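Option #2 might look roughly like this (a sketch with invented names, not Journal.IO code): each journal exposes a non-blocking tick that a shared ScheduledExecutorService invokes once per batching period.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of the fixed-time batching policy: nothing ever blocks waiting for
// a batch; a shared scheduler just invokes tick() every batching period and
// whatever accumulated since the last tick goes out as one batch (one sync).
public class TimedBatchWriter {
    private final Queue<byte[]> pending = new ConcurrentLinkedQueue<>();
    private final Consumer<List<byte[]>> flush; // e.g. append + sync to the journal

    public TimedBatchWriter(Consumer<List<byte[]>> flush) { this.flush = flush; }

    public void write(byte[] record) { pending.add(record); } // never blocks

    public void tick() {
        List<byte[]> batch = new ArrayList<>();
        for (byte[] r; (r = pending.poll()) != null; ) batch.add(r);
        if (!batch.isEmpty()) flush.accept(batch); // single sync per period
    }

    // Many journals can share one scheduler; each contributes a periodic,
    // non-blocking tick instead of owning a dedicated writer thread.
    public ScheduledFuture<?> scheduleOn(ScheduledExecutorService shared, long periodMillis) {
        return shared.scheduleWithFixedDelay(this::tick, periodMillis, periodMillis,
                                             TimeUnit.MILLISECONDS);
    }
}
```

The performance implication mentioned above is visible here: a record written just after a tick waits up to a full period before it is durable.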

√iktor Ҡlang

Jun 11, 2012, 8:42:21 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 2:39 PM, Sergio Bossa <sergio...@gmail.com> wrote:
On Mon, Jun 11, 2012 at 1:23 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> (I want to be able to pass in an Executor/ExecutionContext)

The problem with that is the writer thread's blocking behaviour: you
cannot share it between Journals through a common executor.

Of course you can, ExecutionContext has a "blocking" method that can take evasive actions when someone is evil and is doing blocking.
 
There actually are two possible solutions:
1) Use a non-blocking Reactor, as already cited.

See above.
 
2) Use a fixed-time batching policy, meaning that the writer thread,
rather than blocking and waiting for batches, would pause for a
batching period and then batch everything out, which would allow you
to use a common (scheduled?) executor.
Now, #1 would take a lot of time, #2 has performance implications.

Sure, but at least that is tunable from the user side, by providing a better Scheduler.

Cheers,
 

Thoughts?

--
Sergio Bossa
http://www.linkedin.com/in/sergiob

√iktor Ҡlang

Jun 11, 2012, 8:44:49 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 2:42 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:


On Mon, Jun 11, 2012 at 2:39 PM, Sergio Bossa <sergio...@gmail.com> wrote:
On Mon, Jun 11, 2012 at 1:23 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> (I want to be able to pass in an Executor/ExecutionContext)

The problem with that is the writer thread's blocking behaviour: you
cannot share it between Journals through a common executor.

Of course you can, ExecutionContext has a "blocking" method that can take evasive actions when someone is evil and is doing blocking.
 
There actually are two possible solutions:
1) Use a non-blocking Reactor, as already cited.

See above.
 
2) Use a fixed-time batching policy, meaning that the writer thread,
rather than blocking and waiting for batches, would pause for a
batching period and then batch everything out, which would allow you
to use a common (scheduled?) executor.
Now, #1 would take a lot of time, #2 has performance implications.

Sure, but at least that is tunable from the user side, by providing a better Scheduler.

For all I know perhaps that Scheduler impl is just a Thread that busy-waits to run?
But the big benefit is that I can, from the outside, turn the knobs ;-)

Cheers,
 

Cheers,
 

Thoughts?

--
Sergio Bossa
http://www.linkedin.com/in/sergiob



--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Sergio Bossa

Jun 11, 2012, 8:56:45 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 1:42 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> Of course you can, ExecutionContext has a "blocking" method that can take
> evasive actions when someone is evil and is doing blocking.

Sorry, which class are you talking about? Any concrete examples you
can point at?

√iktor Ҡlang

Jun 11, 2012, 9:10:25 AM
to jour...@googlegroups.com
On Mon, Jun 11, 2012 at 2:56 PM, Sergio Bossa <sergio...@gmail.com> wrote:
On Mon, Jun 11, 2012 at 1:42 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> Of course you can, ExecutionContext has a "blocking" method that can take
> evasive actions when someone is evil and is doing blocking.

Sorry, which class are you talking about? Any concrete examples you
can point at?

Sergio Bossa

Jun 12, 2012, 4:23:59 AM
to jour...@googlegroups.com
√iktor Ҡlang <viktor...@gmail.com> wrote:


But that's Scala! :)

Btw, I don't honestly see how that could solve the blocking problem; if a thing is blocking by design, that's it: you have to either starve others, or grow the pool size.

If you think it can be done differently, please speak loud :)

√iktor Ҡlang

Jun 12, 2012, 4:32:14 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 10:23 AM, Sergio Bossa <sergio...@gmail.com> wrote:
√iktor Ҡlang <viktor...@gmail.com> wrote:


But that's Scala! :)

Btw, I don't honestly see how that could solve the blocking problem; if a thing is blocking by design, that's it: you have to either starve others, or grow the pool size.

Well, there are 2 issues I need solved:

1) I need to be able to create potentially thousands, even tens of thousands, of logical, durable file-based queues.
2) And it is not allowed to create its own threads. Preferably it'd use an ExecutionContext, but an Executor would do as well.

What I can do is provide a scheduler and a construct for executing chunks of code; if I need gigaperformance, I'll give it its own scheduler and its own construct for executing chunks of code.

Is that doable?

Cheers,



If you think it can be done differently, please speak loud :)

Hiram Chirino

Jun 12, 2012, 7:54:49 AM
to jour...@googlegroups.com
BTW.. if you have lots of writers, you typically want to batch writes together if you're going to be doing disk syncs, because the disk sync is the most expensive part of storing data reliably.  Creating a journal per mailbox might not be the best idea.  Using 1 journal per disk spindle is typically the most performant way to go hardware-wise.

--

Hiram Chirino

Software Fellow | FuseSource Corp.

chi...@fusesource.com | fusesource.com

skype: hiramchirino | twitter: @hiramchirino

blog: Hiram Chirino's Bit Mojo




√iktor Ҡlang

Jun 12, 2012, 8:22:41 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 1:54 PM, Hiram Chirino <hi...@hiramchirino.com> wrote:
BTW.. if you have lots of writers, you typically want to batch writes together if you're going to be doing disk syncs, because the disk sync is the most expensive part of storing data reliably.  Creating a journal per mailbox might not be the best idea.  Using 1 journal per disk spindle is typically the most performant way to go hardware-wise.

That's why I said "logical" queues

I don't care how the flush is implemented, I just want to keep thread creation down and have the queues persistent.

Cheers,

Sergio Bossa

Jun 12, 2012, 8:23:44 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 12:54 PM, Hiram Chirino <hi...@hiramchirino.com> wrote:
 
BTW.. if you have lots of writers, you typically want to batch writes together if you're going to be doing disk syncs, because the disk sync is the most expensive part of storing data reliably.  Creating a journal per mailbox might not be the best idea.  Using 1 journal per disk spindle is typically the most performant way to go hardware-wise.

 That's a very good point, even if some actors/writers may need different batching policies.

Btw, when using one journal per actor/writer, readers should probably find a way to filter for values written by a given writer they're interested in, which means either:
1) Unmarshalling the byte array and looking into it for filtering.
2) Providing a kind of "message group" header that readers can read for filtering without having to unmarshal the whole body.

Any thoughts?
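Option 2 could be as simple as a fixed-width header in front of the opaque payload. A sketch (the record format here is invented, not Journal.IO's):

```java
import java.nio.ByteBuffer;

// Sketch of option 2: a fixed-width "message group" header (here a 4-byte
// writer id) in front of the opaque payload, so readers can filter on the
// header without unmarshalling the body.
public final class GroupedRecord {
    public static byte[] encode(int writerId, byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length)
                .putInt(writerId) // group header
                .put(payload)     // opaque body, untouched by filtering
                .array();
    }

    public static int groupOf(byte[] record) {
        return ByteBuffer.wrap(record).getInt(); // reads only the 4-byte header
    }

    public static byte[] payloadOf(byte[] record) {
        byte[] body = new byte[record.length - 4];
        ByteBuffer.wrap(record, 4, body.length).get(body);
        return body;
    }
}
```

A reader scanning the journal would call groupOf on each record and skip everything that doesn't match its writer, deserializing only its own records.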

Sergio Bossa

Jun 12, 2012, 8:26:36 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 1:22 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

I don't care how the flush is implemented, I just want to keep thread creation down and have the queues persistent.

So you'd be okay with using the same journal for different actors, and your only problem is the way its threads are created ... right?

√iktor Ҡlang

Jun 12, 2012, 8:28:22 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 2:26 PM, Sergio Bossa <sergio...@gmail.com> wrote:
On Tue, Jun 12, 2012 at 1:22 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

I don't care how the flush is implemented, I just want to keep thread creation down and have the queues persistent.

So you'd be okay with using the same journal for different actors, and your only problem is the way its threads are created ... right?

Using the same journal for multiple queues works in the non-migratory setup, but if Actors move around and share the same FS, then you'd end up having multiple VMs interacting with the same file, which I assume to be problematic.

Cheers,

Hiram Chirino

Jun 12, 2012, 9:40:23 AM
to jour...@googlegroups.com
The mailbox requirements are then starting to sound similar to the requirements of a queue in a messaging server.  In the Apollo messaging server I ended up having all the queues write to a single Journal for the purpose of having a fast-syncing write-ahead log whose updates get applied to a LevelDB-based index (non-synced).  The index is used to avoid having to scan the whole journal to get the entries of a specific queue.
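In outline, that layout looks like this (a toy in-memory model, not Apollo's code; a plain map stands in for the LevelDB index):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the Apollo-style layout: every queue appends to one shared
// write-ahead log (sequential, one sync point), while a per-queue index of
// log positions makes reads targeted instead of full journal scans.
public class SharedJournalSketch {
    private final List<byte[]> log = new ArrayList<>();               // shared journal
    private final Map<String, List<Integer>> index = new HashMap<>(); // queue -> positions

    public synchronized void append(String queue, byte[] entry) {
        log.add(entry); // sequential append, cheap to batch and sync
        index.computeIfAbsent(queue, q -> new ArrayList<>()).add(log.size() - 1);
    }

    public synchronized List<byte[]> readQueue(String queue) {
        List<byte[]> out = new ArrayList<>();
        for (int pos : index.getOrDefault(queue, List.of())) out.add(log.get(pos));
        return out;
    }
}
```

The design choice is the split in durability: the log is synced (it is the source of truth), while the index is not, since it can always be rebuilt by replaying the log.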

√iktor Ҡlang

Jun 12, 2012, 9:44:03 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 3:40 PM, Hiram Chirino <hi...@hiramchirino.com> wrote:
The mailbox requirements are then starting to sound similar to the requirements of a queue in a messaging server.  In the Apollo messaging server I ended up having all the queues write to a single Journal for the purpose of having a fast-syncing write-ahead log whose updates get applied to a LevelDB-based index (non-synced).  The index is used to avoid having to scan the whole journal to get the entries of a specific queue.

The point of having file-based persistent mailboxes is to avoid an extra infrastructure piece to set up, monitor and maintain.
Our Kestrel-based queue works flawlessly, but alas…

Cheers,

Sergio Bossa

Jun 13, 2012, 4:38:10 AM
to jour...@googlegroups.com
On Tue, Jun 12, 2012 at 1:28 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> if Actors move around and share the same FS, then you'd end up having
> multiple VMs interacting with the same file, which I assume to be
> problematic.

That's problematic for sure, but if your actors live in different VMs
then you cannot use the same thread pool either, because they live in
different processes.

Btw, I think your use case is pretty clear to me, that is:
1) You want one Journal/File(s) per actor, so that any actor can
interact with it and move between VMs without breaking others.
2) You want all Journals opened on the same VM to share the same thread pool.
Obviously this means you cannot batch messages from multiple actors
together in order to avoid multiple syncs.

If that's correct, I'll think about a not-so-invasive solution for
that and get back here with thoughts.

√iktor Ҡlang

Jun 13, 2012, 4:52:47 AM
to jour...@googlegroups.com
On Wed, Jun 13, 2012 at 10:38 AM, Sergio Bossa <sergio...@gmail.com> wrote:
On Tue, Jun 12, 2012 at 1:28 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> if Actors move around and share the same FS, then you'd end up having
> multiple VMs interacting with the same file, which I assume to be
> problematic.

That's problematic for sure, but if your actors live in different VMs
then you cannot use the same thread pool either, because they live in
different processes.

That wouldn't be a problem since they get threads on the VM itself, but if I move over half of the actors, and they are using "lanes" of the same file, there'd be a problem.
 

Btw, I think your use case is pretty clear to me, that is:
1) You want one Journal/File(s) per actor, so that any actor can
interact with it and move between VMs without breaking others.

yup
 
2) You want all Journals opened on the same VM to share the same thread pool.
Obviously this means you cannot batch messages from multiple actors
together in order to avoid multiple syncs.


There is no guaranteed delivery, only at-most-once semantics, so if you can utilize that for batching, by all means do!

 
If that's correct, I'll think about a not-so-invasive solution for
that and get back here with thoughts.

Cheers,

Sergio Bossa

Jun 16, 2012, 8:55:59 AM
to jour...@googlegroups.com
Hi Viktor,

I rewrote the thread management part by allowing for externally set
writer and disposer threads; you can find the new version in this
branch:
https://github.com/sbtourist/Journal.IO/tree/wip-executors

In order to share executors between journals, just call the setWriter
and setDisposer journal methods.

Any feedback would be highly appreciated :)
Cheers,

Sergio B.

√iktor Ҡlang

Jun 17, 2012, 11:46:07 AM
to jour...@googlegroups.com
Hey Sergio,

I'll have a look in the morning! :-)

cheers,

Martin Krasser

Jun 18, 2012, 3:32:28 AM
to jour...@googlegroups.com
Sergio and Viktor,

not sure whether my findings belong to this or the Akka mailing list but I'll start here.

https://gist.github.com/2947296#file_logging.scala is an experiment I started with the latest code from branch https://github.com/sbtourist/Journal.IO/tree/wip-executors :

The example uses the Akka default dispatcher (an Executor) as Journal.IO writer. However, this causes the example application to hang after the 10th write with a busy processor. This is independent of the dispatcher type. Using an executor as Journal.IO writer that is different from the Akka default dispatcher works fine.

This seems to be related to Akka rather than Journal.IO. Maybe it's immediately clear to Viktor why this doesn't work. If not I'll try to find some time later to investigate.

Cheers,
Martin

On 17.06.12 17:46, √iktor Ҡlang wrote:

Sergio Bossa

Jun 18, 2012, 3:38:11 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 8:32 AM, Martin Krasser <kras...@googlemail.com> wrote:

> The example uses the Akka default dispatcher (an Executor) as Journal.IO
> writer. However this causes the example application to hang after the 10th
> write with a busy processor.

Could you tell which Journal.IO line(s) it hangs on?

Martin Krasser

Jun 18, 2012, 3:44:12 AM
to jour...@googlegroups.com

On 18.06.12 09:38, Sergio Bossa wrote:
> On Mon, Jun 18, 2012 at 8:32 AM, Martin Krasser <kras...@googlemail.com> wrote:
>
>> The example uses the Akka default dispatcher (an Executor) as Journal.IO
>> writer. However this causes the example application to hang after the 10th
>> write with a busy processor.
> Could you tell which Journal.IO line(s) it hangs on?
>

As I said, this is likely something in an Akka dispatcher; it has
nothing to do with Journal.IO (at least, I can say that it is *not*
the journal.write that hangs). I'll let you know once I find out more.

√iktor Ҡlang

Jun 18, 2012, 4:12:21 AM
to jour...@googlegroups.com
Please post a thread dump

Cheers,

Sergio Bossa

Jun 18, 2012, 4:21:24 AM
to jour...@googlegroups.com, jour...@googlegroups.com
Exactly. What I mean is: if you issue a thread dump, are there any Journal calls?

Sergio Bossa
Sent by iPhone

Martin Krasser

Jun 18, 2012, 7:10:26 AM
to jour...@googlegroups.com
Here it is: http://pastie.org/4107855

On 18.06.12 10:21, Sergio Bossa wrote:

Martin Krasser

Jun 18, 2012, 7:20:35 AM
to jour...@googlegroups.com
The reason for the busy loop is this condition:

https://github.com/sbtourist/Journal.IO/blob/wip-executors/src/main/java/journal/io/api/DataFileAppender.java#L263

removing "!shutdown" makes the example run as expected ...

On 18.06.12 13:10, Martin Krasser wrote:

√iktor Ҡlang

Jun 18, 2012, 7:45:13 AM
to jour...@googlegroups.com
Sooo, Akka is acquitted? ;-)

Martin Krasser

Jun 18, 2012, 7:53:22 AM
to jour...@googlegroups.com

On 18.06.12 13:45, √iktor Ҡlang wrote:
Sooo, Akka is acquitted? ;-)

I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)

√iktor Ҡlang

Jun 18, 2012, 7:56:03 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 1:53 PM, Martin Krasser <kras...@googlemail.com> wrote:

On 18.06.12 13:45, √iktor Ҡlang wrote:
Sooo, Akka is acquitted? ;-)

I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)

I've spent years polishing that ;-)

Martin Krasser

Jun 18, 2012, 8:05:35 AM
to jour...@googlegroups.com

On 18.06.12 13:56, √iktor Ҡlang wrote:


On Mon, Jun 18, 2012 at 1:53 PM, Martin Krasser <kras...@googlemail.com> wrote:

On 18.06.12 13:45, √iktor Ҡlang wrote:
Sooo, Akka is acquitted? ;-)

I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)

I've spent years polishing that ;-)

I know. I just should have spent more time investigating the issue before blah-blah-ing on a mailing list. I was a victim of my limited time :/ - no excuse.

√iktor Ҡlang

Jun 18, 2012, 8:06:01 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 2:05 PM, Martin Krasser <kras...@googlemail.com> wrote:

On 18.06.12 13:56, √iktor Ҡlang wrote:


On Mon, Jun 18, 2012 at 1:53 PM, Martin Krasser <kras...@googlemail.com> wrote:

On 18.06.12 13:45, √iktor Ҡlang wrote:
Sooo, Akka is acquitted? ;-)

I'm wearing the "shame on me - I blamed the Akka dispatcher" t-shirt :)

I've spent years polishing that ;-)

I know. I just should have spent more time investigating the issue before blah-blah-ing on a mailing list. I was a victim of my limited time :/ - no excuse.

Don't worry about it, happens to the best of us.

Sergio Bossa

Jun 18, 2012, 8:09:41 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 12:20 PM, Martin Krasser
<kras...@googlemail.com> wrote:

> The reason for the busy loop is this condition:
> https://github.com/sbtourist/Journal.IO/blob/wip-executors/src/main/java/journal/io/api/DataFileAppender.java#L263
> removing "!shutdown" makes the example run as expected ...

I suspected the reason was indeed inside Journal.IO; you know, Viktor
is always right ;)

So, looking at the code, that "!shutdown" condition isn't needed
anymore (legacy from the previous blocking version, where a null batch
was sent to unblock after shutdown) and is actually dangerous, as it
leads threads that never get a batch from the queue to loop
infinitely, so I'll remove it and push back.

I find it amusing that we discovered this with Akka dispatchers but
not with executors (and I'm still unable to replicate this in the test
cases).
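Generically, the fix has this shape (invented names; this is not the actual DataFileAppender code, just the pattern described above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Generic shape of the fix: the buggy loop kept iterating while !shutdown
// even when no batch ever arrived, spinning at full speed on a (possibly
// shared) executor thread. The fixed shape drains what is there and returns,
// freeing the thread; the task is simply resubmitted when new work arrives.
public class DrainLoopSketch {
    final Queue<String> batches = new ConcurrentLinkedQueue<>();

    public List<String> processAvailable() {
        List<String> done = new ArrayList<>();
        for (String batch; (batch = batches.poll()) != null; ) {
            done.add(batch); // the real code would write and sync here
        }
        return done; // exit on empty queue instead of spinning until shutdown
    }
}
```

On a dedicated thread the spin only wastes CPU; on a shared executor it can also starve every other task, which would explain why it surfaced with a shared Akka dispatcher.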

√iktor Ҡlang

Jun 18, 2012, 8:12:32 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 2:09 PM, Sergio Bossa <sergio...@gmail.com> wrote:
On Mon, Jun 18, 2012 at 12:20 PM, Martin Krasser
<kras...@googlemail.com> wrote:

> The reason for the busy loop is this condition:
> https://github.com/sbtourist/Journal.IO/blob/wip-executors/src/main/java/journal/io/api/DataFileAppender.java#L263
> removing "!shutdown" makes the example run as expected ...

I suspected the reason was indeed inside Journal.IO; you know, Viktor
is always right ;)

Lol, don't say that! :D
 

So, looking at the code, that "!shutdown" condition isn't needed
anymore (legacy from the previous blocking version, where a null batch
was sent to unblock after shutdown) and is actually dangerous, as it
leads threads that never get a batch from the queue to loop
infinitely, so I'll remove it and push back.

I find it amusing that we discovered this with Akka dispatchers but
not with executors (and I'm still unable to replicate this in the test
cases).

Using newCachedThreadPool?

Sergio Bossa

Jun 18, 2012, 8:15:56 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 1:12 PM, √iktor Ҡlang <viktor...@gmail.com> wrote:

> Lol, don't say that! :D

Just joking; never making any mistakes would mean never learning anything ;)

> Using newCachedThreadPool?

Nope, I'm using a fixed thread pool in my tests, but I don't know
about Martin's tests...

Sergio Bossa

Jun 18, 2012, 8:19:06 AM
to jour...@googlegroups.com
Martin,

I just pushed some changes to the wip-executors branch: could you give it a try?

Martin Krasser

Jun 18, 2012, 8:24:56 AM
to jour...@googlegroups.com

On 18.06.12 14:09, Sergio Bossa wrote:
> (and I'm still unable to replicate this in the test
> cases).

Try to run the test code within the same executor as used for the
journal.io writer (and not from another thread, e.g. the main thread) ...

Sergio Bossa

Jun 18, 2012, 8:29:00 AM
to jour...@googlegroups.com
On Mon, Jun 18, 2012 at 1:24 PM, Martin Krasser <kras...@googlemail.com> wrote:

> Try to run the test code within the same executor as used for the journal.io
> writer (and not from another thread, e.g. the main thread) ...

So you managed to reproduce it?
It would be great if you could pull the latest code, add the
previously failing test, verify it doesn't fail anymore and issue a
pull request :)

Martin Krasser

Jun 18, 2012, 8:32:48 AM
to jour...@googlegroups.com

On 18.06.12 14:19, Sergio Bossa wrote:
> Martin,
>
> I just pushed some changes to the wip-executors branch: could you give it a try?

great, works.

Martin Krasser

Jun 18, 2012, 10:03:07 AM
to jour...@googlegroups.com
Here's the pull request with the (now succeeding) test:
https://github.com/sbtourist/Journal.IO/pull/19


On 18.06.12 14:29, Sergio Bossa wrote:
> On Mon, Jun 18, 2012 at 1:24 PM, Martin Krasser <kras...@googlemail.com> wrote:
>
>> Try to run the test code within the same executor as used for the journal.io
>> writer (and not from another thread, e.g. the main thread) ...
> So you managed to reproduce it?
> It would be great if you could pull the latest code, add the
> previously failing test, verify it doesn't fail anymore and issue a
> pull request :)
>
