Hi,
I have encountered an OutOfMemoryError while developing a Liftweb app.
It seems to be related to CometActor (or scala.actors.Actor).
At the time of the OOME, I found 1,000,000 instances of
scala.actors.FJTaskRunner$VolatileTaskRef
in the heap by using VisualVM.
I have tried to narrow down the problem, and found that the
following simple clock app has a similar problem:
http://gist.github.com/187924
If you are interested in this problem, please try it:
$ git clone git://gist.github.com/187924.git gist-187924
$ cd gist-187924
$ mvn jetty:run
Then, visit http://127.0.0.1:8080/
I will attach a screenshot of VisualVM, which displays the
contents of the heap after 2 hours.
In this shot, you will find 490,000 FJTaskRunner$VolatileTaskRef
instances in the heap. While monitoring with VisualVM, I found
that the number of threads increased monotonically,
even with access from only a single browser.
Does scala.actors.Actor still have a resource leak bug?
I'm using Liftweb 1.1-SNAPSHOT, Scala 2.7.5,
and JDK 1.6.0_16 on GNU/Linux.
Sincerely,
--
Atsuhiko Yamanaka
JCraft,Inc.
1-14-20 HONCHO AOBA-KU,
SENDAI, MIYAGI 980-0014 Japan.
Tel +81-22-723-2150
+1-415-578-3454
Skype callto://jcraft/
On Wed, Sep 16, 2009 at 10:33 PM, Erik Engbrecht
<erik.en...@gmail.com> wrote:
> The large number of VolatileTaskRefs is a consequence of your thread pool
> growth. Each worker thread maintains an array of VolatileTaskRef objects.
> The VolatileTaskRef objects are reused rather than allocated for each task,
> so they will not be GC'd as long as the worker thread is alive. You can
> tell that they are being properly cleared because, size-wise, if they weren't
> being cleared you would see what they point to dominate heap usage.
>
> I'd suggest setting the maximum thread pool size to something reasonable for
> the number of processors and available memory you have. If you don't, the
> scheduler will happily spawn up to 255 threads.
Thank you for your suggestion.
Do you mean the system properties
actors.corePoolSize
actors.maxPoolSize
actors.timeFreq
referred to in scala.actors.FJTaskScheduler2?
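For anyone following along, these properties can be passed to the JVM on the command line when launching the app; a minimal sketch for the `mvn jetty:run` setup above (the values here are purely illustrative, not recommendations):

```shell
# Pass the actor-scheduler tuning properties to the JVM that
# "mvn jetty:run" starts in-process, via MAVEN_OPTS.
MAVEN_OPTS="-Dactors.corePoolSize=4 -Dactors.maxPoolSize=50" mvn jetty:run
```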
On Wed, Sep 16, 2009 at 11:36 PM, Erik Engbrecht
<erik.en...@gmail.com> wrote:
> Yes, particularly maxPoolSize.
Thank you for the prompt reply.
I'll try a lower value, for example 50, for maxPoolSize before going to bed.
I'll look forward to a good result in the morning.
On Wed, Sep 16, 2009 at 11:50 PM, Atsuhiko Yamanaka
<atsuhiko...@gmail.com> wrote:
> I'll try a lower value, for example 50, for maxPoolSize before going to bed.
> I'll look forward to a good result in the morning.
I got a good result.
By setting maxPoolSize to 50, the OOME did not appear.
With that setting, 50 FJTaskRunner instances were allocated and
204,800 FJTaskRunner$VolatileTaskRef instances were in the heap.
I guess that 204,800 corresponds to 50*4096[1]. According to VisualVM, those
204,800 instances cost 2,457,600 (== 50*4096*12) bytes.
So, for the default maxPoolSize (255), we should be able to prevent this
kind of OOME by adding an additional 12,533,760 bytes of heap
for VolatileTaskRef.
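The arithmetic above can be checked with a few lines of Scala. Note that the per-worker array size (4096) and the ~12 bytes per instance are values inferred from the VisualVM counts in this thread, not constants read from the scala.actors source:

```scala
// Rough footprint of the per-worker VolatileTaskRef arrays.
// refsPerWorker and bytesPerRef are assumptions taken from the
// VisualVM numbers above, not from the FJTask implementation.
object VolatileTaskRefFootprint {
  val refsPerWorker = 4096 // observed: 204,800 refs / 50 workers
  val bytesPerRef = 12     // shallow size reported by VisualVM

  def footprint(maxPoolSize: Int): Long =
    maxPoolSize.toLong * refsPerWorker * bytesPerRef

  def main(args: Array[String]): Unit = {
    println(footprint(50))  // bytes with maxPoolSize=50
    println(footprint(255)) // bytes with the default maxPoolSize=255
  }
}
```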
As for changing the implementation of CometActor in the other thread,
I think the current implementation may be enough for Scala 2.7.x at least,
if we can share the above knowledge.
Anyway, thanks for your help. Now my Lift app is sustainable.
It is a desktop image sharing service and heavily depends on CometActor.
If you are interested in it, it has been experimentally running at
http://lift.jcraft.com/dstream/scala@tohoku2 .