OutOfMemoryError due to too much logging

Adam

Dec 13, 2015, 11:15:36 AM
to Akka User List
Hi,

I sometimes run into an issue where I want to switch the logs to DEBUG, but because of the sheer number of messages the JVM crashes with an OutOfMemoryError, which turns out to be caused by the accumulation of logging events.
Most logging frameworks can drop messages beyond a certain number, and so can Akka's bounded mailboxes.

Is there a way to achieve this with Akka logging as well?
The behavior of the non-blocking bounded mailbox is what I have in mind (logging frameworks usually emit some kind of warning to tell you that messages are being dropped, but I can manage without that for now).

Alternatively, I could just use my logger directly.
It's asynchronous anyway. The only issue with that is that I find the extra MDC data, the actor path, very useful when looking at piles of DEBUG logs.
It makes zeroing in on a single transaction very simple.

Johan Andrén

Dec 14, 2015, 3:53:13 AM
to Akka User List
Hi Adam,

I think what you are describing is covered in the logging docs: 

It could probably be clearer, but it does mention that the system misbehaves under a high logging load, and that you should configure an async appender and drop DEBUG and INFO messages when the load gets too high.
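
For reference, the Akka side of that setup looks roughly like this (a minimal sketch from memory, for the Akka versions current at the time; adjust to your SLF4J backend):

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  loglevel = "DEBUG"
}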

--
Johan Andrén
Typesafe - Reactive apps on the JVM
Twitter: @apnylle

Adam

Dec 14, 2015, 4:27:56 AM
to Akka User List
Hi,

I've read it in the past, but I was hoping it was no longer accurate (I'm already using an asynchronous logger as the backend).
It basically says no: there is no way to achieve what I want with Akka logging, because while an asynchronous backend may help, the messages still go through some sort of unbounded queue.
So with a fast producer (lots of logging requests) and a slow consumer (the logging backend reading from that queue), you will still get an OutOfMemoryError.

OK, so I guess I'll just use those loggers directly and I'll have to populate the MDC on my own.
Probably not that much work anyway.
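
As a rough sketch of what that could look like (the DirectLogging trait name is made up; it assumes the SLF4J API and reuses the "akkaSource" MDC key that the akka-slf4j bridge uses for the actor path, so existing log patterns keep working):

import org.slf4j.{Logger, LoggerFactory, MDC}
import akka.actor.Actor

trait DirectLogging { this: Actor =>
  // plain SLF4J logger, bypassing Akka's logging bus entirely
  protected val log: Logger = LoggerFactory.getLogger(getClass)

  // put the actor path into the MDC for the duration of the block,
  // e.g. withActorMdc { log.debug("handling request {}", requestId) }
  protected def withActorMdc[T](body: => T): T = {
    MDC.put("akkaSource", self.path.toString)
    try body
    finally MDC.remove("akkaSource")
  }
}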

Thanks!

Maxim Valyanskiy

Dec 14, 2015, 4:31:53 AM
to Akka User List
Hello!

I saw the same problem when the default dispatcher was overloaded by a lot of small tasks created via map/flatMap/etc. operations on Futures. I fixed it by moving the heavy batch processing and the Future operations to a dedicated dispatcher.
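
Something along these lines, for reference (the dispatcher name and sizes are made up for illustration):

application.conf:
batch-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 8
  }
  throughput = 100
}

and then use it as the ExecutionContext for the heavy Future work instead of the default dispatcher:

// assumes an ActorSystem named `system` is in scope
implicit val batchEc: scala.concurrent.ExecutionContext =
  system.dispatchers.lookup("batch-dispatcher")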

Maxim

On Sunday, December 13, 2015 at 19:15:36 UTC+3, Adam wrote:

אדם חונן

Dec 14, 2015, 4:35:27 AM
to akka...@googlegroups.com
That sounds related, but the problem in the case of logging is that (as far as I know) it runs on the default dispatcher and cannot be reconfigured.
And anyway, I'm thinking of a case where, regardless of the dispatcher, there are just too many messages and we cannot possibly manage to write them all.


Will Sargent

Dec 14, 2015, 12:28:50 PM
to akka...@googlegroups.com
You can mitigate this by using akka-slf4j with Logback and an AsyncAppender in your Logback configuration. It is lossy for debug events: once the queue fills up to 80%, it will start throwing away events.
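
For reference, a Logback configuration along those lines might look roughly like this (appender names and the file path are placeholders; by default AsyncAppender discards TRACE/DEBUG/INFO events once the queue is about 80% full):

logback.xml:
<configuration>
   <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>some_path/some_app.log</file>
      <encoder>
         <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} %X{akkaSource} - %msg%n</pattern>
      </encoder>
   </appender>
   <!-- queues events and writes them from a background thread; lossy for DEBUG/INFO under load -->
   <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
      <queueSize>8192</queueSize>
      <appender-ref ref="FILE"/>
   </appender>
   <root level="DEBUG">
      <appender-ref ref="ASYNC"/>
   </root>
</configuration>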



Guido Medina

Dec 18, 2015, 2:34:50 PM
to Akka User List
If you want super fast asynchronous logging, then Log4j 2 is better than Logback because its async loggers are built on the LMAX Disruptor.

Benchmark: http://logging.apache.org/log4j/2.x/manual/async.html

Dependencies you will need for Akka & Log4j2:
      <dependency>
         <groupId>de.heikoseeberger</groupId>
         <artifactId>akka-log4j_2.11</artifactId>
         <version>1.0.3</version>
      </dependency>
      <dependency>
         <groupId>org.apache.logging.log4j</groupId>
         <artifactId>log4j-core</artifactId>
         <version>2.5</version>
      </dependency>
      <dependency>
         <groupId>com.lmax</groupId>
         <artifactId>disruptor</artifactId>
         <version>3.3.2</version>
      </dependency>

Akka application.conf:
akka {
  loggers = ["de.heikoseeberger.akkalog4j.Log4jLogger"]
  logging-filter = "de.heikoseeberger.akkalog4j.Log4jLoggingFilter"
  ...
  ...
}

Console asynchronous log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
   <Appenders>
      <Console name="Console" target="SYSTEM_OUT">
         <PatternLayout pattern="%d{HH:mm:ss.SSS} %level %c{1.} - %msg%n"/>
      </Console>
   </Appenders>
   <Loggers>
      <AsyncLogger name="quickfixj.msg" level="WARN" additivity="false" includeLocation="true">
         <AppenderRef ref="Console"/>
      </AsyncLogger>
      <AsyncLogger name="org.eclipse.persistence" level="WARN" additivity="false" includeLocation="true">
         <AppenderRef ref="Console"/>
      </AsyncLogger>
      <AsyncLogger name="io.vertx.core.net.impl.ConnectionBase" level="OFF" additivity="false" includeLocation="true">
         <AppenderRef ref="Console"/>
      </AsyncLogger>
      <AsyncRoot level="INFO" includeLocation="true">
         <AppenderRef ref="Console"/>
      </AsyncRoot>
   </Loggers>
</Configuration>

Asynchronous daily rolling file log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
   <Appenders>
      <RollingRandomAccessFile name="file" immediateFlush="false"
                   fileName="some_path/some_app.log"
                   filePattern="some_path/some_app.%d{yyyy-MM-dd}.log">
         <PatternLayout>
            <pattern>%d{HH:mm:ss.SSS} %level %c{1.} - %msg%n</pattern>
         </PatternLayout>
         <Policies>
            <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
         </Policies>
      </RollingRandomAccessFile>
   </Appenders>
   <Loggers>
      <AsyncLogger name="quickfixj.msg" level="WARN" additivity="false" includeLocation="true">
         <AppenderRef ref="file"/>
      </AsyncLogger>
      <AsyncLogger name="org.eclipse.persistence" level="WARN" additivity="false" includeLocation="true">
         <AppenderRef ref="file"/>
      </AsyncLogger>
      <AsyncLogger name="io.vertx.core.net.impl.ConnectionBase" level="OFF" additivity="false" includeLocation="true">
         <AppenderRef ref="file"/>
      </AsyncLogger>
      <AsyncRoot level="INFO" includeLocation="true">
         <AppenderRef ref="file"/>
      </AsyncRoot>
   </Loggers>
</Configuration>

Hope that helps,

Guido.

אדם חונן

Dec 20, 2015, 11:04:26 AM
to akka...@googlegroups.com
Thanks a lot for the detailed answer.
I've been using log4j 2 but with regular loggers and asynchronous appenders (which for some reason are by default not lossy - they block instead).
I'll definitely give this a try.


Guido Medina

Dec 20, 2015, 11:33:03 AM
to Akka User List
It blocks when the RingBuffer is full, but you can always increase its size; combined with the buffering of some appenders, you might never hit the point where it blocks. Try first with "STDOUT" aka "Console" and see what happens, then try the RollingRandomAccessFile appender with immediate flush disabled, as in my previous post.

And in production it is unlikely that you will block, since there are far fewer debug messages, right?

אדם חונן

Dec 21, 2015, 12:51:06 AM
to akka...@googlegroups.com

True.
I know that if you opt for the async appender you can set blocking="false" on it, although it seems pointless to use it together with AsyncLogger (if I understand the docs correctly).
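
For reference, such a lossy wrapper would look something like this in log4j2.xml (a minimal sketch; the "asyncFile" name is made up, and it assumes the "file" appender from the earlier post):

<Async name="asyncFile" blocking="false" bufferSize="1024">
   <AppenderRef ref="file"/>
</Async>

and the loggers/root would then reference "asyncFile" instead of "file".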

Guido Medina

Dec 21, 2015, 5:22:46 AM
to Akka User List
My point is that you probably won't have to touch the "blocking" setting or change the ring buffer size, because in production you usually don't debug as much; a development environment is usually considerably slower than production.

Like many of us, you probably have an IDE, a DB, desktop applications, etc. running in your development environment, so resources are potentially shared far more heavily.

Maybe you should try a test environment close to your production setup and measure there: time it with and without debug using "RollingRandomAccessFile". Measure, don't guess ;-)

אדם חונן

Dec 21, 2015, 7:11:23 AM
to akka...@googlegroups.com
Yeah, I got that.
All true, and indeed, we got this when running integration tests where logs are in DEBUG.
So it shouldn't really happen in production.
Having said that, I still can't rely on what should happen.
What if a bug is introduced into the system that causes WARN messages to be written constantly?
Suddenly, instead of just suffering from the extra CPU and I/O, which can be sustained in many cases, my entire system could crash.



Guido Medina

Dec 21, 2015, 7:30:22 AM
to Akka User List
The I/O will be constant because of the buffering of RandomAccessFile; remember, it buffers and then writes. Your worst-case scenario is that it will block from time to time, but at least you won't get an OOM due to too many messages queued for an actor.

And you can always adjust your ring buffer size to cover your worst-case scenario; by default it is 256, so try 512 or 1024 and see what happens :D
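
If I remember correctly, for the AsyncLogger/AsyncRoot setup from the earlier post the ring buffer size is controlled via a system property, something along these lines (the exact property name may vary by Log4j 2 version, so check the async logging manual):

-DAsyncLoggerConfig.RingBufferSize=1024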

Adam

Dec 29, 2015, 6:14:22 AM
to Akka User List
Hi,

Just wanted to say that I've tried adding blocking="false" to all appenders (log4j2) and that seems to have resolved the OOM issues.
I haven't tried the asynchronous loggers configuration.
I find the asynchronous appender to be sufficient in our case.