Reports Only Run - java.lang.OutOfMemoryError: Requested array size exceeds VM limit


Peter Luttrell

Aug 29, 2016, 3:19:31 PM
to Gatling User Group
I recently ran the same simulation on 6 boxes for about 24 hours at a total rate of about 2,000 requests per second. This generated a 4 GB simulation.log file on each box. I would like to merge these into a single report, so I'm running Gatling in reports-only mode (-ro {sim-folder}); however, each time I get the following error. Is there any workaround for this, or are we out of luck?

Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:569)
        at java.lang.StringBuffer.append(StringBuffer.java:369)
        at java.io.BufferedReader.readLine(BufferedReader.java:370)
        at java.io.BufferedReader.readLine(BufferedReader.java:389)
        at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
        at scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:211)
        at scala.collection.Iterator$ConcatIterator.hasNext(Iterator.scala:192)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.Iterator$ConcatIterator.foreach(Iterator.scala:168)
        at io.gatling.charts.stats.LogFileReader.firstPass(LogFileReader.scala:86)
        at io.gatling.charts.stats.LogFileReader.io$gatling$charts$stats$LogFileReader$$$anonfun$8(LogFileReader.scala:125)
        at io.gatling.charts.stats.LogFileReader$lambda$$x1$1.apply(LogFileReader.scala:125)
        at io.gatling.charts.stats.LogFileReader$lambda$$x1$1.apply(LogFileReader.scala:125)
        at io.gatling.charts.stats.LogFileReader.parseInputFiles(LogFileReader.scala:63)
        at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:125)
        at io.gatling.app.LogFileProcessor.initLogFileReader(RunResultProcessor.scala:55)
        at io.gatling.app.LogFileProcessor.processRunResult(RunResultProcessor.scala:37)
        at io.gatling.app.Gatling.start(Gatling.scala:66)
        at io.gatling.app.Gatling$.start(Gatling.scala:57)
        at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
        at io.gatling.app.Gatling$.main(Gatling.scala:43)
        at io.gatling.app.Gatling.main(Gatling.scala)
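
For what it's worth, the trace shows the error happening inside BufferedReader.readLine while it grows the buffer for a single line, which usually points to one of the copied simulation.log files containing an abnormally long (possibly truncated or corrupted) line rather than the heap simply being too small. Below is a minimal Scala sketch for checking that before re-running the reports; the object name and the folder argument are illustrative, not part of Gatling.

import java.io.{BufferedInputStream, File, FileInputStream}

object LongestLogLine {

  // Length in bytes of the longest '\n'-terminated line, streamed byte by byte
  // so the check itself cannot run out of memory on a huge or corrupted file.
  def longestLine(file: File): Long = {
    val in = new BufferedInputStream(new FileInputStream(file), 1 << 20)
    try {
      var longest = 0L
      var current = 0L
      var b = in.read()
      while (b != -1) {
        if (b == '\n') { longest = math.max(longest, current); current = 0L }
        else current += 1
        b = in.read()
      }
      math.max(longest, current)
    } finally in.close()
  }

  def main(args: Array[String]): Unit = {
    // First argument: the results folder passed to -ro (path is illustrative).
    val folder = new File(args.headOption.getOrElse("."))
    Option(folder.listFiles()).getOrElse(Array.empty[File])
      .filter(_.getName.endsWith(".log"))
      .foreach(f => println(s"${f.getName}: longest line = ${longestLine(f)} bytes"))
  }
}

If one of the six files reports a line hundreds of megabytes long, that file is the one to inspect or regenerate.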



Stéphane LANDELLE

Aug 29, 2016, 4:09:41 PM
to gat...@googlegroups.com
4 GB * 6 boxes = 24 GB

The standard Gatling reporting engine will have a very hard time parsing that amount of data.
The best workaround is FrontLine, which supports clustering out of the box and whose reporting engine is completely different and optimized for such use cases.
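
As a rough way to check that arithmetic against the heap a reports-only run will actually get, here is a small Scala sketch; the object name, the folder argument and the 3x headroom factor are assumptions for illustration, not figures from Gatling.

import java.io.File

object ReportSizeCheck {

  def main(args: Array[String]): Unit = {
    // First argument: the results folder passed to -ro (path is illustrative).
    val folder = new File(args.headOption.getOrElse("."))
    val totalBytes = Option(folder.listFiles()).getOrElse(Array.empty[File])
      .filter(_.getName.endsWith(".log"))
      .map(_.length())
      .sum
    val maxHeap = Runtime.getRuntime.maxMemory()
    println(f"simulation logs: ${totalBytes / 1e9}%.2f GB, max heap: ${maxHeap / 1e9}%.2f GB")
    // The 3x factor is only an assumption, not a documented Gatling figure.
    if (totalBytes * 3 > maxHeap)
      println("The standard reporting engine is likely to run out of memory on this data set.")
  }
}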

Regards,

Stéphane Landelle
GatlingCorp CEO


