/usr/bin/env JVM_ARGS="-Xms2g -Xmx2g" jmeter -t ./cli.jmx -n -l /tmp/jmeter-results.csv -JTEST_RESULTS_FILE=/tmp/jmeter-results.csv
2014/08/14 02:10:12 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2014/08/14 02:10:12 INFO - kg.apc.jmeter.PluginsCMDWorker: Using JMeterPluginsCMD v. 1.1.3
2014/08/14 02:10:12 WARN - kg.apc.jmeter.PluginsCMDWorker: JMeter env exists. No one should see this normally.
2014/08/14 02:10:12 INFO - jmeter.save.CSVSaveService: /tmp/jmeter-results.csv does not appear to have a valid header. Using default configuration.
2014/08/14 02:10:12 WARN - jmeter.save.CSVSaveService: Error parsing field 'timeStamp' at line 1 java.text.ParseException: Unparseable date: "1407970033347,147,Get timeseries intervals,200,OK,Browser client 1-1,text,true,605073,145"
2014/08/14 02:10:12 WARN - jmeter.reporters.ResultCollector: Problem reading JTL file: /tmp/jmeter-results.csv
2014/08/14 02:10:12 INFO - kg.apc.charting.GraphPanelChart: Saving PNG to /Users/marcus/Sites/mysite/benchmarks/results/ResponseTimesOverTime.png
2014/08/14 02:10:12 INFO - kg.apc.jmeter.listener.GraphsGeneratorListener: Successful generation of file results/ResponseTimesOverTime.png by plugin:ResponseTimesOverTime
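Reading those warnings, it looks like the results file was written without a header row, so the reader falls back to the default configuration from jmeter.properties, and under that configuration it can't even split the row: the entire line lands in the timeStamp field, which is why the "Unparseable date" message quotes the whole record. I'm wondering whether forcing the save-service settings at run time would make the file self-describing; something like the following is what I had in mind (the property names are from the stock jmeter.properties, but treat the exact combination as an untested guess on my part):

# untested: re-run with explicit save-service overrides so the CSV gets
# a header row, millisecond timestamps, and a comma delimiter
/usr/bin/env JVM_ARGS="-Xms2g -Xmx2g" jmeter -t ./cli.jmx -n \
    -l /tmp/jmeter-results.csv \
    -JTEST_RESULTS_FILE=/tmp/jmeter-results.csv \
    -Jjmeter.save.saveservice.print_field_names=true \
    -Jjmeter.save.saveservice.timestamp_format=ms \
    -Jjmeter.save.saveservice.default_delimiter=,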
Clearly there's something wrong with the CSV file.
If I use the standard graphs (not the post-processed ones) they work fine, though I find many of them hard to read and even harder to explain: for example, how can the average response time be longer than the total duration of the test, and why does latency fall as the thread count climbs past 250?
Any ideas what I'm missing?
All the timestamps across the bottom of the affected charts are identical, and selecting relative mode for times appears to make no difference. Curiously, not all of the graphs do this: Latencies Over Time, Response Times Over Time, Thread State Over Time, Throughput vs Threads, and Times vs Threads all render correctly from the same data, which is what makes me think it's a bug in the plugin rather than a problem with the data.
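For what it's worth, one way I can think of to separate the plugin from the test run itself would be to regenerate a single chart straight from the existing CSV with the plugins' standalone runner. The flags below are as I understand them from the JMeterPluginsCMD documentation, so consider this a sketch rather than something I've verified:

# untested: render one chart directly from the saved results file,
# bypassing the GraphsGeneratorListener entirely
JMeterPluginsCMD.sh --generate-png /tmp/ResponseTimesOverTime.png \
    --input-jtl /tmp/jmeter-results.csv \
    --plugin-type ResponseTimesOverTime \
    --width 800 --height 600

If that reproduces the identical-timestamps problem outside the listener, it would point squarely at the plugin; if the chart comes out clean, the problem is presumably in how the listener reads the file.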