TRA [20150105-11:24:14.075] util: RemoteParrot: parrot[localhost:9999] queue is over capacity
TRA [20150105-11:24:14.076] util: RemoteParrot: parrot[localhost:9999] queue is over capacity
TRA [20150105-11:24:14.076] util: RemoteParrot: parrot[localhost:9999] more over capacity warnings ...
TRA [20150105-11:24:14.078] util: RemoteParrot.sendRequest: parrot[localhost:9999] done sending requests of size=4 to the server
INF [20150105-11:24:14.078] feeder: FeedConsumer.sendRequest: wrote batch of size 4 to localhost:9999 rps=0.00000 depth=21.0000 status=Some(Running) lines=4
DEB [20150105-11:24:14.917] feeder: pollParrot: depth is 21.000000 for localhost:9999
[20150105-11:24:15.493] feeder: ParrotFeeder.shutdownAfter: shutting down due to duration timeout of 10.0 seconds
INF [20150105-11:24:15.494] feeder: Lines played: 128 failed: 0
TRA [20150105-11:24:15.494] feeder: ParrotFeeder: shutting down ...
DEB [20150105-11:24:15.494] util: ParrotClusterImpl: shutting down
DEB [20150105-11:24:15.924] feeder: pollParrot: depth is 21.000000 for localhost:9999
ERR [20150105-11:24:21.088] util: Error shutting down Parrot: com.twitter.util.TimeoutException
ERR [20150105-11:24:21.088] util: com.twitter.util.TimeoutException: 5.seconds
ERR [20150105-11:24:21.088] util: at com.twitter.util.Promise.ready(Promise.scala:498)
ERR [20150105-11:24:21.088] util: at com.twitter.util.Promise.result(Promise.scala:503)
ERR [20150105-11:24:21.088] util: at com.twitter.util.Promise$Chained.result(Promise.scala:182)
ERR [20150105-11:24:21.088] util: at com.twitter.util.Await$.result(Awaitable.scala:75)
ERR [20150105-11:24:21.088] util: at com.twitter.util.Future.get(Future.scala:648)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.util.RemoteParrot.waitFor(RemoteParrot.scala:160)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.util.RemoteParrot.shutdown(RemoteParrot.scala:120)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:323)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:321)
ERR [20150105-11:24:21.088] util: at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.util.ParrotClusterImpl.shutdown(ParrotCluster.scala:321)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.feeder.ParrotFeeder.shutdown(ParrotFeeder.scala:91)
ERR [20150105-11:24:21.088] util: at com.twitter.parrot.feeder.ParrotFeeder$$anonfun$start$1.apply$mcV$sp(ParrotFeeder.scala:78)
ERR [20150105-11:24:21.088] util: at com.twitter.ostrich.admin.BackgroundProcess$$anon$1.run(BackgroundProcess.scala:34)
...
INF [20150105-11:42:11.397] stats: JsonStatsLogger exiting by request.
INF [20150105-11:42:11.397] stats: JsonStatsLogger exiting.
INF [20150105-11:42:11.397] stats: LatchedStatsListener exiting by request.
INF [20150105-11:42:11.397] stats: LatchedStatsListener exiting.
INF [20150105-11:42:11.397] admin: TimeSeriesCollector exiting by request.
INF [20150105-11:42:11.398] admin: TimeSeriesCollector exiting.
TRA [20150105-11:42:11.398] feeder: ParrotFeeder: shut down
ERR [20150105-11:42:12.250] feeder: Exception polling parrot[localhost:9999] - 5.seconds
doConfirm = false
duration = 30
timeUnit = "SECONDS"
log = "./requests.log"
reuseFile = true
localMode = true
traceLevel = com.twitter.logging.Level.ALL // should be commented out when the request rate is increased
requestRate = 1 // 50000 requests per second per parrot server is the practical upper limit
- For the first issue (the queue-over-capacity warning), it seems the queue fills rapidly even in low-request-rate tests, triggering over-capacity warnings almost immediately. Is that accurate?
- For the second, I'm not sure why the tests won't terminate on their own. I've seen them shut down cleanly before, but lately every run ends with the shutdown TimeoutException.
--
You received this message because you are subscribed to the Google Groups "Iago Users" group.
INF [20150105-15:48:55.164] stats: Starting LatchedStatsListener
INF [20150105-15:48:55.201] stats: Starting JsonStatsLogger
INF [20150105-15:48:55.201] admin: Starting TimeSeriesCollector
INF [20150105-15:48:55.202] admin: Admin HTTP interface started on port 9900.
INF [20150105-15:48:55.206] feeder: Starting Parrot Feeder...
INF [20150105-15:48:55.208] feeder: Starting ParrotPoller
INF [20150105-15:48:55.209] util: Connecting to Parrots
INF [20150105-15:48:55.210] util: Connecting to parrot server at localhost:9999
INF [20150105-15:48:55.524] finagle: Finagle version 6.11.1 (rev=83de11a66b498351418433bcad00cbf4b7dc495c) built at 20140122-140449
INF [20150105-15:48:55.610] feeder: Awaiting 1 servers to stand up and be recognized.
INF [20150105-15:48:56.454] twitter: Context: added handler com.twitter.finagle.tracing.TraceContext
INF [20150105-15:48:56.604] feeder: Queue is empty for server localhost:9999
INF [20150105-15:48:56.614] feeder: ParrotFeeder.runLoad: exiting because log exhausted
INF [20150105-15:48:56.615] feeder: Lines played: 0 failed: 0
INF [20150105-15:48:57.910] feeder: FeedConsumer.sendRequest: wrote batch of size 1000 to localhost:9999 rps=0.00000 depth=989.000 status=Some(Running) lines=1000
INF [20150105-15:48:57.915] feeder: FeedConsumer.sendRequest: wrote batch of size 3 to localhost:9999 rps=0.00000 depth=991.000 status=Some(Running) lines=3
INF [20150105-15:49:01.023] stats: {"TFinagleBinaryProtocol\/fast_encode_failed":0,"TFinagleBinaryProtocol\/larger_than_threadlocal_out_buffer":0,"jvm_buffer_direct_count":18,"jvm_buffer_direct_max":1116769,"jvm_buffer_direct_used":1116769,"jvm_buffer_mapped_count":0,"jvm_buffer_mapped_max":0,"jvm_buffer_mapped_used":0,"jvm_current_mem_CMS_Old_Gen_max":1219166208,"jvm_current_mem_CMS_Old_Gen_used":0,"jvm_current_mem_CMS_Perm_Gen_max":85983232,"jvm_current_mem_CMS_Perm_Gen_used":39082128,"jvm_current_mem_Code_Cache_max":50331648,"jvm_current_mem_Code_Cache_used":2416128,"jvm_current_mem_Par_Eden_Space_max":487653376,"jvm_current_mem_Par_Eden_Space_used":162269488,"jvm_current_mem_Par_Survivor_Space_max":60948480,"jvm_current_mem_Par_Survivor_Space_used":27060680,"jvm_current_mem_used":230828424,"jvm_fd_count":181,"jvm_fd_limit":10240,"jvm_gc_ConcurrentMarkSweep_cycles":0,"jvm_gc_ConcurrentMarkSweep_msec":0,"jvm_gc_ParNew_cycles":1,"jvm_gc_ParNew_msec":23,"jvm_gc_cycles":1,"jvm_gc_msec":23,"jvm_heap_committed":1767768064,"jvm_heap_max":1767768064,"jvm_heap_used":189330168,"jvm_nonheap_committed":41811968,"jvm_nonheap_max":136314880,"jvm_nonheap_used":41497496,"jvm_num_cpus":8,"jvm_post_gc_CMS_Old_Gen_max":1219166208,"jvm_post_gc_CMS_Old_Gen_used":0,"jvm_post_gc_CMS_Perm_Gen_max":85983232,"jvm_post_gc_CMS_Perm_Gen_used":0,"jvm_post_gc_Par_Eden_Space_max":487653376,"jvm_post_gc_Par_Eden_Space_used":0,"jvm_post_gc_Par_Survivor_Space_max":60948480,"jvm_post_gc_Par_Survivor_Space_used":27060680,"jvm_post_gc_used":27060680,"jvm_start_time":1420490931157,"jvm_thread_count":31,"jvm_thread_daemon_count":25,"jvm_thread_peak_count":32,"jvm_uptime":9850,"service":"parrot-feeder","source":"192.168.119.142","timestamp":1420490941}
ERR [20150105-15:49:02.924] util: Error shutting down Parrot: com.twitter.util.TimeoutException
ERR [20150105-15:49:02.924] util: com.twitter.util.TimeoutException: 5.seconds
ERR [20150105-15:49:02.924] util: at com.twitter.util.Promise.ready(Promise.scala:498)
ERR [20150105-15:49:02.924] util: at com.twitter.util.Promise.result(Promise.scala:503)
ERR [20150105-15:49:02.924] util: at com.twitter.util.Promise$Chained.result(Promise.scala:182)
ERR [20150105-15:49:02.924] util: at com.twitter.util.Await$.result(Awaitable.scala:75)
ERR [20150105-15:49:02.924] util: at com.twitter.util.Future.get(Future.scala:648)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.util.RemoteParrot.waitFor(RemoteParrot.scala:160)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.util.RemoteParrot.shutdown(RemoteParrot.scala:120)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:323)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:321)
ERR [20150105-15:49:02.924] util: at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.util.ParrotClusterImpl.shutdown(ParrotCluster.scala:321)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.feeder.ParrotFeeder.shutdown(ParrotFeeder.scala:91)
ERR [20150105-15:49:02.924] util: at com.twitter.parrot.feeder.ParrotFeeder$$anonfun$start$1.apply$mcV$sp(ParrotFeeder.scala:78)
ERR [20150105-15:49:02.924] util: at com.twitter.ostrich.admin.BackgroundProcess$$anon$1.run(BackgroundProcess.scala:34)
Your log file should be at least 1,000 lines long.
Any thoughts on why the shutdown immediately fails like this (I'm assuming this isn't intended)?
new ParrotLauncherConfig {
  doConfirm = false
  duration = 30
  timeUnit = "SECONDS"
  victims = "zalimeni:9337"
  log = "../replay.log"
  reuseFile = true
  jobName = "web"
  localMode = true
  // Comment out the trace level when you increase the request rate.
  traceLevel = com.twitter.logging.Level.ALL
  requestRate = 1 // 50000 requests per second per parrot server is the practical upper limit
  imports = """
import com.twitter.parrot.processor.SimpleRecordProcessor
import org.jboss.netty.handler.codec.http.HttpResponse
"""
  loadTest = "new SimpleRecordProcessor(service.get, this)"
}
Server running on port 9337
Request received at Tue Jan 06 2015 14:56:34 GMT-0500 (EST) / Total requests: 1
Request received at Tue Jan 06 2015 14:56:36 GMT-0500 (EST) / Total requests: 2
...
Request received at Tue Jan 06 2015 14:58:49 GMT-0500 (EST) / Total requests: 120
...
...
def start() {
  if (config.duration.inMillis > 0) {
    shutdownAfter(config.duration)
    config.maxRequests = Integer.MAX_VALUE // don't terminate prematurely
  }
...
--
It doesn't make much sense to configure both of these, but we didn't have a clean way to make them mutually exclusive without things getting really annoying, so the logic is that one of them wins when both are set (I forget which; I think it may be maxRequests).
If you want a test to run for 30s but be guaranteed to send only N requests, I think the only way to do that is to include exactly N requests in the log and not let the log file be reused.
If you have a test that is intended to be type #2 up there, and it isn't shutting down at the right time, that is a bug.
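A launcher config following that suggestion might look like the sketch below. The victim address, log path, and request count are hypothetical; the key setting is `reuseFile = false`, so the feeder exits once the log is exhausted rather than wrapping around for the full duration:

```scala
// Sketch: bound a run to exactly N requests by feeding Parrot an N-line
// log and disabling reuse. When the log is exhausted the feeder stops,
// even if the 30-second duration has not yet elapsed.
new ParrotLauncherConfig {
  doConfirm = false
  duration = 30
  timeUnit = "SECONDS"
  victims = "localhost:9337"     // hypothetical victim
  log = "./replay-100-lines.log" // contains exactly the N requests to send
  reuseFile = false              // don't wrap around when the log runs out
  localMode = true
  requestRate = 1
}
```

An N-line log can be carved out of a larger one with e.g. `head -n 100 full-replay.log > replay-100-lines.log`.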
...
...