Three log errors/warnings


Michael Zalimeni

Jan 5, 2015, 12:05:17 PM
to iago-...@googlegroups.com
I am seeing a lot of the following three log messages at the end of parrot-feeder.log when running tests with a fairly basic configuration. I am running tests on a MacBook Pro (OSX 10.10) pointed at a server on my local network.

TRA [20150105-11:24:14.075] util: RemoteParrot: parrot[localhost:9999] queue is over capacity
TRA [20150105-11:24:14.076] util: RemoteParrot: parrot[localhost:9999] queue is over capacity
TRA [20150105-11:24:14.076] util: RemoteParrot: parrot[localhost:9999] more over capacity warnings ...
TRA [20150105-11:24:14.078] util: RemoteParrot.sendRequest: parrot[localhost:9999] done sending requests of size=4 to the server
INF [20150105-11:24:14.078] feeder: FeedConsumer.sendRequest: wrote batch of size 4 to localhost:9999 rps=0.00000 depth=21.0000 status=Some(Running) lines=4
DEB [20150105-11:24:14.917] feeder: pollParrot: depth is 21.000000 for localhost:9999
[20150105-11:24:15.493] feeder: ParrotFeeder.shutdownAfter: shutting down due to duration timeout of 10.0 seconds
INF [20150105-11:24:15.494] feeder: Lines played: 128 failed: 0
TRA [20150105-11:24:15.494] feeder: ParrotFeeder: shutting down ...
DEB [20150105-11:24:15.494] util: ParrotClusterImpl: shutting down
DEB [20150105-11:24:15.924] feeder: pollParrot: depth is 21.000000 for localhost:9999
ERR [20150105-11:24:21.088] util: Error shutting down Parrot: com.twitter.util.TimeoutException
ERR [20150105-11:24:21.088] util: com.twitter.util.TimeoutException: 5.seconds
ERR [20150105-11:24:21.088] util:     at com.twitter.util.Promise.ready(Promise.scala:498)
ERR [20150105-11:24:21.088] util:     at com.twitter.util.Promise.result(Promise.scala:503)
ERR [20150105-11:24:21.088] util:     at com.twitter.util.Promise$Chained.result(Promise.scala:182)
ERR [20150105-11:24:21.088] util:     at com.twitter.util.Await$.result(Awaitable.scala:75)
ERR [20150105-11:24:21.088] util:     at com.twitter.util.Future.get(Future.scala:648)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.util.RemoteParrot.waitFor(RemoteParrot.scala:160)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.util.RemoteParrot.shutdown(RemoteParrot.scala:120)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:323)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:321)
ERR [20150105-11:24:21.088] util:     at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.util.ParrotClusterImpl.shutdown(ParrotCluster.scala:321)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.feeder.ParrotFeeder.shutdown(ParrotFeeder.scala:91)
ERR [20150105-11:24:21.088] util:     at com.twitter.parrot.feeder.ParrotFeeder$$anonfun$start$1.apply$mcV$sp(ParrotFeeder.scala:78)
ERR [20150105-11:24:21.088] util:     at com.twitter.ostrich.admin.BackgroundProcess$$anon$1.run(BackgroundProcess.scala:34)

...

INF [20150105-11:42:11.397] stats: JsonStatsLogger exiting by request.
INF [20150105-11:42:11.397] stats: JsonStatsLogger exiting.
INF [20150105-11:42:11.397] stats: LatchedStatsListener exiting by request.
INF [20150105-11:42:11.397] stats: LatchedStatsListener exiting.
INF [20150105-11:42:11.397] admin: TimeSeriesCollector exiting by request.
INF [20150105-11:42:11.398] admin: TimeSeriesCollector exiting.
TRA [20150105-11:42:11.398] feeder: ParrotFeeder: shut down
ERR [20150105-11:42:12.250] feeder: Exception polling parrot[localhost:9999] - 5.seconds

Any thoughts on why this would keep coming up? I'm sending a basic GET (receiving expected 204's back) using ~1000 identical lines in my log file. Settings are all basically standard with the ParrotLauncherConfig defaults:

  doConfirm = false

  duration = 30
  timeUnit = "SECONDS"

  log = "./requests.log"
  reuseFile = true

  localMode = true

  traceLevel = com.twitter.logging.Level.ALL  // Should be commented out when request rate is increased

  requestRate = 1 // 50000 requests per second per parrot server is the practical upper limit

Same result (all three errors, the first occurring throughout execution) with 10 and 30 second runs. The parrot-server process never finishes shutting down and requires a manual kill.
  • It almost seems, for the first one (queue over capacity warning), that queues are filled rapidly even in low-request-rate tests, causing immediate over-capacity warnings. Is that accurate?
  • For the second, I'm not sure why tests refuse to terminate independently. I've seen them work before, but recently I always see the shutdown TimeoutException.
  • The third seems related to the second.


Thanks!

Tom Howland

Jan 5, 2015, 1:08:36 PM
to iago-...@googlegroups.com
I'm not seeing any bugs here, though perhaps I misunderstand. What I think is happening is that the feeder quickly fills the server's request queue; when the timeout fires, the feeder waits for the server to finish processing its data, but that never completes because there is much more data than time. If you used a higher request rate and didn't set reuseFile, you would not be getting these errors.
  • It almost seems, for the first one (queue over capacity warning), that queues are filled rapidly even in low-request-rate tests, causing immediate over-capacity warnings. Is that accurate?

yes 
  • For the second, I'm not sure why tests refuse to terminate independently. I've seen them work before, but recently I always see the shutdown TimeoutException.

yes. A higher rate would result in the data being consumed before the timeout expires.
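For anyone following along, here is a rough back-of-the-envelope sketch of that effect in Scala. The queue depth of ~1000 lines and the 5-second shutdown wait are assumptions taken from the logs in this thread, not values read from the Iago source:

// Sketch only: how long the server's queue takes to drain at a given requestRate,
// versus the 5-second shutdown wait seen in the TimeoutException above.
object DrainTimeSketch {
  def drainSeconds(queuedRequests: Int, requestRate: Int): Double =
    queuedRequests.toDouble / requestRate

  def main(args: Array[String]): Unit = {
    val shutdownWaitSeconds = 5.0   // "com.twitter.util.TimeoutException: 5.seconds"
    val queued = 1000               // assumed: roughly one batch of queued log lines
    for (rate <- Seq(1, 50, 1000)) {
      val t = drainSeconds(queued, rate)
      val verdict = if (t <= shutdownWaitSeconds) "drains before the wait expires" else "times out"
      println(f"requestRate=$rate%4d rps -> ~$t%.0f s to drain ($verdict)")
    }
  }
}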



Michael Zalimeni

Jan 5, 2015, 4:00:34 PM
to iago-...@googlegroups.com
Great, thanks very much for the clarification.

Unfortunately, it seems that no matter how long my log file is, and regardless of whether I have reuseFile turned on or off, I still get TimeoutExceptions in parrot-feeder.log. I also just noticed that it appears to happen when the Future is created for the shutdown, not at the end of the test:

INF [20150105-15:48:55.164] stats: Starting LatchedStatsListener
INF [20150105-15:48:55.201] stats: Starting JsonStatsLogger
INF [20150105-15:48:55.201] admin: Starting TimeSeriesCollector
INF [20150105-15:48:55.202] admin: Admin HTTP interface started on port 9900.
INF [20150105-15:48:55.206] feeder: Starting Parrot Feeder...
INF [20150105-15:48:55.208] feeder: Starting ParrotPoller
INF [20150105-15:48:55.209] util: Connecting to Parrots
INF [20150105-15:48:55.210] util: Connecting to parrot server at localhost:9999
INF [20150105-15:48:55.524] finagle: Finagle version 6.11.1 (rev=83de11a66b498351418433bcad00cbf4b7dc495c) built at 20140122-140449
INF [20150105-15:48:55.610] feeder: Awaiting 1 servers to stand up and be recognized.
INF [20150105-15:48:56.454] twitter: Context: added handler com.twitter.finagle.tracing.TraceContext
INF [20150105-15:48:56.604] feeder: Queue is empty for server localhost:9999
INF [20150105-15:48:56.614] feeder: ParrotFeeder.runLoad: exiting because log exhausted
INF [20150105-15:48:56.615] feeder: Lines played: 0 failed: 0
INF [20150105-15:48:57.910] feeder: FeedConsumer.sendRequest: wrote batch of size 1000 to localhost:9999 rps=0.00000 depth=989.000 status=Some(Running) lines=1000
INF [20150105-15:48:57.915] feeder: FeedConsumer.sendRequest: wrote batch of size 3 to localhost:9999 rps=0.00000 depth=991.000 status=Some(Running) lines=3
INF [20150105-15:49:01.023] stats: {"TFinagleBinaryProtocol\/fast_encode_failed":0,"TFinagleBinaryProtocol\/larger_than_threadlocal_out_buffer":0,"jvm_buffer_direct_count":18,"jvm_buffer_direct_max":1116769,"jvm_buffer_direct_used":1116769,"jvm_buffer_mapped_count":0,"jvm_buffer_mapped_max":0,"jvm_buffer_mapped_used":0,"jvm_current_mem_CMS_Old_Gen_max":1219166208,"jvm_current_mem_CMS_Old_Gen_used":0,"jvm_current_mem_CMS_Perm_Gen_max":85983232,"jvm_current_mem_CMS_Perm_Gen_used":39082128,"jvm_current_mem_Code_Cache_max":50331648,"jvm_current_mem_Code_Cache_used":2416128,"jvm_current_mem_Par_Eden_Space_max":487653376,"jvm_current_mem_Par_Eden_Space_used":162269488,"jvm_current_mem_Par_Survivor_Space_max":60948480,"jvm_current_mem_Par_Survivor_Space_used":27060680,"jvm_current_mem_used":230828424,"jvm_fd_count":181,"jvm_fd_limit":10240,"jvm_gc_ConcurrentMarkSweep_cycles":0,"jvm_gc_ConcurrentMarkSweep_msec":0,"jvm_gc_ParNew_cycles":1,"jvm_gc_ParNew_msec":23,"jvm_gc_cycles":1,"jvm_gc_msec":23,"jvm_heap_committed":1767768064,"jvm_heap_max":1767768064,"jvm_heap_used":189330168,"jvm_nonheap_committed":41811968,"jvm_nonheap_max":136314880,"jvm_nonheap_used":41497496,"jvm_num_cpus":8,"jvm_post_gc_CMS_Old_Gen_max":1219166208,"jvm_post_gc_CMS_Old_Gen_used":0,"jvm_post_gc_CMS_Perm_Gen_max":85983232,"jvm_post_gc_CMS_Perm_Gen_used":0,"jvm_post_gc_Par_Eden_Space_max":487653376,"jvm_post_gc_Par_Eden_Space_used":0,"jvm_post_gc_Par_Survivor_Space_max":60948480,"jvm_post_gc_Par_Survivor_Space_used":27060680,"jvm_post_gc_used":27060680,"jvm_start_time":1420490931157,"jvm_thread_count":31,"jvm_thread_daemon_count":25,"jvm_thread_peak_count":32,"jvm_uptime":9850,"service":"parrot-feeder","source":"192.168.119.142","timestamp":1420490941}
ERR [20150105-15:49:02.924] util: Error shutting down Parrot: com.twitter.util.TimeoutException
ERR [20150105-15:49:02.924] util: com.twitter.util.TimeoutException: 5.seconds
ERR [20150105-15:49:02.924] util:     at com.twitter.util.Promise.ready(Promise.scala:498)
ERR [20150105-15:49:02.924] util:     at com.twitter.util.Promise.result(Promise.scala:503)
ERR [20150105-15:49:02.924] util:     at com.twitter.util.Promise$Chained.result(Promise.scala:182)
ERR [20150105-15:49:02.924] util:     at com.twitter.util.Await$.result(Awaitable.scala:75)
ERR [20150105-15:49:02.924] util:     at com.twitter.util.Future.get(Future.scala:648)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.util.RemoteParrot.waitFor(RemoteParrot.scala:160)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.util.RemoteParrot.shutdown(RemoteParrot.scala:120)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:323)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.util.ParrotClusterImpl$$anonfun$shutdown$4.apply(ParrotCluster.scala:321)
ERR [20150105-15:49:02.924] util:     at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.util.ParrotClusterImpl.shutdown(ParrotCluster.scala:321)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.feeder.ParrotFeeder.shutdown(ParrotFeeder.scala:91)
ERR [20150105-15:49:02.924] util:     at com.twitter.parrot.feeder.ParrotFeeder$$anonfun$start$1.apply$mcV$sp(ParrotFeeder.scala:78)
ERR [20150105-15:49:02.924] util:     at com.twitter.ostrich.admin.BackgroundProcess$$anon$1.run(BackgroundProcess.scala:34)

This test was run with requestRate at 50, duration at 30, and reuseFile (~1000 identical request lines) set to false. Any thoughts on why the shutdown immediately fails like this (I'm assuming this isn't intended)?

Also, just curious: what is the reason for the following requirement for reuseFile?
"your log file should at least be 1,000 lines or more."


Thanks again!

James Waldrop

Jan 5, 2015, 4:32:06 PM
to iago-...@googlegroups.com
I'm going to 'let' Tom weigh in on most of these questions, but since I wrote the recommendation that log files be at least 1000 lines, I'll explain the logic. The code here is pretty simple and naive: it opens the file, reads 'chunkSize' lines at a time, and closes it when it hits the end. The overhead of opening and closing the file was a concern of mine, but rather than write more code to fill the queue by tracking the log file size against the queue depth and chunk size, I simply had people make sure they had large log files. This was a pretty meaningless recommendation at Twitter at the time, since our log files were usually measured in GB, but it remains the recommendation absent some smarter handling of filling up the feeder's queue of messages to be sent to the server.
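A minimal sketch of that open / read-chunkSize-lines / close-at-EOF pattern, for illustration only. This is not Iago's actual log-reading code; the function and parameter names are made up, and the reopen-on-reuse behavior is an assumption based on this thread:

import scala.io.Source

// Illustration only -- not Iago's reader. A short log file means this
// open/close cycle happens very frequently when reuseFile is on.
def feedForever(path: String, chunkSize: Int, reuseFile: Boolean)(send: Seq[String] => Unit): Unit = {
  var keepGoing = true
  while (keepGoing) {
    val source = Source.fromFile(path)                      // (re)open the log file
    try source.getLines().grouped(chunkSize).foreach(send)  // hand off one chunk of lines at a time
    finally source.close()                                  // close it once EOF is hit
    keepGoing = reuseFile                                   // loop only when the file is reused
  }
}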

Tom Howland

Jan 5, 2015, 8:45:06 PM
to iago-...@googlegroups.com
Any thoughts on why the shutdown immediately fails like this (I'm assuming this isn't intended)?

It isn't immediate. The log entry

INF [20150105-15:48:56.614] feeder: ParrotFeeder.runLoad: exiting because log exhausted

is written 5 seconds before the timeout.

But in truth, I don't know why you're getting this error. It looks like after you run out of data, the messages are sent and then the timeout happens. It may be an error that you're getting a timeout exception, but does it matter? The parrot server is hitting your server at the contracted rate, is it not? It might be useful to post the parrot server log as well.

Michael Zalimeni

Jan 6, 2015, 4:29:26 PM
to iago-...@googlegroups.com
Thanks James - appreciate the clarification. I figured your average log file at Twitter had to be pretty big :-)

Tom, I see what you mean. I was concerned because my server stayed alive, receiving requests, much longer than I expected during short tests, and I didn't know why until now. It appears that if the feeder's batchSize exceeds (requestRate * duration) or maxRequests, the feeder will still send the full batch of log lines to the server, and the server will send all of those requests (as designed) before self-terminating.

Just to be certain it wasn't my processor doing something odd, I ran a test against an echo server using the master branch's 'web' example. Settings were kept as-is except for my echo server's address substituted for victim:

new ParrotLauncherConfig {

  doConfirm = false

  duration = 30
  timeUnit = "SECONDS"

  victims = "zalimeni:9337"
  log = "../replay.log"
  reuseFile = true

  jobName = "web"
  localMode = true

  // Comment-out the trace level when you increase the request rate.
  traceLevel = com.twitter.logging.Level.ALL

  requestRate = 1 // 50000 requests per second per parrot server
                  // is the practical upper limit

  imports = """
import com.twitter.parrot.processor.SimpleRecordProcessor
import org.jboss.netty.handler.codec.http.HttpResponse
"""
  loadTest = "new SimpleRecordProcessor(service.get, this)"
}

Log contained 80-1000 lines with just "/".

Echo server log:

Server running on port 9337
Request received at Tue Jan 06 2015 14:56:34 GMT-0500 (EST) / Total requests: 1
Request received at Tue Jan 06 2015 14:56:36 GMT-0500 (EST) / Total requests: 2
...
Request received at Tue Jan 06 2015 14:58:49 GMT-0500 (EST) / Total requests: 120
...

My log showed far more requests than I should have gotten when attempting to limit the test with duration/requestRate or maxRequests. I'm sure this isn't an issue at Twitter, but for my small tests I've been hitting the "floor" caused by the length of my log file and severely over-saturating the tests. Good to (I believe) understand the root cause now; I can at least limit the number of requests by shortening my log file and turning off file reuse.

However, it does look like both duration-timeout shutdowns and log-exhaustion shutdowns (from reuseFile=false) in the feeder consistently throw TimeoutExceptions, with the feeder failing to poll the server for a shutdown (and thereby failing to shut down the server). Maybe it has something to do with the order of closing out listeners and other network resources? To Tom's point, it doesn't affect much other than preventing use of the duration parameter to shut down the server, though I find that feature pretty useful. Regardless, I thought I'd mention what I found in case it is helpful to you, or at least to someone else seeing the error without knowing why.

Michael Zalimeni

Jan 6, 2015, 5:37:52 PM
to iago-...@googlegroups.com
After some more looking, this is the line in ParrotFeeder.scala that's causing maxRequests not to be applied:

def start() {
  if (config.duration.inMillis > 0) {
    shutdownAfter(config.duration)
    config.maxRequests = Integer.MAX_VALUE // don't terminate prematurely
  }

I guess the rationale is that, since there's no way to tell whether the default was used or not, duration trumps maxRequests.

Would it have any adverse effect to instead also have maxRequests default to 0 like duration, and then, only if it is still defaulted to 0 at the point above, set it to either 1000 (maybe with a log statement) or Integer.MAX_VALUE based on whether or not duration was provided? Wanted to ask in case there are implications before making a pull request.
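For concreteness, a hypothetical sketch of what that proposal might look like. This is not actual Parrot code; it assumes maxRequests would default to 0 (meaning "unset"), mirroring duration:

def start() {
  if (config.maxRequests == 0) {              // still at the proposed "unset" default
    config.maxRequests =
      if (config.duration.inMillis > 0) Integer.MAX_VALUE   // duration governs shutdown
      else 1000                                              // otherwise keep a sane default
  }
  if (config.duration.inMillis > 0) {
    shutdownAfter(config.duration)
  }
  // ... rest of start() unchanged
}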

Tom Howland

Jan 6, 2015, 6:58:28 PM
to iago-...@googlegroups.com
Good find. I think maxRequests should be MAX_VALUE by default, and this line should be deleted. It should be defaulted to MAX_VALUE in src/main/scala/com/twitter/parrot/config/ParrotLauncherConfig.scala and src/main/scala/com/twitter/parrot/config/ParrotFeederConfig.scala


Michael Zalimeni

Jan 7, 2015, 1:37:10 PM
to iago-...@googlegroups.com
Ok, sounds good - thanks!

Whenever you have time: I think I hid my prior post with the most recent one... I was wondering if you had any thoughts on the apparent duration/shutdown issue. It looks like the RequestConsumer calls Await.ready(done) in its shutdown function, with done being set only after the while (queue.size() > 0) takeOne loop exits (which matches the documentation, in that the server finishes processing all queued requests before shutting down).
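Roughly, the shape is something like this (a schematic only, not the actual RequestConsumer code; it uses plain Java concurrency primitives rather than com.twitter.util to illustrate the drain-then-signal pattern):

import java.util.concurrent.{CountDownLatch, LinkedBlockingQueue, TimeUnit}

// Schematic only -- not the actual RequestConsumer. shutdown() blocks until the queue
// has fully drained, so a 5-second wait on the feeder side can expire long before
// "done" is ever signalled.
class DrainingConsumer[A](process: A => Unit) {
  private val queue = new LinkedBlockingQueue[A]()
  private val done  = new CountDownLatch(1)

  def offer(a: A): Unit = queue.put(a)

  def run(): Unit = {
    while (queue.size() > 0) process(queue.take())   // keep taking until the queue is empty
    done.countDown()                                 // only now does "done" complete
  }

  def shutdown(waitSeconds: Long): Boolean =
    done.await(waitSeconds, TimeUnit.SECONDS)        // analogous to Await.ready(done) with a timeout
}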

For my simple echo test, with batchSize unmodified at 1000, my consumer takes an average of 50s to shut down. When I set the feeder to wait a minute or so, shutdown was graceful. Should the feeder use a longer timeout than the finagle default from ParrotCommonConfig, or perhaps should the server not block shutdown with a while loop? I know the second option greatly alters functionality, but it seems more intuitive (just IMO) that the server would shut down when told, not after finishing its batch. I'm guessing, again, this has somewhat to do with me running small tests, but I wanted to ask the experts.

Thanks for all the responses!


James Waldrop

Jan 7, 2015, 2:28:50 PM
to iago-...@googlegroups.com
My vague recollection is that the "finish up before we shut down" logic is there because some people may assume that N requests will be sent when they ask for that. If the Feeder, when it shuts down, also shuts down the Server immediately, the system being tested will see N - queueDepth requests.

Perhaps a "shut down now, really" mode is needed. For short tests, though, I'd think you'd _want_ the behavior as it stands today in the normal scenario -- otherwise you legitimately run the risk of only seeing one request sent to your service (I don't think you could hit the pathological case of zero requests being sent before the Feeder shuts everything down, but I could be wrong about that!).




Michael Zalimeni

Jan 7, 2015, 2:52:14 PM
to iago-...@googlegroups.com
That makes sense - when you say "when they ask for that", are you referring to maxRequests, or a different config value?

"Shut down, really" could be useful. I was thinking when I said that duration should be 60s, and max requests should be 500, the earlier of the two would be honored. When batchSize > maxRequests, the first batch is always processed no matter the time limit. I guess it intuitively felt like duration was a server shutdown hook, rather than for just the feeder... "Go until you run out of time or requests I've allowed, then immediately stop". So maybe duration timeout would be the appropriate place for that? Or would you say a separate scenario/config?

For the second point, would it be alright to open a pull request for upping the timeout on the feeder's request to shut down the cluster, so that the servers have time to finish processing their queued requests and respond? It seems the finagle default of 5 seconds will hardly ever be met with a large batch size, particularly the default of 1000. If so, how long would you think is a good timeout? I was pushing 1 min. with just an echo to a remote server...

Tom Howland

Jan 7, 2015, 3:40:01 PM
to iago-...@googlegroups.com
I think the correct behavior is not to change the timeout but rather to make it so that the feeder kills the servers. The bug is that this doesn't seem to be working; instead, we're getting timeout errors.


James Waldrop

Jan 7, 2015, 3:40:09 PM
to iago-...@googlegroups.com
The idea was that there's two 'styles' of tests people might want to run:

1) a test that sends N requests and then stops
2) a test that runs for N seconds and then stops

It doesn't make much sense for both of these to be configured, but we didn't have a way to do exclusive configuration without things getting really annoying, so the logic is that one of them wins if they're both set (I forget which, I think it might be maxRequests).

The feeder may have some idea of how many requests have been sent by the server via polling, but it doesn't really know. All it has control over is how many requests it queues up. It also doesn't know how long that queue will take to drain after a shutdown is sent. Even the server doesn't really know that, since performance can obviously vary over the course of a run.

If you want a test to run for 30s, but be guaranteed to only send N requests, I think the only way to do that is to only include N requests in the log and then don't let the log file get reused.
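For example, a launcher config along those lines might look like this. It is a sketch using the same settings shown earlier in the thread; the log path is hypothetical and should point at a file containing exactly the N request lines you want sent:

new ParrotLauncherConfig {
  duration = 30
  timeUnit = "SECONDS"

  log = "./requests-n-lines.log"   // hypothetical path: exactly N request lines, one per request
  reuseFile = false                // don't loop the file, so at most N requests go out

  requestRate = 1
  localMode = true
}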

You specifically asked:

So maybe duration timeout would be the appropriate place for that? Or would you say a separate scenario/config?

If you have a test that is intended to be type #2 up there, and it isn't shutting down at the right time, that is a bug.

would it be alright to request a pull for upping the timeout for the feeder's request to shut down the cluster, so that the servers have time to finish processing their queued requests and respond?

I think that makes a lot of sense. And honestly, I think the timeout may well need to be max int.



Michael Zalimeni

Jan 7, 2015, 4:26:42 PM
to iago-...@googlegroups.com
Tom - understood. That makes sense.

James,

It doesn't make much sense for both of these to be configured, but we didn't have a way to do exclusive configuration without things getting really annoying, so the logic is that one of them wins if they're both set (I forget which, I think it might be maxRequests).

In this case, if we made the duration call an absolute shutdown like you and Tom both suggested, then they could work in tandem. It looks like duration is effectively mandatory in this case, since it currently defaults to 5s in ParrotLauncherConfig. I think the default for that might be safer at 0, since duration is only enforced by the feeder if > 0. That way leaving duration out, like you mentioned in scenario 2), would not kill the test early.

 If you want a test to run for 30s, but be guaranteed to only send N requests, I think the only way to do that is to only include N requests in the log and then don't let the log file get reused.

The most recent pull request *should* guarantee N requests if maxRequests is specified, since it will now be honored even if duration is provided.

If you have a test that is intended to be type #2 up there, and it isn't shutting down at the right time, that is a bug.

That's correct. I'll make a pull request altering the server's behavior so that, in a timeout scenario, it shuts down without finishing its queue.

Michael Zalimeni

Jan 14, 2015, 2:28:58 PM
to iago-...@googlegroups.com
Just to follow up on this: it took going back to the documentation today to fully understand what you were saying, James. Since stats are recorded automatically by both the client (the "client/requests" stat) and the server (the "requests_sent" stat), one could get out of sync with the other if all server-queued requests aren't processed after the feeder has sent them over.

I have a branch that has the feeder canceling requests on the server independent of the feeder's lifecycle (the cancel requests call is included in the timer function that sets the feeder state to TIMEOUT). That way, the feeder won't kill off requests unless it has actually timed out - and will still send the cancel request to the server in the middle of an EOF shutdown.

My concern though is that this might affect current use, if the desired end-state is a match between "client/requests" and "requests_sent". Could you give your thoughts on this? Not trying to rock the boat, I just have personal interest in the "shut down now really" scenario since my tests are small. I'm happy to just use my branch for testing when I need it, if it's not desirable.

Regardless, thanks for all the feedback!