Having difficulty testing this "cleanly"


Evan Digby

Sep 13, 2016, 2:56:08 PM
to golang-nuts
Has anyone come across a good, non-racy way to ensure that N tasks are guaranteed to be complete after a function is called? Essentially I have a “Close” function that must be guaranteed to block until all tasks are finished. Achieving this was pretty simple: wrap each task in an RLock, then take a Lock on close.
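A minimal sketch of that scheme (hypothetical names, not the playground code verbatim): each task runs under a read lock, and Close takes the write lock, so Close cannot return while any task still holds a read lock.

```go
package main

import (
	"fmt"
	"sync"
)

// handler runs each task under a read lock; Close takes the write lock,
// so it blocks until every in-flight task has released its read lock.
type handler struct {
	mu sync.RWMutex
}

func (h *handler) Handle(task func()) {
	go func() {
		h.mu.RLock()
		defer h.mu.RUnlock()
		task()
	}()
}

func (h *handler) Close() {
	h.mu.Lock() // blocks until all read locks are released
	h.mu.Unlock()
}

// closeWaited reports whether a task that was already running when
// Close was called had finished by the time Close returned.
func closeWaited() bool {
	h := &handler{}
	started := make(chan struct{})
	done := make(chan struct{}, 1)
	h.Handle(func() {
		close(started)     // signal: we are inside the read lock
		done <- struct{}{} // the "work"
	})
	<-started // the task now holds the read lock
	h.Close() // must block until the task releases it
	return len(done) == 1
}

func main() {
	fmt.Println("Close waited for the running task:", closeWaited())
}
```

Note that the demo relies on the `started` channel to guarantee the task is already inside the read lock before Close is called; the window between Handle spawning the goroutine and the RLock being acquired is exactly what the rest of the thread discusses.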


Now I want to write a solid test to guarantee, for posterity, that Close meets that requirement that all tasks must finish first. In that example, try commenting out the RLock/RUnlock on lines 25/26. You'll see that it no longer outputs many, if any, lines. I'm trying to prevent some future cowboy refactor from reintroducing that!

All of the ways I can come up with involve sleeping, or launching more tasks than I _think_ can be finished in time--obviously not good!

I feel like I must be missing some obvious way to test this and I'll end up feeling silly once someone replies with the solution. I'm okay with that!

John Souvestre

Sep 13, 2016, 3:01:29 PM
to golan...@googlegroups.com

Have you considered using a sync.WaitGroup?

 

John

    John Souvestre - New Orleans LA

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Evan Digby

Sep 13, 2016, 3:19:16 PM
to golang-nuts
Hi John,

Thanks for the reply. I've tried many incarnations that include WaitGroups; however, none seem to achieve the desired result. 

If I add a WaitGroup with a deferred Done in the handler, and then Wait after Close(), then the test itself implements the requirement and won't protect against future refactors. There's no way to test that a WaitGroup is done without waiting for it, and even if there were, it would be racy: tasks could complete between the Close() and the WaitGroup Wait call. Wrapping the Wait and the Done in goroutines to see which happened first is also racy.

If you have something else in mind can you elaborate on how it would help in this case?

Thanks again!

Evan

aro...@gmail.com

Sep 13, 2016, 3:34:17 PM
to golang-nuts
The WaitGroup is better than the lock approach, since the lock approach can block an outstanding task. The key to using WaitGroups is to call Add() outside of the goroutines that might call Done().
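A short illustration of that rule (illustrative names): Add happens in the caller, before each goroutine is spawned, so Wait can never observe a zero counter while work is still being scheduled.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runTasks runs n tasks and waits for all of them. wg.Add(1) is called
// in the caller, before each goroutine is spawned; calling Add inside
// the goroutine instead would race with Wait.
func runTasks(n int) int64 {
	var wg sync.WaitGroup
	var completed int64
	for i := 0; i < n; i++ {
		wg.Add(1) // before the goroutine, never inside it
		go func() {
			defer wg.Done()
			atomic.AddInt64(&completed, 1)
		}()
	}
	wg.Wait() // guaranteed to see all n completions
	return atomic.LoadInt64(&completed)
}

func main() {
	fmt.Println(runTasks(100)) // always 100
}
```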

Egon

Sep 13, 2016, 3:47:40 PM
to golang-nuts
counter := int64(N)
release := make(chan struct{})

...
for i := 0; i < N; i++ {
  h.Handle(func() {
      <-release
      atomic.AddInt64(&counter, -1)
  })
}
...

go func() {
    time.Sleep(time.Millisecond) // so we certainly hit h.Close before we continue
    for i := 0; i < N; i++ {
        release <- struct{}{}
        time.Sleep(time.Millisecond)
    }
    // alternatively use runtime.Gosched() instead of Sleep
}()

h.Close()

if atomic.LoadInt64(&counter) > 0 {
    // fail
}

It's not completely fool-proof, but should work well enough in practice.

John Souvestre

Sep 13, 2016, 3:49:07 PM
to golan...@googlegroups.com

Hi Evan.

 

I looked at the code you posted.  I’m having a problem seeing exactly what you are trying to do.  Also, it appears to use multiple synchronization techniques: a Mutex, a WaitGroup, and 2 blocking channels.

 

Am I correct in guessing that you want to know how long it took to get all of the tasks running?  Then you want to wait till all of the tasks are complete?  If so, I’m thinking that using 2 WaitGroups would do the job – one to show when all the tasks are running and one to show when all the tasks are done.  No mutex and no blocking channels.
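A sketch of that two-WaitGroup idea (hypothetical names): one group reports that every task is running, the other that every task is done.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run launches n tasks that block on release, waits until all of them
// are running, releases them, then waits until all of them are done.
func run(n int) int64 {
	var started, finished sync.WaitGroup
	release := make(chan struct{})
	var completed int64

	started.Add(n)
	finished.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer finished.Done()
			started.Done() // this task is now running
			<-release      // hold here until released
			atomic.AddInt64(&completed, 1)
		}()
	}

	started.Wait()  // all n tasks are running
	close(release)  // let them proceed
	finished.Wait() // all n tasks are done
	return atomic.LoadInt64(&completed)
}

func main() {
	fmt.Println(run(5)) // always 5
}
```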

Evan Digby

Sep 13, 2016, 3:50:50 PM
to golang-nuts, aro...@gmail.com
Hi Aroman,

Your approach using the WaitGroup is definitely better in this toy example. The reason I didn't use a WaitGroup is that the non-toy example wraps the HTTP Server handler: I have no way to inject an "Add" before the goroutine is created, since that's handled by Go's HTTP Server, without re-implementing the accept->handle loop on the listener.

Apologies for not giving the full context in the example.  

I'm not sure how it could block an outstanding task, since the closed channel is closed before the Lock(), so no additional calls to RLock will be made at that point, and the Lock will just wait until all of the RLocks are complete.

Regarding your testing strategy, I do like it better than any of my current strategies; however, there is still a chance that a task could complete between lines 90 and 91:

h.Close()
events <- ALL_TASKS_FINISHED

So this doesn't solve the racy-ness I'm concerned about unless you put an arbitrary sleep in the handlers, which I'm trying to avoid. 

Evan Digby

Sep 13, 2016, 3:52:27 PM
to golang-nuts
Hi Egon,

This is essentially the strategy I'm taking; however, I am hoping to avoid the "sleep and pray" method. Reliable in practice, but not guaranteed. Also, in a CI run of thousands of tests, arbitrary added time can extend the total test time quite a bit.

That said, if a sleep is the only way, a sleep is the only way. I hope it isn't!

Thanks!

Evan

John Souvestre

Sep 13, 2016, 3:54:21 PM
to golang-nuts

Hi again.

 

> There's no way to test that a WaitGroup is done without waiting for it, and even if there was it would be racy because between the Close() and WaitGroup wait call tasks could complete.

 

If you don’t mind being blocked, then Wait is just what you want.  Since it’s occurring in the same function (main) where the Add took place earlier, there is no race.

 

John

    John Souvestre - New Orleans LA

 


Evan Digby

Sep 13, 2016, 3:58:38 PM
to golang-nuts
I do mind being blocked, because if the test blocks (consider what's in "main" a test), then the test is enforcing the requirement that I want to see enforced by the implementation. That won't protect me, and will make the test obsolete.

What I'm trying to test is that all handlers are complete after the call to "Close". I don't want the test code to be enforcing that.

Evan Digby

Sep 13, 2016, 3:59:07 PM
to golang-nuts
Hi John,

Consider the "main" function a test, and not a sane usage of the handler.

The multiple sync methods in "main" are trying to accomplish:

1) Ensure all tasks have started (allStarted waitgroup)
2) Wait until we're ready to call "Done" before we proceed with the task (timeToFinish channel block)

These synchronizations wouldn't exist in actual usage. They're just test setup.

The RW mutex is used by the implementation to guarantee that all handlers are complete before we return from "Close", which is what I'm attempting to test.

Thanks again!

John Souvestre

Sep 13, 2016, 4:06:39 PM
to golang-nuts

> The RW mutex is used by the implementation to guarantee that all handlers are complete before we return from "Close", which is what I'm attempting to test.

 

I’m not sure that the mutex is going to work like you want.  There’s no guarantee that the h.RLock() in the goroutine will execute in a timely fashion.  I think that you’d have to call it before the goroutine.

Egon

Sep 13, 2016, 4:11:17 PM
to golang-nuts
On Tuesday, 13 September 2016 22:52:27 UTC+3, Evan Digby wrote:
Hi Egon,

This is essentially the strategy I'm taking; however, I am hoping to avoid the "Sleep and Pray" method. Reliable in practice, but not guaranteed. Also in a CI of thousands of tests, adding time arbitrarily can extend out the time it takes to test quite a bit.

That said, if a sleep is the only way, a sleep is the only way. I hope it isn't!

You would need to modify the handler to make it work in other ways. Alternatively, you'd need to write a code-rewriter to inject code into the handler.

The only reliable concurrent software I've seen is software that is easy to understand and backed by a proof (either formal or informal).

I took a deeper look into the taskHandler code -- and it doesn't finish all the tasks before Close returns:

I.e. (R1 and R2 are goroutines; line numbers refer to the playground code)

R1: Line 24 // go func is delayed for some reason
R2: Line 35 // close is called in main
R2: Line 36
R2: Line 37
R1: Line 25 // go func is started
R1: Line 28 // task is executed
R1: Line 26

ATM too tired to implement a proper solution; will take a look at it tomorrow.

+ Egon

Evan Digby

Sep 13, 2016, 4:12:30 PM
to golang-nuts
Hi John,

You're absolutely right: there is definitely a race condition that we've accepted here. We're trading that off against reimplementing large parts of the existing HTTP server so that we can inject the RLock (or wg.Add(1)) before Go spawns the goroutine for the request.

This race condition is why the current "test" needs the "allStarted" wait group. 

Thanks,

Evan

Evan Digby

Sep 13, 2016, 4:22:07 PM
to golang-nuts
Hi Egon,

I appreciate the effort you've (and everyone has) put in thus-far!

Correct me if I'm wrong, but I think you're pointing out the race condition John pointed out. That's a tradeoff we've accepted so we don't need to reimplement much of http.Serve just to add a WaitGroup Add or RLock before the goroutine is spawned (among other sync requirements: ensuring no new connections are accepted, etc.).

Thanks again,

Evan

aro...@gmail.com

Sep 13, 2016, 4:24:03 PM
to golang-nuts, aro...@gmail.com
The mutex approach is fundamentally broken because you can't guarantee that the tasks are all started (and have a read-lock acquired) before you call close.

Consider:
h.Handle(...)  <-- gets past the closed channel check, calls go ..., but the goroutine doesn't execute yet.
h.Close() <-- closes the close channel, Locks and Unlocks, returns.
...now the goroutine executes and acquires the read lock.

So really, if you can't control the Handle() function, you need two WaitGroups:  one to verify that all goroutines have started before shutting down the task handler and a second one for all goroutines to have finished.  However, it's tricky if we don't know the real use case.

Sounds like you are trying to do graceful HTTP shutdown.  Have you looked at other libraries that do that?  If you don't have a way to account for the time between when Handle(..) is called and when the goroutine starts, you might always miss a task that arrived near the time Close() was called.

- Augusto

Evan Digby

Sep 13, 2016, 4:31:55 PM
to golang-nuts, aro...@gmail.com
Hi John/Egon/Augusto,

I should point out that all we need to guarantee (barring abnormal termination of course) is that once a task starts processing, it finishes. Partially processed messages are bad, but HTTP requests that don't result in a message being processed at all are okay.

We don't need to guarantee that the result of every Accept in the HTTP server results in a processed message. We handle this on the OPS side by ensuring we stop sending requests to that instance before terminating the process. We just want to make sure, at that point, that the messages which did make it to the handler are flushed.

So the case where:

h.Handle(...)  <-- gets past the closed channel check, calls go ..., but the goroutine doesn't execute yet.
h.Close() <-- closes the close channel, Locks and Unlocks, returns.
...now the goroutine executes and acquires the read lock.


We actually don't care if "Handle" completes in this example. We only care that if our task handler starts processing a message, it completes the processing.

Thanks again,

Evan

Evan Digby

Sep 13, 2016, 4:39:59 PM
to golang-nuts, aro...@gmail.com
Hi Augusto,

We have considered using 3rd-party graceful shutdown libraries; however, they mostly (all?) seem to re-implement the Server type. This means that if Go were to release a security fix, or other urgent fix to this logic, we wouldn't get it right away. We would then depend on a 3rd party to keep up to date with something that is core Go.

Since our requirements don't require a full "graceful" shutdown, and are easily implemented in a few lines of code by wrapping the handler with an RLock, we would rather minimize our dependence on third-party libraries.

Thanks again,

Evan


Evan Digby

Sep 13, 2016, 4:43:19 PM
to golang-nuts, aro...@gmail.com
Hi Augusto,

I do owe you an apology. Our production code has the check on the closed channel after the RLock, not before. In this case you are 100% correct.

This was a bad transcription of the production code to the toy example.

Thanks again,

Evan


John Souvestre

Sep 13, 2016, 4:52:05 PM
to golang-nuts

Hi Evan.

 

I still don’t quite understand exactly what you are shooting for.  I tried to reimplement what you posted originally.  Check out https://play.golang.org/p/koUJYCKFpa.  Does this come close functionally?

Evan Digby

Sep 13, 2016, 4:59:57 PM
to John Souvestre, golang-nuts
Hi John,

What you've posted is a valid way to implement the handler, but not a way to validate it.

The implementation in the example isn't the problem. It's how to validate the implementation with a test.

If we add a WaitGroup.Wait inside the handler then the test is not valid because it will wait until they're done. If the test does the waiting, then we aren't validating that the implementation itself does the waiting. 

I'm trying to find a clean way to validate that the waiting is done by the Close call.

Thanks again for your effort in this!

Evan


John Souvestre

Sep 13, 2016, 5:04:05 PM
to golan...@googlegroups.com

OK.  Give me a minute to add that.  I just wanted to make sure I was headed in the right direction.  :)

 

Note:  In looking at your original code I didn’t see any way that the error could happen, so I ignored that case.  Given this, there was no need for the h.closed channel.

 

Back in a few.  :)

 

John

    John Souvestre - New Orleans LA

 

Evan Digby

Sep 13, 2016, 5:06:43 PM
to John Souvestre, golan...@googlegroups.com
Hi John,

Thank you!

The h.closed channel, if checked properly (after the RLock), prevents the race condition that Augusto pointed out in his post a few back.

I fixed my implementation here: https://play.golang.org/p/QTkSJAOPtE
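For readers without the playground link, the essential shape of that fix (reconstructed with hypothetical names, not the playground code verbatim) is that the closed-channel check sits inside the read lock:

```go
package main

import (
	"fmt"
	"sync"
)

type handler struct {
	mu     sync.RWMutex
	closed chan struct{}
}

func newHandler() *handler { return &handler{closed: make(chan struct{})} }

func (h *handler) Handle(task func()) {
	go func() {
		h.mu.RLock()
		defer h.mu.RUnlock()
		// The check runs while holding the read lock. Close first closes
		// the channel and only then takes the write lock, so a goroutine
		// that acquires its read lock after Close returns must see the
		// channel closed and refuses to start the task.
		select {
		case <-h.closed:
			return
		default:
		}
		task()
	}()
}

func (h *handler) Close() {
	close(h.closed)
	h.mu.Lock() // waits for every task already past the check
	h.mu.Unlock()
}

// inFlightTaskFinished reports whether a task that was already inside
// the read lock when Close was called had finished by the time Close
// returned.
func inFlightTaskFinished() bool {
	h := newHandler()
	started := make(chan struct{})
	done := make(chan struct{}, 1)
	h.Handle(func() {
		close(started)
		done <- struct{}{}
	})
	<-started // the task is past the check, inside the read lock
	h.Close()
	return len(done) == 1
}

func main() {
	fmt.Println("in-flight task finished before Close returned:", inFlightTaskFinished())
	// Whether a task submitted after Close is reliably rejected cannot be
	// observed deterministically from the outside, which is exactly the
	// testing difficulty discussed in this thread.
}
```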

Thanks again,

Evan

Egon

Sep 13, 2016, 5:08:36 PM
to golang-nuts, aro...@gmail.com
On Tuesday, 13 September 2016 23:31:55 UTC+3, Evan Digby wrote:
Hi John/Egon/Augusto,

I should point out that all we need to guarantee (barring abnormal termination of course) is that once a task starts processing, it finishes. Partially processed messages are bad, but HTTP requests that don't result in a message being processed at all are okay.

We don't need to guarantee that the result of every Accept in the HTTP server results in a processed message. We handle this on the OPS side by ensuring we stop sending requests to that instance before terminating the process. We just want to make sure, at that point, that the messages which did make it to the handler are flushed.

So the case where:

h.Handle(...)  <-- gets past the closed channel check, calls go ..., but the goroutine doesn't execute yet.
h.Close() <-- closes the close channel, Locks and Unlocks, returns.
...now the goroutine executes and acquires the read lock.


We actually don't care if "Handle" completes in this example. We only care that if our task handler starts processing a message, it completes the processing.

How do you actually ensure that it completes processing without hooking into Server? I.e. that buffers and sockets get properly flushed?

Let's take a step back -- what are the properties that you need?

I assume it's just graceful shutdown where all the pending ServeHTTP requests have been processed?

What should happen when the server crashes -- is it vital for those requests to be processed, once they have been accepted?

What should happen when one handler gets stuck in an infinite wait/loop?

Does the "Close" returning mean you exit main or does hpw does the process termination depend on it? Or is it just another goroutine that is terminating not the whole process?

+ Egon

Evan Digby

Sep 13, 2016, 5:18:26 PM
to Egon, golang-nuts, aro...@gmail.com
Hi Egon,

My requirements are simpler than a graceful HTTP shutdown. I simply require that everything that enters the RLock completes to RUnlock. Accepted requests, or even calls to ServeHTTP, can die without issue as long as they haven't entered the processing in the RLock.

If the server crashes we have ways to deal with that, but it's a more DevOps-y process: recovering from logging, etc. It should be an edge case that I'm not worried about handling in code.

If the handler gets stuck in a loop, we will see that in our logging. I don't want the server to die in that case. I want it to keep retrying (we have exponential backoff) and informing us via structured logging of what's going on. If it's an unanticipated loop/block, then there will be a manual investigation into the server's state before we manually kill the process. At that point it becomes similar to the last point, except easier because we already know the state the message was in.

In our use case we will always exit shortly after a close. It's safe to assume the process will die after close returns.

Thanks again,

Evan


John Souvestre

Sep 13, 2016, 5:21:09 PM
to golan...@googlegroups.com

OK.  Check this one out.  https://play.golang.org/p/kO_96oykG1

 

John

    John Souvestre - New Orleans LA

 


Egon

Sep 13, 2016, 5:48:19 PM
to golang-nuts, egon...@gmail.com, aro...@gmail.com


On Wednesday, 14 September 2016 00:18:26 UTC+3, Evan Digby wrote:
Hi Egon,

My requirements are simpler than a graceful HTTP shutdown. I simply require that everything that enters the RLock completes to RUnlock. Accepted requests, or even calls to ServeHTTP, can die without issue as long as they haven't entered the processing in the RLock.

In that case: https://play.golang.org/p/RiFEbQvytP. PS: still tired, may contain bugs.

John Souvestre

Sep 13, 2016, 5:52:50 PM
to golan...@googlegroups.com

Or, if you have something else you’d rather be doing while waiting…

https://play.golang.org/p/rhwEpWMhqF

 

John

    John Souvestre - New Orleans LA

 


Evan Digby

Sep 13, 2016, 5:58:39 PM
to Egon, golang-nuts, aro...@gmail.com
Hi Egon,

Thanks for that. It seems to implement the same requirements as my example, although I prefer my implementation since it doesn't require a sleep/loop.

The implementation I provided in the original email, with the exception of the bug that Augusto pointed out, works as desired. The question was not how to implement these requirements, but how to validate them with a test cleanly. 

Unfortunately it seems I wasn't clear in my original post, since most people are focusing on the implementation of the requirements.

Thanks again for your effort in this--apologies for miscommunicating the original question.

Evan 

aro...@gmail.com

Sep 13, 2016, 8:09:59 PM
to golang-nuts, jo...@souvestre.com
The way to test it is something like this:  https://play.golang.org/p/8hN9q9ibIc

Specifically, have the handler push to start & finish channels. In the monitoring goroutine, you can record events for start, finish, & close (and anything else you want to check, maybe Queue?).  After the call to Close() completes, you can check through the event history.  Depending on how you set up your test, you could see, for example, that nothing starts after the CLOSED event.

The test that I made is a little weak in that it tries to verify that exactly N are started, which means that to avoid races I don't try to submit to-be-rejected requests until after the channel is closed.  Perhaps it would be better to also have a separate test that continuously submits requests (e.g. until 100 have failed) and then checks the event log to make sure that none started after close and that all that started before close finished before Close returned.

I think that satisfies your testing criteria, although it doesn't 100% verify that everything is finished before Close returns -- it's possible that you get lucky and the entire machine freezes on return from Close() and then the queued goroutines flush before the call to read the event log.  If you really want to make that testable, you can put a hook in Close() that sends a signal before it returns.
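A condensed sketch of that event-log strategy (illustrative names, not the playground code verbatim): tasks record START and FINISH, Close records CLOSED after it acquires the write lock, and the test inspects the recorded order afterwards.

```go
package main

import (
	"fmt"
	"sync"
)

type handler struct {
	mu     sync.RWMutex
	events chan string
}

func (h *handler) Handle(ready *sync.WaitGroup) {
	go func() {
		h.mu.RLock()
		defer h.mu.RUnlock()
		h.events <- "START"
		ready.Done() // tell the test this task is running
		h.events <- "FINISH"
	}()
}

func (h *handler) Close() {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.events <- "CLOSED" // sent only after every read lock is released
}

// eventLog runs n tasks, waits until all are running, closes, and
// returns the recorded event history.
func eventLog(n int) []string {
	h := &handler{events: make(chan string, 2*n+1)}
	var ready sync.WaitGroup
	ready.Add(n)
	for i := 0; i < n; i++ {
		h.Handle(&ready)
	}
	ready.Wait() // every task holds a read lock
	h.Close()
	close(h.events)
	var log []string
	for e := range h.events {
		log = append(log, e)
	}
	return log
}

func main() {
	log := eventLog(5)
	// If Close really waits, CLOSED is the final event in the history.
	fmt.Println("last event:", log[len(log)-1])
}
```

Because each FINISH is sent before that task releases its read lock, and CLOSED is sent only after Close acquires the write lock, a correct Close deterministically makes CLOSED the last event; a broken Close shows up as task events after CLOSED.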

- Augusto

aro...@gmail.com

Sep 13, 2016, 8:19:10 PM
to golang-nuts, jo...@souvestre.com, aro...@gmail.com
By the way, the latest code at https://play.golang.org/p/QTkSJAOPtE isn't great, because once Close() is called, new tasks will be blocked until existing tasks finish.

That is, once Lock() is called in Close(), subsequent RLock()s are blocked.  At least, this is documented in an old version of the docs (1.5.2):
Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available. To ensure that the lock eventually becomes available, a blocked Lock call excludes new readers from acquiring the lock.

However, current docs don't indicate that: https://godoc.org/sync#RWMutex.Lock

So if the processing takes a long time, many goroutines may be stuck waiting for it to resolve before being rejected.
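That writer-preference behavior can be demonstrated directly. A small timing-based sketch (using TryRLock, which was added in Go 1.18, well after this thread, so this is a modern illustration): while a writer is blocked in Lock, a new read lock is not granted.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// newReaderBlocked reports whether a new read lock is refused while a
// writer is blocked in Lock. The sleep gives the writer goroutine time
// to reach Lock, so this is illustrative rather than a strict proof.
func newReaderBlocked() bool {
	var mu sync.RWMutex
	mu.RLock() // an in-flight "task" holds a read lock

	go func() {
		mu.Lock() // blocks: a reader is active
		mu.Unlock()
	}()
	time.Sleep(100 * time.Millisecond) // let the writer announce itself

	ok := mu.TryRLock() // refused: a writer is pending
	if ok {
		mu.RUnlock()
	}
	mu.RUnlock() // release the original reader; the writer proceeds
	return !ok
}

func main() {
	fmt.Println("new reader blocked behind pending writer:", newReaderBlocked())
}
```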

- Augusto

Egon

Sep 14, 2016, 1:41:44 AM
to golang-nuts, egon...@gmail.com, aro...@gmail.com
On Wednesday, 14 September 2016 00:58:39 UTC+3, Evan Digby wrote:
Hi Egon,

Thanks for that. It seems to implement the same requirements as implemented in my example, although I prefer my implementation as it doesn't require a sleep/loop. 

I used sleep/loop because I probably was thinking about sleep too much at that time :D. It could have been done with a WaitGroup.

Anyways, coming back to the problem at hand... 
As I said there are two properties that a good concurrency primitive should have:

1. It should be easy to understand.
2. It should have a formal or an informal proof.

Tests are less useful unless they are supported by the scheduler. If they are not, then you pretty much end up with tests that don't cover the "hard stuff" -- e.g. the problem in your initial code.

Now you can write tests that implement the program state-machine... crude version https://play.golang.org/p/wBTbA2bQTP ... then write your whole program in terms of Routine -- where each func is an "atomic" instruction... then write the Step function so that it goes through all possible orderings... or alternatively use randomized testing and run through only N different orderings... But as you can imagine, it's pretty annoying. (Just a top-of-the-head idea.)

Now there are "testing frameworks" that can help here as well... e.g. https://www.youtube.com/watch?v=zi0rHwfiX1Q ... There's https://golang.org/pkg/testing/quick/ that provides some support for it... i.e. you need to add an "Action" generator and a "State" primitive... and each call to Check does a random action and verifies whether "State" is in a consistent state. (You may need to add an additional func "WouldWait" to do it.)

For actually verifying/proving -- people involved with Raft have come up with an easy-to-use framework, relatively speaking... http://verdi.uwplse.org/ ... to say the least, proper verification requires significant effort.

Here's the simplest I was able to make the needed primitive: https://play.golang.org/p/MkmarqPzr1 ... and instead of testing "Do" and "Finish" you test the rest of the interface -- they are deterministic -- and non-racy. (Using testing/quick here would probably be ideal.)

+ Egon

Evan Digby

Sep 14, 2016, 2:28:05 AM
to golang-nuts, egon...@gmail.com, aro...@gmail.com
Hi All,

Firstly, I really appreciate the care and attention everyone is putting into this. I'm finding this challenge fun to navigate.

Secondly, I don't feel so silly anymore because clearly this is a non-trivial task. 

I'm going to have to take a bit to let the last couple of suggestions sink in and figure out how to apply them in this case. Too tired to completely grok them tonight (I feel you Egon!)

I'm starting to think there is some value in just sleeping temporarily (in the test only of course), even though I find this to be a less than desirable practice. In this case it may be the least complex and most reliable way. 

That said, Egon and Augusto are absolutely correct: that test avoids validating that no new requests enter the RLock following Close, even if it successfully validates that all requests which had already entered the RLock are completed. Putting the done-channel check inside the RLock makes the current code work, but the value of the test is ensuring both that the current code works as desired and that future refactors don't break it. It leaves us wide open for someone to make the mistake I made when transcribing to the Go Playground.

So the simple "sleep" solution validates one case, but leaves me with another potentially more difficult case to validate. 

To touch on Augusto's point, I agree that it's less than desirable to have requests that "accidentally" made it in after close is called wait for all other tasks to complete before getting a proper HTTP response. 

The production implementation of "Close" actually waits for the listener to close and for http.Serve to exit completely before attempting to take a lock on the tasks; however, since Serve is launching goroutines, Serve may exit with a connection in the state between the call to the handler and taking out the RLock. 

In our production environment the risk of that should be minimal, because procedure dictates that traffic is shut off before gracefully shutting down the process. That's all fine and good as a procedure, but it leaves the code in a state where it doesn't meet the requirements without proper ops procedures. This may be a tradeoff we accept, but it leaves things precarious for future developers and ops who inherit this code.

Okay, time for me to sleep!

Thanks again everyone for the time you put into this.

Evan