On Mon, Apr 11, 2016 at 12:46 PM, Hridyesh Pant <
hridye...@gmail.com> wrote:
> Thanks Jeff, got it.
> One more question, about the recovery_mode option. The docs say:
>>>
> It is important that only one node is running in recovery mode! After this
> node has become the leader, other nodes can be started with regular
> configuration.
>>>
> If I have a 3-server Vault cluster behind the ELB and the leader
> crashes, one of the other two standbys will become leader
> automatically, right?
> So my doubt is: how will the old lock get removed automatically? Do I
> have to manually start the other server with the recovery_mode=1
> option?
> I don't want a manual start to unlock; for production support I am
> looking for something like the following, where we don't have a single
> point of failure:
> 1. All three servers are behind the ELB, with advertise_addr pointing
> to the ELB address.
> 2. One of the servers becomes the active node.
> 3. In case the active node crashes or is unreachable (health check
> fails), one of the other standby nodes becomes active and acquires the
> lock.
>
> could you please suggest how i can configure such workflow ?
Hi Hridyesh,
Unfortunately, this is a drawback of the DynamoDB storage backend. It
doesn't have locks tied to an expiring session; the lock is instead a
document with a conditional write. The leader is able to write to that
document and standbys are not; however, if the leader crashes without
deleting the lock document, the standbys have no way to know that it is
safe to ignore it. One consequence is therefore that in a crash
scenario, where the leader does not relinquish the lock cleanly, you
must use recovery mode.
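To make the failure mode concrete, here is a toy sketch of how a
conditional-write lock behaves — this is an illustration of the
semantics, not the backend's actual code, and the class and node names
are made up:

```python
class ConditionalWriteLock:
    """Toy model: the lock is a single document that can only be
    created via a conditional write (succeeds only if absent)."""

    def __init__(self):
        self.holder = None  # the stored lock document, or None

    def try_acquire(self, node):
        # Conditional write: only succeeds if no document exists yet.
        if self.holder is None:
            self.holder = node
            return True
        return False

    def release(self, node):
        # Only the current holder may delete its own document.
        if self.holder == node:
            self.holder = None

lock = ConditionalWriteLock()
assert lock.try_acquire("vault-1")       # leader wins the lock
assert not lock.try_acquire("vault-2")   # standby's write fails

# vault-1 crashes WITHOUT calling release(): the document persists,
# and with no expiring session a standby can never take over.
assert not lock.try_acquire("vault-2")

# Recovery mode amounts to clearing the stale document out-of-band,
# which is why only one node may run in that mode at a time.
lock.holder = None
assert lock.try_acquire("vault-2")
```

With a session-based backend like Consul, the equivalent of
`self.holder` would expire when the holder's session dies, so the last
two steps happen automatically.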
If this tradeoff is not acceptable to you, I suggest looking at
Consul, or one of the other backends that uses expiring session locks.
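For reference, switching to Consul is mostly a matter of changing the
storage stanza in the Vault config — the address and path below are
example values for your own setup:

```hcl
storage "consul" {
  address = "127.0.0.1:8500"  # local Consul agent
  path    = "vault/"          # KV prefix for Vault's data
}
```

Consul locks are tied to sessions with a TTL, so if the active node
crashes, its lock is released when the session expires and a standby
can take over without any manual recovery step.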
Best,
Jeff