Question about going from 1 node to 2 node.


Puneet Zaroo

Dec 17, 2014, 9:13:43 PM
to etcd...@googlegroups.com
I am running etcd version 0.5.0-alpha.3.

As far as I know, going from a single node to a 2-node cluster involves 2 steps:

1) Add a member node to the cluster:

"etcdctl member add infra3 http://10.0.1.13:2380", which will return information of the form

ETCD_NAME="infra3"
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380"
ETCD_INITIAL_CLUSTER_STATE=existing

2) Use the information returned in the first step as parameters to the etcd process being started on the second node.
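The two steps above can be sketched as a shell session. This is a sketch, not a verified transcript: the IP addresses and member names come from the example output above, and the flag names follow the etcd 0.5/2.0 command line (etcd also reads the `ETCD_*` environment variables directly).

```shell
# Step 1: on any existing member, register the new member's peer URL.
etcdctl member add infra3 http://10.0.1.13:2380

# Step 2: on the new node, use the values printed by step 1.
export ETCD_NAME="infra3"
export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380"
export ETCD_INITIAL_CLUSTER_STATE=existing

etcd -name "$ETCD_NAME" \
  -initial-cluster "$ETCD_INITIAL_CLUSTER" \
  -initial-cluster-state "$ETCD_INITIAL_CLUSTER_STATE" \
  -initial-advertise-peer-urls http://10.0.1.13:2380 \
  -listen-peer-urls http://10.0.1.13:2380 \
  -listen-client-urls http://10.0.1.13:2379 \
  -advertise-client-urls http://10.0.1.13:2379
```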

When going from 1 node to 2 nodes, the etcd cluster becomes inaccessible until the second node is powered on, and key refreshes also start failing. This seems to be by design, but is there a way to configure the behavior such that the single-node cluster remains operational between the 2 steps? E.g., is there a way to specify the quorum value, which could be bumped up from 1 to 2 once the second node has successfully joined?

thanks in advance,

- Puneet


Yicheng Qin

Dec 17, 2014, 11:16:24 PM
to Puneet Zaroo, etcd...@googlegroups.com
This is expected behavior, because one node does not reach the quorum of a two-node cluster.
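Concretely, an N-member cluster can only make progress while a majority, floor(N/2) + 1, of its members are up, so the single surviving node of a two-node cluster is below quorum. A quick sketch of the arithmetic:

```shell
# Quorum (majority) of an N-member cluster is floor(N/2) + 1.
# Note that 1- and 2-member clusters both tolerate zero failures.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n members: quorum = $quorum, tolerated failures = $(( n - quorum ))"
done
```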

There is currently no way to specify the quorum value.

We assume that membership changes are rare, and that bumping from 1 to 2 should only happen during disaster recovery or testing, so we have not taken this case into consideration.

Is there any specific case that needs this feature?

Thanks
Yicheng

--
You received this message because you are subscribed to the Google Groups "etcd-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to etcd-dev+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Puneet Zaroo

Dec 18, 2014, 2:59:09 PM
to etcd...@googlegroups.com, punee...@gmail.com
Yicheng,
Thanks for the reply. The use case is growing your cluster from a 1-node setup to 2 nodes. Or is the requirement that a cluster should start with at least 2 nodes? Another use case is when, due to a failure, a 2-node cluster goes down to 1 node. Or is the requirement there as well that at least a 2-node setup is needed for etcd availability?
regards,
- Puneet

Yicheng Qin

Dec 18, 2014, 10:51:44 PM
to Puneet Zaroo, etcd...@googlegroups.com
We recommend 3 to 9 etcd members in an etcd cluster when using it in production. Generally there is no need to reconfigure the cluster often.
[https://github.com/coreos/etcd/blob/master/Documentation/optimal-cluster-size.md]

It is not a good idea to run 2 members in a cluster, because the majority of 2 is 2, so making progress needs the agreement of all members.

Moreover, we support static bootstrap in 2.0, so users don't need to go from 1 node to 2 to bootstrap.
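For reference, a static bootstrap starts every member with the full cluster list up front rather than adding members one at a time. A sketch for one node, with illustrative names and addresses following the example earlier in the thread:

```shell
# Run on node 10.0.1.10; repeat on each node with its own name and URLs.
# All members are listed in -initial-cluster, and the state is "new".
etcd -name infra0 \
  -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -listen-peer-urls http://10.0.1.10:2380 \
  -listen-client-urls http://10.0.1.10:2379 \
  -advertise-client-urls http://10.0.1.10:2379 \
  -initial-cluster "infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380" \
  -initial-cluster-state new
```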

Puneet Zaroo

Dec 22, 2014, 1:24:33 PM
to etcd...@googlegroups.com, punee...@gmail.com
Thanks for the clarifications. I will think about how to incorporate this information into our setup, which does not keep a separate etcd cluster for consensus but instead embeds etcd within our software, which runs on all nodes. So configuring etcd is not a separate step.
- Puneet