Would love to see a follow-up with more examples of witness behavior when FTT is larger than 1. Duncan has a good post at -bricks.com/2013/10/24/4-minimum-number-hosts-vsan-ask/ that you could perhaps expand on.

If the host containing the witness fails, nothing happens: you still have a full set of data and greater than 50% of components. If it is a host failure and the failure lasts longer than 60 minutes, the witness will be automatically rebuilt.

However, with two replicas there is no way to differentiate between a network partition and a host failure (per _Virtual_SAN_Whats_New.pdf). How is the witness in vSAN able to differentiate between a network partition and a host failure?

I thought it was just the witness acting up, so I deleted it and deployed the latest OVA from VMware. The warning went away after changing the witness host, but the "vSAN cluster partition" error in Skyline Health is still present.

The hosts and the witness can communicate with each other, as tested with the following command from every host: vmkping -I VSANvmknic WitnessIP -s 1472 -d -c4. I was also seeing MTU errors while setting up the new witness, which were resolved once the MTU on its vSwitches was modified; all other network checks are green.

Configuring this requires tagging vmk0 on the data nodes for witness-type traffic (via CLI only) and tagging vmk0 on the witness for vsan-type traffic. Using WTS (Witness Traffic Separation) is usually the least complicated approach, as it removes a lot of the need for switch-side configuration (and the complications that can come with it) that you have here.
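A minimal sketch of that tagging, assuming the data nodes use vmk0 for management/witness traffic and a separate vmknic for vSAN data, and assuming ESXi 6.5 or later where the traffic-type option is available (the witness IP below is a placeholder):

    # On each data node: add vmk0 for witness traffic (WTS); the vSAN data vmknic stays as-is
    esxcli vsan network ip add -i vmk0 -T=witness

    # On the witness appliance: tag the interface that reaches the data nodes
    # (the default traffic type is vsan)
    esxcli vsan network ip add -i vmk0

    # Verify which vmknics carry which traffic type, on every host
    esxcli vsan network list

    # MTU/connectivity check from a data node to the witness (1472 + 28 bytes of headers = 1500)
    vmkping -I vmk0 <witness-ip> -s 1472 -d -c 4

With WTS, only the witness metadata traffic rides the tagged management vmknic; the actual vSAN data traffic stays on the direct-connect/vSAN interfaces between the two nodes.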
I left the port group on the witness appliance the way it came out of the box. That is, vmk1, with vSAN enabled on it, is linked to witnessSwitch, which is connected to vmnic1. This vmnic1 is network adapter 2 on the appliance VM. Network adapter 2 is connected to the vlan60 port group on the Dswitch, and the vlan60 port group is tagged to VLAN 60 (the vSAN VLAN). The Dswitch on this physical host has two uplinks, which are trunk ports to a network switch. The network switch carries VLAN 60 and trunks to a 10G switch upstream. The 10G switch has the two vSAN nodes connected to it with two uplinks each; those uplinks are trunk ports into the same Dswitch as before, where the corresponding vSAN vmks sit on the same vlan60 Dswitch port group.

Based on this, am I missing a step after deploying the witness appliance VM? Even though VLAN 60 is tagged in the VM settings of the witness, do I still need to set the VLAN ID to 60 on the internal witness port group it creates? That seems redundant, but is it needed for some reason? I have seen articles on static routes, but I don't think I need those, right, since this is all at the same site?

I played around with it some more and still could just not get it to work. Very strange, I think, since I set it up exactly the way the VMware documents say to, just without a direct connect between the two hosts. After further testing, the issue definitely has to do with the vSAN network subnet (10.137.60.0/24) being different from the management subnet (10.137.20.0/24), which is where the default gateway on the TCP/IP stack lives (10.137.20.1). Once I changed the vSAN witness vmknic to the .20.0 subnet and tagged witness traffic on the hosts' management vmknics, everything started working. I really don't know why it wouldn't work in the other scenario, because the link between the witness and the two nodes was L2. I could understand if I had a remote witness at a different site and would then need L3, static routes, etc.

Also, is there a supported process for re-tagging the witness vmknics? I tried to uncheck the vSAN tag in vCenter and check the other interface as vSAN, but the changes didn't commit. Does the witness need to be in maintenance mode first? Or do I need to make these changes in esxcli even though the options are there in vCenter?
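Not an official procedure, but from the host shell the re-tagging amounts to removing the old vSAN binding and adding the new one, and a static route covers the cross-subnet case; a rough sketch, assuming vmk1 is currently tagged and vmk0 should take over (the network and gateway below reuse the example values from the post above):

    # On the witness: move the vsan traffic tag from vmk1 to vmk0
    esxcli vsan network ip remove -i vmk1
    esxcli vsan network ip add -i vmk0

    # Confirm the new binding
    esxcli vsan network list

    # Only needed when the witness and data nodes sit on different L3 subnets:
    # route the vSAN network via the reachable gateway (example values)
    esxcli network ip route ipv4 add --network 10.137.60.0/24 --gateway 10.137.20.1
    esxcli network ip route ipv4 list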
I'm currently in the midst of upgrading our server infrastructure. This is for a small company, but they'd like as much redundancy as possible, which has led me to explore a 2-node vSAN, as our environment is already VMware-based. I understand that this also requires a third host serving as a witness. Most of the related documentation I've found deals primarily with larger implementations and discusses what occurs during host failures, but I've found little that addresses what happens when the witness fails in a simple 2-node implementation. Does the witness represent a single point of failure? What happens if the witness is offline/down, given there's no redundancy for it in this implementation? Will the servers continue working normally, or would it isolate the servers to protect them until the witness is back online? Would any downtime be experienced by the users? If things function normally without the witness, what would the time window be to get the witness back online?

The hosts and the witness would all be running locally in a single data center. I've seen the 2-node approach used mostly for branch or remote offices; that wouldn't be the case for us, as this represents our "data center." Pursuing a minimum of one more host would provide redundancy for the witness, but this isn't feasible financially with the additional hardware and software costs. Basically, I have to weigh the risks of the 2-node setup against sticking with a traditional two hosts running on a direct-attached SAN.

"Most of the related documentation I found deals primarily with larger implementations and also discusses what occurs during host failures etc but I've found little that addresses what happens when a witness fails with a simple 2-node implementation."

No, as per a) and b): you are essentially running as FTT=0 until you get the witness back, but a witness is fairly easy and fast to redeploy if the original is somehow kaput (minimal data rebuild time too, as each witness component, per object, is 16 MB).

"The hosts and the witness would all be running locally in a single data center. I've seen where the 2-node approach is often used from branch or remote offices. This wouldn't be the case for us as it represents our "data center." Pursuing the minimum of one more host would provide redundancy for the witness but this isn't feasible financially with the additional hardware and software costs."

As per the detailed vsan-stretched-cluster-2-node-guide, you should consider how you are going to run the network to this, e.g. L2/L3, and/or a separate vmk for witness traffic and vsan traffic (Witness Traffic Separation).

Thanks so much for your thorough response, Bob! That clarified a great deal and gives me more confidence as I continue to consider vSAN as a solution. Dell has assisted me with the server builds to ensure their compliance with the vSAN HCL. The servers and vSAN include their ProDeploy services, so they'll be assisting with the deployment and ongoing support. The servers will interface directly via 10G connections, while the witness/monitoring traffic will run through our standard 1G switch, keeping the traffic separated. In terms of our IT department, I'm it. I have the costs in line, so financially it's feasible. My apprehensions now deal primarily with my lack of resources and fear of the unknown. I'll continue reading the links you provided to gain a more thorough understanding as I make the final decision.

To make a long story short, we have a 2-node direct-connect vSAN 6.5 hybrid cluster at what we'll call 'Site A'. This setup uses the stretched-cluster configuration settings, and the witness appliance runs on a cluster at another site, 'Site B'. I temporarily removed the witness appliance from the vCenter inventory at 'Site A' for reasons I won't go into, but once I did that, the stretched-cluster configuration's 'Witness host' field went blank. Attempting to disable the stretched cluster and configure it again, I get an error.

I understand that a vSAN witness for a 2-node deployment can be either a physical host running the free version of ESXi (selected during the vSAN 2-node setup, not added to the cluster beforehand), though that does require vSAN licensing, or the vSAN witness appliance, which does not require a license.

The scenario: I have a requirement for a secure site with a 2-node vSAN. The witness will reside at the same site; however, it will be a much smaller server, and in an effort to keep costs down I would rather just virtualise it with a free copy of ESXi and run the witness appliance on that host instead.
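After redeploying or swapping a witness in any of the scenarios above, a quick way to confirm from a data node's shell that the cluster sees it again is something like the following (a sketch; the namespaces assume ESXi/vSAN 6.5 or later and output fields vary by version):

    # Cluster membership as seen by this host; with the 2-node configuration healthy
    # the sub-cluster member count should include the witness
    esxcli vsan cluster get

    # vmknics tagged for vsan/witness traffic on this host
    esxcli vsan network list

    # Unicast agent entries; on recent versions the data nodes carry an entry for the witness
    esxcli vsan cluster unicastagent list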