
_BEST_ Download Vsan


Purlan Ruais

Jan 25, 2024, 3:15:41 PM
I have a 4-node hybrid cluster and the entire cluster went down hard today. VCSA runs on the vSAN cluster and it is down as well. I managed to bring up one node at a time and 3 of them have synced up. When I run vsan cluster get (on a good node), it shows 3 nodes (master, agent and backup). This would be fine, but the 4th node won't join and I think it has some data that the cluster needs. I have about 6 or so VMs that are "invalid" in the vSphere web client. One of those is vCenter.

Is there a way to force-add my 4th node to the cluster? I did vsan cluster leave (on the isolated node), rebooted, and did vsan cluster join -u with the UUID of the cluster, and it basically made itself its own cluster and set itself to master.

download vsan
Download: https://t.co/MBuC3WizcC

I had to shut down my 3-node vSAN cluster to move; this is my home lab. I have shut down before following the guide with no issues. I am on VMware 7.0 on UCS C220s. The cluster has been off for 30+ days (longest ever) and I just got it going again today (well, not going, but on). After restarting, all nodes came up just fine. I then removed them from maintenance mode in the CLI and proceeded to run the recover reboot helper script. This is the first time I have seen this error (below). I have checked the time and NTP is still working, but no go. I set the date back to just a couple of days after shutdown, still no go. I'm not sure where to start here, and it would be a real bummer if I've lost all the data and need to rebuild. In the UI all the VMs show "invalid" under status. Has anybody seen this or have any advice on where else to look?

ISPking, if you can't get the script working (it looks like a timing issue), you can just do what it does on recover, namely re-tag the vmk that was used for vSAN traffic (either using the host UI or esxcli) and re-populate the unicastagent list on each node if they are blank (see the esxcli sketch a few posts down).

This is something that I am going to have to do in the very near future. The only challenge this one has for me is that my vSAN cluster is only 4 nodes and will be moving to a new vCenter altogether. My question is: can the same steps be applied to splitting the vSAN into two two-node vCenters?

I need to destroy my whole vSAN cluster, currently 6.7. I did this once before a couple of years back and I remember that I had a hell of a time trying to get the disks back because I didn't do something first, I think; I can't remember what. I've found plenty of articles on deleting the vSAN itself, but my vCenter appliance is on the vSAN; this is my lab. 4 nodes with 1 x cache SSD and 2 x storage SSDs in each node. I am going to reload the nodes with ESXi 7 and then use VCF to rebuild, but I seem to remember the disks being marked as vSAN last time and I couldn't reuse them; I possibly even had to boot a third-party utility and blank them or something. Can anyone tell me how to tear this thing down in the neatest and shortest time possible, please? Thank you!
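For reference, a rough sketch of the esxcli side of the recovery described in the posts above. The interface name, IP address and UUIDs are placeholders, the exact flags vary between vSAN releases, and this is an outline rather than a tested runbook; check each command with --help on your build before running anything.

# Confirm which vmknic is tagged for vSAN traffic, and re-tag it if the tag is missing
esxcli vsan network list
esxcli vsan network ip add -i vmk1

# Check cluster membership and the unicast agent list on each node
esxcli vsan cluster get
esxcli vsan cluster unicastagent list

# If the unicastagent list is blank, add an entry for every OTHER node in the cluster
# (node UUIDs come from "esxcli vsan cluster get" on the respective hosts;
#  verify the flag set with "esxcli vsan cluster unicastagent add --help")
esxcli vsan cluster unicastagent add -t node -u 5f8d1234-ab12-cd34-ef56-0123456789ab -U true -a 192.168.10.11 -p 12321

# For an isolated node that has formed its own one-node cluster: leave and rejoin by cluster UUID
esxcli vsan cluster leave
esxcli vsan cluster join -u 52b1c2d3-e4f5-a6b7-c8d9-e0f1a2b3c4d5

Once the vSAN network tag and the unicastagent entries agree on all hosts, the nodes should re-form a single cluster, and the "invalid" VMs usually clear up on their own once their objects become accessible again.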
Seems to be working; slightly different path on my version of the UI: Hosts -> Cluster -> Configure tab, scroll down to vSAN and expand it, then Disk Management, and the disk groups are there. 2 out of 4 gone. I wasn't sure about all of that as the VCSA is on the vSAN, but I guess the important bits are in memory, so it continues to work even though I've pulled the rug out from underneath it. Thanks so much for this. I'm on a tight schedule to redeploy with VCF, NSX-T and vRealize, as we're taking all of that on board in the form of a very large VxRail in 2021 and I want to be the SME on the project!

Thanks, I thought I had said that my vCenter appliance was running on the cluster; that was why I came here for help, as I found several articles on Google saying to remove the disk groups and turn off HA, but then you said to do it and... no harm done, as I am redeploying the VCSA anyway. I'm not exactly keen on running the appliance on the single datastore, and this is a lab, so I am limited in other storage, though I could probably add another local disk in at least one of my hosts as a safe haven to put things on if I need to, like this. Funny thing is that Dell/EMC set my production VxRail up like this, with no extra datastores to put vCenter or anything else on, as the local disks are so small. I've never been completely comfortable with that, but 3 years later it's been OK. So, back to the problem: I was unable to remove the disk groups in the UI by either clearing or editing and deleting; the error was "Failed: Cannot change the host configuration...", but esxcli to the rescue! I was able to delete all three disks (or clear them, or whatever that does), and vdq -q now reports they are all eligible for vSAN use. So in my notes I am just going to put the esxcli commands to remove the disks on all the hosts, plus vdq to make sure that all is good before rebuilding (sketched below). That seems the simplest and most reliable way to do it, at least until I get around to putting a local storage device in one of the hosts. Thanks again for your help on Christmas Eve! Bonus points for that; now I can get up in the morning and build myself a Christmas present, a new vSAN cluster! lol. Christmas is pretty much a non-starter here this year; it's all about the food and having a nap afterwards :-). All the best! Bill

I also agree that the video is confusing. I think that "NetBackup support for VMware vSANs" currently means only support for intelligent VMware policies (DatastoreType Equal "vsan", etc.) and Resource Limits (maximum number of snapshots per DatastoreType).
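To go with Bill's post above, a minimal sketch of the esxcli/vdq teardown steps, assuming you are deliberately wiping every disk group on the host. The device name is a placeholder, and removing the cache-tier device destroys the whole disk group and its data, so double-check with esxcli vsan storage remove --help before running it.

# List the devices claimed by vSAN on this host; note the cache-tier (SSD) device of each disk group
esxcli vsan storage list

# Removing a disk group's cache device removes the entire disk group (destructive)
esxcli vsan storage remove -s naa.55cd2e404c1234ab

# Optionally drop the host out of the now-empty vSAN cluster
esxcli vsan cluster leave

# Confirm the devices report as eligible for vSAN use again
vdq -q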
vsan.check_limits cluster|host
Gathers and checks various vSAN-related counters, like components or disk utilization, against their limits. This command can be used against a single ESXi host or a cluster.

vsan.whatif_host_failures [-n|-s] cluster
Simulates how host failures would impact vSAN resource usage. The command shows the current vSAN disk usage and the calculated disk usage after a host has failed. The simulation assumes that all objects would be brought back to full policy compliance by bringing up new mirrors of existing data.

vsan.enter_maintenance_mode [-t|-e|-n|-v] host
Puts the host into maintenance mode. This command is vSAN-aware and can migrate vSAN data to another host, like the vSphere Web Client does. It also migrates running virtual machines when DRS is enabled.

vsan.resync_dashboard [-r] cluster
This command shows what happens when a mirror resync is in progress. If a host fails or is going into maintenance mode, you should watch the resync status here. The command can be run once or with a refresh interval.

vsan.proactive_rebalance [-s|-t|-v|-i|-r|-o] cluster
Starts a proactive rebalance that looks at the distribution of components in the cluster and proactively begins to balance the distribution of components across ESXi hosts.

vsan.proactive_rebalance_info cluster
Displays information about proactive rebalancing activities, including disk usage statistics and whether or not proactive rebalance is running.

vsan.host_evacuate_data [-a|-n|-t] host
This command is the data-evacuation part of entering maintenance mode, but without any of the vMotion tasks. The command evacuates data from the host and ensures that VM objects are rebuilt elsewhere in the cluster to maintain full redundancy.

vsan.ondisk_upgrade [-a|-f] cluster
The command rotates through all ESXi hosts in the cluster, performs pre-checks and upgrades the on-disk format to the latest version. The command performs a rolling upgrade by doing several verification checks prior to evacuating components from each of the disk groups. The allow-reduced-redundancy option allows upgrades when there are not enough resources in the cluster to accommodate disk evacuations.

vsan.v2_ondisk_upgrade [-a|-f] cluster
The command rotates through all ESXi hosts in the cluster, performs pre-checks and upgrades the on-disk format to the latest version. The command performs a rolling upgrade by doing several verification checks prior to evacuating components from each of the disk groups. The allow-reduced-redundancy option allows upgrades when there are not enough resources in the cluster to accommodate disk evacuations.

vsan.stretchedcluster.config_witness cluster witness_host preferred_fault_domain
Configures a witness host to form a vSAN Stretched Cluster. The name of the cluster, the witness host (path to the host object in RVC) and the preferred fault domain (label) are mandatory. Please note that this command neither creates nor assigns ESXi hosts to fault domains. You can use the esxcli vsan faultdomain set command to set fault domains from RVC.
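The vsan.* commands above are Ruby vSphere Console (RVC) commands run from the vCenter Server Appliance. Below is a minimal sketch of a session, assuming a vCenter named vcsa.lab.local, a datacenter named DC and a cluster named vSAN-Cluster (all placeholder names); inventory paths and option letters can differ between versions, so lean on RVC's built-in help.

# Start RVC from the appliance shell
rvc administrator@vsphere.local@vcsa.lab.local

# Navigate to the cluster object
cd /vcsa.lab.local/DC/computers/vSAN-Cluster

# Run the commands against the current object (".")
vsan.check_limits .
vsan.whatif_host_failures .
vsan.resync_dashboard .            # add -r <seconds> to keep refreshing
vsan.proactive_rebalance_info .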