Unable To Download Versions File From Site Recovery Manager Server


Brigitte Bjork

unread,
Jan 18, 2024, 10:31:20 AM
to tranangaical

Unable to retrieve pairs from extension server at :9086/vcdr/vmomi/sdk. Unable to connect to Site Recovery Manager Server at X.X.X.X:9086/vcdr/vmomi/sdk. Reason: java.net.ConnectException: Timeout connecting to [/X.X.X.X:9086]






Site Recovery Manager requires a management network connection between paired sites. The Site Recovery Manager Server instances on the protected site and on the recovery site must be able to connect to each other. In addition, each Site Recovery Manager instance requires a network connection to the Platform Services Controller and the vCenter Server instances that Site Recovery Manager extends at the remote site. Use a restricted, private network that is not accessible from the Internet for all network traffic between Site Recovery Manager sites. By limiting network connectivity, you limit the potential for certain types of attacks.
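A quick way to verify the management-network requirement above is to probe the paired server's SRM SDK port (9086 in the error above) from each site before digging into SRM itself. The following is a minimal Python sketch; the host name in the commented example is a placeholder you replace with your own:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host): probe the remote SRM extension port.
# if not can_reach("srm.recovery.example.com", 9086):
#     print("Port 9086 unreachable - expect java.net.ConnectException timeouts")
```

A False result here points at firewalls or routing between the sites rather than at the SRM services themselves.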

When Site Recovery Manager recovers a vNIC to an NSX-T opaque network on a recovery site, after performing reprotect and failback to the original protected site, Site Recovery Manager is unable to apply IP subnet rules for this vNIC.

When you have protected VMs attached to networks with network labels different from the ones that exist on the recovery site, Test/Recovery/Reprotect operations succeed, but dummy networks with the same network labels as on the protected site might be created on the recovery vCenter Server. Dummy networks are created only once, not every time you run Test/Recovery/Reprotect.

If you put protected virtual machines in recovery plans, then delete all recovery plans containing these VMs, and export your configuration with the VMware Site Recovery Manager 8.1 Configuration Import/Export Tool, the VM recovery settings for those VMs are exported but you are unable to import them later. If you try to import your settings, you see errors like:
Error while importing VM settings for server with guid '6f81a31e-32e0-4d35-b329-783933b50868'.
The rest of your exported configuration is properly imported.

If you use VMware Cloud on AWS as a disaster recovery site and you have configured Hybrid Linked Mode, the Site Recovery plug-in for the vSphere Client shows a Not installed error for the vSphere Replication and Site Recovery Manager services. Opening the Configure Replication wizard from the vSphere Client shows the Cannot find healthy Site Recovery UI error.

Workaround:
1. Ignore the error and open the Site Recovery user interface at the cloud site - either from the VMC UI Add Ons tab, or directly at
2. Open the Configure Replication wizard from the Site Recovery user interface.

When you use the DR IP Customizer tool in a multiple vCenter Server environment, for example a setup with federated PSCs where more than one vCenter Server instance is available on each site, you must specify the '--vcid UUID' option so the tool can gather networking information about the virtual machines protected by Site Recovery Manager. If you provide the secondary site vcid, the DR IP Customizer tool connects to the secondary Site Recovery Manager server, which does not store the network information for VMs protected with SPPGs. Providing the vcid from the secondary site results in connecting to the wrong vCenter Server, and the VMs are not listed in the generated CSV file.
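After generating the CSV with the correct vcid, you can sanity-check that protected VMs actually appear in it. A minimal sketch, assuming a comma-separated file with a 'VM Name' header column (the column name is an assumption; check the header row of your generated file, since layouts vary by tool version):

```python
import csv

def vm_names(csv_path: str, name_column: str = "VM Name") -> list[str]:
    """Return the distinct VM names found in a generated CSV.

    name_column is illustrative - confirm it against the header row
    of the file your DR IP Customizer version produces.
    """
    with open(csv_path, newline="") as fh:
        reader = csv.DictReader(fh)
        seen: list[str] = []
        for row in reader:
            name = (row.get(name_column) or "").strip()
            if name and name not in seen:
                seen.append(name)
    return seen

# An empty result suggests the tool connected to the wrong vCenter Server,
# e.g. because the secondary site's vcid was supplied.
```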

When you try to export the report from the Recovery Plan History or the Recovery Steps screens using the Microsoft Edge browser, you get the following errors in the dev console.
ERROR XML5610: Quote character expected.
ERROR Error: Invalid argument.
This is a known Microsoft Edge browser issue with the XSLTProcessor used to transform the server's XML into HTML.

If you recover an encrypted VM and the encryption key used on the protected site is not available on the recovery site during the recovery process, the recovery fails when Site Recovery Manager powers on the VM.

If you have a VM with multiple disks that are replicated with vSphere Replication to different vSphere Virtual Volumes datastores on the secondary site, a test recovery operation fails. During a test recovery, vSphere Replication tries to create Linked Clones for the vSphere Virtual Volumes replica disks, but the operation fails because Linked Clones across different datastores are not supported. vSphere Replication creates Linked Clones only during a test recovery. The planned recovery, unplanned recovery, and reprotect complete successfully.

If you use a VSS network for which you have not configured a regular network mapping and you run disaster recovery on a recovery plan that contains a storage policy protection group, Site Recovery Manager creates a temporary placeholder mapping for this network. When you complete the temporary placeholder mapping, a network might appear on the secondary site that has the same name as the network on the primary site. If you did not explicitly create this network, it is not a genuine network. However, it is possible to select it as the target for the temporary placeholder mapping and recovery will succeed. The network is then displayed as inaccessible after the recovery completes, even though the recovered VMs are shown as being connected to this network on the recovery site.

If, when you create network mappings, you configure a specific network mapping for testing recovery plans, and if you subsequently delete the main network mapping, the test network mapping is not deleted, even if the recovery site network that you configured is not the target of another mapping.

A protected virtual machine can lose its protection status, as well as its recovery settings, when you rename the datastore associated with the virtual machine. To avoid losing the recovery settings, shut down the Site Recovery Manager server first, then rename the datastore.

Workaround: To restore the protection status, restart the protected site Site Recovery Manager server or remove the affected datastore from the protection group and then add it back, then reconfigure recovery settings.

Workaround: There is no workaround for incorrect object names in inventory mappings. Check the history report from the failed test or recovery workflow that caused the placeholder mappings to be created. For example, if you know the protected site inventory, you can determine the protected site datacenter, folder, and resource pool that contained the protected virtual machine that failed to recover due to a missing mapping.

After you add a prompt or command in Recovery Steps > Recovery View, you can see the same prompt or command in the test view. However, if you try to edit a prompt or command in the test view, the prompt or command specific to the recovery view might disappear from the list of steps.

When you delete the recovery plan and protection group from the SRM inventory, the placeholder VM is still visible on the recovery site. An error occurs when you try to create a new protection group with the same datastore and virtual machine. When you try to manually delete the placeholder virtual machine from the vCenter Server inventory, an error occurs. Site Recovery Manager marks the virtual machine as orphaned.

If the protection site becomes unreachable during a deactivate operation or during RemoteOnlineSync or RemotePostReprotectCleanup, both of which occur during reprotect, then the recovery plan might fail to progress. In such a case, the system waits for the virtual machines or groups that were part of the protection site to complete those interrupted tasks. If this issue occurs during a reprotect operation, you must reconnect the original protection site and then cancel and restart the recovery plan. If this issue occurs during a recovery, it is sufficient to cancel and restart the recovery plan.

If you use Site Recovery Manager to protect datastores on arrays that support dynamic swap, for example Clariion, running a disaster recovery when the protected site is partially down or running a force recovery can lead to errors when rerunning the recovery plan to complete protected site operations. One such error occurs when the protected site comes back online, but Site Recovery Manager is unable to shut down the protected virtual machines. This error usually occurs when certain arrays make the protected LUNs read-only, making ESXi unable to complete I/O for powered on protected virtual machines.

Running cleanup after a test recovery can fail with the error Error - Cannot unmount datastore 'datastore_name' from host 'hostname'. The operation is not allowed in the current state. This problem occurs if the host has already unmounted the datastore before you run the cleanup operation.

When a protection group contains no virtual machines and you run a recovery plan of this protection group in planned migration mode from the remote Site Recovery Manager server, the operation fails. The plan goes into Incomplete Recovery state and cannot be deleted and the LUN disconnects from both protection and recovery hosts.

I understand that you are encountering an issue with adding a new server to an existing Protection Group in HPE Nimble storage while configuring your SRM disaster recovery groups. The error message you mentioned, "Unable to find a matching consistency group at the remote site for the local consistency group," indicates that there is a mismatch or inconsistency between the local and remote consistency groups.

Another issue I saw when using the vRO SRM plugin was that when trying to add a second SRM server (the Recovery site), the plugin fell apart. It seems that the general idea is you only automate your Protected site with this plugin, and not both sites through a single vRO instance.

Two main things can cause this error message: two instances of the same datastore existing at the same disaster recovery (DR) site, or two datastores containing the same data, even if they have different names. You can correct this problem by rescanning the host bus adapter on the DR site, which should cause one of the datastores to disappear.
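The duplicate-datastore condition above can be spotted before you rescan by grouping the datastores reported at the DR site by their backing identifier and flagging any identifier that appears more than once. A minimal sketch with illustrative field names ('name' and 'uuid' are assumptions, not a real vSphere API shape; an inventory query via pyVmomi or similar would supply the data):

```python
from collections import defaultdict

def find_duplicates(datastores: list[dict]) -> dict[str, list[str]]:
    """Map each backing UUID seen on more than one datastore to the
    list of datastore names that share it."""
    by_uuid: defaultdict[str, list[str]] = defaultdict(list)
    for ds in datastores:
        by_uuid[ds["uuid"]].append(ds["name"])
    return {u: names for u, names in by_uuid.items() if len(names) > 1}

# Any UUID flagged here is a candidate for the HBA rescan described above.
```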
