3PAR ESXi Implementation Guide

Today we will deploy the HP 3PAR Simulator on VMware vSphere ESXi 5.5. You can also deploy the simulator on VMware Workstation, although the steps are slightly different. Please note this is quite a long post, as it includes many screenshots and notes.

The 3PAR Simulator requires VMware ESXi 5.x or VMware Workstation 9 or 10. The simulator configuration requires deploying three VMs: two VMs simulating the cluster nodes, a third VM simulating the enclosure, and a private network configuration to enable communication between the three VMs.

The minimum system resources required for each VM are:


The first step is to deploy the OVF package to our virtual environment. The simulator comes with two OVF files, one for the cluster nodes and the other for the enclosure node. We will start by deploying the cluster node OVF.
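
If you would rather deploy from the command line than through the vSphere Client, VMware's ovftool can push the OVF straight to a host. The sketch below makes assumptions: the OVF file name, datastore, port group, and host address are placeholders for whatever your environment uses.

    ovftool --datastore=datastore1 --name=3par-cn01 --network="VLAN 500" cluster-node.ovf vi://root@esxi01.lab.local/

ovftool will prompt for the host password and report upload progress; repeat with a second name for the other cluster node.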


To begin with, if you intend to deploy a second simulator and link the two to test features like Remote Copy over IP (RCIP), now is the time to increase the RAM allocated to the cluster nodes. HP recommends at least 4GB of RAM when utilising the RCIP feature. I would also suggest creating a DRS rule to keep the simulator VMs together.


It is important to note that the following is a guide only and assumes you are configuring your environment in the same way as I have. My setup includes two ESXi hosts using standard vSwitches; if your setup differs, you will need to modify the steps to fit.


As you can see, we have the management vNICs for the simulator VMs in vSwitch0, connected to the port group VLAN 500. This vSwitch has uplinks via four pNICs. The simulator VMs also have vNICs in vSwitch1; this vSwitch has no pNICs connected to it.
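
If you need to create the internal-only vSwitch from the ESXi shell rather than the vSphere Client, something like the following should work on ESXi 5.x (the vSwitch and port group names are simply the ones I use; adjust to suit):

    # Create a standard vSwitch with no uplinks attached
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    # Add a port group for the simulator's private cluster network
    esxcli network vswitch standard portgroup add --portgroup-name=3PAR-Private --vswitch-name=vSwitch1

Because vSwitch1 has no pNICs, its traffic never leaves the host, which is another reason to keep the three simulator VMs together on one host (hence the DRS rule suggested earlier).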


We are finished and can test functionality by opening the 3PAR Management Console and trying to connect. Of course, you can also ping the management IP you defined earlier to confirm that basic connectivity exists.
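
For example, from any machine on the management VLAN (192.168.1.50 below stands in for whatever management IP you assigned):

    ping 192.168.1.50             # basic reachability
    ssh 3paradm@192.168.1.50      # CLI login, default password 3pardata
    showsys                       # once logged in, should report the simulated array's details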


If you want to make use of the RCIP feature, you will need a second simulator deployed. This is why HP provides two serial numbers (remember, we have only used one for this deployment). The process is the same as above for the second simulator: deploy the three VMs, configure them, and test connectivity.
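
Once both simulators are up, Remote Copy is driven from the 3PAR CLI on each array. The following is only a rough sketch; the exact arguments for creating targets and groups vary by InForm OS release, so check the HP 3PAR Remote Copy user guide:

    startrcopy      # enable the Remote Copy service on each array
    showrcopy       # verify service status and, later, targets and groups

Targets and groups are then created with creatercopytarget and creatercopygroup as described in the Remote Copy guide.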


Just to clarify my earlier post: the license expired issue appeared immediately after the initial attempt to log in, and you were then immediately logged out. So there was no opportunity to reach the install stage and enter the supplied serial numbers into the appliance.


Unfortunately this is a known issue confirmed by HP and Ivan. They are working on a new simulator download, which we will all have to grab a copy of. At the moment the best workaround is to modify the date/time in the VM BIOS, as described in some of the comments from other users.
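
For the BIOS date workaround to stick, VMware Tools must not sync the guest clock straight back to the host. The options below come from VMware's documented method for disabling periodic time synchronisation; add them to each node's .vmx file while the VM is powered off, then set the date in the VM's BIOS setup screen on the next boot:

    tools.syncTime = "FALSE"
    time.synchronize.continue = "FALSE"
    time.synchronize.restore = "FALSE"
    time.synchronize.resume.disk = "FALSE"
    time.synchronize.shrink = "FALSE"
    time.synchronize.tools.startup = "FALSE"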


I would suggest checking the date and time values on the 3PAR control nodes and on the system you are accessing from. It may be that the date is wrong, which could result in your system believing the certificate is either not yet valid or has expired. If the time is wrong, you need to SSH onto the control node with the account 3paradm (default password 3pardata) and use the setdate command.
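
For example (the IP is a placeholder, and I believe setdate accepts a Unix date-style MMDDhhmm[CCYY] argument on the simulator's InForm OS release; check the CLI help if the format differs):

    ssh 3paradm@192.168.1.50
    showdate                  # display the current date/time on the nodes
    setdate 080409152014      # e.g. 4 Aug, 09:15, year 2014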


Having an issue with configuration. I have configured cn01 following the above process. Once the cluster node has rebooted and come back up, I choose option 2, but it keeps coming back with the error "cluster not configured, choose option 1 to configure cluster node".


I did have a contact, but unfortunately I had to take that information down. The best I can suggest now is that you liaise with your HP representative or reseller to see if they can be of assistance. If I do get a contact at HP again in the future, I will update this comment.


VMware vSphere Metro Storage Cluster (vMSC) is a specific storage configuration which is commonly referred to as stretched storage clusters or metro storage clusters. These configurations are usually implemented in environments where disaster and downtime avoidance is a key requirement. This recommended practice document was developed to provide additional insight and information for operation of a vMSC infrastructure in conjunction with VMware vSphere. This paper explains how vSphere handles specific failure scenarios, and it discusses various design considerations and operational procedures. For detailed information about storage implementations, refer to documentation provided by the appropriate VMware storage partner.


Note that initially vMSC storage configurations had to go through a mandatory certification program. As of vSphere 6.0 this is no longer needed; vMSC configurations are now fully partner supported and can be found on the vmware.com website under PVSP (Partner Verified and Supported Products). Before purchasing, designing, or implementing, please consult the PVSP listing to ensure the partner has filed for PVSP and has tested with the correct vSphere versions. The vMSC listings typically also provide a link to the specifics of the partner's implementation; the PVSP listing for EMC VPLEX, for example, links to all tested scenarios and supported components for EMC VPLEX.


This document is intended for individuals with a technical background who design, deploy, or manage a vSphere Metro Storage Cluster infrastructure. This includes but is not limited to technical consultants, infrastructure architects, IT managers, implementation engineers, partner engineers, sales engineers, and customer staff. This solution brief is not intended to replace or override existing certified designs for vSphere Metro Storage Cluster solutions; it instead is meant to supplement knowledge and provide additional information.


A VMware vSphere Metro Storage Cluster configuration is a specific storage configuration that combines replication with array-based clustering. These solutions are typically deployed in environments where the distance between data centers is limited, often metropolitan or campus environments.


The primary benefit of a stretched cluster model is that it enables fully active, workload-balanced data centers to be used to their full potential, and it allows for extremely fast recovery in the event of a host failure or even a full site failure. The capability of a stretched cluster to provide this active balancing of resources should always be the primary design and implementation goal. Although often associated with disaster recovery, vMSC infrastructures are not recommended as primary solutions for pure disaster recovery.


This document does not explain the difference between a disaster recovery solution and a downtime- or disaster-avoidance solution. For more details on this distinction, refer to Stretched Clusters and VMware vCenter Site Recovery Manager: Understanding the Options and Goals, located here:


The question we typically get is whether there is a minimum license edition of vSphere required to create a vSphere Metro Storage Cluster. The answer is no. You can create a stretched cluster with any edition; however, if you require automated workload balancing from either a CPU or storage perspective, then the minimum license level is vSphere Enterprise Plus, as this edition includes vSphere DRS and Storage DRS.


The storage subsystem for a vMSC must be able to be read from and written to at both locations simultaneously. All disk writes are committed synchronously at both locations to ensure that data is always consistent regardless of the location from which it is read. This storage architecture requires significant bandwidth and very low latency between the sites in the cluster. Increased distances or latencies cause delays in writing to disk and a dramatic decline in performance: as a rough rule of thumb, light travels through fiber at about 5 microseconds per kilometer, so 100 km of site separation alone adds roughly 1 ms of round-trip time to every synchronous write, before any switch or array overhead. Excessive latency also precludes successful vMotion migration between cluster nodes that reside in different locations.


Each vMSC has a specific configuration required by the storage vendor. Make sure to work with your storage vendor for their vMSC setup process and details. Below are a few articles from our storage partners.


vMSC solutions are classified into two distinct types, based on a fundamental difference in how hosts access storage. It is important to understand the two types of stretched storage solutions because this influences design considerations. The two types are uniform host access, in which hosts in both data centers are connected to the storage systems in both data centers, and non-uniform host access, in which hosts at each site are connected only to the storage system at their own site.


With uniform host access configuration, hosts in data center A and data center B have access to the storage systems in both data centers. In effect, the storage area network is stretched between the sites, and all hosts can access all LUNs. NetApp MetroCluster is an example of uniform storage. In this configuration, read/write access to a LUN takes place on one of the two arrays, and a synchronous mirror is maintained in a hidden, read-only state on the second array. For example, if a LUN containing a datastore is read/write on the array in data center A, all vSphere hosts access that datastore via the array in data center A. For vSphere hosts in data center A, this is local access. vSphere hosts in data center B that are running VMs hosted on this datastore send read/write traffic across the network between data centers. In case of an outage or an operator-controlled shift of control of the LUN to data center B, all vSphere hosts continue to detect the identical LUN being presented, but it is now being accessed via the array in data center B.
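
One practical consequence of uniform access is that each vSphere host sees paths to the same LUN through both sites' storage. On an ESXi host you can inspect this with the standard path-listing commands (the device identifier below is just an example):

    esxcli storage nmp device list                  # path selection policy per device
    esxcli storage core path list -d naa.600a0b80   # all paths for a single LUN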


Our examples use uniform storage because these configurations are currently the most commonly deployed. Many of the design considerations, however, also apply to non-uniform configurations. We point out exceptions when this is not the case.


In this section, we describe the basic architecture referenced in this document. We also discuss basic configuration and performance of the various vSphere features. For an in-depth explanation of each feature, refer to the vSphere 6.5 Availability Guide and the vSphere 6.5 Resource Management Guide. We make specific recommendations based on VMware best practices and provide operational guidance where applicable. Our failure scenarios explain how these best practices prevent or limit downtime.
