Last week I had a chance to deploy the latest release of vCenter Log Insight 2.0 (currently in public beta) in my lab to give it a spin. I must say, I am very impressed with the slick new UI and some of the new capabilities such as the scale-out and high availability features.
The actual deployment of the virtual appliance is pretty straightforward. The only thing I would mention about selecting the OVF deployment size is that the default "Small" option is not the smallest configuration possible. There is actually an "Extra Small" option in the drop-down menu which is targeted at POCs and lab evaluations, and it helps minimize the resource footprint in lab environments.
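If you prefer to script the deployment itself, ovftool can select that profile at deploy time. Below is a minimal sketch only: the deployment option ID ("xsmall") is an assumption (run ovftool against the OVA to list the real IDs), and all names, paths, and the vi:// target are placeholders:

ovftool --acceptAllEulas \
  --name=loginsight01 \
  --deploymentOption=xsmall \
  --datastore=datastore1 \
  --network="VM Network" \
  VMware-vCenter-Log-Insight.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster'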
Something I am always interested in when evaluating a new solution is how easily it can be configured in an automated, unattended fashion. With the help of some of the Log Insight folks, I was able to create a shell script that performs a basic configuration of Log Insight, including the backend database, admin password, and NTP servers.
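The general shape of such a script is sketched below. This is a minimal sketch rather than the original: the XML content is a placeholder (capture the real schema from a UI-configured system, as described in the next paragraph), and the service restart command is an assumption.

#!/bin/bash
# Minimal sketch of an unattended Log Insight configuration,
# run on the appliance itself after first boot.

CONFIG_DIR=/storage/core/loginsight/config

# Write the first numbered configuration file. The XML below is a
# placeholder; substitute the schema captured from a configured system.
cat > "${CONFIG_DIR}/loginsight-config.xml#1" << 'EOF'
<config>
  <!-- placeholder: database location, admin password and
       NTP server entries go here, per the captured schema -->
</config>
EOF

# Restart the service so the new configuration takes effect
# (assumed init script name).
service loginsight restart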
If you decide you want to automate additional settings, the way to accomplish this is to first configure everything from the Log Insight configuration UI. Once you are happy with the configuration, SSH into your Log Insight system. In /storage/core/loginsight/config you will find several configuration files named loginsight-config.xml#X, where X is a number. The file with the highest number contains the latest changes to Log Insight, including the configurations you made through the UI. You can then take that file and update the script to automate the other configuration options.
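For example, the following one-liner shows the numbered configuration files with the most recent one last (ls -v sorts the numeric suffixes naturally):

# Highest-numbered file holds the latest configuration
ls -v /storage/core/loginsight/config/loginsight-config.xml#* | tail -1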
The purpose of this document is to act as a simple guide for proofs of concept involving VMware Cloud Foundation 4.1, covering the associated infrastructure tasks required to configure and manage software-defined infrastructure.
This document is intended for experienced data center cloud administrators who deploy VMware Cloud Foundation in their organization's data center.
This document is not a replacement for VMware product documentation; rather, it should be thought of as a guide that augments existing guidance throughout the lifecycle of a proof-of-concept exercise.
Official product documentation should supersede the guidance documented here if there is a divergence. Support capabilities, minimums, and maximums for any statements made in this document should be cross-checked against the official VMware Cloud Foundation product documentation at https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html, which may contain more recent updates or amendments to what is stated here.
This document is laid out in several distinct sections to make the guide more consumable depending on the use case and proof-of-concept scenario, as the guide aims to offer a structured approach to evaluating VCF features.
Section 4, Solution Deployment Guidelines: proof-of-concept guidance for deploying SDDC infrastructure using specific features such as vSphere with Tanzu, stretched clusters, vVols, or vLCM.
To plan for a successful VCF POC, a considerable number of external requirements must be met.
The key to a successful plan is to use a reasonable hardware configuration that resembles what you plan to use in production.
Physical Network and External Services
Certain requirements, such as routable VLANs, appropriate MTU sizes, and DNS and DHCP services, must be met. In summary:
AVNs, or Application Virtual Networks, are optional to configure but highly recommended for evaluating vRealize Suite and vSphere with Tanzu.
To use Application Virtual Networks (AVNs) for vRealize Suite components you also need:
Physical Hardware and ESXi Hosts
Refer to the VMware vSAN Design and Sizing Guide for information on design configurations and considerations when deploying vSAN. Be sure the hardware you plan to use is listed on the VMware Compatibility Guide (VCG). BIOS, firmware, and device driver versions should also be checked to ensure they are up to date according to the VCG.
SDDC Manager and other vSphere, vSAN, and NSX components that form the core of VMware Cloud Foundation are initially deployed to an environment known as the Management workload domain. This is a special-purpose grouping of systems devoted to managing the VMware Cloud Foundation infrastructure.
Management Workload Domain Logical View:
In addition to the Cloud Foundation components that are provisioned during the bring-up process, additional virtual machine workloads may be deployed to the Management workload domain if required. These optional workloads may include third-party virtual appliances or other virtual machine infrastructure workloads necessary to support a particular Cloud Foundation instance.
The vCenter Server instance (with embedded Platform Services Controller) deployed to the Management workload domain is responsible for SSO authentication services for all other workload domains and vSphere clusters that are subsequently deployed after the initial Cloud Foundation bring-up is completed.
VMware Cloud Foundation (VCF) deployment is orchestrated by the Cloud Builder appliance, which builds and configures the VCF components. To deploy VCF, a parameter file (in the form of an Excel workbook or JSON file) is used to set deployment parameters such as host names, IP addresses, and initial passwords. Detailed descriptions of each parameter are available in the official VMware Cloud Foundation documentation.
The Cloud Builder appliance should be deployed on an existing vSphere cluster, a standalone host, or a laptop (requires VMware Workstation or VMware Fusion). The Cloud Builder appliance must have network access to the management network segment defined in the parameter file to enable connectivity to the ESXi hosts that compose the management workload domain.
Alternatively, the parameter workbook may also be downloaded from the Cloud Builder appliance after it has been deployed.
Once the workbook has been completed, the file is uploaded to the appliance, whereupon a script converts the Excel workbook to a JSON file. This JSON file is then validated and used in the bring-up process.
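For reference, recent Cloud Builder releases also expose bring-up through a REST API, so the validation step can be driven without the UI. The sketch below assumes the /v1/sddcs/validations endpoint from the VCF API reference; the hostname, credentials, and file name are placeholders:

# Submit the bring-up JSON spec to Cloud Builder for validation
# (placeholder hostname, credentials and file name).
curl -k -u 'admin:VMware1!' \
  -H 'Content-Type: application/json' \
  -d @deployment-parameters.json \
  https://cloudbuilder.example.com/v1/sddcs/validations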
The VMware Cloud Foundation YouTube channel is a useful resource to reference alongside this guide.
Parameters required for configuring VCF during the bring-up process are entered into an Excel workbook, which may be downloaded from the Cloud Builder download page or from the appliance itself. Each version of VCF has a specific version of the parameter workbook associated with it.
There are several worksheets within the Excel workbook. Certain fields are subject to validation based on inputs elsewhere in the workbook. Care should be taken not to copy/paste cells, or otherwise alter the structure of the spreadsheet.
Note: The MTU used here is not reflective of a production environment; it was chosen due to internal lab restrictions when creating this document. Supported MTU sizes are 1600-9000 for NSX-T based traffic.
Specifications related to host network configurations, as well as object names within the vSphere hierarchy are also specified within this worksheet.
To view an interactive demonstration of this process with step-by-step instructions, please visit Deployment Parameters Worksheet in the VCF resource library on core.vmware.com.
To support Application Virtual Networks (AVNs), BGP peering between the NSX-T Edge nodes and the upstream network switches is required for the management domain.
The diagram below shows an overview of the BGP AS setup between the two NSX-T Edges deployed with VCF and the physical top-of-rack switches:
Inside the rack, the two NSX-T edges form one BGP AS (autonomous system). Upstream, we connect to two separate ToR switches, each in its own BGP AS. The two uplink VLANs connect northbound from each edge to both ToRs.
The BGP configuration is defined in the parameter spreadsheet, in the 'Deploy Parameters' tab, under the section 'Application Virtual Networks'. We define the ToR details (as per the diagram above), with the respective IP address, BGP AS and password:
To complete the peering, the IP addresses of the two edges, together with their ASN, should be configured on the ToRs as BGP neighbors.
Note: The BGP password is required and cannot be blank. NSX-T supports a maximum of 20 characters for the BGP password.
The Cloud Builder appliance must be able to resolve and connect to the NSX-T edges in order to validate the BGP setup.
Note that for the purposes of a PoC, virtual routers (such as Quagga or VyOS) can be used as BGP peers. In this case, make sure that northbound communication for NTP and DNS is available.
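For reference, the ToR side of the peering in a Quagga/FRR-style bgpd configuration might look like the sketch below. All ASNs, addresses, and the password are illustrative values only and must match what was entered in the parameter workbook:

! One ToR in its own AS, peering with both edges in the shared edge AS
router bgp 65001
 neighbor 192.168.16.2 remote-as 65003
 neighbor 192.168.16.2 password VMware1!
 neighbor 192.168.16.3 remote-as 65003
 neighbor 192.168.16.3 password VMware1!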
Hardware components should be checked to ensure they align with the VMware vSphere Compatibility Guide (VCG). Drives and storage controllers must be vSAN certified, and firmware/drivers must be aligned with those specified in the VCG. See section 4.1.1 for a full list of host requirements.
Note that VCF requires identical hardware and software configuration for each ESXi host within a given workload domain, including the Management workload domain.
ESXi should be installed on each host. Hosts must match the ESXi build number specified in the VCF Bill of Materials (BOM) for the version of VCF being deployed. Failure to do so may result in failures to upgrade ESXi hosts via SDDC Manager. It is permissible to use a custom image from a hardware vendor as long as the ESXi build number still matches the VCF BOM. The BOM may be located within the Release Notes for each version of VCF.
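A quick way to verify this is to query each host over SSH and compare the reported build number with the BOM; the host names below are placeholders:

# Print the ESXi version and build for each host in the domain
for host in esxi01 esxi02 esxi03 esxi04; do
  ssh root@"${host}.example.com" vmware -v
done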