VMware recently released VMware Cloud on AWS, and I was excited to read the solution's white paper; I have been explaining and sharing the details with my junior colleagues at the office. I was really interested in this product and watched several YouTube videos around the initial release. So I decided to write this article, based on the white paper, to share an initial overview of the product. I hope it helps my colleagues understand this release and gain some insight into it.
Components of the Cloud
Virtual machine management is separated from platform management, so the in-house IT team can look after their virtual machines without worrying about managing the underlying virtual platform.
Initial compute Cluster configuration
At the initial stage, the ESXi cluster is configured with 4 hosts, each with 512 GB of memory, for a total of 2 TB of memory in the cluster. Each host contains dual CPU sockets populated with a custom-built Intel Xeon Processor E5-2686 v4 package. Each socket contains 18 cores running at 2.3 GHz, resulting in a physical cluster core count of 144. At this initial stage you cannot change the ESXi host configuration. By scaling out, the environment can grow to 16 ESXi hosts, which results in 576 CPU cores and 8 TB of memory. DRS is set to its defaults, and VMware uses resource pools to manage customer workloads; customers can create child resource pools, but affinity rules cannot be configured in the initial release.
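As a quick sanity check on the numbers above, here is a small Python sketch of the scaling arithmetic (the per-host figures come from the white paper; the function name is just my own illustration):

```python
# Per-host specs of the initial VMware Cloud on AWS release
CORES_PER_SOCKET = 18
SOCKETS_PER_HOST = 2
MEMORY_GB_PER_HOST = 512

def cluster_capacity(hosts: int) -> dict:
    """Aggregate physical core count and memory for a cluster of `hosts` ESXi hosts."""
    return {
        "hosts": hosts,
        "cores": hosts * SOCKETS_PER_HOST * CORES_PER_SOCKET,
        "memory_tb": hosts * MEMORY_GB_PER_HOST / 1024,
    }

print(cluster_capacity(4))   # initial cluster: 144 cores, 2.0 TB
print(cluster_capacity(16))  # maximum cluster: 576 cores, 8.0 TB
```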
Initial HA Cluster configuration
- Host monitoring enabled
- Percentage-based admission control policy: 25%
- Host failures tolerated: 1
- VM and application monitoring enabled
- Host isolation response: power off and restart VMs
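The 25% admission-control reservation lines up with tolerating one host failure in a 4-host cluster. A minimal Python sketch of that relationship (the helper is mine, not a VMware API):

```python
def reserved_failover_pct(hosts: int, host_failures_tolerated: int = 1) -> float:
    """Percentage of cluster resources HA must reserve so the remaining
    hosts can absorb the configured number of host failures."""
    return 100 * host_failures_tolerated / hosts

# With 4 hosts and 1 tolerated failure, the reservation is 25%,
# matching the default policy listed above.
print(reserved_failover_pct(4))   # 25.0
# A 16-host cluster would only need 6.25% per tolerated host failure.
print(reserved_failover_pct(16))  # 6.25
```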
The VMware Cloud on AWS solution leverages an all-flash vSAN array as its storage. Each host has 8 NVMe devices with a total of 10 TB of capacity, distributed across two vSAN disk groups. The initial 4-host cluster therefore has 40 TB of disk capacity. You can scale out the datastore by adding additional ESXi hosts to the cluster, which increases the vSAN datastore capacity.
Management VMs consume only 0.9% of the total capacity. In each disk group, the write-caching tier leverages one NVMe device with 1.7 TB of storage, while the capacity tier leverages the other three NVMe devices with a combined 5.1 TB of storage. Usable capacity depends on the per-VM storage policy configuration of the vSAN cluster.
The initial disk fault tolerance method is RAID 1, and users can configure RAID 5/6 fault tolerance instead; however, a RAID 6 configuration requires at least 6 ESXi hosts in the cluster. Storage- or VM-level encryption is not available in the initial release, but AWS provides firmware-level encryption on the NVMe disks, and the encryption keys are not exposed to VMware or to customers. Management and customer workloads reside on the same vSAN capacity; however, the Cloud SDDC presents two separate logical datastores to keep management and customer VMs apart.
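To give a feel for how the storage policy affects usable space, here is a rough Python sketch using the standard vSAN space-overhead multipliers for these policies. The multipliers are general vSAN figures, not numbers from the white paper, and real usable capacity also depends on slack space and other policy settings:

```python
# Approximate vSAN capacity multipliers for common storage policies.
# RAID-5/6 use erasure coding and have minimum host-count requirements.
OVERHEAD = {
    "RAID-1 (FTT=1)": 2.0,     # full mirror, 2x the data footprint
    "RAID-5 (FTT=1)": 4 / 3,   # 3+1 erasure coding
    "RAID-6 (FTT=2)": 1.5,     # 4+2 erasure coding, needs >= 6 hosts
}

def usable_tb(raw_tb: float, policy: str) -> float:
    """Raw datastore capacity divided by the policy's space overhead."""
    return raw_tb / OVERHEAD[policy]

raw = 4 * 10.0  # initial cluster: 4 hosts x 10 TB raw capacity
for policy in OVERHEAD:
    print(f"{policy}: {usable_tb(raw, policy):.1f} TB usable")
```

The output illustrates why erasure coding is attractive once the cluster is large enough: RAID 1 leaves roughly 20 TB usable from the 40 TB raw, while RAID 5 would leave about 30 TB.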
With this release, these VMware clusters are restricted to a single AWS region and Availability Zone (AZ). The cool thing here is that failed ESXi hosts are replaced automatically from the pool of available hosts, and vSAN rebuilds automatically without user interaction.
Networking – VMware NSX
NSX is integrated with this cloud solution; as far as I know, it is a custom version of NSX optimized for this cloud environment. NSX connects the VMware ESXi hosts and abstracts the Amazon Virtual Private Cloud (VPC) networks. When the ESXi cluster is scaled out, the new hosts are automatically added to the logical networks. NSX services are delivered using an "as a service" cloud model, and this NSX version is fully compatible with standard VMware features such as vMotion.
There are two IPsec Layer 3 VPNs: one for management and one for VM workloads. Your on-premises vCenter Server and other management components connect to the SDDC cloud through the management VPN, while VM workloads connect through the second VPN. NSX is used for all networking and security and is decoupled from Amazon VPC networking. The compute gateway and DLR (Distributed Logical Router) are preconfigured as part of the prescriptive network topology and cannot be changed by the customer; customers must provide only their own subnets and IP ranges.
The SDDC ships with vSphere 6.5 and does not require any additional configuration in the SDDC cloud; it is available out of the box.
Automatic Host additions and Recovery
VMware Cloud on AWS allows you to add additional ESXi hosts to your cluster on demand and remove them when you no longer need them. The AWS infrastructure has access to a large pool of servers: hosts can be added to the cluster within a few minutes, and removed hosts are scrubbed and sent back to the pool. This helps customers maintain high SLAs and handle operational tasks without spending weeks purchasing additional servers and setting them up in racks and data centers. Basically, it eliminates the overhead of provisioning new servers in the data center, which is one of the key features of VMware Cloud on AWS. If a host in the cluster fails, the service will examine, reboot, or replace it. As VMware puts it, "Customers are never billed for hosts that are added to a cluster for maintenance or fault tolerance reasons."
Before a host is added to the cluster, all VMkernel ports and the relevant logical network configuration are applied to it. After the host joins the cluster, the vSAN datastore picks up the added capacity and VMs can start utilizing it.
vCenter Server Hybrid Linked Mode Functionality
Hybrid Linked Mode (HLM) allows customers to:
- Log in to the vCenter Server instance in their SDDC using their on-premises credentials
- View and manage the inventories of both their on-premises data center and the cloud SDDC from a single vSphere client interface
- Cold-migrate workloads between their on-premises data center and the cloud SDDC
At this stage, the VMware Cloud on AWS SDDC only allows migrating VMs from on-premises to the cloud SDDC, and users must be running vCenter Server 6.5d or later on-premises to support HLM. These can be considered limitations of the initial release.
Here is the YouTube demo on VMware on AWS presented by Dr. Matt Wood (AWS GM, Product Strategy) and Mark Lohmeyer (VMware VP Products)