VMware Cloud on AWS – Overview and Setup

VMworld 2017 was a great opportunity to learn about VMware on AWS or VMC (VMware Cloud).

What is being offered is an SDDC stack: ESXi, vSAN, and networking built on NSX. AWS will provide bare-metal hosts with dual 18-core 2.3 GHz CPUs, 512 GB of memory, and 16 TB of SSD storage; VMware will install and maintain the latest version of their ESXi hypervisor.

Cluster size starts at a minimum of 4 hosts, due to vSAN requirements, and scales up to 16 hosts.

A vCenter in its own SSO domain will be deployed in AWS and linked with your on-premises vCenter using Hybrid Linked Mode.

NSX is not a requirement for your on-premises vCenter to connect with VMC. The current offering requires an IPsec layer 3 VPN, but the indication I got from talking with folks at VMworld was that AWS Direct Connect (a dedicated 1 Gbps or 10 Gbps line) will be the preferred method in the future.

See here for a list of AWS Direct Connect locations; in most cases you will need an APN Partner to establish network circuits between the AWS Direct Connect location and your on-premises environment.

It is important to keep in mind this is a managed service: VMware installs, patches, and maintains ESXi and vCenter, and customers will not have root or admin access. The environment will be upgraded frequently, perhaps every quarter, and your on-premises environment must be no more than one major release behind the VMC version.

Pricing can be On-Demand per hour, or Reserved for 1 or 3 years.

The current public price list can be found here; expect around $24,000 per month for a 4-host cluster, On-Demand*.

*From my understanding egress traffic from the SDDC will also be added to the bill.
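A quick back-of-envelope check shows how the On-Demand figure above is reached. The per-host hourly rate below is an assumed illustrative number, not an official price; check VMware's current price list for real figures, and remember egress traffic is billed on top.

```python
# Rough monthly cost estimate for a 4-host On-Demand cluster.
# HOURLY_RATE_PER_HOST is an assumption for illustration only.
HOSTS = 4
HOURLY_RATE_PER_HOST = 8.37  # USD per host-hour (assumed rate)
HOURS_PER_MONTH = 730        # average hours in a month (8760 / 12)

monthly = HOSTS * HOURLY_RATE_PER_HOST * HOURS_PER_MONTH
print(f"~${monthly:,.0f} per month")  # → ~$24,440 per month
```

That lands in the same ballpark as the ~$24,000 figure quoted above; Reserved pricing for 1 or 3 years would come in lower.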


On-boarding requires that an AWS account is set up, and may require you to create an Amazon VPC (Amazon Virtual Private Cloud) with private subnets that do not overlap with your on-premises network addresses.

See here for further details about on-boarding and VPC network considerations.
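Because overlapping address ranges are the most common on-boarding mistake, it is worth checking your candidate VPC CIDR against every on-premises range before you start. A minimal sketch using Python's standard `ipaddress` module (the CIDR blocks here are hypothetical examples):

```python
# Sanity check: does a proposed VPC CIDR overlap any on-premises range?
# All CIDR blocks below are hypothetical examples - substitute your own.
import ipaddress

on_prem = [ipaddress.ip_network(c) for c in ("10.0.0.0/16", "192.168.1.0/24")]
proposed_vpc = ipaddress.ip_network("10.2.0.0/16")

overlaps = [str(net) for net in on_prem if proposed_vpc.overlaps(net)]
if overlaps:
    print("Overlap with:", ", ".join(overlaps))
else:
    print("No overlap - safe to use", proposed_vpc)
```

Running this against your real address plan takes seconds and avoids a painful redeployment later, since the VPC CIDR cannot be changed after the SDDC is created.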

Once the network scheme is clear and on-boarding is completed, the setup is simple; most report around two hours for the initial setup and 10 minutes for additional hosts.


The first option screen allows the choice of capacity and AWS region.

The next screen should have been well thought out beforehand: the management subnet refers to the network for vCenter, ESXi, and NSX.
This is a private network and, if we are following AWS VPC rules, should not overlap with your on-premises network addresses.

The subnet sizes need some consideration, as the suggested schemes might differ from what you would normally assign. It seems best to use the recommended /20 range for the management subnet, which will allow for 16 hosts.
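To see why the /20 recommendation matters, compare how many addresses each candidate prefix length provides. The management CIDR is carved into smaller subnets for vCenter, ESXi, NSX, and so on, so a tighter range can cap how many hosts the cluster can grow to (the 10.2.0.0 base address below is just an example):

```python
# Address capacity of candidate management-subnet sizes.
# The base network 10.2.0.0 is a hypothetical example.
import ipaddress

for prefix in (23, 22, 21, 20):
    net = ipaddress.ip_network(f"10.2.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses:>5} addresses")
# /23:   512, /22:  1024, /21:  2048, /20:  4096 addresses
```

A /20 gives 4096 addresses, comfortably enough headroom for the internal management subnets of a full 16-host cluster; smaller ranges limit the maximum cluster size.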


Once setup is complete you get an overview of the capacity (these are screenshots from HOL-1887-01-EMT): a 4-host cluster has 144 CPU cores and should deliver around 331 GHz of CPU and 2 TB of RAM.
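Those cluster numbers follow directly from the host specs mentioned earlier (dual 18-core 2.3 GHz CPUs and 512 GB RAM per host):

```python
# Derive the quoted 4-host cluster capacity from per-host specs.
hosts = 4
cores_per_host = 2 * 18          # dual 18-core CPUs
ghz_per_core = 2.3
ram_gb_per_host = 512

total_cores = hosts * cores_per_host           # 144 cores
total_ghz = total_cores * ghz_per_core         # 331.2 GHz
total_ram_tb = hosts * ram_gb_per_host / 1024  # 2.0 TB
print(total_cores, round(total_ghz, 1), total_ram_tb)  # 144 331.2 2.0
```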

It's the network configuration that you will need to input. As you can see, the System Diagram graphic on the web interface is clean and simple, but there are quite a few important decisions that will have to be made.

There are two networks, Management and Compute, and two possibilities for each: connection to on-premises through a VPN, or accessing vCenter and ESXi hosts from the Internet. Firewalls are set to Deny All traffic, so ports will have to be opened.

As most will be using a VPN to connect to on-premises, you should have the details of your previously set up on-site IPsec device at hand.
This is the initial network tab with no configuration done; we will walk through the VPN setup.

Click on VPN; you will need to provide a VPN name, the remote gateway's public and private addresses, the remote network scheme, and the pre-shared key.

If it goes well, the VPN link will change to a solid line and turn green to show it is active.

I’ve added local DNS, and some firewall rules for the Management-On-Prem and Internet-Compute networks.
The next steps would be to set up the VPN for Compute-On-Prem, create firewall rules, DNS entries, etc.

NAT is available, but so far no load balancer.

Overall the network is very simple, but it might still be useful to look at some additional training on AWS VPC. Nigel Poulton's Pluralsight VPC Operations course is a blast; acloudguru also have a very reasonably priced offering – take a look at the AWS SA course and focus on the VPC modules.

I would like to see how the compute network can be divided up into DMZ or non-trusted zones and trusted zones, or whether NSX is used to build that from within the vCenter console.
In that case you might want to look at some NSX training.

Keep in mind the whole purpose of this is to allow you to use your VMware on-premises skill set in the AWS cloud.

However, one point needs to be especially clear: the CIDR block for the initial VPC must be well thought out before you launch the SDDC creation.


In the next blogs I'll talk about elastic resource management, and VMC hardware and storage.


2 replies on “VMware Cloud on AWS – Overview and Setup”

  1. Really good summary.

If I need this for just 24 hours every month to run a set of end-of-month jobs, how do I arrange that?



    1. Hi Chris

To get on board you need to contact VMware.

But the use case of 24 hours per month may not be what VMware is looking to provide. I asked a similar question at VMworld and got the impression this is for customers who commit to a longer term, or those scaling up and down from a base 4-host cluster.

Check with them anyway; if the answer is no, then consider AWS, who are more than happy to accommodate that type of scenario. Additionally, you could take advantage of mixing on-demand with spot instances to reduce the job run time and overall costs.
