VMworld Sessions – Reference Design for SDDC with NSX and vSphere – NSX Components, vCenter Topology, Connectivity Considerations

Nimish Desai, Senior Technical Product Manager at VMware, network guy, and the face behind the
Reference Design for SDDC with NSX and vSphere. This blog covers Part 1 of his two advanced VMworld technical sessions, focused on establishing a reference architecture and reviewing use cases and best practices for NSX architecture.

I have divided Part 1 into three sections to make it a little easier to digest. This first section discusses NSX Components, vCenter Topology and Connectivity Considerations; section 2 covers edge cluster design; and section 3 covers Routing Protocol and Topology.


NSX Components

NSX Manager

The NSX Manager appliance is deployed in a one-to-one relationship with vCenter; if multiple NSX Managers are required, each will need its own vCenter.
High availability for the manager is provided by vSphere HA, and virtual machine traffic on the data plane is not affected if the NSX Manager becomes unavailable.

The NSX Manager holds the distributed firewall rule configuration, as well as monitoring and logging data, so regular backups should be planned.
After a manager upgrade, create a new backup, as a restore is only possible using the same NSX database schema version.
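Backups can be scheduled from the appliance, or triggered on demand through the NSX Manager REST API. Below is a minimal sketch, assuming the NSX-v appliance-management endpoint as I recall it; the hostname and credentials are placeholders, and the path should be verified against the API guide for your NSX version.

    # Minimal sketch: trigger an on-demand NSX Manager backup over REST.
    # ASSUMPTION: endpoint path is from memory of the NSX-v
    # appliance-management API; verify it for your version.
    import requests

    NSX_HOST = "nsxmgr.example.com"   # hypothetical NSX Manager FQDN
    AUTH = ("admin", "changeme")      # placeholder credentials

    resp = requests.post(
        f"https://{NSX_HOST}/api/1.0/appliance-management/backuprestore/backup",
        auth=AUTH,
        verify=False,  # lab only; validate the certificate in production
    )
    resp.raise_for_status()
    print("Backup request accepted:", resp.status_code)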

NSX Controllers

Controllers can be deployed as appliances in the same vSphere cluster as management or edge workloads.
In larger deployments, separate vSphere clusters should be used for each NSX workload: management, edge and compute.

Three controller nodes must be deployed in a production environment; loss of a single node, or even of the cluster majority, will not impact the data plane.
Three ESXi hosts are recommended for the controller vSphere cluster, and DRS anti-affinity rules should be created to keep the controllers on separate hosts (a sketch for creating such a rule follows this list).
Consider storage resiliency and how to avoid a storage component failure taking out all three controllers simultaneously.
Oversubscribed storage I/O should be avoided.
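As a sketch of the anti-affinity recommendation above, here is one way to create a mandatory DRS rule with pyVmomi. The cluster name and controller VM names are hypothetical placeholders for your environment.

    # Minimal sketch: DRS anti-affinity rule keeping the three NSX
    # Controller VMs on separate hosts (pyVmomi; names are placeholders).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        # Return the first managed object of the given type and name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    cluster = find_obj(content, vim.ClusterComputeResource, "mgmt-cluster")
    controllers = [find_obj(content, vim.VirtualMachine, n)
                   for n in ("NSX_Controller_1", "NSX_Controller_2",
                             "NSX_Controller_3")]

    # Mandatory rule: DRS will refuse placements that put two
    # controllers on the same host.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="nsx-controller-separation", enabled=True,
        mandatory=True, vm=controllers)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    Disconnect(si)

The mandatory flag means DRS will block any placement that violates the rule, which is one more reason three hosts is the practical minimum for this cluster.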


vCenter Topology

Single vCenter Design

NSX Manager and controllers are deployed in the same vSphere cluster.
All NSX workloads are combined to reduce host requirements and licensing costs.

[Figure: NSX reference architecture – single vCenter design]


Multiple vCenter Scale Out Design

When scalability is required, for example in multi-tenant environments, a dedicated management vCenter hosts the NSX Manager appliances and the resource vCenter Server appliances.
Each resource vCenter is paired with its own NSX Manager.
Controllers are deployed in the managed (resource) vCenters, usually in an edge vSphere cluster.

[Figure: NSX reference architecture – multiple vCenter scale-out design]

Connectivity Considerations

Transport Zone

A transport zone defines the span of logical switches (layer 2 communication domains); existing zones can be inspected via the NSX API (see the sketch after this list).
Multiple transport zones are allowed, but they should not be treated as security zones.
One or more vDS can be part of the same transport zone.
A logical switch belongs to a single transport zone and spans all clusters in that zone.
Multiple vDS allow flexibility in connectivity and operational control.
Consider L2 vs. L3 fabric for VTEP design: addressing, bandwidth and availability.
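A minimal sketch for listing transport zones, assuming the NSX-v "vdn/scopes" endpoint as I recall it; verify the path against the API guide for your version, and note the host and credentials are placeholders.

    # Minimal sketch: list transport zones (vdn scopes) from the NSX-v API.
    # ASSUMPTION: endpoint path from memory; verify for your version.
    import requests
    import xml.etree.ElementTree as ET

    NSX_HOST = "nsxmgr.example.com"  # hypothetical NSX Manager FQDN
    resp = requests.get(f"https://{NSX_HOST}/api/2.0/vdn/scopes",
                        auth=("admin", "changeme"), verify=False)
    resp.raise_for_status()

    for scope in ET.fromstring(resp.content).iter("vdnScope"):
        print(scope.findtext("name"), scope.findtext("controlPlaneMode"))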

Physical Network Topology

Whether a POD or leaf-spine topology is used, the VXLAN VLAN ID must be common on all hosts.
(In a leaf-spine topology the VLAN is locally significant to each ToR, so the same VLAN ID can be reused in every rack, even though the subnets are distinct.)

VTEP/VDS Uplink Design

The choice depends on simplicity and bandwidth requirements.

The recommended teaming mode is "Route based on originating virtual port".
LACP teaming mode is discouraged:
LACP requires a single VTEP; there is no multi-VTEP support.
It is not possible to use deterministic mapping of other traffic types (management, vMotion, IP storage, etc.).

There is a strong recommendation not to use LACP on the edge cluster. (A teaming-policy audit sketch follows.)
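Before (or after) VXLAN preparation, it can be useful to audit what teaming mode each distributed portgroup actually carries. A minimal pyVmomi sketch, with placeholder vCenter details; "loadbalance_srcid" is the API value behind "Route based on originating virtual port".

    # Minimal sketch: report the active teaming policy of every
    # distributed portgroup (pyVmomi; credentials are placeholders).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        teaming = getattr(pg.config.defaultPortConfig,
                          "uplinkTeamingPolicy", None)
        if teaming and teaming.policy:
            # e.g. loadbalance_srcid, loadbalance_ip (IP hash),
            # failover_explicit
            print(f"{pg.name}: {teaming.policy.value}")
    view.Destroy()
    Disconnect(si)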

Single VTEP

For LACP or explicit failover teaming.
Bandwidth of less than 10 Gbps (single NIC).
Simple operational model.
Deterministic traffic mapping required (explicit failover only).

Multiple VTEPS

Bandwidth of more than 10 Gbps.
Flexible operational model, with more teaming options for other traffic types.

VTEP IP Addressing

A common subnet for L2 fabrics.
Multiple VTEP subnets (one per rack) for L3 fabrics (a per-rack subnetting sketch follows this list).
An IP pool or DHCP is used to assign addresses.
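For the L3 case, here is a small standard-library sketch of carving one VTEP subnet per rack out of a larger block; the supernet and rack count are made-up values.

    # Minimal sketch: one /24 VTEP subnet per rack for an L3 fabric.
    import ipaddress

    vtep_block = ipaddress.ip_network("10.140.0.0/16")  # hypothetical supernet
    racks = 8

    for rack, subnet in zip(range(1, racks + 1),
                            vtep_block.subnets(new_prefix=24)):
        gateway = next(subnet.hosts())  # first host as the ToR gateway
        print(f"rack{rack:02d}: {subnet} gw {gateway}")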

VDS and Transport Zone Design

Typically the edge cluster is confined to a single rack.
It is recommended to use separate VDS for the compute and edge clusters, as the edge cluster will access the northbound gateway.
A dedicated edge cluster simplifies network operations, troubleshooting and security, as only these hosts require north-south VLANs.

[Figure: NSX reference architecture – VTEP design]

In the following section, edge cluster design will be discussed.
