In our previous post, we covered VxRAIL considerations that provide the physical hosting capabilities needed to power the overall platform. Although VxRAIL provides a very robust compute foundation for VCF, the compute domain itself is dependent on the physical network.
It is very important to design your network for VCF from the beginning of the design process. VCF on VxRAIL has very specific requirements that set it apart from other private cloud solutions on the market.
We discussed in part 1 of this series that when you deploy VCF, you get a high-end private cloud solution built according to VMware best practices. When VxRAIL is included, that cloud solution extends down to the hardware level.
I wanted to list a few network considerations that are specific to VCF on VxRAIL 3.10.
1.) Utilize Spine / Leaf Network Architectures
Spine and leaf physical switching topologies have been widely adopted over the last few years and are readily available on the market today. Their main benefits include:
- Scalability: adding switches to a leaf-spine network provides additional traffic routes, increasing capacity as the environment grows.
- Lower cost: leaf and spine switches can be built using commodity, fixed-configuration switches.
- Increased redundancy: each access (leaf) switch connects to multiple spine switches.
- Improved performance: numerous network paths can be used at once, as compared to only one at a time with Spanning Tree Protocol.
When using VxRAIL, you should utilize a minimum of two ToR switches, with each node connected to both. 25GbE interfaces are recommended and readily available, and they provide a pathway to higher network speeds in the future (50/100GbE).
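To make the leaf-spine performance point concrete, here is a minimal Python sketch that computes a leaf switch's oversubscription ratio (host-facing bandwidth versus spine uplink bandwidth). All port counts and speeds below are example values I chose for illustration, not VxRAIL requirements:

```python
# Illustrative sketch: estimate the oversubscription ratio of a leaf switch
# in a leaf-spine fabric. Port counts and speeds are example values only.

def oversubscription_ratio(host_ports: int, host_speed_gbe: int,
                           uplinks: int, uplink_speed_gbe: int) -> float:
    """Downstream (host-facing) bandwidth divided by upstream (spine) bandwidth."""
    downstream = host_ports * host_speed_gbe
    upstream = uplinks * uplink_speed_gbe
    return downstream / upstream

# Example: a leaf with 32 x 25GbE host ports and 4 x 100GbE spine uplinks.
ratio = oversubscription_ratio(host_ports=32, host_speed_gbe=25,
                               uplinks=4, uplink_speed_gbe=100)
print(f"Oversubscription ratio: {ratio:.1f}:1")  # prints 2.0:1
```

The lower the ratio, the less contention east-west traffic sees when crossing the spine, which is exactly what adding spine switches buys you.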
2.) Port Channels and LACP
VxRAIL does NOT support the "Route Based on IP Hash" teaming policy, meaning that LAGs, LACP, and port channels are not supported and will not work. This should be taken into account at the beginning of the design process.
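One way to catch this early is a simple pre-deployment lint of your planned port-group teaming policies. The sketch below is hypothetical: the policy strings mirror vSphere's UI labels, but the planning dictionary itself is a made-up artifact, not anything VxRAIL or VCF produces:

```python
# Hypothetical design-review check: flag teaming policies that VxRAIL
# does not support (IP hash / LACP / port channels, per the constraint above).

UNSUPPORTED_POLICIES = {"Route Based on IP Hash"}

planned_port_groups = {
    "Management": {"teaming": "Route Based on Originating Virtual Port"},
    "vSAN":       {"teaming": "Route Based on Originating Virtual Port"},
    "vMotion":    {"teaming": "Route Based on IP Hash"},  # invalid on VxRAIL
}

for name, cfg in planned_port_groups.items():
    if cfg["teaming"] in UNSUPPORTED_POLICIES:
        print(f"ERROR: port group '{name}' uses unsupported policy {cfg['teaming']!r}")
```

Catching this on paper is far cheaper than discovering it after the switch ports have already been configured as a port channel.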
3.) Size IP subnets based on MAXIMUM GROWTH
When your VCF instance is deployed, there is a high probability that it will scale very quickly. You should size your subnets and networks accordingly, as some of them will be very difficult, if not impossible, to change later on down the road.
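A quick sizing aid using Python's standard-library ipaddress module is shown below. The growth figures are example values I picked to illustrate the approach; substitute your own projected maximums:

```python
# Illustrative sizing aid: given the expected MAXIMUM host count on a
# network, find the smallest IPv4 prefix that still covers it.
# Growth figures below are example values.

import ipaddress
import math

def smallest_prefix_for(max_hosts: int) -> int:
    """Smallest IPv4 prefix length whose usable host count covers max_hosts."""
    # +2 accounts for the network and broadcast addresses.
    bits = math.ceil(math.log2(max_hosts + 2))
    return 32 - bits

# Example: plan for 4 clusters of 64 hosts each -- size for the maximum,
# not the day-one node count.
max_hosts = 4 * 64
prefix = smallest_prefix_for(max_hosts)
net = ipaddress.ip_network(f"172.16.0.0/{prefix}", strict=True)
print(f"/{prefix} -> {net.num_addresses - 2} usable addresses for {max_hosts} hosts")
```

Running this for 256 hosts yields a /23 with 510 usable addresses: headroom on day one, but painless when the environment doubles.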
4.) Jumbo Frames
There really is no reason not to utilize jumbo frames in modern datacenter architectures. They are supported by the majority of the physical and compute platforms available, enable higher throughput, and take load off your CPUs. My advice is to ensure that jumbo frames are consistently configured throughout the stack, meaning at the port group, on the vDS, and across the physical L2/L3 layer. This will maximize performance and prevent bottlenecks.
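Here is a minimal sketch of that consistency check, assuming a simple inventory of configured MTU values per layer. The layer names and numbers are illustrative, not pulled from a live environment:

```python
# Illustrative consistency check for jumbo frames: every layer in the path
# (port group -> vDS -> physical switch) must meet the target MTU, or
# fragmentation and drops will erase the benefit. Values are examples only.

configured_mtu = {
    "vSAN port group": 9000,
    "vDS":             9000,
    "ToR switch port": 9216,   # switches often allow a slightly larger max
    "L3 uplink":       1500,   # <-- inconsistent: would bottleneck jumbo traffic
}

TARGET = 9000
for layer, mtu in configured_mtu.items():
    status = "OK" if mtu >= TARGET else "MISMATCH"
    print(f"{layer:>16}: MTU {mtu:>5} [{status}]")

# On ESXi, an end-to-end path test would look something like:
#   vmkping -d -s 8972 <peer-ip>
# (-d sets don't-fragment; 8972 bytes of payload plus headers = 9000 MTU)
```

The vmkping test is the important one: it proves the whole path honors the MTU, not just the layers you can see in your own inventory.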
5.) Traffic Separation
You should always separate the different traffic types onto different adapters. This provides a load-balancing mechanism for each type of traffic and a failover path in the event of a network cable or adapter failure.
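A simple way to sanity-check a traffic-separation plan is to map each traffic type to an active and a standby uplink and verify every type has a failover path on a different adapter. The sketch below is hypothetical; the vmnic names and the mapping are made-up planning values:

```python
# Hypothetical planning check: each traffic type gets an active uplink plus
# a standby uplink on a DIFFERENT physical adapter. Uplink names are examples.

traffic_plan = {
    "Management": {"active": "vmnic0", "standby": "vmnic1"},
    "vMotion":    {"active": "vmnic1", "standby": "vmnic0"},
    "vSAN":       {"active": "vmnic2", "standby": "vmnic3"},
    "VM traffic": {"active": "vmnic3", "standby": "vmnic2"},
}

for traffic, uplinks in traffic_plan.items():
    if not uplinks.get("standby"):
        print(f"WARNING: {traffic} has no failover uplink")
    elif uplinks["standby"] == uplinks["active"]:
        print(f"WARNING: {traffic} fails over to the same adapter")
    else:
        print(f"{traffic:>11}: active {uplinks['active']}, standby {uplinks['standby']}")
```

Alternating the active/standby pairs, as above, also spreads steady-state load across the adapters instead of leaving the standby ports idle.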
As with any platform, there is a limit to the number of physical adapters available. The table above shows the physical configuration for a VxRAIL node running VCF 3.10 on VxRAIL 4.7.525.
DellEMC has expanded upon this in VCF 4.1 in order to support up to 8 physical NICs (pNICs) per node.
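As a quick design-review aid, the sketch below checks a planned pNIC count against that 8-pNIC ceiling. The even-count expectation reflects the usual practice of deploying adapters in redundant pairs; that part is my assumption, not a documented rule:

```python
# Illustrative check against the pNIC ceiling mentioned above (up to 8
# pNICs per node in VCF 4.1). The even-count rule is an assumption based
# on redundant adapter pairs, not a documented requirement.

MAX_PNICS = 8

def validate_pnic_count(pnics: int) -> str:
    if pnics > MAX_PNICS:
        return f"{pnics} pNICs exceeds the {MAX_PNICS}-pNIC limit"
    if pnics % 2 != 0:
        return f"{pnics} pNICs is odd; redundant pairs are expected"
    return f"{pnics} pNICs: OK"

for count in (2, 4, 7, 10):
    print(validate_pnic_count(count))
```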
I will be creating additional posts around this topic, as there are several other network considerations for VCF, so stay tuned.