In the previous part we talked about clusters and the core concepts associated with them. We talked about what they are, how they work, and how to create them, register them, and manage them end-to-end via Windows Admin Center (WAC). In this chapter we are going to talk briefly about some networking concepts associated with Azure Stack HCI. We will touch briefly upon how networking is typically done in AzS HCI deployments, at both the physical and the host level.
This is well explained in the Microsoft documentation so instead of risking duplicating content, I’ll just sum it up in a few bullet points for easy reference.
- AzS HCI supports a list of network switches that have been validated by vendors (such as Dell). Note that switches outside this list may still work, but Microsoft may be unable to support troubleshooting for them.
- Hyperconverged infrastructure traffic is more heavily East-West, with a substantial portion of traffic staying within a Layer-2 (VLAN) boundary. Hence, it is recommended that all cluster nodes in a site be physically located in the same rack and connected to the same top-of-rack (ToR) switches.
There are two ways to connect Azure Stack HCI nodes to each other and to the outside world: a switched configuration and a switchless configuration.
- In a switched configuration, network switches carry both kinds of traffic. In a nutshell, the switches are used to connect the nodes to each other (East-West), and the switches are in turn connected to the outside world (North-South). Here’s an example of a switched configuration:
- Alternatively, you can go for a switchless configuration. In this model, you simply connect each node to every other node directly without going through a switch. This is also known as a back-to-back or full-mesh connection. Keep in mind that this model is only practical for small clusters of two or three nodes; beyond that, the number of direct connections required makes it unusable.
There are advantages and disadvantages to each approach, so make sure to plan your deployment well ahead of time.
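To see why the switchless model stops scaling, note that a full mesh needs a dedicated link for every pair of nodes, so the link count grows quadratically with cluster size. Here’s a quick illustrative sketch (not part of any deployment tooling):

```python
def full_mesh_links(nodes: int) -> int:
    """Number of direct node-to-node links in a full-mesh (switchless) topology.

    Every pair of nodes needs its own physical connection,
    so the count is n * (n - 1) / 2.
    """
    return nodes * (nodes - 1) // 2

for n in range(2, 7):
    print(f"{n} nodes -> {full_mesh_links(n)} direct links per adapter set")
# 2 nodes need 1 link, 3 nodes need 3, but 6 nodes would already need 15 —
# and each link consumes a dedicated NIC port on both ends.
```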
- There are 3 kinds of networking traffic involved: Compute traffic (from and to VMs), Storage traffic (related to S2D) and Management traffic (related to admin functions involving AD, Remote Desktop, WAC, PowerShell, etc.).
- Azure Stack HCI requires choosing a network adapter that has achieved the Windows Server Software-Defined Data Center (SDDC) certification with the Standard or Premium Additional Qualification (AQ).
- Important network adapter capabilities used by Azure Stack HCI include: Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ or d.VMMQ), Remote Direct Memory Access (RDMA), Guest RDMA, Switch Embedded Teaming (SET).
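As a rough illustration of what Switch Embedded Teaming looks like on a host: SET teams the physical adapters directly inside the Hyper-V virtual switch, created with the `New-VMSwitch` cmdlet. This is a sketch only; the switch and adapter names are placeholders you would replace with your own:

```powershell
# Illustrative sketch - "HCI-SETSwitch", "NIC1" and "NIC2" are placeholder names.
# Passing multiple adapters with -EnableEmbeddedTeaming creates a SET team
# inside the vSwitch itself; no separate LBFO team is needed.
New-VMSwitch -Name "HCI-SETSwitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true
```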
- For stretched clusters, there are a few things you must take into consideration. They are explained in detail here.
Software Defined Networking (SDN)
There’s also another side to networking: software-defined networking, or SDN for short. I will avoid saying much about it at this point because it is very networking-focused and is meant largely for seasoned network engineers (which I am not!). But, for reference, you can read more about it here.
Cool, let’s end this right here! And with this we will conclude our journey together on this Azure Stack HCI learning path. Thank you for following along, I hope it has been useful!
Until next time!