Firewall considerations for gMSA on Azure Kubernetes Service


This week I spent some time helping a customer with a gMSA environment in which they were running into issues deploying their app. The trouble started when they were trying to figure out why a Kerberos ticket was not being issued for a Windows pod with gMSA configured in AKS. I decided to write this blog post to list some considerations for the different scenarios in which security rules might block the authentication process.

gMSA and its moving parts

To use gMSA on AKS, you must understand that there are many moving parts in play. First, your Kubernetes cluster on AKS comprises both Linux and Windows nodes. All of your nodes will be part of a virtual network, but only the Windows nodes will try to reach the Domain Controller (DC).

The DC itself might be in another virtual network, in the same virtual network, or even outside of Azure. Then you have the Azure Key Vault (AKV), in which the secret (username and password) is securely stored. Your AKV should be accessible only to the appropriate Windows nodes, and no one else.

The problem, though, comes when your Windows nodes on AKS and your DCs run on different networks or even different sites, and you need to open the proper ports between the Windows nodes and the DC.

Ports to open for gMSA

We have had documentation on which ports to open for Active Directory for a while. That is relatively well known and can be leveraged here.

The thing to understand is that when using gMSA on AKS, not all of these ports need to be opened, and allowing unnecessary traffic might expose you to threats for no benefit. With gMSA, no computer or user account is used interactively, so we can narrow the list down to the following:

Protocol and port    Purpose
TCP and UDP 53       DNS
TCP and UDP 88       Kerberos
TCP 139              NetLogon
TCP and UDP 389      LDAP
TCP 636              LDAP SSL

Keep in mind this list of ports does not take into consideration ports that your application might need to query AD or perform any other action with the DC. You might need to check for those with the application owner.
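As a quick sanity check, you can test whether the TCP ports above are reachable from a node before digging into NSG or firewall rules. Below is a minimal sketch in Python; the DC hostname `dc01.contoso.local` is a hypothetical placeholder, and note that the UDP entries (DNS, Kerberos, LDAP) cannot be verified this way because UDP is connectionless.

```python
import socket

# TCP ports from the gMSA list above; UDP 53/88/389 are not covered here.
GMSA_TCP_PORTS = {
    53: "DNS",
    88: "Kerberos",
    139: "NetLogon",
    389: "LDAP",
    636: "LDAP SSL",
}

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    dc = "dc01.contoso.local"  # hypothetical DC name - replace with your own
    for port, purpose in GMSA_TCP_PORTS.items():
        status = "open" if check_tcp(dc, port) else "BLOCKED"
        print(f"{purpose:>9} (TCP {port}): {status}")
```

A blocked result points at an NSG, firewall, or routing problem between the node and the DC rather than at the gMSA configuration itself.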

Domain Controllers in Azure

You might mitigate a lot of issues by simply adding one (or more) DC to Azure as a VM. By doing that, you have two things that play in your favor:

  1. You keep the authentication process within Azure. Your Windows pods and nodes don't need to reach an on-premises environment – unless the DC(s) in Azure are down.
  2. You have a better understanding of ports to open between NSGs in Azure rather than traffic between workloads on Azure and DCs on-premises.

On the other hand, you must consider that the DCs in Azure do need to replicate with the DCs on-premises. Even so, this is the preferred scenario, because you know exactly which machines the DCs are, versus workload machines that might scale out, or new workloads/clusters that might be added in the future. At the end of the day, the scope for opening ports is smaller, which minimizes exposure. Please refer to the documentation to understand the ports required for AD replication as well.

Hopefully this will help you fix any issues you might be having with gMSA caused by blocked traffic. Keep in mind that the ports listed above might not be the full list of ports you need to open, but rather the minimal set of ports and traffic required for authentication to work. As always, let us know your thoughts in the comments, and whether you have a different scenario.


This article was originally published by Microsoft's ITOps Talk Blog. You can find the original article here.