Many companies use AKS to deploy their containerized workloads. To secure their infrastructure, they make it private, along with the Azure Container Registry (ACR). As a result, no access is allowed from outside the company network boundary. Access to this private environment goes through the resource VNET, a peered VNET, a VPN, or ExpressRoute.
This tutorial provides guidance on setting up a private environment for AKS and ACR that is accessible only from an Azure VM. We will leverage Azure Private Link with Private Endpoints to access these resources.
This tutorial is in two parts. The first part deals with the connection between the VM and AKS, through the following steps:
- Create a private AKS cluster within its own VNET
- Create an Azure VM within its own VNET
- Setup connection between the VM and AKS
The second part deals with the connection between the VM, AKS, and ACR, covering these steps:
- Configure access to ACR using Private Endpoint
- Setup connection between the VM and ACR
- Setup connection between the AKS and ACR
At the end of this first part, we should have the following architecture in place for AKS and the VM.
Create a private AKS cluster within its own VNET
From the Azure portal, create a new AKS cluster and make sure to enable Private cluster. The choice between Kubenet and Azure CNI won't impact this demo. Note that a new VNET and subnet will be created for this cluster.
In the Integrations section, add and attach an ACR. Even though AKS is private here, the ACR is still public; we'll make it private later. Click Next and create the resources.
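The same resources can also be created with the Azure CLI. The sketch below uses assumed names (rg-private-aks, private-aks, privateaksacr) and region; adapt them to your environment:

```shell
# Names, region, and SKU below are assumptions for illustration.
az group create --name rg-private-aks --location westeurope

# Create the ACR (still public at this stage; made private in part two).
az acr create --resource-group rg-private-aks \
  --name privateaksacr --sku Premium

# Create a private AKS cluster and attach the ACR to it.
az aks create --resource-group rg-private-aks \
  --name private-aks \
  --enable-private-cluster \
  --attach-acr privateaksacr \
  --node-count 3 \
  --generate-ssh-keys
```

The `--enable-private-cluster` flag is what creates the Private Endpoint and Private DNS zone discussed below.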
Check the created resources (AKS, ACR and VNET) inside the AKS Resource Group:
Check also the created Private Endpoint, Network Interface and Private DNS zone inside the AKS node Resource Group.
Check how the Private DNS zone is configured. Note the private IP address (10.240.0.4) within the AKS VNET.
Also note the link to the AKS VNET. This means any resource in the AKS VNET will be able to resolve the private IP of the Private Endpoint for communication to the API server. If we have a VM in this VNET, it will be able to connect to AKS with no additional steps.
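This resolution behavior can be verified from any machine inside a linked VNET. The FQDN below is a hypothetical example; retrieve your cluster's actual private FQDN first:

```shell
# Get the cluster's private FQDN (names are assumptions from this tutorial).
az aks show -g rg-private-aks -n private-aks --query privateFqdn -o tsv

# From a VM in a linked VNET, the FQDN should resolve to the Private
# Endpoint's private IP (e.g. 10.240.0.4 in this tutorial):
nslookup private-aks-example.1a2b3c4d.privatelink.westeurope.azmk8s.io
```

From a machine outside any linked VNET, the same lookup would fail or return nothing useful, which is exactly the point of the Private DNS zone.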
Create an Azure VM within its own VNET
To access the API server, we'll use a JumpBox/DevBox VM hosted in its own VNET. Follow these steps to create it: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal.
To further secure the environment, we can leverage Azure Bastion to connect to the VM. This is optional for this tutorial. Here are the steps: https://docs.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal.
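As a CLI alternative to the portal steps, a sketch of the VM creation follows. All names, prefixes, and credentials are assumptions; note that the VM VNET's address space must not overlap the AKS VNET's, or the peering created later will fail:

```shell
# Create a dedicated resource group and VNET for the jumpbox
# (172.16.0.0/16 is chosen to avoid overlap with the AKS VNET).
az group create --name rg-devbox --location westeurope

az network vnet create --resource-group rg-devbox \
  --name vm-vnet --address-prefixes 172.16.0.0/16 \
  --subnet-name vm-subnet --subnet-prefixes 172.16.1.0/24

# Create the Windows jumpbox VM (password is a placeholder to replace).
az vm create --resource-group rg-devbox \
  --name devbox-vm \
  --image Win2022Datacenter \
  --vnet-name vm-vnet --subnet vm-subnet \
  --admin-username azureuser \
  --admin-password 'Replace-With-A-Strong-P@ssw0rd'
```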
Setup connection between the VM and AKS
The API server endpoint has only a private IP address and no public one. To access it, we need a VM that can reach the AKS VNET. This can be achieved through one of the following options:
- Create a VM in the same AKS VNET.
- Use a VM in a peered VNET.
- Use an Express Route or VPN connection.
- Use the AKS command invoke feature.
To connect to AKS from our VM, we will use the VNET peering option, which requires two steps:
- Create a VNET Peering between VM VNET and AKS VNET.
- Add link to the DevBox VM VNET in the AKS Private DNS zone.
Create a VNET peering between the VM VNET and the AKS VNET: go to one of the two VNETs in the portal, open Peerings, and add a new peering. Choose a name for each side of the peering and select the other VNET.
Check the created peering in both VNETs.
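The same peering can be created from the CLI. The VNET and resource group names below are assumptions carried over from the earlier sketches:

```shell
# Peer in both directions; --allow-vnet-access permits traffic to flow.
# Direction 1: VM VNET -> AKS VNET.
az network vnet peering create \
  --name vm-to-aks \
  --resource-group rg-devbox \
  --vnet-name vm-vnet \
  --remote-vnet $(az network vnet show -g rg-private-aks \
      -n aks-vnet --query id -o tsv) \
  --allow-vnet-access

# Direction 2: AKS VNET -> VM VNET.
az network vnet peering create \
  --name aks-to-vm \
  --resource-group rg-private-aks \
  --vnet-name aks-vnet \
  --remote-vnet $(az network vnet show -g rg-devbox \
      -n vm-vnet --query id -o tsv) \
  --allow-vnet-access
```

A peering only becomes Connected once both directions exist, which is why the command runs twice.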
Add a link to the VM VNET in the AKS Private DNS zone: go to the Private DNS zone of AKS, choose Virtual network links, then add a new link. Choose a name and select the VM VNET.
As a result, we should see 2 links in the AKS Private DNS zone: one for the AKS VNET and a second one for the VM VNET.
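The link can also be added from the CLI. The zone name below is a hypothetical example; use the actual zone found in the AKS node resource group (the one whose name starts with MC_):

```shell
# Link the VM VNET to the AKS Private DNS zone (zone and RG names
# are assumptions; auto-registration is not needed for resolution).
az network private-dns link vnet create \
  --resource-group MC_rg-private-aks_private-aks_westeurope \
  --zone-name 1a2b3c4d.privatelink.westeurope.azmk8s.io \
  --name vm-vnet-link \
  --virtual-network $(az network vnet show -g rg-devbox \
      -n vm-vnet --query id -o tsv) \
  --registration-enabled false
```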
Now we are all set and can connect to the private AKS cluster from the DevBox VM. Let's connect to the VM, sign in to Azure, and get the AKS credentials. Then we try to list the nodes and deploy a Pod into the private cluster.
$ az login
$ az aks get-credentials -g rg-private-aks -n private-aks
Merged "private-aks" as current context in C:\Users\houssem\.kube\config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-agentpool-12335431-vmss000000 Ready agent 15h v1.22.4
aks-agentpool-12335431-vmss000001 Ready agent 15h v1.22.4
aks-agentpool-12335431-vmss000002 Ready agent 15h v1.22.4
$ kubectl run nginx --image=nginx:1.21.4
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 8s
This validates that our VM can connect securely to the cluster through the Private Endpoint.
In the next part of this tutorial, we’ll cover the remaining steps:
- Configure access to ACR using Private Endpoint.
- Setup connection between the VM and ACR.
- Setup connection between the AKS and ACR.
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.