Sometimes, for security reasons, your organization may need to separate different resources into different tenants. In some extreme cases, the data you need to retrieve is provided by external providers and is beyond your control.
This article helps you access storage accounts in different tenants without using access keys.
For accessing Azure Storage without account access keys within the same tenant, check out: Set up volumes using Azure Storage without account access keys in Azure Kubernetes Service
For accessing other Azure resources across different tenants, check out: Access cross-tenant resources via workload identity on Azure Kubernetes Service
Options
Choose among the following options:
- Use BlobFuse via workload identity to access Azure block blob container
- Use NFS to access Azure block blob container/Azure fileshare
> **NOTE:** In this article, we assume that Tenant 1 is where AKS is located and Tenant 2 is where the storage account is located.
Use BlobFuse via workload identity to access Azure block blob container
In this section, we will use BlobFuse via workload identity to access Azure block blob container.
Before proceeding, ensure that:
- The Azure Blob Storage CSI driver is enabled.
- The OIDC issuer feature and the workload identity add-on are enabled.
> **NOTE:** The AKS version must be 1.34 or higher, or the Azure Blob CSI driver version must be v1.27.1 or later, to proceed with the steps in this section.
Preparation: variables set-up
This section assumes the following variables are set:
```shell
# Assuming the AKS cluster is in Tenant 1 and Azure Blob storage is in Tenant 2
# The ID of Subscription 2 in Tenant 2 must be filled in
tenant1=
tenant2=
subscription2=

# The application to create for cross-tenant access
app=

# AKS cluster name and the resource group it is located in, in Tenant 1
rG1=
aks=

# Kubernetes service account and namespace names
# Both will be newly created in this article
# Example values are pre-filled for convenience
namespace=blob-fuse
svc=blob-fuse-sa

# New storage account name and new resource group/location in Tenant 2
# Both the storage account and the resource group will be created in this section
# The container name is pre-filled
location2=
rG2=
sa=
container=aks-container
```
The following parameters also need to be pre-defined, after enabling the workload identity add-on:
```shell
# Retrieve the OIDC issuer URI
oidcIssuer=$(az aks show -n ${aks} -g ${rG1} \
  --query oidcIssuerProfile.issuerUrl -o tsv)
```
Remember to get or switch kubeconfig:
```shell
az aks get-credentials -n ${aks} -g ${rG1}
```
Tenant 1: create app registration and federated identity credential
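Before walking through the steps, it helps to see the one string everything in this section hinges on: the federated credential's subject must exactly match the namespace and name of the Kubernetes service account the pod runs under. A quick sketch using the example values pre-filled in the variables section:

```shell
# Example values pre-filled in the variables section above
namespace=blob-fuse
svc=blob-fuse-sa

# This exact string goes into the federated identity credential's
# "subject" field; any mismatch breaks the token exchange
subject="system:serviceaccount:${namespace}:${svc}"
echo "${subject}"
```

If the token exchange fails later, comparing this string against the credential's subject in the app registration is a good first check.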
- Register an application with multi-tenant account support
```shell
az ad app create --display-name ${app} -o none \
  --sign-in-audience AzureADMultipleOrgs
```
- Get the client ID of the application and its object ID in Tenant 1
```shell
appClientId=$(az ad app list --display-name ${app} \
  --query '[0].appId' -o tsv)
appObjectId=$(az ad app show --id ${appClientId} \
  --query id -o tsv)
```
- Create the federated identity credential using defined parameters
- Define parameters
```shell
fidcParams=$(cat <<EOF
{
  "name": "kubernetes-federated-credential",
  "issuer": "${oidcIssuer}",
  "subject": "system:serviceaccount:${namespace}:${svc}",
  "description": "Kubernetes service account federated credential for ${namespace}:${svc}",
  "audiences": [
    "api://AzureADTokenExchange"
  ]
}
EOF
)
```
- Create the federated identity credential
```shell
az ad app federated-credential create --id ${appObjectId} \
  --parameters "${fidcParams}" -o none
```
Tenant 2: storage account preparation and role assignment
- Log into Tenant 2
```shell
az login --tenant ${tenant2}
```
- Create a service principal for the application as an enterprise application
```shell
az ad sp create --id ${appClientId} -o none

# This step makes the application show up as an "Enterprise Application"
az ad sp update --id ${appClientId} -o none \
  --set tags="['WindowsAzureActiveDirectoryIntegratedApp']"
```
- Get the object ID of the service principal in Tenant 2
```shell
spObjectId=$(az ad sp show --id ${appClientId} \
  --query id -o tsv)
```
- Create an Azure block blob container in Tenant 2
```shell
az group create -n ${rG2} -l ${location2} -o none

az storage account create -n ${sa} -g ${rG2} \
  --kind StorageV2 -o none \
  --sku Standard_LRS \
  --allow-shared-key-access false

saId=$(az storage account show \
  -n ${sa} -g ${rG2} --query id -o tsv)

az rest --method PUT -o none \
  --url https://management.azure.com${saId}/blobServices/default/containers/${container}?api-version=2023-05-01 \
  --body "{}"
```
- Grant permission to the service principal in Tenant 2
```shell
az role assignment create --role "Storage Blob Data Contributor" \
  --assignee-object-id ${spObjectId} -o none \
  --scope ${saId}/blobServices/default/containers/${container} \
  --assignee-principal-type ServicePrincipal
```
Access storage account from your AKS cluster
- Create service account in AKS cluster
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ${namespace}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${svc}
  namespace: ${namespace}
EOF
```
- Randomize the volumeHandle ID
```shell
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
```
- Deploy Kubernetes storage resources
```shell
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-fuse
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - '-o allow_other'
  - '--file-cache-timeout-in-seconds=120'
  - '--use-attr-cache=true'
  - '--cancel-list-on-mount-seconds=10'
  - '-o attr_timeout=120'
  - '-o entry_timeout=120'
  - '-o negative_timeout=120'
  - '--log-level=LOG_WARNING'
  - '--cache-size-mb=1000'
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-blob-wi
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse
  mountOptions:
    - -o allow_other
    - --file-cache-timeout-in-seconds=120
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      mountWithWorkloadIdentityToken: 'true'
      storageaccount: ${sa}
      containerName: ${container}
      clientID: ${appClientId}
      resourcegroup: ${rG2}
      tenantID: ${tenant2}
      subscriptionid: ${subscription2}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-blob-wi
  namespace: ${namespace}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-blob-wi
  storageClassName: azureblob-fuse
EOF
```
- Mount the block storage into a Pod
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blob-fuse-wi-mount
  namespace: ${namespace}
spec:
  serviceAccountName: ${svc}
  containers:
    - name: demo
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
          readOnly: false
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc-blob-wi
EOF
```
- Write a message to a file and check that it works after the Pod starts
```shell
kubectl exec blob-fuse-wi-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs blob-fuse-wi-mount -n ${namespace}
```
Output:
```
cat: can't open '/mnt/azure/text': No such file or directory
hello!
hello!
```
Use NFS to access Azure fileshare
In this section, we will use NFS to access an Azure fileshare. The process is very similar to the single-tenant setup; however, instead of VNet whitelisting, a private link is now the only option.

Unless you must use Azure fileshare, this solution is not well suited to managing many tenants (more than three), as it increases the complexity of the network architecture.
> **TIP:** No identity-based authentication is required when using NFS; the authentication itself is entirely network-based. This is by design.
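Because authentication is purely network-based, what the CSI driver ultimately mounts is just an NFS export on the storage account's file endpoint. A minimal sketch of that target path, using hypothetical placeholder values for the account and share names:

```shell
# Hypothetical example values; substitute your own
sa=mystorageacct
fileshare=aks-share

# Azure Files exposes NFS shares at the account's file endpoint,
# with an export path of /<account>/<share>
nfsTarget="${sa}.file.core.windows.net:/${sa}/${fileshare}"
echo "${nfsTarget}"
```

From inside the cluster, this hostname must resolve to the private endpoint's IP; if it resolves to a public IP, the VNet link set up later in this section is missing or misconfigured.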
Preparation: variables set-up
This section assumes the following variables are set:
```shell
# Assuming the AKS cluster is in Tenant 1 and Azure Storage is in Tenant 2
tenant1=
tenant2=

# AKS cluster name and the resource group it is located in, in Tenant 1
rG1=
aks=

# Kubernetes namespace name; it will be newly created in this section
# An example value is pre-filled for convenience
namespace=fileshare-nfs

# New storage account name and new resource group/location in Tenant 2
# Both the storage account and the resource group will be created in this section
# The fileshare name is pre-filled
location2=
rG2=
sa=
fileshare=aks-share
```
Remember to get or switch kubeconfig:
```shell
az aks get-credentials -n ${aks} -g ${rG1}
```
Tenant 2: storage account preparation and network configuration
- Log into Tenant 2
```shell
az login --tenant ${tenant2}
```
- Prepare the Azure storage account
To use the NFS protocol with an Azure fileshare, you need to create a Premium storage account:
```shell
az group create -n ${rG2} -l ${location2} -o none

# Secure transfer must be disabled to use NFS with Azure fileshare
az storage account create -n ${sa} -g ${rG2} \
  --kind FileStorage -o none \
  --sku Premium_LRS --default-action Deny \
  --public-network-access Disabled \
  --allow-shared-key-access false \
  --https-only false

saId=$(az storage account show \
  -n ${sa} -g ${rG2} --query id -o tsv)

az rest --method PUT -o none \
  --url https://management.azure.com${saId}/fileServices/default/shares/${fileshare}?api-version=2023-05-01 \
  --body "{'properties':{'enabledProtocols':'NFS'}}"
```
- Create a private link
  Connections from private links are considered whitelisted, and in the cross-tenant scenario, this is the only way to make it work.

  To create a private link, see also: Creating a private endpoint.

- Add a VNet link to the AKS VNet

  In step 3, a private DNS zone called "privatelink.file.core.windows.net" is created. Go into this private DNS zone and find "Virtual Network Links" under "DNS management", then create a new link to the VNet in Tenant 1. "Enable auto registration" is not needed for this link.

- Create VNet peering between AKS and the storage account

  See also: Create a virtual network peering - Resource Manager, different subscriptions and Microsoft Entra tenants
Access storage account from your AKS cluster
- Randomize volumeHandle ID
```shell
volUniqId=${sa}#${fileshare}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
```
- Deploy Kubernetes storage resources
```shell
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-premium-nfs
provisioner: file.csi.azure.com
parameters:
  protocol: nfs
  skuName: Premium_LRS
reclaimPolicy: Delete
mountOptions:
  - nconnect=4  # Azure Linux node does not support nconnect option
  - noresvport
  - actimeo=30
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: pv-fileshare-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-premium-nfs
  mountOptions:
    - nconnect=4  # Azure Linux node does not support nconnect option
    - noresvport
    - actimeo=30
  csi:
    driver: file.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      storageAccount: ${sa}
      shareName: ${fileshare}
      storageEndpointSuffix: core.windows.net
      protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
  name: ${namespace}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fileshare-nfs
  namespace: ${namespace}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-premium-nfs
  volumeName: pv-fileshare-nfs
  resources:
    requests:
      storage: 5Gi
EOF
```
- Mount the fileshare into a Pod
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fileshare-nfs-mount
  namespace: ${namespace}
spec:
  containers:
    - name: demo
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
          readOnly: false
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc-fileshare-nfs
EOF
```
- Write a message to a file and check that it works after the Pod starts
```shell
kubectl exec fileshare-nfs-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs fileshare-nfs-mount -n ${namespace}
```
Output:
```
cat: can't open '/mnt/azure/text': No such file or directory
hello!
```
Use NFS to access Azure block blob container
In this section, we will use NFS to access an Azure block blob container. The whole process is identical to accessing an Azure fileshare via NFS except for the names, so I won't go into detail here.
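One detail worth repeating even in this abbreviated walkthrough: volumeHandle must be unique among PersistentVolumes on the cluster, which is why each section of this article appends a random suffix to it. A sketch of that step with hypothetical placeholder values:

```shell
# Hypothetical example values; substitute your own
sa=mystorageacct
container=aks-container

# Append 4 random alphanumeric characters so a re-created PV never
# reuses an old volumeHandle; LC_ALL=C keeps tr happy with raw bytes
volUniqId="${sa}#${container}#$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 4)"
echo "${volUniqId}"
```

Without the suffix, deleting and re-creating a PV with the same handle can hit stale state in the driver, so the few extra random characters are cheap insurance.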
Preparation: variables set-up
This section assumes the following variables are set:
```shell
# Assuming the AKS cluster is in Tenant 1 and Azure Storage is in Tenant 2
tenant1=
tenant2=

# AKS cluster name and the resource group it is located in, in Tenant 1
rG1=
aks=

# Kubernetes namespace name; it will be newly created in this section
# An example value is pre-filled for convenience
namespace=blob-nfs

# New storage account name and new resource group/location in Tenant 2
# Both the storage account and the resource group will be created in this section
# The container name is pre-filled
location2=
rG2=
sa=
container=aks-container
```
Remember to get or switch kubeconfig:
```shell
az aks get-credentials -n ${aks} -g ${rG1}
```
Tenant 2: storage account preparation and network configuration
- Log into Tenant 2
```shell
az login --tenant ${tenant2}
```
- Prepare the Azure storage account
To use the NFS protocol with an Azure block blob container, you need to create a storage account with the hierarchical namespace and NFSv3 enabled:
```shell
az group create -n ${rG2} -l ${location2} -o none

az storage account create -n ${sa} -g ${rG2} \
  --kind StorageV2 --sku Standard_LRS \
  --enable-hierarchical-namespace -o none \
  --allow-shared-key-access false \
  --enable-nfs-v3 --default-action Deny \
  --public-network-access Disabled

saId=$(az storage account show \
  -n ${sa} -g ${rG2} --query id -o tsv)

az rest --method PUT -o none \
  --url https://management.azure.com${saId}/blobServices/default/containers/${container}?api-version=2023-05-01 \
  --body "{}"
```
- Create a private link, add a VNet link to the AKS VNet, and set up VNet peering
> **NOTE:** As mentioned above, I won't go into detail again. If you need explanation and instructions, check out "Use NFS to access Azure fileshare" in this article.
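As with the fileshare, the driver mounts an NFS export, this time NFSv3 on the account's blob endpoint, with an export path of /<account>/<container>. A sketch of that target with hypothetical placeholder values:

```shell
# Hypothetical example values; substitute your own
sa=mystorageacct
container=aks-container

# Blob NFSv3 exports live at the account's blob endpoint; inside the
# cluster this hostname must resolve to the private endpoint's IP
nfsTarget="${sa}.blob.core.windows.net:/${sa}/${container}"
echo "${nfsTarget}"
```

Note that the private DNS zone for this endpoint is the blob one, not the file one used in the previous section.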
Access storage account from your AKS cluster
- Randomize volumeHandle ID
```shell
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
```
- Deploy Kubernetes storage resources
```shell
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-nfs
provisioner: blob.csi.azure.com
parameters:
  protocol: nfs
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - nconnect=4  # Azure Linux node does not support nconnect option
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-blob-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-nfs
  mountOptions:
    - nconnect=4  # Azure Linux node does not support nconnect option
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      storageAccount: ${sa}
      containerName: ${container}
      storageEndpointSuffix: core.windows.net
      protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
  name: ${namespace}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-blob-nfs
  namespace: ${namespace}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-nfs
  volumeName: pv-blob-nfs
  resources:
    requests:
      storage: 5Gi
EOF
```
- Mount the block storage into a Pod
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blob-nfs-mount
  namespace: ${namespace}
spec:
  containers:
    - name: demo
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
          readOnly: false
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc-blob-nfs
EOF
```
- Write a message to a file and check that it works after the Pod starts
```shell
kubectl exec blob-nfs-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs blob-nfs-mount -n ${namespace}
```
Output:
```
cat: can't open '/mnt/azure/text': No such file or directory
hello!
```
Afterword
I initially wrote this article in May 2025, but it was not published due to a token refresh issue. After nearly nine months, the bug has finally been fixed in the managed AKS environment, so I can publish this article.
This feature became Generally Available (GA) on May 10, 2025, but the fix is only being deployed as of Feb 8, 2026.
I'm unsure how to comment on the Azure GA process. The bug had been present since the initial GA announcement, yet it took nine months to deploy a fix. Users could self-deploy version 1.27 of the CSI driver during those nine months, but that would render their AKS cluster unsupported, making it a poor choice overall.

Even though the feature is Generally Available (GA), feature testing is still necessary before deploying it to a production environment. In this case, when a user relies on workload identity backed by a user-assigned managed identity, it can take 24 hours for the issue to become reproducible. This leads me to suspect that the developers never left a test AKS cluster idle for a full 24 hours, which allowed the bug to ship in the GA version.

This situation serves as a reminder that a two-hour test should be regarded as short-term and unstable; extending the test duration as much as possible is crucial before deploying to production.