Set up volumes accessing cross-tenant Azure Storage without account access keys in Azure Kubernetes Service

Sometimes, for security reasons, your organization may need to separate resources into different tenants. In some extreme cases, the data you need to retrieve is provided by external providers and is beyond your control.
This article helps you access a storage account in a different tenant without using access keys.

For accessing Azure Storage without account access keys within the same tenant, check out: Set up volumes using Azure Storage without account access keys in Azure Kubernetes Service
For accessing other Azure resources across tenants, check out: Access cross-tenant resources via workload identity on Azure Kubernetes Service

Options#

Choose among the following options:

  • Use BlobFuse via workload identity to access Azure block blob container
  • Use NFS to access Azure block blob container/Azure fileshare
NOTE

In this article, we assume that Tenant 1 is where AKS is located and Tenant 2 is where the storage account is located.

Use BlobFuse via workload identity to access Azure block blob container#

In this section, we will use BlobFuse via workload identity to access an Azure block blob container.
Before proceeding, ensure that the version requirement below is met.

NOTE

The AKS version must be 1.34 or higher, or the Azure Blob CSI driver version must be v1.27.1 or greater, to proceed with the steps in this section.

Preparation: variables set-up#

This section assumes the following variables are used:

Terminal window
# Assuming AKS cluster is in Tenant 1 and Azure blob storage is in Tenant 2
# Subscription 2 ID in Tenant 2 must be filled in
tenant1=
tenant2=
subscription2=
# The application you want to create for cross-tenant access
app=
# AKS cluster name and the resource group it's located in, in Tenant 1
rG1=
aks=
# Define the Kubernetes service account and namespace names
# Both will be newly created in this article
# Examples have been pre-filled here for convenience
namespace=blob-fuse
svc=blob-fuse-sa
# New storage account name and new resource group/location in Tenant 2
# Both the storage account and resource group will be newly created in this section
# Container name has been pre-filled
location2=
rG2=
sa=
container=aks-container

The following parameter also needs to be pre-defined, after enabling the workload identity add-on:

Terminal window
# Retrieve OIDC Issuer URI
oidcIssuer=$(az aks show -n ${aks} -g ${rG1} \
--query oidcIssuerProfile.issuerUrl -o tsv)

Remember to get or switch kubeconfig:

Terminal window
az aks get-credentials -n ${aks} -g ${rG1}
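If you're unsure whether your cluster meets the version requirement in the note above, a quick check (using the variables already defined, while logged into Tenant 1):

```shell
# Control-plane Kubernetes version of the cluster
az aks show -n ${aks} -g ${rG1} --query kubernetesVersion -o tsv
# Whether the managed blob CSI driver is enabled on the cluster
az aks show -n ${aks} -g ${rG1} --query storageProfile.blobCsiDriver.enabled -o tsv
```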

Tenant 1: create app registration and federated identity credential#

  1. Register an application with multi-tenant account support
Terminal window
az ad app create --display-name ${app} -o none \
--sign-in-audience AzureADMultipleOrgs
  2. Get the client ID and object ID of the application in Tenant 1
Terminal window
appClientId=$(az ad app list --display-name ${app} \
--query '[0].appId' -o tsv)
appObjectId=$(az ad app show --id ${appClientId} \
--query id -o tsv)
  3. Create the federated identity credential using the defined parameters
  • Define parameters
Terminal window
fidcParams=$(cat <<EOF
{
"name": "kubernetes-federated-credential",
"issuer": "${oidcIssuer}",
"subject": "system:serviceaccount:${namespace}:${svc}",
"description": "Kubernetes service account federated credential for ${namespace}:${svc}",
"audiences": [
"api://AzureADTokenExchange"
]
}
EOF
)
  • Create federated identity credential
Terminal window
az ad app federated-credential create --id ${appObjectId} \
--parameters "${fidcParams}" -o none
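The token exchange only succeeds when this subject matches the projected service account token exactly, so it's worth double-checking the string format. A local sanity check with the pre-filled example values:

```shell
# Rebuild the expected subject with the pre-filled example values
namespace=blob-fuse
svc=blob-fuse-sa
subject="system:serviceaccount:${namespace}:${svc}"
echo "${subject}"   # → system:serviceaccount:blob-fuse:blob-fuse-sa
```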

Tenant 2: storage account preparation and role assignment#

  1. Log into Tenant 2
Terminal window
az login --tenant ${tenant2}
  2. Create a service principal for the application as an enterprise application
Terminal window
az ad sp create --id ${appClientId} -o none
# This step makes the app show up as an "Enterprise Application"
az ad sp update --id ${appClientId} -o none \
--set tags="['WindowsAzureActiveDirectoryIntegratedApp']"
  3. Get the object ID of the service principal in Tenant 2
Terminal window
spObjectId=$(az ad sp show --id ${appClientId} \
--query id -o tsv)
  4. Create an Azure block blob container in Tenant 2
Terminal window
az group create -n ${rG2} -l ${location2} -o none
az storage account create -n ${sa} -g ${rG2} \
--kind StorageV2 -o none \
--sku Standard_LRS \
--allow-shared-key-access false
saId=$(az storage account show \
-n ${sa} -g ${rG2} --query id -o tsv)
# Create the container via the ARM REST API, since shared key access is disabled
az rest --method PUT -o none \
--url "https://management.azure.com${saId}/blobServices/default/containers/${container}?api-version=2023-05-01" \
--body "{}"
  5. Grant permission to the service principal in Tenant 2
Terminal window
az role assignment create --role "Storage Blob Data Contributor" \
--assignee-object-id ${spObjectId} -o none \
--scope ${saId}/blobServices/default/containers/${container} \
--assignee-principal-type ServicePrincipal

Access storage account from your AKS cluster#

  1. Create service account in AKS cluster
Terminal window
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
name: ${namespace}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${svc}
namespace: ${namespace}
EOF
  2. Randomize the volumeHandle ID
Terminal window
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
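The random suffix matters because `volumeHandle` must be unique across PVs in a cluster — two PVs sharing a handle are treated as the same volume. A local illustration of the generated format, with placeholder values:

```shell
# Placeholder values for illustration only
sa=mystorageacct
container=aks-container
# Same recipe as above: account#container#4-char random suffix
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
echo "${volUniqId}"   # e.g. mystorageacct#aks-container#x7Qp
```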
  3. Deploy Kubernetes storage resources
Terminal window
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azureblob-fuse
provisioner: blob.csi.azure.com
parameters:
skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
- '-o allow_other'
- '--file-cache-timeout-in-seconds=120'
- '--use-attr-cache=true'
- '--cancel-list-on-mount-seconds=10'
- '-o attr_timeout=120'
- '-o entry_timeout=120'
- '-o negative_timeout=120'
- '--log-level=LOG_WARNING'
- '--cache-size-mb=1000'
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: blob.csi.azure.com
name: pv-blob-wi
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: azureblob-fuse
mountOptions:
- -o allow_other
- --file-cache-timeout-in-seconds=120
csi:
driver: blob.csi.azure.com
volumeHandle: ${volUniqId}
volumeAttributes:
mountWithWorkloadIdentityToken: 'true'
storageaccount: ${sa}
containerName: ${container}
clientID: ${appClientId}
resourcegroup: ${rG2}
tenantID: ${tenant2}
subscriptionid: ${subscription2}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-blob-wi
namespace: ${namespace}
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
volumeName: pv-blob-wi
storageClassName: azureblob-fuse
EOF
  4. Mount the block storage into the Pod
Terminal window
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: blob-fuse-wi-mount
namespace: ${namespace}
spec:
serviceAccountName: ${svc}
containers:
- name: demo
image: alpine
command: ["/bin/sh"]
args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
volumeMounts:
- mountPath: /mnt/azure
name: volume
readOnly: false
volumes:
- name: volume
persistentVolumeClaim:
claimName: pvc-blob-wi
EOF
  5. Write a message to a file and check if it works after the Pod starts
Terminal window
kubectl exec blob-fuse-wi-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
Terminal window
kubectl logs blob-fuse-wi-mount -n ${namespace}

Output:

cat: can't open '/mnt/azure/text': No such file or directory
hello!
hello!
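To confirm the write actually landed in Tenant 2, you can also list the container from the Azure side. This assumes you're still logged into Tenant 2 and your own account holds a data-plane role (e.g. Storage Blob Data Reader) on the container, since shared key access is disabled:

```shell
az storage blob list --account-name ${sa} -c ${container} \
  --auth-mode login --query "[].name" -o tsv
```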

Use NFS to access Azure fileshare#

In this section, we will use NFS to access an Azure fileshare. The process is very similar to the single-tenant setup; however, instead of VNet whitelisting, a private link is now the only option.
Unless you must use Azure fileshare, this solution is not suitable for managing many tenants (more than three), as it increases the complexity of the network architecture.

TIP

No identity-based authentication is required when using NFS. The authentication itself is entirely network-based. This is by design.

Preparation: variables set-up#

This section assumes the following variables are used:

Terminal window
# Assuming AKS cluster is in Tenant 1 and Azure storage is in Tenant 2
tenant1=
tenant2=
# AKS cluster name and resource group it locates in Tenant 1
rG1=
aks=
# Define the Kubernetes namespace name; it will be newly created in this section
# Example has been pre-filled here for convenience
namespace=fileshare-nfs
# New storage account name and new resource group/location in Tenant 2
# Both the storage account and resource group will be newly created in this section
# Fileshare name has been pre-filled
location2=
rG2=
sa=
fileshare=aks-share

Remember to get or switch kubeconfig:

Terminal window
az aks get-credentials -n ${aks} -g ${rG1}

Tenant 2: storage account preparation and network configuration#

  1. Log into Tenant 2
Terminal window
az login --tenant ${tenant2}
  2. Prepare the Azure storage account

To use the NFS protocol to access an Azure fileshare, you need to create a Premium storage account:

Terminal window
az group create -n ${rG2} -l ${location2} -o none
# Secure transfer must be disabled to use NFS in Azure fileshare
az storage account create -n ${sa} -g ${rG2} \
--kind FileStorage -o none \
--sku Premium_LRS --default-action Deny \
--public-network-access Disabled \
--allow-shared-key-access false \
--https-only false
saId=$(az storage account show \
-n ${sa} -g ${rG2} --query id -o tsv)
# Create the NFS fileshare via the ARM REST API, since shared key access is disabled
az rest --method PUT -o none \
--url "https://management.azure.com${saId}/fileServices/default/shares/${fileshare}?api-version=2023-05-01" \
--body "{'properties':{'enabledProtocols':'NFS'}}"
  3. Create a private link
    Connections from private endpoints are considered whitelisted, and in a cross-tenant scenario, this is the only way to make it work.
    To create a private link, see also: Creating a private endpoint.

  4. Add a VNet link to the AKS VNet
    In step 3, a private DNS zone called “privatelink.file.core.windows.net” is created. Go into this private DNS zone and find “Virtual Network Links” under “DNS management”. Then create a new link to the VNet in Tenant 1.
    Enabling auto registration is not needed.

  5. Create VNet peering between AKS and the storage account
    See also: Create a virtual network peering - Resource Manager, different subscriptions and Microsoft Entra tenants
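The private link, DNS link, and peering steps above can also be scripted. This is only a sketch with assumed names — `storage-vnet`/`pe-subnet` (an existing VNet and subnet in Tenant 2) and `${aksVnetId}` (the full resource ID of the AKS VNet in Tenant 1) are placeholders, not values defined elsewhere in this article:

```shell
# In Tenant 2: private endpoint for the storage account's "file" sub-resource
az network private-endpoint create -n pe-${sa} -g ${rG2} \
  --vnet-name storage-vnet --subnet pe-subnet \
  --private-connection-resource-id ${saId} \
  --group-id file --connection-name pe-${sa}-conn -o none

# Private DNS zone, plus a DNS zone group so the endpoint's A record is registered
az network private-dns zone create -g ${rG2} \
  -n privatelink.file.core.windows.net -o none
az network private-endpoint dns-zone-group create -g ${rG2} \
  --endpoint-name pe-${sa} -n default \
  --private-dns-zone privatelink.file.core.windows.net \
  --zone-name file -o none

# Cross-tenant: link the zone to the AKS VNet, then peer the VNets
# (full resource IDs are required since the VNets live in different tenants)
az network private-dns link vnet create -g ${rG2} \
  -z privatelink.file.core.windows.net -n aks-vnet-link \
  --virtual-network ${aksVnetId} --registration-enabled false -o none
az network vnet peering create -n storage-to-aks -g ${rG2} \
  --vnet-name storage-vnet --remote-vnet ${aksVnetId} \
  --allow-vnet-access -o none
```

Peering must be configured from both sides; the reverse peering from the AKS VNet in Tenant 1 is analogous, and cross-tenant authentication details are covered in the linked Microsoft guide.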

Access storage account from your AKS cluster#

  1. Randomize volumeHandle ID
Terminal window
volUniqId=${sa}#${fileshare}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
  2. Deploy Kubernetes storage resources
Terminal window
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile-premium-nfs
provisioner: file.csi.azure.com
parameters:
protocol: nfs
skuName: Premium_LRS
reclaimPolicy: Delete
mountOptions:
- nconnect=4 # Azure Linux node does not support nconnect option
- noresvport
- actimeo=30
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: file.csi.azure.com
name: pv-fileshare-nfs
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: azurefile-premium-nfs
mountOptions:
- nconnect=4 # Azure Linux node does not support nconnect option
- noresvport
- actimeo=30
csi:
driver: file.csi.azure.com
volumeHandle: ${volUniqId}
volumeAttributes:
storageAccount: ${sa}
shareName: ${fileshare}
storageEndpointSuffix: core.windows.net
protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
name: ${namespace}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-fileshare-nfs
namespace: ${namespace}
spec:
accessModes:
- ReadWriteMany
storageClassName: azurefile-premium-nfs
volumeName: pv-fileshare-nfs
resources:
requests:
storage: 5Gi
EOF
  3. Mount the fileshare into the Pod
Terminal window
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: fileshare-nfs-mount
namespace: ${namespace}
spec:
containers:
- name: demo
image: alpine
command: ["/bin/sh"]
args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
volumeMounts:
- mountPath: /mnt/azure
name: volume
readOnly: false
volumes:
- name: volume
persistentVolumeClaim:
claimName: pvc-fileshare-nfs
EOF
  4. Write a message to a file and check if it works after the Pod starts
Terminal window
kubectl exec fileshare-nfs-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
Terminal window
kubectl logs fileshare-nfs-mount -n ${namespace}

Output:

cat: can't open '/mnt/azure/text': No such file or directory
hello!
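Because authentication here is purely network-based, a useful extra check is that the volume really is an NFS mount. Assuming the Pod from the previous step is still running:

```shell
kubectl exec fileshare-nfs-mount -n ${namespace} -- mount | grep /mnt/azure
```

The output should show an `nfs4` entry pointing at the storage account endpoint.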

Use NFS to access Azure block blob container#

In this section, we will use NFS to access an Azure block blob container. The whole process is identical to accessing an Azure fileshare via NFS except for the names, so I won’t go into detail here.

Preparation: variables set-up#

This section assumes the following variables are used:

Terminal window
# Assuming AKS cluster is in Tenant 1 and Azure storage is in Tenant 2
tenant1=
tenant2=
# AKS cluster name and resource group it locates in Tenant 1
rG1=
aks=
# Define the Kubernetes namespace name; it will be newly created in this section
# Example has been pre-filled here for convenience
namespace=blob-nfs
# New storage account name and new resource group/location in Tenant 2
# Both the storage account and resource group will be newly created in this section
# Container name has been pre-filled
location2=
rG2=
sa=
container=aks-container

Remember to get or switch kubeconfig:

Terminal window
az aks get-credentials -n ${aks} -g ${rG1}

Tenant 2: storage account preparation and network configuration#

  1. Log into Tenant 2
Terminal window
az login --tenant ${tenant2}
  2. Prepare the Azure storage account

To use the NFS protocol to access an Azure blob container, you need to create a storage account with hierarchical namespace and NFSv3 enabled:

Terminal window
az group create -n ${rG2} -l ${location2} -o none
az storage account create -n ${sa} -g ${rG2} \
--kind StorageV2 --sku Standard_LRS \
--enable-hierarchical-namespace -o none \
--allow-shared-key-access false \
--enable-nfs-v3 --default-action Deny \
--public-network-access Disabled
saId=$(az storage account show \
-n ${sa} -g ${rG2} --query id -o tsv)
# Create the container via the ARM REST API, since shared key access is disabled
az rest --method PUT -o none \
--url "https://management.azure.com${saId}/blobServices/default/containers/${container}?api-version=2023-05-01" \
--body "{}"
  3. Create a private link, add a VNet link to the AKS VNet, and set up VNet peering
NOTE

As mentioned above, I won’t go into detail again. If you need explanations and instructions, check out “Use NFS to access Azure fileshare” in this article.

Access storage account from your AKS cluster#

  1. Randomize volumeHandle ID
Terminal window
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
  2. Deploy Kubernetes storage resources
Terminal window
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azureblob-nfs
provisioner: blob.csi.azure.com
parameters:
protocol: nfs
skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
- nconnect=4 # Azure Linux node does not support nconnect option
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: blob.csi.azure.com
name: pv-blob-nfs
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: azureblob-nfs
mountOptions:
- nconnect=4 # Azure Linux node does not support nconnect option
csi:
driver: blob.csi.azure.com
volumeHandle: ${volUniqId}
volumeAttributes:
storageAccount: ${sa}
containerName: ${container}
storageEndpointSuffix: core.windows.net
protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
name: ${namespace}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-blob-nfs
namespace: ${namespace}
spec:
accessModes:
- ReadWriteMany
storageClassName: azureblob-nfs
volumeName: pv-blob-nfs
resources:
requests:
storage: 5Gi
EOF
  3. Mount the block storage into the Pod
Terminal window
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: blob-nfs-mount
namespace: ${namespace}
spec:
containers:
- name: demo
image: alpine
command: ["/bin/sh"]
args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
volumeMounts:
- mountPath: /mnt/azure
name: volume
readOnly: false
volumes:
- name: volume
persistentVolumeClaim:
claimName: pvc-blob-nfs
EOF
  4. Write a message to a file and check if it works after the Pod starts
Terminal window
kubectl exec blob-nfs-mount -n ${namespace} -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
Terminal window
kubectl logs blob-nfs-mount -n ${namespace}

Output:

cat: can't open '/mnt/azure/text': No such file or directory
hello!

Afterword#

I initially wrote this article in May 2025, but it was not published due to a token refresh issue. After nearly 9 months, the bug has finally been fixed in the managed AKS environment, so I can publish this article.
This feature became Generally Available (GA) on May 10, 2025, but the fix was only deployed as of Feb 8, 2026.

I’m unsure how to comment on the Azure GA process. The bug had been present since the initial GA announcement, yet it took nine months to deploy a fix. Users could self-deploy version 1.27 of the CSI driver during these 9 months, but that would render their AKS cluster unsupported, making it a poor choice overall.
Even though the feature is Generally Available (GA), feature testing is still necessary before deploying it to a production environment. In this case, if a user utilizes workload identity from a user-assigned managed identity, it can take 24 hours for the issue to become reproducible. This leads me to suspect that the developers may not have left the test AKS cluster idle for a full 24 hours during testing, which allowed this bug to ship in the GA version.

This situation serves as a reminder that a 2-hour test should be regarded as short-term and unstable; extending the test duration as much as possible before deploying to production is crucial.

Set up volumes accessing cross-tenant Azure Storage without account access keys in Azure Kubernetes Service
https://blog.joeyc.dev/posts/aks-non-token-storage-cross-tenant/
Author
Joey Chen
Published at
2026-02-26
License
CC BY-SA 4.0