
Enable possibility to set private DNS on AKS provisioning #7163

Closed
gaktive opened this issue Oct 12, 2022 · 14 comments · Fixed by #10340 or #10744
Assignees
Labels
area/aks ember Ember UI Issue JIRA kind/enhancement priority/1 QA/manual-test Indicates issue requires manually testing team/highlander Highlander
Milestone

Comments

@gaktive
Member

gaktive commented Oct 12, 2022

Internal reference: SURE-3392

Request description:
Enable setting a private DNS zone when provisioning an AKS cluster. Currently, such clusters are provisioned manually (or via Terraform), skipping the UI completely.

Actual behavior:
Currently, there is no setting in Rancher to configure a private DNS zone for an AKS cluster.

Expected behavior:
The UI should allow setting a private DNS zone when provisioning an AKS cluster.

Additional notes:
In one user environment, the network infrastructure is more or less the same as the "hub and spoke" topology described on this page: https://docs.microsoft.com/en-us/azure/aks/private-clusters#hub-and-spoke-with-custom-dns

Their DNS is centralized in the hub network, and they want to run AKS clusters in spoke networks.
To get this to work, they need to be able to point the AKS clusters at the central private DNS zone. Otherwise, the created AKS cluster's Kubernetes API endpoint address won't resolve, the VMs can't reach the Kubernetes API, and the cluster fails to provision.

Additional Azure documentation on how to set a custom private DNS zone: https://docs.microsoft.com/en-us/azure/aks/private-clusters#create-a-private-aks-cluster-with-a-custom-private-dns-zone

Usage: under clusters.tf, within the cluster resource definition, there'd be a private_dns_zone_id argument to leverage.
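For reference, a minimal Terraform sketch of where that argument sits in the azurerm provider's AKS resource; all names, resource references, and sizes below are placeholders, not taken from the reporter's environment:

```hcl
# Sketch only: shows private_dns_zone_id on a private AKS cluster.
# Resource names and the node pool settings are illustrative placeholders.
resource "azurerm_kubernetes_cluster" "aks" {
  name                    = "private-aks"
  location                = "eastus"
  resource_group_name     = azurerm_resource_group.rg.name
  dns_prefix              = "private-aks"
  private_cluster_enabled = true

  # Full resource ID of the central private DNS zone in the hub network
  private_dns_zone_id = azurerm_private_dns_zone.hub.id

  # A user-assigned identity is required when using a custom private DNS zone
  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.aks.id]
  }

  default_node_pool {
    name       = "agentpool"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }
}
```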

@nwmac
Member

nwmac commented Oct 26, 2022

I think we need to talk to the backend team about this to understand where we would wire it into the request we send to create an AKS cluster.

@mantis-toboggan-md
Member

There's no private DNS zone field defined in the relevant schema, so I think we'll need backend involvement to add support for the feature, then UI work to expose it; the backend issue is here.

@gaktive
Member Author

gaktive commented Jul 11, 2023

We should be unblocked now by rancher/aks-operator#131 -- @mantis-toboggan-md can you take a look and confirm?

@mantis-toboggan-md
Member

@gaktive I don't think we're unblocked here. I still don't see this field defined in the aksClusterConfigSpec schema; I believe we're waiting on rancher/rancher#39422

@gaktive gaktive added team/highlander Highlander and removed team/area2 Hostbusters labels Jul 13, 2023
@gaktive
Member Author

gaktive commented Jul 13, 2023

Taking Team 2 (Hostbusters) off and putting Highlander instead.

@zube zube bot added the area/aks label Aug 22, 2023
@gaktive gaktive modified the milestones: v2.8.0, v2.8.next1 Sep 1, 2023
@mjura
Contributor

mjura commented Oct 31, 2023

To be specific, we are talking about the case where AKS is deployed as a private cluster and the cluster is created with these options:

--enable-managed-identity --assign-identity <resourceID> --private-dns-zone <custom private dns zone or custom private dns subzone resourceID>

Here is the link to documentation:

https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal#create-a-private-aks-cluster-with-a-custom-private-dns-zone-or-private-dns-subzone
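Putting those flags together, an illustrative az CLI invocation might look like the following; the resource group, cluster name, and resource IDs are placeholders, not values from any environment in this issue:

```shell
# Illustrative only: creates a private AKS cluster pointed at a custom
# private DNS zone. All names and IDs below are placeholders.
az aks create \
  --resource-group my-rg \
  --name private-aks \
  --enable-private-cluster \
  --enable-managed-identity \
  --assign-identity "/subscriptions/<sub>/resourcegroups/my-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity" \
  --private-dns-zone "/subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/privateDnsZones/privatelink.eastus.azmk8s.io"
```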

Fields that have to be set:
https://github.com/rancher/aks-operator/blob/release-v2.8/pkg/apis/aks.cattle.io/v1/types.go#L127
https://github.com/rancher/aks-operator/blob/release-v2.8/pkg/apis/aks.cattle.io/v1/types.go#L144
https://github.com/rancher/aks-operator/blob/release-v2.8/pkg/apis/aks.cattle.io/v1/types.go#L146
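In aksConfig terms, the relevant fields look roughly like this; the field names are taken from the validated spec posted later in this thread, and the resource ID values are placeholders:

```yaml
# Fragment only: the aksConfig fields involved in private DNS zone support.
aksConfig:
  managedIdentity: true
  privateCluster: true
  privateDnsZone: /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/privateDnsZones/<zone>
  userAssignedIdentity: /subscriptions/<sub>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>
```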

@mantis-toboggan-md
Member

@cpinjani is the expected format for privateDNSZone documented somewhere? What you have quoted doesn't seem to line up with the error message in your screenshot

@cpinjani

cpinjani commented Apr 1, 2024

@mantis-toboggan-md In the UI field "Private DNS Zone ID", the user is supposed to provide the full resource ID of the privateDNSZone, which on the Azure portal has the following format, as the error message states. Reference

/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Network/privateDnsZones/<private_dns_zone_name>
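That format can be sanity-checked client-side; a minimal sketch with a hypothetical helper (not part of Rancher or the Dashboard):

```python
import re

# Matches the full private DNS zone resource ID format quoted above:
# /subscriptions/<id>/resourceGroups/<rg>/providers/Microsoft.Network/privateDnsZones/<zone>
ZONE_ID_RE = re.compile(
    r"^/subscriptions/[^/]+"
    r"/resourceGroups/[^/]+"
    r"/providers/Microsoft\.Network/privateDnsZones/[^/]+$",
    re.IGNORECASE,
)

def is_valid_private_dns_zone_id(zone_id: str) -> bool:
    """Return True if zone_id looks like a full private DNS zone resource ID."""
    return ZONE_ID_RE.match(zone_id) is not None
```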

[screenshot]

Let me know if this clarifies things.

@cpinjani

Validation passed on build v2.9-c1d92781561c0d4d5a18b32b2f312655dfe67e9c-head.
The user is able to set a private DNS zone during AKS provisioning.

[screenshot]

Spec:

aksConfig:
    authBaseUrl: null
    authorizedIpRanges: null
    azureCredentialSecret: cattle-global-data:cc-dcgtt
    baseUrl: null
    clusterName: cpinjani-priv
    dnsPrefix: cpinjani-priv
    dnsServiceIp: 10.0.0.10
    dockerBridgeCidr: null
    httpApplicationRouting: null
    imported: false
    kubernetesVersion: 1.29.2
    linuxAdminUsername: azureuser
    loadBalancerSku: standard
    logAnalyticsWorkspaceGroup: null
    logAnalyticsWorkspaceName: null
    managedIdentity: true
    monitoring: null
    networkPlugin: kubenet
    networkPolicy: calico
    nodePools:
      - availabilityZones:
          - '1'
          - '2'
          - '3'
        count: 1
        maxPods: 110
        maxSurge: '1'
        mode: System
        name: agentpool
        orchestratorVersion: 1.29.2
        osDiskSizeGB: 128
        osDiskType: Managed
        osType: Linux
        vmSize: Standard_DS2_v2
    outboundType: loadBalancer
    podCidr: null
    privateCluster: true
    privateDnsZone: >-
      /subscriptions/<REDACTED>/resourceGroups/<REDACTED>/providers/Microsoft.Network/privateDnsZones/<REDACTED>
    resourceGroup: <REDACTED>
    resourceLocation: eastus
    serviceCidr: 10.0.0.0/16
    subnet: null
    tags:
      Account Type: group
    userAssignedIdentity: >-
      /subscriptions/<REDACTED>/resourcegroups/<REDACTED>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<REDACTED>
    virtualNetwork: null
    virtualNetworkResourceGroup: null

@valaparthvi

Validated again on Rancher v2.9-a9355940c0629ca419b7dd3e4098c4ea0e52c0c0-head with Dashboard master (46a44c1); it is fixed.

Cluster update works as expected too.
