AKS
This guide provides detailed instructions for installing Kuberise.io on an Azure Kubernetes Service (AKS) cluster. It covers the preparation steps, necessary configurations, and how to ensure that all components work seamlessly within your AKS environment.
Prerequisites
Before you begin, ensure you have the following:
- Azure Account: Access to an Azure account with permissions to create and manage AKS clusters.
- AKS Cluster: A running AKS cluster. You can create one using Azure Portal, Azure CLI, or Infrastructure as Code tools like Terraform.
- CLI Tools: Install `kubectl`, `helm`, `git`, `htpasswd`, and `openssl` on your local machine.
- Azure Roles and Policies: Proper Azure roles and policies configured for service accounts.
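A quick way to confirm these tools are available on your machine (a minimal sketch):
# Quick sanity check that the required CLI tools are on the PATH
for tool in kubectl helm git htpasswd openssl; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done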
Preparation
1. AKS Cluster Setup
- Ensure that Workload Identity and OIDC Issuer are enabled in your AKS cluster to securely manage identities and access.
- Make sure you have created a private hosted zone in Azure for the internal domain address.
- Create an `azure.json` config file and an `internal-dns` service account in the `internal-dns` namespace, associated with an identity that has permission to configure DNS records in that private DNS zone.
Sample Terraform code for creating a private DNS zone and a Kubernetes service account:
provider "kubernetes" {
host = azurerm_kubernetes_cluster.this.kube_config.0.host
client_certificate = base64decode(azurerm_kubernetes_cluster.this.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.this.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.this.kube_config.0.cluster_ca_certificate)
}
# Create a private DNS zone
resource "azurerm_private_dns_zone" "kuberise_internal" {
name = local.dns_zone_name
resource_group_name = azurerm_resource_group.this.name
}
# Link Private DNS Zone to VNet
resource "azurerm_private_dns_zone_virtual_network_link" "internal" {
name = "kuberise-internal-link"
resource_group_name = azurerm_resource_group.this.name
private_dns_zone_name = azurerm_private_dns_zone.kuberise_internal.name
virtual_network_id = azurerm_virtual_network.this.id
registration_enabled = true # This enables auto-registration of DNS records
}
resource "azurerm_user_assigned_identity" "this" {
name = "internal-dns"
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
}
resource "azurerm_federated_identity_credential" "this" {
name = "${azurerm_kubernetes_cluster.this.name}-ServiceAccount-${local.internal_dns_ns}-${local.internal_dns_sa}"
resource_group_name = azurerm_resource_group.this.name
audience = ["api://AzureADTokenExchange"]
issuer = azurerm_kubernetes_cluster.this.oidc_issuer_url
parent_id = azurerm_user_assigned_identity.this.id
subject = "system:serviceaccount:${local.internal_dns_ns}:${local.internal_dns_sa}"
}
resource "azurerm_role_assignment" "private_dns_zone_contributor" {
principal_id = azurerm_user_assigned_identity.this.principal_id
role_definition_name = "Private DNS Zone Contributor"
scope = azurerm_private_dns_zone.kuberise_internal.id
skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "resource_group_reader" {
principal_id = azurerm_user_assigned_identity.this.principal_id
role_definition_name = "Reader"
scope = azurerm_resource_group.this.id
skip_service_principal_aad_check = true
}
# Create internal-dns namespace if it doesn't exist
resource "kubernetes_namespace" "internal_dns_ns" {
metadata {
name = local.internal_dns_ns
}
}
resource "kubernetes_service_account" "internal_dns_sa" {
metadata {
name = local.internal_dns_sa
namespace = local.internal_dns_ns
annotations = {
"azure.workload.identity/client-id" = azurerm_user_assigned_identity.this.client_id
}
labels = {
"azure.workload.identity/use" = "true"
}
}
}
# Fetch the current Azure client configuration
data "azurerm_client_config" "current" {}
resource "kubernetes_secret" "internal_dns_secret" {
metadata {
name = "azure-config-file"
namespace = local.internal_dns_ns
}
data = {
"azure.json" = jsonencode({
tenantId = data.azurerm_client_config.current.tenant_id
subscriptionId = data.azurerm_client_config.current.subscription_id
resourceGroup = azurerm_resource_group.this.name
userAssignedIdentityID = azurerm_user_assigned_identity.this.client_id
useWorkloadIdentityExtension = true
})
}
}
# Output the clientId of managed identity with DNS zone permissions
output "internal_dns_client_id" {
value = azurerm_user_assigned_identity.this.client_id
}
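If the cluster already exists and was created without these features, Workload Identity and the OIDC Issuer can also be enabled with the Azure CLI. This is a sketch; the resource group and cluster names are placeholders:
# Enable the OIDC issuer and Workload Identity on an existing AKS cluster
az aks update \
  --resource-group <your-resource-group> \
  --name <your-aks-cluster> \
  --enable-oidc-issuer \
  --enable-workload-identity

# Print the OIDC issuer URL (the value used by the federated identity credential)
az aks show \
  --resource-group <your-resource-group> \
  --name <your-aks-cluster> \
  --query "oidcIssuerProfile.issuerUrl" -o tsv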
2. Kubernetes Secret for Public DNS Provider (e.g., Cloudflare)
If your public DNS provider is Cloudflare, create a secret containing an API token to allow ExternalDNS to interact with Cloudflare:
kubectl create secret generic cloudflare \
  --namespace external-dns \
  --from-literal=cloudflare_api_token=YOUR_CLOUDFLARE_API_TOKEN
Replace `YOUR_CLOUDFLARE_API_TOKEN` with your actual Cloudflare API token.
This configuration ensures that ExternalDNS can authenticate with Cloudflare and manage DNS records for your domain.
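The command above assumes the external-dns namespace already exists; if it does not, create it first and then confirm the secret is in place (a small sketch):
# Create the namespace if it is missing, then confirm the secret exists
kubectl get namespace external-dns || kubectl create namespace external-dns
kubectl get secret cloudflare --namespace external-dns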
Installation Steps
1. Fork and Clone the Repository
Fork the Kuberise.io repository and clone it to your local machine:
# Clone your forked repository
git clone https://github.com/yourusername/kuberise.io.git
cd kuberise.io
2. Update Configuration Files
Modify the `values-aks.yaml` file to enable and configure the necessary components for your AKS cluster.
Enable components in `values-aks.yaml`:
# values-aks.yaml
helm:
  external-dns:
    enabled: true
  internal-dns:
    enabled: true
    # Use the external-dns chart from Kubernetes SIGs
    repoURL: https://kubernetes-sigs.github.io/external-dns/
    targetRevision: 1.15.0
    chart: external-dns
  ingress-nginx-external:
    enabled: true
  ingress-nginx-internal:
    enabled: true
  cert-manager:
    enabled: true
  # Enable other components as needed
- `external-dns` and `internal-dns`: Both instances of ExternalDNS are enabled.
- `ingress-nginx-external` and `ingress-nginx-internal`: Both ingress controllers are enabled.
- `cert-manager`: Enabled for SSL certificate management.
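The `internal-dns` entry pins the external-dns chart to targetRevision 1.15.0. If you want to confirm that a chart version exists before pinning it, you can query the chart repository with Helm (a quick sketch):
# List the published versions of the external-dns chart
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update external-dns
helm search repo external-dns/external-dns --versions | head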
3. Configure Helm Chart Values
Adjust the values for the different platform tools and applications in the `aks` directory to suit your environment.
4. Install Kuberise.io
Run the installation script provided by Kuberise.io:
./scripts/install.sh [CONTEXT] [NAME] [REPO_URL] [REVISION] [DOMAIN] [TOKEN]
Parameters:
- `[CONTEXT]`: Kubernetes context name (e.g., `aks-context`)
- `[NAME]`: Platform name used in your values files (e.g., `aks`)
- `[REPO_URL]`: URL to your forked repository
- `[REVISION]`: Git branch or tag (e.g., `main`)
- `[DOMAIN]`: Your domain name (e.g., `aks.kuberise.dev`)
- `[TOKEN]`: GitHub token for accessing the repository if it is private (optional). This token is also used by ArgoCD Image Updater to write updated manifests to the repository.
Example:
./scripts/install.sh aks-context aks https://github.com/yourusername/kuberise.io.git main aks.kuberise.dev
This script sets up ArgoCD and the app-of-apps pattern, deploying all the enabled components defined in your values files.
5. Verify the Installation
Check that all pods are running:
kubectl get pods --all-namespaces
Ensure that:
- InternalDNS is updating DNS records in Azure Private DNS.
- ExternalDNS is updating DNS records in your public DNS provider.
- Ingress-NGINX-External controllers are up and configured correctly. You can access dashboards (e.g., https://grafana.aks.kuberise.dev).
- Ingress-NGINX-Internal controllers are up and configured correctly. You can access internal services from other pods in the cluster (e.g., `curl kuberise.internal/backend`).
- Cert-Manager is issuing certificates.
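A few spot checks can help here (a sketch; the argocd namespace and the other namespace names assume the defaults used in this guide):
# ArgoCD applications should report Synced/Healthy
kubectl get applications -n argocd

# Both ExternalDNS instances and cert-manager should be running without errors
kubectl get pods -n external-dns
kubectl get pods -n internal-dns
kubectl get pods -n cert-manager

# Certificates issued by cert-manager should show READY=True
kubectl get certificates --all-namespaces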
Additional Configuration
1. NGINX Ingress Configuration
External Ingress-NGINX Controller
Set up the external Ingress-NGINX controller with the necessary service annotations to integrate with Azure and ExternalDNS.
Example: In this example, the public domain of the cluster is `*.aks.kuberise.dev`.
controller:
  service:
    annotations:
      # Use an external (public) load balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "false"
      # Health probe path for the load balancer
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
      # ExternalDNS annotation for wildcard domain
      external-dns.alpha.kubernetes.io/hostname: "*.aks.kuberise.dev"
      # ExternalDNS owner identifier
      external-dns.alpha.kubernetes.io/owner: "aks-externaldns"
  extraArgs:
    # Default SSL certificate to use if none is specified in Ingress resources
    default-ssl-certificate: cert-manager/wildcard-tls-letsencrypt-production
- `external-dns.alpha.kubernetes.io/hostname`: Specifies the wildcard domain for your public services.
- `external-dns.alpha.kubernetes.io/owner`: Identifies the owner of the DNS records, useful when multiple instances of ExternalDNS are running.
- `service.beta.kubernetes.io/azure-load-balancer-internal`: Set to `"false"` to create an external (public) load balancer.
- `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path`: Sets the health probe path for the load balancer.
- `default-ssl-certificate`: Defines the secret containing the default SSL certificate for the Ingress-NGINX controller. This is useful for using a wildcard certificate for all public services.
There is additional configuration for `ingress-nginx-external` in the default folder, which is common to all platforms:
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  ingressClassResource:
    enabled: true
    name: nginx-external
    default: false
    controllerValue: "k8s.io/ingress-nginx-external"
  ingressClass: nginx-external
  watchIngressWithoutClass: false
  electionID: ingress-controller-leader-external
  ingressClassByName: true
- `ingressClassResource`:
  - `enabled: true`: Enables the creation of an IngressClass resource.
  - `name: nginx-external`: Sets the name of the ingress class.
  - `controllerValue: "k8s.io/ingress-nginx-external"`: Specifies the controller value to identify the ingress controller.
- `watchIngressWithoutClass: false`: Ensures the controller only watches Ingresses with the specified class.
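To see how these settings come together, here is a hedged sketch of a public Ingress served by the external controller; the app name, namespace, and hostname are hypothetical, and the wildcard certificate configured above is used because no TLS secret is specified:
# Hypothetical public Ingress using the nginx-external class
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
spec:
  ingressClassName: nginx-external
  rules:
    - host: myapp.aks.kuberise.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
EOF
Because the wildcard record *.aks.kuberise.dev already points at the external load balancer, no per-Ingress DNS entry is needed.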
Internal Ingress-NGINX Controller
Set up the internal Ingress-NGINX controller for internal services.
controller:
  service:
    annotations:
      # Use an internal (private) load balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # Health probe path for the load balancer
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
      # ExternalDNS annotation for internal hostname
      external-dns.alpha.kubernetes.io/internal-hostname: "kuberise.internal"
      # ExternalDNS owner identifier
      external-dns.alpha.kubernetes.io/owner: "internal-dns"
  watchIngressWithoutClass: false
- `service.beta.kubernetes.io/azure-load-balancer-internal`: Set to `"true"` to create an internal (private) load balancer.
- `external-dns.alpha.kubernetes.io/internal-hostname`: Specifies the internal hostname for your services.
- `external-dns.alpha.kubernetes.io/owner`: Identifies the owner for private DNS records.
Additional configuration in the default folder:
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  ingressClassResource:
    enabled: true
    name: nginx-internal
    default: false
    controllerValue: "k8s.io/ingress-nginx-internal"
  ingressClass: nginx-internal
  watchIngressWithoutClass: false
  electionID: ingress-controller-leader-internal
  ingressClassByName: true
- `ingressClassResource`:
  - `enabled: true`: Enables the creation of an IngressClass resource.
  - `name: nginx-internal`: Sets the name of the ingress class.
  - `controllerValue: "k8s.io/ingress-nginx-internal"`: Specifies the controller value for the internal ingress controller.
- `service.type: ClusterIP`: In the default configuration the controller service is of type ClusterIP, so it is only accessible within the cluster and no external load balancer is created.
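A quick way to exercise the internal path from inside the cluster (a sketch; the /backend path is only an example and assumes a backend service is deployed):
# Run a temporary pod and call an internal service through the internal ingress
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://kuberise.internal/backend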
2. Configure ExternalDNS for Public DNS Provider
Configure ExternalDNS to interact with your public DNS provider.
Example: Using Cloudflare as the DNS provider.
sources:
  - service # Only external ingress service needs DNS translation
provider: cloudflare
cloudflare:
  # Kubernetes secret containing Cloudflare API token
  secretName: "cloudflare"
- `sources`:
  - `service`: Configured to monitor services for DNS records. The only resource that needs DNS translation is the service of the external ingress-nginx.
- `provider: cloudflare`: Specifies Cloudflare as the DNS provider. You can use other providers like Azure, AWS, etc.
- `cloudflare.secretName`: Name of the Kubernetes secret containing the Cloudflare API token.
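To confirm that ExternalDNS is actually publishing records, you can check its logs and resolve one of the public hostnames (a sketch; the deployment name assumes the chart default and the hostname is an example):
# Look for record creation/update events in the ExternalDNS logs
kubectl logs -n external-dns deployment/external-dns | tail -n 20

# Resolve a public hostname that should point at the external load balancer
dig +short grafana.aks.kuberise.dev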
3. Configure Internal DNS
To manage DNS records for internal services, deploy a second instance of ExternalDNS and name it `internal-dns`.
The ServiceAccount of the `internal-dns` instance should be associated with an identity that has permission to configure DNS records in the private DNS zone. For that purpose, the service account should have the label `azure.workload.identity/use: "true"` and the annotation `azure.workload.identity/client-id` set to the client ID of the managed identity. For the sake of full automation, you can create this service account in Terraform where you create the AKS cluster and the private DNS zone.
sources:
  - service # Only internal ingress service needs DNS translation
provider:
  name: azure-private-dns
podLabels:
  azure.workload.identity/use: "true"
domainFilters:
  - kuberise.internal
extraVolumes:
  - name: azure-config-file
    secret:
      secretName: azure-config-file
extraVolumeMounts:
  - name: azure-config-file
    mountPath: /etc/kubernetes
    readOnly: true
- `sources`:
  - `service`: Configured to monitor services for DNS records. The only resource that needs DNS translation is the service of the internal ingress-nginx.
- `provider.name: azure-private-dns`: Specifies Azure Private DNS as the provider.
- `podLabels`:
  - `azure.workload.identity/use: "true"`: Label to identify the service account associated with the managed identity.
- `domainFilters`:
  - `kuberise.internal`: Filters the DNS records for the internal domain.
- `extraVolumes`:
  - `secretName: azure-config-file`: Name of the Kubernetes secret containing the Azure configuration file. For the sake of automation, this secret is created in Terraform.
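To confirm that the private zone is being populated, you can inspect its records with the Azure CLI (a sketch; the resource group is a placeholder and the zone name matches the internal domain used above):
# List A records in the private DNS zone managed by internal-dns
az network private-dns record-set a list \
  --resource-group <your-resource-group> \
  --zone-name kuberise.internal \
  --output table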
Cloudflare Token
If you are using Cloudflare for your DNS, you can create a Cloudflare API token and put it in the environment variable `CLOUDFLARE_API_TOKEN`. The installation script will then automatically create a Kubernetes secret, and ExternalDNS will use it to update the DNS records for your external Ingresses.
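In practice this means exporting the token before running the installer (a sketch reusing the example parameters from above):
# Export the token so install.sh can create the Kubernetes secret for ExternalDNS
export CLOUDFLARE_API_TOKEN=<your-cloudflare-api-token>
./scripts/install.sh aks-context aks https://github.com/yourusername/kuberise.io.git main aks.kuberise.dev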
Post Installation
The `./scripts/install.sh` script is idempotent; you can run it multiple times to update your installation without any problem. You need to run it again if you change the values of the ArgoCD Helm chart or the install.sh script itself. You also have to run install.sh separately for each platform: for example, if you create multiple platforms for different environments or purposes, run the script once per platform.
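For example, two platforms defined in the same repository would each get their own run of the script (the second context, name, and domain below are hypothetical):
# One run per platform; each sets up its own ArgoCD app-of-apps
./scripts/install.sh aks-context aks https://github.com/yourusername/kuberise.io.git main aks.kuberise.dev
./scripts/install.sh aks-staging-context aks-staging https://github.com/yourusername/kuberise.io.git main staging.aks.kuberise.dev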