EKS
This guide provides detailed instructions for installing Kuberise.io on an Amazon Elastic Kubernetes Service (EKS) cluster. It covers the preparation steps, necessary configurations, and how to ensure that all components work seamlessly within your EKS environment.
Prerequisites
Before you begin, ensure you have the following:
- AWS Account: Access to an AWS account with permissions to create and manage EKS clusters.
- EKS Cluster: A running EKS cluster. You can create one using AWS Console, AWS CLI, or Infrastructure as Code tools like Terraform.
- CLI Tools: Install kubectl, helm, git, htpasswd, and openssl on your local machine.
- IAM Roles and Policies: Proper IAM roles and policies configured for service accounts.
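A quick way to confirm the CLI tools are available before starting (a hedged sketch; any reasonably recent versions should work):

# Print versions to confirm each prerequisite tool is installed
kubectl version --client
helm version
git --version
htpasswd -nb test test >/dev/null && echo "htpasswd OK"
openssl version

# Confirm kubectl can reach your EKS cluster
kubectl get nodes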
Preparation
1. Install AWS Load Balancer Controller
Ensure the AWS Load Balancer Controller is installed in your EKS cluster. This controller manages the AWS Elastic Load Balancers for Kubernetes services of type LoadBalancer.
Using Terraform:
When creating your EKS cluster with Terraform, you can include the AWS Load Balancer Controller by:
- Defining IAM Policy: Create an IAM policy with the necessary permissions for the controller.
- Creating IAM Role: Attach the policy to an IAM role associated with the Kubernetes service account used by the controller.
- Deploying the Controller: Use the Helm provider in Terraform to install the AWS Load Balancer Controller Helm chart.
Example:
data "aws_iam_policy_document" "aws_lbc" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["pods.eks.amazonaws.com"]
}
actions = [
"sts:AssumeRole",
"sts:TagSession"
]
}
}
resource "aws_iam_role" "aws_lbc" {
name = "${aws_eks_cluster.eks.name}-aws-lbc"
assume_role_policy = data.aws_iam_policy_document.aws_lbc.json
}
resource "aws_iam_policy" "aws_lbc" {
policy = file("./iam/AWSLoadBalancerController.json")
name = "AWSLoadBalancerController"
}
resource "aws_iam_role_policy_attachment" "aws_lbc" {
policy_arn = aws_iam_policy.aws_lbc.arn
role = aws_iam_role.aws_lbc.name
}
resource "aws_eks_pod_identity_association" "aws_lbc" {
cluster_name = aws_eks_cluster.eks.name
namespace = "kube-system"
service_account = "aws-load-balancer-controller"
role_arn = aws_iam_role.aws_lbc.arn
}
resource "helm_release" "aws_lbc" {
name = "aws-load-balancer-controller"
repository = "https://aws.github.io/eks-charts"
chart = "aws-load-balancer-controller"
namespace = "kube-system"
version = "1.10.1"
set {
name = "clusterName"
value = aws_eks_cluster.eks.name
}
set {
name = "serviceAccount.name"
value = "aws-load-balancer-controller"
}
set {
name = "vpcId"
value = aws_vpc.main.id
}
depends_on = [helm_release.cluster_autoscaler]
}
Note: Ensure the service account aws-load-balancer-controller in the kube-system namespace is annotated with the IAM role ARN.
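To verify the controller after deployment, you can inspect its deployment and service account. The names below follow the Terraform example above and may differ if you changed them:

# The controller should be running in kube-system
kubectl -n kube-system get deployment aws-load-balancer-controller

# Inspect the service account used by the controller
kubectl -n kube-system get serviceaccount aws-load-balancer-controller -o yaml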
2. Configure External Ingress-NGINX Service Annotations
Set up the external Ingress-NGINX controller with the necessary service annotations to integrate with AWS and ExternalDNS:
Example: In this example, the public domain of the cluster is *.eks.kuberise.dev.
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      external-dns.alpha.kubernetes.io/hostname: "*.eks.kuberise.dev"
      external-dns.alpha.kubernetes.io/owner: "eks-externaldns"
  extraArgs:
    default-ssl-certificate: cert-manager/wildcard-tls-letsencrypt-production
- external-dns.alpha.kubernetes.io/hostname: Specifies the wildcard domain for your public services.
- external-dns.alpha.kubernetes.io/owner: Identifies the owner of the DNS records, which is useful when multiple instances of ExternalDNS are running.
- service.beta.kubernetes.io/aws-load-balancer-type: Specifies that the load balancer should be external, meaning it will be accessible from the internet.
- service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: Indicates that the Network Load Balancer (NLB) should use IP addresses for routing traffic to targets.
- service.beta.kubernetes.io/aws-load-balancer-scheme: Configures the load balancer to be internet-facing, making it accessible from the public internet.
- default-ssl-certificate: Defines the secret containing the default SSL certificate for the Ingress-NGINX controller. If you don't specify a certificate secret for a public service, this default certificate is used, and because it is a wildcard certificate it works for all public services.
These annotations ensure that ExternalDNS can automatically create and manage DNS records for your external ingress, and that the AWS Load Balancer Controller provisions the correct type of load balancer for public access.
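The default certificate referenced above is expected to exist as a secret in the cert-manager namespace. A minimal sketch of the corresponding wildcard Certificate resource, assuming a ClusterIssuer named letsencrypt-production (the issuer name is an assumption; use the issuer your platform actually defines):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-tls-letsencrypt-production
  namespace: cert-manager
spec:
  # Secret referenced by default-ssl-certificate above
  secretName: wildcard-tls-letsencrypt-production
  dnsNames:
    - "*.eks.kuberise.dev"
  issuerRef:
    # Assumed ClusterIssuer name; replace with your own
    name: letsencrypt-production
    kind: ClusterIssuer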
Some additional configuration for ingress-nginx-external lives in the default folder, which means it is shared by all other platforms in this repository.
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  ingressClassResource:
    enabled: true
    name: nginx-external
    default: false
    controllerValue: "k8s.io/ingress-nginx-external"
  ingressClass: nginx-external
  watchIngressWithoutClass: false
  electionID: ingress-controller-leader-external
  ingressClassByName: true
- nginx-external: This is the name of the ingress class that the Ingress-NGINX controller will use. It helps to distinguish between different ingress controllers running in the same cluster.
- controllerValue: This specifies the value that the ingress controller will use to identify itself. In this case, it is set to "k8s.io/ingress-nginx-external", which matches the nginx-external ingress class. This ensures that the correct ingress controller handles the ingress resources associated with this class.
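For reference, a public service would then reference this class in its Ingress resource. A minimal sketch, assuming a hypothetical grafana Service listening on port 80 in the monitoring namespace (the actual manifests are generated by the platform's Helm charts):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  # Selects the external Ingress-NGINX controller configured above
  ingressClassName: nginx-external
  rules:
    - host: grafana.eks.kuberise.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 80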
3. Define ExternalDNS Sources
Configure ExternalDNS to monitor only the external Ingress-NGINX service for DNS records:
sources:
- service
- Explanation: Since the external Ingress-NGINX service is the only service exposed to the internet, limiting ExternalDNS to this source ensures that only one wildcard DNS record is created for it.
4. Configure ExternalDNS for Public DNS Provider
Based on your public DNS provider, add the necessary configuration so that ExternalDNS can create DNS records for your domain. Here is an example for the Cloudflare provider:
Example: Configure ExternalDNS to use Cloudflare as the DNS provider.
provider: cloudflare
cloudflare:
  secretName: "cloudflare" # Name of the Kubernetes secret containing the Cloudflare API token
- cloudflare.secretName: The name of the Kubernetes secret in the external-dns namespace that contains your Cloudflare API token.
- cloudflare.proxied: Set to true if you want to enable Cloudflare's proxy feature (optional).
Create the Cloudflare Secret:
kubectl create secret generic cloudflare \
--namespace external-dns \
--from-literal=cloudflare_api_token=YOUR_CLOUDFLARE_API_TOKEN
Replace YOUR_CLOUDFLARE_API_TOKEN with your actual Cloudflare API token.
This configuration ensures that ExternalDNS can authenticate with Cloudflare and manage DNS records for your domain.
5. Configure Internal DNS
To manage DNS records for internal services, a second ExternalDNS installation is used. This is necessary because it is not possible to manage two different DNS providers with one ExternalDNS deployment. This second installation is called internal-dns and is deployed in the internal-dns namespace. The provider is AWS, and the internal DNS domain is kuberise.internal.
Example: Configure ExternalDNS for internal services.
provider: aws
domainFilters:
  - "kuberise.internal"
- provider: Specifies AWS as the DNS provider.
- domainFilters: Filters the domains managed by this ExternalDNS instance to kuberise.internal.
Sample Internal Microservice:
A sample internal microservice called backend can be deployed and accessed using the kuberise.internal/backend address. You can change the internal domain as needed, but make sure to create the private hosted zone for that domain in your AWS account.
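A hedged Terraform sketch of that private hosted zone, assuming the aws_vpc.main resource from the earlier example (adjust names to your own setup):

resource "aws_route53_zone" "internal" {
  name = "kuberise.internal"

  # Associating a VPC makes this a private hosted zone,
  # resolvable only from inside that VPC
  vpc {
    vpc_id = aws_vpc.main.id
  }
}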
6. Configure Internal Ingress-NGINX Service Annotations
Set up the internal Ingress-NGINX controller with the necessary service annotations to integrate with AWS and ExternalDNS:
Example: In this example, the internal domain of the cluster is kuberise.internal.
controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/internal-hostname: "kuberise.internal"
      external-dns.alpha.kubernetes.io/access: "private"
      external-dns.alpha.kubernetes.io/owner: "private-dns"
- external-dns.alpha.kubernetes.io/internal-hostname: Sets the internal hostname for the service, which ExternalDNS uses to create DNS records.
- external-dns.alpha.kubernetes.io/access: Indicates that the DNS records created by ExternalDNS should be private.
- external-dns.alpha.kubernetes.io/owner: Specifies the owner of the DNS records, which helps in managing and identifying the records created by ExternalDNS.
These annotations ensure that ExternalDNS can automatically create and manage DNS records for your internal ingress service.
Some additional configuration for ingress-nginx-internal lives in the default folder, which means it is shared by all other platforms in this repository.
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  service:
    type: ClusterIP
  ingressClassResource:
    enabled: true
    name: nginx-internal
    default: false
    controllerValue: "k8s.io/ingress-nginx-internal"
  ingressClass: nginx-internal
  watchIngressWithoutClass: false
  electionID: ingress-controller-leader-internal
  ingressClassByName: true
- service.type: Specifies that the service type should be ClusterIP, meaning it will be accessible only within the cluster and AWS will not provision a load balancer for it.
- nginx-internal: This is the name of the ingress class that the Ingress-NGINX controller will use for internal services.
- controllerValue: This specifies the value that the ingress controller will use to identify itself. In this case, it is set to "k8s.io/ingress-nginx-internal", which matches the nginx-internal ingress class. This ensures that the correct ingress controller handles the ingress resources associated with this class.
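For reference, an internal service such as the backend sample would then use this class. A minimal sketch, assuming a hypothetical backend Service on port 80 in a backend namespace (the real manifests are managed by the platform):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  namespace: backend
spec:
  # Selects the internal Ingress-NGINX controller configured above
  ingressClassName: nginx-internal
  rules:
    - host: kuberise.internal
      http:
        paths:
          - path: /backend
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80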
Installation Steps
1. Fork and Clone the Repository
Fork the Kuberise.io repository and clone it to your local machine:
git clone https://github.com/yourusername/kuberise.io.git
cd kuberise.io
2. Update Configuration Files
Modify the values-eks.yaml file to enable and configure the necessary components for your EKS cluster.
Enable Components in values-eks.yaml:
helm:
  external-dns:
    enabled: true
  ingress-nginx-external:
    enabled: true
  cert-manager:
    enabled: true
  # Enable other components as needed
3. Configure Helm Chart Values
Adjust the values for different platform tools and applications in the values/eks directory.
4. Install Kuberise.io
Run the installation script provided by Kuberise.io:
./scripts/install.sh [CONTEXT] [NAME] [REPO_URL] [REVISION] [DOMAIN] [TOKEN]
Parameters:
- [CONTEXT]: Kubernetes context name (e.g., eks)
- [NAME]: Platform name used in your values files (e.g., eks)
- [REPO_URL]: URL to your forked repository
- [REVISION]: Git branch or tag (e.g., main)
- [DOMAIN]: Your domain name (e.g., eks.kuberise.dev)
- [TOKEN]: GitHub token for accessing the repository if it is private (optional). It is also used by ArgoCD Image Updater to write the updated manifests to the repository.
Example:
./scripts/install.sh eks-context eks https://github.com/yourusername/kuberise.io.git main eks.kuberise.dev
This script sets up ArgoCD and the app-of-apps pattern, deploying all the enabled components defined in your values files.
5. Verify the Installation
Check that all pods are running:
kubectl get pods --all-namespaces
Ensure that:
- InternalDNS is updating DNS records in Route53.
- ExternalDNS is updating DNS records in your public DNS provider.
- Ingress-NGINX-External controllers are up and configured correctly and you can access dashboards (e.g., https://grafana.eks.kuberise.dev).
- Ingress-NGINX-Internal controllers are up and configured correctly and you can access internal services from other pods in the cluster (e.g., curl kuberise.internal/backend).
- Cert-Manager is issuing certificates.
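A few commands that can help with these checks. The deployment and namespace names below are assumptions based on the defaults described above and may differ in your installation:

# Check that the DNS controllers are reconciling records
kubectl -n external-dns logs deployment/external-dns --tail=50
kubectl -n internal-dns logs deployment/internal-dns --tail=50

# Confirm a load balancer was provisioned for the external ingress controller
kubectl -n ingress-nginx-external get svc

# Confirm certificates have been issued by cert-manager
kubectl get certificates --all-namespaces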
Cloudflare Token
If you are using Cloudflare for your DNS, you can create a Cloudflare API token and put it in the environment variable CLOUDFLARE_API_TOKEN. The installation script will then automatically create a Kubernetes secret, and ExternalDNS will use it to update the DNS records for your external ingresses.
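For example (placeholder token value; the arguments match the installation example above):

export CLOUDFLARE_API_TOKEN=YOUR_CLOUDFLARE_API_TOKEN
./scripts/install.sh eks-context eks https://github.com/yourusername/kuberise.io.git main eks.kuberise.dev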
Post Installation
The ./scripts/install.sh script is idempotent; you can run it multiple times to update your installation without any problem. You need to run the install.sh script again if you change the values of the ArgoCD Helm chart or the install.sh script itself. You also have to run the install.sh script for each platform separately. For example, if you want to create multiple platforms for different environments or purposes, you have to run the install.sh script for each of them.