Saturday, July 15, 2023

Namespace Backup/Restore in K8s

There are several tools and methods available to make the backup and restore process for a Kubernetes cluster easier and more efficient. Some popular tools for Kubernetes cluster backup and restore are:

  1. Velero (formerly Heptio Ark): Velero is an open-source tool that simplifies backup and restore operations for Kubernetes clusters. It allows you to take cluster-wide backups and restore them with ease. Velero supports various cloud providers and storage solutions for backup and restore operations.

  2. kubectl: You can use the kubectl command-line tool to backup and restore resources in Kubernetes. By exporting and importing YAML manifests for each resource, you can recreate the cluster state. However, this method might not be as efficient as using specialized backup tools.

  3. Arkade: Arkade is a simple package manager for Kubernetes that provides an easy way to install Velero and other Kubernetes tools. With Arkade, you can quickly install Velero and start using it for backup and restore operations.

  4. Kasten K10: Kasten K10 is a data management platform designed specifically for Kubernetes. It provides features like application-centric backup and restore, disaster recovery, and data migration.

  5. Stash: Stash is another Kubernetes backup and restore tool that provides volume snapshots, backup schedules, and point-in-time recovery for Kubernetes resources.

Each tool has its strengths and is suited for different use cases. Velero is one of the most widely used tools for Kubernetes backup and restore due to its flexibility and support for various cloud providers and storage solutions. It is recommended to evaluate these tools based on your specific requirements and choose the one that best fits your needs.

 

Among the five tools mentioned, Velero (formerly Heptio Ark) is the most popular open-source and free tool for Kubernetes backup and restore. Velero is efficient and well-suited for use on Amazon EKS clusters. It is widely adopted by the Kubernetes community and has good support for various cloud providers, including AWS.

Velero allows you to take cluster-wide backups, including persistent volumes, and restore them easily. It supports incremental backups, so only changes since the last backup are stored, reducing storage requirements. Velero also provides backup hooks to trigger custom scripts before and after backup or restore operations, allowing you to customize the process to fit your specific needs.
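For example, backup hooks can be attached to pods through annotations. A minimal sketch, assuming a hypothetical database pod named mydb-0 with a container named mydb whose filesystem should be frozen during the backup (the pod, container, and commands are illustrative only):

# Add Velero pre/post backup hook annotations to a pod (hypothetical names)
kubectl annotate pod mydb-0 \
  pre.hook.backup.velero.io/container=mydb \
  pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/lib/mysql"]' \
  post.hook.backup.velero.io/container=mydb \
  post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/lib/mysql"]'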

For an EKS cluster on AWS, Velero integrates seamlessly with AWS services like Amazon S3 for backup storage and AWS IAM for authentication. This makes it a convenient choice for Kubernetes backup and restore operations on AWS.

Overall, Velero is a reliable, efficient, and free option for backing up and restoring your Kubernetes resources on an EKS cluster. It is worth considering as your backup and disaster recovery solution for EKS.

 Velero

##################### Create AWS EKS cluster #######################################################################

## Chocolatey links
https://chocolatey.org/install

## Pre-requisite links
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

## Create EKS cluster
eksctl create cluster --name eksbackuprestore --node-type t2.large --nodes 1 --nodes-min 1 --nodes-max 2 --region us-east-1 --zones=us-east-1a,us-east-1b,us-east-1c

## Get EKS cluster details
eksctl get cluster --name eksbackuprestore --region us-east-1

## Update Kubeconfig
aws eks update-kubeconfig --name eksbackuprestore --region us-east-1

## Get EKS Pod data.
kubectl get pods --all-namespaces

## Delete EKS cluster
eksctl delete cluster --name eksbackuprestore --region us-east-1

##################################################CREATE AWS EKS BACKUP AND RESTORE #######################################################
1. CREATE S3 BUCKET
aws s3api create-bucket --bucket awseksbackupmanjeet --region us-east-1


2. INSTALL VELERO CLIENT
choco install velero

On Linux, download the Velero release from the GitHub link below:

https://github.com/vmware-tanzu/velero/releases/tag/v1.10.3


3. Install Velero on EKS [the --secret-file path /root/.aws/credentials has to be changed to your own credentials file]

cd /root/Download/velero-v1.10.3-linux-amd64
./velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.1 --bucket awseksbackupmanjeet --backup-location-config region=us-east-1 --snapshot-location-config region=us-east-1 --secret-file /root/.aws/credentials
kubectl get all -n velero
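The --secret-file flag expects AWS credentials in the standard shared-credentials (INI) format. A minimal sketch that writes such a file (the key values are placeholders; point --secret-file at whatever path you use):

# Create a credentials file for Velero (placeholder values)
cat > credentials-velero <<'EOF'
[default]
aws_access_key_id=<YOUR_ACCESS_KEY_ID>
aws_secret_access_key=<YOUR_SECRET_ACCESS_KEY>
EOF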


4. DEPLOY TEST APPLICATION
kubectl create namespace monitoring
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 -n monitoring
kubectl create deployment nginx --image=nginx -n monitoring


5. VERIFY DEPLOYMENT
kubectl get deployments -n monitoring


6. BACKUP AND RESTORE
velero backup create <backupname> --include-namespaces <namespacename>
./velero backup create monitoring --include-namespaces monitoring


7. DESCRIBE BACKUP
velero backup describe <backupname>
./velero backup describe monitoring


8. DELETE ABOVE DEPLOYMENT
kubectl delete namespace monitoring


9. RESTORE BACKUP ON SAME CLUSTER.
./velero restore create --from-backup monitoring


10. RESTORE ON ANOTHER EKS CLUSTER
*************** Install Velero on both clusters, but make sure both clusters point to the same S3 bucket ****************************
 

./velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.1 --bucket awseksbackupmanjeet --backup-location-config region=us-east-1 --snapshot-location-config region=us-east-1 --secret-file /root/.aws/credentials

./velero restore create --from-backup monitoring
#############################################################################################################################################

 

Some more commands in Velero

Create a Backup: To create a backup of your Kubernetes resources,

velero backup create <backup-name> --include-namespaces=<namespace1,namespace2> --exclude-resources=<resource1,resource2>

Replace <backup-name> with a suitable name for your backup, <namespace1,namespace2> with the namespaces you want to include in the backup, and <resource1,resource2> with specific resources you want to exclude from the backup
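An illustrative example (the namespaces and excluded resources here are hypothetical):

# Back up the dev and staging namespaces, skipping events
velero backup create app-backup --include-namespaces=dev,staging --exclude-resources=events,events.events.k8s.io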

List Backups: To see a list of existing backups,

velero backup get

Restore from Backup: To restore a previously created backup, 

velero restore create --from-backup <backup-name>

Schedule Backups (Optional):

velero schedule create <schedule-name> --schedule="0 1 * * *" --include-namespaces=<namespace1,namespace2>

Delete Backups (Optional):
velero backup delete <backup-name>
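A few more commands that are often useful with the same Velero CLI:

# List backup schedules
velero schedule get

# List restores and their status
velero restore get

# View the server-side logs for a specific backup
velero backup logs <backup-name>

# List configured backup storage locations
velero backup-location get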

 

 Backup by kubectl command-line tool

 You may want to take a backup of your entire Kubernetes cluster in Amazon Elastic Kubernetes Service (EKS). One way to achieve this is to use the kubectl get and kubectl describe commands to export the current state of all Kubernetes resources in the cluster. Here are the steps to take a backup of your entire EKS cluster:

  1. Install kubectl: If you haven't already, install kubectl on your local machine. You can find installation instructions for kubectl on the Kubernetes official documentation website.

  2. Authenticate with EKS: Ensure that you have the necessary permissions and AWS CLI configured with the appropriate credentials to access your EKS cluster.

  3. Export Kubernetes Resources: Use the kubectl get and kubectl describe commands with appropriate flags to export the current state of all Kubernetes resources in the cluster. For example:

    # Export the common workload resources (pods, services, deployments, etc.) across all namespaces
    kubectl get all --all-namespaces -o yaml > cluster_backup.yaml

    # Export all CustomResourceDefinitions (CRDs are cluster-scoped)
    kubectl get crd -o yaml > cluster_custom_resources.yaml

    # Export Secrets across all namespaces
    kubectl get secrets --all-namespaces -o yaml > cluster_secrets.yaml

    # Export ConfigMaps across all namespaces
    kubectl get configmaps --all-namespaces -o yaml > cluster_configmaps.yaml

    # Export PersistentVolumeClaims (PVCs) across all namespaces
    kubectl get pvc --all-namespaces -o yaml > cluster_pvcs.yaml 

     

    These commands will create YAML files with the current state of the respective resources.

  4. Store the Backups: Store the exported YAML files in a secure location outside the cluster, such as an S3 bucket or a version control system like Git.

  5. Document the Backup Process: Document the backup process, including the commands used, the resources exported, and any additional considerations or configurations specific to your cluster.

       

      Remember that the above commands will export resources from all namespaces in the cluster. If you have multiple namespaces or custom resource definitions (CRDs), adjust the commands accordingly to capture all resources you want to back up.
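If you prefer one file per namespace, a minimal sketch (the resource types listed are only an example; extend the list to cover everything you need):

# Export common resource types namespace-by-namespace into separate files
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get all,configmaps,secrets,pvc -n "$ns" -o yaml > "backup_${ns}.yaml"
done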

Helm Sync with current deployment

To safely sync your Helm chart with the current running environment in your production cluster, you can follow these steps:

  1. Fetch the current Helm release's values

    helm get values RELEASE_NAME > current_values.yaml 

    This command retrieves the current values of the Helm release and saves them to a file (current_values.yaml). These values represent the configuration of the deployed resources.

  2. Compare the current values with the values in your Helm chart

    diff -u current_values.yaml path/to/your/chart/values.yaml 

    This step allows you to identify any differences between the current configuration and the values specified in your Helm chart. Review the differences and assess the impact of the changes.

  3. Make the necessary changes to your Helm chart or the values.yaml file

    If you need to modify the Helm chart itself, make the necessary changes directly in the chart's templates or other relevant files.

    If you prefer to use a values.yaml file to override the default values in the Helm chart, modify the values.yaml file accordingly.

  4. Test the changes locally: Before applying any changes to the production environment, it's recommended to test them in a non-production environment, such as a staging or development cluster. Deploy the updated Helm chart with the modified values and verify that the application works as expected.

  5. Upgrade the Helm release in the production environment: Once you have verified the changes in a non-production environment, you can proceed with upgrading the Helm release in the production cluster. Use the following command to upgrade the release:

    helm upgrade RELEASE_NAME path/to/your/chart --values current_values.yaml 

    This command upgrades the Helm release with the new chart, using the saved current values to maintain the existing configuration.

By following these steps, you can safely synchronize your Helm chart with the current running environment, make the necessary modifications, and upgrade the release with minimal impact on your production cluster. However, it's always recommended to have proper testing and a rollback strategy in place (see the sketch below) before making changes to a production environment.
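If an upgrade misbehaves, Helm keeps a revision history per release that you can roll back to; a minimal sketch (RELEASE_NAME and the revision number are placeholders):

# List previous revisions of the release
helm history RELEASE_NAME

# Roll back to a specific revision (for example, the revision that was live before the upgrade)
helm rollback RELEASE_NAME <REVISION>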

 

Tuesday, July 11, 2023

ALB Ingress - External DNS Install #Section 14

 External DNS

ExternalDNS:

ExternalDNS is a tool specifically designed for managing DNS records that reside outside the Kubernetes cluster. It allows you to synchronize Kubernetes Services and Ingress resources with external DNS providers, such as cloud-based DNS services (e.g., AWS Route 53, Google Cloud DNS, Azure DNS) or on-premises DNS servers.

 


Step-01: Introduction

  • External DNS: Used for Updating Route53 RecordSets from Kubernetes
  • We need to create IAM Policy, k8s Service Account & IAM Role and associate them together for external-dns pod to add or remove entries in AWS Route53 Hosted Zones.
  • Update External-DNS default manifest to support our needs
  • Deploy & Verify logs

Step-02: Create IAM Policy

  • This IAM policy will allow external-dns pod to add, remove DNS entries (Record Sets in a Hosted Zone) in AWS Route53 service
  • Go to Services -> IAM -> Policies -> Create Policy
    • Click on JSON Tab and copy paste below JSON
    • Click on Visual editor tab to validate
    • Click on Review Policy
    • Name: AllowExternalDNSUpdates
    • Description: Allow access to Route53 Resources for ExternalDNS
    • Click on Create Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
  • Make a note of Policy ARN which we will use in next step
# Policy ARN
arn:aws:iam::180789647333:policy/AllowExternalDNSUpdates

Step-03: Create IAM Role, k8s Service Account & Associate IAM Policy

  • As part of this step, we are going to create a k8s Service Account named external-dns and an AWS IAM role, and associate them by annotating the Service Account with the role ARN.
  • In addition, we are also going to associate the AWS IAM Policy AllowExternalDNSUpdates to the newly created AWS IAM Role.

Step-03-01: Create IAM Role, k8s Service Account & Associate IAM Policy

# Template
eksctl create iamserviceaccount \
    --name service_account_name \
    --namespace service_account_namespace \
    --cluster cluster_name \
    --attach-policy-arn IAM_policy_ARN \
    --approve \
    --override-existing-serviceaccounts

# Replaced name, namespace, cluster, IAM Policy arn 
eksctl create iamserviceaccount \
    --name external-dns \
    --namespace default \
    --cluster eksdemo1 \
    --attach-policy-arn arn:aws:iam::180789647333:policy/AllowExternalDNSUpdates \
    --approve \
    --override-existing-serviceaccounts

Step-03-02: Verify the Service Account

  • Verify external-dns service account, primarily verify annotation related to IAM Role
# List Service Account
kubectl get sa external-dns

# Describe Service Account
kubectl describe sa external-dns
Observation: 
1. Verify the Annotations and you should see the IAM Role is present on the Service Account

Step-03-03: Verify CloudFormation Stack

  • Go to Services -> CloudFormation
  • Verify the latest CFN Stack created.
  • Click on Resources tab
  • Click on link in Physical ID field which will take us to IAM Role directly

Step-03-04: Verify IAM Role & IAM Policy

  • The CloudFormation link in the step above lands us on the IAM Role created for external-dns.
  • Verify in Permissions tab we have a policy named AllowExternalDNSUpdates
  • Now make a note of that Role ARN, this we need to update in External-DNS k8s manifest
# Make a note of Role ARN
arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N

Step-03-05: Verify IAM Service Accounts using eksctl

  • You can also make a note of External DNS Role ARN from here too.
# List IAM Service Accounts using eksctl
eksctl get iamserviceaccount --cluster eksdemo1

# Sample Output
Kalyans-Mac-mini:08-06-ALB-Ingress-ExternalDNS kalyanreddy$ eksctl get iamserviceaccount --cluster eksdemo1
2022-02-11 09:34:39 [ℹ]  eksctl version 0.71.0
2022-02-11 09:34:39 [ℹ]  using region us-east-1
NAMESPACE	NAME				ROLE ARN
default		external-dns			arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N
kube-system	aws-load-balancer-controller	arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-EFQB4C26EALH
Kalyans-Mac-mini:08-06-ALB-Ingress-ExternalDNS kalyanreddy$ 

Step-04: Update External DNS Kubernetes manifest

Change-1: Line number 9: IAM Role update

  • Copy the role ARN you noted at the end of Step-03 and replace it at line 9.
    eks.amazonaws.com/role-arn: arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N

Change-2: Lines 55, 56: Comment them out

  • We used eksctl to create IAM role and attached the AllowExternalDNSUpdates policy
  • We didn't use KIAM or Kube2IAM, so we don't need these two lines; hence they are commented out
      #annotations:  
        #iam.amazonaws.com/role: arn:aws:iam::ACCOUNT-ID:role/IAM-SERVICE-ROLE-NAME    

Change-3: Lines 65, 67: Comment them out

        # - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
       # - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization

Change-4: Line 61: Get latest Docker Image name

    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.10.2

Step-05: Deploy ExternalDNS

  • Deploy the manifest
# Change Directory
cd 08-06-Deploy-ExternalDNS-on-EKS

# Deploy external DNS
kubectl apply -f kube-manifests/

# List All resources from default Namespace
kubectl get all

# List pods (external-dns pod should be in running state)
kubectl get pods

# Verify Deployment by checking logs
kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+')
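Once the external-dns pod is running, it will publish DNS records for Services and Ingresses that carry a hostname. A minimal sketch, assuming a hypothetical Service of type LoadBalancer named my-app-service and a domain you own in Route53 (the annotation shown is external-dns's standard hostname annotation):

# Ask external-dns to create a Route53 record for a Service (hypothetical names)
kubectl annotate service my-app-service \
  "external-dns.alpha.kubernetes.io/hostname=myapp.yourdomain.com"

# Watch the external-dns logs to confirm the record set change in the hosted zone
kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+')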

ALB Ingress - SSL & SSL Redirect - #Section 13

 ALB Ingress SSL

 
 "ALB Ingress SSL" refers to the use of an Application Load Balancer (ALB) as an ingress controller in a Kubernetes environment to manage SSL/TLS termination for incoming traffic. This setup allows you to secure your Kubernetes services by terminating SSL/TLS encryption at the ALB level before forwarding the traffic to your backend services.

Here's a more detailed explanation of the components and concepts involved:

  1. Ingress Controller: An ingress controller is a Kubernetes resource that manages external access to services within a cluster. It acts as a reverse proxy and routes incoming traffic to the appropriate services based on rules defined in Ingress resources.

  2. Application Load Balancer (ALB): An ALB is a load balancer provided by Amazon Web Services (AWS) that is used to distribute incoming application traffic across multiple targets. ALBs are capable of handling layer 7 (application layer) traffic and can route requests based on content, URL, or other characteristics.

  3. SSL/TLS Termination: SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are cryptographic protocols that provide secure communication over a network. SSL/TLS termination involves decrypting incoming encrypted traffic (HTTPS) at the load balancer and forwarding it to the backend services in plain HTTP.

     

    When you implement ALB Ingress SSL, you are setting up an ALB to handle SSL/TLS encryption and termination for the incoming HTTPS traffic.

 
 

  


 

Step-01: Introduction

  • We are going to register a new DNS in AWS Route53
  • We are going to create a SSL certificate
  • Add Annotations related to SSL Certificate in Ingress manifest
  • Deploy the manifests and test
  • Clean-Up

Step-02: Pre-requisite - Register a Domain in Route53 (if not exists)

  • Goto Services -> Route53 -> Registered Domains
  • Click on Register Domain
  • Provide desired domain: somedomain.com and click on check (In my case it's going to be stacksimplify.com)
  • Click on Add to cart and click on Continue
  • Provide your Contact Details and click on Continue
  • Enable Automatic Renewal
  • Accept Terms and Conditions
  • Click on Complete Order

Step-03: Create a SSL Certificate in Certificate Manager

  • Pre-requisite: You should have a registered domain in Route53
  • Go to Services -> Certificate Manager -> Create a Certificate
  • Click on Request a Certificate
    • Choose the type of certificate for ACM to provide: Request a public certificate
    • Add domain names: *.yourdomain.com (in my case it is going to be *.stacksimplify.com)
    • Select a Validation Method: DNS Validation
    • Click on Confirm & Request
  • Validation
    • Click on Create record in Route 53
  • Wait for 5 to 10 minutes and check the Validation Status

Step-04: Add annotations related to SSL

  • 04-ALB-Ingress-SSL.yml
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/632a3ff6-3f6d-464c-9121-b9d97481a76b
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)    

Step-05: Deploy all manifests and test

Deploy and Verify

# Deploy kube-manifests
kubectl apply -f kube-manifests/

# Verify Ingress Resource
kubectl get ingress

# Verify Apps
kubectl get deploy
kubectl get pods

# Verify NodePort Services
kubectl get svc

Verify Load Balancer & Target Groups

  • Load Balancer - Listeners (Verify both 80 & 443)
  • Load Balancer - Rules (Verify both 80 & 443 listeners)
  • Target Groups - Group Details (Verify Health check path)
  • Target Groups - Targets (Verify all 3 targets are healthy)

Step-06: Add DNS in Route53

  • Go to Services -> Route 53
  • Go to Hosted Zones
    • Click on yourdomain.com (in my case stacksimplify.com)
  • Create a Record Set
    • Name: ssldemo101.stacksimplify.com
    • Alias: yes
    • Alias Target: Copy our ALB DNS Name here (Sample: ssl-ingress-551932098.us-east-1.elb.amazonaws.com)
    • Click on Create
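Once the record is created, you can verify that it resolves from your workstation (the hostname below is from this demo; replace it with your own record):

# Check that the new record resolves to the ALB DNS name
nslookup ssldemo101.stacksimplify.com
dig +short ssldemo101.stacksimplify.com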

Step-07: Access Application using newly registered DNS Name

  • Access Application
  • Important Note: Instead of stacksimplify.com you need to replace with your registered Route53 domain (Refer pre-requisite Step-02)
# HTTP URLs
http://ssldemo101.stacksimplify.com/app1/index.html
http://ssldemo101.stacksimplify.com/app2/index.html
http://ssldemo101.stacksimplify.com/

# HTTPS URLs
https://ssldemo101.stacksimplify.com/app1/index.html
https://ssldemo101.stacksimplify.com/app2/index.html
https://ssldemo101.stacksimplify.com/
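To confirm that SSL is terminated at the ALB with the ACM certificate, you can inspect the certificate presented on port 443 (hostname from this demo; replace with yours):

# Print the subject, issuer and validity dates of the certificate served by the ALB
openssl s_client -connect ssldemo101.stacksimplify.com:443 -servername ssldemo101.stacksimplify.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates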

 

ALB-Ingress-SSL-Redirect

Step-01: Add annotations related to SSL Redirect

  • File Name: 04-ALB-Ingress-SSL-Redirect.yml
  • Redirect from HTTP to HTTPS
    # SSL Redirect Setting
    alb.ingress.kubernetes.io/ssl-redirect: '443'   

Step-02: Deploy all manifests and test

Deploy and Verify

# Deploy kube-manifests
kubectl apply -f kube-manifests/

# Verify Ingress Resource
kubectl get ingress

# Verify Apps
kubectl get deploy
kubectl get pods

# Verify NodePort Services
kubectl get svc

Verify Load Balancer & Target Groups

  • Load Balancer - Listeners (Verify both 80 & 443)
  • Load Balancer - Rules (Verify both 80 & 443 listeners)
  • Target Groups - Group Details (Verify Health check path)
  • Target Groups - Targets (Verify all 3 targets are healthy)

Step-03: Access Application using newly registered DNS Name

  • Access Application
# HTTP URLs (Should Redirect to HTTPS)
http://ssldemo101.stacksimplify.com/app1/index.html
http://ssldemo101.stacksimplify.com/app2/index.html
http://ssldemo101.stacksimplify.com/

# HTTPS URLs
https://ssldemo101.stacksimplify.com/app1/index.html
https://ssldemo101.stacksimplify.com/app2/index.html
https://ssldemo101.stacksimplify.com/
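A quick command-line check of the redirect (same demo hostname; expect an HTTP 301 response with a Location: https://... header):

# HTTP request should be redirected to HTTPS by the ALB
curl -I http://ssldemo101.stacksimplify.com/app1/index.html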

Step-04: Clean Up

# Delete Manifests
kubectl delete -f kube-manifests/

## Delete Route53 Record Set
- Delete Route53 Record we created (ssldemo101.stacksimplify.com)
 

ALB Ingress - ALB Context Path Routing #Section 12

ALB Context Path Routing

ALB Context Path Routing can be used to achieve more granular routing of incoming requests to different Kubernetes services based on the URL path. This can be particularly useful when you want to host multiple services under a single domain name or IP address and route traffic based on specific paths.

Here's how ALB Context Path Routing works:

  1. Ingress Configuration: In your Kubernetes Ingress resource, you specify rules that define different context paths and associate them with different backend services (target groups) using annotations.

  2. ALB Configuration: The AWS ALB Ingress Controller translates the Ingress resource's context path rules and annotations into corresponding rules on the AWS ALB.

  3. Request Routing: When a client sends an HTTP request, the AWS ALB examines the URL's path. Based on the path, the ALB routes the request to the appropriate target group, which corresponds to a specific Kubernetes service.

  4. Backend Processing: The Kubernetes service associated with the selected context path processes the request and generates a response, which is then sent back through the ALB to the client.

For example, suppose you have two Kubernetes services: ServiceA and ServiceB, and you want to route requests based on context paths:
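A minimal sketch of that scenario (the Ingress name, service names servicea and serviceb, paths, and ports are illustrative only; the real manifest used in this section follows in Step-03):

# Hypothetical Ingress that sends /servicea to ServiceA and /serviceb to ServiceB
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: context-path-demo
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: my-aws-ingress-class
  rules:
    - http:
        paths:
          - path: /servicea
            pathType: Prefix
            backend:
              service:
                name: servicea
                port:
                  number: 80
          - path: /serviceb
            pathType: Prefix
            backend:
              service:
                name: serviceb
                port:
                  number: 80
EOF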

 

 



Step-01: Introduction

  • Discuss about the Architecture we are going to build as part of this Section
  • We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller
    • /app1/* - should go to app1-nginx-nodeport-service
    • /app2/* - should go to app2-nginx-nodeport-service
    • /* - should go to app3-nginx-nodeport-service
  • As part of this process, the respective annotation alb.ingress.kubernetes.io/healthcheck-path: will be moved to each application's NodePort Service.
  • Only generic settings will be present in the Ingress manifest annotations area: 04-ALB-Ingress-ContextPath-Based-Routing.yml

Step-02: Review Nginx App1, App2 & App3 Deployment & Service

  • From a Kubernetes manifests perspective, the only differences between the 3 apps are two fields and their naming conventions
    • Kubernetes Deployment: Container Image name
    • Kubernetes Node Port Service: Health check URL path
  • App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yml
    • image: stacksimplify/kube-nginxapp1:1.0.0
    • Annotation: alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
  • App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yml
    • image: stacksimplify/kube-nginxapp2:1.0.0
    • Annotation: alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html
  • App3 Nginx: 03-Nginx-App3-Deployment-and-NodePortService.yml
    • image: stacksimplify/kubenginx:1.0.0
    • Annotation: alb.ingress.kubernetes.io/healthcheck-path: /index.html

Step-03: Create ALB Ingress Context path based Routing Kubernetes manifest

  • 04-ALB-Ingress-ContextPath-Based-Routing.yml
# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cpr-demo
  annotations:
    # Load Balancer Name
    alb.ingress.kubernetes.io/load-balancer-name: cpr-ingress
    # Ingress Core Settings
    #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource)
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    #Important Note:  Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer    
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'   
spec:
  ingressClassName: my-aws-ingress-class   # Ingress Class                  
  rules:
    - http:
        paths:      
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port: 
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port: 
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app3-nginx-nodeport-service
                port: 
                  number: 80              

# Important Note-1: In path based routing, order is very important; if we are going to use "/*", keep it at the end of all rules.
                        
# 1. If  "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster
# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"`                      

Step-04: Deploy all manifests and test

# Deploy Kubernetes manifests
kubectl apply -f kube-manifests/

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-cpr-demo

# Verify AWS Load Balancer Controller logs
kubectl -n kube-system  get pods 
kubectl -n kube-system logs -f aws-load-balancer-controller-794b7844dd-8hk7n 

Step-05: Verify Application Load Balancer on AWS Management Console

  • Verify Load Balancer
    • In Listeners Tab, click on View/Edit Rules under Rules
  • Verify Target Groups
    • Group Details
    • Targets: Ensure they are healthy
    • Verify Health check path
    • Verify all 3 targets are healthy
# Access Application
http://<ALB-DNS-URL>/app1/index.html
http://<ALB-DNS-URL>/app2/index.html
http://<ALB-DNS-URL>/

Step-06: Test Order in Context path based routing

Step-06-01: Move Root Context Path to top

  • File: 04-ALB-Ingress-ContextPath-Based-Routing.yml
  ingressClassName: my-aws-ingress-class   # Ingress Class                  
  rules:
    - http:
        paths:      
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app3-nginx-nodeport-service
                port: 
                  number: 80           
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port: 
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port: 
                  number: 80

Step-06-02: Deploy Changes and Verify

# Deploy Changes
kubectl apply -f kube-manifests/

# Access Application (Open in new incognito window)
http://<ALB-DNS-URL>/app1/index.html  -- SHOULD FAIL
http://<ALB-DNS-URL>/app2/index.html  -- SHOULD FAIL
http://<ALB-DNS-URL>/  - SHOULD PASS

Step-07: Roll back changes in 04-ALB-Ingress-ContextPath-Based-Routing.yml

spec:
  ingressClassName: my-aws-ingress-class   # Ingress Class                  
  rules:
    - http:
        paths:      
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port: 
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port: 
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app3-nginx-nodeport-service
                port: 
                  number: 80              

Step-08: Clean Up

# Clean-Up
kubectl delete -f kube-manifests/
 

 

 

 

 




Sunday, July 9, 2023

ALB Ingress - Default Backend/Ingress Rules #Section 11

There are around 30 Ingress annotations supported by the AWS Load Balancer Controller:
 

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/ingress/annotations/ 


Default Backend



Rule based



Step-02: Review App1 Deployment kube-manifest

  • File Location: 01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80

Step-03: Review App1 NodePort Service

  • File Location: 01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  annotations:
#Important Note:  Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer    
#    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80  

Step-04: Review Ingress kube-manifest with Default Backend Option

  • Annotations
  • File Location: 01-kube-manifests-default-backend/02-ALB-Ingress-Basic.yml
# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginxapp1
  labels:
    app: app1-nginx
  annotations:
    #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource)
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html    
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: ic-external-lb # Ingress Class
  defaultBackend:
    service:
      name: app1-nginx-nodeport-service
      port:
        number: 80                    

Step-05: Deploy kube-manifests and Verify

 # Change Directory
cd 08-02-ALB-Ingress-Basics

# Deploy kube-manifests
kubectl apply -f 01-kube-manifests-default-backend/

# Verify k8s Deployment and Pods
kubectl get deploy
kubectl get pods

# Verify Ingress (Make a note of Address field)
kubectl get ingress
Observation: 
1. Verify the ADDRESS value, we should see something like "app1ingress-1334515506.us-east-1.elb.amazonaws.com"

# Describe Ingress Controller
kubectl describe ingress ingress-nginxapp1
Observation:
1. Review Default Backend and Rules

# List Services
kubectl get svc

# Verify Application Load Balancer using 
Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers
1. Verify Listeners and Rules inside a listener
2. Verify Target Groups

# Access App using Browser
kubectl get ingress
http://<ALB-DNS-URL>
http://<ALB-DNS-URL>/app1/index.html
or
http://<INGRESS-ADDRESS-FIELD>
http://<INGRESS-ADDRESS-FIELD>/app1/index.html

# Sample from my environment (for reference only)
http://app1ingress-154912460.us-east-1.elb.amazonaws.com
http://app1ingress-154912460.us-east-1.elb.amazonaws.com/app1/index.html

# Verify AWS Load Balancer Controller logs
kubectl get po -n kube-system 
## POD1 Logs: 
kubectl -n kube-system logs -f <POD1-NAME>
kubectl -n kube-system logs -f aws-load-balancer-controller-65b4f64d6c-h2vh4
##POD2 Logs: 
kubectl -n kube-system logs -f <POD2-NAME>
kubectl -n kube-system logs -f aws-load-balancer-controller-65b4f64d6c-t7qqb

Step-06: Clean Up

# Delete Kubernetes Resources
kubectl delete -f 01-kube-manifests-default-backend/

Step-07: Review Ingress kube-manifest with Ingress Rules

  • Discuss about Ingress Path Types

  • Better Path Matching With Path Types

  • Sample Ingress Rule

  • ImplementationSpecific (default): With this path type, matching is up to the controller implementing the IngressClass. Implementations can treat this as a separate pathType or treat it identically to the Prefix or Exact path types.

  • Exact: Matches the URL path exactly and with case sensitivity.

  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.

  • File Location: 02-kube-manifests-rules\02-ALB-Ingress-Basic.yml

# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginxapp1
  labels:
    app: app1-nginx
  annotations:
    # Load Balancer Name
    alb.ingress.kubernetes.io/load-balancer-name: app1ingressrules
    #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource)
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html    
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: ic-external-lb # Ingress Class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port: 
                  number: 80
      

# 1. If  "spec.ingressClassName: ic-external-lb" not specified, will reference default ingress class on this kubernetes cluster
# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"`

Step-08: Deploy kube-manifests and Verify

# Change Directory
cd 08-02-ALB-Ingress-Basics

# Deploy kube-manifests
kubectl apply -f 02-kube-manifests-rules/

# Verify k8s Deployment and Pods
kubectl get deploy
kubectl get pods

# Verify Ingress (Make a note of Address field)
kubectl get ingress
Observation: 
1. Verify the ADDRESS value, we should see something like "app1ingressrules-154912460.us-east-1.elb.amazonaws.com"

# Describe Ingress Controller
kubectl describe ingress ingress-nginxapp1
Observation:
1. Review Default Backend and Rules

# List Services
kubectl get svc

# Verify Application Load Balancer using 
Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers
1. Verify Listeners and Rules inside a listener
2. Verify Target Groups

# Access App using Browser
kubectl get ingress
http://<ALB-DNS-URL>
http://<ALB-DNS-URL>/app1/index.html
or
http://<INGRESS-ADDRESS-FIELD>
http://<INGRESS-ADDRESS-FIELD>/app1/index.html

# Sample from my environment (for reference only)
http://app1ingressrules-154912460.us-east-1.elb.amazonaws.com
http://app1ingressrules-154912460.us-east-1.elb.amazonaws.com/app1/index.html

# Verify AWS Load Balancer Controller logs
kubectl get po -n kube-system 
kubectl logs -f aws-load-balancer-controller-794b7844dd-8hk7n -n kube-system

Step-09: Clean Up

# Delete Kubernetes Resources
kubectl delete -f 02-kube-manifests-rules/

# Verify if Ingress Deleted successfully 
kubectl get ingress
Important Note: It is going to cost us heavily if we leave ALB load balancer idle without deleting it properly

# Verify Application Load Balancer DELETED 
Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers
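You can also confirm from the CLI that no load balancers are left behind (assuming the AWS CLI is configured for us-east-1):

# Should return an empty list once the Ingress (and its ALB) is deleted
aws elbv2 describe-load-balancers --region us-east-1 --query 'LoadBalancers[].LoadBalancerName'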

 

Saturday, July 8, 2023

ALB Ingress - Install ALB Controller #Section 10

 

 


 


 

AWS Load Balancer Controller Install on AWS EKS

 

Step-00: Introduction

  1. Create IAM Policy and make a note of Policy ARN
  2. Create IAM Role and k8s Service Account and bound them together
  3. Install AWS Load Balancer Controller using HELM3 CLI
  4. Understand IngressClass Concept and create a default Ingress Class
 

Step-01: Pre-requisites

Pre-requisite-1: eksctl & kubectl Command Line Utility

  • Should be the latest eksctl version
# Verify eksctl version
eksctl version

# For installing or upgrading latest eksctl version
https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html

# Verify EKS Cluster version
kubectl version --short
kubectl version
Important Note: You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.20 kubectl client works with Kubernetes 1.19, 1.20 and 1.21 clusters.

# For installing kubectl cli
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

 

Pre-requisite-2: Create EKS Cluster and Worker Nodes (if not created)

# Create Cluster (Section-01-02)
eksctl create cluster --name=eksdemo1 \
                      --region=us-east-1 \
                      --zones=us-east-1a,us-east-1b \
                      --version="1.21" \
                      --without-nodegroup 


# Get List of clusters (Section-01-02)
eksctl get cluster   

# Template (Section-01-02)
eksctl utils associate-iam-oidc-provider \
    --region region-code \
    --cluster <cluster-name> \
    --approve

# Replace with region & cluster name (Section-01-02)
eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster eksdemo1 \
    --approve

# Create EKS NodeGroup in VPC Private Subnets (Section-07-01)
eksctl create nodegroup --cluster=eksdemo1 \
                        --region=us-east-1 \
                        --name=eksdemo1-ng-private1 \
                        --node-type=t3.medium \
                        --nodes-min=2 \
                        --nodes-max=4 \
                        --node-volume-size=20 \
                        --ssh-access \
                        --ssh-public-key=kube-demo \
                        --managed \
                        --asg-access \
                        --external-dns-access \
                        --full-ecr-access \
                        --appmesh-access \
                        --alb-ingress-access \
                        --node-private-networking 

Pre-requisite-3: Verify Cluster, Node Groups and configure kubectl cli if not configured

  1. EKS Cluster
  2. EKS Node Groups in Private Subnets
# Verify EKS Cluster
eksctl get cluster

# Verify EKS Node Groups
eksctl get nodegroup --cluster=eksdemo1

# Verify if any IAM Service Accounts present in EKS Cluster
eksctl get iamserviceaccount --cluster=eksdemo1
Observation:
1. No k8s Service accounts as of now. 

# Configure kubeconfig for kubectl
eksctl get cluster # TO GET CLUSTER NAME
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
aws eks --region us-east-1 update-kubeconfig --name eksdemo1

# Verify EKS Nodes in EKS Cluster using kubectl
kubectl get nodes

# Verify using AWS Management Console
1. EKS EC2 Nodes (Verify Subnet in Networking Tab)
2. EKS Cluster
 

Step-02: Create IAM Policy

  • Create IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf.
  • As of today, 2.3.1 is the latest Load Balancer Controller version
  • We will always download the latest from the main branch of the Git repo
  • AWS Load Balancer Controller Main Git repo
# Change Directory
cd 08-NEW-ELB-Application-LoadBalancers/
cd 08-01-Load-Balancer-Controller-Install

# Delete files before download (if any present)
rm iam_policy_latest.json

# Download IAM Policy
## Download latest
curl -o iam_policy_latest.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
## Verify latest
ls -lrta 

## Download specific version
curl -o iam_policy_v2.3.1.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.1/docs/install/iam_policy.json


# Create IAM Policy using policy downloaded 
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy_latest.json

## Sample Output
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ aws iam create-policy \
>     --policy-name AWSLoadBalancerControllerIAMPolicy \
>     --policy-document file://iam_policy_latest.json
{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy",
        "PolicyId": "ANPASUF7HC7S52ZQAPETR",
        "Arn": "arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2022-02-02T04:51:21+00:00",
        "UpdateDate": "2022-02-02T04:51:21+00:00"
    }
}
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 
  • Important Note: If you view the policy in the AWS Management Console, you may see warnings for ELB. These can be safely ignored because some of the actions only exist for ELB v2. You do not see warnings for ELB v2.

Make a note of Policy ARN

  • Make a note of Policy ARN as we are going to use that in next step when creating IAM Role.
# Policy ARN 
Policy ARN:  arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy
 

Step-03: Create an IAM role for the AWS LoadBalancer Controller and attach the role to the Kubernetes service account

  • Applicable only with eksctl managed clusters
  • This command will create an AWS IAM role
  • This command also will create Kubernetes Service Account in k8s cluster
  • In addition, this command will bind the newly created IAM Role and the newly created Kubernetes service account together

Step-03-01: Create IAM Role using eksctl

# Verify if any existing service account
kubectl get sa -n kube-system
kubectl get sa aws-load-balancer-controller -n kube-system
Observation:
1. Nothing with name "aws-load-balancer-controller" should exist

# Template
eksctl create iamserviceaccount \
  --cluster=my_cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \ #Note: K8S Service Account Name that needs to be bound to the newly created IAM Role
  --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve


# Replaced name, cluster and policy arn (Policy arn we took note in step-02)
eksctl create iamserviceaccount \
  --cluster=eksdemo1 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
 
  • Sample Output
# Sample Output for IAM Service Account creation
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ eksctl create iamserviceaccount \
>   --cluster=eksdemo1 \
>   --namespace=kube-system \
>   --name=aws-load-balancer-controller \
>   --attach-policy-arn=arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy \
>   --override-existing-serviceaccounts \
>   --approve
2022-02-02 10:22:49 [ℹ]  eksctl version 0.82.0
2022-02-02 10:22:49 [ℹ]  using region us-east-1
2022-02-02 10:22:52 [ℹ]  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2022-02-02 10:22:52 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2022-02-02 10:22:52 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }2022-02-02 10:22:52 [ℹ]  building iamserviceaccount stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-02-02 10:22:53 [ℹ]  deploying stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-02-02 10:22:53 [ℹ]  waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-02-02 10:23:10 [ℹ]  waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-02-02 10:23:29 [ℹ]  waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-02-02 10:23:32 [ℹ]  created serviceaccount "kube-system/aws-load-balancer-controller"
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 

Step-03-02: Verify using eksctl cli

# Get IAM Service Account
eksctl  get iamserviceaccount --cluster eksdemo1

# Sample Output
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ eksctl  get iamserviceaccount --cluster eksdemo1
2022-02-02 10:23:50 [ℹ]  eksctl version 0.82.0
2022-02-02 10:23:50 [ℹ]  using region us-east-1
NAMESPACE	NAME				ROLE ARN
kube-system	aws-load-balancer-controller	arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-1244GWMVEAKEN
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 
 

Step-03-03: Verify CloudFormation Template eksctl created & IAM Role

  • Goto Services -> CloudFormation
  • CFN Template Name: eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller
  • Click on Resources tab
  • Click on link in Physical Id to open the IAM Role
  • Verify it has eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-WFAWGQKTAVLR associated

Step-03-04: Verify k8s Service Account using kubectl

# Verify if any existing service account
kubectl get sa -n kube-system
kubectl get sa aws-load-balancer-controller -n kube-system
Observation:
1. We should see a new Service account created. 

# Describe Service Account aws-load-balancer-controller
kubectl describe sa aws-load-balancer-controller -n kube-system
  • Observation: You can see that the newly created Role ARN is added in Annotations, confirming that the AWS IAM role is bound to the Kubernetes service account
  • Output
## Sample Output
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ kubectl describe sa aws-load-balancer-controller -n kube-system
Name:                aws-load-balancer-controller
Namespace:           kube-system
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-1244GWMVEAKEN
Image pull secrets:  <none>
Mountable secrets:   aws-load-balancer-controller-token-5w8th
Tokens:              aws-load-balancer-controller-token-5w8th
Events:              <none>
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 
 

Step-04: Install the AWS Load Balancer Controller using Helm V3

Step-04-01: Install Helm

# Install Helm (if not installed) MacOS
brew install helm

# Verify Helm version
helm version

Step-04-02: Install AWS Load Balancer Controller

  • Important-Note-1: If you're deploying the controller to Amazon EC2 nodes that have restricted access to the Amazon EC2 instance metadata service (IMDS), or if you're deploying to Fargate, then add the following flags to the command that you run:
--set region=region-code
--set vpcId=vpc-xxxxxxxx
  • Important-Note-2: If you're deploying to any Region other than us-west-2, then add the following flag to the command that you run, replacing account and region-code with the values for your region listed in Amazon EKS add-on container image addresses.
  • Get Region Code and Account info
--set image.repository=account.dkr.ecr.region-code.amazonaws.com/amazon/aws-load-balancer-controller
# Add the eks-charts repository.
helm repo add eks https://aws.github.io/eks-charts

# Update your local repo to make sure that you have the most recent charts.
helm repo update

# Install the AWS Load Balancer Controller.
## Template
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=<region-code> \
  --set vpcId=<vpc-xxxxxxxx> \
  --set image.repository=<account>.dkr.ecr.<region-code>.amazonaws.com/amazon/aws-load-balancer-controller

## Replace Cluster Name, Region Code, VPC ID, Image Repo Account ID and Region Code  
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eksdemo1 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=vpc-0165a396e41e292a3 \
  --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller
  • Sample output for AWS Load Balancer Controller Install steps
## Sample Output for AWS Load Balancer Controller Install steps
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
>   -n kube-system \
>   --set clusterName=eksdemo1 \
>   --set serviceAccount.create=false \
>   --set serviceAccount.name=aws-load-balancer-controller \
>   --set region=us-east-1 \
>   --set vpcId=vpc-0570fda59c5aaf192 \
>   --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller
NAME: aws-load-balancer-controller
LAST DEPLOYED: Wed Feb  2 10:33:57 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 

Step-04-03: Verify that the controller is installed and Webhook Service created

 # Verify that the controller is installed.
kubectl -n kube-system get deployment 
kubectl -n kube-system get deployment aws-load-balancer-controller
kubectl -n kube-system describe deployment aws-load-balancer-controller

# Sample Output
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           27s
Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ 

# Verify AWS Load Balancer Controller Webhook service created
kubectl -n kube-system get svc 
kubectl -n kube-system get svc aws-load-balancer-webhook-service
kubectl -n kube-system describe svc aws-load-balancer-webhook-service

# Sample Output
Kalyans-MacBook-Pro:aws-eks-kubernetes-masterclass-internal kdaida$ kubectl -n kube-system get svc aws-load-balancer-webhook-service
NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
aws-load-balancer-webhook-service   ClusterIP   10.100.53.52   <none>        443/TCP   61m
Kalyans-MacBook-Pro:aws-eks-kubernetes-masterclass-internal kdaida$ 

# Verify Labels in Service and Selector Labels in Deployment
kubectl -n kube-system get svc aws-load-balancer-webhook-service -o yaml
kubectl -n kube-system get deployment aws-load-balancer-controller -o yaml
Observation:
1. Verify "spec.selector" label in "aws-load-balancer-webhook-service"
2. Compare it with "aws-load-balancer-controller" Deployment "spec.selector.matchLabels"
3. Both values should be the same, which means traffic coming to "aws-load-balancer-webhook-service" on port 443 will be sent to port 9443 on the pods of the "aws-load-balancer-controller" deployment. 

Step-04-04: Verify AWS Load Balancer Controller Logs

# List Pods
kubectl get pods -n kube-system

# Review logs for AWS LB Controller POD-1
kubectl -n kube-system logs -f <POD-NAME> 
kubectl -n kube-system logs -f  aws-load-balancer-controller-86b598cbd6-5pjfk

# Review logs for AWS LB Controller POD-2
kubectl -n kube-system logs -f <POD-NAME> 
kubectl -n kube-system logs -f aws-load-balancer-controller-86b598cbd6-vqqsk

Step-04-05: Verify AWS Load Balancer Controller k8s Service Account - Internals

# List Service Account and its secret
kubectl -n kube-system get sa aws-load-balancer-controller
kubectl -n kube-system get sa aws-load-balancer-controller -o yaml
kubectl -n kube-system get secret <GET_FROM_PREVIOUS_COMMAND - secrets.name> -o yaml
kubectl -n kube-system get secret aws-load-balancer-controller-token-5w8th 
kubectl -n kube-system get secret aws-load-balancer-controller-token-5w8th -o yaml
## Decode ca.crt using the two websites below
https://www.base64decode.org/
https://www.sslchecker.com/certdecoder

## Decode token using below two websites
https://www.base64decode.org/
https://jwt.io/
Observation:
1. Review decoded JWT Token

# List Deployment in YAML format
kubectl -n kube-system get deploy aws-load-balancer-controller -o yaml
Observation:
1. Verify "spec.template.spec.serviceAccount" and "spec.template.spec.serviceAccountName" in "aws-load-balancer-controller" Deployment
2. We should find the Service Account Name as "aws-load-balancer-controller"

# List Pods in YAML format
kubectl -n kube-system get pods
kubectl -n kube-system get pod <AWS-Load-Balancer-Controller-POD-NAME> -o yaml
kubectl -n kube-system get pod aws-load-balancer-controller-65b4f64d6c-h2vh4 -o yaml
Observation:
1. Verify "spec.serviceAccount" and "spec.serviceAccountName"
2. We should find the Service Account Name as "aws-load-balancer-controller"
3. Verify "spec.volumes". You should find something like the below, which are temporary credentials to access AWS services
CHECK-1: Verify "spec.volumes.name = aws-iam-token"
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token
CHECK-2: Verify Volume Mounts
    volumeMounts:
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true          
CHECK-3: Verify ENVs whose path name is "token"
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token 

Step-04-06: Verify TLS Certs for AWS Load Balancer Controller - Internals

# List aws-load-balancer-tls secret 
kubectl -n kube-system get secret aws-load-balancer-tls -o yaml

# Verify the ca.crt and tls.crt in below websites
https://www.base64decode.org/
https://www.sslchecker.com/certdecoder

# Make a note of Common Name and SAN from above 
Common Name: aws-load-balancer-controller
SAN: aws-load-balancer-webhook-service.kube-system, aws-load-balancer-webhook-service.kube-system.svc

# List Pods in YAML format
kubectl -n kube-system get pods
kubectl -n kube-system get pod <AWS-Load-Balancer-Controller-POD-NAME> -o yaml
kubectl -n kube-system get pod aws-load-balancer-controller-65b4f64d6c-h2vh4 -o yaml
Observation:
1. Verify how the secret is mounted in AWS Load Balancer Controller Pod
CHECK-2: Verify Volume Mounts
    volumeMounts:
    - mountPath: /tmp/k8s-webhook-server/serving-certs
      name: cert
      readOnly: true
CHECK-3: Verify Volumes
  volumes:
  - name: cert
    secret:
      defaultMode: 420
      secretName: aws-load-balancer-tls

Step-04-07: UNINSTALL AWS Load Balancer Controller using Helm Command (Information Purpose - SHOULD NOT EXECUTE THIS COMMAND)

 
  • This step should not be implemented.
  • This is included here just so we know how to uninstall the AWS Load Balancer Controller from the EKS Cluster
# Uninstall AWS Load Balancer Controller
helm uninstall aws-load-balancer-controller -n kube-system 

Step-05: Ingress Class Concept

Step-06: Review IngressClass Kubernetes Manifest

  • File Location: 08-01-Load-Balancer-Controller-Install/kube-manifests/01-ingressclass-resource.yaml
  • Understand in detail about annotation ingressclass.kubernetes.io/is-default-class: "true"
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-aws-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb

## Additional Note
# 1. You can mark a particular IngressClass as the default for your cluster. 
# 2. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass.  
# 3. Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/

Step-07: Create IngressClass Resource

# Navigate to Directory
cd 08-01-Load-Balancer-Controller-Install

# Create IngressClass Resource
kubectl apply -f kube-manifests

# Verify IngressClass Resource
kubectl get ingressclass

# Describe IngressClass Resource
kubectl describe ingressclass my-aws-ingress-class