Friday, June 16, 2023

AWS CloudWatch Container Insights

 

Container Map

Container Resource

Performance Dashboards

Log Groups

Log Insights

Alarms 


Automatic Performance Dashboard


Monitoring EKS using CloudWatch Container Insights

Step-01: Introduction

    What is CloudWatch?
    What are CloudWatch Container Insights?
    What is CloudWatch Agent and Fluentd?

  1. CloudWatch: CloudWatch is a monitoring and observability service provided by Amazon Web Services (AWS). It collects and tracks various metrics, logs, and events from AWS resources and applications. CloudWatch allows you to gain insights into the performance, health, and availability of your AWS infrastructure and applications. It provides features like dashboards, alarms, logs, and automated actions to help you monitor, troubleshoot, and optimize your resources.

  2. CloudWatch Container Insights: CloudWatch Container Insights is a feature of CloudWatch that provides monitoring and analysis capabilities specifically designed for containerized environments. It helps you understand the performance and resource utilization of your containerized applications running on services like Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Kubernetes clusters.

    CloudWatch Container Insights collects and displays key metrics, logs, and metadata related to containers, tasks, pods, and services. It offers pre-defined dashboards, automated alarms, and performance recommendations to help you monitor, troubleshoot, and optimize your containerized applications.

  3. CloudWatch Agent: The CloudWatch Agent is a software component that runs on EC2 instances, on-premises servers, or virtual machines to collect and send system-level metrics, logs, and custom metrics to CloudWatch. It enables you to monitor system-level metrics, such as CPU usage, memory utilization, disk space, and network performance, as well as application-level metrics.

  4. Fluentd: Fluentd is an open-source data collection agent that can be used to collect, transform, and forward logs and other data from various sources to different destinations. It provides a unified logging layer that supports a wide range of input sources and output destinations.

    In the context of CloudWatch, Fluentd can be used as a log collector and forwarder to gather logs from various sources within your infrastructure and send them to CloudWatch Logs. It acts as an intermediary between log sources (such as containers, applications, or system logs) and CloudWatch Logs, enabling you to centralize and analyze logs in CloudWatch.

    By deploying the CloudWatch Agent and using Fluentd, you can collect both system-level metrics and application logs, and send them to CloudWatch for monitoring, analysis, and troubleshooting purposes.



Step-02: Associate CloudWatch Policy to our EKS Worker Nodes Role

    Go to Services -> EC2 -> Worker Node EC2 Instance -> IAM Role -> Click on that role

# Sample Role ARN
arn:aws:iam::180789647333:role/eksctl-eksdemo1-nodegroup-eksdemo-NodeInstanceRole-1FVWZ2H3TMQ2M

# Policy to be associated
Associate Policy: CloudWatchAgentServerPolicy
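The same policy can also be attached from the AWS CLI; a minimal sketch using the sample role name above (adjust the role name to your own node group's role):

aws iam attach-role-policy \
  --role-name eksctl-eksdemo1-nodegroup-eksdemo-NodeInstanceRole-1FVWZ2H3TMQ2M \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy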

Step-03: Install Container Insights
Deploy CloudWatch Agent and Fluentd as DaemonSets

    This command will:
        Create the Namespace amazon-cloudwatch.
        Create all the necessary security objects for both DaemonSets:
            ServiceAccount
            ClusterRole
            ClusterRoleBinding
        Deploy the CloudWatch Agent (responsible for sending the metrics to CloudWatch) as a DaemonSet.
        Deploy Fluentd (responsible for sending the logs to CloudWatch) as a DaemonSet.
        Deploy ConfigMap configurations for both DaemonSets.

# Template
curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/<REPLACE_CLUSTER_NAME>/;s/{{region_name}}/<REPLACE-AWS_REGION>/" | kubectl apply -f -

# Replaced Cluster Name and Region
curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/my-cluster1/;s/{{region_name}}/us-east-1/" | kubectl apply -f -

Verify
# List Daemonsets
kubectl -n amazon-cloudwatch get daemonsets
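You can also confirm that the agent and Fluentd pods themselves are running:

# List Pods
kubectl -n amazon-cloudwatch get pods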

Step-04: Deploy Sample Nginx Application
# Deploy
kubectl apply -f kube-manifests

 --------------------------------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-nginx-deployment
  labels:
    app: sample-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-nginx
  template:
    metadata:
      labels:
        app: sample-nginx
    spec:
      containers:
        - name: sample-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "5m"
              memory: "5Mi"
            limits:
              cpu: "10m"
              memory: "10Mi"       
---
apiVersion: v1
kind: Service
metadata:
  name: sample-nginx-service
  labels:
    app: sample-nginx
spec:
  selector:
    app: sample-nginx
  ports:
  - port: 80
    targetPort: 80         

--------------------------------------------------------------------
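Before generating load, it may help to confirm the sample application is up (names match the manifests above):

kubectl get deployment sample-nginx-deployment
kubectl get service sample-nginx-service
kubectl get pods -l app=sample-nginx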



Step-05: Generate load on our Sample Nginx Application
# Generate Load
# On older kubectl versions (the --generator flag has been removed in newer releases)
kubectl run --generator=run-pod/v1 apache-bench -i --tty --rm --image=httpd -- ab -n 500000 -c 1000 http://sample-nginx-service.default.svc.cluster.local/

# On current kubectl versions
kubectl run apache-bench -i --tty --rm --image=httpd -- ab -n 500000 -c 1000 http://sample-nginx-service.default.svc.cluster.local/

Step-06: Access CloudWatch Dashboard

    Access CloudWatch Container Insights Dashboard

Step-07: CloudWatch Log Insights

    View Container logs
    View Container Performance Logs

Step-08: Container Insights - Log Insights in depth

    Log Groups
    Log Insights
    Create Dashboard

Create Graph for Avg Node CPU Utilization

    DashBoard Name: EKS-Performance
    Widget Type: Bar
    Log Group: /aws/containerinsights/eksdemo1/performance

STATS avg(node_cpu_utilization) as avg_node_cpu_utilization by NodeName
| SORT avg_node_cpu_utilization DESC

Container Restarts

    DashBoard Name: EKS-Performance
    Widget Type: Table
    Log Group: /aws/containerinsights/eksdemo1/performance

STATS avg(number_of_container_restarts) as avg_number_of_container_restarts by PodName
| SORT avg_number_of_container_restarts DESC

Cluster Node Failures

    DashBoard Name: EKS-Performance
    Widget Type: Table
    Log Group: /aws/containerinsights/eksdemo1/performance

stats avg(cluster_failed_node_count) as CountOfNodeFailures
| filter Type="Cluster"
| sort @timestamp desc


CPU Usage By Container

    DashBoard Name: EKS-Performance
    Widget Type: Bar
    Log Group: /aws/containerinsights/eksdemo1/performance

stats pct(container_cpu_usage_total, 50) as CPUPercMedian by kubernetes.container_name
| filter Type="Container"

Pods Requested vs Pods Running

    DashBoard Name: EKS-Performance
    Widget Type: Bar
    Log Group: /aws/containerinsights/eksdemo1/performance

fields @timestamp, @message
| sort @timestamp desc
| filter Type="Pod"
| stats min(pod_number_of_containers) as requested, min(pod_number_of_running_containers) as running, ceil(avg(pod_number_of_containers-pod_number_of_running_containers)) as pods_missing by kubernetes.pod_name
| sort pods_missing desc


Application log errors by container name

    DashBoard Name: EKS-Performance
    Widget Type: Bar
    Log Group: /aws/containerinsights/eksdemo1/application

stats count() as countoferrors by kubernetes.container_name
| filter stream="stderr"
| sort countoferrors desc

    Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-view-metrics.html

Step-09: Container Insights - CloudWatch Alarms
Create Alarms - Node CPU Usage

    Specify metric and conditions
        Select Metric: Container Insights -> ClusterName -> node_cpu_utilization
        Metric Name: eksdemo1_node_cpu_utilization
        Threshold Value: 4
        Important Note: Anything above 4% CPU will trigger a notification email. In a real setup the threshold would typically be 80% or 90% CPU, but 4% is used here so the alarm is easy to trigger during load-simulation testing.
    Configure Actions
        Create New Topic: eks-alerts
        Email: dkalyanreddy@gmail.com
        Click on Create Topic
        Important Note: Confirm the email subscription that is sent to your email id.
    Add name and description
        Name: EKS-Nodes-CPU-Alert
        Description: EKS Nodes CPU alert notification
        Click Next
    Preview
        Preview and Create Alarm
    Add Alarm to our custom Dashboard
    Generate Load & Verify Alarm

# Generate Load
kubectl run apache-bench -i --tty --rm --image=httpd -- ab -n 500000 -c 1000 http://sample-nginx-service.default.svc.cluster.local/


Step-10: Clean-Up Container Insights
# Template
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/cluster-name/;s/{{region_name}}/cluster-region/" | kubectl delete -f -

# Replace Cluster Name & Region Name
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/my-cluster1/;s/{{region_name}}/us-east-2/" | kubectl delete -f -

Step-11: Clean-Up Application

# Delete Apps
kubectl delete -f  kube-manifests


References
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-reference-performance-entries-EKS.html

Friday, June 9, 2023

helm activity

 Run the following command to get the list of deployed Helm releases:

 
helm list --all-namespaces
 



If you want to filter the list by a specific namespace, you can specify the --namespace flag followed by the desired namespace.

 
helm list --namespace default
 

[root@node]# helm list --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mysql default 1 2023-06-08 14:35:36.121173384 +0000 UTC deployed mysql-9.10.2 8.0.33
valut-test default 1 2023-06-08 16:39:35.916887124 +0000 UTC deployed vault-0.2.2 1.13.2
vault-testing default 1 2023-06-09 02:55:52.502603722 +0000 UTC deployed vault-0.2.2 1.13.2


If you want to retrieve the chart details of a specific release, you can use the helm get all command followed by the release name.

 
helm get all mysql --namespace default
  



To download a running Helm release to your local machine, you can use the helm get manifest command followed by the release name. This command will retrieve the manifest file for the specified release.

 
helm get manifest mysql --namespace default > manifest.yaml
  

The command will redirect the output to a file named manifest.yaml in your current directory. You can specify a different file name or path if desired.


After running this command, you will have the manifest file manifest.yaml containing the YAML representation of the deployed resources in the specified release.

Download Running Helm chart

To download the running MySQL Helm chart, you can follow these steps:
1. Retrieve the values used for the MySQL release by running the following command:

 
helm get values mysql > values.yaml
 


This command will save the values in a file called values.yaml.

2. Create a new Helm chart directory structure. You can use the helm create command to generate the basic structure for a new chart. For example:

 
helm create my-mysql-chart
 



3. Replace the values.yaml file in the newly created chart directory with the one you retrieved in step 1:

 
cp values.yaml my-mysql-chart/values.yaml
 



4. You can further customize the chart by modifying the other template files in the my-mysql-chart directory according to your requirements.
By following these steps, you will have a new Helm chart (my-mysql-chart) with the same configuration as the running MySQL release.
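Alternatively, if the goal is just to obtain the original chart the release was installed from, the chart can be pulled directly from its repository (assuming the release came from the bitnami/mysql chart and the bitnami repo is configured locally):

helm pull bitnami/mysql --untar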
 

How to download running pod/deployment yaml file

Export Pod Data: If you want to back up specific files or directories within a running pod, you can export the data using kubectl cp command. For example, to copy the contents of a directory named data within a pod named my-pod to your local machine, you can use the following command:

 
kubectl cp my-pod:/path/to/data /local/path
 
kubectl cp jenkins-test-64c77f4595-d769p:/home/jenkins /root/myfolder
 



you can also use the kubectl get pod <pod-name> -o yaml command to retrieve the YAML configuration of a pod.

 
kubectl get pod vault-testing-injector-64c77f4595-d769p -o yaml
 


To fetch the YAML file used to deploy a running pod or deployment, here's how you can do it:

 
kubectl get pod vault-testing-injector-64c77f4595-d769p --namespace default -o yaml > pod.yaml 
 

This stores the configuration in a pod.yaml file on your local machine. The same approach works for a deployment, service, or any other running Kubernetes resource: put the resource type and name after kubectl get, and the rest of the command stays the same, as shown below.
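For example (resource names here are illustrative):

kubectl get deployment my-deployment --namespace default -o yaml > deployment.yaml
kubectl get service my-service --namespace default -o yaml > service.yaml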

Wednesday, June 7, 2023

Vault and CoreDNS

Vault

In Kubernetes, a "vault" typically refers to HashiCorp Vault, which is an open-source tool for managing secrets and sensitive data in distributed systems. It provides a secure and centralized way to store, access, and manage secrets such as passwords, API keys, certificates, and more.

HashiCorp Vault can be integrated with Kubernetes to securely manage secrets and provide them to applications running within the cluster. When secrets are stored in Vault, they are encrypted and protected, and access to them can be controlled using fine-grained policies and access controls.

Using Vault in Kubernetes offers several advantages:

  1. Centralized Secret Management: Vault provides a central location for storing secrets, reducing the risk of secrets being exposed or leaked.

  2. Dynamic Secrets: Vault can generate short-lived secrets dynamically when requested, reducing the risk of long-lived secrets being compromised.

  3. Auditing and Access Controls: Vault logs access to secrets and provides detailed audit logs. It also allows administrators to define fine-grained access controls and policies for secret retrieval.

  4. Integration with Kubernetes: Vault can integrate with Kubernetes using the Kubernetes authentication method. This allows Kubernetes applications to authenticate with Vault and retrieve secrets specific to their needs.

By leveraging Vault in a Kubernetes environment, developers and operators can maintain a higher level of security for their applications by securely managing secrets and reducing the risk of accidental exposure or unauthorized access.

 

Vault installation on EKS using helm

Here are the step-by-step instructions, along with the real commands, to set up Scenario 1: Secure Storage of Database Credentials using Vault and Amazon EKS:

Step 1: Set up an EKS Cluster 

Follow the official AWS documentation or use the AWS Management Console to create an EKS cluster. Make sure you have the necessary permissions and credentials to create and manage the cluster.

Step 2: Install and Configure Vault on EKS

Deploy Vault as a containerized application on your EKS cluster using Kubernetes manifests. You can create a vault.yaml file with the following content:

 vault.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: vault:latest
          args:
            - "server"
            - "-dev"
            - "-dev-root-token-id=myroot"
          ports:
            - containerPort: 8200

 

Create the deployment using the following command:

kubectl apply -f vault.yaml

Step 3: Enable and Configure the Kubernetes Authentication Method in Vault

Enable the Kubernetes authentication method in Vault and configure it to authenticate the EKS cluster. Run the following commands: 

kubectl create serviceaccount vault-auth-sa

kubectl apply -f vault-auth-role.yaml

 

Create a file named vault-auth-role.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth-sa
    namespace: default
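The Vault-side configuration for this step is not shown above; a minimal sketch, assuming the vault CLI is logged in against this Vault server (the same commands appear in the "Vault real practical" section later in these notes):

vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://<KUBERNETES_API_SERVER>:443"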

 Step 4: Create a Vault Policy 

Create a Vault policy that grants read access to the path where the database credentials will be stored. For example, create a file named database-policy.hcl with the following contents:

path "secret/database/*" {
capabilities = ["read"]
}

 

Create the policy using the following command:

vault policy write database-policy database-policy.hcl

 Step 5: Store the Database Credentials in Vault 

Use the Vault CLI or API to securely store the database credentials in Vault. For example, to store the credentials for a MySQL database, run the following command:

vault kv put secret/database/mysql username="myuser" password="secretpassword"
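To verify that the secret was stored, it can be read back from the same path:

vault kv get secret/database/mysql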

 Step 6: Update Microservice Deployments 

Update the Kubernetes deployment manifests for each microservice to include the necessary environment variables and configurations to authenticate with Vault and retrieve the database credentials. Modify your existing microservice deployment YAML file, adding the following:

env:
  - name: VAULT_ADDR
    value: "http://vault:8200" # Replace 'vault' with the actual Vault address
  - name: VAULT_ROLE
    value: "my-role" # Replace with the Vault role created in Step 7

 Step 7: Access Database Credentials from Vault 

Modify your microservice code to retrieve the database credentials from Vault using the Vault API or a Vault client library. You can use the following code snippet as an example:

import os
import hvac

vault_addr = os.getenv("VAULT_ADDR")
vault_role = os.getenv("VAULT_ROLE")

client = hvac.Client(url=vault_addr)
client.auth_kubernetes(role=vault_role)

secrets = client.secrets.kv.v2.read_secret_version(path='database/mysql')

username = secrets['data']['data']['username']
password = secrets['data']['data']['password']

# Use the retrieved credentials to connect to the MySQL database

 

Make sure to import the necessary dependencies (such as the hvac library for Python) and handle any error conditions in your actual microservice code.

These steps provide a high-level guide to setting up Vault with Amazon EKS for secure storage and retrieval of database credentials. You can adapt them to your specific requirements and databases.

Scenario 2: Secure Management of API Keys/Secrets using Vault and Amazon EKS:

Step 1: Set up an EKS Cluster 

Follow the official AWS documentation or use the AWS Management Console to create an EKS cluster. Ensure you have the necessary permissions and credentials to create and manage the cluster.

 Step 2: Install and Configure Vault on EKS

Deploy Vault as a containerized application on your EKS cluster using Kubernetes manifests. Create a vault.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: vault:latest
          args:
            - "server"
            - "-dev"
            - "-dev-root-token-id=myroot"
          ports:
            - containerPort: 8200

 

Create the deployment using the following command:

kubectl apply -f vault.yaml

Step 3: Enable and Configure the Kubernetes Authentication Method in Vault

Enable the Kubernetes authentication method in Vault and configure it to authenticate the EKS cluster. Run the following commands:

kubectl create serviceaccount vault-auth-sa

kubectl apply -f vault-auth-role.yaml

Create a file named vault-auth-role.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth-sa
    namespace: default

Step 4: Create a Vault Policy 

Create a Vault policy that grants read access to the paths where the API keys or secrets will be stored. For example, create a file named api-keys-policy.hcl with the following contents:
 

path "secret/api-keys/*" {
capabilities = ["read"]
}

 

Create the policy using the following command:

vault policy write api-keys-policy api-keys-policy.hcl

Step 5: Store the API Keys/Secrets in Vault 

Use the Vault CLI or API to securely store the API keys/secrets in Vault. For example, to store a third-party API key, run the following command:

vault kv put secret/api-keys/third-party key="myapikey"

 

 Step 6: Update Application Deployments 

Update the Kubernetes deployment manifests for your application to include the necessary environment variables and configurations to authenticate with Vault and retrieve the API keys/secrets. Modify your existing application deployment YAML file, adding the following:

 

env:
  - name: VAULT_ADDR
    value: "http://vault:8200" # Replace 'vault' with the actual Vault address
  - name: VAULT_ROLE
    value: "my-role" # Replace with the Vault role created in Step 7

 

Step 7: Access API Keys/Secrets from Vault 

Modify your application code to retrieve the API keys/secrets from Vault using the Vault API or a Vault client library. Here's an example using Python:

 

import os
import hvac

vault_addr = os.getenv("VAULT_ADDR")
vault_role = os.getenv("VAULT_ROLE")

client = hvac.Client(url=vault_addr)
client.auth_kubernetes(role=vault_role)

secrets = client.secrets.kv.v2.read_secret_version(path='api-keys/third-party')

api_key = secrets['data']['data']['key']

# Use the retrieved API key in your application

 

Make sure to import the necessary dependencies (such as the hvac library for Python) and handle any error conditions in your actual application code.

These steps provide a high-level guide to setting up Vault with Amazon EKS for secure management of API keys/secrets. You can adapt them to your specific requirements and add more API keys/secrets as needed.

 CoreDNS

CoreDNS is a versatile, lightweight, and extensible DNS server that is commonly used as the default DNS provider in Kubernetes clusters. It is responsible for handling DNS resolution requests within the cluster.

In a Kubernetes environment, CoreDNS serves as the primary DNS server for all DNS queries made by pods, services, and other components. It provides name resolution for both internal cluster DNS and external DNS lookups.

CoreDNS can be deployed as a standalone pod or as a Deployment with multiple replicas for high availability. It integrates seamlessly with Kubernetes through the CoreDNS ConfigMap, which defines the DNS zones, forwarders, and other configuration settings.

Some key features and functionalities of CoreDNS in Kubernetes include:

  1. Service Discovery: CoreDNS dynamically discovers services and endpoints within the cluster, allowing pods and services to communicate with each other using their DNS names.

  2. DNS-Based Service Discovery: CoreDNS supports DNS-based service discovery, allowing pods to discover and connect to other services using their DNS names rather than IP addresses.

  3. DNS Forwarding: CoreDNS can forward DNS queries to external DNS servers, such as those provided by your DNS provider or configured in the cluster.

  4. Customization and Extensibility: CoreDNS supports plugins that enable custom configurations, such as zone transfers, caching, middleware integration, and more.

Overall, CoreDNS plays a crucial role in providing DNS resolution services within Kubernetes clusters, facilitating communication between various components and enabling seamless service discovery.

On Amazon EKS, CoreDNS already ships as a managed add-on; to explore it with real commands, you can start with the checks below.
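A minimal sketch of such checks, assuming kubectl is pointed at the EKS cluster (the busybox image is only used here as a throwaway DNS test client):

# CoreDNS runs as a Deployment in kube-system
kubectl get deployment coredns -n kube-system

# Its configuration (the Corefile) lives in a ConfigMap
kubectl get configmap coredns -n kube-system -o yaml

# Quick DNS resolution test from inside the cluster
kubectl run dnsutils --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default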

 

Vault real practical 

You should have a cluster like the one below:
eksctl create cluster \
    --name learn-vault \
    --nodes 3 \
    --with-oidc \
    --ssh-access \
    --ssh-public-key learn-vault \
    --managed

Add at least the EBS CSI driver add-on.

Check that the node instance role has these policies attached:
eksctl-my-cluster1-nodegroup-my-n-NodeInstanceRole-1B9HIL62FL5MO
AmazonEBSCSIDriverPolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSWorkerNodePolicy
AmazonSSMManagedInstanceCore
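The helm install commands below assume the bitnami and hashicorp chart repositories are already added; if not, add them first (standard public repo URLs):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update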

helm install mysql bitnami/mysql
helm install vault hashicorp/vault


kubectl exec vault-0 -- vault status

Initialize Vault with one key share and one key threshold.
kubectl exec vault-0 -- vault operator init \
    -key-shares=1 \
    -key-threshold=1 \
    -format=json > cluster-keys.json

Display the unseal key found in cluster-keys.json.
cat cluster-keys.json | jq -r ".unseal_keys_b64[]"

Create a variable named VAULT_UNSEAL_KEY to capture the Vault unseal key.
VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")

Unseal Vault running on the vault-0 pod.
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY

kubectl exec vault-0 -- vault status

Display the root token found in cluster-keys.json.
cat cluster-keys.json | jq -r ".root_token"

Create a variable named CLUSTER_ROOT_TOKEN to capture the Vault root token.
CLUSTER_ROOT_TOKEN=$(cat cluster-keys.json | jq -r ".root_token")

Login with the root token on the vault-0 pod.
kubectl exec vault-0 -- vault login $CLUSTER_ROOT_TOKEN


Create a Vault database role

Enable database secrets at the path database.
kubectl exec vault-0 -- vault secrets enable database
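The configuration command below uses $ROOT_PASSWORD. With the Bitnami chart installed as release mysql, the root password can be captured first (this assumes the chart's default secret name and key):

ROOT_PASSWORD=$(kubectl get secret mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)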

Configure the database secrets engine with the connection credentials for the MySQL database.
kubectl exec vault-0 -- vault write database/config/mysql \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(mysql.default.svc.cluster.local:3306)/" \
    allowed_roles="readonly" \
    username="root" \
    password="$ROOT_PASSWORD"


Create a database secrets engine role named readonly.
kubectl exec vault-0 -- vault write database/roles/readonly \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"


Read credentials from the readonly database role.
kubectl exec vault-0 -- vault read database/creds/readonly

Configure Vault Kubernetes authentication


Start an interactive shell session on the vault-0 pod.
kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh

Enable the Kubernetes authentication method.
vault auth enable kubernetes

Configure the Kubernetes authentication method to use the location of the Kubernetes API.
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

Write out a policy named devwebapp that enables the read capability for secrets at path database/creds/readonly.
vault policy write devwebapp - <<EOF
path "database/creds/readonly" {
  capabilities = ["read"]
}
EOF


Create a Kubernetes authentication role named devweb-app.
vault write auth/kubernetes/role/devweb-app \
      bound_service_account_names=internal-app \
      bound_service_account_namespaces=default \
      policies=devwebapp \
      ttl=24h

exit


Deploy web application

Define a Kubernetes service account named internal-app.
cat > internal-app.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-app
EOF

Create the internal-app service account.
kubectl apply --filename internal-app.yaml

Define a pod named devwebapp with the web application.
cat > devwebapp.yaml <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: devwebapp
  labels:
    app: devwebapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-cache-enable: "true"
    vault.hashicorp.com/role: "devweb-app"
    vault.hashicorp.com/agent-inject-secret-database-connect.sh: "database/creds/readonly"
    vault.hashicorp.com/agent-inject-template-database-connect.sh: |
      {{- with secret "database/creds/readonly" -}}
      mysql -h mysql.default.svc.cluster.local --user={{ .Data.username }} --password={{ .Data.password }} my_database
      {{- end -}}
spec:
  serviceAccountName: internal-app
  containers:
    - name: devwebapp
      image: jweissig/app:0.0.1
EOF

Create the devwebapp pod.
kubectl apply --filename devwebapp.yaml


Get all the pods within the default namespace.
kubectl get pods


Display the secrets written to the file /vault/secrets/database-connect.sh on the devwebapp pod.
kubectl exec --stdin=true \
    --tty=true devwebapp \
    --container devwebapp \
    -- cat /vault/secrets/database-connect.sh

 https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-amazon-eks

Saturday, June 3, 2023

Kubernetes config and secrets

ConfigMaps

ConfigMaps in Kubernetes are used to store configuration data that can be accessed by applications running within pods. They provide a way to decouple configuration from application code, allowing you to modify the configuration without redeploying the application.

Here are some reasons why ConfigMaps are commonly used in Kubernetes:

  1. Separation of Configuration: ConfigMaps allow you to separate configuration data from your application code. This means you can change the configuration without modifying and redeploying the application. It promotes the "configuration as code" principle, enabling more flexibility and agility in managing your application's configuration.

  2. Environment-specific Configuration: ConfigMaps allow you to create environment-specific configurations. For example, you can have different ConfigMaps for development, staging, and production environments, with each containing the appropriate configuration values for that environment.

  3. Consistency: ConfigMaps help ensure consistency across multiple deployments. By using ConfigMaps, you can provide consistent configuration data to different pods or services running within your cluster.

  4. Dynamic Updates: ConfigMaps can be updated dynamically without restarting the associated pods. When you update a ConfigMap, any pod using that ConfigMap will automatically reflect the updated configuration values. This allows for live configuration updates without interrupting the running application.

  5. Flexible Access: ConfigMaps can be accessed by pods as environment variables or mounted as volumes. This flexibility allows applications to access configuration data in a way that suits their needs. Environment variables are useful for simple key-value configurations, while volume mounts are more suitable for larger configuration files or directories.

  6. Integration with Kubernetes Ecosystem: ConfigMaps can be used in conjunction with other Kubernetes features such as deployments, replica sets, and stateful sets. They can be referenced by other objects in the cluster, making it easy to manage and share configuration across different components.

By leveraging ConfigMaps, you can maintain better control over your application's configuration, promote reusability, and simplify the management of configuration data in a Kubernetes environment.

 

Here's an example of a Deployment in Kubernetes that uses a ConfigMap; to pick up ConfigMap changes with this approach, the pod/deployment needs to be restarted.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: nginx-config

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  NGINX_PORT: "80"
  NGINX_WORKER_PROCESSES: "2"

Note: To apply an updated ConfigMap value with this approach, the deployment's pods have to be recreated (for example by deleting and recreating the deployment); the new pods will then pick up the changed port value.

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          env:
            - name: NGINX_PORT
              valueFrom:
                configMapKeyRef:
                  name: nginx-config
                  key: nginx-port
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx-port: "8080"


 

Here's another example of a Deployment that references a ConfigMap value through an environment variable (as noted below, environment variables are only refreshed on a restart; for truly dynamic updates, mount the ConfigMap as a volume):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          env:
            - name: NGINX_PORT
              valueFrom:
                configMapKeyRef:
                  name: nginx-config
                  key: nginx-port
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx-port: "8080"

 

Edit command for the ConfigMap:

kubectl edit configmap nginx-config


Note: dynamic updates of a ConfigMap are only reflected automatically when the ConfigMap is mounted as a volume (the mounted files are refreshed in place after a short delay). Values consumed as environment variables or command arguments are resolved at container start, so a pod restart is still required for those changes to take effect.

Secrets

In Kubernetes, secrets are used to store and manage sensitive information, such as passwords, API keys, and TLS certificates. Secrets provide a secure way to distribute and manage sensitive data within a cluster. They are typically used by applications or services running in containers to access sensitive information needed for their operation.

Here are some key features and use cases of secrets in Kubernetes:

  1. Secure storage: Secrets are stored securely in the Kubernetes cluster and are encrypted at rest.

  2. Containerized application access: Secrets can be mounted as files or exposed as environment variables inside containers, allowing applications to securely access sensitive information.

  3. Application configuration: Secrets can be used to store configuration data, such as database connection strings or API credentials, which can be accessed by applications during runtime.

  4. Image pull secrets: Secrets can be used to store authentication credentials for private container image repositories, enabling containers to pull images securely.

  5. TLS certificates: Secrets can store TLS certificates and private keys, which can be used for secure communication over HTTPS or other TLS-enabled protocols.

  6. Fine-grained access control: Secrets can be assigned specific permissions to control which users or applications can access them.

By using secrets, Kubernetes ensures that sensitive information is kept secure and separate from other parts of the application's configuration, reducing the risk of accidental exposure or unauthorized access.

Here's an example deployment YAML file that sources the pod's environment variables from a Secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp-image
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-secrets
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secrets
                  key: password


In this example, we have a deployment for a containerized application called "myapp". The container image is specified as "myapp-image". The application requires two environment variables, DB_USERNAME and DB_PASSWORD, which will be sourced from a secret named "db-secrets".

To update the values of DB_USERNAME and DB_PASSWORD, you can edit the secret directly using the kubectl edit secret command. Note that the new values must first be base64-encoded before being put into the secret, and because these values are consumed as environment variables, the pods need to be restarted to pick up the change. For example:

kubectl edit secret db-secrets


apiVersion: v1
kind: Secret
metadata:
  name: db-secrets
type: Opaque
data:
  username: dXNlcm5hbWU= # Base64 encoded value of the username
  password: cGFzc3dvcmQ= # Base64 encoded value of the password

 

cmd for base64 encoding as below

echo -n 'myusername' | base64
echo -n 'mypassword' | base64


cmd for base64 decoding as below

echo "bXlzZWNyZXRwYXNzd29yZA==" | base64 --decode
mysecretpassword



Let's consider a scenario where we have an application that requires an API key to access an external service. We'll store the API key as a secret and mount it as a file inside the pod.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app-container
      image: my-app-image
      volumeMounts:
        - name: api-key-volume
          mountPath: /etc/my-app
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: my-secrets
              key: api-key
  volumes:
    - name: api-key-volume
      secret:
        secretName: my-secrets

In this example, we have a pod named "my-app" that runs a container using the "my-app-image" image. The pod is configured to mount the secret "my-secrets" as a volume inside the pod at the path "/etc/my-app". The container's environment variable "API_KEY" is set to the value of the "api-key" key from the secret.

To update the secret and reflect the changes in the pod without restarting it, you can follow these steps:

Edit the secret using the kubectl edit secret command:

kubectl edit secret my-secrets
  1. Modify the value of the "api-key" key to the new API key.

  2. Save the changes and exit the editor.

The updated secret will be automatically propagated to the mounted volume in the pod, so the application can read the new API key from the file under /etc/my-app without a pod restart. (The environment variable "API_KEY", by contrast, is only refreshed when the container restarts.) This allows the application inside the pod to access the updated API key with minimal disruption.

 

Difference between Secrets & ConfigMaps

Secrets and ConfigMaps in Kubernetes are both used to manage and pass configuration data to applications running in pods. However, there are some key differences between them:

  1. Data Sensitivity: Secrets are designed to store sensitive information such as passwords, API keys, and certificates. They are base64 encoded and stored in etcd, the Kubernetes key-value store, and can additionally be encrypted at rest when etcd encryption is enabled. ConfigMaps, on the other hand, are intended for non-sensitive configuration data.

  2. Data Format: Secrets store data as key-value pairs, just like ConfigMaps. However, Secrets are specifically designed to handle binary data, such as SSL certificates and private keys. ConfigMaps, by default, handle data as plain text.

  3. Data Access: Secrets are mounted as files or environment variables in the pod's file system, making it easy to consume sensitive data in applications. ConfigMaps, too, can be mounted as files or environment variables, but they are typically used for non-sensitive configuration data.

  4. Encryption and Security: Secrets can be encrypted at rest (when encryption at rest is configured for the cluster) and are protected in transit, helping secure sensitive information. ConfigMaps are not treated as sensitive data and do not get the same protections as Secrets.

Use Secrets for sensitive information and ConfigMaps for non-sensitive configuration data.

 

Deploy an SSL certificate in Kubernetes


you can use the Secret resource to store the certificate and key. Here's an example that demonstrates how to deploy an SSL certificate for an NGINX deployment:

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
data:
  tls.crt: <base64-encoded-certificate>
  tls.key: <base64-encoded-private-key>
type: kubernetes.io/tls
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 443
          volumeMounts:
            - name: tls-volume
              mountPath: /etc/nginx/certs
              readOnly: true
      volumes:
        - name: tls-volume
          secret:
            secretName: tls-secret

In this example, we first define a Secret resource named "tls-secret" to store the SSL certificate and private key. The data field contains the base64-encoded values of the certificate and key. Make sure to replace <base64-encoded-certificate> and <base64-encoded-private-key> with the actual base64-encoded values of your SSL certificate and key.

Next, we define a Deployment resource for NGINX. The NGINX container is configured to use port 443, and we mount the tls-secret secret as a volume at the path /etc/nginx/certs. The readOnly option ensures that the certificate and key are accessible to NGINX but cannot be modified from within the container.

To deploy the SSL certificate, save the above YAML configuration to a file (e.g., nginx-ssl.yaml) and apply it using the kubectl apply command:

kubectl apply -f nginx-ssl.yaml 

Kubernetes will create the secret and deploy the NGINX deployment with the SSL certificate. NGINX will be able to use the certificate and key for SSL/TLS encryption.

Note: Remember to replace <base64-encoded-certificate> and <base64-encoded-private-key> with the actual base64-encoded values of your SSL certificate and key. You can encode the certificate and key using the base64 command-line tool:

base64 -w0 certificate.crt
base64 -w0 private.key


Replace certificate.crt and private.key with the actual file names of your certificate and key files.
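Alternatively, instead of hand-editing base64 values into the manifest, the same Secret can be created directly from the certificate and key files with kubectl (file names as above):

kubectl create secret tls tls-secret --cert=certificate.crt --key=private.key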

 

How to check if certificate is Valid ?

Get the details of the Secret that contains the certificate:

kubectl get secret <secret-name> -o yaml

Replace <secret-name> with the actual name of the Secret that contains the certificate.

Extract the certificate and key data from the Secret:

kubectl get secret <secret-name> -o jsonpath='{.data.tls\.crt}' | base64 --decode > certificate.crt
kubectl get secret <secret-name> -o jsonpath='{.data.tls\.key}' | base64 --decode > privatekey.key

These commands will save the certificate and private key data into separate files (certificate.crt and privatekey.key, respectively).

Verify the validity of the certificate:

openssl x509 -in certificate.crt -text -noout

 This command will display detailed information about the certificate, including its validity period, subject, issuer, and other details. Look for the "Validity" section to ensure the certificate is still valid.

Additionally, you can check the certificate's expiration date specifically:

openssl x509 -in certificate.crt -enddate -noout

This command will display the expiration date of the certificate.

By following these steps, you can extract the certificate from the Secret and verify its validity using OpenSSL commands.


How to test certificate after attaching it in Pod/Deployment ?

Once you have attached the certificate in Kubernetes, you can test it by performing an HTTPS request to the relevant service or endpoint using tools like cURL or a web browser. Here's how you can test the certificate:

  1. Identify the hostname or IP address associated with the service or endpoint for which the certificate is attached.

  2. Use cURL to perform an HTTPS request:

    curl --cacert <path-to-certificate> https://<hostname-or-ip-address>

Replace <path-to-certificate> with the path to the certificate file on your local system, and <hostname-or-ip-address> with the actual hostname or IP address associated with the service.

For example:

curl --cacert certificate.crt https://example.com

This command sends an HTTPS request to the specified hostname or IP address, using the provided certificate for verification.

Note: If the certificate is self-signed or from a custom Certificate Authority (CA) that is not trusted by your system, you may need to use the --insecure option with cURL to ignore certificate validation errors:

curl --insecure https://<hostname-or-ip-address>
 
 Analyze the response to determine if the certificate is valid:
  • If the request is successful and you receive the expected response, it indicates that the certificate is valid and properly configured.
  • If you encounter any certificate validation errors or warnings, it suggests that there may be an issue with the certificate, such as expiration, mismatched hostname, or an untrusted CA.

By performing an HTTPS request to the service or endpoint using the attached certificate, you can verify its validity and ensure that it is functioning as expected.

Friday, June 2, 2023

Project

 

 

How to setup AWS cli and access for aws account.

1. Create an AWS user. 

Add the user to an access group; if you want to give full permissions like AdministratorAccess, do it as below.



2. Create an access key for that user 



3. Install aws cli on your local computer, go to below doc and install according to your OS.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html


 

 

4. Configure aws cli Access key

aws configure
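To confirm the CLI is talking to the intended account, you can run:

aws sts get-caller-identity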

 

Done now you are connected with your AWS account via AWS cli.

 

Now Install Terraform

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli 

After installing Terraform, verify it:

terraform --help

Now create a folder, go inside it, and start the Terraform work.

Suppose you have created a Terraform script like the one below; store it inside that folder.

Script name (for example) - aws_ec2_jenkins_docker_install.tf

provider "aws" {
region = "us-east-2" # Update with your desired region
}

resource "aws_key_pair" "jenkins_keypair" {
key_name = "jenkins-keypair"
public_key = file("/root/.ssh/id_rsa.pub") # Replace with the path to your public key
}

resource "aws_instance" "jenkins_instance" {
ami = "ami-03a0c45ebc70f98ea" # Replace with the desired AMI ID
instance_type = "t2.small" # Replace with the desired instance type
key_name = aws_key_pair.jenkins_keypair.key_name
vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

user_data = <<-EOF
#!/bin/bash
sudo apt-get update
sudo apt-get install -y docker.io
sudo docker pull jenkins/jenkins:lts
 
sudo docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

sleep 30 # Wait for Jenkins to start

jenkins_password=$(sudo docker exec $(sudo docker ps -q --filter "ancestor=jenkins/jenkins:lts") cat /var/jenkins_home/secrets/initialAdminPassword)
echo "Jenkins initial admin password: $jenkins_password"
EOF
}

resource "aws_eip" "jenkins_eip" {
instance = aws_instance.jenkins_instance.id
}

resource "aws_security_group" "jenkins_sg" {
name = "jenkins-sg"
description = "Security group for Jenkins"
vpc_id = "vpc-0d67054cd23a8f716" # Replace with the desired VPC ID

ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_eip_association" "jenkins_eip_association" {
instance_id = aws_instance.jenkins_instance.id
allocation_id = aws_eip.jenkins_eip.id
}

output "jenkins_login_password" {
value = aws_instance.jenkins_instance.user_data
}

output "jenkins_url" {
value = "http://${aws_eip.jenkins_eip.public_ip}:8080" 
}
 
 
 
 

Now, from inside that folder,

run the commands below:

terraform init - downloads all the plugins required by your provider

terraform plan - run this before applying the script; it shows the infrastructure that will be created if you execute that script

terraform apply - creates the whole infrastructure you defined inside the script

Now you can go to AWS and check.

terraform destroy - deletes all the infrastructure created by terraform apply.


After Jenkins has started, install the plugins below in Jenkins:

Docker Pipeline










Amazon ECR plugin

Configure Gitlab in jenkins plugin

  Dashboard -> Manage Jenkins->  Configure System


create pipeline in Jenkins

 

 

Gitlab connection

 

Pipeline - Pipeline script from SCM

repo - https://gitlab.com/manjeetyadav19/sample_microservice_cicd_withjenkins_deployoneks.git

Branch specifier */main

Explanation of the repo code:

We are building a simple Node.js app that prints a message - Hello, World! - at the web URL.

index.js

const http = require('http');

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

const port = process.env.PORT || 3000;
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
 
Build it and try it locally first if you want to.
npm init
This creates a package.json file (package-lock.json is created once dependencies are installed).
Start the app with npm start or node index.js
Then go and check http://localhost:3000 in the browser.
After you have built the code, write a Dockerfile to package the above app as a microservice.
Dockerfile 
# Use the official Node.js image as the base
FROM node:14

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the web app code to the working directory
COPY . .

# Expose the port that the app will run on
EXPOSE 3000

# Start the app
CMD [ "node", "index.js" ]



And if you want, you can also test it as a Docker container locally:
docker build -t sampleapp . 
docker run -d -p 3000:3000 sampleapp
docker ps    - you will see the running container sampleapp 
Go and check the browser; you should be able to see 
the same Hello World message. 
Now create a Jenkinsfile for the CI and push image to ECR repository.
Jenkinsfile
pipeline {
    agent any
    environment {
        ECR_REGISTRY = "896757523510.dkr.ecr.us-east-2.amazonaws.com" // Replace with your ECR registry URL
        ECR_REPO = "sampleapp" // Replace with your ECR repository name
        IMAGE_TAG = "latest" // Replace with your desired image tag
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    def dockerImage = docker.build("${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}", "-f Dockerfile .")
                }
            }
        }

        stage('Push Docker Image to ECR') {
            steps {
                script {
                    def dockerWithRegistry = { closure ->
                        docker.withRegistry("https://${ECR_REGISTRY}", 'ecr:us-east-2:aws-access-key', closure)
                    }

                    dockerWithRegistry {
                        sh "docker push ${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}"
                    }
                }
            }
        }
    }
}
  
For the above Jenkinsfile, please define the values below.
Also create the ECR repository, as referenced above (a CLI example follows this list).
ECR_REGISTRY = "896757523510.dkr.ecr.us-east-2.amazonaws.com" // Replace with your ECR registry URL
ECR_REPO = "sampleapp" // Replace with your ECR repository name
IMAGE_TAG = "latest" // Replace with your desired image tag
'ecr:us-east-2:aws-access-key'
aws-access-key is an AWS credential; please create it in Jenkins credentials.
  
Push all code to the GitLab repo main branch.
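For reference, the ECR repository referenced by the pipeline can be created from the CLI (repository name and region match the values above):

aws ecr create-repository --repository-name sampleapp --region us-east-2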
 

configure agent on Jenkins


Step 1. Create Virtual Machine / or connect docker host machine
First, create a virtual machine for the Jenkins agent.

Step 2. Install Java
# apt update && apt install openjdk-8-jdk

You can check if Java is installed.
java -version

Step 3. Add New Jenkins User
# useradd -m -d /var/lib/jenkins/ jenkins


Step 4. Configure Jenkins Master Credentials
The master must have private (jenkins_id_rsa) and public (jenkins_id_rsa.pub) ssh keys.

jenkins@test-jenkins-vm:~/.ssh$ ls | grep jenkins
jenkins_id_rsa
jenkins_id_rsa.pub

If necessary, you can generate it by the command:
# ssh-keygen -b 2048 -t rsa

$ chmod 600 jenkins_id_rsa*

Step 5. Copy the SSH Key from Master to Agent
Connect to the Jenkins Agent and create a directory .ssh into the Jenkins user home directory.
# mkdir .ssh

Now in the .ssh directory, create the file authorized_keys and copy the contents of jenkins_id_rsa.pub into it.
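A minimal sketch of that copy, assuming the public key has already been transferred to the agent and the Jenkins home directory is /var/lib/jenkins as created above:

# mkdir -p /var/lib/jenkins/.ssh
# cat jenkins_id_rsa.pub >> /var/lib/jenkins/.ssh/authorized_keys
# chmod 600 /var/lib/jenkins/.ssh/authorized_keys
# chown -R jenkins:jenkins /var/lib/jenkins/.ssh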


Step 6. Configure Jenkins Master Credentials
Manage Jenkins - Manage Credential - Add Credential

Now choose the authentication method.

Kind: SSH Username with a private key
Scope: Global
Username: jenkins
Private key: Enter directly and paste the ‘jenkins_id_rsa’ private key of Jenkins user from the master server.

Step 7. Add New Slave Nodes
On the Jenkins dashboard, click the ‘Manage Jenkins’ menu, and click ‘Manage Nodes’.
Click the ‘New Node’.

Type the node name ‘test-jenkins-slave’, choose the ‘permanent agent’, and click ‘OK’.

Step 8. Edit Node Information Details.

Now type node information details.

Description: test-jenkins-slave node agent server
Remote root directory: /var/lib/jenkins
Labels: test-jenkins-slave
Launch method: Launch slave agent via SSH,
Host: ‘10.0.0.5’ (it’s my test-jenkins-slave external IP)
Authentication: using ‘Jenkins’ credential.

 After configuring agent on jenkins

Create EKS cluster

go to jenkins agent and login as jenkins user

create below yaml file
cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-east-2

nodeGroups:
  - name: my-nodegroup
    instanceType: t3.small
    desiredCapacity: 2
#eksctl create cluster -f cluster.yaml

Note: After the EKS cluster has been created, go to the Jenkins agent and configure the AWS CLI credentials there.

#aws eks update-kubeconfig --region us-east-2 --name my-cluster

#kubectl get nodes


Use the pipeline below for the deployment.


pipeline {
    agent any
    environment {
        ECR_REGISTRY = "896757523510.dkr.ecr.us-east-2.amazonaws.com" // Replace with your ECR registry URL
        ECR_REPO = "sampleapp" // Replace with your ECR repository name
        IMAGE_TAG = "latest" // Replace with your desired image tag
        KUBECONFIG_PATH = '~/.kube/config' // Path to the kubeconfig file inside Jenkins container
        NAMESPACE = "default" // Replace with your target namespace
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    def dockerImage = docker.build("${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}", "-f Dockerfile .")
                }
            }
        }

        stage('Push Docker Image to ECR') {
            steps {
                script {
                    def dockerWithRegistry = { closure ->
                        docker.withRegistry("https://${ECR_REGISTRY}", 'ecr:us-east-2:aws-access-key', closure)
                    }

                    dockerWithRegistry {
                        sh "docker push ${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}"
                    }
                }
            }
        }

        stage('Deploy to Kubernetes') {
            steps {
                script {
                    // Copy the kubeconfig file to the workspace
           
                    sh "cp ${KUBECONFIG_PATH} kubeconfig.yaml"
                    //sh "cp KUBECONFIG kubeconfig.yaml"

                    
                    // Apply the deployment to the Kubernetes cluster
                    sh "kubectl --kubeconfig=kubeconfig.yaml apply -f sampleapp-deployment.yaml -n ${NAMESPACE}"
                }
            }
        }
    }
}