How to create an EKS cluster?
EKS cluster creation without a nodegroup
eksctl create cluster --name=eksdemo1 \
--region=us-east-1 \
--zones=us-east-1a,us-east-1b \
--without-nodegroup
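Once creation finishes, a quick check that the control plane exists (same region as the command above):
eksctl get cluster --region=us-east-1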
Enable IAM roles for service accounts within the cluster. This requires an IAM OIDC provider to authenticate and authorize the IAM roles used by your Kubernetes service accounts, which lets you assign fine-grained permissions to the applications running on the cluster.
eksctl utils associate-iam-oidc-provider \
--region us-east-1 \
--cluster eksdemo1 \
--approve
By running the above command, you establish the necessary link between your EKS cluster and the IAM OIDC provider. This enables you to create IAM roles and map them to Kubernetes service accounts, giving workloads fine-grained access to AWS services.
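For example, once the OIDC provider is associated, a service account can be bound to an IAM policy with eksctl. This is only a sketch; the namespace, service account name, and policy ARN below are illustrative and not part of the steps above.
eksctl create iamserviceaccount \
--cluster=eksdemo1 \
--region=us-east-1 \
--namespace=default \
--name=my-app-sa \
--attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve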
The eksctl create nodegroup command creates a managed node group within an existing Amazon EKS cluster.
eksctl create nodegroup --cluster=eksdemo1 \
--region=us-east-1 \
--name=eksdemo1-ng-private1 \
--node-type=t3.medium \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=/root/.ssh/id_rsa.pub \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access \
--node-private-networking
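Once the nodegroup is up, it is worth confirming the nodes have joined the cluster:
eksctl get nodegroup --cluster=eksdemo1 --region=us-east-1
kubectl get nodes -o wide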
Create an EKS cluster across multiple zones
eksctl create cluster --name=eksdemo1 \
--region=us-east-1 \
--zones=us-east-1a,us-east-1b \
--node-type=t3.large \
--nodes-min=3 \
--nodes-max=4
Command to update EKS cluster credentials in the local .kube/config file
aws eks update-kubeconfig --region us-east-1 --name eksdemo1
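To confirm the kubeconfig now points at the cluster:
kubectl config current-context
kubectl get nodes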
Delete the EKS cluster
eksctl delete cluster --name=eksdemo1
--------------------------------------------------------------------------------------------------------
Prometheus and Grafana deployment on EKS
1. Install Prometheus with Helm
helm install prometheus prometheus-community/prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer
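The install above assumes the prometheus-community chart repo is already added and the prometheus namespace exists; if not, a typical preparation step looks like this:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace prometheus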
2. Create a Grafana configuration file that uses the Prometheus endpoint as its datasource
mkdir -p monitoring/grafana
cat << EoF > monitoring/grafana/grafana.yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server.prometheus.svc.cluster.local
      access: proxy
      isDefault: true
EoF
3. Create Grafana namespace
kubectl create namespace grafana
kubectl get all -n grafana
4. Install Grafana via Helm
helm install grafana grafana/grafana \
--namespace grafana \
--set persistence.storageClassName="gp2" \
--set persistence.enabled=true \
--set adminPassword='secretpass' \
--values monitoring/grafana/grafana.yaml \
--set service.type=LoadBalancer
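Similarly, the grafana/grafana chart reference assumes the Grafana Helm repo was added beforehand:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update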
5. Port-forward the Prometheus service locally
kubectl port-forward service/prometheus-server 8080:80 -n prometheus
6. Port-forward the Grafana service locally
kubectl port-forward service/grafana 8080:80 -n grafana
Source video
https://www.youtube.com/watch?v=kADwAYTErgA
How to get the Grafana admin password?
kubectl get secret -n grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode
How to download the values.yaml file for the Grafana Helm chart locally?
helm show values grafana/grafana --version 6.57.4 > values.yaml
How to apply those changes to the Grafana Helm chart?
helm upgrade grafana grafana/grafana -f values.yaml -n grafana
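As a minimal sketch of the kind of override the upgrade expects, the edited values file could change only the keys already used in the install command above, for example:
# illustrative excerpt of values.yaml
service:
  type: LoadBalancer
persistence:
  enabled: true
  storageClassName: gp2
adminPassword: secretpass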
-------------------------------------------------------------------------------------------------
EKS logging with EFK
https://github.com/TechonTerget/efk_on_k8s/blob/feature/README.md
Download the above repo into a folder and cd into that folder.
1. Create the namespace and apply the YAML files below
kubectl create namespace kube-logging
kubectl create -f es.yaml -n kube-logging
kubectl create -f efk.yaml -n kube-logging
kubectl create -f kibana.yaml -n kube-logging
2. Check the Kibana service
kubectl get svc kibana -n kube-logging
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kibana   ClusterIP   10.100.109.150   <none>        5601/TCP   2m28s
3. Edit the service and change it as below
kubectl edit svc kibana -n kube-logging
Change type: ClusterIP to type: NodePort
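Editing interactively works; the same change can also be made non-interactively with a one-line patch (equivalent sketch):
kubectl patch svc kibana -n kube-logging -p '{"spec": {"type": "NodePort"}}'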
kubectl get pods -n kube-logging -o wide
4. Check the external IP of the nodes
kubectl get nodes -o wide
5. Port-forward locally if needed
kubectl port-forward svc/kibana 5601 -n kube-logging
In a browser, check access using the NodePort and the node's public/external IP.
https://www.youtube.com/watch?v=rnKNfLArS7M&t=997s
https://devopscube.com/setup-efk-stack-on-kubernetes/
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
===========================================
EFK Configuration
1. Create a Kubernetes namespace for monitoring tools
kubectl create namespace monitoring
2. Add the Helm repo for Elasticsearch
helm repo add elastic https://helm.elastic.co
helm repo update
3. Install Elasticsearch using Helm
helm install elasticsearch elastic/elasticsearch -n monitoring --set replicas=1
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=monitoring -l app=elasticsearch-master -w
2. Retrieve elastic user's password.
$ kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
3. Test cluster health using Helm test.
$ helm --namespace=monitoring test elasticsearch
M1x6oYxFeW6XpSdH
4. Install Kibana using Helm
helm install kibana elastic/kibana -n monitoring
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=monitoring -l release=kibana -w
2. Retrieve the elastic user's password.
$ kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
3. Retrieve the kibana service account token.
$ kubectl get secrets --namespace=monitoring kibana-kibana-es-token -ojsonpath='{.data.token}' | base64 -d
AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS1raWJhbmE6Z09YbmtiamZRemUzdkY2QTlTTjZOZw
5. Install Fluentd
kubectl apply -f ./fluentd-config-map.yaml
kubectl apply -f ./fluentd-dapr-with-rbac.yaml
6. Ensure that Fluentd is running as a daemonset
kubectl get pods -n kube-system -w
7. Install Dapr with JSON-formatted logs enabled
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
kubectl create namespace dapr-system
helm install dapr dapr/dapr -n dapr-system --set global.logAsJson=true
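A quick check that the Dapr control plane came up before moving on:
kubectl get pods -n dapr-system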
8. Deploying and viewing application logs
kubectl apply -f ./counter.yaml
9. Search logs
kubectl port-forward svc/kibana-kibana 5601 -n monitoring
10. Browse
http://localhost:5601
11. Check the Elasticsearch cluster state backing the logs
curl -XGET 'http://192.168.115.157:9200/_cluster/state?pretty'
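If Elasticsearch is not reachable on a node IP, an equivalent check can go through a port-forward. This sketch assumes the chart's default elasticsearch-master service name and the credentials secret shown in step 3, with security/TLS enabled (hence -k and basic auth):
kubectl port-forward svc/elasticsearch-master 9200 -n monitoring &
PASSWORD=$(kubectl get secrets -n monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d)
curl -k -u "elastic:${PASSWORD}" 'https://localhost:9200/_cat/indices?v'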
https://github.com/RekhuGopal/PythonHacks/tree/main/AWSBoto3Hacks/EFK-Stack-On-AWS-EKS